AI in OT: Opportunities and risks you need to know

Since their widespread release in November 2022, generative AI applications such as ChatGPT have had a huge influence on the news cycle. These programs generate new text, images, music, and other content by learning patterns from massive amounts of training data. However, incorporating generative AI into operational technology (OT) raises ethical and practical questions about potential consequences, testing procedures, and safe, effective use.

AI’s impact, testing, and dependability in OT

Operations in the operational technology (OT) sector focus on repetition and consistency, with stable inputs and outputs that make results predictable. In critical infrastructure contexts, human operators remain on hand to make judgement calls. Failures in IT operations often carry milder consequences, such as data loss, whereas failures in OT operations can threaten lives, damage the environment, create liability problems, and cause long-term harm to a brand. As a result, relying solely on AI or other technologies for OT operations is unwise, because mistakes can have serious consequences.

Microsoft has proposed a framework for public AI governance, concentrating on public policy, legislation, and regulation. Building on the NIST AI Risk Management Framework, the plan calls for government-led AI safety protocols as well as “safety brakes” for AI systems that control critical infrastructure. This is a response to growing concern about the potential negative consequences of AI in OT, as well as the uncertainty around who is responsible when something goes wrong.

The growing importance of red and blue team exercises

The terms “red team” and “blue team” refer to distinct approaches to evaluating and improving the security of a system or network. They originated in military exercises and have since spread to the cybersecurity sector.

Red and blue teams work together to safeguard OT systems by discovering and responding to vulnerabilities. AI can be used to simulate cyberattacks and probe systems for weaknesses; exercises of this kind help close skill gaps, identify attack paths, and surface weaknesses that earlier assessments missed. AI can also be used defensively, helping to detect and disrupt the red team’s attack preparations. This approach can strengthen overall defence, shape response strategies for protecting critical infrastructure, and reveal potential threats to control systems and assets along with new ways to safeguard production systems.
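As a rough illustration of the defensive side, the sketch below shows one way a blue team might score traffic captured during such an exercise with a generic anomaly detector (scikit-learn’s IsolationForest). The feature set, traffic statistics, and thresholds are invented for illustration only; a real engagement would use telemetry from the actual control network.

```python
# Hypothetical sketch: flagging anomalous OT network flows recorded during a
# red-team exercise. Features and numbers are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=42)

# Baseline traffic: (packets/sec, mean payload bytes, distinct destination ports)
baseline = np.column_stack([
    rng.normal(50, 5, 500),     # steady polling rate typical of controller traffic
    rng.normal(200, 20, 500),   # consistent payload sizes
    rng.normal(3, 1, 500),      # few destination ports
])

# Red-team activity: bursty, scan-like behaviour touching many ports
red_team = np.column_stack([
    rng.normal(400, 50, 20),
    rng.normal(800, 100, 20),
    rng.normal(60, 10, 20),
])

# Train only on baseline traffic, then score a mixed batch of flows.
detector = IsolationForest(contamination=0.05, random_state=0).fit(baseline)

flows = np.vstack([baseline[:10], red_team[:10]])
for features, verdict in zip(flows, detector.predict(flows)):
    label = "ALERT" if verdict == -1 else "ok"     # -1 marks an anomaly
    print(f"{label}: pkts/s={features[0]:.0f}, "
          f"bytes={features[1]:.0f}, ports={features[2]:.0f}")
```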

AI potential for digital twins

Advanced organisations are employing AI to stress test and optimize technology by building digital copies of their operational technology (OT) environments, such as oil refineries or power plants. These digital twins offer a secure environment for testing and validating technology before it is deployed in production, and they can also support cybersecurity work. However, real-world production contexts carry major dangers, and the fidelity of testing in a digital twin still has to be verified: inaccurate test findings could lead to blackouts, significant environmental damage, or worse. As a result, the adoption of AI technology in OT will most likely be slow and cautious, leaving time to build long-term AI governance strategies and risk management frameworks.
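To make the idea concrete, here is a minimal, purely hypothetical sketch of the kind of experiment a digital twin allows: a toy tank-level model under a simple proportional controller, with a frozen level sensor injected partway through to see how the control logic misbehaves. The process model, gains, and fault scenario are invented; a real twin would mirror the plant’s actual physics and control code.

```python
# Hypothetical sketch: stress-testing control logic against a toy "digital twin"
# of a tank-level process before it ever touches production equipment.

def simulate(setpoint=5.0, steps=200, sensor_fault_at=None):
    """Simulate tank level under a simple proportional controller.

    sensor_fault_at: step at which the level sensor freezes (a stuck reading),
    mimicking a failure we want to observe safely inside the twin.
    """
    level = 2.0          # starting tank level
    kp, outflow = 0.4, 0.15
    sensor_reading = level
    history = []
    for t in range(steps):
        if sensor_fault_at is None or t < sensor_fault_at:
            sensor_reading = level                    # healthy sensor reads the true level
        error = setpoint - sensor_reading
        inflow = max(0.0, min(1.0, kp * error))       # actuator limits on the fill valve
        level = max(0.0, level + inflow - outflow)    # simple mass balance
        history.append(level)
    return history

healthy = simulate()
faulted = simulate(sensor_fault_at=5)   # sensor freezes while the tank is still filling

print(f"steady-state level, healthy run: {healthy[-1]:.2f}")
# With the frozen sensor the controller keeps feeding the tank past the setpoint,
# exposing an overflow risk in simulation rather than in the plant.
print(f"final level with frozen sensor:  {faulted[-1]:.2f}")
```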

Optimize SOC performance and decrease operator noise

In a security operations centre (SOC), AI can help secure and scale operational technology (OT) organisations. AI tools can act as SOC analysts, examining anomalies and interpreting rule sets from multiple OT systems. This can cut noise in alarm management systems and asset visibility tools and prioritise data review based on risk grading and rule frameworks.
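As a hedged sketch of what risk-based triage might look like, the snippet below groups duplicate alerts and ranks them by a simple grade that combines asset criticality with alert severity. The asset names, weights, and scoring rule are assumptions made for illustration, not any particular vendor’s framework.

```python
# Hypothetical sketch of risk-based alert triage for an OT SOC.
from collections import defaultdict

ASSET_CRITICALITY = {"safety-plc-01": 10, "historian-02": 6, "hmi-panel-07": 4}
SEVERITY_WEIGHT = {"critical": 10, "high": 7, "medium": 4, "low": 1}

alerts = [
    {"asset": "hmi-panel-07", "rule": "failed-login", "severity": "low"},
    {"asset": "safety-plc-01", "rule": "unauthorized-write", "severity": "critical"},
    {"asset": "hmi-panel-07", "rule": "failed-login", "severity": "low"},
    {"asset": "historian-02", "rule": "new-outbound-connection", "severity": "medium"},
]

# Collapse duplicate (asset, rule) pairs and count repeats instead of paging on each.
grouped = defaultdict(lambda: {"count": 0, "severity": "low"})
for a in alerts:
    key = (a["asset"], a["rule"])
    grouped[key]["count"] += 1
    grouped[key]["severity"] = a["severity"]

def risk_score(asset, severity, count):
    """Simple risk grade: asset criticality x severity, nudged up by repetition."""
    return ASSET_CRITICALITY.get(asset, 1) * SEVERITY_WEIGHT[severity] + count

ranked = sorted(
    ((risk_score(asset, v["severity"], v["count"]), asset, rule, v["count"])
     for (asset, rule), v in grouped.items()),
    reverse=True,
)

# Analysts see the highest-risk, de-duplicated alerts first.
for score, asset, rule, count in ranked:
    print(f"score={score:3d}  {asset:<15} {rule:<25} x{count}")
```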

What is the future of AI and OT?

AI usage is expanding in both IT and OT environments, with potential effects on operations and safety. The Colonial Pipeline incident showed the need for checks and balances that assure availability. Organisations with OT labs should test AI in a secure setting, and closed AI models trained on internal data should be used to harness AI’s capabilities safely without jeopardising critical data, environments, or human life. This strategy should be built on air-gapped systems that are not reachable from the outside world.

Conclusion

AI and machine learning offer enormous promise for enhancing systems, safety, and efficiency, but it is critical to prioritize safety and dependability. As an industry, we must learn how to use these technologies responsibly for their intended purposes.