By Evelyn Hoover
Generative AI became mainstream in late 2022 when OpenAI launched ChatGPT into the world. The productivity gains promised by generative AI brought it quickly into use by organizations of all sizes. But, like any tool, generative AI brings increased security risks as well.
Containing these risks is a primary focus for 64% of U.S. executives surveyed by the IBM Institute for Business Value. According to “The CEO’s Guide to Generative AI,” executives say their 2023 AI cybersecurity budgets were 51% higher than they were in 2021. Those budgets are expected to climb an additional 43% by 2025.
While the security threats from AI aren’t necessarily new, understanding the speed and scale with which AI can be used to mount an attack is paramount to securing your enterprise.
“We don’t expect advances in AI and machine learning (ML) to dramatically change the types of attack that we see,” explains Sam Hector, senior strategy leader for IBM Security. “What we do expect is that the threat landscape is going to scale quite significantly as a result of the recent advances. And that’s primarily due to reducing barriers to entry to lower skilled attackers to be able to perform those attacks.”
AI, like any tool, can be used for good or for ill. On the positive side, it excels at summarizing long documents, writing content and generating images. On the flip side, hackers are using it to generate malicious code and malware. “It’s also being used to customize phishing attacks at a greater scale than we’ve ever seen before and make it much more highly targeted,” Hector adds.
Because AI models are trained on natural language, much as humans are, they are susceptible to many of the same weaknesses as humans. “So now that we’ve trained AI to behave more like humans, we need to actually defend AI in a way that’s more like humans as well,” Hector says.
In response to the generative AI threats, IBM Security has developed the IBM Framework for Securing Generative AI.
“We’ve been able to dramatically improve the productivity of security analysts by performing much of their routine workflow for them and doing risk analysis and data mining on their behalf,” Hector says.
“If you look at data, we’ve been able to spot patterns in behavior that would be indicative of users doing malicious things without actually triggering rule-based detections,” he adds.
“If you look at identity, we’ve been using machine learning to detect risk in user logons and things like account takeover attacks and fraudulent activity,” Hector says.
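The behavioral-anomaly detection Hector describes can be illustrated, in greatly simplified form, with a statistical outlier check on logon activity. This is a minimal sketch; the function name, the feature (hourly logon counts) and the threshold are illustrative assumptions, not IBM's implementation:

```python
from statistics import mean, stdev

def flag_anomalous_logons(counts, threshold=2.5):
    """Flag hourly logon counts that deviate more than `threshold`
    standard deviations from this user's historical mean."""
    mu, sigma = mean(counts), stdev(counts)
    if sigma == 0:
        return []  # perfectly uniform history: nothing stands out
    return [i for i, c in enumerate(counts) if abs(c - mu) / sigma > threshold]

# A user who normally logs on a handful of times per hour suddenly
# produces a burst of 40 logons -- flagged without any fixed rule.
history = [3, 4, 2, 5, 3, 4, 3, 40, 4, 3]
print(flag_anomalous_logons(history))  # -> [7]
```

The point of the sketch is that no rule like “more than N logons is malicious” is ever written; the baseline is learned from the user's own behavior, which is what lets such systems catch account-takeover activity that rule-based detections miss.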
Secure the data that’s used to train your AI models, then secure the model while it’s in development. “You need to have a really good DevSecOps practice in place. Once the models are in live use, there are unique types of attacks that can be leveraged against them, which require unique defenses,” Hector says.
Each organization has to decide whether to expose sensitive intellectual property to AI models. For instance, if you’re a financial institution, do you want to put customer data for loan approvals or mortgages into your model?
“It really starts with understanding where your sensitive data is and how it’s being utilized by those AI models. Then putting good data security hygiene around that,” he says. “Most organizations have not got that covered to a sufficient degree.”
Secure the model while it’s in development and training. AI systems have access to APIs and connections to systems within an organization similar to what a human user would have.
It’s important to understand what Hector calls “technology supply chain security”: which applications have access to perform actions on other systems, how integrated those systems are, and what could go wrong.
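One common least-privilege pattern for constraining that access is a broker that checks every action an AI integration attempts against an explicit allowlist. A minimal sketch, with hypothetical agent and action names chosen for illustration:

```python
# Each AI integration is granted only the actions it needs,
# mirroring the least-privilege access a human user would have.
ALLOWED_ACTIONS = {
    "support-bot": {"ticket.read", "ticket.comment"},
    "report-bot": {"report.read"},
}

def authorize(agent: str, action: str) -> bool:
    """Return True only if this agent is explicitly granted the action.
    Unknown agents get an empty grant set, so they are denied by default."""
    return action in ALLOWED_ACTIONS.get(agent, set())

print(authorize("support-bot", "ticket.comment"))  # True
print(authorize("support-bot", "user.delete"))     # False
```

Deny-by-default is the design choice that matters here: an AI system that is compromised or manipulated can only perform the actions it was explicitly granted, which bounds what can go wrong across the integrated systems.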
Secure the model once it’s deployed and in live use.
Start by identifying use cases for your organization and then analyzing the risk of those models. “The worst thing an organization can do is simply stick their head in the sand and say, ‘We are going to outright ban the use of new generative AI platforms by our employees,’ ” Hector says. Doing so breeds shadow IT that burdens the enterprise’s security team.
Beyond the technical aspects of AI and cybersecurity, Hector also drew attention to the ethical considerations, particularly in terms of privacy breaches and responsible AI implementation. IBM remains committed to openness and transparency about how their AI models are trained and the data sources used, he says.
“If you look at the history of cybersecurity attacks, data tends to be the lowest hanging fruit for attackers because there are hundreds if not thousands of examples per year of sensitive corporate information or sensitive personal information about users that end up on the dark web,” he explains.
When training AI models to detect data breaches, it’s important to do so responsibly, without breaching users’ privacy in the process.