Striking a Balance: Privacy, Data, and Ethical Considerations in AI

Dr. Manjeet Rege on the need for the responsible development and use of AI

By Evelyn Hoover

With Generative AI’s Rewards Come Security Risks

IBM’s Sam Hector explains what organizations need to know about securing against the threats of generative AI

By Evelyn Hoover

Image created with Adobe Firefly

Generative AI became mainstream in late 2022 when OpenAI launched ChatGPT into the world. The productivity gains promised by generative AI brought it quickly into use by organizations of all sizes. But, like any tool, generative AI brings increased security risks as well.

Containing these risks is a primary focus for 64% of U.S. executives surveyed by the IBM Institute for Business Value. According to “The CEO’s Guide to Generative AI,” executives say their 2023 AI cybersecurity budgets were 51% higher than they were in 2021. Those budgets are expected to climb an additional 43% by 2025.

U.S. executives report ...

 

0
%

Containing AI risks is a primary focus

 

0
%

Increase in AI cybersecurity budgets since 2021

0
%

Expected budget increase by 2025

We don’t expect advances in AI and machine learning (ML) to dramatically change the types of attack that we see. What we do expect is that the threat landscape is going to scale quite significantly as a result of the recent advances.

—Sam Hector, senior strategy leader for IBM Security 

AI Cybersecurity Threats

While the security threats from AI aren’t necessarily new, understanding the speed and scale with which AI can be used to mount an attack is paramount to securing your enterprise. 

 

“We don’t expect advances in AI and machine learning (ML) to dramatically change the types of attack that we see,” explains Sam Hector, senior strategy leader for IBM Security. “What we do expect is that the threat landscape is going to scale quite significantly as a result of the recent advances. And that’s primarily due to reducing barriers to entry to lower skilled attackers to be able to perform those attacks.”

 

AI, like any tool, can be used for good or not so good purposes. On the pro side, it’s great at summarizing longer documents, writing content, generating images, etc. But on the flip side, it’s being used by hackers to generate malicious code and malware. “It’s also being used to customize phishing attacks at a greater scale than we’ve ever seen before and make it much more highly targeted,” Hector adds.

 

For example, AI can be prompted to:

Phishing ⇨
Create phishing emails that are more realistic and therefore more effective quickly
Deepfake audio ⇨
Develop deepfake audio that can trick listeners into divulging sensitive information
Poisoning ⇨
Poison your large language model (LLM) to poison your AI to turn it against you
Jailbreak icon
Jailbreaking ⇨
Jailbreak your LLM using natural language prompts to command an LLM to do what an attacker wants
SPONSORED CONTENT

Modern z/OS Resilience

NewEra Software was founded with the specific goal of developing, marketing and supporting innovative mainframe system management tools and services. Thanks to the continued support of thousands of system programming professionals worldwide who have come to depend on NewEra, its products and support, the company has become an industry leader and set the standard for repair, data erasure, continuous backups and system resiliency. All of these ensure the ongoing integrity of critical Mainframe Systems worldwide.

Securing Your AI Models

With AI trained using natural language in the same way that humans are, the models are susceptible to the same weaknesses as humans. “So now that we’ve trained AI to behave more like humans, we need to actually defend AI in a way that’s more like humans as well,” Hector says.

 

In response to the generative AI threats, IBM Security has developed the IBM Framework for Securing Generative AI

Image courtesy of IBM

Image courtesy of IBM

  • ‹ prev
  • next ›
  • 1
  • 2
  • 3
  • 4
  • 5
  • 6
  • 7
The framework’s three pillars are:
Threat detection
and response

“We’ve been able to dramatically improve the productivity of security analysts by performing much of their routine workflow for them and doing risk analysis and data mining on their behalf,” Hector says.

Data
security

“If you look at data, we’ve been able to spot patterns in behavior that would be indicative of users doing malicious things without actually triggering rule-based detections,” he adds.
 

Identify
management

“If you look at identity, we’ve been using machine learning to detect risk in user logons and things like account takeover attacks and fraudulent activity,” Hector says.
 

Using this framework as a guide, Hector recommends breaking AI cybersecurity into chunks to secure your enterprise.

Secure the data
Secure the training model
Secure the live model

Secure the data

Secure the data that’s used to train your AI models then secure the model while it’s in development. “You need to have a really good DevSecOps practice in place. Once the models are in live use, there are unique types of attacks that can be leveraged against them, which require unique defenses.”

 

Each organization has to decide whether to expose sensitive intellectual property to AI models. For instance, if you’re a financial institution, do you want to put customer data for loan approvals or mortgages into your model? 

 

“It really starts with understanding where your sensitive data is and how it’s being utilized by those AI models. Then putting good data security hygiene around that,” he says. “Most organizations have not got that covered to a sufficient degree.”

Secure the data
Secure the training model
Secure the live model

Secure the training model

Secure the model while it’s in development and training. AI systems have access to API links and connect to systems within an organization that are similar to what a human would have. 

Understanding the “technology supply chain security,” as Hector calls it, and what access applications have to perform actions on other systems is important. Be sure you understand how integrated the systems are and what could go wrong.

Secure the data

Secure the training model
Secure the live model

Secure the live model

Secure the model once it’s deployed and in live use. 


Start by identifying use cases for your organization and then analyzing the risk of those models. “The worst thing an organization can do is simply stick their head in the sand and say, ‘We are going to outright ban the use of new generative AI platforms by our employees,’ ” Hector says. Doing so breeds shadow IT that burdens their enterprise’s security team. 

Start by identifying use cases for your organization and then analyzing the risk of those models. “The worst thing an organization can do is simply stick their head in the sand and say, ‘We are going to outright ban the use of new generative AI platforms by our employees,’ ” Hector says. Doing so breeds shadow IT that burdens their enterprise’s security team.  

The Ethical Considerations of AI Security 

Beyond the technical aspects of AI and cybersecurity, Hector also drew attention to the ethical considerations, particularly in terms of privacy breaches and responsible AI implementation. IBM remains committed to openness and transparency about how their AI models are trained and the data sources used, he says.

 

“If you look at the history of cybersecurity attacks, data tends to be the lowest hanging fruit for attackers because there are hundreds if not thousands of examples per year of sensitive corporate information or sensitive personal information about users that end up on the dark web,” he explains.

 

To train AI models to detect data breaches, it’s important to be responsible and not breach privacy. 

If you look at the history of cybersecurity attacks, data tends to be the lowest hanging fruit for attackers because there are hundreds if not thousands of examples per year of sensitive corporate information or sensitive personal information about users that end up on the dark web.
—Sam Hector, senior strategy leader for IBM Security
Sam Hector
Senior strategy leader for IBM Security
Sam Hector is Global Strategy Leader at IBM Security, communicating strategy and guiding future direction on areas for development, investment and M&A activity. With over a decade working for IBM and its partners, he has made significant contributions in shaping the company's security vision, led teams to deliver revenue growth and pioneered key partnerships. Outside of work, Sam is a director of a tennis non-profit, keen photographer and Formula 1 fan. 
Share this article
Share this article

Our sponsor Advanced Software Products Group, Inc. is a leader in data center software specializing in IBM Z security solutions.