NIST Warns of Security and Privacy Risks from Rapid AI System Deployment

10-01-2024
Share
NIST Warns of Security and Privacy Risks from Rapid AI System Deployment

The U.S. National Institute of Standards and Technology (NIST) is calling attention to the privacy and security challenges that arise as a result of increased deployment of artificial intelligence (AI) systems in recent years.

As AI systems become integrated into online services at a rapid pace, in part driven by the emergence of generative AI systems like OpenAI ChatGPT and Google Bard, models powering these technologies face a number of threats at various stages of the machine learning operations.

These include corrupted training data, security flaws in the software components, data model poisoning, supply chain weaknesses, and privacy breaches arising as a result of prompt injection attacks.

Security and Privacy

The attacks, which can have significant impacts on availability, integrity, and privacy, are broadly classified as follows –

 

  • Evasion attacks, which aim to generate adversarial output after a model is deployed
  • Poisoning attacks, which target the training phase of the algorithm by introducing corrupted data
  • Privacy attacks, which aim to glean sensitive information about the system or the data it was trained on by posing questions that circumvent existing guardrails
  • Abuse attacks, which aim to compromise legitimate sources of information, such as a web page with incorrect pieces of information, to repurpose the system’s intended use

The development arrives more than a month after the U.K., the U.S., and international partners from 16 other countries released guidelines for the development of secure artificial intelligence (AI) systems.

 

Source: https://thehackernews.com/