Future-Proofing AI Security: Safeguarding Systems in Today’s Evolving Landscape

Organizational Concerns Regarding AI Quantum Resilience

An eBook titled “AI Quantum Resilience,” released by Utimaco [email wall], sheds light on the primary obstacle organizations face in effectively adopting AI technologies: the security risks associated with the data those systems handle.

The value of AI depends heavily on the data an organization accumulates, but building models and training them on that data carries inherent security risks. These risks, alongside the more commonly recognized threats to intellectual property at the point of inference, pose significant challenges.

The authors of the eBook emphasize the need for organizations to mitigate threats throughout the entire AI development and implementation process. Moreover, they assert that companies must be prepared to adapt their security protocols, particularly in anticipation of quantum computing-powered decryption tools becoming available to malicious actors.

Utimaco identifies three main areas under threat:

  • Manipulation of training data by malicious entities, leading to degradation of model outputs in ways that are challenging to detect.
  • Risk of models being extracted or duplicated, resulting in erosion of intellectual property rights.
  • Potential exposure of sensitive data used during training or inference processes.

According to the report’s authors, current public key cryptography is expected to become vulnerable within the next decade, coinciding with the potential emergence of capable quantum systems. Organized groups are believed to be harvesting encrypted data already, with the intent of decrypting it once quantum capabilities become accessible. Therefore, any dataset containing information of long-term sensitivity, including model training data, financial records, or intellectual property, may need protection against future decryption attempts.

The transition to quantum-resistant cryptography is anticipated to impact protocols, key management, system interoperability, and performance, making the migration process likely to span several years. The authors propose the concept of ‘crypto-agility,’ which involves the ability to change cryptographic algorithms without the need for redesigning underlying systems. This approach is based on hybrid cryptography, combining established algorithms with post-quantum methods recommended by NIST.
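The hybrid approach described above can be sketched in a few lines of Python. This is an illustrative pattern only, not Utimaco's implementation: algorithms are selected by name from a registry rather than hard-coded, and the session key is derived from both a classical and a post-quantum shared secret, so either algorithm can be swapped without redesigning the surrounding system. The placeholder byte strings stand in for real KEM outputs (e.g. from X25519 and an ML-KEM implementation).

```python
import hashlib

# Registry of key-derivation functions, selectable by name at runtime.
# Swapping the algorithm is a configuration change, not a redesign.
KDF_REGISTRY = {
    "sha3-256": lambda data: hashlib.sha3_256(data).digest(),
    "sha2-256": lambda data: hashlib.sha256(data).digest(),
}

def hybrid_session_key(classical_secret: bytes,
                       pq_secret: bytes,
                       kdf: str = "sha3-256") -> bytes:
    """Combine two shared secrets: an attacker must break BOTH algorithms
    to recover the session key."""
    combiner = KDF_REGISTRY[kdf]  # chosen by name, not hard-coded
    return combiner(b"hybrid-v1|" + classical_secret + b"|" + pq_secret)

# Placeholder secrets standing in for real KEM outputs:
key = hybrid_session_key(b"classical-ecdh-secret", b"pq-kem-secret")
print(len(key))  # 32-byte derived session key
```

Because the combiner mixes both secrets, the scheme stays secure as long as at least one of the two underlying algorithms holds, which is the core argument for hybrid deployments during the migration period.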

While cryptography plays a crucial role in mitigating risks, the eBook advocates for the use of hardware-based trust devices capable of isolating cryptographic keys and sensitive operations from standard working environments.

For companies developing their AI tools and processes, protection should extend across the entire AI lifecycle, encompassing data ingestion, training, model deployment, and inference in production environments. Hardware keys used for encryption and model signing can be generated and stored within a secure boundary. This ensures that model integrity is verified prior to deployment, and sensitive data processed during inference remains safeguarded.
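The verify-before-deploy flow might look like the following sketch. It is a simplified stand-in: a real HSM would generate, hold, and use the signing key entirely inside its secure boundary, whereas here an in-memory HMAC key (a hypothetical placeholder) plays that role so the deployment gate can be shown end to end.

```python
import hashlib
import hmac

# Hypothetical stand-in for a key that, in practice, never leaves the HSM.
HSM_KEY = b"stand-in-for-hsm-held-key"

def sign_model(model_bytes: bytes) -> bytes:
    """Sign a digest of the serialized model artifact
    (HMAC stands in for HSM-backed signing)."""
    digest = hashlib.sha256(model_bytes).digest()
    return hmac.new(HSM_KEY, digest, hashlib.sha256).digest()

def verify_before_deploy(model_bytes: bytes, signature: bytes) -> bool:
    """Deployment gate: refuse to load a model whose signature
    does not match its current contents."""
    return hmac.compare_digest(sign_model(model_bytes), signature)

model = b"...serialized model weights..."
sig = sign_model(model)
print(verify_before_deploy(model, sig))         # intact model passes
print(verify_before_deploy(model + b"!", sig))  # tampered model is rejected
```

The same pattern extends to encrypting training data and inference payloads: the keys live inside the secure boundary, and only verification results cross it.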

Hardware-based enclaves are designed to segregate workloads to prevent even system administrators with elevated privileges from accessing processed data. By employing hardware modules to verify the trusted state of data enclaves before releasing keys, a ‘chain of trust’ from hardware to application is established through external attestation.
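Attestation-gated key release, as described above, can be sketched as follows. All names here are illustrative: the key manager compares the enclave's reported measurement (a hash of its code and configuration) against a value recorded at build time, and releases the key only on a match.

```python
import hashlib

# Measurement of the trusted enclave image, recorded at build time.
EXPECTED_MEASUREMENT = hashlib.sha256(b"enclave-image-v1").hexdigest()

def release_key(reported_measurement: str, wrapped_key: bytes) -> bytes:
    """Release the key only to an enclave that attests to the trusted state.
    In practice the comparison and unwrapping happen inside the HSM."""
    if reported_measurement != EXPECTED_MEASUREMENT:
        raise PermissionError("attestation failed: untrusted enclave state")
    return wrapped_key

good = hashlib.sha256(b"enclave-image-v1").hexdigest()
print(release_key(good, b"secret-key") == b"secret-key")
```

A modified enclave image produces a different measurement, so even an administrator who can alter the enclave cannot obtain the key, which is the chain-of-trust property the eBook describes.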

Hardware-based key management generates tamper-resistant logs covering access and operations to support compliance frameworks such as the EU AI Act.
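A minimal sketch of such a tamper-evident log, assuming a hash-chain design (the eBook does not specify one): each entry includes the hash of the previous entry, so altering any record breaks every hash that follows. A hardware key manager would additionally sign each entry; that step is omitted here.

```python
import hashlib
import json

def append_entry(log: list, event: dict) -> None:
    """Append an event, chaining it to the hash of the previous entry."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = json.dumps(event, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + body).encode()).hexdigest()
    log.append({"event": event, "prev": prev_hash, "hash": entry_hash})

def verify_chain(log: list) -> bool:
    """Recompute every hash; any altered record invalidates the chain."""
    prev = "0" * 64
    for entry in log:
        body = json.dumps(entry["event"], sort_keys=True)
        expected = hashlib.sha256((prev + body).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

log = []
append_entry(log, {"op": "key_access", "user": "svc-train"})
append_entry(log, {"op": "sign_model", "user": "svc-deploy"})
print(verify_chain(log))               # intact chain verifies
log[0]["event"]["user"] = "attacker"   # tampering with any record...
print(verify_chain(log))               # ...breaks verification
```

Auditors can then verify the whole history from the final hash alone, which is the property compliance frameworks such as the EU AI Act rely on for trustworthy records.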

Many risks associated with AI systems are well known and already being exploited. The threat of quantum computing decrypting currently secure data is not immediate, but Utimaco argues it should influence data and infrastructure decisions made today. The eBook advocates:

  • Enhanced controls throughout the AI development and deployment lifecycle.
  • Implementation of ‘crypto-agility’ to enable the transition to post-quantum security measures.
  • Establishment of hardware-based trust mechanisms wherever high-value assets are involved.

(Image source: “Scanning electron micrograph of an apoptotic HeLa cell” by National Institutes of Health (NIH) is licensed under CC BY-NC 2.0. To view a copy of this license, visit https://creativecommons.org/licenses/by-nc/2.0)


Interested in learning more about AI and big data from industry experts? Explore the AI & Big Data Expo events happening in Amsterdam, California, and London. These comprehensive events are part of TechEx and are co-located with other leading technology conferences. Click here for further details.

AI News is brought to you by TechForge Media. Discover upcoming enterprise technology events and webinars here.
