
Confidential AI: Intel seeks to overcome AI’s data protection problem

Organisations today are using AI to secure massive quantities of data, but many remain concerned that their AI models’ security won’t hold up to malicious attacks, and data protection remains a persistent roadblock to AI adoption. Here, Anand Pashupathy, VP and general manager of Intel’s Security Software & Services Division, talks about how the convergence of AI and confidential computing can simplify and secure AI at scale.

Posted 18 July 2024 by Christine Horton


Can you explain what Confidential AI is?

As more and more companies embrace and begin to use AI, they’ve also become more aware of how much of this processing occurs in the cloud – a concern for businesses with stringent policies to prevent the exposure of sensitive information. This complex data protection problem can be a roadblock to AI adoption, and it prompted Intel, which has been a leader in confidential computing for many years, to help pioneer the convergence of AI and confidential computing so that businesses can address these challenges.

Confidential computing helps secure data while it is actively in use inside the processor and memory. It enables encrypted data to be processed in memory while lowering the risk of exposing it to the rest of the system, through the use of a trusted execution environment (TEE). It also offers attestation, a process that cryptographically verifies that the TEE is genuine, launched correctly and configured as expected. Attestation gives stakeholders assurance that they are turning their sensitive data over to an authentic TEE configured with the correct software. Confidential computing should be used in conjunction with storage and network encryption to protect data across all its states: at rest, in transit and in use.
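To make the attestation flow concrete, here is a minimal Python sketch of the checks a relying party performs before releasing data to a TEE. It is illustrative only: the evidence structure and helper names are hypothetical, and real SGX/TDX attestation verifies Intel-signed quotes against Intel’s public key infrastructure rather than this simplified shape.

```python
import hashlib
import hmac
from dataclasses import dataclass

@dataclass
class AttestationEvidence:
    """Hypothetical evidence a TEE might produce at launch."""
    measurement: bytes   # hash of the code/config loaded into the TEE
    report_data: bytes   # caller-supplied nonce bound into the report
    signature: bytes     # signed by a hardware-rooted attestation key

def verify_evidence(evidence, expected_measurement, nonce, verify_sig):
    """Release secrets to the TEE only if every check passes."""
    # 1. The TEE is running exactly the software we expect.
    if not hmac.compare_digest(evidence.measurement, expected_measurement):
        return False
    # 2. The report is fresh: bound to our nonce, not replayed.
    if not hmac.compare_digest(evidence.report_data,
                               hashlib.sha256(nonce).digest()):
        return False
    # 3. The signature chains back to genuine hardware (delegated here).
    return verify_sig(evidence)
```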

Intel offers two confidential computing technologies that customers can choose from based on their workload’s security needs. Intel Trust Domain Extensions (Intel TDX) creates a TEE consisting of an entire virtual machine. This is the most straightforward deployment path and often requires few or no code changes. Intel Software Guard Extensions (Intel SGX) reduces the security perimeter to a single application or even a single function. This may require more software steps, but it minimises the amount of software with access to confidential data.
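As a rough way to see which of these two paths a given Linux machine exposes, one can check for the kernel’s TEE device nodes. This is a sketch under stated assumptions: the node names below are the upstream Linux kernel defaults (/dev/sgx_enclave for the SGX driver, /dev/tdx_guest inside a TDX trust domain) and may vary by kernel version and distribution.

```python
import os

# Upstream Linux device nodes for Intel's two TEE models; presence is a
# coarse signal only, not a full capability or attestation check.
TEE_NODES = {
    "/dev/sgx_enclave": "Intel SGX: application-level enclaves",
    "/dev/tdx_guest": "Intel TDX: running inside a trust domain (VM-level TEE)",
}

found = {path: desc for path, desc in TEE_NODES.items() if os.path.exists(path)}
print(found or "No Intel TEE device nodes found on this machine")
```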

Confidential AI is the application of confidential computing technology to AI use cases. It is designed to help protect the security and privacy of the AI model and associated data.

Confidential AI utilises confidential computing principles and technologies to help protect the data used to train large language models (LLMs), the output generated by these models and the proprietary models themselves while in use. Through rigorous isolation, encryption and attestation, confidential AI prevents malicious actors from accessing and exposing data, both inside and outside the chain of execution.
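As one illustration of how those pieces combine, the sketch below shows a client that refuses to send a prompt to an inference service until the service’s TEE attests successfully, and that only ever sends ciphertext. The helpers are hypothetical stubs, and in a real deployment the encryption key would be negotiated over a channel cryptographically bound to the attestation evidence.

```python
"""Client-side gating pattern: sensitive data leaves the client only after
the remote TEE attests. Transport and attestation helpers are stubbed."""
from cryptography.fernet import Fernet

def fetch_attestation(endpoint: str) -> bytes:
    return b"stub-evidence"              # stand-in for real quote retrieval

def evidence_is_valid(evidence: bytes) -> bool:
    return evidence == b"stub-evidence"  # stand-in for real verification

def send_encrypted(endpoint: str, ciphertext: bytes) -> bytes:
    return ciphertext                    # stand-in for the real network call

def query_confidential_llm(prompt: str, endpoint: str) -> bytes:
    if not evidence_is_valid(fetch_attestation(endpoint)):
        raise RuntimeError("attestation failed; refusing to send data")
    # In practice the key is negotiated over an attestation-bound channel;
    # a locally generated key stands in here.
    session = Fernet(Fernet.generate_key())
    return send_encrypted(endpoint, session.encrypt(prompt.encode()))

print(query_confidential_llm("patient record summary", "https://example.tee"))
```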

For businesses to trust in AI tools, technology must exist to protect inputs, training data, generative models and proprietary algorithms from exposure. Confidential AI helps make that happen.

What are the challenges Intel is trying to overcome with this approach?

AI is top of mind for many organisations right now – but the ability to secure AI models and algorithms is still a top concern (86 percent of companies are concerned their AI model’s security won’t hold up to malicious attacks). And AI adoption requires that security products integrate data science models and frameworks as well as the connected applications that operate in the real world.

Anand Pashupathy, VP & general manager, Security Software & Services Division, Intel

Confidential AI helps customers increase the security and privacy of their AI deployments. It can be used to help protect sensitive or regulated data from being accessed during a security breach and to strengthen a company’s compliance posture under regulations such as HIPAA and GDPR, as well as emerging AI-specific rules. And the object of protection isn’t solely the data – confidential AI can also help protect valuable or proprietary AI models from theft or tampering. The attestation capability can be used to provide assurance that users are interacting with the model they expect, and not a modified version or an imposter.
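A heavily simplified version of that model-integrity idea appears below: before a model artifact is loaded, its hash is compared against the publisher’s expected measurement. In real confidential AI deployments this measurement is bound into the TEE’s attestation report rather than checked as a file on disk, and the file name and expected value here are hypothetical placeholders.

```python
import hashlib

# Hypothetical publisher-supplied measurement of the model artifact.
EXPECTED_SHA256 = "<publisher-supplied sha256 hex digest>"

def model_matches(path: str, expected_sha256: str) -> bool:
    """Return True only if the model file hashes to the published value."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # stream in 1 MiB pieces
            digest.update(chunk)
    return digest.hexdigest() == expected_sha256

if not model_matches("llm-weights.bin", EXPECTED_SHA256):
    raise SystemExit("model does not match its published measurement")
```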

AI can enable new or better services across a range of use cases, even those that involve sensitive or regulated data that might give developers pause because of the risk of a breach or compliance violation. Such sensitive or regulated data could be personally identifiable information (PII), business proprietary data, confidential third-party data or a multi-company collaborative analysis.

Confidential AI enables organisations to more confidently put sensitive data to work, as well as strengthen protection of their AI models from tampering or theft.

Confidential AI can be applied throughout the AI pipeline, from data preparation and consolidation to training, inference and results delivery. Each of these stages can be vulnerable to attack, theft or manipulation, and confidential AI can help bolster protections across the spectrum.
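To illustrate the stage-boundary part of that idea, the toy sketch below encrypts data as it leaves one pipeline stage and decrypts it only at the next, so it is never in the clear in between; inside a TEE, the in-use processing itself is also shielded from the rest of the host. Key handling here is deliberately naive, and in practice key release would be gated on successful attestation of each stage.

```python
from cryptography.fernet import Fernet

# Naive key handling for illustration; real deployments release the key
# to a stage only after its TEE attests successfully.
pipeline_key = Fernet(Fernet.generate_key())

# Data preparation stage: output leaves the stage encrypted
# (protected at rest and in transit).
prepared = pipeline_key.encrypt(b"cleaned, consolidated training records")

# Training stage: data is decrypted for use only inside the stage's TEE.
records = pipeline_key.decrypt(prepared)
print(records)
```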

Our approach to confidential computing and confidential AI offers benefits to industries that rely on storing and processing sensitive information, and it is especially helpful for sectors like healthcare, government, finance and retail. We are also committed to advancing AI technology responsibly, and we believe the industry’s collective efforts, regulations, standards and the broader use of AI will contribute to confidential AI becoming a default feature for every AI workload in the future.

Are there any examples of where this approach can be applied in the public sector?

As noted above, confidential AI benefits any organisation that stores and processes sensitive information, and the public sector is a prime example. As governments around the world issue new regulations to help keep AI deployments secure and trustworthy, including the European Union’s AI Act, Intel is collaborating with technology leaders across the broader industry to make using AI more secure while helping businesses address critical privacy and regulatory concerns at scale.


If you are interested in this article, why not register to attend our Think Digital Government conference, where digital leaders tackle the most pressing issues facing government today?

