Researchers have been working on solutions for runtime security for years now. Computing on data and deriving value from it, while also preserving its privacy, is no small challenge. Fortunately, the development of privacy-enhancing technologies, or PETs, has given us a range of tools that can help resolve the tension between data privacy and utility.
The industry is tackling runtime security on all fronts — we’ve seen efforts from hardware makers, public cloud providers (PCPs) and software developers, to name a few. As the ecosystem of confidential computing solutions continues to grow, it will take real collaboration to bring this level of security to end users. It’s an undertaking that’s worth the effort.
Until the development of PETs, data in use was the Achilles’ heel of security. Privileged system software and hardware, including the hypervisor, host OS, firmware and DMA-capable devices, all had access to workload data and code. It seemed necessary for the system managing VM resources (memory, execution and hardware access) to also have access to the workload’s data. How else could it manage it, after all? At the same time, this offered an opening for bad actors, such as malicious insiders with administrative privileges or attackers exploiting bugs and vulnerabilities in privileged system software.
Instead of trying to make all system software secure, confidential computing takes a simple and pragmatic approach to security: It decouples resource management from data access. In this security paradigm, the hypervisor and other system software retain their responsibilities for workload scheduling, execution and memory management, but they no longer have direct access to the data within the virtual machines. In practice, this means that even if a vulnerability existed in the hypervisor, for example, exploiting it wouldn’t expose the data inside your confidential VMs.
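To make that decoupling concrete, here is a deliberately simplified sketch, not any vendor’s actual implementation: a toy “hypervisor” that can allocate, store and move guest memory pages yet only ever sees ciphertext, because the encryption key lives with the guest (in real hardware, inside the memory controller and TEE). The class names and the stand-in cipher are illustrative assumptions.

```python
# Toy model of confidential computing's core idea: the hypervisor manages
# resources (pages) but never sees guest plaintext. Names are illustrative,
# and the XOR "cipher" is a stand-in for hardware memory encryption --
# it is not secure and exists only to show who holds the key.
import hashlib
import secrets


def _keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy keystream cipher standing in for hardware AES memory encryption."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))


class ToyHypervisor:
    """Allocates, stores and moves pages without ever holding a guest key."""

    def __init__(self):
        self.pages = {}  # page_id -> opaque ciphertext

    def allocate(self, page_id: int) -> None:
        self.pages[page_id] = b""

    def store(self, page_id: int, ciphertext: bytes) -> None:
        self.pages[page_id] = ciphertext  # opaque bytes from its viewpoint

    def migrate(self, src: int, dst: int) -> None:
        self.pages[dst] = self.pages[src]  # resource management needs no key


class ToyConfidentialGuest:
    """Holds the key; in real systems the key never leaves the hardware."""

    def __init__(self, hypervisor: ToyHypervisor):
        self.hv = hypervisor
        self.key = secrets.token_bytes(32)

    def write(self, page_id: int, plaintext: bytes) -> None:
        self.hv.store(page_id, _keystream_xor(self.key, plaintext))

    def read(self, page_id: int) -> bytes:
        return _keystream_xor(self.key, self.hv.pages[page_id])


hv = ToyHypervisor()
guest = ToyConfidentialGuest(hv)
hv.allocate(0)
guest.write(0, b"secret workload data")
print(hv.pages[0])    # hypervisor sees only ciphertext
print(guest.read(0))  # guest recovers the plaintext
```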
Any organization that values its data should be interested in embracing confidential computing. Closing the security gaps that exist while data is in use has become even more critical as workforces grow more dispersed and more data moves to the cloud. On top of that, more organizations want to put sensitive data to work in valuable generative AI applications.
However, confidential computing is still evolving. The industry must collaborate to ensure users get the strongest integrity and confidentiality guarantees. This will require driving awareness of confidential computing, as well as making it more accessible with open source software, standards and tools. There’s already a collective effort underway to accomplish this: The Confidential Computing Consortium is a project community at the Linux Foundation focused on accelerating the adoption of confidential computing.
Encouragingly, all corners of the computing industry are engaged in this effort. On the hardware side, silicon providers are making major investments in developing their trusted execution environment (TEE) offerings. So far, this includes Intel SGX, Intel TDX and AMD SEV on the x86 architecture; TrustZone and the upcoming ARM CCA for the ARM ecosystem; Keystone for the RISC-V architecture; and Nvidia’s H100 for GPUs.
PCPs are quickly embracing hardware-based trusted execution environments. They’re encouraging their customers to run confidential workloads by enabling a “lift and shift” approach — this allows entire VMs to run unchanged within the TEE.
Confidential computing is essential in the private cloud as well. In both public and private clouds, you have to consider the security of the privileged system software: the operating system, the virtual machine manager and all the firmware embedded in the platform. This software has unrestricted access to user-level applications, and it is susceptible to the same vulnerabilities and risks whether you’re using a public or private cloud. We’re making progress on this front, with Linux operating systems now available that support guests running under both AMD SEV and Intel TDX in the public cloud.
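As a quick sanity check, a guest can often tell whether it is running inside a hardware TEE. The sketch below looks for markers that recent Linux kernels expose, such as a tdx_guest CPU flag or the sev-guest device node; the exact names vary by kernel version and platform, so treat them as assumptions rather than a definitive probe, and rely on remote attestation for real guarantees.

```python
# Rough guest-side probe for confidential-VM markers on Linux.
# The CPU flags and device nodes checked here are what recent kernels tend to
# expose, but names vary by kernel version and platform -- assumptions only.
import os


def cpuinfo_flags() -> set:
    """Collect CPU feature flags reported by the kernel."""
    flags = set()
    try:
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.startswith("flags"):
                    flags.update(line.split(":", 1)[1].split())
    except OSError:
        pass
    return flags


def probable_tee() -> str:
    flags = cpuinfo_flags()
    if "tdx_guest" in flags or os.path.exists("/dev/tdx_guest"):
        return "Intel TDX guest (probable)"
    if os.path.exists("/dev/sev-guest") or {"sev_snp", "sev_es", "sev"} & flags:
        return "AMD SEV family guest (probable)"
    return "no confidential-computing markers found"


if __name__ == "__main__":
    print(probable_tee())
```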
The last component of a mature confidential computing ecosystem is sensible regulation. In 2021, President Biden issued Executive Order 14028, Improving the Nation’s Cybersecurity, which outlined the principles of a zero-trust architecture that should be adopted by U.S. government agencies. The executive order is complemented by Special Publication 800-207, Zero Trust Architecture, from the National Institute of Standards and Technology (NIST). By laying out the rules of the road for the public sector, regulators can drive greater attention to zero-trust security and encourage its adoption more widely.
While the entire industry should be encouraging the adoption of confidential computing, the technology is by no means bulletproof. For instance, it’s still necessary to install security updates on whatever runs within the boundary of a confidential VM. It’s also important to pair confidential VMs with a meaningful, secure remote attestation workflow, so users can verify what is actually running inside the TEE before entrusting it with data. The industry is making progress in addressing these limitations with regular security patches and operating system updates. With a proactive approach, enterprises can significantly reduce their exposure to security vulnerabilities and better protect their confidential VMs.
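To illustrate what a meaningful attestation check involves, here is a minimal, hypothetical relying-party step: before releasing any secrets, compare the launch measurement reported by the TEE against a known-good reference value. The report structure and field names are assumptions for the sketch; real attestation reports are signed by the hardware vendor, and that signature chain must be verified as well.

```python
# Minimal, hypothetical relying-party check for remote attestation.
# The report fields below are illustrative assumptions; real reports are
# vendor-signed and must have their certificate chain verified first.
import hmac
import json

# Known-good launch measurement of the approved guest image (hex digest),
# published out of band by whoever built and measured the image.
EXPECTED_MEASUREMENT = "9f2c_example_reference_value_d41a"


def measurement_matches(report_json: str) -> bool:
    """Compare the reported launch measurement against the reference value."""
    report = json.loads(report_json)
    reported = report.get("launch_measurement", "")
    # Constant-time comparison avoids leaking how many characters matched.
    return hmac.compare_digest(reported, EXPECTED_MEASUREMENT)


def release_secret_if_trusted(report_json: str) -> None:
    if measurement_matches(report_json):
        print("Measurement matches; OK to provision keys to the guest.")
    else:
        print("Measurement mismatch; refuse to provision secrets.")


if __name__ == "__main__":
    sample = json.dumps({"launch_measurement": EXPECTED_MEASUREMENT})
    release_secret_if_trusted(sample)
```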
Now is the time for all users to embrace confidential computing. Industry players are making a concerted effort to deliver innovation across all layers of the confidential computing ecosystem. By supporting that ecosystem, we can create a reality in which privacy and computation can exist in harmony.