Trusted Execution Environments: A Paranoid Assessment

Trusted Execution Environments (TEEs) promise to protect sensitive workloads from privileged attackers through cryptographically isolated enclaves. In practice, I think they are sometimes treated as “magic security dust” that can be sprinkled on top of a system without critical consideration of their limitations. This post gives a brief analysis of some security gaps in TEEs that I think are easily missed.

Cloud-Based TEEs: Not Actually Removing the Cloud from Your TCB

AWS Nitro Enclaves

AWS Nitro Enclaves are based on the closed-source AWS Nitro system. While sometimes marketed as removing AWS from the Trusted Computing Base (TCB), this claim is questionable: AWS could deploy updates to the Nitro system that compromise enclave security, and users have no visibility into or control over this process. Ultimately, Nitro Enclaves require trusting the Nitro system, a closed-source system created and maintained by AWS.
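To make the trust relationship concrete, here is a minimal sketch (all names, byte strings, and fingerprints are hypothetical placeholders, not real AWS values) of what verifying a Nitro attestation document ultimately reduces to: checking a signature chain that terminates at a root certificate published and controlled by AWS. Pinning that root does not remove AWS from the TCB; it only fixes which AWS key you trust.

```python
import hashlib

# Hypothetical pinned fingerprint of the AWS-published attestation root
# certificate. The placeholder bytes stand in for the real root's DER.
PINNED_AWS_ROOT_FINGERPRINT = hashlib.sha256(b"aws-nitro-root-cert").hexdigest()

def chain_roots_in_aws(root_cert_der: bytes) -> bool:
    """Attestation verification terminates at this root. If AWS itself
    ships a compromised Nitro update, the signature chain still
    validates -- this check cannot detect it."""
    return hashlib.sha256(root_cert_der).hexdigest() == PINNED_AWS_ROOT_FINGERPRINT

# The genuine AWS root passes; anything else fails. Either way, the
# trust anchor is a key AWS controls.
assert chain_roots_in_aws(b"aws-nitro-root-cert")
assert not chain_roots_in_aws(b"tampered-root-cert")
```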

Info

This is a bit of a diversion, but ultimately Apple Private Cloud Compute also seems to fall into this category. Apple’s efforts to open source components are commendable, but the system still ultimately relies on an unauditable CPU that Apple itself designs and maintains.

Google Confidential Computing

Google’s Confidential Computing relies on CPU-vendor-based attestations (namely AMD SEV-SNP and Intel TDX), but ultimately has similar issues. According to Google’s documentation, attestation verification depends on firmware PCR hashes that users cannot independently verify and must instead download from a trusted Google source. Users must trust Google’s assertions about what constitutes a valid hash value, creating a circular trust dependency on the very provider the TEE is supposed to protect against.
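The circularity can be sketched in a few lines (names and byte strings are hypothetical, not Google’s actual endpoints or values): the verifier accepts an attestation only if its firmware measurement appears on a “golden” allowlist, but that allowlist comes from the provider itself, so “valid” effectively means “the provider says so.”

```python
import hashlib

# Hypothetical golden firmware hashes. In practice the equivalent list
# is downloaded from a Google-operated source, making the provider the
# trust anchor for its own attestations.
PROVIDER_PUBLISHED_FIRMWARE_HASHES = {
    hashlib.sha384(b"provider-built-firmware").hexdigest(),
}

def verify_launch_measurement(reported_hash: str) -> bool:
    """Accept the attestation only if the reported firmware hash is on
    the provider-supplied allowlist. Users cannot rebuild the firmware
    to reproduce these hashes independently."""
    return reported_hash in PROVIDER_PUBLISHED_FIRMWARE_HASHES

# Firmware the provider published a hash for is accepted...
assert verify_launch_measurement(
    hashlib.sha384(b"provider-built-firmware").hexdigest()
)
# ...while anything unlisted is rejected, even if independently audited.
assert not verify_launch_measurement(
    hashlib.sha384(b"independently-audited-firmware").hexdigest()
)
```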

Azure Confidential Computing

Azure’s Confidential Computing is in a similar position: it ultimately relies on trusting Microsoft’s closed-source “Host Compatibility Layer,” which runs inside the Confidential VMs (CVMs) themselves.

On-Prem TEEs: Physical Access Is Game Over

AMD SEV-SNP and Intel TDX both explicitly exclude physical attacks from their published threat models (AMD, Intel). So while TEEs are meant to defend against host-level access (i.e. an attacker who is root on the machine), they do not protect against an attacker with physical access. I expect this is because a truly determined attacker can always decap a chip and use an electron microscope to extract TEE keys. That is a high bar, but it is worth noting that a defense which fully relies on TEEs to stop physical attacks is built on shaky ground.

Conclusion: Useful Hardening, Not Complete Protection

Despite all of the above, I do still believe that TEEs provide valuable security hardening. But I think it is worth being aware of the ways in which they don’t deliver the complete protection often implied in marketing materials. They make attacks more difficult and require more concerted malicious action, but they don’t eliminate fundamental trust issues with cloud providers or protect against physical access.