If Apple can make a cloud-based AI system that is transparent and open to security research at this level, every other firm offering such services should do the same — if they care about protecting your data.
Apple will introduce new Macs and the first services in its Apple Intelligence collection next week. To protect cloud-based Apple Intelligence requests, it has put industry-beating security and privacy-protecting transparency in place around its Private Cloud Compute (PCC) system, which handles those requests.
What that means is that Apple has pulled far ahead of the rest of the industry in building rock-solid security and privacy protection for AI requests handled through its cloud, and the move is already delighting security researchers.
Why is that? Because Apple has opened the doors of its Private Cloud Compute system wide to security testers, in the hope that the combined energy of the infosec community will help build a moat to protect the future of AI.
Make no mistake, this is what is at stake.
As AI promises to permeate everything, the choice we face is between a future of surveillance the likes of which we have never seen before and the most powerful machine/human augmentation we can dream of. Server-based AI could deliver either future. And with quantum computing looming just a few hills and valleys away, the information picked up by non-private AI systems could be weaponized and exploited in ways we can’t yet imagine.
That means to be secure tomorrow we must take steps today.
Protecting AI in the cloud
In part, what Apple is trying to do with PCC is protect that future and ensure it can say with total confidence that Apple Intelligence is the world’s most private and secure form of AI. PCC is the system that lets Apple run generative AI (genAI) models that need more processing power than is available on the iPad, iPhone, or Mac you use to get things done. It is the first port of call for those heavier AI requests and has been deliberately designed to protect privacy and security. “You should not have to hand over all the details of your life to be warehoused and analyzed in someone’s AI cloud,” Apple Senior Vice President of Software Engineering Craig Federighi said when announcing PCC at WWDC.
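To make that division of labor concrete, here is a minimal, purely hypothetical sketch of the routing idea: requests are served on-device when the local model can handle them and escalate to PCC only when they need more capacity. The types, names, and threshold below are assumptions for illustration, not Apple’s API.

```swift
// Hypothetical sketch only: these types, names, and the capacity threshold
// are invented for illustration and are not Apple's actual API.

enum ExecutionTarget {
    case onDevice             // the request stays on the iPhone, iPad, or Mac
    case privateCloudCompute  // the request escalates to a PCC node in Apple's cloud
}

struct AIRequest {
    let prompt: String
    let estimatedCost: Int    // hypothetical proxy for the model capacity a request needs
}

/// Prefer local processing; escalate only when a request exceeds
/// what the on-device model can serve.
func route(_ request: AIRequest, onDeviceBudget: Int = 100) -> ExecutionTarget {
    request.estimatedCost <= onDeviceBudget ? .onDevice : .privateCloudCompute
}

let note = AIRequest(prompt: "Summarize this note", estimatedCost: 40)
let report = AIRequest(prompt: "Draft a long report from these documents", estimatedCost: 400)
print(route(note))    // onDevice
print(route(report))  // privateCloudCompute
```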
The company promised that to “build public trust” in its cloud-based AI systems, it would allow security and privacy researchers to inspect and verify the end-to-end security and privacy of the system. The security community is so excited because Apple has exceeded that promise by making all of the resources it created for researchers publicly available.
Security research for the rest of us
Apple provided the following resources:
The PCC Security Guide
Apple has published the PCC Security Guide, an extensive 100-page document with comprehensive technical detail about the components of the system and how they work together to secure AI processing in the cloud. The guide goes deep, covering the built-in hardware protections and how the system handles various attack scenarios.
A Virtual Research Environment
The company has also created something security researchers should get excited about: a Virtual Research Environment (VRE) for the Apple platform. This is a set of tools that lets you perform your own security analysis of PCC using a Mac. It is a robust testing environment that runs a PCC node (essentially a production machine) in a virtual machine, so you can beat it up as much as you like in search of security and privacy flaws.
You can use these tools to:
- List and inspect PCC software releases.
- Verify the consistency of the transparency log (see the sketch after this list).
- Download the binaries corresponding to each release.
- Boot a release in a virtualized environment.
- Perform inference against demonstration models.
- Modify and debug the PCC software to enable deeper investigation.
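That transparency-log check rests on a well-established technique: append-only logs of this kind (Certificate Transparency is the best-known example) hash their entries into a Merkle tree, so anyone can recompute the root and confirm that a given release really is recorded in the log. Here is a deliberately simplified Swift sketch of that general idea; it is not Apple’s implementation, and the helper functions and release names are invented for illustration.

```swift
import CryptoKit
import Foundation

// Simplified sketch of the general idea behind verifying an append-only
// transparency log; this is not Apple's implementation, and the release
// names below are invented for illustration.

/// Hash a single log entry (for example, the measurement of a PCC software release).
func leafHash(_ entry: Data) -> Data {
    Data(SHA256.hash(data: entry))
}

/// Combine two child hashes into a parent hash (simplified Merkle step).
func nodeHash(_ left: Data, _ right: Data) -> Data {
    Data(SHA256.hash(data: left + right))
}

/// Compute a root hash over an ordered list of leaves. (Real logs use a
/// stricter scheme for odd nodes and domain-separate leaves from inner nodes.)
func merkleRoot(of leaves: [Data]) -> Data? {
    guard !leaves.isEmpty else { return nil }
    var level = leaves
    while level.count > 1 {
        var next: [Data] = []
        var i = 0
        while i < level.count {
            if i + 1 < level.count {
                next.append(nodeHash(level[i], level[i + 1]))
            } else {
                next.append(level[i]) // promote an unpaired node to the next level
            }
            i += 2
        }
        level = next
    }
    return level.first
}

// Usage: recompute the root over the published entries and check that the
// release you are about to trust actually appears in the log.
let entries = ["pcc-release-001", "pcc-release-002", "pcc-release-003"]
    .map { leafHash(Data($0.utf8)) }
let root = merkleRoot(of: entries)!
let candidate = leafHash(Data("pcc-release-002".utf8))
print("release present in log:", entries.contains(candidate))
print("log root:", root.map { String(format: "%02x", $0) }.joined())
```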
Publishing the PCC source code
This is a big step in its own right, and the code is provided under a license agreement that lets researchers dig deep for flaws. Within this set of resources, the company has made available source code covering privacy, validation, and logging components. (All of this source code is available on GitHub now.)
Bounty hunters
Of course, the company understands that it must also incentivize researchers. To do so, Apple opened up a bounty system for those who succeed in finding flaws in the PCC code.
To put the extent of this commitment in context, these bounties are worth as much as those Apple pays to researchers who discover iOS security flaws.
I believe that means Apple sees AI as a very important component of its future, sees PCC as an essential hub on the road to that future, and will now look for ways to transform platform security using similar tools. Apple’s fearsome reputation for security means even its opponents have nothing but respect for the robust platforms it has made. That reputation is also why more and more enterprises are, or should be, moving to Apple’s platforms.
Apple’s security effort is now under the passionate leadership of Ivan Krstić, who also led the design and implementation of key security tools such as Lockdown Mode, Advanced Data Protection for iCloud, and two-factor authentication for Apple ID. Krstić has previously promised that “Apple runs one of the most sophisticated security engineering operations in the world, and we will continue to work tirelessly to protect our users from abusive state-sponsored actors like NSO Group.”
When it comes to bounties for uncovering flaws in PCC, researchers can now earn up to $1 million if they find a weakness that allows arbitrary code execution with arbitrary entitlements, or a cool $250,000 if they uncover a way to access a user’s request data or sensitive information about their requests.
There are many other categories, and Apple seems committed to rewarding discoveries even when they fall outside those published categories: “Because we care deeply about any compromise to user privacy or security, we will consider any security issue that has a significant impact to PCC for an Apple Security Bounty reward, even if it doesn’t match a published category,” the company explains. Apple will award the biggest bounties for vulnerabilities that compromise user data and inference request data.
Apple’s gamble
It’s important to stress that in delivering this degree of industry-leading transparency, Apple is gambling that any weaknesses that do exist in its system will be spotted and disclosed, rather than identified only to be sold on or weaponized.
The thinking is that while nation-state-backed attackers might have resources that give them a similar breadth of insight into Apple’s security protections, they will not share word of any such vulnerabilities with Apple. Those attackers, along with the best-financed criminal or semi-criminal entities (among which I personally place surveillance-as-a-service mercenaries), will spend time and money finding vulnerabilities in order to exploit them.
But there is a big world of security researchers who might also uncover weaknesses in the system and would be willing to share them, enabling Apple to patch vulnerabilities faster.
The way Apple sees it, one way to ensure such vulnerabilities aren’t turned into privacy-destroying attacks is to have more people discover them at the same time; after all, even if one dodgy researcher chooses to use a weakness in an attack, another might disclose it to Apple first, effectively shutting down that route. In other words, by making these details available, Apple changes the game. In a strange irony, opening up these security protections may well make them more secure.
That’s the hope, anyway.
“We believe Private Cloud Compute is the most advanced security architecture ever deployed for cloud AI compute at scale, and we look forward to working with the research community to build trust in the system and make it even more secure and private over time,” Apple explained.
Why this matters
It is also a defining moment in security for AI. Why? Because Apple is an industry leader that sets expectations with its actions, and with these actions the company has just defined the degree of transparency to which all companies offering cloud-based AI systems should now be held. If Apple can, they can, too. And any business or individual whose data or requests are being handled by cloud-based AI systems can now legitimately demand that degree of transparency and protection. Apple is making waves again.