November 3, 2011

Real Security in Mac OS X Requires Apple-Signed Certificates

The Mac needs to be as secure as the iPhone. The good news is Apple already has the tools. The bad news is they are forcing developers to use the wrong ones.

There are three primary ways Apple increases security of applications running on the Mac and the iPhone: Sandboxing, Code Auditing, and Certification. While all these are incrementally valuable, none is perfect on its own.

The problem Mac developers are facing is that the two that Apple is enforcing on the Mac App Store (Sandboxing and Code Auditing) are currently implemented in ways that are actively bad for developers and not particularly good for users. And the method that would provide the most benefit for developers and users (Certification) isn’t enforced broadly enough to be useful.

Part 1: Play in the Sandbox

“Sandboxing,” simplified, is when third-party developers must make a list of all the things they’d like to do in their application that are potentially “iffy,” and this list (called “entitlements”) is included inside the application when it ships. These entitlements say things like, “I’d like to occasionally use the printer,” or “I want to be able to open and save files in this directory.”
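To make that concrete, here’s a minimal sketch of the kinds of entitlements a sandboxed Mac app declares. The key names are App Sandbox entitlement keys as I understand them; in reality the list ships as an XML property list embedded in the app’s signature, not as code, so take this as illustration only:

    // Illustrative sketch only: real entitlements live in an XML plist baked into
    // the code signature. The keys below are App Sandbox entitlement keys.
    let entitlements: [String: Bool] = [
        "com.apple.security.app-sandbox": true,                    // opt in to the sandbox
        "com.apple.security.print": true,                          // "I'd like to occasionally use the printer"
        "com.apple.security.files.user-selected.read-write": true, // open and save files the user picks
        "com.apple.security.network.client": true                  // make outgoing network connections
    ]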

When a user runs the app on her system, Lion enforces the entitlements, which means that if the application loads some submodule from the web or is otherwise hacked, it can’t, say, start reading the user’s e-mails and mailing them to some server somewhere. This is important because certain parts of the system, like the TIFF reader or the PDF interpreter, have been shown to have vulnerabilities in the past, so a user could do something as benign-seeming as loading a TIFF from the net into a legitimate application, but code hidden inside a maliciously-crafted TIFF could wreak havoc on her system. (You could say this is a straw-man argument, because certificates also have to be correctly implemented or they provide no security. My assertion here is that the certificate code is more mature, much smaller, and touches far fewer system components than sandboxing – but I admit my data on this might be incorrect.)

Importantly, for this to work, Lion knows – through the magic of digital signing – that the entitlements haven’t been changed since the app was posted by the developer, so if the user somehow gets a modified version of the application that’s been hacked, the signature will be off, and Lion won’t run the app.

Nifty.
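For the curious, here’s a minimal sketch (in Swift, against the Security framework) of the kind of tamper check that digital signing makes possible. Lion does this itself at launch time; the app path below is a made-up example:

    import Foundation
    import Security

    // Sketch: verify that an app bundle's code signature is still intact, i.e. that
    // nothing inside the bundle has changed since the developer signed it.
    func signatureIsIntact(appURL: URL) -> Bool {
        var staticCode: SecStaticCode?
        guard SecStaticCodeCreateWithPath(appURL as CFURL, SecCSFlags(), &staticCode) == errSecSuccess,
              let code = staticCode else { return false }
        // Re-checks every sealed resource against the checksums recorded in the signature.
        return SecStaticCodeCheckValidity(code, SecCSFlags(), nil) == errSecSuccess
    }

    // Hypothetical usage - a tampered bundle fails the check:
    // print(signatureIsIntact(appURL: URL(fileURLWithPath: "/Applications/SomeApp.app")))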

Except in practice, this has issues. One is, everything a developer might want to do has to have an entitlement enabling and disabling it, and those entitlements have to work. It’s an enormous job for Apple to take the entire operating system and rewrite it in terms of entitlements. (Bertrand Serlet once told me that Mac OS X now has roughly as many instructions as we believe the human brain does. So: big job.)

As of now, that job isn’t complete. Apple was originally supposed to require all apps submitted to the Mac App Store to have entitlements enabled as of November 1st, but they have announced they are pushing that back to March 1st, because the system isn’t ready, and developers have found that their legitimate applications don’t work with entitlements turned on.

Entitlements are a binary solution – if there’s a hole anywhere in the system that malware authors find, then there’s really not much Apple can do until they issue a full operating system patch. We call this kind of solution “brittle” – it requires everything to have been written perfectly, for every contingency, or it fails completely.

Yes, given infinite programming hours and infinitely perfect programmers, it’s possible, but in the meantime you end up with a bunch of hassle for programmers who are scrambling to get their applications working with entitlements.

But it’s all worth it for security, right? Well, it would be, if the system were secure – which it isn’t. There are three holes I can think of immediately:

  1. The entitlement-limiting code has to be implemented perfectly by Apple, which requires perfect code. This raises a question: if Apple can write perfect code, why doesn’t it just fix the TIFF and PDF submodules so they have no bugs? The answer is: there’s no such thing as perfect code. There’s only failing gracefully.
  2. Because some apps legitimately do a lot of different things, there’s nothing stopping malware developers from writing, say, a utility which claims to scour your disk for extra-large files (and thus requests entitlements for the whole system), but in reality does something nefarious with them. Apple has said they’ll be looking closely at the apps that request a lot of entitlements, but that is essentially an admission that entitlements don’t work, since it punts the entire problem over to “Code Auditing,” which I’ll talk about in a second.
  3. There’s absolutely nothing requiring applications that are not sold through the Mac App Store to use entitlements. And, even if there were such requirements, the malware makers would just distribute nefarious applications whose entitlements included everything the system could ever do.

Imagine the entitlements system as giving each application a set of keys to a building you want to be highly secure, except 1) some of the locks don’t work and some doors aren’t locked, and there are a TON of doors to check, 2) the applications get to request any keys they want and are just subject to a pat-down if they request “too many” keys, and 3) applications can just say, “No thanks, I never want to use keys” and ignore the locks anyway.

It won’t come as a surprise, given the above, that malware isn’t typically distributed through the Mac App Store – although there are actually far more compelling reasons for this, which also give us the key to real security, and which I’ll talk about below in “Certification.”

Part 2: Auditing and the Halting Problem

The second prong in Apple’s security tools is auditing. All apps submitted to the Mac App Store are audited by Apple’s team, in part to make sure the apps are functional and largely bug-free, but also to make sure they don’t perform any nefarious operations.

There’s a lot of good that comes from this audit, to be sure. Apple has a chance to keep a lot of crap out of their store, and it’s nice to have a store full of software that users know is at least minimally functional.

However, it would be a mistake to think an audit could find malware. There are an infinite number of ways to write malicious programs and disguise them, and the dance between malicious developers and auditors could go on forever.

I could write code which calls an undocumented method that turns out to give me privileges. The auditors search for that selector in my code. I encode my selector as a ROT13 string. They search for ROT13. I switch to a different obfuscation. They put in OS-level traps for when any of a set of methods is called by me. I put in conditional code so that my malware doesn’t deploy if the machine’s address is “apple.com.” They start auditing code from a different internet address. I write a legitimate app which remembers the internet addresses it sees for the first three months of its life, then one day mails them all to me… and on and on.
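To make the first couple of rounds of that dance concrete, here’s a minimal Swift sketch of the ROT13 trick. The selector name is made up for illustration; the point is just that it never appears in the binary as a literal string an auditor could grep for:

    import Foundation

    // Sketch of the obfuscation step above: the (made-up) selector name
    // "secretPrivilegedMethod" never appears in the binary as a literal string.
    func rot13(_ s: String) -> String {
        return String(s.unicodeScalars.map { scalar -> Character in
            switch scalar.value {
            case 65...90:  return Character(UnicodeScalar((scalar.value - 65 + 13) % 26 + 65)!)
            case 97...122: return Character(UnicodeScalar((scalar.value - 97 + 13) % 26 + 97)!)
            default:       return Character(scalar)
            }
        })
    }

    let obfuscated = "frpergCevivyrtrqZrgubq"            // ROT13 of "secretPrivilegedMethod"
    let selector = NSSelectorFromString(rot13(obfuscated))
    // someObject.perform(selector)                      // resolved only at runtime, never at audit time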

Remember, Apple has had security flaws in their own code, and plenty of them – and that’s code they have the source to. Does anyone really think a team of auditors can find the security holes in hundreds of thousands of third-party applications just by running them for a few days?

Oh, but their clever engineers will just write a program that detects all security holes? Well, (a) why didn’t they do that for their own code, and (b) please see “the halting problem.”

No. And, just like with entitlements, your code doesn’t get audited if you don’t submit it to the Mac App Store. So, malware authors don’t bother. It’s not like they would get good reviews, anyhow.

Part 3: Certificates and Barn Doors

Certificates have a myriad of uses, so their role in security is often misunderstood. At its heart, a certificate is like a wax seal with a signet ring pressed into it: it says, “I guarantee that the contents of this container haven’t been modified since I packed them myself.”

If you were a king and about to drink some brandy, you’d feel good knowing the bottle hadn’t been modified since it left the distillery.

Likewise, if the “container” is an application, it gives the user a fuzzy feeling if she knows the contents of the application are as the author made them. Lion and earlier cats will helpfully show you, when you first launch an application, who signed the certificate on the app, so you can decide if you trust them or not.

But with the king/ring/wax analogy you can easily see the limits of certificates: You have to trust the person who put their ring in the wax, or you don’t trust the contents. Certificates are incredibly easy to get, so just having any certificate on an application doesn’t really help security much, unless you’ve somehow memorized a list of every software vendor, present and future, who you trust.

For those of us who aren’t Dr. Who, just having a certificate isn’t a panacea. It has to come from a trusted source. As it happens, Apple has a trusted source: it’s Apple. And, in fact, every Mac developer currently gets a certificate from Apple – one that Apple controls.

This enables the user to say, “Well, this application came from a developer who registered with Apple, so at the very least if this is malware I’ll have a clue who did it to me.” That’s pretty neat, but it gets better.

Certificates can be “revoked” at any time by their issuer (in this case, Apple), so if any developer starts (intentionally or accidentally) distributing malware, Apple can instantly throw a kill switch and Lion won’t run that app any more. Boom. Done. That’s pretty neat, and something the previous two methods don’t offer.
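Here’s a minimal Swift sketch of what checking for an Apple-issued certificate could look like, using the Security framework’s requirement language. This is my illustration of the idea, not something Lion actually enforces today; “anchor apple generic” is the requirement that matches certificates chaining back to Apple’s root:

    import Foundation
    import Security

    // Sketch: does this app's signature chain back to a certificate issued by Apple?
    // If Apple later revoked that certificate, a check like this (with revocation
    // checking enabled) would start failing, which is the kill switch described above.
    func isSignedWithAppleIssuedCertificate(appURL: URL) -> Bool {
        var staticCode: SecStaticCode?
        guard SecStaticCodeCreateWithPath(appURL as CFURL, SecCSFlags(), &staticCode) == errSecSuccess,
              let code = staticCode else { return false }

        var requirement: SecRequirement?
        guard SecRequirementCreateWithString("anchor apple generic" as CFString,
                                             SecCSFlags(), &requirement) == errSecSuccess else { return false }

        // Fails if the seal is broken or the certificate chain doesn't end at Apple's root.
        return SecStaticCodeCheckValidity(code, SecCSFlags(), requirement) == errSecSuccess
    }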

In fact, to me, this is the only kind of security that matters. The other two methods involve programmers doing a ton of work ahead of time to make sure there are no holes in the system, ever. Well, good luck with that – we’ve been trying to do it for 60 years. But, hey, I’m sure you’ve figured out the magic key.

But, in the real world, security exploits get discovered by users or researchers outside of Apple, and what’s important is having a fast response to security holes as they are discovered. Certificates give Apple this.

Now, the same refrain from the limitations of sandboxing and code auditing applies to certificates – the only certificates that provide real security (the ones that come from Apple) are applied only to applications that go through the Mac App Store.



It may seem that, since malware developers can only opt out of sandboxing, code auditing, and certificates by opting out of the Mac App Store, I’ve just made three strong arguments that Lion should refuse to run applications that aren’t from the Mac App Store.

This is, in fact, one solution, but it’s not a good one. Because when you bottleneck applications through a single point, you stifle innovation; because too much power concentrated in one place corrupts the system for everyone; because huge software makers like Adobe and Microsoft and even Valve are not eager to be required to give 30% of every sale to Apple (so we’d lose their products one by one); because we’d need all kinds of hacks so we could still run custom software; because of a thousand reasons, it’s important for Apple to have a “third rail” where crazy innovation can happen.

So, my solution (and it’s surprisingly simple): Apple should allow each developer to sign her applications with the certificates Apple provides. Lion should ONLY run applications with Apple-provided certificates, and Lion should have a control panel that says, “Here’s a list of applications you (the user) will allow to be run that don’t have trusted certificates from Apple.”

“What?” you ask, if you’re not a developer. “You just said Apple already issues developers certificates.” Yes, they do. But they currently don’t allow us to sign the apps we release ourselves with the special certificate Apple issues us. Only Apple can do that, and they only do it for applications we submit to the Mac App Store – applications that pass auditing and give up 30% of every sale. (Which isn’t always onerous – but also isn’t catching any malware authors.)

My suggestion is for Apple to provide certificates directly to developers and allow the developers to sign their own code. And, by doing this, Apple can then reasonably say, “Ok, now we’re going to, by default, not allow the user to run any code whose certificate wasn’t issued by us and signed by a real third-party developer (except the stuff the user checks in the control panel).”
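Put together, the policy I’m proposing amounts to something like the following sketch. It reuses the hypothetical isSignedWithAppleIssuedCertificate check from the earlier example, and userApprovedExceptions stands in for the control panel’s list of apps the user has explicitly allowed:

    import Foundation

    // Sketch of the proposed launch policy: Apple-issued certificate, or an explicit
    // user-approved exception from the hypothetical control panel; otherwise, no launch.
    func shouldLaunch(appURL: URL, userApprovedExceptions: Set<String>) -> Bool {
        if isSignedWithAppleIssuedCertificate(appURL: appURL) {   // defined in the earlier sketch
            return true
        }
        return userApprovedExceptions.contains(appURL.path)       // "run without a trusted certificate"
    }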

Apple then has the power, if any app is found to be malware, to shut it down remotely, immediately. This is a power Apple doesn’t have now over malware, and it won’t come from more sandboxing or more code audits. I have shown that the only way to achieve it is to require developers to sign their code with a certificate from Apple.

[Image: mockup of the proposed certificate preference pane]

Evolution has shown us that there are no perfect systems. We didn’t evolve to never make mistakes, we evolved to learn and heal when we do. Code auditing and sandboxing are non-biomimicry – nature doesn’t try to audit every line of code, she tries to fail gracefully. Certificates alone offer a graceful failover – if a developer signs up with Apple and provides false info and manages to trick people into downloading her malware, well, we can just throw a switch and she’s done. (And she can pony up $99 to re-apply to the developer program to get another certificate, but it’s going to get expensive getting new P.O. boxes and identities and coughing up $99 every time she’s caught.)

To make a system secure you need to think about the future of malware, for sure, but you should also think about the past. Right now sandboxing and code auditing solve 0% of the exploits in Lion that I’ve ever heard about, because malware developers don’t bother submitting their code to the app store and Lion will run software from anywhere.

If we look at the actual malware exploits we’ve seen in the wild, like MacDefender (which is downloaded directly from websites), of the three approaches above only Apple-signed certificates would rout them. (Apple’s approach to MacDefender was a fourth security technique, which is kind of the flip of requiring certificates: instead of only running software that’s been signed with a trusted certificate, Lion has code to refuse to run certain programs it recognizes as “bad.” The problem is that the recognizer needs to be updated every time the malware maker updates their code, which was as often as every day before MacDefender got physically shut down. This isn’t a problem with certificates as I suggest using them.)

Sandboxing is a useful tool in a set of tools, but it shouldn’t be treated as a panacea. I’m happy that Apple’s Safari and Mail and Preview are sandboxed, since those are where the major exploits I’ve seen have been (outside of applications that aren’t in the Mac App Store). I’ll certainly be using sandboxing in my future software as well, but I do not want it forced on me if it doesn’t work.

Apple’s attitude towards sandboxing needs to be, “We will turn it on in March, but if a developer’s legitimate code doesn’t run because of our bugs, we will of course make an exception and accept their app as-is.”

Meanwhile, we need to address real security in Mac OS X. We all want to feel as comfortable putting apps on our Macs as we do on our iPhones. I certainly admit that requiring all apps to come from the Mac App Store would do it, but giving developers access to their certificates is much less draconian and wouldn’t chase the free thinkers off Apple’s platform.

Reaction & Updates

• John Siracusa (@siracusa) points out that “security isn’t the only reason for sandboxing. Predictability, ease of automated [un]install and backups, etc.” This is an excellent point and a reason we shouldn’t just bail on sandboxing (and why I’ll be adopting it in my apps).

• @JensQ asks “Why is registration+certificate from Apple better than just a blacklist for executables that should never run?” The reason is that with a blacklist, malware authors could just change their signature every day (as MacDefender did) to continually dodge the blacklist. With a whitelist, they have to go through the process of re-applying to be developers, lying to Apple, and paying the annual fee, which slows them down and costs them a lot.

• Zach Burlingame (@ZachBurlingame) comments “[Certificate revocation] prevents future boxes from running it, but it doesn’t necessarily kill the malware on boxes that already are or have run it.” This is true if the malware has gained enough privileges to corrupt the system’s certification-checking. But if all it has done is, say, mailed your contacts to itself, revoking the certificate will be effective.

• Jeffrey Woldan (@jwoldan) says, “I agree sandboxing doesn’t affect malware dramatically, but it should reduce the possibility of application exploits.” I agree, it’ll reduce them. My argument is that exploits against third-party apps (other than code embedded in Apple’s apps) are rare or nonexistent on the Mac. The sandboxing I’m talking about, for instance, doesn’t affect Flash, since Flash isn’t distributed through the Mac App Store (and couldn’t be, currently) and runs inside a browser – Safari has its own super-sandbox. (In the future this kind of attack may be more prevalent, though.)

• Robert Atkins (@ratkins) opines, “The only problem I see in your argument is that Apple could revoke certs of any software they don’t like, not just “malware”. So they still have that all-corrupting absolute power. (Even if users can say “run this app without a cert”, 99% won’t.)” This is a serious worry, but compared to the specter of all apps coming only from the Mac App Store, it’s a much lesser evil. Also, I believe Apple realizes how careful they would have to be with something as heavyweight as certificate revocation, as they haven’t done it once in four years of iOS apps. Remember also that when Amazon pulled some editions of Orwell books from Kindles, there was such an outcry that Amazon promised never to do it again.

• Mac Rumors just published an article which underscores my points above: a security researcher submitted an app to the iOS App Store – it was audited and approved – which sounds as though it demonstrates holes in sandboxing on iOS. Note that iOS’s auditing and sandboxing systems have a four-year head start on Lion’s.
