The Real Story Behind the Apple Privacy Statement

Photo Credit: PicJumbo
[Editor’s note: Neither the author nor anyone associated with this blog is a lawyer of any kind. This blog is not to be taken as legal advice under any circumstances. If you have a personal privacy question of law, consult a trained and licensed attorney.]

There’s been a LOT of talk in the news about how Apple is standing up to the Federal Government (and specifically the FBI), and it’s important to understand why the stance Apple is taking matters. This is not a blanket statement against the government cracking encryption (which would be a worthwhile stance to take, but is not what is at stake here).

The major issue is that what many people (even some IT Professionals) think is happening is not what is actually happening.

Basically, any iPhone or iPad running iOS 8 and up puts the government in a position where it cannot easily get to the data stored on a device that has been locked with a passcode of 6 or more characters and disconnected from iCloud. The reasons for this are complex and highly technical, but the basic idea is that not even Apple can reverse the locking process on a phone secured this way. Mostly, this is because the phone’s own internal identification data is combined with the passcode to create a hash – a mathematical representation of the two values – that makes up the key to unlock the encryption. Put in your passcode correctly, the output of that math matches what the phone is expecting, and the phone unlocks. Put in the wrong passcode, and there’s no match, and the phone stays locked tight. Put in the wrong passcode enough times, and the phone forgets the key entirely, essentially leaving all the data permanently encrypted – with the same impact as erasing it, as far as the government is concerned.
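To make the idea concrete, here is a minimal sketch of that kind of passcode-plus-device-ID key derivation. It is purely illustrative: the function names, the iteration count, and the use of PBKDF2 are my assumptions, not Apple’s actual implementation, which works against a unique value fused into the phone’s hardware.

```python
# Illustrative sketch only: combine a passcode with a device-unique value
# so that only the right passcode on the right device reproduces the key.
import hashlib
import os

DEVICE_UID = os.urandom(32)  # stand-in for the phone's hardware-unique ID

def derive_unlock_key(passcode: str) -> bytes:
    """Mix the passcode with the device UID using a slow key-derivation function."""
    return hashlib.pbkdf2_hmac(
        "sha256",
        passcode.encode("utf-8"),
        DEVICE_UID,            # tied to this one physical device
        iterations=100_000,    # deliberately slow, so each guess is expensive
    )

stored_key = derive_unlock_key("correct-passcode")

# Right passcode: the derived key matches and the data can be decrypted.
assert derive_unlock_key("correct-passcode") == stored_key
# Wrong passcode: no match, and the phone stays locked tight.
assert derive_unlock_key("wrong-guess") != stored_key
```

Because the derivation depends on a value baked into that one device, copying the encrypted data off the phone and attacking it on faster hardware doesn’t help; every guess has to go through the phone itself.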

In this case, a phone that was in the possession of one of the San Bernardino shooters has been locked with at least a 6 character passcode, and was disconnected from iCloud about a month before the shooting. That means that the government has 10 tries to get the code, or the phone irreversibly loses the encryption key, rendering all data sitting on the phone pretty much unreadable forever.

Here’s where things get tricky.

Apple is not saying they are refusing to unlock the phone for the FBI, or that they refuse to give the government anything Apple has access to directly. This is a common misconception widely reported by the media, and it is flat out wrong. Apple *cannot* unlock the phone. It’s not physically or digitally possible for them to do it without changing the codebase that iOS 9 (which is on the phone) uses. Apple *can* give – and has already given – the government anything stored in iCloud. Apple has done this before when there is a valid warrant for that data; it’s stored under Apple’s own encryption, so they can reverse it and provide the info.

The issue here is that the shooter either broke iCloud backup, or manually turned it off, about a month before the shooting. That means that the majority of the information the government wants is located – and is *only* located – on the phone. Since Apple cannot reverse the locking mechanism of the phone, they do not have access to that information and can’t hand it over to the government even if they wanted to.

What Apple can do – and is refusing to do – is give the government a way to perform what is known as a “brute force” attack against the phone. A brute force attack is literally a person or computer trying combination after combination until they hit the right passcode. Normally, each try at the passcode takes a tiny amount of time to process, and iOS adds a further delay on top of that as a measure against exactly this kind of attack. To a user this isn’t an issue, as a human entering a code won’t even notice it; but a brute force attack requires a computer to churn through attempt after attempt automatically, and those tiny amounts of time add up to a LOT of extra time at that scale. The second – and more pressing – issue is that after 10 wrong tries, the phone discards the key and the data can never be decrypted. Ten tries is nowhere near enough to accomplish a brute force attack, and based on what the government is saying, they’re around try 8 right now with no success.
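Some rough, back-of-the-envelope numbers show why the delays and the 10-try limit matter. The figures below are assumptions for illustration (a 6-character alphanumeric passcode and an 80-millisecond enforced delay per attempt), not measured iOS values:

```python
# Back-of-the-envelope math: why per-attempt delays plus a 10-try limit
# make a brute force attack impractical. All figures are illustrative.

ALPHANUMERIC = 26 + 26 + 10           # lowercase + uppercase + digits
keyspace = ALPHANUMERIC ** 6          # every possible 6-character passcode

delay_per_try_s = 0.08                # assumed enforced delay per attempt
worst_case_hours = keyspace * delay_per_try_s / 3600

print(f"possible 6-character passcodes: {keyspace:,}")   # roughly 56.8 billion
print(f"worst case at {delay_per_try_s}s per try: {worst_case_hours:,.0f} hours")
print("tries allowed before the key is destroyed: 10")
```

Even at a fraction of a second per guess, the keyspace runs to tens of billions of combinations, and the attacker gets exactly ten of them before the key is gone for good.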

So what can Apple do? They can provide a signed version of the iOS software that overrides the protections in iOS against such a brute force attack. Basically, it would allow an unlimited number of tries and remove the pause between attempts. That would let a government computer hammer away, attempt after attempt, until it happens upon the right passcode and the phone unlocks itself.

This leads to the question, “If Apple could do this, why don’t they?” The answer is the heart of the matter, and a major issue in the field of personal privacy.

Apple could provide a software update to the government, which could be applied via the Lightning port (just as you can do with the official software updates if you don’t want them to download right to the phone). They can create an update that allows the government to do what they’re trying to do. The problem is that doing so unleashes a genie no one wants to see let loose. Putting that kind of software into even the US government’s hands means it is out there. Just as the government could use it to brute-force a phone open when they have a valid warrant, anyone else who got their hands on the code could do the exact same thing with nothing standing in their way. Hackers the world over would quickly be able to break the phone’s security simply by getting the phone in their hands for a long enough period of time.

Basically, this is like the government asking Medeco or Schlage or another lock maker to provide them with the means to create a key that will open every single lock that manufacturer ever made, given enough time and tries at it. While theoretically possible, it won’t be easy to do, and the harm it could do to millions of people would far outweigh the good it could possibly do for this one – albeit truly significant – criminal case. (Hat/Tip to Henry Martinez for that analogy)

Apple believes that this is a step beyond what they can reasonably be expected to do, and that the government’s requested methodology would leave millions of other iPhone users open to being hacked and having their phone data stolen. Once the code exists, someone will figure out how it works and start using it to hack people’s devices in short order. The trade-off is simply not balanced enough to warrant first building and then giving the FBI the altered iOS software update.

Who will win? That’s up to the courts to decide. At this point both sides have valid legal standing and a lot of ground to stand on; but that means both sides could win or lose this one. Don’t be surprised if this goes all the way up to the US Supreme Court, as both sides are apparently going to fight this to the bitter end. Personal privacy and protection for everyone not involved in the crime versus the government’s lawful ability to gain evidence in a criminal case is not something that will be decided quickly or easily – but it is of vital importance to every one of us. Can the government demand something that could so easily be used for both their good and everyone else’s evil? Can Apple refuse to provide a software solution that is within their ability just because of the potential for it to be used maliciously? Unfortunately, current law has not quite kept up with the world of technology as it speeds ahead of lawmakers.

Either way, Apple is bent on fighting this as much and as long as they can, and either way, I think that shows a remarkable level of responsibility and care from them. I expect the government will also fight to the last breath, because the matter is critical to their ability to fight terrorism and other criminal activity. Both sides are right, both sides are wrong, and I feel horrible for the judges who are going to have to figure this one out.

Locked Down Internet of Things and the Danger it Poses

Photo Credit: PicJumbo
The “Internet of Things” is a real thing these days, with everything from toothbrushes to refrigerators now connected to wifi networks and spewing forth data to so many locations it’s hard to track. But a few disturbing trends in the IoT world definitely should give us all pause for thought.

First, many of these IoT devices are severely locked down. They can’t be upgraded, updated, or patched easily by the end-user, and sometimes not at all. Granted, end-users are famous for not keeping digital things updated to begin with, but not even having the option is a disturbing turn of events. When devices cannot be updated or reconfigured by the end-user, it leads to issues both during the product’s support lifetime and after it ends.

During the active support lifetime of the device, the end user cannot verify that updates work properly, roll back updates that fail or create new issues, or control what information the device itself keeps and sends. Manufacturers have many reasons for locking things down, such as assuring themselves a steady stream of information they can market to others. None of those reasons justifies endangering the security of a home network, however. Malicious code that infects your connected refrigerator and cannot be removed until the manufacturer sends out an update is just not an acceptable situation.

After the lifetime of the product, even more problems arise. Manufacturers abandon products all the time, leaving them without any updates at all going forward, even though there are just as many people as ever who would like to see if they can break in and wreak havoc. Thankfully, many products continue to live on well past that point, taken over by community efforts and open-source projects that extend the lifetime of the codebase well beyond the lifetime of the 1st-party support. Locking down these devices so they can only ever be changed by the 1st-party developers can make continued community support impossible, blocking this ongoing benefit.

Secondly, locking down these devices also means that end-users become unable to see what communication is going on between those devices and the world at large. Data leakage will occur, and not being able to limit the data available to leak is a dangerous thing.
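You can still watch a device from the outside, even if you can’t change what it sends. As a minimal sketch (assuming Python with the scapy package, a hypothetical device address, and the privileges packet capture requires on your own network), this is the kind of visibility an end-user can claw back:

```python
# Illustrative sketch: observe where one IoT device sends traffic, using scapy.
# The device IP is hypothetical; run on your own network with capture privileges.
from scapy.all import IP, sniff

FRIDGE_IP = "192.168.1.50"  # assumed address of the device being watched

def show_destination(packet):
    """Print the destination of each packet the device sends out."""
    if IP in packet and packet[IP].src == FRIDGE_IP:
        print(f"{packet[IP].src} -> {packet[IP].dst}  ({len(packet)} bytes)")

# Capture 100 packets involving the device and report where they are headed.
sniff(filter=f"host {FRIDGE_IP}", prn=show_destination, count=100)
```

That only tells you where the data goes, not what it contains or how to stop it; actually limiting the leak still requires the controls the vendor has locked away.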

I’m not saying that all IoT devices need to be totally open and open-sourced. What I do believe, however, is that the consumer should have the right and the ability to say what data will go where, and when. This can be done with end-user accessible settings and controls, the ability to apply patches and roll them back on demand, and the ability to keep unknown software off the device to begin with. Even Apple, famous for their closed ecosystem, gives users the ability to shut off things they’d prefer not to use. Yes, it will mean changing how we typically interact with these kinds of devices, but making them IoT has already done that, so it won’t exactly be a whole new paradigm. Support vendors who give the end-user enough control to keep themselves safe, and reject vendors that insist on locking everyone out without good reason.

Keep that in mind, when next you consider an internet connected fridge.

Be wary of sync services

Photo Credit: PicJumbo-Viktor Hanacek
Recently I looked into various task-management apps that will work across my Mac and mobiles (iPhone and iPad). Of course, that means I also need to synchronize data across those platforms, so that tasks created or completed on one device show up as such on all the other devices. While that’s not generally an issue for most of the major software vendors, it does bring up some important concerns that most of those same developers have completely ignored.

Syncing data between devices requires sending that information outside of your network to a server, where it can then be accessed by the other devices and compared, added, or removed. All the major vendors of task software encrypt the transmission to and from those servers with SSL, a reasonable security practice. But nearly none of them encrypt the data at rest. This means they have ensured that no one (or nearly no one, at any rate) can view the data in flight, but anyone who compromises security at the server can see all the data in plain, readable form.
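For contrast, here is a minimal sketch of what “encrypted at rest” can look like from the client side. It assumes Python and the third-party cryptography package, and the key handling is deliberately simplified; the point is only that the server stores ciphertext, never the plaintext tasks:

```python
# Illustrative sketch: encrypt task data on the client before syncing,
# so the sync server only ever stores ciphertext.
import json
from cryptography.fernet import Fernet

# In practice the key would be derived from a user secret and kept off the
# server entirely; generating it here keeps the example self-contained.
key = Fernet.generate_key()
cipher = Fernet(key)

task = {"title": "Renew passport", "due": "2016-03-15", "done": False}

# Encrypt before upload: this ciphertext is what sits on the server at rest.
ciphertext = cipher.encrypt(json.dumps(task).encode("utf-8"))

# Any other device holding the key can decrypt after download.
restored = json.loads(cipher.decrypt(ciphertext).decode("utf-8"))
assert restored == task
```

With that arrangement, a server breach exposes only ciphertext, but the vendor has to have designed for it in the first place.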

As we’ve seen from the recent spate of attacks and hacks against a large number of companies, servers are compromised on an unfortunately regular basis. Having the data sit unencrypted on those servers means that your info (which might include personally identifiable information) will eventually be stolen, whenever an attacker decides to focus their attention on the software vendor in question. Let me repeat: this is not a matter of “if,” it is a matter of “when” this is going to occur.

Luckily, a few of the vendors – such as Appigo with their ToDo app – do allow you to set up your own sync using services such as Dropbox or your own WebDAV server, which can be encrypted at rest. Using Dropbox isn’t perfect by any stretch; they’ve shown that their security can be compromised, typically via attacks through third-party connectivity. However, they do at least attempt to keep your data safe, and that’s a far cry better than no encryption at all. Setting up your own secure WebDAV server is tricky, and not for the technological newbie, but it is another option to keep your data safe.

So, when syncing your data with any app, make sure the data is encrypted both in-flight and at-rest. “Secure Sync” may simply mean the data is transmitted securely, and it’s up to you to find out if the data is also stored securely. You may find, and in many cases will find, that the data is stored in a format that leaves you wide open.