Security

No, I will not disable my ad blocker.

Anyone who uses an ad blocker has no doubt seen the “placeholder” images or text that replace where the advertisement would be on popular websites. These placeholders implore us to turn off our ad blockers to give the site vital revenue, to not starve the website owners of cash. Lately, there have been even more aggressive methods to ask us to turn blocking off – pop-up or interstitial notifications to shut the blocker off, or even full-page-blocking notifications that keep you from seeing anything if an ad blocker is on.

I do not, in principle, have an issue with these notifications. I think companies and individuals who support their sites with advertising have the right to ask us to turn off the tech that keeps them from getting paid and paying their bills. However, I must regretfully inform these sites that I will not be turning off my ad blocking software, and here is why:

Ad networks (the 3rd-party companies that serve up the ads found on most websites these days) have become nothing more than the latest vector for delivering malware of many forms. In the past, an attacker had to compromise the site itself through security holes or brute force in order to turn that site into an attack vector for infecting visitors with various nasty software. Ad networks have allowed attackers to do many times the damage with a fraction of the effort.

Here’s how it works: The attacker buys ad space with a network that allows Javascript or other active-code ad serving. The technology generally allows advertisers to show rich-media ads (which are annoying and should be removed from the internet anyway, but I digress). Rich-media ads have video, audio, and other eye-catching stuff built in, but they require that the website displaying them allow their scripts to run. They also require that the browser allow the scripts to run – which is exactly what ad blockers prevent. For a legitimate advertiser and the website owner, this means better conversion rates (the rate at which viewers click on the ad to see the product or service being sold). As a result, rich-media ads have become insanely popular with advertisers, and supporting them has become a requirement for most ad networks.

An attacker can create an “advertisement” with scripting that delivers the payload of their choice. This could be malware or spyware that the user must accept and run, malware or spyware that requires no user interaction (limiting what it can attack, but making it much more likely to execute), or – more recently – crypto-currency mining scripts that chew up CPU cycles and can theoretically damage a computer through overheating. Since the ad network has no way to tell that the malicious ad is any different from any other rich-media ad (because networks don’t bother to police their customers), the ad network serves up the bad ad to hundreds of websites and infects thousands of end-users.
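The blocking side is conceptually simple. Here’s a minimal sketch in Python of the core idea – refuse any request whose host matches a known ad-network domain. The domains below are made-up placeholders; real blockers like uBlock Origin run inside the browser and use far larger, community-maintained filter lists.

```python
# Minimal sketch of an ad blocker's request filter: block any request whose
# host (or any parent domain of it) appears in a filter list.
# The blocklist entries here are hypothetical, not from a real filter list.
from urllib.parse import urlparse

BLOCKLIST = {"ads.example-network.com", "tracker.example-cdn.net"}

def should_block(request_url: str) -> bool:
    """Return True if the URL's host or any parent domain is blocklisted."""
    host = urlparse(request_url).hostname or ""
    parts = host.split(".")
    # Check "a.b.c.com", then "b.c.com", then "c.com", etc.
    return any(".".join(parts[i:]) in BLOCKLIST for i in range(len(parts)))

print(should_block("https://ads.example-network.com/serve.js"))   # True
print(should_block("https://cdn.ads.example-network.com/x.js"))   # True (subdomain)
print(should_block("https://example.com/article.html"))           # False
```

With the filter refusing the request outright, the rich-media script never reaches the browser, which is exactly why blocked ads can’t deliver a payload.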

In short, network advertising on websites has become the new way for attackers to deliver their malware.

This “malvertising” has become so prevalent that even giant sites like Showtime have been attacked via malware in ads posted on their sites. The ad networks do nearly nothing to stop the problem, and the site owners cannot stop it short of removing the ad networks’ code from their sites.

So, until such time as ad networks begin to properly police the ads they put up on network sites, or until you – the site owner – remove that code and post only ads you know to be non-malicious, I’m not turning off the ad blocker. I’m sorry that this impacts you, truly I am. However, the situation has reached a point where no site that runs network ads is safe unless that code is blocked from ever running.

PS: I do indeed subscribe to websites that offer quality content without ads, either through Patreon or directly with the site itself. I know that this limits how many sites I can possibly support, but for those that offer great content and don’t attempt to infect my system with their lax code policies, I’m more than willing to put my money where my mouth is.

Outlook for iOS just plain sucks

Recently, I joined a new company that uses Office365 – Microsoft’s cloud-forward platform that they believe will eventually replace the traditional licensing models for the Microsoft Office Suite, Exchange Server, SharePoint, and several other products. The idea is good, as it opened the door to Microsoft finally bringing its signature office applications (Word, PowerPoint, Outlook, etc.) to more platforms, like iOS devices. Word, Excel, and several others made the jump to my iPhone rather nicely. I’m pleasantly surprised at how well they translated from the big screen on my desktop to the small screen on my mobile devices.

Outlook fell out of the WTF tree and smacked into every single dumb-ass branch on the way down.

First, let’s talk about the interface. On a computer, with a keyboard and mouse, the interface for Outlook for PC and Mac is manageable and usable. I’m not a huge fan of the “put all the menu buttons in one tiny corner” school of UX design, but with keyboard shortcuts it’s a very workable solution for maximizing screen real-estate. Even in Outlook for Mac – long the whipping boy for how not to port an application from Windows – the interface is clean, effective, and works. On iOS, the interface is horrible. There are no keyboard shortcuts to jump from mail to calendar to contacts, and some features, like the task list, are just plain missing. To be fair, tasks sync to the Reminders app in iOS – but only if you also set up your Outlook/Exchange account as an internet account on the phone.

All right, I know what you’re all saying, “It’s a scaled down version for just the essential stuff like email!” Great, let’s look at email:

No font sizing. So basically you’re going to see a set amount of info on each screen, no exceptions. Got an iPhone SE and need a bigger scale to avoid going blind? Too bad. On an iPad Pro and want to shrink stuff down so you can get more on the screen? Sucks to be you. To clarify, I am not talking about the fonts IN the emails – Outlook has little to no control over that if the email has its own formatting. I’m talking about the interface itself and the message previews in your mailbox lists.

No red squiggles. In nearly every other iOS application, when you mis-spell a word that autocorrect doesn’t murder for you (AUTOCORRECT SICKS!), you get a helpful visual indicator that something just ain’t right – the infamous red squiggle underline. It happens in the native mail app, in Airmail for iOS, and honestly in every other 3rd-Party email app I’ve tried since iOS 4 was a thing. Outlook can’t get it to happen – or in the few instances where it does work, it almost immediately stops working again. I’ve changed my keyboard settings, fiddled with autocorrect settings, etc. Nothing gets it to work reliably. Now I do a quick proof-read of emails before I hit send whenever possible because… well… AUTOCORRECT SICKS! But sometimes it’s easy to miss a spelling errer, and the red squiggly lines (like the one that’s glaring at me from that purposeful mistake in the last sentence) are extremely vital to not letting them get sent out.

No S/MIME support. What were they thinking? Outlook on the desktop has supported S/MIME in one form or another since Office 98, and done it reasonably well. Even Outlook for Mac has supported the use of signing certificates since it changed over from Entourage years ago. The native mail app supports S/MIME just fine, so the phone itself is capable of it; and other 3rd-Party mail apps seem to offer at least basic support for it, so it’s not an “Apple locked this feature away for their own use only” issue. But, alas, Outlook for iOS cannot use certificates to sign or encrypt emails, or even recognize that one is in use in an incoming email.

Not all bad news

There are some good points to Outlook for iOS as well. It’s not all doom and gloom. While the sizing is an issue, the interface is at least intuitive enough that I didn’t have to go searching through a knowledge base to figure out where things were. Not having the keyboard shortcuts of a Mac or PC is annoying, but not something that will completely hobble you. Having email and calendars in one app is much simpler than downloading the .ics attachment, opening it in the Calendar app, and finally accepting it (or, more often than not, finding out there is a conflict and starting the process over with the updated invite). Direct interoperability with other Office for iOS apps right out of the box is also a strong feature in Outlook’s favor. And having the licensing included in my Office365 subscription – which is handled by the iTunes App Store natively – makes things a lot simpler to manage.

I hope that Microsoft hammers out the kinks in the system. I would personally love to use Outlook for iOS for all of my work-related email, as I always keep work email and personal mail in different apps to avoid confusion and mistakes between accounts. For now, though, I have to stick with Airmail for iOS. It doesn’t support S/MIME either, but it can talk to Exchange online and does everything else I need except calendars. For those who are interested, I went with BusyCal for iOS on that front.

Outlook for iOS is a flawed, half-baked product. It shouldn’t be part of the Office for iOS suite, and only serves to drag down what is otherwise a great set of apps that we’ve all been waiting for since Microsoft started looking at mobile devices. Get it together, Microsoft, and give me what I’ve had on the desktop and in other 3rd-Party email apps for years now!

Bailing S3 Buckets

Headlines are breaking out all over the last few weeks about high-profile data breaches caused by company databases and other information being stored in public Amazon Web Services (AWS) Simple Storage Service (S3) buckets. See here and here for two examples. The question I get most often around these breach notices is, “Why does anyone leave these buckets as public, and isn’t that AWS’s fault?” The answer is straightforward, but comes as a bit of a shock to many – even many who work with AWS every day.

A quick refresher on S3

For those not familiar with S3 or what it does: basically, S3 is an online file system of a very defined type. S3 is a cloud-based Object Storage platform. Object Storage is designed to hold unstructured collections of data, which typically are written once and read often, are overwritten in their entirety when changed, and are not time-dependent. The last one simply means that having multiple copies in multiple locations doesn’t require that they be synchronized in real-time; rather, they can be “eventually consistent” without breaking whatever you’re doing with that data.

S3 organizes these objects into “buckets” – the loose equivalent of a folder on more common file systems like NTFS or EXT. Buckets contain objects (and key prefixes that act like sub-folders), and both buckets and the objects within them have security permissions associated with them that determine who can see the bucket, who can list its contents, who can write to the bucket, and who can write to the objects. These permissions are set by S3 administrators, and access can be delegated to other S3 users from the admin’s organization or to other organizations/people that have authorized AWS credentials and API keys.

It’s not AWS’s fault

Let’s begin with the second half of the question. These breaches are not a failure of AWS’s security systems or of the S3 platform itself. You see, S3 buckets are *not* set to public by default. An administrator must purposely set both the bucket’s permissions and the permissions of its objects to public – or use scripting and/or policy to make that happen. “Out of the box,” so to speak, newly created buckets can only be accessed by the owner of that bucket and those who have been granted at least read permissions on it by the owner. Since accessing the bucket requires those permissions and/or the API keys associated with them, default buckets are buttoned up and not visible to the world as a whole. The process of making a bucket and its objects public is also not a single-step thing. You must normally designate each object as public – a relatively simple operation, but time-consuming, as it has to be done over and over. Luckily, AWS has a robust API, and many different programming languages have libraries geared toward leveraging it. This means that an administrator of a bucket can run a script that turns on the public attribute of everything within a bucket – but it still must be done as a deliberate and purposeful act.
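To illustrate just how deliberate that act is, here’s a sketch using boto3, AWS’s Python SDK; the bucket name is a hypothetical placeholder. The first function builds the bucket policy an administrator must explicitly attach to make every object world-readable. The second shows the guard rail in the other direction – the Block Public Access setting AWS added after a wave of these breaches, which overrides any public ACL or policy on the bucket.

```python
# Sketch (boto3) of the two deliberate acts around bucket exposure:
# attaching a public-read policy, and locking the bucket down so no
# public policy/ACL can take effect. Bucket names are placeholders.

def public_read_policy(bucket_name: str) -> dict:
    """The policy an admin must *explicitly* attach to make a bucket's
    objects readable by anyone on the internet."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",                 # "*" literally means everyone
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{bucket_name}/*",
        }],
    }

def lock_down(bucket_name: str) -> None:
    """Turn on S3 Block Public Access for the bucket (requires credentials)."""
    import boto3  # imported here so the policy helper above works standalone
    boto3.client("s3").put_public_access_block(
        Bucket=bucket_name,
        PublicAccessBlockConfiguration={
            "BlockPublicAcls": True,
            "IgnorePublicAcls": True,
            "BlockPublicPolicy": True,
            "RestrictPublicBuckets": True,
        },
    )
```

Nothing in a fresh bucket resembles `public_read_policy` – an administrator has to build and attach it on purpose, which is the whole point.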

So why make them public at all?

The first part of the question, and the most difficult to understand in many of these cases we’ve seen recently. S3 is designed to allow for the sharing of object data; either in the form of static content for websites and streaming services (think Netflix), or sharing of information between components of a cloud-based application (Box and other file sharing systems). In these instances, making the content of a bucket public (or at least visible to all users of the service) is a requirement – otherwise no one would be able to see anything or share anything. So leveraging a script to make anything that goes into a specific bucket public is not, in itself, an incorrect use of S3 and related technologies.

No, the issue here is that buckets are made public as a matter of convenience or by mistake when the data they contain should *not* be visible to the outside world. Since a non-public bucket requires explicit permissions for each and every user (be it direct end-user access or API access), some administrators set buckets to public to make it easier to use the objects in the bucket across teams or business units. This is a huge problem, as “public” means exactly that – anyone can see and access that data, whether or not they work for your organization.

There’s also the potential for mistakes to be made. Instead of making only certain objects in a bucket public, the administrator accidentally makes ALL objects public. They might also accidentally put non-public data in a public bucket that has a policy making objects within it visible as well. In both these cases the making of the objects public is a mistake, but the end result is the same – everyone can see the data in its entirety.

It’s important to also point out that the data from these breaches was uploaded to these public buckets in unencrypted form. There are lots of reasons for this, too; but encrypting data not intended for public consumption is a good design to implement – especially if you’re putting that data in the cloud. This way, even if the data is accidentally put in a public bucket, the bad actors who steal it are less likely to be able to use or sell it. Encryption isn’t foolproof and should never be used as an alternative to making sure you’re not putting sensitive information into a public bucket, but it can be a good safety catch should accidents happen.
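A sketch of that safety catch, using the Python cryptography package’s Fernet (authenticated symmetric encryption); the record shown is fake example data, and in practice the key would live in a secrets manager, never in the bucket:

```python
# Sketch: encrypt data client-side BEFORE upload, so an accidentally-public
# bucket exposes only ciphertext. Uses the "cryptography" package's Fernet.
from cryptography.fernet import Fernet

def encrypt_for_upload(plaintext: bytes, key: bytes) -> bytes:
    return Fernet(key).encrypt(plaintext)

def decrypt_after_download(ciphertext: bytes, key: bytes) -> bytes:
    return Fernet(key).decrypt(ciphertext)

key = Fernet.generate_key()    # keep this OUT of the bucket (e.g. KMS/vault)
record = b"ssn=123-45-6789"    # fake example data
blob = encrypt_for_upload(record, key)

assert blob != record                               # ciphertext is unreadable
assert decrypt_after_download(blob, key) == record  # round-trips with the key
```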

No matter if the buckets were made public due to operator error or for the sake of short-sighted convenience, the fact that the buckets and their objects were made public is the prime reason for the breaches that have happened. AWS S3 sets buckets as private by default, meaning that these companies had the opportunity to just do nothing and protect the data, but for whatever reason they took the active steps required to break down the walls of security. The lesson here is to be very careful with any sensitive data that you put in a public cloud. Double-check any changes you make to security settings, limit access only to necessary users and programs by credentials and API keys, and encrypt sensitive data before uploading. Object Stores are not traditional file systems, but they still contain data that bad actors will want to get their hands on.

What is Ransomware, and how do I stop it?

I get asked this question a lot by folks from all over the tech industry, and by non-tech people just as often. Ransomware is not new, but several extremely high-profile attacks (like the “NotPetya” attack in Europe earlier in 2017) have put the topic back on the front burner of most peoples’ minds. With that in mind, let’s take a look at how to answer the question, “What is ransomware, and how do I stop it?”

What is it?

Ransomware is a form of malware – software that is not wanted on your computer and does something detrimental to your machine or the data it holds. This particular form of malware is nastier than most, however. While many viruses, trojans, and other types of malware will delete data, ransomware encrypts the data on your disk, meaning the data is still there but totally unusable by you until you decrypt it. The creator of the ransomware is effectively holding your data hostage for money.

Tech Note – Encryption:

Encryption is the process of manipulating the binary data of your files using a cipher of some form to make the data useless to anyone who cannot decrypt it with the appropriate key. Much like converting orders into code before sending them in a war zone, you can encrypt data to make it useless to anyone who doesn’t have the key. This technology lets us safely bank online, save data in the cloud, etc., and is not inherently a bad thing to have.
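As a toy illustration of the concept only (real ransomware uses strong ciphers like AES or RSA, never anything this weak), here’s a classic XOR cipher in Python – the same operation scrambles and unscrambles the data, depending entirely on having the key:

```python
# Toy illustration of a cipher -- NOT real-world cryptography.
# XOR each byte of the data with the repeating bytes of a key:
# without the key the result is gibberish; with it, the original returns.
from itertools import cycle

def xor_cipher(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

original = b"Quarterly sales figures"
key = b"s3cret"
scrambled = xor_cipher(original, key)

print(scrambled != original)        # True: unreadable without the key
print(xor_cipher(scrambled, key))   # b'Quarterly sales figures'
```

Ransomware performs the equivalent of the first call on your files and then sells you the second call.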

Ransomware arrives as an email attachment, a “drive-by” download from a website (where you visit a website and are prompted to download an executable file), and sometimes it acts as a true worm which infects any computers near one which has fallen victim to the malicious code. Once the infection takes hold on a computer, the malware will look for certain types of files (most often documents, spreadsheets, database files, text files, and photos); and will then encrypt these files in such a way that they are unusable by anyone until the malware author provides you with the decryption key.

The malware creator will offer to send you the key if you pay them the amount of money they are demanding – typically via the crypto-currency Bitcoin. They’ll also provide handy information on how to obtain Bitcoin, and the current exchange rates between the Bitcoin currency and your local currency. These malware authors are of course not going to provide just the helpful information. Along with that info comes a warning that if you don’t pay them by a certain date, your data will become permanently un-decryptable and lost forever. You seem to have only two choices: Pay the ransom or lose your data.

What do you do?

First, don’t panic. The malware creators of the world rely on people getting freaked out and doing anything they say in order to make the problem go away. Take a deep breath, step away from the computer for a moment, and then let’s deal with things.

1 – DO NOT PAY THE RANSOM! I can’t stress this enough, and there are very good reasons why you should never pay the ransom no matter how tempting it might be. First, there is at least a very good chance that the malware creators won’t ever give you the decryption key. It’s depressingly common for malware authors to use ransomware as a tool to steal money; and once the malware is known about, internet service providers and security researchers take steps to remove the ability for them to actually get paid or send you the key anyway. Secondly, negotiating with bad actors only results in more bad actors. If an author of ransomware gets a ton of money from their victims, then other authors will see the money available and write more ransomware to get in on the act.

2 – Check online to see if the ransomware has already been broken. Especially for the older variants of ransomware, there is a chance a security research group has figured out what the decryption key is. Check with your anti-virus/anti-malware provider (Symantec, Sophos, etc.) and legitimate tech sites to see if the key has already been found and made available; and to get instructions on how to decrypt your files with it.

3 – If a decryption key isn’t available, then you will need to restore your data from backups AFTER you clean the malware off your system. Check with your anti-virus/anti-malware vendor or your company’s IT department to find out how to get your system cleaned up; and with your backup provider or IT team to get the last known good version of your files back.

How do we stop it?

Stopping ransomware is not easy, as a successful attack can net the malware authors quite a bit of money. New variants are popping up often, and some of them can spread from machine to machine once the first few machines are infected via email attachments, etc. So how can you help stop ransomware and make it less profitable for the authors?

1 – DO NOT PAY THE RANSOM! Seriously, this cannot be said often enough. Each time someone pays the ransom, another author sees that they can make money by creating their own ransomware and spreading it around the internet. The first step in stopping the spread of this malware is to make sure there is nothing for the criminals who create it to gain.

2 – Keep your Operating System (OS), anti-virus, and anti-malware software up to date. No matter what OS you use (Windows, Mac, Linux, etc.) you are susceptible to malware of various kinds – including ransomware. Make sure you are regularly updating any desktops, laptops, tablets, and smartphones with OS updates and app updates as they are available. Even if you don’t feel comfortable having the OS keep itself updated automatically, be sure you are manually updating on a weekly basis at least. If you don’t have an anti-malware tool (such as those from Sophos, Computer Associates, etc.), then go download one and get it installed. Keep it updated – either via the tool’s own auto-update feature or just manually checking for updates at least daily. While anti-malware tools cannot catch every single variant of every malware package, they can catch a large number of them and keep you safer than not having one at all.

3 – Back up regularly. Use a tool that stores multiple versions of your files when they change – like Carbonite (disclosure: I’m a Carbonite subscriber and used to work for one of their family of products) or other such tools. This way, if you do get hit with ransomware, you can clean your system and restore last-known-good versions of files that were lost.

4 – Practice common sense internet safety. Don’t open attachments in email messages unless you know exactly what they are, who sent them, AND that they are legitimate. If you’re not sure of all three things, don’t open it – get confirmation from the sender first. Don’t click links in email. Instead, go to the website in question manually in your web browser and then navigate to the information you need. NEVER accept or open any files that automatically download when you load a website. If you didn’t click on it, don’t accept it. Along with that, always go to the vendor page to get new software. For example, if a site says you need a new version of Flash Player, then go to http://get.adobe.com/flashplayer and check for yourself instead of clicking on the link or button.
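The versioned-backup idea in step 3 can be sketched in a few lines of Python – a crude stand-in for what tools like Carbonite do continuously and off-site; the paths below are hypothetical:

```python
# Crude sketch of versioned backup: each pass keeps a timestamped copy instead
# of overwriting, so a pre-infection version survives even if ransomware
# encrypts the current file. Real tools do this continuously and off-site.
import shutil
import time
from pathlib import Path

def backup(src: Path, backup_dir: Path) -> Path:
    """Copy src into backup_dir under a timestamped name; return the copy."""
    backup_dir.mkdir(parents=True, exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    dest = backup_dir / f"{src.name}.{stamp}"
    shutil.copy2(src, dest)   # copy contents and metadata
    return dest
```

If ransomware scrambles the live file, the restore step is simply copying the newest clean timestamped version back, after the machine is cleaned.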

Protect yourself from ransomware as best as you can by following common-sense internet safety rules, and keeping your system backed up. Never pay the criminals who are holding your data for ransom. Finally, spread the word that ransomware can be stopped if we all work together and take the right precautions!

The Real Story Behind the Apple Privacy Statement

[Editor’s note: Neither the author nor anyone associated with this blog is a lawyer of any kind. This blog is not to be taken for legal advice under any circumstances. If you have a personal privacy question of law, consult a trained and licensed attorney.]

There’s been a LOT of talk about how Apple is standing up to the Federal Government (and specifically the FBI) in the news, and it’s important to realize why the stance Apple is taking matters. This is not a blanket statement against the government cracking encryption (which is a good stance to take, but not what is at stake here).

The major issue is that what many people (even some IT Professionals) think is happening is not what is actually happening.

Basically any iPhone or iPad running iOS 8 and up produces a situation where the government cannot easily get to the data stored on a phone which has been locked with a 6 or more character passcode and disconnected from iCloud. The reasons for this are complex and highly technical, but the basic idea is that not even Apple can reverse the process of a phone locked in such a way. Mostly, this is because the phone’s own internal identification data is combined with the passcode to create a hash – a mathematical representation of the two values that makes up the key to unlock the encryption. Put in your passcode correctly, the mathematical equation output matches what the phone is expecting, and the phone unlocks. Put in the wrong passcode, and there’s no match, and the phone stays locked tight. Put in the wrong passcode enough times, and the phone forgets the key entirely, essentially permanently encrypting all the data – with the same impact as erasing all of it as far as the government is concerned.
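A rough sketch of that key-derivation idea – an illustration of the concept, not Apple’s actual implementation (Apple’s version runs inside dedicated hardware) – using only Python’s standard library:

```python
# Concept sketch: the encryption key is derived from BOTH the passcode and a
# device-unique secret, so the key is never stored anywhere -- it only exists
# in the moment the right passcode is entered on the right hardware.
import hashlib

DEVICE_UID = bytes.fromhex("a1b2c3d4e5f60718")  # hypothetical per-device secret

def derive_key(passcode: str) -> bytes:
    # PBKDF2's many iterations also make each guess deliberately slow.
    return hashlib.pbkdf2_hmac("sha256", passcode.encode(), DEVICE_UID, 100_000)

right = derive_key("123456")
assert derive_key("123456") == right   # correct passcode -> matching key
assert derive_key("123457") != right   # wrong passcode -> no match, stays locked
```

Because the device secret is baked into the derivation, even the correct passcode tried on different hardware yields a different, useless key.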

In this case, a phone that was in the possession of one of the San Bernardino shooters has been locked with at least a 6 character passcode, and was disconnected from iCloud about a month before the shooting. That means that the government has 10 tries to get the code, or the phone irreversibly loses the encryption key, rendering all data sitting on the phone pretty much unreadable forever.

Here’s where things get tricky.

Apple is not saying they are refusing to unlock the phone for the FBI, or that they refuse to give the government anything Apple has access to directly. This is a common misconception widely reported by the media, and is flat out wrong. Apple *cannot* unlock the phone. It’s not physically or digitally possible for them to do it without changing the codebase that iOS 9 (which is on the phone) uses. Apple *can* give – and has already given – the government anything stored in iCloud. Apple has done this before when there is a valid warrant for that data, and it’s stored by Apple’s encryption, so they can reverse it and provide the info.

The issue here is that the shooter either broke iCloud backup, or manually turned it off, about a month before the shooting. That means that the majority of the information the government wants is located – and is *only* located – on the phone. Since Apple cannot reverse the locking mechanism of the phone, they do not have access to that information and can’t hand it over to the government even if they wanted to.

What Apple can do – and is refusing to do – is give the government a way to perform what is known as a “brute force” attack against the phone. A brute force attack is literally a person or computer trying combination after combination until they hit the right passcode. Normally, each try at the password takes a tiny amount of time to process, and iOS adds a tiny amount of time to that as a measure against exactly this kind of attack. To a user this isn’t an issue, as a human entering a code won’t even notice it; but a brute force attack requires thousands of attempts to be processed automatically by a computer, and those tiny amounts of time add up to a LOT of extra time at that scale. The second – and more pressing – issue is that after 10 tries, the phone’s data becomes permanently un-decryptable. Ten tries is nowhere near enough to accomplish a brute force attack, and based on what the government is saying, they’re around try 8 right now with no success.
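Some back-of-the-envelope math shows why those two protections matter. The passcode space and per-attempt delay below are assumptions for illustration, not Apple’s exact figures:

```python
# Back-of-the-envelope: why a per-attempt delay plus a 10-try limit defeats
# brute force. Figures are assumed for illustration, not Apple's actual ones.
passcode_space = 10 ** 6    # a 6-digit numeric passcode: 1,000,000 codes
delay_per_try = 0.08        # assume ~80 ms of enforced delay per attempt

worst_case_hours = passcode_space * delay_per_try / 3600
print(f"{worst_case_hours:.1f} hours to try every code")   # 22.2 hours

# ...and with only 10 attempts allowed before the key is wiped:
print(f"chance of guessing within 10 tries: {10 / passcode_space:.4%}")
```

Even the modest delay pushes a full sweep into the better part of a day, and the 10-try wipe means an attacker gets a 0.001% shot at it before the data is gone, which is why the FBI wants both protections removed.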

So what can Apple do? They can provide a signed version of the iOS software which overwrites the restrictions in iOS that protect against such a brute force attack. Basically, it would allow someone to make an infinite number of tries and remove the pause between attempts. This would give a government computer the ability to process thousands of attempts until it happens upon the right passcode and the phone unlocks itself.

This leads to the question, “If Apple could do this, why don’t they?” The answer is the heart of the matter, and a major issue in the field of personal privacy.

Apple could provide a software update to the government, which could be applied via the lightning port (just like you can do with the official software updates if you don’t want them to download right to the phone). They can create an update that allows the government to do what they’re trying to do. The problem is that doing so unleashes a genie that no one wants to see let loose. Putting that kind of software into even the US government’s hands means it is out there. In the same way as the government could use it to brute force crack a phone open when they have a valid warrant, anyone else who got their hands on the code could do the exact same thing with nothing standing in their way. Hackers the world over would quickly be able to break the phone’s security simply by physically getting the phone in their hands for a long enough period of time.

Basically, this is like the government asking Medeco or Schlage or another lock maker to provide them with the means to create a key that will open every single lock that manufacturer ever made, given enough time and tries at it. While theoretically possible, it won’t be easy to do, and the harm it could do to millions of people would far outweigh the good it could possibly do for this one – albeit truly significant – criminal case. (Hat/Tip to Henry Martinez for that analogy)

Apple believes that this is a step beyond what they are reasonably expected to do, and the government’s requested methodology would leave millions of other iPhone users open to the potential to be hacked and have their phone data stolen. Once the code exists, someone will figure out how it is done and start using it to hack peoples’ devices in short order. The trade-off is simply not balanced enough to warrant first building and then giving the FBI the altered iOS software update.

Who will win? That’s up to the courts to decide. At this point both sides have valid legal standing and a lot of ground to stand on; but that means both sides could win or lose this one. Don’t be surprised if this goes all the way up to the US Supreme Court, as both sides are apparently going to fight this to the bitter end. Personal privacy and protection for everyone not involved in the crime versus the government’s lawful ability to gain evidence in a criminal case is not something that will be decided quickly or easily – but it is of vital importance to every one of us. Can the government demand something that could so easily be used for both their good and everyone else’s evil? Can Apple refuse to provide a software solution that is within their ability just because of the potential for it to be used maliciously? Unfortunately, current law has not quite kept up with the world of technology as it speeds ahead of lawmakers.

Either way, Apple is bent on fighting this as much and as long as they can, and either way, I think that shows a remarkable level of responsibility and care from them. I expect the government will also fight to the last breath, because the matter is critical to their ability to fight terrorism and other criminal activity. Both sides are right, both sides are wrong, and I feel horrible for the judges that are going to have to figure this one out.

Locked Down Internet of Things and the Danger it Poses

The “Internet of Things” is a real thing these days, with everything from toothbrushes to refrigerators now connected to wifi networks and spewing forth data to so many locations it’s hard to track. But a few disturbing trends in the IoT world definitely should give us all pause for thought.

First, many of these IoT devices are severely locked down. They can’t be upgraded, updated, or patched easily by the end-user, and sometimes not at all. Granted, end-users are famous for not keeping digital things updated to begin with, but not even having the option is a disturbing turn of events. When devices cannot be updated or reconfigured by the end-user, problems arise both during the product’s support lifetime and after it ends.

During the active support lifetime of the device, the end-user cannot verify that updates work properly, roll back updates that fail or create new issues, or control what information is kept and sent by the device itself. Manufacturers have many reasons for locking things down, such as assuring a steady stream of information that they can market to others. None of these reasons should be taken as valid for endangering the security of a home network, however. Malicious code that infects your connected refrigerator and cannot be removed until the manufacturer sends out an update is just not an acceptable situation.

After the lifetime of the product, even more problems arise. Manufacturers abandon products all the time, leaving them without any updates going forward, even though there are just as many people who would like to break in and wreak havoc. Thankfully, many products live on well past that point, taken over by community efforts and open-source projects that extend the codebase well beyond the lifetime of 1st-party support. Locking down these devices so they can only ever be changed by the 1st-party developers can make continued community support impossible, blocking this ongoing benefit.

Secondly, locking down these devices also means that end-users become unable to see what communication is going on between those devices and the world at large. Data leakage will occur, and not being able to limit the data available to leak is a dangerous thing.
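One partial workaround lives outside the device itself: if your home router runs Linux, you can watch and restrict an IoT device’s traffic at the network edge, even when the device gives you no controls at all. A minimal sketch, assuming a hypothetical device address of 192.168.1.50 and a hypothetical vendor endpoint, api.example-vendor.com:

```shell
# On the home router (Linux/iptables). Log everything the device sends out,
# allow only the vendor endpoint it actually needs, and drop the rest.
# The address and hostname below are placeholders for illustration.
iptables -A FORWARD -s 192.168.1.50 -j LOG --log-prefix "IOT-OUT: "
iptables -A FORWARD -s 192.168.1.50 -d api.example-vendor.com -j ACCEPT
iptables -A FORWARD -s 192.168.1.50 -j DROP
```

The LOG rule is non-terminating, so every outbound attempt still lands in the kernel log before being allowed or dropped – meaning you can at least see where your data is headed, even when the device itself won’t tell you. Note that iptables resolves the hostname once, when the rule is inserted.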

I’m not saying that all IoT devices need to be totally open and open-sourced. What I do believe, however, is that the consumer should have the right and the ability to say what data goes where, and when. This can be done with end-user accessible settings and controls, the ability to apply patches and roll them back on demand, and the ability to keep unknown software off the device to begin with. Even Apple, famous for their closed ecosystem, gives users the ability to shut off things they’d prefer not to use. Yes, it will mean changing how we typically interact with these kinds of devices, but making them IoT has already done that, so it won’t exactly be a whole new paradigm. Support vendors who give the end-user enough control to keep themselves safe, and reject vendors that insist on locking everyone out without good reason.

Keep that in mind the next time you consider an internet-connected fridge.

Be wary of sync services

Recently I looked into various task-management apps that will work across my Mac and mobiles (iPhone and iPad). Of course, that means I also need to synchronize data across those platforms, so that tasks created or completed on one device reflect as such on all the other devices. While that’s not generally an issue for most of the major software vendors, it does bring up some important concerns that most of those same developers have completely ignored.

Syncing data between devices requires sending that information outside of your network to a server, where it can then be accessed by the other devices and compared, added, or removed. All the major vendors of task software encrypt the transmission to and from those servers with SSL/TLS, a reasonable security practice. But nearly none of them encrypt the data at rest. This means they have ensured that no one (or nearly no one, at any rate) can view the data in flight, but anyone who compromises the server can see all the data in plain text.

As we’ve seen from the recent spate of attacks and hacks against a large number of companies, servers are compromised on an unfortunately regular basis. Having the data sit unencrypted on those servers means that your info (which might include personally identifiable information) will eventually be stolen once an attacker decides to focus their attentions on the software vendor in question. Let me repeat: this is not a matter of “if,” it is a matter of “when.”

Luckily, a few of the vendors – such as Appigo and their ToDo app – do allow you to set up your own sync using services such as Dropbox or your own WebDAV server, which can be encrypted at rest. Using Dropbox isn’t perfect by any stretch; they’ve shown that their security can be compromised, typically via attack through third-party connectivity. However, they do at least attempt to keep your data safe, and it’s a far cry better than no encryption at all. Setting up your own secure WebDAV server is tricky, and not for the technological newbie, but it is another option to keep your data safe.
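Where a vendor doesn’t offer at-rest encryption, you can sometimes layer it on yourself by encrypting data client-side before it ever enters the synced folder, so the server only ever holds ciphertext. A minimal sketch in Python, assuming the third-party cryptography package; key management (storing and sharing the key between your own devices) is deliberately left out:

```python
# Client-side ("zero knowledge") encryption sketch: the sync server never
# sees the plaintext, only the encrypted token. Assumes the third-party
# "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # keep this secret, and OFF the sync server
f = Fernet(key)

task = b'{"title": "Renew passport", "due": "2016-04-01"}'
token = f.encrypt(task)       # ciphertext safe to upload to Dropbox/WebDAV

# Only a device holding the key can read it back:
assert f.decrypt(token) == task
```

Fernet bundles AES encryption with an authentication tag, so tampering with the stored token is detected on decrypt. The real work in any scheme like this is distributing the key safely to your other devices without it ever touching the server.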

So, when syncing your data with any app, make sure the data is encrypted both in flight and at rest. “Secure Sync” may simply mean the data is transmitted securely, and it’s up to you to find out whether the data is also stored securely. You may find, and in many cases will find, that it is stored in a format that leaves you wide open.