Tech

The Reality of the New Non-Neutral Net

So the FCC has repealed the regulations that mandated that all traffic on the Internet must be treated equally. The telecom/Internet Service Provider industry has touted this as a good thing, as there will now be a “fast lane” for most traffic and a “faster lane” for so-called priority traffic.

The regulations in question are long, wordy, complex, and unfortunately boring as hell. So what does this new non-neutral net mean in the real world? Let’s take a look:

If you are a tech company:
First, unless you’re well established and have rock-solid relationships with bandwidth providers, you’re in trouble. You *will* be paying more to get your traffic prioritized in a world where everything else online is going to drive up latency and bottlenecks. This means more budget for bandwidth for the life of your product line, and that means you need to start lining up additional funding right now. The impact of the regulatory change may take a few months or a few years, but it is indeed coming – start planning.

If you don’t want to pay for prioritization, then be ready to accept the fact that everyone who did pay will get lower latency and faster throughput – especially during peak operational times for your type of application or platform. So for consumer apps, your performance is going to absolutely suck from about 6PM through about 12AM local time for your customers. For business applications, the 9AM to 5PM local time frame is going to be a nightmare for you and your clients.

While non-latency-dependent or bandwidth-light applications won’t have too much of a problem, if you are streaming anything at all, this will impact your bottom line. If you’re starting up a cloud platform (especially IaaS), just give up now.

If you are a consumer:
Get ready for your Internet Service Provider (ISP) and mobile carrier to charge you more. If you are a heavy user of streaming services (Netflix, Amazon Prime, Apple Music, Spotify, and several dozen more), then you’re going to need prioritized service. After all, if everyone else in your neighborhood pays for it and you don’t, all you’re going to see is the “buffering” message or “please wait” audio prompts as their traffic gets to their devices ahead of yours.

ISPs are already charging high-bandwidth users extra, and in a world of streaming video and audio services we’re pretty much all high-bandwidth users now. If you work from home and are constantly on company applications and VPN connections, your bandwidth profile goes even higher. Have a VoIP phone or a micro-cell for your mobile phone? Higher still. Want to use that VPN for personal or business use? You’ll probably have to pay more for that privilege. There is no end to the nickel and diming now available to ISPs that they could have only dreamed of before.

A history lesson:
In our history, we have seen that giving corporations – even non-monopolistic corporations – the ability to pick winners and losers solely through their control of supply doesn’t lead to good things. The punch-card era of IBM is a wonderful example. While anyone could physically produce punch-cards to program and manage IBM accounting machines, only certain vendors were permitted to do so by IBM. Anyone who wanted to enter the market had to be certified by IBM (an expensive proposition) – even though a punch-card is just a stiff piece of paper of certain physical dimensions. Eventually, other technologies gained a toe-hold in the accounting machine market and overcame that restriction – but that took a generation, and many businesses that would have competed with official IBM punch-card vendors went under in the meantime. Since a vendor selling IBM punch-cards had no financial reason to produce cards for other brands of accounting machines, IBM also became a virtual monopoly – no competing machine could get anyone to make its punch-cards. Customers got shafted too, as they had to pay a premium for the officially-certified cards or risk having their service contracts voided. To put that in perspective: if your service contract was cancelled, your accounting machine pretty much stopped working.

What’s the correlation? Well, now any new business that wants high-bandwidth, low-latency throughput will have to pay to receive the blessing of an ISP above and beyond what they’re paying for that same service right now. Based on recent history, any user who wants to get the service as intended will also have to pony up some cash each month, making the actual cost of the new platform or service higher still. This will lead to situations where newer technologies may not even be developed, since it will be fiscally difficult to bring them to market successfully. The inventors won’t have the budget to pay for premium connectivity, and the end-users will be reluctant to get better cable/fibre packages to use them.

Recent innovations will wither and die when these new bandwidth fees and/or restrictions exceed their budgets, making it impossible for them to compete with established players who can more easily afford the fees – either by passing them on to their already sizable user bases, or by simply absorbing them as a cost of doing business. Google will be able to hold power over online video sharing where a newer company like Twitch may not be able to absorb the extra bandwidth costs. Amazon and Azure will ensure they have little to no competition, because any cloud startup will be bankrupted by these premium fees – which would be required for something like Infrastructure as a Service to even function.

Yes, in time, newer bandwidth technologies will be created, and ISPs will find themselves on the same losing end as the old Bell System did when it got shattered. But, ask yourself, how many innovations and new frontiers took decades longer to develop or were entirely lost when “Ma Bell” controlled almost every telephone line in the country? By allowing a very limited number of bandwidth providers to dictate fees at will – with no regulation to keep them in check – we’re quickly approaching the same situation we had with the Bell Network back in the 1980s. Will we need to wait several decades for ISPs to become irrelevant before we’re out of this nightmare, and how much progress will be sacrificed in the meantime?

Our government – in the form of the FCC – has sold us out. We are all going to be poorer in both actual money and in lost innovation and discovery for it.

My Take on the Amazon vs. Google Shenanigans

TL;DR – they’re both being insane and need to stop this crap.

In case you haven’t heard the news, Google (which owns YouTube) is pulling YouTube access from Amazon Echo devices and Fire devices (tablets, set-top and stick streamers, etc.) as of January 1. Some of this has already happened, as most Fire tablets and the Echo Show can no longer show YouTube videos, but after the 1st of the year the rest of the product lines will lose the ability to serve up YouTube content – even though they are Android-based, and Android apps for YouTube are available.

Some backstory:

Amazon is a world-wide powerhouse in online retail and Cloud Services. Google owns most of the information on the Internet and is a major player in Cloud Services. Both are massive – and massively powerful – companies that can set and change the market at will. Both have services that compete with each other directly. Google has its own mobile OS (Android) and a vested interest in online retail – though indirectly, as it sells advertising that leads to retail sites rather than offering a retail shop. Amazon is an online retail superstore, and has a mobile OS of sorts in FireOS – a fork of Android. Over the last couple of years, a feud has developed between them over eyeballs and ownership, and now we’re all paying the price.

The first salvo was Amazon not permitting the Google Play Store (the Android app store) on Fire devices like tablets and set-top streaming boxes. Apps had to be purchased via Amazon’s own app store functionality. Google made it well known that FireOS wasn’t considered Android anymore, but rather a fork that had branched into its own OS entirely. Some time later, Google devices (like Google Home, ChromeCast streaming sticks for TVs, etc.) began to systematically disappear from Amazon shopping venues – while at the same time Amazon was promoting their own devices which served the same purpose. So Echo devices were available for sale but Google Home was not. FireTV set-top and stick streaming devices were still available, but ChromeCast sticks disappeared. Fooling absolutely no one with this strategy, Amazon soon drew the ire of Google, who became less and less willing to put up with Amazon’s tricks.

At around this time, FireOS tablets and other devices were using an Amazon-built YouTube application. Google claimed that this app violated their terms of service by manipulating the way in which YouTube advertising displayed, and blocked the app from functioning with YouTube. Amazon retaliated by creating an app that was just a shell to load the YouTube website – seemingly taking care of the problem. Google, in a move that is controversial at best, objected to the fact that the touch-screen controls used by the new app didn’t fit their standards, and blocked the new app as well. When the Echo Show (an Echo device with a touch screen) debuted, it was quickly blocked from getting access to YouTube videos by Google, continuing the trend.

So which came first? Did Amazon piss off Google by pulling items from their storefront and manipulating how their devices accessed YouTube? Did Google piss off Amazon by developing competing product lines and limiting 3rd-Party access to their services? It’s a hard call to make, as a lot of these things happened in a very short period of time, but the end result is clear to see. YouTube – as of January 1 – will not be accessible on any Amazon device. ChromeCast and other Google-made hardware devices won’t be sold on Amazon.com – even by 3rd-Party sellers. Together, they’re cutting off their collective noses to spite their collective faces, and that doesn’t help anyone.

Amazon – you’re losing money. People will be hesitant to buy FireTV, or tablets, or the Echo Show when they cannot display the most popular video streaming site in the world. This is especially true when other devices like the Roku, AppleTV, and the majority of smart TVs can show both Amazon content and YouTube content. You are hurting your sales and tarnishing your reputation.

Google – you are losing money. There is a large population of people who already own FireTV or Echo Show devices, and aren’t going to buy another device just to watch YouTube. That means fewer eyeballs, and less advertising revenue. It also means fewer people signing up for YouTube Red (the subscription service). The feud is keeping your devices off the most popular online shopping portal in most of the world, and you too are tarnishing your reputation.

Both of you are hurting your own bottom lines, and neither of you can win this in the current market. 3rd-Party devices that neither of you make money from will gain ground, and Apple is going to eventually eat your lunches when they inevitably launch their own voice assistant home device that supports both streaming platforms and doesn’t require directly dealing with either of your independent petty streams of bullshit.

Start working together. Amazon, use the YouTube native interface for touch and web. Show the ads inside of YouTube the way Google wants. Google, face the fact that Amazon sells competing hardware and isn’t going to promote your hardware. Take solace in the fact that you can buy a ChromeCast from a lot of places, and just sit back and rake in the ad revenue from ALL platforms that run YouTube. You don’t have to get along with each other, and can continue sniping at each other until the end of time – just don’t force your end users to make the difficult but inevitable choice to abandon both your platforms for the next hot hardware that comes into the market. Worse yet, don’t put a bad taste in consumers’ mouths when alternatives (like iTunes Video and Xbox Video) exist and could gain market share at your expense if you force users into new behaviors.

Bailing S3 Buckets

Headlines have been breaking out all over the last few weeks about high-profile data breaches caused by company databases and other information being stored in public Amazon Web Services (AWS) Simple Storage Service (S3) buckets. See here and here for two examples. The question I get most often around these breach notices is, “Why does anyone leave these buckets public, and isn’t that AWS’s fault?” The answer is straightforward, but comes as a bit of a shock to many – even many who work with AWS every day.

A quick refresher on S3

For those not familiar with S3 or what it does: S3 is a cloud-based Object Storage platform – an online file system of a very particular type. Object Storage is designed to hold unstructured collections of data, which typically are written once and read often, are overwritten in their entirety when changed, and are not time-dependent. The last point simply means that having multiple copies in multiple locations doesn’t require that they be synchronized in real-time; they can instead be “eventually consistent” without breaking whatever you’re doing with that data.

S3 organizes these objects into “buckets” – which would be the loose equivalent of a file system folder on more common operating system file systems like NTFS or EXT. Buckets contain sub-buckets and objects alike, and each level of the bucket hierarchy has security permissions associated with it that determine who can see the bucket, who can see the contents of the bucket, who can write to the bucket, and who can write to the objects. These permissions are set by S3 administrators, and can be delegated to other S3 users from the admin’s organization or other organizations/people that have authorized AWS credentials and API keys.
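To make that concrete, here is a minimal sketch using the standard AWS CLI (the bucket and object names are hypothetical) of how an administrator can inspect those permissions:

aws s3api get-bucket-acl --bucket example-bucket
aws s3api get-object-acl --bucket example-bucket --key reports/q3-summary.csv

Each command returns the list of grants attached to the bucket or the object; a grant to the global “AllUsers” group is what makes something readable by the entire world.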

It’s not AWS’s fault

Let’s begin with the second half of the question. These breaches are not a failure of AWS’s security systems or of the S3 platform itself. You see, S3 buckets are *not* set to public by default. An administrator must purposely set the bucket’s permissions to public, and also set the permissions of its objects to public – or use scripting and/or policy to make that happen. “Out of the box,” so to speak, a newly created bucket can only be accessed by its owner and those the owner has granted at least read permissions on it. Since accessing the bucket requires those permissions and/or the API keys associated with them, default buckets are buttoned up and not visible to the world as a whole. The process of making a bucket and its objects public is also not a single-step thing. You must normally designate each object as public individually – a relatively simple operation, but time-consuming, as it has to be done over and over. Luckily, AWS has a robust API, and many different programming languages have libraries geared toward leveraging that API. This means that an administrator of a bucket can run a script that turns on the public attribute of everything within a bucket (see the sketch below) – but it still must be done as a deliberate and purposeful act.
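As a rough illustration of just how deliberate that act is, here’s what such a script might look like with the AWS CLI (a sketch only – the bucket name is hypothetical, and it assumes object keys without spaces):

# Deliberately flip every object in the bucket to world-readable.
# Nothing does this by default; an administrator has to run it on purpose.
for key in $(aws s3api list-objects --bucket example-bucket --query 'Contents[].Key' --output text); do
  aws s3api put-object-acl --bucket example-bucket --key "$key" --acl public-read
done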

So why make them public at all?

This is the first part of the question, and the most difficult to understand in many of the cases we’ve seen recently. S3 is designed to allow for the sharing of object data, either as static content for websites and streaming services (think Netflix), or as information shared between components of a cloud-based application (Box and other file-sharing systems). In these instances, making the content of a bucket public (or at least visible to all users of the service) is a requirement – otherwise no one would be able to see anything or share anything. So leveraging a script to make everything that goes into a specific bucket public is not, in itself, an incorrect use of S3 and related technologies.

No, the issue here is that buckets are made public as a matter of convenience or by mistake when the data they contain should *not* be visible to the outside world. Since a non-public bucket requires explicit permissions for each and every user (be it direct end-user access or API access), some administrators set buckets to public to make it easier to use the objects in the bucket across teams or business units. This is a huge problem, as “public” means exactly that – anyone can see and access that data, whether they work for your organization or not.
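The “convenience” route is often a bucket policy rather than per-object permissions. As a sketch (hypothetical bucket name), a policy file like this – note that a Principal of “*” means literally anyone on the Internet, not just your company – applied with one command opens the whole bucket:

{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "PublicReadForConvenience",
    "Effect": "Allow",
    "Principal": "*",
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::example-bucket/*"
  }]
}

aws s3api put-bucket-policy --bucket example-bucket --policy file://public-policy.json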

There’s also the potential for mistakes to be made. Instead of making only certain objects in a bucket public, the administrator accidentally makes ALL objects public. They might also accidentally put non-public data in a public bucket that has a policy making objects within it visible as well. In both cases, making the objects public is a mistake, but the end result is the same – everyone can see the data in its entirety.

It’s also important to point out that the data from these breaches was uploaded to these public buckets in unencrypted form. There are many reasons for this, too, but encrypting data that isn’t meant for public consumption is good design – especially if you’re putting that data in the cloud. This way, even if the data is accidentally put in a public bucket, the bad actors who steal it are less likely to be able to use or sell it. Encryption isn’t foolproof and should never be used as an alternative to making sure you’re not putting sensitive information into a public bucket, but it can be a good safety catch should accidents happen.
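As a sketch of that safety catch (using openssl and the AWS CLI; the filenames and the pass-phrase file are illustrative, not a real key-management scheme):

# Encrypt locally first, then upload only the ciphertext.
openssl enc -aes-256-cbc -salt -in customers.csv -out customers.csv.enc -pass file:./local-secret.key
aws s3 cp customers.csv.enc s3://example-bucket/backups/

If that bucket is ever flipped to public by mistake, what leaks is ciphertext rather than customer records.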

Whether the buckets were made public through operator error or short-sighted convenience, the fact that the buckets and their objects were made public is the prime reason for the breaches that have happened. AWS S3 sets buckets as private by default, meaning these companies had the opportunity to do nothing at all and protect the data, but for whatever reason they took the active steps required to break down the walls of security. The lesson here is to be very careful with any sensitive data that you put in a public cloud. Double-check any changes you make to security settings, limit access only to necessary users and programs by credentials and API keys, and encrypt sensitive data before uploading. Object Stores are not traditional file systems, but they still contain data that bad actors will want to get their hands on.

What is Ransomware, and how do I stop it?

I get asked this question a lot by folks from all over the tech industry, and from non-tech people just as often. Ransomware is not new, but several extremely high-profile attacks (like the “NotPetya” attack in Europe earlier in 2017) have put the topic back on the front burner of most people’s minds. With that in mind, let’s take a look at how to answer the question “What is ransomware, and how do I stop it?”

What is it?

Ransomware is a form of malware – software that is not wanted on your computer and does something detrimental to your machine or the data it holds. This particular form of malware is nastier than most, however. While many viruses, trojans, and other types of malware will delete data, ransomware encrypts the data on your disk, meaning the data is still there, but totally unusable by you until you decrypt it. The creator of the ransomware is effectively holding your data hostage for money.

Tech Note – Encryption:

Encryption is the process of manipulating the binary data of your files using a cipher of some form to make the data useless to anyone who cannot decrypt it with the appropriate key. Much like converting orders into code before sending them in a war zone, you can encrypt data to make it useless to anyone who doesn’t have the key. This technology lets us safely bank online, save data in the cloud, etc., and is not inherently a bad thing to have.
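A quick command-line illustration of the principle (openssl with throwaway pass-phrases, purely for demonstration):

# Encrypt a file with one pass-phrase...
openssl enc -aes-256-cbc -salt -in orders.txt -out orders.txt.enc -pass pass:correct-key
# ...decrypting with the wrong key fails ("bad decrypt") and the data stays useless...
openssl enc -d -aes-256-cbc -in orders.txt.enc -out /dev/null -pass pass:wrong-key
# ...and only the right key recovers the original file.
openssl enc -d -aes-256-cbc -in orders.txt.enc -out orders-restored.txt -pass pass:correct-key

Ransomware effectively runs the first step against your documents, then charges you for the last one.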

Ransomware arrives as an email attachment, as a “drive-by” download from a website (where you visit a site and are prompted to download an executable file), or sometimes as a true worm that infects other computers on the same network as one that has already fallen victim to the malicious code. Once the infection takes hold on a computer, the malware looks for certain types of files (most often documents, spreadsheets, database files, text files, and photos) and encrypts them in such a way that they are unusable by anyone until the malware author provides you with the decryption key.

The malware creator will offer to send you the key if you pay them the amount of money they are demanding – typically via the crypto-currency Bitcoin. They’ll also provide handy information on how to obtain Bitcoin, and the current exchange rates between the Bitcoin currency and your local currency. These malware authors are of course not going to provide just the helpful information. Along with that info comes a warning that if you don’t pay them by a certain date, your data will become permanently un-decryptable and lost forever. You seem to have only two choices: Pay the ransom or lose your data.

What do you do?

First, don’t panic. The malware creators of the world rely on people getting freaked out and doing anything they say in order to make the problem go away. Take a deep breath, step away from the computer for a moment, and then let’s deal with things.

1 – DO NOT PAY THE RANSOM! I can’t stress this enough, and there are very good reasons why you should never pay the ransom, no matter how tempting it might be. First, there is a very good chance that the malware creators won’t ever give you the decryption key. It’s depressingly common for malware authors to use ransomware purely as a tool to steal money, and once the malware becomes known, internet service providers and security researchers take steps to cut off the authors’ ability to actually get paid or send you the key anyway. Secondly, negotiating with bad actors only results in more bad actors. If an author of ransomware gets a ton of money from their victims, then other authors will see the money available and write more ransomware to get in on the act.

2 – Check online to see if the ransomware has already been broken. Especially for the older variants of ransomware, there is a chance a security research group has figured out what the decryption key is. Check with your anti-virus/anti-malware provider (Symantec, Sophos, etc.) and legitimate tech sites to see if the key has already been found and made available, and to get instructions on how to decrypt your files with it.

3 – If a decryption key isn’t available, then you will need to restore your data from backups AFTER you clean the malware off your system. Check with your anti-virus/anti-malware vendor or your company’s IT department to find out how to get your system cleaned up, and with your backup provider or IT team to get the last known good versions of your files back.

How do we stop it?

Stopping ransomware is not easy, as a successful attack can net the malware authors quite a bit of money. New variants pop up often, and some of them can spread themselves from machine to machine once the first few machines are infected (via email attachments, etc.). So how can you help stop ransomware and make it less profitable for the authors?

1 – DO NOT PAY THE RANSOM! Seriously, this cannot be said often enough. Each time someone pays the ransom, another author sees that they can make money by creating their own ransomware and spreading it around the internet. The first step in stopping the spread of this malware is to make sure there is nothing for the criminals who create it to gain.

2 – Keep your Operating System (OS), anti-virus, and anti-malware software up to date. No matter what OS you use (Windows, Mac, Linux, etc.) you are susceptible to malware of various kinds – including ransomware. Make sure you are regularly updating any desktops, laptops, tablets, and smartphones with OS updates and app updates as they are available. Even if you don’t feel comfortable having the OS keep itself updated automatically, be sure you are manually updating on a weekly basis at least. If you don’t have an anti-malware tool (such as those from Sophos, Computer Associates, etc.), then go download one and get it installed. Keep it updated – either via the tool’s own auto-update feature or just manually checking for updates at least daily. While anti-malware tools cannot catch every single variant of every malware package, they can catch a large number of them and keep you safer than not having one at all.

3 – Back up regularly. Use a tool that stores multiple versions of your files when they change – like Carbonite (disclosure: I’m a Carbonite subscriber and used to work for one of their family of products) or other such tools. This way, if you do get hit with ransomware, you can clean your system and restore last-known-good versions of files that were lost.

4 – Practice common sense internet safety. Don’t open attachments in email messages unless you know exactly what they are, who sent them, AND that they are legitimate. If you’re not sure of all three things, don’t open it – get confirmation from the sender first. Don’t click links in email. Instead, go to the website in question manually in your web browser and then navigate to the information you need. NEVER accept or open any files that automatically download when you load a website. If you didn’t click on it, don’t accept it. Along with that, always go to the vendor page to get new software. For example, if a site says you need a new version of Flash Player, then go to http://get.adobe.com/flashplayer and check for yourself instead of clicking on the link or button.

Protect yourself from ransomware as best as you can by following common-sense internet safety rules, and keeping your system backed up. Never pay the criminals who are holding your data for ransom. Finally, spread the word that ransomware can be stopped if we all work together and take the right precautions!

Locked Down Internet of Things and the Danger it Poses

Photo Credit: PicJumbo
The “Internet of Things” is a real thing these days, with everything from toothbrushes to refrigerators now connected to wifi networks and spewing forth data to so many locations it’s hard to track. But a few disturbing trends in the IoT world definitely should give us all pause for thought.

First, many of these IoT devices are severely locked down. They can’t be upgraded, updated, or patched easily by the end-user – and sometimes not at all. Granted, end-users are famous for not keeping digital things updated to begin with, but not even having the option is a disturbing turn of events. When devices cannot be updated or reconfigured by the end-user, it leads to issues both during the product’s support lifetime and after it ends.

During the active support lifetime of the device, the end user cannot ensure updates work properly, cannot roll back updates that fail or create new issues, and cannot control what information is kept and sent by the device itself. Manufacturers have many reasons for doing this – assuring a steady stream of information that they can market to others, for example. None of these reasons is a valid excuse for endangering the security of a home network, however. Malicious code that infects your connected refrigerator and cannot be removed until the manufacturer sends out an update is just not an acceptable situation.

After the lifetime of the product, even more problems arise. Manufacturers abandon products all the time, leaving them without any updates at all going forward – while plenty of people would still love to see if they can break in and wreak havoc. Thankfully, many products live on well past that point, taken over by community efforts and open-source projects that extend the lifetime of the codebase beyond the lifetime of 1st-party support. Locking down these devices so they can only ever be changed by the 1st-party developers can make continued community support impossible, blocking this ongoing benefit.

Secondly, locking down these devices also means that end-users become unable to see what communication is going on between those devices and the world at large. Data leakage will occur, and not being able to limit the data available to leak is a dangerous thing.

I’m not saying that all IoT devices need to be totally open and open-sourced. What I do believe, however, is that the consumer should have the right and the ability to say what will go where, and when it happens. This can be done with end-user accessible settings and controls, with the ability to apply patches and roll them back on demand, and the ability to keep unknown software off of them to begin with. Even Apple, famous for their closed ecosystem, does give users the ability to shut off things they’d prefer not to use. Yes, it will mean changing how we typically interact with these kinds of devices, but making them IoT has already done that; so it won’t exactly be a whole new paradigm. Support vendors who give the end-user enough control to keep themselves safe, and reject vendors that insist on locking out everyone without good reason.

Keep that in mind, when next you consider an internet connected fridge.

First Look: Plantronics BackBeat Pro

I finally decided to join the 21st century and get a Bluetooth stereo headset for my mobile devices. Up until now I’d been happy with a wired headset and a Bluetooth earpiece for when I just needed to make phone calls and nothing else, but with a recent job switch that focused a lot more on my mobile phone, an all-in-one device was going to be a better fit. Looking through the available options, I found a massive choice of products, and a ton of different feature sets to pick from. Luckily for me, several co-workers had gone through this process in the recent past, and helped me narrow down the choices to about 4 selections.

My required feature-set was pretty small:

– Long battery life, a minimum of ten hours of real-world use.

– Ability to activate Siri so that I could voice-control the device.

– Complete compatibility with iDevices (including volume, play/pause, all phone commands, etc.)

– Micro-USB charging. No adapters or other widgets that I’ll lose.

– Customization. Let me choose which features I actually want to use.

– COMFORT. I had experienced some headsets that were horrific on the ears over the years.

– Voice quality. Whoever I call has to be able to clearly understand me.

– At least a little style. This wasn’t the most important feature, but one I wanted on the list.

The combination of these features narrowed the choices down to two, and from that I went with the Plantronics BackBeat Pro headset. One quick browse of Amazon later and I was waiting for the package to arrive. A few days later, and the fun began.

So, how did the headset rank against my list of requirements?

— Battery Life: I never trust the battery specs on web pages and/or box copy. Every manufacturer lies. So when I saw “up to 24 hours of playback time,” I took it with a grain of salt. However, to my surprise, these cans do seem to go for quite a long time on a 3-hour charge. I can’t attest to the claim of 24 hours, but I have run them with music on constant shuffle for 8-plus hours and they didn’t seem to be anywhere near running out of juice. My guesstimate – based on the battery stats voice prompt and my use pattern – is that they’ll clear at least 10 hours with moderate phone use and constant music playback. That’s about the same run-time as the phone itself, so it works well. Verdict: PASSED

— Voice activation and control: The BackBeat Pro works with both Android and Apple devices, and is configured to properly activate Siri on iDevices with a long-press on the Phone button on the headset itself. What I found interesting (and sorely missing from some other wired and wireless headsets I’ve tried) is that not only do you get an audible beep when you press the button, but a second beep to alert you that you’ve held the button down long enough to initiate voice activation. That second beep is critical for me, as otherwise I tend to hold the button down too long and end up confusing the phone or (if you pair two devices) switching to another device. Voice commands were clearly picked up by the phone, and Siri had no issues with my requests, beyond its usual foibles that have nothing to do with the headset. Verdict: PASSED

— Complete iDevice compatibility. Nearly every headset I looked at has this nailed, and the BackBeat Pro was no exception. Various buttons and dials on the headset properly and correctly activated the associated features on the phone without any issues. This included full control over the audio playback (Play/Pause, Forward, Back, Fast Forward, Reverse, volume, etc.) and phone operations (answer, hang-up, redial, etc.). Verdict: PASSED

— Micro-USB charging. A lot of the headsets required charging stands/bases, or used a proprietary charger (even in this day and age), or otherwise made life for a guy who has a habit of losing chargers on business trips a living hell. The BackBeat Pro uses a standard micro-USB plug to charge, no issues. Verdict: PASSED

— Customization. Most of the headsets I looked at were multi-function, and have so many bells and whistles they could qualify as orchestras. The problem is, some features become downright annoying, and there’s no way to disable them. Case in point, the BackBeat Pro uses Plantronics’ motion-sensing technology to do things like pause the music when you take the headset off and lay it down. I find that unnecessary and possibly even totally annoying if moving the headset out of the way to pick up the phone triggers automatic call answering. Luckily, the BackBeat Pro comes with both Windows and Mac software that communicates via the USB charging cable to enable/disable features and install firmware updates, so you can just shut that stuff off if you don’t want to use it. Verdict: PASSED – plus easy firmware updates!

— Comfort. This is a mixed bag. The headset is big, and even a little heavy. It’s very well cushioned, so you don’t really feel it, and balanced well so that everything sits properly on your head, but it’s noticeable. The cushioning itself is well done, and in all the right places, and the headset isn’t a pain (literally or figuratively) to wear, but the size/weight could be an issue for some. Verdict: MIXED – I found it very wearable, but some will definitely feel it is too heavy.

— Voice Quality. I made several test calls with the headset, and the people on the other end of the line said I sounded clear and understandable. The BackBeat Pro has noise reduction and other features, so this wasn’t a major surprise, but since there is no boom-style mic I was a bit worried. There were no complaints from my callers, though, so I’m going with Verdict: PASSED

— Style. Another mixed bag. While they’re not ugly, they’re also not beautiful. Aesthetics aren’t my main concern when reviewing tech, so I was ok with it. Those looking for the streamlined style of a Beats headset or the ostentatious appeal of a Sennheiser kit won’t find much to love here, but they’re definitely wearable in public without fear of attracting too many stares. Verdict: MIXED, but passable.

There were some downsides to the BackBeat Pro, however:

They come with every feature enabled, so unless you use the software to turn off the annoyances, plan on learning how to properly handle and move them without triggering things. Additionally, they did NOT play well with my desktop. Audio was choppy and unreliable when attempting to stream music from my 2014 iMac – a problem I’ve found with many different wireless headsets. It got even worse when I had the BackBeat Pro multipoint paired (paired with two active devices simultaneously). Although Plantronics claims that multipoint isn’t a problem, the headset often had a hard time figuring out which device had “right of way” at any given time.

Finally, the audio tends to pull a bit to the treble side of the equation whenever the Active Noise Cancelling is turned on. Not so much that it really impacts casual listening, but there’s no bass boost, and if you are a connoisseur of very high quality audio you will definitely notice it.

Overall Verdict: PASSED

I’d recommend this headset for anyone looking for a true mobile headset to control, talk with, and interact with mobile phones and tablets. While the audio could be a bit better with the addition of a bass boost function – especially with Noise Cancelling enabled – the audio quality for the speakers and microphone is quite good – better than many other headsets and ear-pods I’ve used over the years. They’re not cheap, but they’re definitely not overpriced for what they do, and a solid choice for mobile stereo headsets.

Cloud Condensation

Photo Credit: PicJumbo
I made a prediction a couple of years back, and we’re beginning to see signs that it might just come true, a bit sooner than I expected, but still coming true.

The public cloud market is getting more and more crowded, to the point of saturation of the marketplace by hundreds of players of various and assorted sizes. Massive media attention has brought thousands of customers into those cloud platforms, at all different levels. The result is a highly segmented, nearly fractured, industry that cannot hold in its current form. The logical conclusion of this phenomenon – to use a term coined by a co-worker of mine – will be “Cloud Condensation,” and we’re already beginning to see it.

Cloud Condensation is the phenomenon of public Infrastructure as a Service cloud shrinking and creating two types of fallout:

1 – Through mergers, acquisitions, and corporate collapse, fewer public cloud companies will exist, and

2 – Companies who had begun to move resources to public cloud will reduce the amount of resources they place there, and in fact will begin pulling back many of those resources into private datacenters and/or traditional co-location facilities.

This is not to say that cloud itself will disappear – far from it. The cloud principle is strong and will continue to grow and expand over time. Cloud Condensation simply refers to the mind-shift of moving from public cloud to private or on-prem cloud platforms. There are also a lot more types of cloud platforms than just IaaS, and public SaaS and PaaS continue strong growth.

We are, however, seeing the beginnings of Condensation in public IaaS, and there are a few strong indicators that it’s happening:

– HP dropped Helion Public Cloud late in 2015. While they will continue to focus on HP Enterprise Cloud (their private cloud offering), they began to realize that public IaaS cloud was too crowded a sector.

– Citrix sold off Cloud Platform just recently. OpenStack and CloudStack are still strong, but both are designed for hybrid clouds and converged architecture. Cloud Platform is the tool for managing public clouds in their portfolio.

– Several smaller public cloud players are being acquired by larger players. This is pretty normal in any business, and only points to Condensation when combined with other factors.

– Verizon is winding down its public cloud offerings.

– Several other traditionally public cloud platforms are beginning to focus more on managed services.

Taken together, there is an industry push to private and on-prem IaaS cloud, and away from public cloud. Once again, this is NOT a death-knell for cloud at all, just a shift in how the cloud looks in the modern world. I suspect we’ll continue to see more of this consolidation and contraction in the market, with larger public clouds taking over market share from smaller shops – absorbing them or driving them under – and the rise of services and platforms designed for private and managed clouds taking the fore. My revised estimate is that we’ll see Condensation kick into high gear within the next 8 months, and extend out for another 12-18 months before we have the new paradigm.

Cloud – in all its forms – is here to stay. I just suspect (and we’re starting to see some indication) that we’ll see many companies moving to managed, private, and on-prem cloud platforms.

Time to update to El Capitan

Photo Credit: PicJumbo-Viktor Hanacek
While I typically wait a few months before updating to the latest major release of any OS, the time has come to start using El Capitan (OS X 10.11). The OS itself seems to have stabilized well, with the first and second major rounds of patching already complete and out in the wild. Additionally, there’s another pretty big reason to finally bite the bullet and upgrade:

Recently, a few apps I’d like to use have abandoned support for Yosemite (OS X 10.10), leaving users with little choice but to move to the newer version of OS X if they want to keep using the app. Since the apps in question are distributed through the Mac App Store, they simply won’t install on older versions than the developer specifies. In truth, the MAS won’t even let you purchase them on any Mac running on the earlier versions of OS X, entirely blocking you from getting the apps unless you jump into the latest OS X version.

I don’t see any major reason not to upgrade, however. The platform is getting rave reviews, and the battery life improvements will mean longer run times on my MacBook Pro. Some of the new cross-platform (OS X and iOS) features are also impressive. I’ve used Handoff and other tools between my Mac and mobiles since I jumped into Yosemite, and the ability to get caller ID and alerts on the Mac when I get a phone call is a nifty thing. Having the ability to send and receive both iMessage and normal SMS messages on all devices/computers is also very useful, as it means I don’t have to stop what I’m doing and grab another device just because a text came in.

All in all, there’s no real reason not to go ahead with the upgrade, and now there are more and more reasons why taking the plunge is the best idea.

Keep your Mac from falling asleep during restore

Photo credit: https://www.flickr.com/photos/alancleaver/

Restoration from a Time Machine backup can be a lifesaver, but restoring the whole system after booting into Internet Restore can cause some serious issues – especially if that restore takes an extended amount of time.

Normally, the process would be to simply hold down CMD+OPT+R after the BOING and until the spinning globe shows up on the screen. This automatically starts Internet Recovery Mode, and allows you to connect to WiFi or a physical network jack and begin the restore process. You select “Restore from Time Machine Backup,” select the appropriate image, and away you go. When the process is finished, your Mac is back to the way it was before your unfortunate incident, with very few exceptions (if any).

There’s a catch though.  Jumping into Internet Recovery Mode also loads the default set of Power Management options, and restoration of a full Mac system these days might take several hours.  Those two factors add up to one massive headache.  Unless you keep the system awake by tapping a key or moving the mouse now and then, the system will go to sleep in about 10 minutes, and start shutting down spinning disks about 10 minutes later.  This means that your – presumably external – Time Machine drive will also get spun down, crashing the restore operation and forcing you to start all over again.

Obviously, it’s just not practical to sit there and keep the system awake for the 6+ hour restore you’re in for if your Time Machine is on a USB 2 disk and is over 500GB or so.  There is, however, a way to force the system to never sleep, even in Internet Recovery Mode.

 

First, boot into Internet Recovery Mode and wait for it to start up.  That will bring you to a screen with a window offering you the basic choices of reinstalling OS X, restoring from Time Machine, etc.  Go to the menu bar at the top of the screen, and choose Utilities, then Terminal.  This closes the first window and brings up a command-line interface (the BASH Terminal) where you can enter these three commands:

 

pmset -a sleep 0

pmset -a disksleep 0

pmset -a displaysleep 0

 

Then quit Terminal via the menu, and walk through the standard restoration operation.

Here’s what you’re doing:

pmset is a utility in the underlying OS that handles setting parameters for Power Management options. In each case you’re telling OS X to set the named Power Management option (system sleep, disk sleep, display sleep). The “-a” tells OS X to set that option for all power profiles – while you’ll probably only use AC Power during a restore, it’s a good idea to just tell the Mac to use it for all of them. “0” sets the time-out to zero – in other words, never sleep.

The result is that the Mac will never dim the display, go to sleep, or stop the spinning disks until you a) re-set those options or b) boot into another OS instance. Since you’re going to boot into a new instance when the restore is done, you don’t have to worry about changing them back later.
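If you want to double-check that the settings took hold before kicking off a multi-hour restore, pmset can also report the current values:

pmset -g custom

This lists the Power Management settings for every power profile; sleep, disksleep, and displaysleep should all now read 0.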

Simple as that!  Open Terminal, type those three commands, and then quit Terminal and walk through the restore process from your Time Machine backup with no interruptions.

Critical Mac Security Update

For those of you who keep an eye out for weird pop-ups and messages, you most likely noticed a Notification or Growl message that “A critical security update has been applied.”

When I saw that, I had a moment of panic, as I had – up until now – told OS X that I wanted to manually install patches, updates, and fixes. So this message out of the blue was a bit of a shock. After some online research (and with help from some great Twitter friends like @UberBrady) I was able to get to the bottom of it.

First things first: if you upgraded to Yosemite from an earlier version of OS X, most of your preferences came over – but one very important one was added and is turned on by default. Starting in Yosemite, OS X includes an “emergency update system” that automatically downloads and applies any patches that Apple believes to be extremely critical security fixes. They have, to date, classified only one patch in that category, and this was it. This critical update system is ENABLED by default, and frankly you should leave it enabled. But if – for some reason – you need to turn it off, jump over to Apple Menu | System Preferences | App Store and you’ll see the settings for auto-updates, including the relatively new one for emergency patches labeled “Install system data files and security updates”:

Screenshot of the App Store preferences

Even though this setting would appear to cover a lot of patches, note that you’ll still have to download and install “Optional,” “Important,” and other patches manually if you do not check the other two boxes.
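For those who prefer the command line, the same emergency-update setting can reportedly be toggled with defaults. This is a sketch based on my understanding of the preference key involved, so verify it on your own system before relying on it:

defaults read /Library/Preferences/com.apple.SoftwareUpdate CriticalUpdateInstall

sudo defaults write /Library/Preferences/com.apple.SoftwareUpdate CriticalUpdateInstall -int 1

The first command reads the current value (1 means enabled, which is the default); the second re-enables the setting if it was turned off.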

Now, onto the particulars of the update:

Apple recently announced a fix for the Network Time Protocol (NTP) service in OS X. The bug could allow an attacker to take control of system resources (which is a bad thing) with relatively little effort (which is a HORRIBLE thing). This means un-patched systems are vulnerable to attack and need to be patched immediately. Luckily, if you haven’t changed the defaults, Yosemite will patch it automatically as described above.

A more detailed explanation of what the vulnerability is can be found on Apple’s Site.

So, have no fear: the unexpected Notification is not, itself, an attack. Rather, it’s a new feature in OS X designed to help protect against attackers – one that was just rather well hidden, and never before used, up to this point.

Stay Safe!