09/27/2017
Bailing S3 Buckets
Headlines have been breaking out all over for the last few weeks about high-profile data breaches caused by company databases and other information being stored in public Amazon Web Services (AWS) Simple Storage Service (S3) buckets. See here and here for two examples. The question I get most often around these breach notices is, “Why does anyone leave these buckets as public, and isn’t that AWS’s fault?” The answer is straightforward, but comes as a bit of a shock to many – even many who work with AWS every day.
A quick refresher on S3
For those not familiar with S3 or what it does: basically, S3 is an online file system of a very specific type, a cloud-based Object Storage platform. Object Storage is designed to hold unstructured collections of data, which typically are written once and read often, are overwritten in their entirety when changed, and are not time-dependent. The last point simply means that having multiple copies in multiple locations doesn’t require that they be synchronized in real time; they can be “eventually consistent” without breaking whatever you’re doing with that data.
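To make that write-once/read-often model concrete, here is a minimal sketch using boto3 (AWS’s Python SDK); the bucket and key names are hypothetical, and it assumes AWS credentials are already configured locally:

```python
import boto3

# Hypothetical bucket and object key, purely for illustration.
BUCKET = "example-reports-bucket"
KEY = "2017/09/quarterly-report.csv"

s3 = boto3.client("s3")

# Write the object once, in its entirety...
s3.put_object(Bucket=BUCKET, Key=KEY, Body=b"id,total\n1,42\n")

# ...then read it back as often as needed.
response = s3.get_object(Bucket=BUCKET, Key=KEY)
print(response["Body"].read().decode("utf-8"))

# Changing the object means overwriting the whole thing, not editing it in place.
s3.put_object(Bucket=BUCKET, Key=KEY, Body=b"id,total\n1,43\n")
```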
S3 organizes these objects into “buckets” – the loose equivalent of a folder on more common operating system file systems like NTFS or EXT. A bucket holds objects, and key prefixes within a bucket behave like sub-folders. Both the bucket and the objects inside it carry security permissions that determine who can see the bucket, who can list its contents, who can write to the bucket, and who can read or write the objects. These permissions are set by S3 administrators and can be delegated to other S3 users from the admin’s organization, or to other organizations/people that have authorized AWS credentials and API keys.
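As a rough illustration of that delegation model, an administrator might attach a bucket policy that grants read access to one specific AWS account rather than to the world. This is only a sketch; the bucket name and account ID below are placeholders:

```python
import json
import boto3

s3 = boto3.client("s3")

BUCKET = "example-reports-bucket"   # hypothetical bucket
TRUSTED_ACCOUNT = "111122223333"    # placeholder AWS account ID

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowReadToOnePartnerAccount",
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{TRUSTED_ACCOUNT}:root"},
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                f"arn:aws:s3:::{BUCKET}",
                f"arn:aws:s3:::{BUCKET}/*",
            ],
        }
    ],
}

# Only callers from the trusted account (using valid credentials) can read;
# everyone else remains denied by default.
s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```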
It’s not AWS’s fault
Let’s begin with the second half of the question. These breaches are not a failure of AWS’s security systems or of the S3 platform itself. You see, S3 buckets are *not* set to public by default. An administrator must purposely set the bucket’s permissions to public and also set the permissions of its objects to public, or use scripting and/or policy to make that happen. “Out of the box,” so to speak, newly created buckets can only be accessed by the owner of that bucket and those who have been granted at least read permissions by the owner. Since accessing the bucket requires those permissions and/or the API keys associated with them, new buckets are buttoned up and not visible to the world as a whole. The process of making a bucket and its objects public is also not a single-step thing. You must normally designate each object as public, which is a relatively simple operation but time-consuming, as it has to be done over and over. Luckily, AWS has a robust API, and many programming languages have libraries geared toward leveraging it. This means that an administrator of a bucket can run a script that turns on the public attribute of everything within a bucket, but it still must be done as a deliberate and purposeful act.
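To show just how deliberate that act is, here is a sketch of the kind of script described above, flipping every object in a (hypothetical) bucket to public-read one call at a time via the API:

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "example-reports-bucket"  # hypothetical bucket name

# Walk every object in the bucket, page by page...
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=BUCKET):
    for obj in page.get("Contents", []):
        # ...and explicitly mark each one as world-readable.
        # Nothing becomes public unless a call like this is made on purpose.
        s3.put_object_acl(Bucket=BUCKET, Key=obj["Key"], ACL="public-read")
```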
So why make them public at all?
This is the first part of the question, and the most difficult to understand in many of the cases we’ve seen recently. S3 is designed to allow for the sharing of object data, either as static content for websites and streaming services (think Netflix) or as information shared between components of a cloud-based application (Box and other file-sharing systems). In these instances, making the content of a bucket public (or at least visible to all users of the service) is a requirement; otherwise no one would be able to see or share anything. So leveraging a script or policy to make anything that goes into a specific bucket public is not, in itself, an incorrect use of S3 and related technologies.
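For that legitimate use case, the usual pattern is a bucket policy that makes the objects world-readable on purpose. A minimal sketch, again with a hypothetical bucket name, might look like this:

```python
import json
import boto3

s3 = boto3.client("s3")
BUCKET = "example-static-site-assets"  # hypothetical bucket for public web content

# Classic "static website" policy: anyone may GET the objects in this bucket.
public_read_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadForWebsiteContent",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
        }
    ],
}

s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(public_read_policy))
```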
No, the issue here is that buckets are made public as a matter of convenience, or by mistake, when the data they contain should *not* be visible to the outside world. Since a non-public bucket requires explicit permissions for each and every user (be it direct end-user access or API access), some administrators set buckets to public to make it easier to use the objects in the bucket across teams or business units. This is a huge problem, as “public” means exactly that: anyone can see and access that data, whether they work for your organization or not.
There’s also the potential for mistakes. Instead of making only certain objects in a bucket public, an administrator accidentally makes ALL objects public. They might also accidentally put non-public data in a public bucket whose policy makes the objects within it visible as well. In both cases, making the objects public is a mistake, but the end result is the same: everyone can see the data in its entirety.
It’s important to also point out that the data from these breaches was uploaded to these public buckets in an unencrypted form. There are lots of reasons for this, too; but encrypting data not meant for public consumption is a good practice to implement, especially if you’re putting that data in the cloud. That way, even if the data is accidentally put in a public bucket, the bad actors who steal it are less likely to be able to use or sell it. Encryption isn’t foolproof, and it should never be used as an alternative to making sure you’re not putting sensitive information into a public bucket, but it can serve as a good safety catch should accidents happen.
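As a sketch of that safety catch, data can be encrypted client-side before it ever reaches S3. This example assumes the third-party `cryptography` package and a hypothetical bucket and key; a real deployment would also need proper key management:

```python
import boto3
from cryptography.fernet import Fernet  # pip install cryptography

BUCKET = "example-reports-bucket"  # hypothetical bucket
KEY = "hr/salaries.csv"            # hypothetical object key

# In practice this key would live in a secrets manager, never alongside the data.
encryption_key = Fernet.generate_key()
fernet = Fernet(encryption_key)

plaintext = b"employee,salary\nalice,100000\n"
ciphertext = fernet.encrypt(plaintext)

# Even if this object ends up in a public bucket by mistake,
# anyone who downloads it gets only the ciphertext.
s3 = boto3.client("s3")
s3.put_object(Bucket=BUCKET, Key=KEY, Body=ciphertext)
```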
No matter if the buckets were made public due to operator error or for the sake of short-sighted convenience, the fact that the buckets and their objects were made public is the prime reason for the breaches that have happened. AWS S3 sets buckets as private by default, meaning that these companies had the opportunity to just do nothing and protect the data, but for whatever reason they took the active steps required to break down the walls of security. The lesson here is to be very careful with any sensitive data that you put in a public cloud. Double-check any changes you make to security settings, limit access only to necessary users and programs by credentials and API keys, and encrypt sensitive data before uploading. Object Stores are not traditional file systems, but they still contain data that bad actors will want to get their hands on.
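In the spirit of that double-checking advice, here is a small audit sketch that flags any bucket whose ACL grants access to the AWS “AllUsers” or “AuthenticatedUsers” groups. It checks ACLs only; bucket policies would need a separate review:

```python
import boto3

# Public-facing grantee groups defined by AWS.
PUBLIC_GROUPS = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    acl = s3.get_bucket_acl(Bucket=name)
    public_grants = [
        grant for grant in acl["Grants"]
        if grant.get("Grantee", {}).get("URI") in PUBLIC_GROUPS
    ]
    if public_grants:
        print(f"WARNING: bucket '{name}' has public ACL grants: "
              f"{[g['Permission'] for g in public_grants]}")
```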
10/12/2017
Out with LiquidSky, in with @Paperspace
by Mike Talon • Cloud, games, Mac, Review
Those who follow me on Twitter know I have, in the past, been a big fan of LiquidSky for cloud gaming. What I’ve found over time, however, is that I can no longer support that platform. I’ve officially cancelled my subscription and have been using a new platform, Paperspace with Parsec, for several months now. The reasons for the change are straightforward, and they could have been addressed by LiquidSky before I jumped ship, but they were not.
So let’s address why I made the switch:
1 – Mac Support: LiquidSky originally had a great Mac client. It wasn’t perfect, but they were working on correcting the few issues it had and making it better. Then LiquidSky 2 launched without a Mac client at all. Over the remainder of 2017, we Mac users patiently waited for the next-generation Mac client, but to no avail. Update after update of the Windows client came, and an Android client finally launched, but the Mac client continued to be listed as “coming soon.” As one of the major uses of cloud gaming is allowing Linux and Mac users to play these games, this is inexcusable. The Windows client can be used on a Mac with virtualization or emulation (things like VMware Fusion and Wine), but this requires a level of technical expertise beyond that of the majority of users, and it doesn’t provide a pleasant user experience at all.
Paperspace has had a Mac client since day one of their GPU-enabled gaming desktop services. It works, and it works very well, and they’re continuing development of the platform as they move forward to make it even better. They partner with Parsec to minimize latency and maximize the gaming experience overall, and they provide complete and easy-to-follow instructions on how to install and use these tools that anyone can follow.
2 – Latency: LiquidSky has continued to get worse and worse on this front as it gets more popular. While I’m happy they’re getting more users, they’re not scaling properly to give that larger user base a good experience when they play. Overburdening of their systems is taxing their networks, causing lag that makes many games impossible to play and most games just plain unpleasant. Even using Wine to jury-rig their client into working on a Mac, visuals are “muddy” and reaction is sluggish and painful most of the time.
Paperspace keeps their networks and platform robust as it grows. It’s not perfect – there are periods of peak activity that definitely cause hiccups, lag, and some muddiness; but they’re far fewer than I ever experienced on LiquidSky and seem to be kept short. You’ll get a few seconds of sluggishness and stutter, and then you’re back to the great desktop experience you want.
3 – Billing Experience and Support: LiquidSky just doesn’t seem to care about its customers. It pains me to say that, as this is completely different from the experience I had when I started using their service. Customer support used to be fast, efficient, and friendly. Now, it seems that they respond when they feel like it, if at all, and basically always answer with “we’re working on that.” While this answer is perfectly acceptable when a new platform launches or a major overhaul has been rolled out, that period of acceptability ended several months ago and the attitude has continued nonetheless. Billing is painful, as it is now handled entirely by a 3rd party and not even visible on the LiquidSky site. The shift from unlimited accounts to a points system that rents access by the hour is even more confusing, and poorly explained. Let me be clear: they needed to raise their rates (no one could hope to grow and expand with the numbers they were offering), but they should make it easy for people to figure out what they’re paying for. Use real money for the per-hour fees, not a conversion first to points and then to different amounts of points for each size of machine that can be run.
Paperspace has two billing options: per-hour fees in real money and unlimited plans at a fixed amount of money per month. They do charge far more than LiquidSky for unlimited accounts, but they are available and a decent value indeed for those of us who spent a lot on our Mac or Linux desktops and do not wish to buy a Windows machine with that much horsepower just to play games. Billing is handled by Paperspace and all options are available from their own website so I can manage my account quickly and easily. Support is stellar! Paperspace requires the use of a 3rd-party service called Parsec to play games (it mitigates many of the latency issues and handles things like controller support). I have been able to get help on Parsec from Paperspace directly, even though it isn’t their code or product. Paperspace always replies quickly and in a friendly manner.
All in all, LiquidSky seems to have totally lost the plot when it comes to cloud gaming. They shifted their focus to gaining more users as fast as possible by offering free credits for watching ads, but didn’t plan well for the influx of users that brought. They lost focus on their customers, and service and support suffered. They’ve outsourced their billing to a 3rd party and detached themselves from that process, and made the new purchase plans confusing and complex. Finally, they’ve stabbed their Mac customers in the back by focusing so heavily on Windows. I do understand that the vast majority of the gaming market is Windows, so this isn’t an unsound business decision on their part. That being said, they had a fanatically loyal user base of Mac folks, who are now abandoning the service due to neglect. They did so just as several well-known names like nVidia jumped into this space to compete for those same Windows and mobile users. So they’ve given up one advantage (a dedicated and untapped market) to maximize their effort in a crowded space against major household names. That’s not the best business plan.
Paperspace, with the help of Parsec, offers the total package: high-quality service, ease of use, native clients on Mac, and reasonable prices. Note that cloud gaming is currently a very expensive proposition, with monthly fees averaging about US$200/month for unlimited use and per-hour fees higher than those for commodity compute. It is, however, worth it, especially for occasional gamers who just want to play one or two Windows-only games and therefore don’t need a monthly unlimited plan. It’s not perfect. Setup can be challenging, and not all hardware is fully supported (especially USB devices like gamepads and microphones for chat), though that’s also the case for LiquidSky and not a Paperspace-specific issue. There are instances of network congestion, minor nitpicks, and so on. Compared to their competition, however, they’re showing themselves to be leaders in the cloud gaming space, giving big-name brands like nVidia a real challenge and proving that they know what they’re doing and will get it done. They’re also proving themselves savvy businesspeople by targeting users who want the service and have found that other platforms don’t get the job done. Mac and Linux users who want to play Windows games exist, and they spend money with companies that remain loyal to them. Paperspace is going after that loyalty while retaining Windows customers, and that’s a recipe for success.
So give Paperspace a look if you’re gaming and not on hardware that can support those games well. No matter if it’s Windows, Mac, or Linux on your desktop, they can make your experience a lot better. Start with an hourly GPU instance and see if it meets your needs. You can always graduate to a monthly plan later if that will save you money. The Paperspace team will indeed be there to help you choose, help you get set up, and help you get back in the game.