EPEAT and Apple

Earlier this week, the media erupted with news that Apple was no longer going to register its products with the Electronic Product Environmental Assessment Tool (EPEAT). On the surface, this might sound like Apple is taking another step to say “screw you” to the environment, but it’s really not.

Now, before people start flaming me to death, I’ll be the first to admit that Apple has some non-eco-friendly policies. Their sourcing and manufacturing leave a lot to be desired on that front; however, this particular announcement doesn’t add to that tally at all.

EPEAT – for those who’ve never heard of it – is a non-governmental organization that was started with funding from the US Environmental Protection Agency. Its goal is to give buyers, sellers, resellers, and consumers of electronics a way to register and track the ecological impact of the products they make and use. That’s great, and a wonderful way to show the community that your company has an eye on its eco bottom line as well as its monetary bottom line.

Apple, however, has policies which exceed those required by EPEAT, and in some cases do so in ways that don’t fit into the certification. I’ll let you read up on EPEAT as much as you want at their website, but I wanted to point out a few things Apple is already doing that impact electronics recycling and the environment in general:

1 – Apple has taken many steps in recent years to make their business more eco-friendly. Smaller packaging, more efficient manufacturing and better energy efficiency are just the tip of the iceberg. Read more about that here. They’re still far from a stellar player in environment-friendly manufacturing, but they’re working on it.

2 – Any Apple Store will take in Apple computers for recycling – free of charge. As a matter of fact, if you bring in an Apple computer that still has some monetary value (no matter how old it is), they’ll give you a gift certificate to use toward the purchase of new gear. The same goes for iPhones, iPods and iPads – and with phones they promise at least a 10% discount on new gear, even if the device is too old to be resold.

3 – Apple also allows you to bring in ANY PC or mobile phone and get at least a 10% discount, with them recycling the old gear for you. So they’ll recycle your old gear even if they didn’t sell it to you originally.

4 – All of these offers also work by mail. For larger gear (PCs, Mac desktops and Mac laptops) you’ll have to pay postage. For smaller items like phones and iPods, Apple will pay the postage.

You can get details on all of these recycling programs from the Apple Recycling Program page.

So, while Apple still has a way to go before anyone starts calling them an “eco-friendly” corporation, this particular issue is not something they should be faulted for. They already offer manufacturing and recycling options in excess of the EPEAT guidelines, so it didn’t make a lot of sense to spend a large amount of money on the re-certifications. They can, and do, make the information freely available on their website.

This was just one case where the expense in time and money that Apple would need to put out far outweighed the perceived benefit of renewing the certifications. As long as they keep publicly and freely showing how they exceed the requirements, I can’t find fault here.

FYI: You can read Apple’s statement to TheLoop about the EPEAT issue here.

What is multi-tenancy?

For virtual solutions, the idea of having multiple customers leveraging the same infrastructure is nothing new. The whole theory of operations is that instances of applications and entire OSes can run simultaneously on one piece of physical hardware. However, with the advent of public cloud systems, the challenge is to make that work when not all the users of a particular system get along or like to share.

The issue isn’t that multiple users leverage the same systems, but rather that multiple users who cannot or do not want to share data and resources are acting on the same systems at the same time. Think of Amazon Web Services: customers who do not want their data shared with each other (like Netflix and Amazon’s own streaming product line) can and do co-exist on the same data systems. AWS has to keep the platform shared, but the data and operations separated.

In addition to data segregation, administration must also remain separate. Customers A and B need to be able to monitor and maintain their instances, but cannot see or touch each other’s instances of apps and servers.

Finally, billing depends on the number of users and/or the data, storage, and transmission bandwidth that each organization uses. So the service provider needs to be able to bill each customer independently, even though they’re all using the same infrastructure.

And so, multi-tenancy, according to Wikipedia:

refers to a principle in software architecture where a single instance of the software runs on a server, serving multiple client organizations (tenants).

Simply stated, multi-tenancy is what lets unique infrastructure components (like VM hosts and apps) be shared safely and effectively by multiple users and groups.
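
To make that concrete, here’s a minimal, hypothetical Python sketch – not any particular provider’s API – showing the three separations discussed above (data, administration, and billing) enforced on one shared instance by keying everything to a tenant:

```python
from collections import defaultdict

class SharedService:
    """One running instance that serves many tenants. Data, admin rights,
    and usage metering are all keyed by tenant so nothing leaks across."""

    def __init__(self):
        self._data = defaultdict(dict)   # tenant_id -> {key: value}
        self._admins = defaultdict(set)  # tenant_id -> {users allowed to administer}
        self._usage = defaultdict(int)   # tenant_id -> billable operations

    def add_admin(self, tenant_id, user):
        self._admins[tenant_id].add(user)

    def _check(self, tenant_id, user):
        if user not in self._admins[tenant_id]:
            raise PermissionError(f"{user} cannot administer tenant {tenant_id}")

    def put(self, tenant_id, user, key, value):
        self._check(tenant_id, user)          # administrative separation
        self._data[tenant_id][key] = value    # data separation
        self._usage[tenant_id] += 1           # metered for this tenant's bill

    def get(self, tenant_id, user, key):
        self._check(tenant_id, user)
        self._usage[tenant_id] += 1
        return self._data[tenant_id].get(key)

    def bill(self, tenant_id, price_per_op=0.01):
        return self._usage[tenant_id] * price_per_op   # independent billing


# One shared instance, two tenants that never see each other's data.
svc = SharedService()
svc.add_admin("tenant-a", "alice")
svc.add_admin("tenant-b", "bob")
svc.put("tenant-a", "alice", "catalog", ["title-1", "title-2"])
print(svc.get("tenant-a", "alice", "catalog"))  # ['title-1', 'title-2']
print(svc.bill("tenant-a"))                     # 0.02
# svc.get("tenant-a", "bob", "catalog")         # would raise PermissionError
```

Real providers enforce this with hypervisors, virtual networks, and identity systems rather than a dictionary, but the principle is the same: one shared platform, hard walls between tenants.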

Photo Credit: Steve-h

HP Jumps in the Cloud Game

Earlier this week, HP announced it is getting into the cloud game. In and of itself, the announcement isn’t a shock, as many hardware makers are re-tooling for the reality of hosted applications and servers in cloud configurations. However, I was impressed by the depth of what they’ve been working on at HP.

In addition to a public cloud offering – which will be the first piece of the tech they beta in May – HP is ramping up a few other services to complement it:

CloudMap systems, which create ready-to-go images and applications to encourage roll-out into cloud resources. This isn’t new – Amazon has had pre-built images nearly from the get-go – but it’s very nice to see.

Virtual Private Clouds for enterprises that want flexibility but don’t need or want the general public to access their cloud plant. Again, not new, but a good sign that HP realizes that just saying they have a cloud solution isn’t enough for most organizations to get on board.

Services offerings wrapped around all of this to allow an enterprise to just define what they want to put in the cloud, and have HP figure out how to get it done.

Bringing both the platform and the services in-house is a welcome sign that big manufacturers have begun to truly embrace distributed resources. Just saying “We do cloud” is nice, but doesn’t help anyone get there. HP’s decision to offer hand-holding to firms that don’t have the internal resources to build out these things will make adoption in larger firms easier.

Of course, that leads to bigger contracts for HP, but everything has a trade-off.

Photo Credit: Luigi Rosa

Single-Vendor or All of Them?

There are quite a few virtualization platforms out there. From VMware to Microsoft to Xen to KVM and beyond, the choices abound.

Do you want to stick to one vendor for all virtual technologies, or work with many of them at once? That’s a valid question, and one more companies are looking at every day.

Standardizing on one virtual platform has benefits. The vendor in question makes management tools for its own software, and having one platform means having fewer tools to learn. Also, since most vendors make entire suites of tools, you can probably find server, desktop and application virtualization platforms from one vendor alone.

Spreading out also has benefits. Some vendors only make one type of virtualization product (such as a hypervisor for server virtualization only). Sticking with just one vendor would limit the tools available to you.

Cost always comes into play, as the more advanced platforms often carry higher price tags. Using only one vendor for all your needs might inflate your budgets dramatically – and in some cases unnecessarily, since other vendors make tools that are less expensive and work great. Don’t forget training costs either, as multiple tools from multiple vendors mean training your staff on multiple systems.

Which will you do? Most of the organizations I talk to started out on a single-vendor methodology. As folks like Quest Software roll out multi-vendor management solutions, those organizations are beginning to explore having multiple vendors work in the same datacenter. This gives them the flexibility to choose the best vendor for each tool they need, without losing control of the environment or having to learn a large number of tools just to keep things running.

Cross-platform management isn’t 100% there yet, but it is getting there, so we could easily see a day in the near future where the decision is a moot point. Until then, what’s your company doing? Sound off in the discussion section!

Photo Credit: lumaxart

Ready to hit the road?

I’m on a train.

No, really, I’m typing up this blog post as I travel from NYC to Rochester, NY.

That’s got me thinking about how we IT folks are a mobile bunch, traveling anywhere we need to be to do the job that needs doing.

Specifically, it has me thinking about how to manage a Virtual Infrastructure while on the road – no small task, to be sure.

First, you need a connection to the Internet. On the ground, that’s not so hard, but it does require some forethought. You’ll either need to know someplace where you can connect to WiFi, or bring a mobile modem or WiFi hotspot along with you. You could tether your phone, but keep in mind that you may not be able to make or receive calls while you do, so an independent data device isn’t a bad idea if you travel a lot.

In the air, that’s a different story. Most major air carriers have WiFi on only a few – if any – flights. Check ahead to see if you’ll have access to connectivity as you fly the rarely-friendly skies.

Then, you’ll need a VPN. When doing remote admin for virtual systems, you will be talking to components like vCenter and Virtual Machine Manager, which means you’ll effectively be transmitting the keys to your kingdom across whatever networks you’re on. Sending that data “in the clear” is a very bad idea.

Once safely linked to a network, you need the right configuration at your datacenter. For VMware, you can use the vCenter web client to do most things, but you may want a Remote Desktop server to give you access to the full versions of various tools while on the road. This might be Microsoft’s own RDP server, or a third-party remote-access tool pointing to your own desktop – depending on the security policies of your organization.

For Cloud platforms, this becomes a bit easier. As these systems are typically designed to be administered via Web interfaces anyway, you won’t need the RDP server, but you still need the connectivity and security. Make sure your vendor supports linking to their tools over HTTPS/SSL and use it – always.
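
As a small illustration – a sketch using Python’s requests library with a made-up management endpoint and token, since your provider’s actual URLs and auth scheme will differ – the habit to build is refusing to talk to the management API over anything but verified HTTPS:

```python
import requests

# Hypothetical values -- substitute your cloud provider's real endpoint and token.
API_BASE = "https://cloud.example.com/api/v1"
API_TOKEN = "your-api-token"

def list_instances():
    """Query the management API, insisting on verified HTTPS.

    requests verifies TLS certificates by default (verify=True). Resist the
    temptation to pass verify=False just to silence a certificate warning on
    a hotel or airport network -- that defeats the whole point.
    """
    if not API_BASE.startswith("https://"):
        raise ValueError("Refusing to send credentials over plain HTTP")
    resp = requests.get(
        f"{API_BASE}/instances",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=10,   # mobile links drop; fail fast instead of hanging
        verify=True,  # explicit, even though it is the default
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    for instance in list_instances():
        print(instance)
```

The same rule applies whether it’s a REST call, the vendor’s web console, or a command-line tool: if it isn’t going over verified TLS, it can wait until you’re on the VPN.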

Once you have all these tools and tech lined up, you can administer your Virtual Infrastructure from just about anywhere you can get a mobile signal. Just remember to go slowly and ensure that you save your progress at every opportunity. You never know when the cell network will give up the ghost, leaving you with no connection and a lot of work half-done.

Photo Credit: nrkbeta

How thin are your partitions?

One of the first things you do when configuring a new Virtual Machine is to define the storage resources it will be using. Mostly, this is because even the most basic of VMs needs someplace to put the operating system files, and that place is a virtual disk or a pass-through to a physical disk.

Alright: you have the requirements for your disks, and you want to use virtual storage (VMDK for VMware, VHD for Hyper-V, etc.) to house all files and data for this VM. Now you’re faced with the decision of what kind of virtual disk to use. There are two common choices: thin or thick provisioning for the data and system volumes.

So what do those choices mean?

Thick provisioning (sometimes called a static or fixed-size disk) means allocating all the space the disk can take up immediately. So if you tell the system to create three 50GB thick-provisioned disks, you will see three VMDK or VHD files created, each using 50GB of space. You can typically re-size the disks later, but this is a manual process.

Thin (or dynamic, or expanding) disks allocate space only when necessary, and automatically. They typically start out with a few hundred MB of space, but are capable of growing up to whatever limit you set on them as required.
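
A toy Python model of the difference – just to illustrate the allocation behavior; real hypervisors track this inside the VMDK/VHD formats, not with anything like this – might look like:

```python
class ThickDisk:
    """Fixed-size: all of the space is claimed from the datastore up front."""
    def __init__(self, max_gb):
        self.max_gb = max_gb
        self.allocated_gb = max_gb          # claimed immediately

class ThinDisk:
    """Expanding: starts tiny and grows only as data is actually written."""
    def __init__(self, max_gb, initial_gb=0.2):
        self.max_gb = max_gb
        self.allocated_gb = initial_gb      # just a small stub to start

    def write(self, gb):
        if self.allocated_gb + gb > self.max_gb:
            raise IOError("Guest sees a full disk: hit the declared maximum")
        self.allocated_gb += gb             # grows automatically with the data

thick = ThickDisk(50)
thin = ThinDisk(50)
thin.write(8)
print(thick.allocated_gb, thin.allocated_gb)   # 50 vs 8.2 GB used on the datastore
```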

So why would you choose one over the other?

Thick provisioned disks allow you to explicitly allocate storage to machines where you know that you’ll need X amount of space most of the time. In earlier versions of hypervisor tools, they also offered better performance because the hypervisor didn’t need to dynamically track each volume and expand it over time. However, most of the performance issues are no longer present, so the choice to use fixed-size volumes is more about simply knowing for certain that a particular amount of space is necessary.

But what if you’re not sure how fast a group of five servers will grow, and you only know that two of them will grow at all – just not which two? You don’t want to allocate all the space for all the volumes when you know that three of the servers won’t ever need that much space. It’s a waste of (potentially expensive) disk space. That’s where thin provisioning comes in.

Think of a thin-provisioned disk as a water balloon. It starts off as a very deflated balloon with just a tiny bit of air in it to get it started. Then, as water (data) is poured into the balloon, it swells up toward its maximum size over time. You don’t have to do anything to get the balloon to grow except add more water – in much the same way that adding data to a thin disk makes its size increase.

If you have just the one water balloon, the only problem you have is trying to put in more water than it can hold at maximum. Stay below that much water, and you can add and remove water whenever you want. Thin disks are limited by the maximum amount of space you declare they can use, but can grow and shrink within those limits as necessary.

Now, back to our five server scenario. Let’s say you had five balloons in a rigid box that could only allow any two of them to grow to full size. So long as only two get that level of water, you’re fine. If a third tries to grow too big, all the balloons pop and leave you with a major mess.

In thin provisioned disks, the rigid box is the total amount of physical disk you have to work with. So as long as not all five VMs try to use up their full allocated space, you’re fine. Have too many disks use up too much space, and boom.

For our scenario, I know that only two will ever grow to their full capacity; I’m just not sure which two it’ll be. So I put all five VMs in the same balloon box and watch them, making sure only two fill up.

That is – of course – a gross oversimplification of how thin provisioned disks work, but you get the general idea. Each disk uses up only the space it needs, and can grow within the limits of the physical disk allocated to the group of VMs. If too many disks grow too quickly, you have to jump in and move some VMs to other storage systems to avoid running out of room to allocate space.
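
Here’s a back-of-the-envelope sketch in Python of the math you end up keeping an eye on – made-up numbers, and a far cruder check than what your hypervisor’s own tools give you, but it shows the overcommitment idea:

```python
def datastore_report(physical_gb, thin_disks):
    """thin_disks: list of (currently_used_gb, max_gb) tuples, one per disk."""
    used = sum(u for u, _ in thin_disks)
    promised = sum(m for _, m in thin_disks)
    print(f"Physical capacity : {physical_gb} GB")
    print(f"Currently written : {used} GB")
    print(f"Promised (maxima) : {promised} GB "
          f"({promised / physical_gb:.1f}x overcommitted)")
    if used > physical_gb * 0.8:
        print("WARNING: over 80% of physical space used -- time to move a VM "
              "to another datastore or add capacity before the balloons pop.")

# Five 50GB thin disks on a 120GB datastore: fine as long as only two of
# them ever fill up (2 x 50 = 100GB < 120GB), so watch the usage closely.
datastore_report(120, [(10, 50), (12, 50), (20, 50), (8, 50), (15, 50)])
```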

The good news is that most modern hypervisors have ways to move either the storage or the entire VM with minimal downtime in these scenarios. Some can even do it automatically based on the overall load of the storage attached to each VM host.

Thin provisioning can help you avoid wasting disk, and can be a great part of an overall virtualization strategy for most organizations. Just keep in mind that you have to watch thin provisioned systems a bit more carefully than their thick provisioned brothers and sisters, and you’ll master the use of disk space in no time flat.

Photo Credit: rogerss1

Demystifying VMware’s desktop options

When it comes to running Virtual Machines (or creating, editing and managing them) on your desktop, there are several tools you can use. Some are free, others are paid-for software packages, and since a lot of folks use VMware for their server environments, they’re looking at VMware for their desktop virtualization as well.

VMware, for their part, has done quite a lot to create tools that let you do everything from simply running a pre-configured VM on your desktop to full create/edit/manage workflows. In some cases you can even install ESX on your desktop hardware, but that is cumbersome due to hardware requirements, and is overkill for most desktop VM projects.

So, you’ve decided you want a desktop VM suite that can give you all the tools you need, you navigate to VMware’s website, and you find they have more than one to choose from. Which is the right one for you?

VMware Player is designed for running VMs created by others, in a very limited capacity. Generally, it is used for demonstrating or trying out other technologies within a VM, and not for VM projects you’re managing yourself. I say this due to a few restrictions in the VMware Player FAQ:

– Non-commercial use only. This means that without proper authorization from VMware, you can’t use Player for any commercial use, so no using it to run business applications at work.

– No multiple snapshots, cloning, or other critical tools. Most of us want the ability to snap a VM back to a previous state or to quickly clone a VM for testing something new.

– No Teams or End-Point Security. Again, only critical if you’re planning on using the tool in a commercial environment, which you’re not going to be doing anyway due to the licensing restrictions.

So now that the free option is out of the way, which tools *should* you use for your desktop? That mostly depends on what OS you are running as your host machine:

Windows and Linux can use VMware Workstation.

OS X uses VMware Fusion.

Both of these products support running multiple VMs in groups, snapshotting, cloning, and import/export functions. VMware Fusion also has direct tie-ins to OS X that allow Windows apps to appear as if they’re part of the Mac desktop, which is handy for those of us on Apple’s platforms.

All three tools support a wide variety of guest OSes, including Windows, various distributions of Linux, Chromium, and (in limited circumstances) OS X.

And that’s actually it! VMware has more desktop products (like View and ACE), but these are designed for Virtual Desktop Infrastructure, not for creating and running VMs on a fully-fledged workstation or laptop with its own OS installed.

So, to sum up:

Non-commercial, light VM use: VMware Player

Windows and Linux full-featured VM platform: VMware Workstation

Mac OS X (host) specific VM Platform: VMware Fusion

Have fun virtualizing on your desktops!

Photo Credit: SteveGriff.com

Do you know where your VMs are?

Virtualization of resources brings some interesting issues to the table – not the least of which is where your compute resources are physically located at any given moment of the day.

The point of virtualization is that the systems you use are no longer tied to a specific piece of physical hardware; things can move quickly and without notice. For example, a resource located physically next door to you today could be moved via Storage vMotion to a server across the country tomorrow. As long as the networking team makes all the appropriate routing changes, you’d never know.

There are lots of potential issues to consider, but here are three:

1 – If your servers are not local to you, then the staff responsible for managing those resources at the current time may also not be local. This means that you’ll have to coordinate across time zones to perform maintenance and other tasks.

2 – Flipping resources to another datacenter may mean you suddenly lose physical access to your systems. The good news is that you can usually flip the resources back if something goes physically wrong at the other location and you don’t have anyone there at the time to plug the wire back in.

3 – Especially for international companies, technologies that cannot be exported could accidentally end up on virtual systems housed in a country they may not legally be exported to. If you deal with encrypted data-sets, this could become a very serious problem.

When you discuss cloud, the situation gets even more confusing, as you may literally not know what physical location your systems reside in at any given time. SLAs with the cloud provider become absolutely vital, and must be reviewed regularly.

Separating the compute power from physical hardware is – overall – a good thing, but for as many problems as virtualization solves, we do have to remember that there are new problems to consider. Geography is one of those problems.

Dust off your maps…

Photo Credit: Norman B. Leventhal Map Center at the BPL

Is your cloud data safe?

I’ve had it.

Today, I went to search for some cloud-enabled task management software. My needs were simple: It had to be able to run on OS X, and it had to be able to sync with iDevices that weren’t on the same network as the Mac. There are lots of tools out there that can do this.

Then I read the fine print.

Either they sync via Bonjour – and therefore only work if you’re in the room with your Mac – or they use a cloud provider to host the data being synced. Sounds reasonable, right?

Not really.

Only one tool I found allowed for non-Bonjour sync and protected my data from being stolen at the Cloud.

Here’s what happens. When you’re doing a non-Bonjour sync, you need to send the data from your desktop to a cloud provider (typically the vendor’s own servers somewhere out on the Internet). That’s all good, and all of the vendors I looked at used HTTPS (SSL) connections to get the data to and from the servers. The problem was that the data on the server was not encrypted.

That’s right: vendors are making a HUGE deal of encrypting the data in flight, but then storing the data in plain text on their servers. Granted, they have good physical and at least good-looking digital security, but that hasn’t stopped thieves from stealing data like credit card info from similarly shielded servers in the past. Data thieves find ways around physical and digital security all too easily, and a good, encrypted data format is often the only thing that stands between a vendor and a total PR nightmare.

Before I get flamed to death in the comments section, I also realize that encryption can be broken if the thieves are dedicated enough to getting the job done. But that’s no excuse to not even TRY to keep them from reading the data if they get in.

When I went to find a syncing note-taking application, I found the same thing. The leading vendors store the note data in plain text on their servers, easily accessible to anyone who gets past their firewall. The claim is that they cannot encrypt it or else searching wouldn’t be as in-depth as it is now – but again, not offering it at all isn’t acceptable. I – and many other users – don’t use the web interfaces for these things except in dire emergencies. The whole point is that these solutions sync with desktops and smartphones, which can index locally, so web-based searching isn’t the biggest thing we’re looking for anyway. We’d gladly exchange a limited amount of functionality we barely use for better security overall.

Platform as a Service vendors need to wise up and start storing data in an encrypted format. I realize this means that some things like universal server-side search might suffer, but that’s better than having a data thief get their hands on everything as soon as they make it past the security by guessing some server tech’s woefully easy password.

These vendors are sitting on a time-bomb. Sooner or later some high-profile target will use their service. Thieves and hackers will go after that unencrypted data and take everyone else’s they get their hands on in the process.

So, take a few minutes and check that your PaaS vendor is keeping your data safe in the cloud. You might just be surprised to learn that their idea of “data protection” is encrypting the transmission method while leaving the lock off the data sitting on their servers. Telling me you’ve mined the road doesn’t help when the thieves simply go around it and stroll through a front door made of tissue paper to steal everything inside.
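
For contrast, here’s roughly what encryption at rest looks like – a toy Python sketch using the third-party cryptography library’s Fernet recipe, not any of these vendors’ actual code – where the server only ever holds ciphertext and the key stays with you:

```python
from cryptography.fernet import Fernet  # pip install cryptography

# The key never leaves your devices; whoever holds it can decrypt.
key = Fernet.generate_key()
cipher = Fernet(key)

note = "Meet the auditors at 3pm -- bring the Q2 numbers".encode()

# HTTPS protects this blob in flight; encrypting it first protects it
# while it sits on the vendor's servers.
stored_blob = cipher.encrypt(note)
print(stored_blob)                          # unreadable without the key

# Back on one of your own devices, with your own key:
print(cipher.decrypt(stored_blob).decode())
```

Yes, server-side search takes the hit with a scheme like this – that’s exactly the trade-off the vendors cite – but as I said above, most of us would happily accept it.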

By the way, the tools I found were:

For note taking, Notational Velocity on the Mac and Notesy on the iDevices (with thanks to @BMKatz on Twitter) fit my needs. These tools sync via Dropbox. While not incredibly well known for data security, Dropbox does at least attempt to keep data safe on their servers. If they manage not to have any more “oops, we forgot to turn on password validation for a few hours” moments, they’re going to be doing just fine.

For task management, I use ToDo with Dropbox syncing. It is available on multiple platforms and does a great job of showing what tasks I need to do now, and later.

Both sets of tools store local copies of the data, so if I’m not connected to the net for some reason I can still work. I can also search quickly and easily because they index the data locally too.

Stay safe out there.

Photo Credit: dylancantwell

Linux is coming to Azure

Well, Microsoft has been busy while we were all enjoying the holidays!

For those who aren’t in the know about Windows Azure, that’s the name that Microsoft has given to its nascent Cloud platform. Right now, the only publicly available components are SQL Azure and Azure Storage, which host SQL databases and cloud-based data storage, respectively.

Over the last couple of weeks, however, Redmond has announced that the upcoming Azure VM Role will support many other applications that can run in a Windows 2008 R2 Virtual Machine – which was expected – and also Linux Virtual Machines. This last bit was quite unexpected to many, but a welcome holiday gift from Microsoft.

Mary Jo Foley broke the news, and has a great write-up of the potential Azure VM structures, in her article from January 2nd.

Azure is going head to head with major cloud service providers like Amazon (AWS, EC2, etc.) and Rackspace, so offering Linux capabilities is a welcome move. Without Linux support, Azure risked becoming a niche platform that would only be useful for basic Windows operations and Microsoft SQL databases.

Azure VM will be based on the Windows Hyper-V technology platform, extending that platform into the cloud. Today, Hyper-V and Hyper-V Server are slowly gaining ground in the corporate datacenter, but have not fared well against major players like VMware. Since most cloud rollouts will be net-new implementations, Microsoft has a much better chance of becoming a large fish in a small pond by rolling out a solid Infrastructure as a Service (IaaS) platform with the Azure VM initiative, joining the Application as a Service and Database as a Service platforms already in Azure.

Now, there’s no official release date for the Azure VM Role, but it is in beta as I write this, so it does look like it will be launching at some point this year. How much of an impact Microsoft makes in the Cloud world is still to be seen. But, with the addition of multiple OS support, Azure just took one giant leap toward becoming a major player in the cloud space.