Itemized Inefficiencies: Turn Your Public Cloud Bill into Action Items

An interesting benefit—and challenge—of moving to the cloud is the way it exposes longstanding inefficiencies that went unnoticed through years of on-premises IT management. We celebrate the flexibility of the cloud, and the speed and scalability, but one underappreciated advantage of bringing public cloud infrastructure to your organization is the transparency. Once you get over the shock, at least.

The transparency value is this: Every cloud resource you use will appear, itemized, on your bill. You’ll see a disheartening amount of inefficiency that went unquantified in the purely on-premises days. You’ll realize that just moving to the cloud, despite the promise of reduced cost and complexity, did not magically remove those inefficiencies. That requires focused attention.

But the upside is pretty good, too: Apply that focused attention and you can eliminate costs you couldn’t even measure on-premises.

In a purely on-premises world, there usually isn’t as much oversight of resource usage. Someone requests a new database, you give them a new database. You start with a half-dozen databases, then you need one for quality assurance, and then a new dev instance or two. And so on, across years. Then you move to a public cloud service, and Amazon or Google or Microsoft starts sending you an itemized bill. Suddenly you see that you’re paying to maintain 65 databases. And you say, “Why do I even have 65 databases? How did that happen?”

In the on-premises world, your costs are fixed and don’t increase significantly with every server, virtual server, or database you might add. In the cloud, each one is a line item. This might be bad news for whoever requests that 66th database, but it’s good news in terms of identifying and eliminating waste.
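The shift from fixed costs to per-resource line items is easy to see in code. The sketch below is a minimal, hypothetical illustration: it aggregates an itemized billing export by service to answer exactly the “why do I have 65 databases?” question. The CSV columns (`service`, `resource_id`, `monthly_cost`) are assumptions for illustration; real exports from AWS, Google, or Microsoft use provider-specific schemas.

```python
import csv
import io
from collections import defaultdict

# Hypothetical itemized billing export -- real column names vary by provider.
billing_csv = """service,resource_id,monthly_cost
database,db-001,200
database,db-002,200
compute,vm-101,350
database,db-003,180
"""

def summarize(csv_text):
    """Group billing line items by service: resource count and total cost."""
    totals = defaultdict(lambda: {"count": 0, "cost": 0.0})
    for row in csv.DictReader(io.StringIO(csv_text)):
        entry = totals[row["service"]]
        entry["count"] += 1
        entry["cost"] += float(row["monthly_cost"])
    return dict(totals)

summary = summarize(billing_csv)
print(summary["database"])  # {'count': 3, 'cost': 580.0}
```

Run against a full export, a summary like this is what turns a shocking bill into a prioritized list of questions.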

Insight drives oversight

The solution is operational governance. You need a single point of control, or an approval process, for consumption of resources. Someone needs to be able to see each request in the context of the larger picture. Someone who’ll ask:

  • “Is it worth another $200 a month for this database?”
  • “If we do need it now, is there a point where we’ll be able to retire it?”
  • “How is the data here integrated with other data, or other instances of the same data?”
  • “Will it be trusted, reliable data, or will it quickly go out of sync?”

In short, someone needs to be accountable for the costs, the management of the infrastructure, and the governance of the data.
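The governance questions above can be made concrete as an approval gate. This is a minimal sketch, not a prescribed process: the request fields and checks are assumptions, stand-ins for whatever your organization actually tracks (budget, retirement plan, business justification).

```python
from dataclasses import dataclass
from typing import List, Optional

# Hypothetical request record -- the fields are illustrative assumptions.
@dataclass
class ResourceRequest:
    name: str
    monthly_cost: float
    retirement_date: Optional[str]  # None means open-ended, a governance red flag
    justification: str

def review(request: ResourceRequest, monthly_budget_remaining: float) -> List[str]:
    """Return the governance questions this request fails to answer."""
    issues = []
    if request.monthly_cost > monthly_budget_remaining:
        issues.append("exceeds remaining monthly budget")
    if request.retirement_date is None:
        issues.append("no planned retirement date")
    if not request.justification.strip():
        issues.append("missing business justification")
    return issues

# That hypothetical 66th database: $200/month, no retirement plan.
req = ResourceRequest("db-066", 200.0, None, "new QA environment")
print(review(req, 150.0))
# ['exceeds remaining monthly budget', 'no planned retirement date']
```

The point is not the specific checks but that every request passes through one accountable chokepoint before it becomes a line item.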

Another deficit that cloud transparency can reveal is the cost of inefficient code. If you have, say, an application or a mapping that’s written inefficiently, the code will use more space or processing power, and across a large enterprise these inefficiencies can really add up, both in direct costs and down the road as you have to manage this inelegant collection of clumsily written software. My team is often called in to help customers with performance issues, and the most common cause of the problem isn’t the software per se, but rather the way it has been implemented. Seeing these design inefficiencies itemized, you can assess that app or mapping against best practices: “Do I need 12 nodes to make all this stuff run? Or could I write it to run on only two?”
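The 12-nodes-versus-2 question is just arithmetic once the bill itemizes it. The per-node price below is an illustrative assumption, not a real quote:

```python
# Hypothetical per-node price -- the numbers are illustrative only.
NODE_COST_PER_MONTH = 150.0

def monthly_cost(nodes: int, cost_per_node: float = NODE_COST_PER_MONTH) -> float:
    """Monthly spend for a workload running on the given number of nodes."""
    return nodes * cost_per_node

inefficient = monthly_cost(12)  # as implemented today
optimized = monthly_cost(2)     # after a best-practices rewrite
print(inefficient - optimized)  # 1500.0 saved per month, per workload
```

Multiply that difference across dozens of workloads and twelve months, and refactoring one clumsy mapping starts to look like a budget line of its own.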

Managing your cloud

In IT, we’re always most excited to turn things on, to bring new capabilities or bandwidth to our business. No one thrills at the chance to lead a decommissioning project—but it’s an essential part of the move to the cloud. And it’s never as simple as, “Move the data to the cloud, then turn off the on-prem version.” To get off on-premises systems, you need a careful program to decommission them. Any given database will have dependencies you’re likely to miss—the little stragglers who call up in a panic when you shut down what turns out to have been their key data resource.

It’s complex, but important, to map out who’s using which data, and for what. If you’ve got a good enterprise data catalog, and you know from a master data management standpoint who’s consuming what from each of your systems, you can migrate all users appropriately before shutting things down. Without that insight, you just start unplugging things and waiting for the phone to ring.
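The catalog-driven approach described above can be sketched as a simple consumer map: a legacy system is safe to shut down only when every consumer has been migrated off it. The catalog structure and names here are hypothetical; a real enterprise data catalog or MDM tool would supply this lineage.

```python
# Hypothetical data-catalog lineage: system -> set of consuming teams/apps.
catalog = {
    "legacy-crm-db": {"billing-app", "marketing-dashboard"},
    "old-hr-db": set(),
}

def safe_to_decommission(system: str, catalog: dict) -> bool:
    """A system is safe to shut down only when no consumers remain."""
    return len(catalog.get(system, set())) == 0

def migrate_consumer(system: str, consumer: str, catalog: dict) -> None:
    """Record that a consumer has been moved off a legacy system."""
    catalog[system].discard(consumer)

print(safe_to_decommission("legacy-crm-db", catalog))  # False: two consumers left
migrate_consumer("legacy-crm-db", "billing-app", catalog)
migrate_consumer("legacy-crm-db", "marketing-dashboard", catalog)
print(safe_to_decommission("legacy-crm-db", catalog))  # True: safe to unplug
```

The alternative, as the text says, is unplugging things and waiting for the phone to ring.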

We live in an explosion of data, and just about every enterprise is trying to capture and collate more data than ever to create new opportunities. I’ve had customers tell me, “We wanted to work with big data, and we deployed a big data solution in the cloud—but we didn’t realize it would cost this much, or that we’d have this much data.” The cloud’s revelatory transparency is an essential benefit.

That’s a point at which understanding how to maximize efficiency makes a difference. It’s one of the key reasons we talk about avoiding recreating your on-premises legacy in the cloud. You shouldn’t just “move” to the cloud—you should change the way you approach data and infrastructure to create something new and better, rather than maintaining the same legacy, just with a disturbingly itemized bill.

For more on making the most of a hybrid infrastructure, read TDWI’s checklist report, “Data Management Best Practices for Cloud and Hybrid Architectures.” Or reach out to my Professional Services team at