The recent move by AWS from a per-hour to a per-second billing model for its Linux-based EC2 instances was only to be expected, reflecting the increasing demand for scalability in computing solutions. But at the end of the day, will it really save you money? Or could this increased pricing flexibility breed a too-relaxed attitude among those responsible for procuring this expensive resource?
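To see why the answer depends on your workload, here is a minimal sketch comparing the two billing models. The $0.10/hour rate is hypothetical, not a real AWS price; per-second billing for Linux instances does carry a one-minute minimum charge.

```python
import math

def hourly_cost(runtime_seconds: float, rate_per_hour: float) -> float:
    """Per-hour billing: every started hour is charged in full."""
    return math.ceil(runtime_seconds / 3600) * rate_per_hour

def per_second_cost(runtime_seconds: float, rate_per_hour: float,
                    minimum_seconds: int = 60) -> float:
    """Per-second billing, with a one-minute minimum charge."""
    billed_seconds = max(runtime_seconds, minimum_seconds)
    return billed_seconds * rate_per_hour / 3600

RATE = 0.10  # hypothetical on-demand rate, in $/hour
for minutes in (10, 61, 120):
    secs = minutes * 60
    print(f"{minutes:>3} min job: hourly ${hourly_cost(secs, RATE):.4f}, "
          f"per-second ${per_second_cost(secs, RATE):.4f}")
```

For a 10-minute job the per-second model charges a sixth of the hourly price, while a job running 61 minutes pays for two full hours under hourly billing but only 61 minutes under per-second billing. The savings only materialize for short-lived or bursty workloads; a steadily running instance costs the same either way.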
Part one of a two-part interview series with tinkering whizz-kid and OneKloud CTO Xavier.
What’s up, Xavier? Can you introduce yourself?
I compiled my first Linux kernel at eight years old. As a kid I used to disassemble just about everything I could get my hands on. At ten I modified the electronics of my first walkie-talkies to try to tune into the police frequency. Then I started coding: the first thing I did was clone my mom's cell-phone SIM card so I could use her phone without her knowing. My second project was to clone the television's satellite card so I could watch cable on my computer… I've always been into tinkering and fiddling. My studies were math and electronics oriented in high school, then I went on to an engineering school and earned a Master's degree from the Dominican University of California in San Rafael. Today my whole house is controlled by computers, from the front door to the lighting and everything in between. I'm also a massive fan of the Raspberry Pi and Arduino.
Does cloud infrastructure live up to the hype?
Finding the optimal amount of resources needed to get the job done, and obtaining them at the most efficient cost, is a tradeoff between SLA, availability, and price. These eternal concerns of capacity planning experts are more relevant than ever, even as many of a company's "resources" dematerialize and computing power migrates to the cloud. The cloud environment's main attributes, flexibility and scalability, go hand in hand with unpredictability. Capacity planning faces a range of new challenges as its scope shifts from physical hardware to IaaS.
Just because cloud infrastructure makes it possible for businesses to adapt and scale their IT needs with more flexibility than ever before, doesn’t make it easy. While cloud migration is quickly becoming an indisputable step into the future, many companies are finding the transition comes with an unforeseen disadvantage: a debilitating lack of control and foresight.
“Capacity planning is the process of determining the production capacity needed by an organization to meet changing demands for its products. In the context of capacity planning, design capacity is the maximum amount of work that an organization is capable of completing in a given period,” says Wikipedia. So are these definitions and task frameworks relevant if we are talking about virtual IT? Absolutely.
It's funny when you think about it: cloud infrastructure gives us the capacity to power and host the most cutting-edge computing technology, allowing the latest scientific and technological innovations to go above and beyond. Yet the way AWS is managed, and how it interfaces with developers and companies, is really pretty dated.
The little accountability paradox that’s costing your company big bucks...
Part One in a two-part series on why over thirty percent of money spent on AWS is wasted; tune in next week for Part Two to see what can be done about it.