I'll find some time to blow the cobwebs off here, but never forget there's always the podcast, which is now also on Google Play Music; I need to start adding the links to that.
Now, where's the main circuit breaker?
Rusty on the blogging, but let's give it a lash.
When it comes to M&A in the storage business, or the technology business in general, it's the deals which involve an acquirer buying a 'beautiful flower' that generate the headlines.
A beautiful flower in tech is a company which has established a brand-name product, has a solid, growing customer base and has probably been through an IPO. Beautiful flowers tend to be ornate and well tended. They're expensive because a lot of the hard work is done. The tough decisions were made and implemented. The development pain was taken and moved beyond. These days beautiful flowers tend to be very expensive, as the premium on such companies is inflated for a variety of reasons, even if the revenue numbers will never justify such a price.
When it comes to traded companies, someone always has an angle.
Before a company evolves into a beautiful flower it starts out as a healthy weed. Life has sprung forth in an inhospitable environment and the healthy weed, new and fresh but not much to look at right now, is thriving. Reaction time to change has to be very quick; being strangled by other healthy weeds is a daily risk. Funding could dry up at any second, you could miss something in your design which locks you out of sales to customers, or key team members could exit leaving critical work unfinished. Your business could even be a mirage, where you think you're seeing customers but there are none. Overall, life as a healthy weed is much more exciting than life as a beautiful flower, because you have to prove in every single deal with a customer that you're worthy of love and attention.
There are no dead beautiful flowers, just beautiful flowers whose time has passed, but tech history is littered with dead unhealthy weeds.
When the healthy weed gets customers' love and attention in volume, when the danger of extinction by the next sunrise fades away, it tends to evolve into a beautiful flower.
Without speaking for EMC, it's a matter of historical record that EMC has been a beautiful flower buyer: pay top dollar for a recognisable name and a product which EMC's legions of salespeople can sell in the marketplace. But just as the healthy weed has to change, so does EMC. It's heartening to me to see EMC spend more time looking for healthy weeds to shower love and attention on rather than scrambling around looking to add to its greenhouse of established beautiful flowers. You get a healthy weed for a reasonable price, there's still time to alter its evolutionary development to suit your exact needs (beautiful flowers tend to have reached their pinnacle by the time they're plucked), and if you do the job correctly, you'll have grown a new beautiful flower to a design of your own choosing.
In the end, if this teaches you anything it’s that gardening analogies can be applied to anything.
I'll admit to being somewhat amused by the OpenStack Summit planning call earlier today (at time of writing), because it's possibly the smallest EMC event presence I've seen in a while.
And that’s a good thing.
Raise the Jolly Roger! We brave few are sailing out from under the long corporate shadow and doing this guerrilla style.
EMC has been making contributions to OpenStack since 2013 and is a core contributor to Cinder and Manila, but Paris will reflect an escalation of community involvement over time.
And hey, if I can use it as a lever to get deeper involvement with Free and open source software from more parts of the company, that’s a win too.
You can find the list of events, sessions and booth details here. I’m looking forward to joining my industry colleagues on Monday morning for the OpenStack storage panel and we’ll see if we can have some sizzle with the steak when it comes to the content.
Feel free to drop in if you’re around.
An article over at The Register paints storage folks as the Mainframe Admins of the cloudy future.
At the end of this decade, when some cloud company says their glorified DAS can do xyz, you'll be able to pull your cardigan tighter and tell people the story of how that was invented first and done better by Rameses The Great, Pharaoh of Egypt, and didn't you know him well, having once seen him outside waiting for a chariot, back when you used to be able to go to a trade show and only not know 95% of the people there.
Then, having absorbed the looks given to you by younger people (five years younger) who just don't know how much better things were in the old days, you'll go back to the mystical art of carving and masking LUNs, or creating file systems and enabling protocols.
But what I’ve taken from all this is we’re now all criminally underpaid.
Cough up, HR! No more college graduates to do it cheaper…
Two posts in a day. It’s like the fires of blogging have been rekindled.
Some Twittering going on about protecting against insider threats in light of what happened with Code Spaces, but that was an external threat made manifest, so I don't see the technical equivalence.
I do however see the dread risk fear that such an event has on people.
Evolution has programmed us to react to dread risk fears to somewhat irrational levels, as we see them all as extinction threats. You might fear the aircraft you're flying in going down, but the most dangerous part of your trip was the car ride to the airport; many more people are killed on the roads in accidents every year than die flying on a plane.
An entire company reduced to rubble by an extortionist. What if it happened to us!?! People aren’t too worried that users and operators make a litany of errors throughout the year, all of which add up when it comes to data loss and data leakage, but it’s terrifying when the catastrophe happens all at once.
Enterprise IT tends to drive with the handbrake on when it comes to the adoption and provisioning of new services, feeding the 'IT is broken as it takes you days to do anything, Cloud is faster and better' chorus. If you want to get any work done at all, the last thing you should be doing is throwing more obstacles in the way of insiders getting their work done.
The thinking that things should be slowed down even more to protect against the phantom insider (someone willing to lose their career and go to prison to carry out such a malicious act) will do nothing but drive even more slipshod processes out onto the public cloud, where the only experience you need to run an IT operation is entering your credit card details.
So lock your doors and windows, and put out the cat. Secure your data but be sensible around getting work done, then take the time to hire good solid people who you trust to protect it.
And keep the dread risk fear in check.
‘You think you can run a data centre better than Amazon, Google and Microsoft can?’
The knockout-punch argument from public cloud pushers everywhere.
Depending on who you are the answer may be yes or no, but what is absolutely true is that you value your business data much more than public cloud providers value your business data.
So what’s with the shirking of responsibility we’ve started seeing? The shortcuts that haven’t been earned?
The hijacking and destruction of Code Spaces tells me that we have a new class of born-in-the-cloud companies who think they don't need data protection or security professionals, because they buy into the ridiculous fiction that public cloud providers can do it better than they can. That's something the providers don't actually say themselves; they do expect you to put things such as multi-factor authentication, RBAC and a DR plan in place to make sure your business doesn't crater.
But some companies renting infrastructure don't, because instead of putting the work in on the boring, money-draining first principles, they focus on the exciting part of writing and deploying money-making apps. 'It's all built in, we'll just use the services provided to us and we'll be fine.'
This is the wholesale outsourcing of thinking and culpability.
An aside: while their backup strategy appears to have been focused on protecting against logical corruption, with offsite copies for additional protection, a single two-factor authenticator configured for use by an authorised administrator could have secured Code Spaces' top-level administration account and prevented the deletion of their system elements. But could have/should have/would have doesn't take us away from didn't.
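To make that concrete, here's a sketch of the kind of AWS IAM policy that enforces the point: deny every action unless the caller authenticated with MFA. `aws:MultiFactorAuthPresent` is a real IAM condition key and the structure follows the standard policy grammar, but the statement name and the idea of generating it from Python are purely illustrative.

```python
import json

def mfa_required_policy():
    # Deny all actions on all resources when the request was not made
    # with MFA. BoolIfExists covers callers (e.g. plain access keys)
    # where the MFA key is absent entirely.
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "DenyAllWithoutMFA",
                "Effect": "Deny",
                "Action": "*",
                "Resource": "*",
                "Condition": {
                    "BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}
                },
            }
        ],
    }

print(json.dumps(mfa_required_policy(), indent=2))
```

Attached to the administrative account, a policy like this means a stolen password alone can't delete anything.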
Backup is easy; it's frictionless on public cloud platforms due to the homogeneity of the components involved and how basic the backup options are. Data protection, regardless of where you're doing it, is hard. It's hard because it requires thinking, it requires work which doesn't make you money while you're doing it, and it requires people to be culpable.
Keeping versioned replicas, guaranteeing their integrity and availability, securing them and securing the external perimeter around both the replicas and the primary data is hard.
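A minimal sketch of the versioned-replicas-plus-integrity part of that work; every name here is illustrative, not any product's API. Each run copies the source to a timestamped replica and records its SHA-256 digest in a sidecar file, and `verify()` re-hashes every replica so silent corruption is caught before the day you need to restore.

```python
import hashlib
import shutil
import time
from pathlib import Path

def take_replica(source: Path, vault: Path) -> Path:
    """Copy source into the vault as a timestamped replica with a digest sidecar."""
    vault.mkdir(parents=True, exist_ok=True)
    replica = vault / f"{source.name}.{time.time_ns()}"
    shutil.copy2(source, replica)
    digest = hashlib.sha256(replica.read_bytes()).hexdigest()
    Path(str(replica) + ".sha256").write_text(digest)
    return replica

def verify(vault: Path) -> bool:
    """Re-hash every replica and compare against its recorded digest."""
    for sidecar in vault.glob("*.sha256"):
        replica = Path(str(sidecar)[: -len(".sha256")])
        if hashlib.sha256(replica.read_bytes()).hexdigest() != sidecar.read_text():
            return False
    return True
```

Even a toy like this shows where the real cost lives: not in the copy, but in the scheduled, audited verification nobody gets paid extra to run.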
It’s hard and it’s boring and it’s necessary and it’s right.
People and companies get hacked every day. Data gets stolen every day. We have a class of criminal using denial of service attacks and hijacking rented infrastructure from people every day. And every day you need to have professionals on your side who care about your data the way you do.
You haven't earned any shortcuts; your shortcut is a cyber criminal's opportunity.
Code Spaces isn't an outlier; cyber criminals look for new markets just like any legitimate business. Expect more such extortion attempts on public cloud users in the future.
Since EMC formed the Data Protection and Availability Division by combining the Data Mobility Business Unit and Backup Recovery Systems, I’ve been having a conversation over and over again.
You could say I’ve been having it continually.
So, let’s spend some time, write it down and get pedantic.
Continual means 'repeatedly but not constantly,' while continuous means 'a sequence without interruption.'
The difference shoots past most people without mattering, but to us it does matter, and if you can keep it in mind, congratulations: you're now infinitely smarter than anyone who has ever used the term 'Near CDP,' which doesn't make any sense whether the C stands for continuous or continual, so long as the C involved immediately follows the word 'Near.'
That being said, the conversation I've been having is this: if you make something continuously available (RTO = 0), does that make it continuously protected?
The answer is, of course, no.
As we watch the cloudy application vendors re-writing their apps to span datacentres which may or may not be available on Christmas Eve, thereby preventing thousands of 'Buffy the Vampire Slayer' binge-viewing sessions, it's worth noting that with existing application clustering software and the distributed cache coherence technology of VPLEX, enterprises have been able to do the same thing for a number of years.
Active/active configurations spanning datacentres with VMware HA/FT, Oracle RAC, MS Cluster or whatnot provide continuous availability with what you have today.
But just because you have continuous availability doesn't mean you don't need data protection. Same as it ever was, versioned replicas of that data are required to ensure you can meet the variety of recovery points which come with user or system errors.
That data protection might be continuous, as with RecoverPoint CDP, or it might be continual, as with backup applications such as NetWorker and Avamar, or snapshots, or NDMP backups and so on.
Continuous availability requires data protection regardless of whether it's in your datacentre, your own cloud or someone else's cloud. The choice of continuous or continual data protection, or a combination thereof, is entirely up to you.
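A toy sketch (all names invented) of why RTO = 0 isn't protection: a synchronous mirror faithfully replicates a mistaken delete the instant it happens, while a point-in-time snapshot gives you a recovery point to roll back to.

```python
class MirroredVolume:
    """Toy volume: a synchronous mirror plus optional point-in-time snapshots."""

    def __init__(self):
        self.primary = {}
        self.mirror = {}      # continuous availability: always matches primary
        self.snapshots = []   # continual protection: discrete recovery points

    def write(self, key, value):
        self.primary[key] = value
        self.mirror[key] = value      # replication faithfully copies the write...

    def delete(self, key):
        self.primary.pop(key, None)
        self.mirror.pop(key, None)    # ...and just as faithfully copies the error

    def snapshot(self):
        self.snapshots.append(dict(self.primary))

    def restore(self, index):
        self.primary = dict(self.snapshots[index])

vol = MirroredVolume()
vol.write("orders.db", "v1")
vol.snapshot()                        # a versioned recovery point
vol.delete("orders.db")               # user error replicates instantly
assert "orders.db" not in vol.mirror  # continuous availability didn't save us
vol.restore(0)                        # the versioned replica did
assert vol.primary["orders.db"] == "v1"
```

The mirror never went down, and it never held anything worth recovering either.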
And now that it's written down, I'm going to start asking people if they want uninterrupted data protection or repeated data protection.
Not a C in sight.
One of the axioms we're supposed to accept in the absence of any supporting data is that public cloud is cheaper than doing it yourself, always and forever.
This is a myth which is passed off like it’s a law akin to the conservation of energy.
It’s more accurate to say that one type of cloud, be it public or private, can be cheaper than the other in specific cases.
I can understand why startups jump on the public cloud; were I one, I would too.
The quickest way to burn precious VC cash is to write a check to a server/storage/networking vendor when instead I could rent all of that by the drop and hire a few more coders to actually build something which will start generating cash for me.
Then there's the added upside that new services providing extended functionality keep being made available, so the longer I stay the more new infrastructure options I get.
But as this Gigaom article discussing the topic with people who have been there and back again shows, there could come a time where it no longer makes sense for you to carry on where you started.
This also applies to workloads you might always have been running internally; one could hit a threshold where it makes economic sense to eject it out into the public cloud from now until eternity.
If you work in IT it is your job to always get the most out of every dollar spent regardless of where you’re going to spend it. That means it’s up to you to get into the weeds on the numbers. Not somebody else or the CFO, you.
Are there massive cost savings to be made in shared architectures operated at scale? Absolutely. But that goes for the public and the private cloud. And let's not forget that in the private cloud those savings are passed directly on to you and aren't skimmed off the top as healthy provider margins.
The kids of the public cloud folks aren't going to school without shoes on their feet. If you think you're getting anything cheap from a public cloud, you can be damn sure it costs them a hell of a lot less than they're selling it to you for, because if it didn't they'd be dead soon. (Nirvanix)
But it doesn’t matter where you’re running your workload, always check the meter and if the meter is running consistently on the high side you can probably do it cheaper somewhere else.
If there’s a universal law in any of this that’s probably it.
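To put numbers on 'checking the meter', here's a back-of-the-envelope sketch (the figures and function names are entirely made up) comparing the cumulative rent for a workload against buying and running the kit yourself:

```python
def monthly_owned_cost(capex, lifetime_months, monthly_opex):
    """Straight-line amortisation of the hardware plus power/people/space."""
    return capex / lifetime_months + monthly_opex

def breakeven_months(capex, lifetime_months, monthly_opex, monthly_rent):
    """Months of rent after which owning becomes the cheaper option,
    or None if renting stays cheaper over the hardware's lifetime."""
    if monthly_rent <= monthly_owned_cost(capex, lifetime_months, monthly_opex):
        return None  # the meter never catches up with owning
    for month in range(1, lifetime_months + 1):
        if month * monthly_rent >= capex + month * monthly_opex:
            return month
    return None

# Illustrative only: $60k of kit amortised over 36 months with $1k/month
# to run it, versus $3.5k/month of rent for the equivalent capacity.
print(breakeven_months(60_000, 36, 1_000, 3_500))  # -> 24
```

The point isn't the made-up numbers; it's that the comparison is a five-line calculation anyone in IT can and should run, for every workload, in both directions.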
The NetWorker customer independent email list run by Stan Horowitz over at temple.edu is expanding to cover all of EMC Data Protection technologies.
NetWorker, Avamar, Data Domain, Snap & Replicate, RecoverPoint, etc.
I've lurked on the NetWorker list for a number of years; it's one of a number of inputs I examine to see what the user base is thinking about day to day, and it's quite useful if you're an admin with your head down keeping things running.
If you’re interested you can subscribe here.