On Joyent and “Cloud Computing”, Part 1 of Many

With this post, I’m starting a series where we’re going to be much more explicit about what we’re thinking, what we’re doing, how we’re doing it and where we’re going. I’m not interested in any of it being thought of as impersonal “marketing material”, so I hope you’ll allow the occasional use of “I” and the interweaving of my perspectives and perhaps a story or two. As the CTO and one of the founders whose life has been this company, I’m going to go ahead, be bold and impose.

In May of this year, we’ll have been in business for five years, and looking back at it, it’s amazing that we’re still here; looking back at it, it also makes complete sense that we’re still here. Our idea was pretty simple: let’s take what is normally BIG infrastructure (i.e. 64 GB of RAM, 16-CPU servers, 10 Gbps networking), virtualize the entire vertical stack and then go to people who normally buy some servers, some switches and some storage, and sell them complete architectures. All done and ready to go.

It is commonly said that businesses only exist out of necessity, and that the best ones take away or take care of “pain”. Makes sense: doctors only exist because there is disease.

It is a pain for most companies (“Enterprises”) to expertly own and operate infrastructure that can scale up or down depending on their business needs. Infrastructure is more a garden than a monument, but it is rarely treated as one.

It is difficult to be innovative in the absence of observing tens of thousands of different applications doing tens of thousands of different things.

It is easy to make the same mistakes that others have made.

That final point is an important one, by the way. I’ll tell you why in a slightly roundabout way.

I’m a scientist by training and have always been a consumer of “large compute” and “big storage”, not just a developer of them (I can write about why that is a good thing another time). While at UCLA as a younger man, I was lucky enough to take a graduate physiology course where one of the lecturers was Jared Diamond, and later on I went to an early reading of one of his books that was coming out, Guns, Germs, and Steel. What reached out and permanently imprinted itself in my mind from that book was the Anna Karenina principle. It comes from the Tolstoy quote, “Happy families are all alike; every unhappy family is unhappy in its own way,” and, simply put, it means that success is purely the absence of failure.

Success is purely the absence of failure.

Education and “products” are meant to save people and enterprises from mistakes: both the avoidance of “known” mistakes (education) and the survival of “unknown” mistakes (luck, willpower, hustle, et al.).

Internally I commonly say,

It’s fine to make mistakes as long as 1) no one has ever made that mistake before (if they have, you need to question your education or get some), 2) we’re not going to repeat this mistake (if we do, we need to question our processes, our metrics and how we communicate and educate), and 3) the mistake does not kill us.

This is all important because I believe the primary interest in “cloud computing” is not cost savings; it’s participation in a larger network of companies and people like you. It is the consumption of products that

  1. Are more accessible and easier to use (usually from a developer’s perspective)
  2. Are easier to operate

However, that’s not all of it. I can say with complete certainty that the main barrier to adoption is trust: trust specifically around privacy, security and integrity.

The successful future “clouds” have to be more accessible and easier to use and operate, and every single part of the infrastructure has to be addressable via software, capable of being introspected into and instrumented by software. That addressability means that one can write policies around access, performance, privacy, security and integrity. For example, most of our customers really don’t care about the details; they care about knowing what is capable of providing 99.99% of their end users a great experience 99.99% of the time. These concepts have to be baked in.
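
To make that concrete, here is a minimal sketch of what I mean by a policy being something software can evaluate against what an instrumented stack actually measures. It is an illustration only, not a Joyent API; every name in it is invented for this example.

```python
# Hypothetical sketch: a service policy expressed as plain data, checked
# against measurements a software-addressable stack could expose.
# None of these names are real Joyent (or anyone else's) APIs.
from dataclasses import dataclass
from typing import List


@dataclass
class ServicePolicy:
    availability_target: float  # e.g. 0.9999, "four nines"
    p99_latency_ms: float       # worst acceptable 99th-percentile latency
    encrypt_at_rest: bool       # a privacy/security requirement


@dataclass
class ObservedState:
    availability: float
    p99_latency_ms: float
    encrypt_at_rest: bool


def violations(policy: ServicePolicy, observed: ObservedState) -> List[str]:
    """Return human-readable policy violations; an empty list means compliant."""
    problems = []
    if observed.availability < policy.availability_target:
        problems.append(f"availability {observed.availability:.4%} is below target")
    if observed.p99_latency_ms > policy.p99_latency_ms:
        problems.append(f"p99 latency {observed.p99_latency_ms:.0f} ms is over budget")
    if policy.encrypt_at_rest and not observed.encrypt_at_rest:
        problems.append("data is not encrypted at rest")
    return problems


if __name__ == "__main__":
    policy = ServicePolicy(availability_target=0.9999, p99_latency_ms=250, encrypt_at_rest=True)
    observed = ObservedState(availability=0.9997, p99_latency_ms=180, encrypt_at_rest=True)
    for problem in violations(policy, observed):
        print("VIOLATION:", problem)
```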

These are some of the underlying ideas behind everything that we’re doing. At Joyent, we believe that a company or even a small development team should be able to

  1. Participate in a multi-tenant service
  2. Have their own instantiations of this service
  3. Install (and “buy”) the software to run on their own infrastructure
  4. Get software and APIs from Joyent that allow for the integration of all of these based on business desires and policies.

Sometimes people refer to this as the “public and private cloud”. Oooook.
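
As a purely hypothetical illustration of that fourth point, here is what a placement policy could look like once those deployment choices are all addressable from software. The classifications, names and the policy itself are made up for this sketch; they are not anything we ship today.

```python
# Hypothetical sketch only: decide where a workload's data should live,
# across the same service consumed three different ways. The names and
# the policy are invented for illustration.
from enum import Enum


class Deployment(Enum):
    MULTI_TENANT = "shared multi-tenant service"
    DEDICATED = "your own instantiation of the service"
    ON_PREMISES = "software running on your own infrastructure"


def place(data_classification: str, latency_sensitive: bool) -> Deployment:
    """Pick a deployment target from a simple, business-driven policy."""
    if data_classification == "regulated":
        # e.g. data that may not leave your own walls
        return Deployment.ON_PREMISES
    if data_classification == "confidential" or latency_sensitive:
        # isolation or locality without owning the hardware
        return Deployment.DEDICATED
    # everything else rides the shared service
    return Deployment.MULTI_TENANT


if __name__ == "__main__":
    print(place("regulated", latency_sensitive=False).value)
    print(place("public", latency_sensitive=True).value)
    print(place("public", latency_sensitive=False).value)
```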

I happen to believe we’re capable of doing this quite well compared to most of the large players out there. We own and operate infrastructure, and we’re making those APIs and software available (often as open source).

Amazon Web Services is a competent owner and operator and allows you to participate in a multi-tenant service like S3, but there are no API extensions for integrity, you cannot get your own S3 separate from other people, you cannot install “S3” in your own datacenter, and of course there is no software that allows an object to move amongst these choices based on your policies.

Question: “If you have a petabyte in S3, how do you know it’s all there without downloading it all?” I could answer that question if I had a petabyte on NetApp filers in-house.
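
About the best you can do today is keep your own records and trust the provider’s metadata. Here is a rough sketch of that workaround, assuming you recorded an MD5 for every object at upload time and that the URLs are readable (public or pre-signed); the manifest file name is made up, and S3’s ETag only equals the content MD5 for simple, non-multipart uploads. It is a workaround, not the integrity API I’m arguing for.

```python
# A sketch of the workaround, not an S3 feature: compare the MD5s we
# recorded at upload time against the ETag metadata returned by a HEAD
# request, so nothing gets re-downloaded. We are still trusting the
# provider's metadata rather than getting a real proof of possession.
import json

import requests


def audit_manifest(manifest_path: str):
    """Return the object URLs whose reported ETag no longer matches the
    MD5 recorded when they were uploaded."""
    with open(manifest_path) as f:
        # e.g. {"https://bucket.s3.amazonaws.com/some-object": "md5hexdigest", ...}
        manifest = json.load(f)

    mismatches = []
    for url, expected_md5 in manifest.items():
        response = requests.head(url, timeout=30)
        etag = response.headers.get("ETag", "").strip('"')
        if response.status_code != 200 or etag != expected_md5:
            mismatches.append(url)
    return mismatches


if __name__ == "__main__":
    # "s3_manifest.json" is a hypothetical file you maintain yourself
    for url in audit_manifest("s3_manifest.json"):
        print("possible integrity problem:", url)
```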

The Ciscos, Suns, HPs and IBMs of the world (I’ll toss in the AT&Ts too) want to sell you more co-lo (perhaps something best called co-habitation), more hardware, more software, less innovation and no ease of operation.

Larry Ellison says that this is all a marketing term. I think he’s wrong. We think he’s wrong, and that Joyent seems to be unique in focusing on making the above a reality.

Next week, I’ll be discussing “primitives” and “architectures”.

5 responses to “On Joyent and “Cloud Computing”, Part 1 of Many”

  1. When you talk about managing your infrastructure through software, does it mean that Joyent is heading toward a day when accelerators and storage can be provisioned and rebooted from a web panel or an API?

  2. @mauricio, yes we are.

  3. Thanks for taking this step, Jason. I look forward to reading your thoughts on where Joyent is headed.

  4. Thank you!

    I’m very impressed that you are being so transparent and security-focused. I would disagree that Amazon is “competent” in this case. My company is an Amazon customer, and I’ve done some research on their privacy and security:

    http://tweakandtune.blogspot.com/2009/03/it-cant-see-through-cloud.html

    Thank you for doing what the big companies will never ever do – maintaining accountability. This really means a lot, and I hope I see Joyent owning the AWS datacenters in a few years 🙂

  5. I have to say that, in the long run (3-5 years), I’m curious to see how smaller “cloud” players will survive, not being able to extract the same economies of scale as the Google/Amazon/Microsoft of this world. The issues of security and trust can be solved contractually or architecturally, though we’re not there yet. Once standard patterns to store and use data on someone else’s infrastructure have been figured out, pricing is likely to become an ever increasing differentiating factor, assuming an equal ability to maintain a decent level of service — decent, not perfect.
