The Narcotic Of Professional Services

In the technology world, selling new products is hard. Selling to enterprises is even harder. Small companies are (relatively) easy. They take a little bit of handholding to get your SaaS/software/hardware configured “just right” for them, but most of what they want pretty much fits into the offering anyways; it is “on the truck.”

As you expand up-market into larger customers, customization demands increase. They need:

  • Integration with their (unique) login system
  • Special compliance controls
  • Unique flows and processes
  • Added manual approval steps
  • etc.

Big companies have big processes and lots of existing customized tools, and expect you to work with them in exchange for the commitment (and big check that goes with it).

As your service or product company moves into those markets, you start to add or expand your “Professional Services” (i.e. consulting) arm. Initially, you use your ProServ arm just to assist onboarding new customers. You even are willing to lose some money on the cost of ProServ to get the big deal for product or SaaS recurring revenue.

All along, you keep in mind that consulting firms have much lower valuation multiples than product, let alone SaaS (committed recurring revenue/CRR), firms. The rule of thumb for SaaS usually is around 13%: you can let your consulting (one-time) revenue grow to <13% of your total revenue before your valuation starts to take a serious hit. This makes a lot of sense. As long as consulting isn’t too large a part of your revenue, it really is just on-boarding assistance. More than that, and you risk turning into a consulting firm.

You try to keep focused by remembering that the valuation multiple of revenue for a consulting firm usually is 1x annual revenue. A firm that sells $10MM in annual consulting revenue will be valued at ~$10MM. Some firms have special brand value that boosts it, but only by so much. You, aggressive and ambitious CEO, are looking for the nice 5-7x that “real firms” get.

Then, one day, you hit a rough patch in revenue growth. Don’t feel too badly; every company does. As a colleague of mine likes to say:

“It takes years of hard work and many failures to become an overnight success.”

What to do, though? Your board and investors are pressuring you, you worry they may force you to make cuts, your head may be on the block… and there is that ripe old plum of consulting, just ready to bring in some cash. “It will just be for one or two quarters, to get the Board off my back.”

Don’t.

Boosting revenue via consulting for “just one or two quarters” is the start-up equivalent of boosting your mood via cocaine “just to get through the next few days.”

It is no coincidence that “consulting” and “cocaine” and “chocolate chip cookies” all start with Cookie Monster’s favourite letter “C”: all are equally addictive.

Summary

Consulting is an important and, when performed professionally (not Marty Kaan), invaluable and honourable business. It even can play an important part in a product or services company’s onboarding strategy when the product/service or customer integration is complex.

It also has revenue-boosting appeal like the sirens of Greek mythology. Like the ancient sirens, once you get closer, it nearly is impossible to leave.

Professional Services are a narcotic. Use only with a prescription for doctor-approved purposes. For anything else, call the doctor.


Tech War or Diplomacy?

Yesterday, I published an article asking, “Did Docker Declare War on RedHat and CoreOS?”

I received several responses pointing out market-related developments.

  1. A number of people said they know that Docker did not intend to “declare war” on CoreOS and RedHat. Docker simply was developing the tools it needed anyways and advancing its market.
  2. With the change in CEOs this week at Docker, it is highly unlikely they would start a war immediately before the transition.
  3. Docker EE (commercial) is the only version available on RedHat Enterprise Linux (commercial) because the logic is “Docker-free for OS-free” and “Docker-commercial for OS-commercial”.
  4. CoreOS (the business) no longer is focused as a business model on CoreOS (the OS); their primary commercial focus is Tectonic.
  5. CoreOS (the OS) no longer is called CoreOS, but rather “Container Linux”. This renaming move is intended to separate the business from the OS, and enable their brand to be affiliated with other core (pun intended) business opportunities.

That having been said, I have found CoreOS to be very popular among new companies and in new cloud deployments at established ones. The philosophy of a minimal OS with all variability in containers is very appealing. Personally, I recommend it in many cases. It reduces the burden of managing the underlying operating system significantly.

I hope it does stay around – I know they are investing in it, particularly on the security front – even in the face of challenges from LinuxKit. Custom distro images aren’t for everyone, but with greater simplicity may come greater adoption. Further, if engineers can build app container images, there is no reason they cannot, eventually, build app OS images, eliminating a whole layer of management. That, after all, is what Docker really is all about.


Did Docker Declare War on RedHat and CoreOS?

Yesterday, at DockerCon, Docker Inc announced it was open-sourcing LinuxKit, its toolkit for building Linux operating system images. LinuxKit (the platform that has been rumoured as Moby for over a year) provides a relatively easy-to-use toolkit for building immutable operating system distributions.

Normally, an operating system is a platform that you change on a regular basis. Sure, the core itself – the kernel and modules and basic tools – is changed only when you upgrade or patch your operating system. But the software and the tools are installed directly onto the server.

Docker, of course, changed all of that by solving myriad packaging headaches. Your two (or two thousand) servers could be identical. All the “application-specific stuff” is stored in your images and run by the docker engine.

As a result, two interesting things happened:

  1. Docker built a form of partnership with RedHat, which has dominated the paid support corporate Linux market via its “RedHat Enterprise Linux”, or RHEL, which is distinct from the (mostly) compatible free open-source CentOS.
  2. New Linux distributions arose that provide a lightweight and immutable core optimized to run containers, a natural evolution of “make everything custom a container image,” primarily CoreOS, although Rancher’s RancherOS, just recently GA as 1.0, also is becoming an interesting player.

CoreOS has provided a number of open-source tools for managing infrastructure at scale (although many have withered), as well as its own container runtime, rkt. In addition, it provides a number of commercial services, notably quay.io, its Docker-compatible container image registry, and Tectonic, its form of managed Kubernetes.

The core business of RedHat remains Linux support; the core of CoreOS remains, well, CoreOS (recently renamed Container Linux, probably to deemphasize it).

LinuxKit clearly is not intended to be a Tomahawk launched at CoreOS and RedHat. Having seen the project – and used it – for some time, and seen the managers and contributors discuss their desires in GitHub issues and public Slack channels, I believe it is a clear attempt to simplify the underlying layer – the operating system – to make container management even more of the focus it already is. In that respect, LinuxKit makes a lot of sense.

But the net effect of LinuxKit may be to aim a few torpedoes at RedHat and CoreOS. It certainly must look that way. As new Docker CEO Steve Singh said in an interview, Docker now has 400 enterprise customers, most of whom came onboard in the last year alone. Big inroads, driving changes in both infrastructure and engineering departments, and a simplified OS to go with it? Add in that it is far easier to secure than full-on RHEL, and it has to make RedHat executives nervous.

Granted, LinuxKit is a toolkit, not an operating system distribution. But it is (intentionally) as straightforward to build and use an operating system image as it is to build a container image. If you can build one, you probably can build the other. I would not at all be surprised to see an OS image registry, possibly compatible with multiple clouds and VM orchestration frameworks, coming out of Docker Inc. very shortly.

From Docker’s perspective, it is straightforward. Anything that simplifies usage of containers is good; anything that gives developers deeper ability to run stuff independently is good.

From the OS market’s perspective, this reduces the uniqueness of the OS layer, and may create interesting challenges for OS companies.

While I doubt Docker intended to assault RedHat and CoreOS – they just are doing what makes sense for their business and market – I don’t think they minded the side effects.

  1. As discussed here, CoreOS has been trying to provide a real alternative to Docker for a while. I love competition, but from Docker’s perspective, anyone trying to dislodge them should be fair game. As it is, it appears that Docker and Google have made their peace, leaving CoreOS as the odd man out.
  2. If you look at the list of Docker distributions for RedHat here and here, it is pretty clear that there is a dearth of options for RHEL, essentially Docker paid Enterprise Edition only. It isn’t clear to those of us on the outside who initiated the RedHat-Docker fallout after their earlier close collaboration, but neither of them seems to like the other much nowadays.

As my friend and smart architect Josh Mahowald said to me during DockerCon:

Explain to me again why CoreOS isn’t freaking out?

 


You Cannot Buy Your Culture Into Nimbleness

I find it interesting when the same conversation happens with two different people in the span of just a few days.

In the past week, I had almost the exact same conversation twice, with two different people at two different companies, about culture and acquisitions. In both cases, they had initiated the topic of conversation.

The following is a common pattern:

  1. Company Small is founded to bring a product to the market.
  2. Small goes through multiple iterations and pivots, and succeeds (searching for a sustainable business model, in Steve Blank’s terminology).
  3. Small’s revenues grow and the company stabilizes in its market (optimizing to execute on a known business model).
  4. Small changes its name to Big!
  5. After years, changes in market and technology threaten Big’s model.
  6. Big tries to respond by tweaking product, changing VP Sales, perhaps CEO, but struggles to respond despite heavy investment.
  7. Big decides to acquire several smaller, nimbler companies.
  8. Big still fails to return to stability, let alone growth.
  9. Big deteriorates.
  10. Big eventually collapses or is sold for a pittance.

Why does Big engage in the acquisitions? Sometimes the target is a direct competitor, acquired with the goal of reducing competition in the marketplace (at least temporarily). Sometimes the target has unique technology or other intellectual property, acquired for its valuable asset.

But in most cases, the acquisition is neither large-scale competition nor does it have unique IP. Instead, it is a much smaller, growing nimble company.

So why the acquisition? Why not develop the competitive product internally? Big usually has far more capital to invest in the product (cf. Hooli and Pied Piper), along with existing deep sales pipelines and far more direct connections to real customers.

Culture.

Big already has tried to compete, and failed. For reasons executives at Big cannot grasp, at least initially, little Nimble with a tiny war chest appears to be cleaning up. Soon it will be a direct threat.

What could it be? CMO of Big concludes it is brand and asks for another few million dollars to spend. Usually she gets the funding, which leads to a small but unsustainable bump in revenues and market share, hardly enough to justify the spend. The VP R&D concludes it must be product engineering, asks for another few million in hires. They get it, they spend it, same story. So it goes around again and again.

Eventually, the CEO of Big concludes – correctly – that it is all of the above together. What do you call the mindset in a company that does more with less, brings product to market quickly because it rewards people for risk and gives them the space to do so?

Culture.

Concludes the CEO, “if you can’t beat them… buy them.” So Big acquires Nimble.

Therein, however, lie the very roots of the future failure, a key reason why many acquisitions fail.

You cannot buy culture if you smother it.

What happens once the acquisition closes? Certainly some key people leave, due to a combination of a desire to work only at a small, nimble company with a mission and having gotten their “exit”, stay bonuses notwithstanding.

However, for the most part, the people at Nimble remain the same, with the same culture and the same drive to succeed. In nearly every acquisition I have seen, the drive of the acquired people increases. They know why Big bought their beloved Nimble, why Big is failing, and what they can do to help. They want to bring the bounty of Nimble to the larger entity, and are excited to do so.

Shortly after the acquisition, it starts. Quoth Big VP HR:

You all are employees of Big now. We are excited to have you here. As a Big company, we have longstanding policies that we worked out over the years, and work really well, so let’s get them over to you right away. We know you had a flexible vacation policy and remote work whenever needed, but we have policies we need to follow. Here are the new employee contracts (“only” 15 pages longer than the old ones), and the annual policy agreement you must sign. We are so excited for the mindset you will bring to us. My door is open for any questions.

Employees not only have reduced flexibility to do what they really want – get the job done! – but they now need to spend time reviewing contracts, figuring out benefits, understanding vacation, everything about their job except the job itself.

Only one or two go talk to the VP HR. Most sigh, some send complaints up the chain of command. These filter through to Big CEO, who responds, “well, it is just HR policy, it isn’t the core of what we do, and we have to comply with ____.”

Next comes legal.

We really appreciate how much you have gotten done on so little. You engineers, operators, salespeople, marketers, finance staff are amazing. Of course, we want you to continue, help make us as nimble as you. We are all one big company now, so here are our standard procedures for vendor contracts. I know you are used to the five-pagers you have done, but these will protect all of us.

Everyone at Nimble is pretty sure that the best use of their time is not redoing contracts and upsetting longstanding vendors who have helped them succeed. The head of R&D is certain that the reason he got the new dashboard out so quickly was because he negotiated a mutually beneficial deal under a fair contract with the software vendor, thanks to dedicated help from purchasing and legal, in 3 days flat. With the new template, it would take a month to negotiate, if Big’s lawyers and purchasing are available, and if the vendor even agreed to anything so onerous. But so be it.

Once again, a few complaints go up the chain, making it to the CEO of Big. The response? “It is just legal, one of those administrative things we have to do. Go along with it for the good of the company and all of its employees. Don’t let it worry you. Just do it and focus on your jobs.”

Next is IT. But I think the point has been made.

What the CEO fails to understand, what every executive from Big fails to understand, is that culture isn’t just whether you write in Node vs Java vs C#, or whether you run on AWS or Google or your own hardware. Those all help, but they are the results of a nimble culture, not the drivers of a nimble culture.

A nimble culture:

  • Rewards risks people take, whether or not they succeed, without making them jump through hoops.
  • Asks people to pick the best tools for their needs, not the ones someone else has decided on.
  • Asks people to do their job the best way they can, as they determine it.
  • Removes roadblocks, every single one, instead of adding them.

The CEO of Big realized that Nimble was cleaning up because they were, well, nimble. What the CEO fails to realize is that nimbleness isn’t just some marketing-fu or engineering-fu or sales-magic. Nimbleness is everything about your culture, something you easily can tend to or kill with HR and Legal and IT.

CEO, you cannot buy a nimble culture; it is impossible. You can decide to become a nimble company by changing your culture, and use a great acquisition to seed the process. It only will succeed if you, personally and actively, protect the acquisition from everything that makes you un-nimble, and push its nimble elements into the rest of your company.

Everything else is doomed to failure.

 


Getting A Header On Recruiting Engineers

As every successful CEO (and VP) will tell you, recruiting great people is their top priority. Sure, they need revenue, and deliverables, and to manage funds, and a million other things. But great people are how you get these things done.

In a competitive market, firms look for original ways to find and hire great people. Engineers, in particular, are in very high demand, and firms look not only for new ways to find them, but also exciting ways to appeal to them.

A friend of mine, who is a senior technology executive at a major financial firm, told me recently that every time he thinks he has a great New York or London hire, a tech firm we all know hires them away with a nice combination of interesting projects, casual dress every day, lots of public exposure in the tech community, and 20% higher compensation (none of which is deferred or subject to clawbacks, of course).

This morning, I came across an impressive new twist on technology recruiting.

Verizon, the large telecommunications firm, owns a video-streaming platform called “Uplynk”. Most great technologists try not to build everything from scratch – “reinventing the wheel” – but rather look at examples that exist and try to leverage them.

Early this morning, I had a need to look at short-term authorized content-delivery. Amazon S3 has signed URLs, essentially a URL that embeds not only the whereabouts of the content, but also credentials to access it and a time-limit in which to access it.
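
If you have not used them, a minimal sketch of generating a signed (presigned) URL with boto3 looks like the following; the bucket and key names are invented for illustration:

import boto3

s3 = boto3.client("s3")

# Ask S3 to sign a URL granting read access to one object for 15 minutes.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "example-media-bucket", "Key": "videos/intro.mp4"},
    ExpiresIn=900,  # seconds
)
print(url)

Anyone holding that URL can fetch the object until it expires; after that, S3 simply refuses the request.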

I assumed Uplynk did the same, but wanted to check. In typical engineering fashion, I not only downloaded data with the ubiquitous curl tool, but also checked the http headers.

curl -i https://some.url.to.something.com/some/useful/data

The “-i” option is the one that says, “show me all of the http headers.”

These “http headers” are the “meta-data”, information about the content you are getting. If you go to https://www.google.com, the content you get is the search page displayed; the meta-data, or headers, tell you information about the content. It might include the size of the content, the date, a redirection, and lots of other useful data.

While many headers are standard, there is a class of custom headers you can add to your application’s responses, which are assumed to be ignored unless your app knows about them. If you wanted, for example, to pass the date the page was created, rather than the standard date the server delivered it, you would not override the standard Date header; instead, you would use a custom header. Custom headers usually start with “X-”.

Here is where it got interesting. The results of my “curl -i”?

Access-Control-Allow-Origin: *
Content-Type: text/html
Date: Wed, 05 Apr 2017 06:09:19 GMT
Server: uplynk webStack/2.4
X-Human: Hello, fellow human! You should come work with us! uljobs@verizondigitalmedia.com. Mention this header.
X-Services: somecode
Content-Length: 0
Connection: keep-alive

Look at the 5th header listed, a custom one called “X-Human”.

Whoever embedded this in their pages deserves an award. Uplynk probably serves millions of pages per day, nearly all of them to browsers. Yet, sometimes a human will look at the headers – either from the command-line as I did, or in the browser’s developer tools – and see a message just for them. Someone who reads and knows headers is just the kind of curious and technically capable person Uplynk is looking to hire. To boot, it is a lot more enticing than yet another recruiter, or yet another ad on Facebook or Twitter or wherever companies advertise this year.
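
If you want to plant the same kind of easter egg in your own service, a minimal sketch using nothing but Python’s standard library might look like this (the header text and port are, of course, placeholders):

from http.server import BaseHTTPRequestHandler, HTTPServer

class RecruitingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"hello\n"
        self.send_response(200)
        # A custom, non-standard header; browsers ignore it, but a curious
        # human running "curl -i" will see it.
        self.send_header("X-Human", "Hello, fellow human! Come work with us.")
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), RecruitingHandler).serve_forever()

In practice you would more likely set the header once in your web server or load balancer configuration rather than in application code, but the effect is the same.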

Now, if only they had been smart enough to set up a special URL for it, since very few people will bother sending to the normal jobs email address.


I Have Given You a Service, If You Can Keep it

In my world of technology operations, two major themes recur again and again (redundantly):

  1. Incentives
  2. Litmus Tests

I have written about incentives extensively on this blog. In short, as the saying goes, “you get what you measure.” Don’t expect extra customer handholding if you measure your support team by time spent on issues or minimizing average ticket time. Sure, you need to operate cost-effectively, but the key word is “operate”.

It is similar to security and compliance. I have met very few security and compliance people whom I really respect. Most of them are either too technical to get the business, or so focused on security and compliance that they’d be in heaven if we froze everything. No business, but definitely no security issues!

The goal, in both those cases, is to operate a business. If you are 100% secure but serving no customers, the security is worthless. Similarly, if you are closing tickets at the rate of 500 per minute but every customer is dissatisfied, you have great metrics… and will be out of business soon enough.

The other theme that recurs is litmus tests, evaluating how well you really operate according to your stated principles.

No matter how much you say you may operate in a given manner, there are litmus tests, evaluations, you can apply to determine if you really are operating in that manner or not.

There are litmus tests for everything from a positive workplace environment to product vs sales driven to service levels.

Companies often struggle between being product-driven and sales-driven. Sales-driven companies chase each sale, and quickly become custom shops with consulting margins, operating costs, scale issues and valuation multiples. Of course, the best product-driven companies evaluate every major deal that does not fit within their parameters to see whether it is just this customer or a sign of the market, but if it doesn’t fit, they just don’t do it.

The litmus test for product-vs-sales is simple. You are at $10MM and pushing for growth. In your door walks a marquee customer, perhaps Google or JPMChase or GE. They are looking for a deal worth, say, another 5-10% of your top line. The only catch is that their requirements are so onerous, and their specific needs so different from what you have on offer, that you are likely to get sucked into it for months on end. The deal is unlikely to be profitable for some time to come, if ever.

Do you take it?

Product-driven company says “no”, sales-driven company says “yes”.

It is fine to be either kind of company, as long as you are honest about what you are and are structured for it. Which are you?

A great example of the service litmus test came courtesy of CloudFlare and their issues this past week. I respect CloudFlare’s transparency about the issue in their blog post here. However, my favourite line is in the sixth paragraph:

One of the advantages of being a service is that bugs can go from reported to fixed in minutes to hours instead of months.

The litmus test of being a service is precisely that. Without killing yourselves, without downtime, without “all hands on deck”, what is your lead time “from reported to fixed”?

Microservices (or nano-services) and DevOps help tremendously, but the fundamental difference is between being a product firm and a services firm. In a way, that is another litmus test. If we had to switch from monolithic and “walls of separation” to microservices and DevOps, could we do it? Product companies have great difficulty, services companies much less.

CloudFlare, clearly, not only is selling a service, but they are operating a service.

Conclusion

Ask yourself, do you have a litmus test for every part of your strategy that is crucial to your success? Do you pass it?

If you don’t have it, devise it; if you don’t pass it, change whatever is necessary.

If you need help, don’t hesitate to ask us.


Amazon: Speed and Ease vs Vendor Lock-In

A few weeks ago, Amazon Web Services held its annual AWS re:Invent conference. Unsurprisingly, they announced, yet again, a slew of new services, all meant to ease adoption and management of technology services.

Yet, something felt a little amiss.

Not only are SaaS firms getting nervous, but plenty of large firms as well. As Benoit Hudzia pointed out, many on-premise software giants, including Oracle/PeopleSoft and SAP, should be getting nervous (but perhaps are not).

However, as nervous as these companies must be – feeling a lot like many Independent Software Vendors (ISVs) in the 1990s in the face of Microsoft – many other technology builders, including IT departments, are faced yet again with an old dilemma:

Do we move faster and cheaper with AWS services, or do we avoid vendor lock-in?

This is a conversation I have had with technology executives over the last several years; the number of these conversations is increasing. The executives tend to fall into one of three camps:

  1. “Speedists” are willing to pay the price of vendor lock-in to get faster time-to-market and less infrastructure to manage.
  2. “Optionists” adopt some cloud services to gain cost/speed advantage, but want to keep their vendor options open.
  3. “Balancers” want the speed advantages of Amazon Services, but also want to take a moment to think about what their options will be in a year or three.

I have had discussions with all three types of people, and have learned from all. I believe that all three can be correct, depending on your circumstances. Or, in other terms, there is no 100% correct-all-the-time answer.

One point the optionists and balancers have made is that the gap between “have Amazon run it” and “run it yourself” is much smaller than it used to be.

In the old days (say, five years ago), if you wanted to run a performant, distributed in-memory key-value store, you had to pick one, configure it on multiple hosts, worry about creating a cluster, configure scaling, etc. Just using the newly-announced ElastiCache was much simpler.

Nowadays, with pre-packaged and easy-to-configure container images of most open-source products, and container orchestration like Kubernetes or Swarm or Rancher to scale and schedule it for you, the incremental effort of running that key-value store remains real, but a fraction of what it once was.

It still is easier to just have Amazon run it for you, but it no longer is massively easier.
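
Part of what shrinks the gap is that the application code barely changes either way. A minimal sketch with the redis client library (the hostname is invented for illustration):

import redis

# Point this at whatever you operate today: an ElastiCache endpoint Amazon
# runs for you, or a Redis container your orchestrator schedules for you.
CACHE_HOST = "my-cache.example.internal"

cache = redis.Redis(host=CACHE_HOST, port=6379)

cache.set("session:1234", "some-serialized-state", ex=300)  # expire after 5 minutes
print(cache.get("session:1234"))

Switching between “Amazon runs it” and “we run it” is then mostly an operational and data-migration exercise, not a rewrite.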

So what do I tell balancers, executives who ask, “should we launch our own or use AWS services?”

  1. For basic IaaS, go for the cloud. Except in certain circumstances (high-performance computing, low-latency trading, specialized hardware like voice, sometimes massive scale), the cost and flexibility gap between cloud and do-it-yourself remains extremely large. In addition, the lock-in is low as tools have become pretty adept at switching between cloud providers.
  2. For services peripheral to your core, use a service. Don’t build a massive-scale communications engine between lightbulbs and your core application, and don’t think you are going to build the next email sender (unless that is your business). Normally, it is faster and cheaper to buy it than to build it, and it is peripheral anyways. That having been said, try to abstract it as much as possible. For example, give the software on those lightbulbs a URL you own and, if possible, use a standard API. If you ever need to switch, you won’t need to update software on 100MM lightbulbs, just a DNS entry! (See the sketch after this list.)
  3. For core services – message queues between microservices, key-value stores for caching, etc. – well… it depends. This is where you should think long and hard about how important it is to run your app on a laptop, in AWS, in Google Cloud, Azure, Joyent, on your own hardware, and how important it is to be able to switch between them with ease.
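
To make the “abstract it as much as possible” advice in point 2 concrete, here is a minimal sketch of hiding a peripheral service (an email sender) behind a thin interface you own; the class names, addresses and hosts are all invented for illustration:

from abc import ABC, abstractmethod

class MailSender(ABC):
    """The only interface the rest of the application is allowed to use."""
    @abstractmethod
    def send(self, to: str, subject: str, body: str) -> None: ...

class SESMailSender(MailSender):
    """Sends via Amazon SES."""
    def __init__(self, region: str = "us-east-1"):
        import boto3
        self._ses = boto3.client("ses", region_name=region)

    def send(self, to: str, subject: str, body: str) -> None:
        self._ses.send_email(
            Source="noreply@example.com",
            Destination={"ToAddresses": [to]},
            Message={"Subject": {"Data": subject}, "Body": {"Text": {"Data": body}}},
        )

class SMTPMailSender(MailSender):
    """Sends via any SMTP relay you run yourself."""
    def __init__(self, host: str = "smtp.example.internal"):
        self._host = host

    def send(self, to: str, subject: str, body: str) -> None:
        import smtplib
        from email.message import EmailMessage
        msg = EmailMessage()
        msg["From"], msg["To"], msg["Subject"] = "noreply@example.com", to, subject
        msg.set_content(body)
        with smtplib.SMTP(self._host) as smtp:
            smtp.send_message(msg)

The rest of the application depends only on MailSender; swapping AWS for an SMTP relay (or vice versa) becomes a one-line change where the object is constructed, not a hunt through the codebase.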

I do not have an immediate answer to the tough core questions, but there are people I respect greatly in each of the Optionist and Speedist camps. That, alone, should give one pause before running too quickly in one direction.

 


On to Nano-Services

A few weeks ago, I had the pleasure of meeting Pini Reznik, CTO of container consulting firm Container Solutions, in Berlin. It may appear strange that an independent consultant who spends a lot of time helping companies with development and infrastructure strategies, much of which over the last several years has involved containers, would tout another consulting firm’s services. There is, however, plenty of work to do for all of us, and I am grateful for the thoughts and ideas they shared. Perhaps we will collaborate.

The conversations we had will provide the kernels (unikernels?) of several articles here.

Pini shared the following graphic, providing a (limited) summary of application+infrastructure development trends. Image courtesy of, reprinted with permission of, and copyright Container Solutions Ltd, 2016.

[Graphic: timeline of application architecture, development process and infrastructure trends]

For the purposes of today’s article, we will focus on the trends on the left-hand-side of the graphic.

The two fascinating parts to me in the graphic are:

  • It ties together developments in application architecture, development processes and infrastructure;
  • It attempts to show how developments in one area enable developments in another, and so on. Advancements are neither independent nor unidirectional, but mutually reinforcing.

At one point, I said to Pini, half in jest, “so what is the next marketing term after micro-services? Nano-services??” Turns out he and his team had been thinking exactly that.

Calling tiny (smaller than micro) services, composed of one or a few functions, “nano-services” probably is a better term than “serverless”. In both micro-services running in an explicit container and functions running in a more hidden one, there is a server, and it is abstracted away. The only differences are the level and amount of abstraction, and the size of the service. However, serverless is more likely to stick as a catchy name than “vmless” or “instanceless” ever would have been.
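
To make the size of the unit concrete, a nano-service can be as small as a single AWS Lambda-style handler; a minimal sketch (the event shape and the VAT calculation are invented for illustration):

import json

def handler(event, context):
    # The entire deployable unit: one function that reads a request,
    # does one small piece of work, and returns a response.
    amount = float(event.get("amount", 0))
    vat = round(amount * 0.20, 2)  # assume a flat 20% rate for the sketch
    return {
        "statusCode": 200,
        "body": json.dumps({"amount": amount, "vat": vat}),
    }

There is still a server (and a container) underneath running this code; it simply is not yours to see or manage.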

I also appreciate that they put “unikernels” with a question mark. I, too, question unikernels outside of a narrow range, for two simple reasons:

  1. Containers may be sufficient, with future advancements, to fully isolate services/processes.
  2. No one really knows what will succeed in the market until it happens. Even with ironclad rock-solid arguments, a little humility goes a long way.

Nonetheless, I do believe that given the constant back-and-forth between application architectures, processes and infrastructure, “nano-services” or “serverless” (or Simon Wardley’s term Framework-as-a-Service/FaaS) will lead to some material advancement in infrastructure, just as containers enabled micro-services and then serverless; rarely is it a one-way process of evolution.

Of course, I have issues with serverless, primarily around packaging, as highlighted here, as well as the mostly closed-source/proprietary nature of Lambda and others. OpenWhisk and Open Lambda may help.

Will nano-services be serverless? Will they be something else entirely? Will they have a better name? How will the packaging issues be resolved? What about networking? How about the open source question?

For most companies, micro-services and containers are sufficiently leading-edge. But for newer companies attempting to lay a longer-term path with the advantage of starting in a greenfield, and for CTOs and architects at established companies who need to begin clearing the path in the woods, these are important questions.


Why Networking is Critical to Serverless

As readers know, I have been thinking a lot about serverless lately (along with all other forms of technology deployment and management, since it is what I do professionally).

Recently, I came at it from another angle: network latency.

Two weeks ago, I presented at LinuxCon/ContainerCon Berlin on “Networking (Containers) in Ultra-Low-Latency Environments,” slides here.

I won’t go into the details – feel free to look at the slides and data, explore the code repo, reproduce the tests yourself, and contact me for help if you need to apply it to your circumstances – but I do want to highlight one of the most important takeaways.

For the majority of customers and the majority of network designs, the choice and its latency impact simply will not matter. Whether your container or VM talks to its neighbour in 25 μsec or 50 μsec will have no noticeable impact on your application, unless you really are dealing in ultra-low-latency, like financial applications.

Towards the end, though, I pointed out a trend that could make the differences matter even for regular applications.

With monolithic applications, you have 1 app server talking to 1 database. For a moderately complex app, maybe it is 1 front-end app server with 5 different back-ends comprised of databases and other applications. The total number of communications is 5, so a 25 μsec difference adds up to 125 μsec, or 1/8 of a millisecond. It still doesn’t matter all that much for most.

Containers, however, enable and encourage us to break down those monolithic applications into separate services, or “microservices”. Where the boundaries of those services should be is a significant topic; I recommend reading Adrian Colyer‘s “Morning Paper” on it here.

As applications are decomposed, the previous single monolithic application with a single database, and thus one back-and-forth internal communication, now becomes 10 microservices, each with its own back-end. One communication just became ten, and our simple application’s 25 μsec difference just became 250 μsec, or 1/4 of a millisecond. It still doesn’t matter all that much, but it is moving towards mattering.

Similarly, our complex 6-part application became, say, 25 microservices and backends, leading to 625 μsec of additional delay, or almost 2/3 of a millisecond. Again, it doesn’t matter all that much, but it is getting ever closer.

However, with serverless, the unit of deployment no longer is a service, or even a microservice. Rather, it is a function. Even the simplest of applications have a lot of functions. Our simple application that went from 1 app and 1 database to 10 microservices actually has a not-unreasonable 250 functions in it; some of the open-source libraries I have written single-handedly have that many! If each of these is run independently in a FaaS/serverless environment, we now have 250 items communicating with others, a minimum of 250*25 μsec = 6,250 μsec or 6.25 milliseconds delay.

For our simple application, with “just” those 250 functions, the difference of a few tens of microseconds, determined by your inter-function (inter-container, under the covers) networking choice, makes a big difference.

For our complex application, with 6 parts, each of which may have at least those 250 functions, we now have 250*6*25 μsec = 37,500 μsec or 37.5 milliseconds of additional delay. That is real time.
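
The back-of-envelope arithmetic is easy to play with yourself; a small sketch using the same illustrative numbers as above:

PER_HOP_PENALTY_USEC = 25  # extra latency per communication, from the networking choice

scenarios = {
    "monolith, 1 app + 1 database": 1,
    "moderately complex monolith, 5 back-ends": 5,
    "simple app as 10 microservices": 10,
    "complex app as 25 microservices": 25,
    "simple app as 250 functions": 250,
    "complex app, 6 parts x 250 functions": 6 * 250,
}

for name, hops in scenarios.items():
    extra_usec = hops * PER_HOP_PENALTY_USEC
    print(f"{name}: {extra_usec} usec ({extra_usec / 1000} ms) of added latency")

The point is not the exact numbers, but how quickly a per-hop difference you could once ignore compounds as the unit of deployment shrinks.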

Of course, a serverless provider, like Amazon Lambda or Google Cloud Functions, is expected to invest the engineering effort to optimize the network so that the functions don’t simply run “anywhere” and connect “however”, creating unacceptable latency. To some degree, this is what we pay them for, and a barrier to entry for additional competitors. Packaging up a container image is easy; optimizing it to run with many others in a busy network on busy servers with minimal impact is hard.

As I have written often, PaaS and DevOps and by extension serverless will eliminate many system administration jobs, but it will create fewer but far more critical and valuable systems engineering jobs. The best sysadmins will go on to much more lucrative and, frankly, enjoyable work.

Many others will run serverless environments on their own, using OpenWhisk or other open-source products. Unlike Cloud Foundry or Deis, these will require serious design effort to ensure that applications do not end up easy to manage and performant in each part on its own, yet impossibly slow in toto.

I hope Amazon and Google are up to the task, and that those deploying on their own are as well. If you need help, I always am happy to offer my services to assist.

 


Can rkt+kubernetes provide a real alternative to Docker?

Last week at LinuxCon/ContainerCon Berlin, I attended a presentation by Luca Bruno of CoreOS, where he described how kubernetes, the most popular container orchestration and scheduling service, integrates with rkt. As part of the presentation, Luca delved into the rkt architecture.

For those unaware – there are many, which is a major part of the problem – rkt (pronounced “rocket”, as in this) is CoreOS’s container management implementation. Nowadays, almost everyone who thinks containers, thinks “Docker”. Even Joyent’s Triton, while it uses SmartOS (a variant of Illumos, derived in turn from Solaris), has adopted Docker’s image format and API. You run containers on Triton by calling “docker run”, just pointing it at Triton URLs, rather than docker daemons.

I was impressed with how far CoreOS had come. I was convinced late last year that they had quietly abandoned the rkt effort in the face of the Docker steamroller. Clearly, they quietly plowed ahead, making significant advances.

As I was listening to Luca’s enjoyable presentation, the following thoughts came to mind:

  1. Docker Inc., in its search for revenue via customer capture, has expanded into the terrain of its ecosystem partners, including InfraKit (watch out, Chef/Ansible/etc.) and Swarm (kubernetes). Those partners must view it as a threat. One person called it “Docker’s IE moment,” referring to Microsoft’s attack on its software provider ecosystem when it integrated IE into Windows.
  2. Docker’s API, as good as it is, is very rarely used. With the exception of a developer running an instance manually, usually locally but sometimes on a cloud server, almost no one uses the docker remote API for real management. Almost all of the orchestration and scheduling systems use local agents: kubernetes runs kubelet on each node, Rancher’s Cattle runs rancher agent, etc.
  3. Docker is really easy to use locally, whether starting up a single container or using compose to build an app of multiple parts. Compose doesn’t work well for distributed production apps, but that is what kubernetes (and Swarm and Cattle) are there for.

As these went through my mind, I began to wonder whether the backers of rkt+kubernetes intend to position it as a head-on alternative to Docker.

So… what would it take for rkt+kubernetes (or, as Luca called it, “rktnetes” pronounced “rocketnetes”) to present a viable alternative to Docker?

Ease Of Use

As described above, Docker is incredibly easy for developers to use on all three platforms – Linux, Mac and Windows – especially with the latest Docker for Mac/Windows releases. rkt, on the other hand, requires launching a VM using Vagrant, which means more work and installation, which slows the process down, which…. (you get the picture). For rkt to be a serious alternative, it must be as easy to use as Docker for developers.

Sure, in theory, it is possible to use Docker in development and rkt in production, but that is unlikely unless rkt provides some 10x advantage in production. Most companies prefer to keep things simple, and “one tool for running containers everywhere” is, well, simple. Even a company willing to make the change recognizes that the run-time parameters are different (even if rkt supports Docker image format) and the “works for me” problem can return, or at least be perceived to do so (which is as important as the reality).

Docker made headway because it won over developers, then operations and IT types.

At the same time, kubernetes is not as easy to use and has a significant learning curve. To some degree, the very power and flexibility it offers once running make it harder to get started. That may (or may not) be fine for an orchestrated complex production deployment; it will not fly on a developer’s laptop or DigitalOcean droplet.

To their credit, the kubernetes team has released minikube, intended to make deployments easier. We will see how well it does. In the meantime, developers by the thousands learn how to do “docker run” and “docker-compose run” every day.

So:

  1. Starting and running containers must be made much easier.
  2. Starting and running container compositions must be made much easier.

Killer Capabilities

However, even if rkt+kubernetes manage to equal Docker in ease-of-use and feature set, they still will just be playing catch-up, which is not a good game to play (ask AMD). In order to win over developers and systems engineers, rkt+kubernetes must be as easy to use as Docker and must offer some unique capability that Docker does not. Preferably, it should be a capability Docker cannot offer without great difficulty, due to some inherent market, corporate or technology architecture constraint.

It needs to be something that is inherently doable, even natural, due to rkt’s design, yet difficult due to Docker’s design. The goal would be to make Docker play challenging catch-up.

What would such a feature or capability set be? I have some early ideas, but that is the job of CoreOS’s (or Google’s or CNCF’s) product managers to figure out. That is what they pay them for.

Why Do I Want It?

I love Docker and CoreOS. I use their products on a regular basis. They have made my life, and the lives of many clients and colleagues, immensely easier. I have met people at both companies and respect their professional skills.

Even more than any one product or company, however, I do love competition. It benefits customers primarily, but even the competitors themselves are driven to better services and products, and hence profits.

I want rkt+kubernetes (or some other combination) to provide a serious, viable alternative to Docker to benefit me, my clients (current and future), my colleagues, all technology firms and IT departments, and especially to benefit Docker, CoreOS and Kubernetes themselves.

 
