Your Car Interior Should Be Like A Network

A lot of ink has been spilled (if that term can still be used in the digital age) on the coming driverless “revolution.”

Yet a much simpler “evolution” is long overdue for automotive technology: the inside of the car.

Anyone who has replaced any component on a car – dashboard, door panel, side-view mirror, radio, engine part, or any component at all – is familiar with the swamp of wiring that snakes its way behind every panel on the car.

Every single component has what is known as its “harness”, automotive lingo for its wiring. The wiring, however, looks nothing like the simple cable that connects your home router to the cable modem, or your laptop to its mouse.

The following picture is the “simple” harness that I once used to connect an after-market radio to a Mitsubishi:

[Image: wiring harness for an after-market radio]

Every part in the car has its own harness: the power window, the powered mirror, the trunk light, you name it.

Look under your dashboard, behind the steering wheel and above the driver’s pedals, and you will see a forest of wires, all tightly tied together and shoved into whatever nook and cranny can be found.

If that were the entire story, it would be bad enough. Unfortunately, it is just the beginning.

Each car has its own unique cabling system, its own “harness”. Even though there are (sort of) standards – the ISO connectors – these apply primarily to audio systems, and in any case they are adopted very rarely by automobile manufacturers.

Adding insult to injury, the manufacturers change the harnesses between model years for the same car, and even between models of car for the same year.

Finally, each harness has one cable per type of data or power. Don’t try to calculate the number of potential harness permutations; it is a terrible waste of good math.

The really sad part is that these thousands of wires and dozens of harnesses carry just two things:

  1. Data
  2. Power

Sure, each component requires slightly different data and different levels of power, but at heart, these are just wires carrying data and power.

To understand how absurd the current automotive reality is, imagine translating it to the computer industry. We will enter a world where:

  1. Every component you connect – network, mouse, keyboard, monitor, scanner, DVD drive, hard drive – has its own connector with 10-15 different cables
  2. Each component also has its own, unique connector type
  3. Each computer manufacturer has its own connector: Lenovo uses one type, Apple another, Dell another, ASUS another.
  4. Each manufacturer uses different connectors for different components.
  5. Each manufacturer changes its connectors for that component every model year or two.

I highly doubt the computer business ever would have gotten very far!

Yet, this is precisely what occurs in the automotive components business.

In the technology industry, we have had two types of standardized cables that carry data and power for decades.

  • USB: That ubiquitous USB port on your laptop, now heading into USB-C, can carry both data and power in a single simple cable, with a simple, standard plug format. With each generation, the amount of power it can carry and the bandwidth of data have increased. The already-aging USB 2.0 standard, released as far back as 2000, can carry 480 Mbps. No data anywhere in a car, especially to peripherals like audio and windows, requires even a tiny fraction of that.
  • Ethernet: The Ethernet cable that links your modem to your router or your office desktop to the wall, known by its “Category” designation (you probably are using Cat-6), carries data at tremendous bandwidth and speed, far in excess of anything your car components carry. It also has had the ability for years to carry power to end devices. Gigabit Ethernet, which is a little more than twice as fast as the aforementioned USB 2.0, was released by IEEE in 1998.


The obvious question, then, is: does it matter? Does anyone really care if the hidden cables are unwieldy, bulky, hard to figure out, expensive and hard to connect?

Definitely.

The current situation has terrible cost impacts. It increases all of the following:

  • Cost of each component;
  • Manufacturing cost of the car, due both to higher component costs and higher labour costs;
  • Amount of inventory write-down for the manufacturer and component supplier;
  • Amount of inventory write-down by spare parts suppliers;
  • Cost of maintaining the vehicle due to more time to do work (this hits you, car owner);
  • Cost of maintaining due to special skills to work with each vehicle type (you, again);
  • Cost of any changes or upgrades (and again, you).

Now imagine a different world.

  • A standard cable, similar to Ethernet or USB, but with the physical specifications to handle an automobile’s environment, connected everything.
  • A single bus (or two for redundancy) running from front of car to back.
  • A single cable from the bus to each door, with a hub to each component in that door.
  • A single cable from the bus to the trunk/hood.
  • A single cable from the bus to the stereo.
  • A single cable from the bus to the dashboard.

A power window, for example, should require a single cable that carries power and a coded signal to go up or down. An audio system should have just power and a few wires for serial data of any kind; instead, it has 10 or 15 cables!
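To make “power plus a coded signal” concrete, here is a rough Python sketch of what a command frame on such a shared bus might look like. The device address, opcodes and checksum scheme are invented purely for illustration and do not correspond to any real automotive protocol.

```python
# Hypothetical frame format for a shared in-car bus (illustration only):
# one byte each for device address, command, and checksum.
WINDOW_DRIVER_DOOR = 0x12   # made-up device address
CMD_WINDOW_UP = 0x01
CMD_WINDOW_DOWN = 0x02

def build_frame(device_id: int, command: int) -> bytes:
    """Build a 3-byte frame: address, command, and a simple XOR checksum."""
    checksum = device_id ^ command
    return bytes([device_id, command, checksum])

# The same cable that delivers power could carry this frame to any component.
frame = build_frame(WINDOW_DRIVER_DOOR, CMD_WINDOW_UP)
print(frame.hex())  # -> "120113"
```

Three bytes on a shared bus, instead of a dedicated bundle of wires per switch and motor.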

The technology hardware industry has had standardized cables for decades (it is called a Universal Serial Bus, or USB, for a reason). It has standard connectors, standard pinout, standard sizing, and carries data and power far in excess of just about every automotive application outside of the brakes and engine.

While the big visionaries look to bring us cars that drive themselves – the name “automobile” means “self-moving” – there is much that can be done immediately to make the existing cars, and the future ones, better, faster and cheaper to build and maintain.


The Problem with Serverless Is Packaging

Serverless. Framework-as-a-Service. Function-as-a-Service. Lambda. Compute Functions.

Whatever you call it, serverless is, to some degree, a natural evolution of application management.

  1. In the 90s, we had our own server rooms, managed our own servers and power and cooling and security, and deployed our software to them.
  2. In the 2000s, we used colocation providers like Equinix (many still do) to deploy our servers in our own cages or, at best, managed server providers like Rackspace.
  3. In the early 2010s, we started using infrastructure-as-a-service (IaaS) like Amazon EC2.

Over time, we have evolved to worrying less and less about the underlying infrastructure on which our software code runs, focusing more and more on the code itself. We have moved our focus further up the stack.

That was the very basis of Platform-as-a-Service (PaaS) providers, like Heroku (now part of Salesforce.com). Instead of running our code on a virtual server instance that we manage, we deploy the code unit, or “slug”, and they take care of that part as well.

However, even with a PaaS, we still have to think in server-like terms:

  1. We need to plan how many copies of our code, i.e. slugs, we need to run.
  2. We are billed by the number of instances running.

Serverless, typified by Amazon’s Lambda, attempts to change that calculus.

Once we get past worrying about servers entirely, we can focus on duplicate effort inside our applications. Rather than handling application setup, startup, connectivity, listening for requests and routing them to the correct handler function, why not have an underlying service perform all of that? All we need to do is:

  1. Create the handler functions
  2. Declare which input event triggers which handler function

Just about any server-side app – and most client-side apps – is written following this paradigm anyway. However, we do all of that work in whatever framework we have chosen: express, Rails, whatever. Serverless offers to handle all of that duplicate work as well.
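As a minimal sketch of that division of labour, here is what “write the handler, declare the trigger” looks like in the style of an AWS Lambda function in Python. The function name and response fields are illustrative; the wiring of an HTTP route to this handler is declared in the platform’s configuration, not in the code itself.

```python
import json

# A Lambda-style handler: no server setup, no listener, no router --
# just a function that receives an event and returns a response.
def get_user(event, context):
    # With an API Gateway-style HTTP trigger, path parameters arrive in the
    # event dict; the exact shape depends on the trigger you declare.
    user_id = (event.get("pathParameters") or {}).get("id")
    return {
        "statusCode": 200,
        "body": json.dumps({"id": user_id, "name": "example"}),
    }
```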

The intended key benefits of serverless are threefold:

  1. Effort: Why waste time doing work that everyone else is doing anyways? Write your handlers, declare your routes, let it run.
  2. Financial: Why pay for unused server capacity? Get billed per second or even millisecond of code running.
  3. Cultural: Stop thinking about your application as a single unit. Instead, think of it as individual functions, each of which has a cost and a benefit.

However, there is a problem with serverless, and it is more fundamental than its name.

I believe that the key reason for the rapid and widespread adoption of Docker is that it solved major packaging headaches. Even the best packaging systems pre-Docker relied on the volatile and unpredictable state of the underlying host.

Docker abstracted all of that away, by putting required dependencies within the deployment artifact while simultaneously enabling the app to ignore (mostly) the state of the underlying host.

Serverless computing, including Lambda, makes packaging harder, not easier. Sure, you don’t need to worry too much about what is on the server. Conflicts are avoided (using containers under the covers), while dependencies are declared and guaranteed. In that respect, it is similar to container images.

But your application isn’t made up of one handler function in isolation. It is made up of the totality of all of the functions. In containers – “serverfull”? – I can package my entire application up together. This makes moving it, deploying it and testing it easy and predictable.

In serverless, each function is a standalone unit, and the wiring up of events, like incoming HTTP requests, to handler functions is managed by an API or UI. Lambda makes it very easy to focus on the purpose, value and cost of each function. But Lambda makes it very difficult to reason about, deploy, test and manage the app in its entirety.

Serverless’s problem isn’t nomenclature; serverless’s problem is packaging.

From packaging flow all of the issues of deployment, management, testing, reasoning.

Many companies are writing small and large applications on Lambda or Compute Functions or OpenWhisk, many successfully. I have worked with some, transitioned apps to Lambda, and love the benefits, the financial and cultural ones above all.

But for many others, until the packaging becomes as simple to manage, deploy and reason about as self-contained apps – repositories for a PaaS, or Docker images – the costs will outweigh the benefits.

In that respect, I believe there is a space for a bold entrepreneur to “DigitalOcean” Lambda. DigitalOcean (DO) took on AWS by providing the same service but being incredibly simple to use. For large corporate entities, AWS remains the primary provider. But for companies looking for simple-to-use, simple-to-manage, great performance, DO is the superior IaaS offering.

If someone took the DO approach to IaaS and applied it to serverless – make it easy to use, easy to reason about, easy to manage – they could grab a significant chunk of the serverless market and likely drive it to the next level.



Pilots In Habitats: Basic Unit of Application Deployment

What is the basic unit of application deployment?

Two related trends have changed the answer to this question:

  • DevOps
  • Containers

For many years, the tasks between engineer and operator were cleanly, if painfully, split:

  1. Engineer builds and delivers a package of files to deploy and run
  2. Operator deploys and runs those files in a production operating environment

In the early years, the package of files consisted of a directory of files accompanied by a ream of paper instructions. Over time that improved to zip files, then to proper packaging and installation tools like rpm.

Most recently, with the simplicity Docker (and others such as CoreOS’ rkt) brought to container packaging, the preferred unit of deployment has become a container image.

The goal of each step in this evolution has been to simplify two parts:

  1. Deployment: how easy is it to perform the once-per-release process of deploying it?
  2. Management: how easy is it to perform the ongoing process of resolving issues?

Container images attempt to simplify the issues further by including all of the dependencies in a single runtime file. Whereas “file copy” included instructions such as, “copy these files to the following directories on the following operating systems with the following prerequisites”, and rpms attempted to automate some of that, container images include all of the server dependencies in the right locations; just run it.

However, as we come closer to resolving lower level dependencies via container images, we have become more acutely aware that applications are more than just the process running on one single host with lower-level dependencies. They also have parallel and upstream dependencies: other processes; databases; middleware services; etc.

People often wonder why there were so few reported cases of cancer one hundred years ago. “It must be our lifestyle,” or “it is pollution and our environment.” But the answer is simple: a century ago, life expectancy in the United States was 47, while the median age at cancer diagnosis is 67. Quite simply, few people lived long enough to get cancer! Once life expectancy and health improved, other illnesses had their opportunity.

Similarly, the lower-level issues of per-instance app deployment were so thorny that higher-level cross-instance deployment coordination simply did not rise to the top of the stack (despite some attention). Now that we are solving those, the higher-level issues are becoming a concern.

Perhaps, then, the correct question is: given clean packaging of an app instance, what is the proper unit of complete app deployment?

I have been dealing with this question in general, and at several clients, while working on clean, complete and self-managed deployments, as well as exploring the newer tools available to help, specifically the ContainerPilot work of Joyent and the Habitat work of Chef.

On a rather long flight last week, I listened to a podcast interview with Tim Gross of ContainerPilot, and Adam Jacob of Habitat.

In the interview, both Tim and Adam recognize similarities in each other’s issues with packaging, deployment and management, and similar solutions.

The primary argument for these solutions is, to my mind, one that requires some clarification.

The primary purpose of these tools is the one we described earlier: having solved the problem of reliable distribution, deployment and maintenance of one instance of an application, we now approach how to solve the distribution, deployment and maintenance of an entire application.

This is something that we could not do before, at least not simply.

Let’s take a simple application, a node app with a Web front-end on a static Web server and a MySQL database on the backend.

In the very old days (when, of course, neither node nor MySQL existed, but we will ignore that fact), the deployment would be as follows:

  1. Engineer packages up the static Web pages as a zip or tar file.
  2. Engineer packages up node application as zip or tar file.
  3. Engineer delivers two packages with instructions:
    1. Expand node app package on server A into the following directory with the following prerequisites
    2. Expand Web files on Web server B into the following directory with the following prerequisites
    3. Launch node app using the following command
    4. Serve up the Web files with the following configurations
    5. Configure the database to have the following
    6. Configure the node app to access the database at the following settings

Fortunately, we have come a long way since then, through many iterations. Much of the configuration, unpacking, deployment, prerequisites have been simplified dramatically. It now looks like this:

  1. Engineer packages up static Web files along with Web server in a container image
  2. Engineer packages up node application in a container image
  3. Engineer delivers two images with instructions
    1. Run node app with the following command line options and environment variables, including information to access the database
    2. Run Web server app with the following command line options and environment variables
    3. Configure the database to have the following

The number of steps is cut in half, and the complexity, and therefore opportunities for error, by much more.

Tim and Adam look at the above and say, “this isn’t enough!” The basic unit of deployment still shouldn’t be individual packages and instructions, however simplified. It should be a single deployable unit.

Entire applications should be single deployable units.

They are looking for a world that looks as follows:

  1. Engineer packages up everything – static Web files along with Web server, node application, even database, and every other reasonable upstream and downstream dependency – in a series of self-described images.
  2. Engineer delivers it with one instruction: run this one command.

(Actually, they go one step further and say, “I can run this one command myself, who needs separate operators…”)

Tim and Adam are arguing that the unit of deployment for an application is the entire application. It is not each container image, however much of an improvement that is.

In a recent application for a client, we did precisely that: the entire application, with all of its dependencies, became a single unit of deployment.

To do that, however, the individual units that compose the application – in our example, container images – must be able to know about each other and coordinate, without depending on external management.
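This is not ContainerPilot’s or Habitat’s actual mechanism, but as a minimal sketch of that kind of self-coordination, here is a Python entrypoint wrapper a container might use to wait for its database dependency to become reachable before starting the app. The hostname, port and start command are placeholders.

```python
import socket
import subprocess
import sys
import time

def wait_for(host: str, port: int, timeout: float = 60.0) -> None:
    """Block until host:port accepts TCP connections, or give up."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            with socket.create_connection((host, port), timeout=2):
                return
        except OSError:
            time.sleep(1)
    sys.exit(f"dependency {host}:{port} never became reachable")

if __name__ == "__main__":
    # Placeholder dependency and start command, for illustration only.
    wait_for("mysql", 3306)
    subprocess.run(["node", "app.js"], check=True)
```

The point is not this particular script; it is that each unit carries enough awareness of its peers that no external operator has to sequence the deployment by hand.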

The more I think about it, the more I believe that they are correct. It simply was a matter of solving the lower-level packaging issues, raising the bar to the point that we can begin to ask, “is this the best atomic deployment unit?”

Of course, an entirely different question is: are container images the future of higher-level deployment units, or will serverless, a.k.a. FaaS or Framework-as-a-Service, dominate? That is a question for a different day.

Summary

Solving the challenges of deploying a single instance frees us up to attack the problems of deploying an entire application with all of its related parts. DevOps means no longer being dependent on some infrastructure run by some operator to run your app, but being able to self-service.

How good are your deployment methodologies? Do you still “throw it over the fence”, or can you manage your apps dynamically? Ask us to help.


When Your Workers Love Their Job

How do you know when your workers really love their jobs? Of course, not all will, and plenty will leave over time no matter how great a working environment, but how do you know when workers really enjoy working for you?

A few weeks ago, I had the pleasure of visiting the Hallertau Brewery, just north of Auckland, New Zealand, on a Saturday night. It is in New Zealand wine country, a rural area, so they close at 10:00pm on a Saturday night. I, of course, did not realize how early they close, and arrived but a few minutes before closing. Nonetheless, the wonderful bartender/waitress graciously, and with true New Zealand friendliness, agreed to serve us at the counter. We had their “tasting paddle” of 5 beers, as well as a sampler of their apple cider (excellent!).

Since we were there pretty much at closing, we saw the many workers, mostly young, performing their end of day jobs: cleaning the counters, sweeping and mopping the floors, cleaning the beer taps, putting away dishes, tallying up the register, etc.

What we also saw, which surprised me, was a large group of workers and managers sitting together at one of the tables, having beers and snacks together, chatting and laughing.

Of course, anyone who works in a brewery is likely to enjoy good beer, especially after a long day of work, so that part isn’t surprising. I observed them closely for a while, and I found the following quite interesting:

  • Managers and workers all sat together in obvious ease and comfort.
  • Some people still were working, while others were relaxing, without the slightest signs of resentment among the workers.
  • When the last lingerers (including us) required some service, the off-shift workers at the table rose with goodwill and a smile, handled it, then returned to their peers.
  • They stayed to drink at their work place. A drink after work is a custom in many places, especially Britain, but a drink after work at work is not quite so common, especially in service industries.

I am certain that not all employees at the Brewery are happy, some probably want to earn more or do something different, and no one is happy with their job 100% of the time. But a team that hangs together after work, together with their bosses, in complete ease, at the work place, while some finish work without resentment, and even the off shift happily rise to assist, is a happy team indeed.



SSL Is Broken, Time to Fix It

For a long time, I have felt that SSL/TLS – the protocol that secures your communications with Web sites, mail servers and most everything across the Internet – is broken. It is broken to the point that it is fundamentally insecure, except for the most technically-aware and security-alert individuals, who also have the time to check the certificate for each and every Web site.

SSL is supposed to provide three guarantees:

  1. Confidentiality
  2. Integrity
  3. Authenticity

Confidentiality

Confidentiality answers, “how do I, as the sender of a message, know that no one but the intended recipient can read it?”

SSL uses cryptography quite well to do this job. It was a core part of the design, and ensures that when you send your password to your bank – or your update to Facebook – no one on the way can read it. That includes not only the owner of the coffee shop WiFi, but also their ISP, the core networks it runs through, your bank’s ISP, malicious actors spying on the way, and hopefully even government entities like NSA and GCHQ.

Integrity

Integrity answers, “how do I, as the recipient of the message, know that this is the exact, unmodified message the sender sent?”

SSL uses cryptography quite well to do this job. It, too, was a core part of the design, and ensures that when you send your bank a transfer request of $500 – or that super-important Facebook update – that is precisely what they receive, and no one on the way could have changed it without the recipient knowing. That includes all of the same innocent and not-so-innocent actors as before.

Authenticity

Authenticity answers, “how do I, as the person sending a message, know that the recipient really is the one I intended?”

Put in other terms, “how do I know that the Website to which I am connecting is my bank, and not someone who just copied their Web site, will steal my credentials, and log into the real one to steal my money?”

In security circles, this is known as a “man-in-the-middle”, or MITM, attack.

Here lies the problem.

Apparently, when SSL was first being created, the focus was on Confidentiality and Integrity. Authenticity was added at the last second, very much an afterthought. We might think this strange, but we have the benefit of 20 years of the public Internet behind us. At the time of SSL’s creation, all of this was very new indeed.

How did they solve the authenticity problem?

They created “Certificate Authorities”, or CAs.

CAs basically are entities we trust implicitly. Their signature is embedded in every browser we use: Safari, Chrome, Brave, IE, Firefox, you name it. When someone, say Barclays Bank, wants to have a Web site others will trust really is Barclays, they go to a CA and say, “please sign a certificate saying this certificate comes from Barclays.”

When we connect securely to www.barclays.com, they present the certificate, signed by their CA, Verisign (hence the name, “Verisign”). And, yes, I checked; they really do use Verisign. Since Verisign’s certificate is installed in our browsers – it ships with just about every operating system and browser out there – we say, “I trust Verisign, Verisign trusts that this certificate came from Barclays, therefore I trust that this is Barclays.”

In theory, that works great.

The problem? There are a lot of CAs.

In principle, this is a very good thing. After all, we want competition. When Verisign started, the cost of a certificate was prohibitive. Nowadays, you can get one for free at letsencrypt.org.

However, since there are so many, if a different CA signs a certificate claiming to represent barclays.com, and a site presents that certificate to us, we will believe it is Barclays and never know the difference!

Think it doesn’t happen?

Repressive countries like China, Iran and Russia have CAs, many of which are in your browser. All one of them would have to do, especially when you are within their borders, is intercept your connection to barclays.com and present a certificate signed by their own CA. Unless you know how to read certificates and remember to do so, you will be none the wiser.
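For the technically inclined, here is a small Python sketch of what “reading the certificate” amounts to: connect to a site and print which CA issued the certificate that was actually presented. It only tells you who signed it; nothing in the trust model stops a different trusted CA from signing a certificate for the same name, which is exactly the problem.

```python
import socket
import ssl

def show_issuer(hostname: str, port: int = 443) -> None:
    """Print the subject and issuer of the certificate a server presents."""
    context = ssl.create_default_context()
    with socket.create_connection((hostname, port)) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
            print("subject:", dict(pair[0] for pair in cert["subject"]))
            print("issuer: ", dict(pair[0] for pair in cert["issuer"]))

show_issuer("www.barclays.com")
```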

CAs as authenticity are broken… fundamentally.

Why haven’t people fixed it? I am hardly the only one to notice it.

The problem isn’t a technology problem. There are many proposed solutions, some better than others.

The problem is a market problem. There are, literally, billions of devices out there, all with the SSL CA-based trust algorithm baked right in. Any new solution requires replacing billions of legacy devices.

Will it ever be fixed? Definitely. Eventually someone big enough, with enough heft to hit the massive legacy problem, will do it. It will be difficult, but it will happen.

Until then, “Houston, we have a (security) problem.”


Does Open-Source Increase the Value of Talent?

For the last few weeks, I have been trying to unravel the connection between the value of talent and open-source.

Inevitably, some products have a high level of importance but few people who truly understand them. This creates high demand with low supply, increasing the value of those people. But that isn’t special to open-source; it is true for any product with high demand + low supply. These just happen to be open-source. For example, this morning a friend of mine used Hadoop as an example of a product that is very important to many companies, yet there are very few people in the world truly capable of digging into the core code itself and fixing it. The cost of those people is quite high, and they refuse to work on it if they cannot contribute back to the community.

Since the cost of talent is largely driven by the supply and demand for the specific skills – infrastructure engineering or AWS or Java – and not by whether the product is open- vs closed-source, we cannot use those numbers as a way to evaluate the value of open-source talent.

We can, however, look at an acquisition.

The catalyst for this thought process was the acquisition of Joyent by Samsung a few weeks ago. At heart, there really only are a few reasons for a technology acquisition:

  1. Customers
  2. Competition
  3. Product
  4. Talent

Let’s work through each of those in turn for this acquisition:

Customers

You have a customer base that I want. I believe they will be more valuable inside my company than yours due to some mutual benefit that occurs when they are under the same roof: economies of scale, economies of scope, more capital at a cheaper cost, etc.

Samsung hardly is hurting for customers. It is a massive business by any standard, in multiple industries. Conceivably, one could argue that Samsung is (not even) a bit player in cloud computing, as opposed to Joyent, who, while not very large or dominant, still reportedly has a decent and passionately devoted customer base. Nonetheless, this is unlikely to drive the valuation too high.

Competition

You and I are competitors. By acquiring you, I lock in a larger market share, enabling me to get better economies of scale and possibly the ability to raise prices and set standards.

Competition as a reason for this acquisition is a non-starter. Joyent posed no serious competitive threat to Samsung’s potential interest in being a cloud provider. If they really wanted to squeeze competition, they would have gone after the big players, Amazon Web Services followed by Azure and Google Cloud. I do not know if Samsung could have bought them, but buying someone that large would be the only way to reduce the competitive environment.

Product

You have a product I want. With that product and my company’s assets – intellectual, capital, brand or human – I have a lot of potential power in the market.

Product also is a non-starter. Almost the entire Joyent codebase is released as open-source under the fairly liberal Mozilla license.

Samsung did not need to buy Joyent for their products; they simply could have downloaded and used them, even customizing them to suit their needs.

Talent

With the other reasons pretty much discarded, talent – an acquisition known as an “acqui-hire” – is all that remains.

What is the value, then, of acquiring Joyent’s talent?

According to Crunchbase’s Joyent entry, they have received 7 rounds of investment totaling $126MM over the last 6.5 years. We don’t know the terms of preference and built-in coupons, but it is reasonable to assume that it would take at least double the invested amount before holders of common stock – founders and employees with ESOP – would see any money.

So the purchase price had to be at least $252MM for employees to see a penny. Considering that some have been there for years, and would not be happy without a real payday, and that in the weeks since the announcement the employees appear, by all public accounts, to be a happy lot with no serious grumblings online, the purchase price had to be at least $300MM, more likely $350-400MM or higher.

I do not know how many employees they have – LinkedIn has it in the 51-200 employees category – but that is a lot of money to pay for an acqui-hire.

Yet Samsung is a smart group. They do not throw money away. They know what they are doing.

There are two possible explanations I see:

  1. The customers are worth more to Samsung as a jumpstart for their cloud computing ambitions.
  2. The talent is worth more in the open-source world.

The second possibility was suggested to me at LinuxCon/ContainerCon by Mike Williams of Codethink.

It may not be only the talent in terms of the technology per se, but also the knowledge of how to produce and manage open-source technology. Samsung not only is hiring technology talent; they are hiring knowledge of how to build and release open-source software, how to market and sell software and services when that very software is freely available, how to product manage that software, how to run operations and finance.

Mike suggested the possibility that Samsung may not only be buying talent, but a unique type of talent with which they have very little experience. In addition to jumpstarting their cloud services, Samsung may be revving their open-source engine. That is a whole lot more impactful and strategic.


Why Customers Agree to Open-Source

Why do customers agree to open-source work I do?

In the past, we have discussed the benefits of open-sourcing your own software:

  • Reputation
  • Recruiting
  • Contributions

Recently, I had the pleasure of walking half an hour from a Tokyo train station with Matthew Garrett, who does some impressive work on core operating systems (pun intended; Matthew works at CoreOS). One of the things I asked him was why a company open-sources its entire stack. I always am eager to learn more from a person who lives it daily.

He relayed those same three reasons: they gain a strong reputation from within the community, which in turn inspires people to contribute improvements back to their code in the form of Pull Requests, which also helps with recruiting.

However, sometimes an employee or consultant does work for a company that is not core to their business, provides no competitive advantage, and yet they agree to open-source it… under employee’s name. What drives them to do so?

When I recommend open-sourcing work, I offer three options:

  1. Release it under their name
  2. Release it under my name
  3. Release it under my name with attribution/thanks “to the good offices of Company X”

Under #1, all of the usual benefits apply.

But when it is released under my name, most of those do not apply. There is some (but not much) reputation benefit when the company is listed as a contributor. They don’t recruit out of it, unless I continue my involvement with them and I know they are looking for people in that space. And the improvements in software only sometimes are pulled back into the project itself in the form of new releases.

For a company like CoreOS or Joyent or Tigera, where it is a core software product that is open-sourced, and thus they have many engineers on the project, the benefits (along with the risks) of open-source are obvious.

But in a “consultant provided this for us to solve a problem” scenario, what do they gain?

In a word: cost.

Every piece of software that is written by a company and used in their products must be maintained by them. For the core of the business – operating systems and fleet for CoreOS, container management for Rancher – they assign engineers and product managers to it anyways. They are happy to take contributions from the community, and thereby grow, but the core of the work is their internal labour.

But for anything that is not core, and even for some elements that are, companies would prefer a ready-made component they can plug in that just works. Buy (or download) it, don’t build it.

What happens when you need something to make your product work, but it isn’t core? None of these is core to many companies’ business: a jQuery or Angular or Aurelia plugin for your UI; a network or storage plugin for Docker; a network virtualization performance test (all of which I have done for clients). Yet you may need one, and if it isn’t available, you build it from scratch or take an existing one that isn’t quite there and modify it.

Now you have an ongoing component that will have bugs, need new features, need to be maintained… and all by you.

Enter open-source.

When you open-source it, you might still need to maintain it fully yourself; you probably will. But if it was useful to you, it might be useful to one, ten or a thousand other people. They in turn will find a bug and fix it or add some feature. After all, starting with yours is faster than their building from scratch too!

When you open-source it, you have a chance to avoid doing all of the maintenance single-handedly. If the open-source product really takes off, you have seeded a good community product while solving your own problem, and at a much reduced cost.

Releasing your work product has value to the community, and also to you as the releaser. It has potential benefit even when you release it through someone else.


Continuous Everything

Earlier this week, a really smart architect and I were evaluating various methods for managing software code changes, bug fixes, releases and major features. We both were in agreement with the primary direction, a popular one in nimble companies.

  1. Have a primary “trunk” or “master” branch;
  2. Any commits to “master” automatically get built and tested and ready for production (and possibly deployed);
  3. Any changes occur on “feature branches”, temporary parallel streams of development that eventually – hopefully sooner rather than later – merge into “master”, and from thence into production.

However, my colleague raised the more radical possibility: commit everything directly onto “master”.

At first, I was somewhat surprised. One mistake, one error, one complication, and all changes to production are blocked!

This may be acceptable in an old-school, “deploy every six months”, or even a slow but somewhat better “deploy every two to four weeks.” But Internet-speed companies, especially SaaS, should be deploying every day or even multiple times a day.

If a customer finds a bug in my cloud service, and we find a way to fix it in 2 hours, it should be fixed in production in… 2 hours! It is unacceptable that it wait because some other feature or release is in process.

That, indeed, is the very rationale behind the “feature branch”. To do something longer, go work on it on the side with your team, and then merge it into the primary codebase later.

And yet, the master of Continuous Delivery, Jez Humble, advocates precisely this workflow.

In a Twitter discussion, Jez pointed me to this blog post, where he describes the process of “branching by abstraction”, i.e. how to do major changes without blocking everyone else.


I won’t rehash the entire post here; for those in the business of building, delivering, operating or selling software, it is a very worthwhile read. The gist of it is:

  1. Any major product, no matter how big, can be broken down into smaller and more manageable parts.
  2. Any major change, no matter how complex, can be “fenced off”, isolated or “abstracted”, so that the changes can go on in your main code without affecting anyone.
  3. Any change can be enabled/disabled with “feature flags” so that they can go into live systems without affecting anyone until you are ready.

Solid unraveling + clean abstraction + responsible enablement = ability to work right on the mainline without affecting anyone else.
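As a minimal sketch of the third ingredient, a feature flag, here is one way the pattern can look in Python, assuming a hypothetical flag source (an environment variable): the new code path ships to master and into production, but stays dark until the flag is flipped. Real systems often use a configuration service instead, so flags can change without a deploy.

```python
import os

def feature_enabled(name: str) -> bool:
    # Simplest possible flag source: an environment variable per environment.
    return os.environ.get(f"FEATURE_{name.upper()}", "off") == "on"

def legacy_pricing(cart: list) -> float:
    return sum(cart)

def new_pricing_engine(cart: list) -> float:
    return round(sum(cart) * 0.95, 2)  # hypothetical new behaviour being built

def checkout(cart: list) -> float:
    # The new path lives on master but stays dark until the flag is flipped.
    if feature_enabled("new_pricing"):
        return new_pricing_engine(cart)
    return legacy_pricing(cart)

print(checkout([10.0, 5.0]))  # 15.0 unless FEATURE_NEW_PRICING=on
```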

While trying to understand why Jez advocates so strongly for it, I came to a realization. The same driving force behind Continuous Delivery (CD) is also behind Continuous Merge (CM) or, if you prefer, Continuous Commit (CC).

The very reason why continuous deployment reduces risk, despite many more deployments to live running systems in the middle of the business day, is because it breaks those deployments down into tiny manageable chunks.

Smaller chunks = exponentially smaller risk.

As I have written before, combining 3 changes into a single deployment does not create three times the risk, it creates at least 3² = 9 times the risk!

  1. It is nearly impossible to know how the various changes will interact with each other.
  2. It takes longer to recognize that the post-change system is misbehaving.
  3. It takes much longer to discover which part or parts, alone or in combination, are causing the misbehaviour.

On the other hand, when a deployment is a single change, you immediately know if that change is misbehaving (or causing other parts to misbehave), and you immediately know what part to address to fix it.

In an article over a year ago, I quoted another colleague who coined the term “release spiral of death” for companies that are shaken by a release and so wait longer before the next one, thus increasing risk, leading to more painful deployments, leading to longer waits, spiraling out of control.

The same risk-reducing / speed-inducing idea – small, rapid changes are far easier to manage and reason about, and therefore far less risky, even cumulatively, than one larger change, even with “all hands on deck” – applies to the pre-deployment software changes themselves.

Merges between feature branches and master are painful. They take time and effort, often among people who finished working on the relevant areas days, weeks or even months ago.

By continuously committing or merging into master on at least a daily basis, preferably more often, CC/CM gives you smaller changes to manage and a much easier time addressing smaller issues when they arise.

Summary

The right development, testing and deployment processes can:

  • Reduce your risk of service disruption;
  • Lower the impact of that disruption and time to repair;
  • Diminish your stress of deploys;
  • Increase productivity of your engineering, operations and support staff;
  • Improve customer satisfaction;
  • Raise your top and bottom lines.

How quickly do you iterate? How quickly would you like to? Ask us to evaluate your current world and bring you to a new one.


The Blessings of Liberty for All Mankind

On this day, 240 years ago, the visionary gentlemen of the 13 colonies pledged their fortunes and their very lives to the cause of liberty.

Given the incredible forces arrayed against them – the British Navy was the dominant sea force of the time, and the British Army dwarfed the ragtag militia forces which combined to make the Continental Army – their audacity and vision were unbelievable.

Those 56 delegates to the Continental Congress were the co-founders of the greatest and riskiest startup of all time, one in which the battles were drawn not with marketing, product and engineering but with artillery and infantry and navies, in which loss meant not bankruptcy and “working for the man”, but brutal repression and many lives lost.

These founders truly believed in making the world a better place, not the tired line spouted by every startup that creates a new gaming app or development language, but creating real liberty, true representative government that is by its very nature constrained from exercising undue control.

These founders believed that the blessings of liberty would flow not only to the 13 colonies and, eventually, states, but to all mankind by shining the light of liberty across the globe.

We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness.–That to secure these rights, Governments are instituted among Men, deriving their just powers from the consent of the governed, –That whenever any Form of Government becomes destructive of these ends, it is the Right of the People to alter or to abolish it, and to institute new Government…

Not all Americans, or all Colonists, or even all British (or “Brittish”) citizens, but all men, all created in God’s image.

These privileges are not granted by sufferance of the ruler or whim of government, but they are “unalienable Rights”.

Government that becomes oppressive, even when claimed for the good of the governed, not only should be changed but is inherently illegitimate; to fight against such government is not wrong or immoral but is “the Right of the People to alter or abolish it.”

The creation of truly liberal – grounded upon individual liberty – government has been a blessing not only for those lucky enough to reside upon the shores of the American continent or carry her citizenship, but for all mankind.

The ideal of a limited, liberal government eventually inspired countless nations around the world to become free. We live in an era when the overwhelming majority of nations are mostly free, a situation I would not have believed in my youth, when free societies were but a handful: the United States and Canada in the Americas; Australia, New Zealand and Japan in Asia; the United Kingdom, Ireland, Spain, Portugal, France, West Germany, the Benelux countries, Italy, Greece, Switzerland in Europe; Israel in the Middle East… and that was it.

Yet when the United States was founded, not one country was free. Even Britain could not be considered truly free; try criticizing King George in 1780 and see how far your freedom of speech went!

Over a period of almost two and a half centuries, the ideal of liberty has grown the moral strength, then the martial strength and the financial strength of the United States, serving as a beacon, a “shining city on a hill”, for the rest of mankind. Eventually, it brought down the repression of the great evil Soviet Union. In time, it may do the same for the remaining repressions of China and most of the Islamic states, whether religious totalitarians like Iran and Saudi Arabia, or the secular repressive states like Syria and Egypt.

It is not the financial strength of the free capitalist system that has served as a role model for other states, although it certainly has been an influence. It is doubtful China would have opened up as it has over the last several decades without seeing how the system enriched the Western democracies.

It is the moral strength of the free system – the guarantee of individual rights of speech, association, religion and property – that gave billions the hope of becoming free, and that gives hope to the many more who either seek it today or eventually will.

It is not the iPhone or the Hyperloop, Levi’s or Carrier air-conditioning, Disney or Universal Studios that inspire and give meaning to American liberty. These are, indeed, inevitable offshoots of that very liberty: it is the freedom to think, speak, associate, move, own property and innovate that leads to the incredible growth and blessings of prosperity.

It is the freedoms themselves that are the very end, not the means to the prosperous end.

In the last two weeks, we have seen the very British nation that once oppressed the American colonists express those same sentiments, when they decided that their “Form of Government” became “destructive of those ends”, as only they, the citizens, the People, had the right to determine them, and invoked their right to “alter or abolish” said government.

Fortunately, American liberty has since inspired the British as well and they changed government via referendum and resignation, not rebellion and bloodshed.

Many British citizens are unhappy with the Brexit choice and feel the choice was unwise. Indeed, in 1776, many American colonists also felt that the rebels were unwise and foolish, or that the British government deserved the colonists’ loyalty.

Whether the British were right or wrong on June 23, it is the same animating passion for freedom, learned from the American colonists and held strong for 240 years, that led the British people to decide that they, and only they, had the right to decide whether they were a British state inside the European Union or a fully sovereign British country.

After 240 years of liberty and freedom – with many notorious and terrible episodes along the way perpetrated by both the people and the government – the United States of America has been and remains the beacon, the role model for the liberty of the individual and the moral good, leading to individual benefit and collective good, for all mankind.

May the blessings granted through visionary founders – the liberty to state any opinion no matter how odious, even if it causes hurt and aggression; to associate with people no matter how insubordinate; to practice any religion no matter how strange as long as it does not harm others; to hold true title in your own property; to limit government over and over again – may these liberties grow and inspire and eventually free all of mankind, and continue to keep them free for all eternity.


When Robots Replace Burger-Flippers and Lawyers

Can robots replace burger-flippers? How about lawyers?

Tools have been around for thousands of years, making a human job faster and easier; try banging a nail in without a hammer.

Machines, complex combinations of parts that are either human-operated or human-started, have existed for far less than that. With a Gutenberg press, you can print hundreds of copies of a page with just 1-2 people operating the machine. A washing machine will wash your clothes after you just press the right buttons.

What are robots?

Robots add computers, with storage and algorithms, to machines so they can become autonomous.

For decades, we bought vacuum cleaners from Hoover or, more recently, Dyson. They make it easier and faster to clean floors than sweeping. We exchange an hour of sweeping labour for 20 minutes of labour plus $100 in capital to buy the machine.

iRobot’s Roomba adds knowledge and rules. We don’t tell it where to go, we just tell it, “do this floor” and it does. We trade the last 20 minutes of labour for a $500 machine.

Yet, there always has been a sense that the basic jobs replaced by robots were never those that required too much intelligence. After all, how hard is it to know where on the floor it is dirty? It is a low-intelligence job, one suited to a robot.

However, it turns out that many jobs really do not require much higher intelligence.

Making a good burger is an easy example. While those of us who grill take pride in our artist’s sense of making it “just right”, restaurants are looking for the right burger for the lowest cost. It has to be just the right quality for the given establishment, but for the same quality, pride in artistry gives way to lower cost. Besides, artists are notoriously inconsistent, and a business owner wants nothing more than consistency.

According to Eater, the burger-making robot is coming to an Eatsa restaurant in San Francisco.

Low-cost or minimum-wage employees can be too expensive.

To some degree, this is unsurprising. San Francisco has a high minimum wage, increased today from $12.25 to $13 per hour. When your labour is expensive – and rising annually – capital looks increasingly appealing. But it is happening in other jurisdictions as well. If the R&D investment in developing the robot can be reused not just in San Francisco or California, but across the country or globally, the return gets higher.

However, it isn’t just lower-wage burger cooks who are at risk.

One would think that lawyers, with their many years of training and hard-won experience, are immune to replacement. Perhaps not.

A 19-year-old British student created a chatbot that managed to overturn over 160,000 parking tickets, with zero additional human interaction.

The bot simply did what most lawyers do:

  1. Gather information from the customer
  2. Follow the rules – as complex as they may be – for finding exemptions
  3. Submit the exemption
  4. Enjoy the fruits of victory

As humans, we have a tendency to confuse judgment with knowledge. Most lawyer tasks are no different from a burger-flipper’s: they have knowledge and apply rules. In the case of lawyers, it takes years of schooling and experience to gain sufficient knowledge, as there is a much greater fount of knowledge necessary. Nonetheless, it is just knowledge applied to a (possibly complex) flowchart.

Knowledge plus rules of a flowchart can be replaced easily with a computer. Applying rules (algorithm) to inputs (knowledge) is precisely what a computer is designed to do.
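A minimal sketch of “rules applied to inputs”, loosely in the spirit of the parking-ticket bot; the exemption rules, field names and answers are invented purely for illustration.

```python
# Each rule encodes a piece of "knowledge": a condition and the exemption it supports.
RULES = [
    ("signage_missing", "No clearly visible signage at the location"),
    ("meter_broken", "Parking meter was out of order"),
    ("loading_within_limit", "Loading or unloading within the permitted time"),
]

def find_exemptions(answers: dict) -> list:
    """Apply the rule flowchart to the customer's answers."""
    return [reason for key, reason in RULES if answers.get(key)]

answers = {"signage_missing": False, "meter_broken": True}
for reason in find_exemptions(answers):
    print("Grounds for appeal:", reason)
```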

Does this mean that many more jobs are at risk? Definitely.

Does it mean that humans are entering a future of scarce jobs? Definitely not.

It means that tasks that require no judgment, only knowledge plus rules, however complex they may be, can be performed by automated systems, i.e. computers and robots.

It means that a burger that used to cost $10 will now cost $7 or $8, while providing better service and higher profit for the business owner. It means that getting a lawyer to fight your parking ticket at $75-100 will become a relic of history as you pay $10 for a chatbot to do it for you more reliably, quickly and consistently.

It means that you now have an extra $2-3 to spend from that burger and an extra $65-90 from the lawyer, not to mention higher surplus profit from the burger shoppe owner and chatbot operator.

It means many more people buying burgers or contesting tickets, as it becomes cheaper to do so.

What exactly will you spend that money on? It doesn’t matter. Whether you save it, which means higher investments, or spend it on more services, the economy grows. This leads to more demand, but especially for services that require judgment. The lawyer who tries the complex case, the chef who designs new meals, the architect who designs new building or software all become more valuable as demand for all of those increases.

Many of those warning about the job-threatening rise of the robots confuse knowledge with judgment. Knowledge is always a necessary prerequisite for judgment, but knowledge alone is insufficient for payment. Knowledge plus judgment, real skill, will continue to be the valuable commodity. More opportunities will arise for using judgment, which all of us possess to varying degrees in different fields.

The real reason a great CEO gets paid millions is not for his or her knowledge, but for the ability to use that knowledge to make excellent judgments. Indeed, the smartest executives I have ever advised as a consultant were those who knew they had not the most knowledge – they paid me to bring them much more knowledge! – but those who knew how to synthesize the knowledge and make great judgments.

The future is full of robots, machines that perform knowledge-plus-rules-based actions, leaving true judgment to humans, who will find ever more opportunities to use that knowledge. Welcome it.
