
Fibre cables, exchanges and perceptions

November 25th, 2014

Financial trading houses are always looking for a market advantage, no matter how small. It shouldn’t surprise us; when you are dealing in markets that move billions of dollars in short time frames, a few milliseconds of advantage can make all of the difference.

Because trading firms are incredibly sensitive to any advantage – or, more correctly, to feeling left behind (it is all about the feeling, isn’t it?) – many trading centres and exchanges have very strict rules about what they will provide. Two of those rules came to light in a recent discussion I had with a friend who has been in financial services for a very long time.

The Chicago Mercantile Exchange, the CME, or as it is known, simply, “the Merc”, provides a lot of information on markets to trading and investment houses. In order to avoid any advantage to one firm over another (other than the Merc itself, of course, but that is a different story entirely), there are many rules they follow. Here are 2 of them.

  1. All equipment will be provided by the end customer. Thus if Goldman Sachs or Morgan Stanley or any other small or large firm wants a data feed from CME, they install their own equipment in the Merc’s data centre. If your server is slow, or suddenly reboots, or has a bad fibre card, it is your responsibility. You can invest as much or as little as you want in that gear; it is your choice.
  2. Every fibre cable connecting the Merc’s feeds to customer equipment will be exactly the same length. I think it is 100m, but the exact number is irrelevant. CME’s staff will loop cables and turn them around (but not bend them!) so that every cable is exactly the same length.

To some extent, this is a bit of overkill, especially the second rule. The speed of light is 3×10^8 m/s, or 300,000,000 m/s. If one person’s cable is 80m long and the other’s is 100m long, it may be a 20% difference in cable length, but the difference in time of arrival will be 20/300,000,000 seconds or 6.7% of a microsecond, or 67 nanoseconds! This is 6.7 shakes, and simply will not make any difference.
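For the skeptical, the arithmetic is easy to check in a few lines of Python – a minimal sketch, using the round 3×10^8 m/s figure above (light in real fibre travels closer to 2×10^8 m/s, which stretches the gap to about 100 nanoseconds – still negligible):

```python
# Sanity-check the cable-length math, using the text's round figure.
# Light in actual fibre travels closer to 2e8 m/s, which only makes
# the difference slightly larger and just as irrelevant.
C = 3e8  # speed of light in m/s

def propagation_delay_ns(length_m: float) -> float:
    """One-way propagation delay over a cable, in nanoseconds."""
    return length_m / C * 1e9

delta = propagation_delay_ns(100) - propagation_delay_ns(80)
print(f"Delay difference for 20m of extra cable: {delta:.1f} ns")
# -> Delay difference for 20m of extra cable: 66.7 ns (6.7 "shakes")
```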

The real reason for this, then, is not equal time to equal customers, who could not even detect a difference between 50m and 100m, but perception protection for the Merc itself.

Since every customer knows they are getting the exact same cable length as everyone else, no one has any reason to complain. When some firm gets into serious trouble, the first thing it will do is look to apportion the blame elsewhere; the Merc makes a nice juicy target. By clearly stating that everyone gets the exact same cable and length, the Merc eliminates the need to fight off a lawsuit or regulatory investigation.

Of course, the CME would win every lawsuit about feeding data differently by showing the simple math above. But the entire process would be a distraction and expensive. A little rigidity at fairly low cost makes it all go away.

To quote Benjamin Franklin, an ounce of prevention is worth a pound of cure.

Does Technology “Suck”?

November 24th, 2014

Last week, I was having lunch with an old friend. We worked together many years ago building some pretty cool technology at a very large financial services firm. Each of us has over 20 years in the technology industry. He has continued to manage infrastructure, and is doing some pretty impressive advanced infrastructure management. Both of us have seen the big company and the startup, and both of us have experienced a broad range of technologies – consumer, business and enterprise; infrastructure and applications; hardware and software – and we both truly love technology and the changes it brings to society.

So imagine my surprise when he said: “All technology sucks.”

But as we discussed it, and as I thought about it more deeply, I realized that technology, precisely because it seems so magical, actually does, well, create difficulties.

Think about the very basic telephone. For decades, we assumed “dialtone service,” which meant that when we picked up the phone, the dial tone would always be there, whether from AT&T, Bell Canada, British Telecom, Telstra or Bezeq. Of course, under the covers, telco engineers and managers exerted enormous efforts and spent great treasure to provide us with that dialtone service. Part of that stability was a near-complete lack of change. From the removal of the switchboard operator in the 1960s through the spread of touch-tone dialing in the late 1970s and 1980s, nothing visible changed in telephone service. Nearly 20 years, and nothing visible changed!

In today’s technology world, that would be unthinkable. From the first commercial cellphone service some 30 years ago to LTE today, from those first brick phones to an iPhone or Moto X, the progress has been amazing.

Of course, the telco research arms, along with a few other labs like Xerox PARC, were producing great advancements; they just weren’t deploying them at any great speed.

We now live in a society where nearly every advance can be and often is brought to the market. The speed of change brings great benefits, but comes at some price to stability: we have come to expect dropped calls, misbehaving servers, crashing Web applications, and the infamous Blue Screen of Death (BSOD) on Windows laptops.

So our technology works as expected 90% of the time (probably quite a bit more) and fails 10% of the time. Given the choice, almost everyone would prefer an iPhone with its apps and LTE connectivity that works 90% of the time and fails 10% of the time over an old rotary line phone that works 100% of the time. Our lives are better that way.

But what we really want is an iPhone with its apps and LTE Internet connectivity that works… 100% of the time. And when we fall into the failure zone – as we inevitably do – we complain that “technology sucks”. We, in turn, press our providers to give us more and more of that missing 10%. They, in turn, press their engineers and managers, who work long and hard to solve obscure problems that will budge the needle.

Most of the time, it is a fascinating challenge, but sometimes, when the pressure builds, when that 10% is at just the wrong moment, as my friend said, “all technology sucks.”

The Death of the iPod

November 19th, 2014

A short while ago, I was looking at buying an iPod for one of my kids. It was a pretty straightforward transaction. My kid likes music, while an iPod is great for carrying lots of music around and listening to it, especially on road trips. As parents, we encourage our kids to listen to music, preferably a broad and diverse selection.

But I didn’t. And the reason is smartphones.

It wasn’t that long ago – a decade exactly – that I sat in a Duke MBA marketing class in Bangkok. Our professor asked us to explain, in just one word, why Apple’s stock had multiplied over the previous few years. Everyone called out, “iPod.” Apple was a computer company, but had reinvented itself as the music device, or iPod, company.

Ten years later, the music player market is, for all intents and purposes, dying, and it was Apple that launched the first broadside. The iPod company has become the iPhone company, and killed its erstwhile business on the way. Sure, iPods still show nearly $1BN in annual revenue, but that number is shrinking rapidly.

When Apple first released the iPhone, the cost of an iPhone was high enough to warrant a lower-priced but still profitable music player, the iPod. As flash storage prices dropped and the iPhone took off, Apple was able to release the iPod Touch, or iTouch, essentially an iPhone without cellular radios. The distinction still holds. As of today, an iTouch with 16GB of storage costs $199, while a 16GB iPhone 5S (not exactly comparable, but close enough) costs $549. One step lower, an iPod Nano with 8GB costs $149.

With cellular capability costing an extra $350, plus monthly mobile fees, it can make a lot of sense to buy an iPod Touch if all you plan on using it for is music, and it makes a great gift.

The music player market may be shrinking, as people who would buy a smartphone and music player opt for an iPhone or Android, but the lower priced music player still had value.

And then came the cheap Androids.

A high-end Android is not that different in price from an iPhone. A Moto X, Nexus 5, Samsung S5, or LG G3 is cheaper than a comparable iPhone, but not massively cheaper.

But Xiaomi, OnePlus and now, especially, Motorola are releasing low-cost high-quality Android smartphones. The Moto E can be bought, unlocked and with no contract, for around $99; the Moto G, the next level up, is available for $179.

Many people will pay something of a premium for an iPhone over an Android (as the 3-4 hour lines I saw at 3 Apple stores over the last 3 days indicate). If one has to choose between a $199 iTouch music player, or even a $149 iPod Nano, and a $350-or-more Android phone, a music player is still compelling to many consumers.

But almost no one will spend $199 (iTouch) or $149 (Nano) to get just a music player when $99 (Moto E) or $179 (Moto G) will purchase a better-quality smartphone.

The music player market is about to be eviscerated. Who would have thought left-for-the-dead Motorola would help do it?

Open-Source Microsoft Part II – Seeds for the Future

November 17th, 2014

In the previous article, I examined Microsoft’s announcement that it will open-source .NET, its impact on customers, and its more important impact on Microsoft’s business lines. In sum, I believe that Nadella may be trying to change the culture at Microsoft from one in which it depends on customers being forced to stick with its Windows line to one in which it is driven by market forces to develop products and services that customers actively want to buy. It will hurt in the short run, as many customers with applications written on .NET switch to non-Windows platforms but stay with the same apps, but it will save the company in the long run by giving it back its “win-the-market” competitiveness and innovation.

There is, however, another angle to the story: tech firms. Tech firms, and especially startups, completely and totally eschew Windows for production. Yes, they will use Windows on the desktop – although I heard yesterday that Google bans Microsoft across the board – and in the IT server room for Exchange and maybe SharePoint. But they would not be caught dead running production applications, let alone building their own next-generation offerings, on Microsoft.

This is true for large established firms such as Facebook and Google, but even more so for the aspiring future companies, the startups. Not only is Microsoft too expensive – and despite Microsoft’s startup programs, almost no smart company is lured by “free samples” that tie it in for the long haul – it is a kiss of death. Smart engineers in Silicon Valley, New York, Los Angeles and Tel Aviv do not want to work on Microsoft. They want Rails, or Node, or Python; but Microsoft? “Sorry, I have a better offer.”

As a matter of fact, many venture capitalists will not fund companies built on a Microsoft stack. I have personally assisted companies in transitioning from an early Microsoft stack (servers, database, applications or some combination) to open-source. The motivation was as much, “we are not getting serious funding from VCs” as “we need to scale our infrastructure and don’t believe we can do it on Windows.”

There are many startups; very few will survive to adulthood. But many of their engineers and executives will go on to found or work in other startups, or move inside existing technology firms or the IT divisions of non-technology companies – health care, retail, financial, travel, etc. If this generation is anti-Microsoft wherever it goes, Microsoft’s immediate future may be secured by funneling .NET customers onto Windows platforms, but its long-term future is very questionable. Eventually its current champions will move on, and who will pay the licensing fees then?

By opening up .NET and putting it on multiple platforms, Microsoft is exposing it to the inspection and hacking of thousands of smart engineers across the world, working in very different and more diverse environments than Microsoft’s research centres in Redmond, WA, or Herzliya, Israel. Microsoft is trying to make .NET interesting to smart and, in the future, influential people.

As with existing customers, this is a very big risk. If .NET is poorly written, insecure, or bloated, not only will it be avoided by much of the larger engineering world; it will be mocked. But over time it may be improved, and Microsoft respected for opening it up.

Will it be sufficient? Will Microsoft’s image as a platform to be avoided by dynamic engineers change as it becomes open? I don’t know. It faces a long uphill climb, but then again, Apple was almost wrecked when Jobs took over, and was a niche but decent company before the iPod launched. Stranger turnarounds have happened.

Whether exposing Microsoft’s insulated people to market forces to force upon them a culture of winning business, or making it interesting to the current and future technology leaders, Nadella has to be respected for taking serious risks with the present of the company to build its future.

Open-Source Microsoft? Will It Help?

November 13th, 2014

Yesterday, Satya Nadella, Microsoft’s CEO, announced that Microsoft will release the core of .NET, its application development platform, as open-source. In addition, .NET will be ported to run on additional platforms, primarily Mac OS X and Linux.

For Microsoft, the ultimate closed-source and proprietary stack company, this is an earth-shattering move.

Developers have long had a choice of platforms on which to write applications. Java and its variants, Ruby on Rails, Node, Python, PHP, lately Go and Dart, the list goes on and on. Just about every one of those languages has 2 key features:

  1. It can run on almost any platform: Linux, BSD, Mac OS X, proprietary vendor Unix variants from Sun, HP, IBM, etc., and, yes, even Windows.
  2. The language is open-source.

Since the language creators are not operating system vendors, they want to encourage the broadest adoption possible, which means making the language available, well, everywhere. Further, since those languages are open-source, someone who wants to adopt a language but have it run on a different platform can help make that happen. If John releases a new language, Flyer, on Linux, but Jill really wants to use it on Windows, she can download the source code and tweak it until it works, at which point she releases it back to the community.

.Net, on the other hand, was created by Microsoft, its source code a closed, proprietary secret, and it runs only on Windows. I would argue that .Net was released only to keep application developers on Windows. Say a company has a large farm of Windows servers and wants to write an application. If only Java is available, and they write the application in Java, then they will continue to run a Java application on Windows servers… until a new CIO or VP of Infrastructure notices that it “runs just as well on Linux, and we are due for a hardware refresh, and look how good Linux is, and how much we could save on licensing, and our apps already run on it!” Suddenly, Microsoft loses a very important stream of revenue. On the other hand, if the application was written in .Net, then when the hardware refresh comes, they would have to rewrite the entire application to move off of Windows.

In short, .Net is a gateway drug to Windows Server, and an important part of driving operating system license revenues.

So… why in the world would Nadella remove the gateway feature of that important drug? Does he not risk current customers who are locked into .Net suddenly saying, “hey, it works on Linux, so…”?

I think Nadella is consciously taking an enormous risk in reinventing Microsoft, but a necessary one to change the culture of Microsoft.

If You Love Someone, Set Them Free

Microsoft’s main businesses are operating systems (Windows), office/productivity software (Word, Excel, PowerPoint), and server software (Exchange).

While many companies voluntarily purchase productivity software and server software, in the application development space, many feel trapped with Windows. They built applications on .Net, or want to, but would prefer not to run Windows server.

  1. Licensing is expensive compared to free Linux or low-cost supported Linux.
  2. Windows is harder and therefore more expensive to manage.
  3. Windows is weaker in management capabilities and features, and slower to get new ones.

Trapping your customers is a very bad idea. It leads to resentment, and a strong desire to dump the vendor as soon as is practicable. The next time such a customer has the need to significantly invest in application development, they will avoid anything Microsoft-related, just to get back at them.

Setting customers free to run their .Net apps anywhere may lead to an initial exodus off of the Windows cash cow, but it will also force Microsoft to do one or both of two things:

  • Bring its Windows platform up to a competitive level to make it worthwhile for customers to pay for it.
  • Create additional .NET development and management tools that customers are willing to pay for.

By removing the .NET “crutch”, Nadella is forcing his product, marketing and development teams to create value that customers actually want to pay for. This is a very risky strategy, but ultimately one that is necessary for Microsoft’s long-term viability in corporations.

Nadella isn’t opening Microsoft’s business; he is changing the culture by throwing his .Net and Windows divisions into the forges of the open marketplace to make them really build customer desire, and thus long-term Microsoft value.

In the next article, I will explore the other side of the coin: Microsoft and startups.

Price and Plagiarism

November 11th, 2014

The Web is a tool. Like all tools (including swords), it comes with two edges. While in the short term it appears to give advantages in one direction, over time it can surprise you and cut both ways.

Plagiarism

When my wife, the rather brilliant Dr. Deborah Deitcher, was first teaching college students at Manhattan College (which is not in Manhattan, but in Riverdale), she warned them very clearly: she understood the Web as well as they did; she understood the temptation to plagiarize; she explained what plagiarism was and what it was not – lack of knowledge was not going to be an excuse – and made clear that it would not be tolerated.

More than anything, the Web has enabled an explosion of content. The sheer amount of content out there, combined with (and driven by) its unprecedented availability, has put nearly free access to just about all of human knowledge at each person’s fingertips. That ease creates a great temptation to plagiarize.

At the same time, the sheer amount of information has necessitated algorithms to process it. Those algorithms, in turn, lead to tools that match quotes against sources – “plagiarism checkers” that make catching plagiarism easy. First came the abundance of knowledge: advantage, cheaters. Shortly after came the tools to manage that knowledge: advantage, educators.
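The core of such a checker is conceptually simple. Here is a toy sketch in Python – not any real product’s algorithm, just an illustration – that flags overlapping word n-grams between a submission and a source:

```python
def ngrams(text: str, n: int = 6) -> set:
    """Return the set of n-word shingles in a text, lowercased."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(submission: str, source: str, n: int = 6) -> float:
    """Fraction of the submission's n-grams that also appear in the source."""
    sub = ngrams(submission, n)
    if not sub:
        return 0.0
    return len(sub & ngrams(source, n)) / len(sub)

# A high score means long verbatim runs shared with the source.
print(overlap_score("the web has allowed for the explosion of content online",
                    "the web has allowed for the explosion of content"))  # 0.8
```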

And yet, unsurprisingly, every semester at least one student was convinced that his or her ability to use Google would somehow beat the professor’s ability to do the same – plus an automated plagiarism checker that did it for her.

Students were convinced the Web gave them a tool to beat the educational staff and avoid the hard work necessary to learn; they quickly learned that the tool cut both ways.

Price

Since the computerization of the airline industry, airlines have used demand-based pricing to modify their fares by the second. You could sit next to a person, in the same class seat, using the same routes, dates and times, but your ticket cost $200 and your neighbour’s cost $400.

However, airlines tightly controlled their inventory, and either dominated specific routes to eliminate the impact of competition or “collaborated” with their competitors through public signalling to avoid it. These tools were therefore not as readily available to the broader market of retailers, who operate in a far more competitive environment.

The Web has changed much of that.

Companies selling online – Amazon, WalMart, BestBuy, everyone – use similar algorithms to present different prices at different times, through different channels, to different people. The same toothpaste might cost $1, $2 or $5, depending entirely on the day, date, time and source of the user. The algorithms and digital presentation give retailers the ability to modify prices to maximize their profits on a customer-by-customer and millisecond-by-millisecond basis.
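No retailer publishes its pricing algorithm, but the mechanics are easy to imagine. A deliberately simplified sketch, with every signal and weight invented purely for illustration:

```python
def quote_price(base: float, hour: int, channel: str, repeat_visits: int) -> float:
    """Toy dynamic-pricing rule: adjust a base price by time of day,
    sales channel, and how often this shopper has viewed the item.
    Every multiplier here is made up purely for illustration."""
    price = base
    if 18 <= hour <= 22:           # evening browsing peak
        price *= 1.10
    if channel == "mobile_app":    # channel-specific discount
        price *= 0.95
    price *= 1 + min(repeat_visits, 5) * 0.02   # perceived interest
    return round(price, 2)

# The "same toothpaste" at three different moments:
print(quote_price(2.00, hour=9,  channel="web",        repeat_visits=0))  # 2.0
print(quote_price(2.00, hour=20, channel="web",        repeat_visits=3))  # 2.33
print(quote_price(2.00, hour=20, channel="mobile_app", repeat_visits=5))  # 2.3
```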

Short-term: advantage retailer.

But, as for plagiarism, so for pricing, the tool is a double-edged sword. Just as retailers can use the Web to modify pricing, it isn’t too hard to use the Web to compare pricing across retailers, and even across time within a given retailer.

Over the weekend, the Wall Street Journal released their “Christmas Sale Tracker” (for those who cannot access the article through the paywall, the tracker itself is here), which tracks the prices of 10 hot gift items from early November, through Thanksgiving, Black Friday, Cyber Monday, and the Christmas season.

It is fairly easy to see that prices for the same item at the same retailer, let alone across retailers, vary widely… and every consumer can see it as well. This type of tool significantly reduces the leeway retailers have to raise prices. And unlike in the airline industry, where competition is slim, allowing tighter price management and easier signalling, the retail industry is broad and fiercely competitive.

Initially, the tools of Price and Plagiarism give one side the Pride to believe they can Prejudice the outcome, but over time the very same technologies and sometimes even the exact same tools balance the other side. Pride indeed goeth before the fall.

Gas Stations, Electric Cars and Changing Minds

November 10th, 2014

Managing change is a process, something between a science and an art, taught in all respectable business schools and management courses. There really are 2 reasons for teaching it:

  1. Management: If you are managing a team, a division or a company, you need to understand the emotional and psychological blocks to change, and what it will take to get employees and partners to support change.
  2. Marketing: If you are responsible for marketing a product to consumers, or creating an entirely new product, you need to have a solid understanding of what inertia keeps customers in place and what it will take to change them.

In the end, it boils down to the same principle. People will not change their behaviours unless they have at least one of two “change motivators”:

  • Their current pain must be high enough to entice them to switch
  • The new product/process must be exciting enough to encourage them to switch

Either it hurts and you want better, or everything is fine but you are wowed by how much better it could be. Prime examples include the iPhone (exciting), and the vacuum cleaner (pain).

When it comes to electric cars, there is a complex process of purchasing, driving, and refueling that occurs every day among billions of people worldwide. To get them to change, an alternative would need to cover one or both of the change motivators.

  • Pain: Most Nissan Leaf or similar buyers purchase the car to manage emotional pain. They feel very strongly about the pollution caused by a tailpipe or oil refining. Buying a small electric car relieves that pain for them.
  • Exciting: Tesla has succeeded in making their cars sexy and desirable, the iPhone effect. Even those who do not feel any pain at all with the current car distribution model desire the Tesla experience.

However, neither of these will be sufficient to lead to mass adoption. The number of people who feel actual pain due to knowledge of pollution is infinitesimally small, while very few people will be “excited” enough by a cool Tesla car to change their entire car purchase and usage habits (especially at Tesla’s pricing).

What will it take to lead to the change? Some combination of the change motivators, of course.

And here, I think, is where the pain is: the gas station. No matter how you look at it, no matter how much gas stations have added self-service, drive-through coffee or any number of other distractions, car refueling is a waste of time. It steals 10-15 minutes 1-2 times per week from the average person’s life. If someone doesn’t have a gas station in their normal driving path, add 10-15 minutes to get to/from the gas station.
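To put a number on that waste, a quick back-of-the-envelope calculation using the ranges above:

```python
# Annual time at the pump, using the 10-15 minutes, 1-2 stops per week
# figures above.
minutes_per_stop = (10, 15)
stops_per_week = (1, 2)

low = minutes_per_stop[0] * stops_per_week[0] * 52 / 60
high = minutes_per_stop[1] * stops_per_week[1] * 52 / 60
print(f"Hours per year at the gas station: {low:.0f} to {high:.0f}")
# -> Hours per year at the gas station: 9 to 26
```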

At the same time, cars sit idle far more time than they are used, which is the motivation for car or ride sharing apps like Lyft and Uber. Thus, I coin “Avi’s Theorem of Car Change”:

For an electric car to work, it must reduce the amount of time wasted on fueling each week without reducing the distance traveled.

Let’s take this apart:

  1. “Reduce the amount of time wasted on fueling,” means that the gas station must go away. The car must have a way of “filling” when idle. This could be in a parking spot, in a garage, at the office, or next to someone’s home. And it has to do this without complex additional work – including finding a charger and plugging it in. I park my car, walk out of it and into my office, and the car immediately starts “filling”.
  2. “without reducing the distance traveled.” The one time people are willing to spend 10 minutes filling up is on a long trip; they expect to. On a very long drive – greater than a single tank’s worth – the return on the time spent refueling is felt immediately, so people are willing to spend it. A transportation alternative that either cannot go the distance because of charge limitations or requires significant downtime during the “refuel/recharge” process will not be accepted into the mass market.

Both Better Place (RIP) and Tesla tried to solve the distance question with battery swaps, aiming to make long-distance travel not significantly worse, in terms of time wasted, than petroleum-based refueling. That solution was necessary but not sufficient. Making an electric car as easy to drive long distances as a petroleum car makes it reasonably acceptable, but will not get the average consumer to change. It has to be better.

Of course, price, channel (convenience of purchase) and service considerations all come into play, as they do with any product. However, until both parts of the theorem are met, any electric car will struggle greatly to take the mass market.

The Lost Interview, The Lost Promotion and The Lost Ark

November 4th, 2014

Today, I want to look at 2 short video clips.

The Lost Interview


This is a famous interview with Steve Jobs, believed lost for many years; if you want more information, imdb info is here. The interview itself is a long but great watch for anyone interested in the history of innovation and the tech markets. However, there is a short 2-minute clip on YouTube on innovation; if you have time for nothing else, watch this.

In the interview, Jobs explains that innovation dies in companies once they lock down a market, because success then becomes understood as purely a matter of sales and marketing. Thus the people who get promoted, whose opinions matter, and who eventually run the entire company are those who can sell the current product or incremental improvements on it. A faster computer may help a computer company sell more, and thus be more successful, but a new phone or clothes washer is too far removed from their context.

As Ben Horowitz explained in “Wartime CEO/Peacetime CEO,” the right type of CEO for a company depends on its current environment. Similarly, it is eminently rational for a company that more or less owns its market to focus on sales and marketing, to promote sales and marketing types, and to orient product around sales needs, primarily add-on products and incremental improvements.

Further, those companies are “executing on a known business model,” to use Steve Blank’s terminology: they know their markets and are just selling and delivering to them. The CFO and the investor markets have gross and operating margin targets; significant investment in radically new areas would destroy those margins. Everything militates against promoting radical product people and for promoting incremental sales and marketing people.

However, an almost inevitable corollary is that true innovation will wither in such companies. Of course, this is somewhat shortsighted, since eventually someone will come along to change the entire status quo, leaving the company to wonder why, with all of its resources, it couldn’t head them off.

Raiders of the Lost Ark

The same idea comes into play in any large organization, especially governments, as brilliantly portrayed in the closing scene of Raiders of the Lost Ark. The Ark, source of unspeakable power, is not just a threat to the Axis Powers in WWII; it is disruptive and frightening to the organization that finally secures it: the US government. The people in positions of authority feel much safer simply saying “top men” and burying the Ark in an enormous warehouse, hopefully never to be found again. The organization hires and promotes those types of people, the incrementalists.

As in the Lost Interview, as in the Lost Ark, people who help a well-established organization maintain its status quo, along with incremental improvements, will be hired, promoted and eventually run the organizations, while those who truly want to change it will suffer the Lost Promotion.

Is there hope? Sure there is. Innovative product people really only can thrive in 2 situations, and should seek them out:

  1. Startups: companies that want to change the world, whose very existence depends upon radical change.
  2. Troubled companies: Eventually, even the strongest monopolist falls prey to nimble competitors. If the company recognizes the need to change and compete, to radically restructure itself, then, and only then, is it ready for the product innovator.

Nothing wrong with being either type; just know where you fit.

Know Your Numbers

November 3rd, 2014

Last week, I looked at one very small but important aspect of customer relationships – the human face – in the era of online communications, specifically chat, and how and when you market it. The Chief Marketing Officer of LiveChat, Szymon Klimczak, was kind enough to respond, as well as direct me to a few interesting metrics reports that LiveChat releases every year or two, especially the “Customer Happiness Report.”

The Happiness Report is a very well laid out summary of how chat performs for different customers across different categorizations: by industry, by geography, and even across some of the dependent variables. For example, they measure average response time and average customer happiness by geography and industry, but also measure those in reference to each other.

I like this report very much, and credit LiveChat with having the smarts to think about the value of creating it, going to the effort to publish it, and doing so in a visually pleasing manner. Most metrics are, well, mind-numbing. It takes a lot of effort to dig into them and get the valuable data. When I prepare an analysis for a client, it can take me as long to make the presentation concise, interesting and presentable as it does to gather the data and draw conclusions. As my old friend the Rabbi once said, “it takes five minutes to prepare an hour’s sermon, and an hour to prepare a five minute sermon.” Unfortunately, he said it at the beginning of a sermon for which he had been given five minutes to prepare.

Although I do not have access to the raw data, they appear to have done some good methodology work as well. They excluded customers in the trial phase, and grouped like industries together; there isn’t much difference between binary options sites and retail foreign exchange sites (if you don’t know what those are, you really don’t want to know). They also excluded companies that averaged fewer than 100 chats per month, and performed the analytics across 3,000 companies and 14MM chats over 3 months. In other words, they wanted the inputs to be statistically significant.
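Their raw data is not public, but the filtering they describe is the kind of thing a few lines of pandas would express. A sketch, with the schema and column names entirely invented:

```python
import pandas as pd

# Hypothetical schema – LiveChat's real data is not public.
chats = pd.read_csv("chats.csv")          # one row per chat: company_id, month, ...
companies = pd.read_csv("companies.csv")  # company_id, industry, is_trial

# Exclude customers still in the trial phase.
companies = companies[~companies["is_trial"]]
chats = chats[chats["company_id"].isin(companies["company_id"])]

# Keep only companies averaging at least 100 chats per month.
monthly = chats.groupby(["company_id", "month"]).size()
avg_per_month = monthly.groupby(level="company_id").mean()
active = avg_per_month[avg_per_month >= 100].index
sample = chats[chats["company_id"].isin(active)]

print(f"{sample['company_id'].nunique()} companies, {len(sample)} chats in sample")
```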

This type of metrics report provides 2 valuable services:

  1. Best practices: Across the board, if I, as a company with a customer support service, am looking for benchmarks, here they are. Can I compare phone support in a large enterprise software deal with live chat support for a Web service site? Not really. But I can know how good it can get, and might be able to find a close enough industry to draw comparisons.
  2. Leadership: This type of report establishes a thought leadership position for LiveChat. This is not direct marketing. It doesn’t say, “buy us because we are the best.” It says, “we are thinkers, we are helping the industry across the board, you want to be involved with us in any way possible.” Of course, eventually that should lead to market position, market share and sales, but it is more indirect.

The last time I saw a tech company do this really well was the famous mobile app monetization report by Greg Yardley of PinchMedia, since merged with Flurry. Greg used PinchMedia’s broad installation of toolkits across mobile apps and devices to measure app usage and ad impressions, correlate them with revenue from ad sales (free apps) and app sales (paid apps), and explain usage and purchase patterns: how price impacts acquisition; how many ad impressions really occur; potential revenue from each, etc. Back in 2009, it really was the first serious analysis of apps as a business on mobile.

Where could the Customer Happiness report go? I found only one serious weakness: it doesn’t compare to non-chat.

Sure, it is much harder to get non-chat customer satisfaction and interaction data, but the information is available. This would be invaluable in 2 distinct ways:

  1. Deeper understanding: Why does one industry have a longer response time but higher satisfaction rates? It might be because of accepted standards in the industry; it might be because of the job role of those who interact – some people have more time and patience on a job than others; it might be national cultures. There are many possibilities, most of which likely are known to someone in the industry, but we only can speculate. Correlating all channels – across “multi-channel” – would provide much deeper insight.
  2. Marketing: As a direct benefit to the chat companies and LiveChat in particular, how does chat perform compared to other channels?

The marketing is a key point. The existing report answers the customer question, “How do I do this the best way possible?” Extending it to compare with other channels answers the customer question, “Why should I use this customer communications channel? Give me hard numbers that explain how my world will get better.”

In general, though, it is an excellent piece of analysis. I look forward to reading more reports.

If You’ve Got It, Flaunt It

October 31st, 2014

In a previous article, I discussed how small changes can make a big difference when engaging with customers. Specifically, the addition of agent pictures to LiveChat can create a closer emotional connection between the customer and the agent, leading to higher customer satisfaction and/or increased sales.

It sounds like a low-cost investment for LiveChat with a potentially high return.

So… why don’t they tout it? I spent some time going through LiveChat’s Web site. The Web site is good, focused, friendly and, well, fun. It even has lots of pictures of people; clearly they get the value.

Why don’t they mention the human-face benefit even once? Why don’t they also mention how much more benefit LiveChat’s customers can get from using the LiveChat system? Sure, they have stories about how “Business X gets an additional $65,000 per month in leads through the efficiency of LiveChat,” or “Business Y gets an additional $3,000.” And stories matter greatly, as they help potential customers see themselves and their problems in LiveChat’s customers. Once again, LiveChat really seems to get how to make the connection in the sale.

But all of these stories compare email to chat, or nothing to chat. This will help customers move into the chat space, but not necessarily choose between LiveChat and its competitors. Where are the metrics showing why LiveChat, how its exclusive features lead to measurable improvements compared to competitors?

Further, LiveChat could easily get these metrics via A/B testing: have 50% of a given customer’s chats use pictures and 50% go without (obviously with the customer’s permission), or run the test with their own sales team on livechatinc.com. It is fairly easy for this type of system to gather the metrics needed.
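Evaluating the results is equally straightforward. A sketch of a two-proportion z-test on the conversion (or satisfaction) rates of the two groups – the counts here are invented:

```python
from math import sqrt

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """z-statistic for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Invented numbers: 5,000 chats per arm, with vs. without agent photos.
z = two_proportion_z(conv_a=460, n_a=5000, conv_b=400, n_b=5000)
print(f"z = {z:.2f}")  # |z| > 1.96 -> significant at the 5% level
```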

Three possibilities exist:

  1. They didn’t think of it. Hopefully this article will help drive that discussion.
  2. The metrics are there but are inconclusive. I hope not, but it is a possibility.
  3. They have the numbers, and they are positive… but they didn’t matter to customers.

I find the third option least likely, but it is possible. After all, Web real estate is precious, and can only focus on so much. If some other bit of content has a stronger impact on LiveChat’s sales than touting certain features, they are being rational by focusing on that content at the expense of other content.

I would find it interesting to know which of the three it is. In business, if you’ve got it, you should flaunt it.