“In chaos, there is opportunity.”
—Tony Curtis, Operation Petticoat (and also Sun Tzu)
“Chaos” is a pretty good word to describe the personal computer market in 2013. Microsoft is trying to tweak Windows 8 to make it acceptable to PC users (link), its Surface computers continue to crawl off the shelf (link), PC licensees are reconsidering their OS plans and business models (link), and Apple’s Macintosh business continues a genteel slide into obscurity (link, link).
No wonder many people say the personal computer is obsolete, kaput, a fading figment of the past destined to become as irrelevant as the rotary telephone and steam-powered automobile (link).
I beg to differ. Although Windows and Macintosh are both showing their age, I think there is enormous opportunity for a renaissance in personal computing. (By personal computing, I mean the use of purpose-built computers for productivity tasks including the creation and management of information.) I’m not saying there will be a renaissance, because someone has to invest in making it happen. But there could be a renaissance, and I think it would be insanely great for everyone who depends on a computer for productivity.
In this post I’ll describe the next-generation personal computing opportunity, and what could make it happen.
What drives generational change in computing?
Let’s start with a bit of background. A generational change in computing is when something new comes along that makes the current computing paradigm obsolete. The capabilities of the new computers are so superior that previous generations of apps and hardware are instantly outdated and need replacement. Most people would agree that the transition from command line computers to graphical interface (Macintosh and Windows) was a great example of generational change.
What isn’t as well understood is what triggered the graphical interface transition. It wasn’t just the invention of a new user interface. The rise of the Mac and Windows was driven by a combination of factors, including:
—A new pointing device (the mouse) that made it easier to create high-quality graphics and documents on a computer.
—Bitmapped, full-color displays that made it easy for computers to display those graphics, pictures, and eventually video. Those displays also made it easier to manage file systems and launch apps visually.
—New printing technology (dot matrix and laser printers) that made it easy to share all of those wonderful new documents and illustrations we were creating.
—A new operating system built from the ground up to support these new capabilities.
—An open applications market that enabled developers to turn all of these capabilities into compelling new apps.
All of those pieces had been around for years before graphical computing took off, but it wasn’t until Apple integrated all of them well, at an affordable price, that the new paradigm caught on. The new interface and new hardware, linked by a new or rebuilt OS, let us work with new types of data. That revolutionized old apps and created whole new categories of software.
Windows and Mac took off not because they were new, but because they let us do new things.
Although later innovations, such as the Internet, added even more power to personal computing, it’s amazing how little its fundamental features and capabilities have changed since the mid-1990s. Take a computer user from 1979 and show them a PC from 1995, and they’ll be completely lost in all the change. Take a computer user from 1995 and show them a PC from 2012, and they’ll admire the improved specs but otherwise feel very much at home.
Maybe this slowdown in qualitative change is a natural maturation of the market. After an early burst of innovation, automobiles settled down to a fairly standard design that has changed only incrementally in decades. Same thing for jetliners.
But I think it’s a mistake to look at personal computers that way. There are pending changes in interface, hardware, and software that could be just as revolutionary as graphical computing was in the 1980s. In my opinion, this would be a huge opportunity for a company that pulls them all together and makes them work.
Introducing the Sensory Computer
I call the new platform sensory computing because it makes much richer use of vision and gestures and 3D technology than anything we have today. Compared to a sensory computer, today’s PCs and even tablets look flat and uninteresting.
There are four big changes needed to implement sensory computing.
The first big change is 3D. Like desktop publishing in the 1980s, 3D computing requires a different sort of pointing device, new screen technology, and a new kind of printer. All of those components are available right now. Leap Motion is well into the development of gesture-based 3D control. 3D printers are gradually moving down to smaller sizes and more affordable price points. And 3D screens that don’t require glasses are practical, but have a limited market today because we keep trying to use them for televisions, a usage that doesn’t work with the screen’s narrow viewing angle.
But guess what sort of screen we all use with a very narrow viewing angle, our heads perched right in front of it at a fixed distance? The screen on a notebook computer.
Today we could easily create a computer that has 3D built in throughout, but we lack the OS and integrated hardware design that would glue those parts together into a solution that everyone can easily use.
You might ask what the average person would do with a 3D computer. Isn’t that just something for gamers and CAD engineers? The same sort of question was asked about desktop publishing in the 1980s. “Who needs all those fonts and fancy graphics?” many people said. “For the average person Courier is just fine, and if I need to send someone an image I’ll fax it to them.”
Like that skeptical computer user in the 1980s, we don’t know what we’ll do when everyone can use 3D. I don’t expect us to send a lot of 3D business letters, but it sure would be nice to be able to create and share 3D visualizations of business data and financial trends. I’d also like to be able to save and watch family photos and videos in 3D. How about 3D walkthroughs of hotels and tourist attractions on Trip Advisor? The camera technology for 3D photography exists; we just need an installed base of devices to edit and display those images. And although I don’t know what I’d create with a 3D printer, I’m pretty sure I’d cook up something interesting.
Every time we’ve added a major new data type to computing, we’ve found compelling mainstream uses for it. I’m confident that 3D would be the same.
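We can even approximate that 3D business visualization with today’s tools. Below is a minimal sketch using Python’s matplotlib 3D toolkit to render quarterly revenue by region as a 3D bar chart. All of the figures are invented, and a true sensory computer would present this as a volume you could walk around rather than a projection on a flat screen.

```python
# Minimal "3D business data" sketch with today's tools: quarterly
# revenue by region as a 3D bar chart. All figures are invented.
import numpy as np
import matplotlib.pyplot as plt

regions = ["Americas", "EMEA", "APAC"]
quarters = ["Q1", "Q2", "Q3", "Q4"]
revenue = np.array([            # $M, rows = regions, cols = quarters
    [120, 135, 128, 150],
    [ 90,  95, 110, 105],
    [ 60,  72,  88, 101],
])

fig = plt.figure()
ax = fig.add_subplot(projection="3d")

# bar3d takes flat arrays: one (x, y, z, dx, dy, dz) per bar.
qx, ry = np.meshgrid(np.arange(len(quarters)), np.arange(len(regions)))
ax.bar3d(qx.ravel(), ry.ravel(), 0, 0.6, 0.6, revenue.ravel())

ax.set_xticks(np.arange(len(quarters)) + 0.3)
ax.set_xticklabels(quarters)
ax.set_yticks(np.arange(len(regions)) + 0.3)
ax.set_yticklabels(regions)
ax.set_zlabel("Revenue ($M)")
plt.show()
```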
The second big change is modernizing the UI. User interface is ultimately about increasing the information and command bandwidth between a person and a computer. The more easily you can get information in and out of the computer, the more work you can get done. The mouse-keyboard interface of PCs, and the touch-swipe interface of tablets, were great in their time, but dramatically constrain what we can do with computers. We can do much better.
The first step in next-generation UI is to fully integrate speech. This doesn’t mean having everything controlled by speech, but using speech technology where it’s most effective.
Think about it: What’s the fastest way to get information in and out of your head? For most of us, we can talk faster than we can type, and we can read faster than we can listen to a spoken conversation. So the most efficient UI would let us absorb information by reading text on the screen, but enter information into the computer by talking. Specifically, we should:
—Dictate text to the computer via speech, with an option to use a keyboard if you’re in public where talking out loud would be rude.
—Have the computer present information to us as printed text on screen, even if that information came over the network as something else. For example, the computer should convert incoming voice messages to text so you can sort through them faster. (A rough sketch of this voice-in, text-out split follows.)
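Here’s that voice-in, text-out split as a sketch in Python. Everything in it is hypothetical: transcribe() stands in for whatever speech engine the OS would provide, and the Message type is just an illustration of the routing, not a real API.

```python
# Sketch of the asymmetric UI: voice in, text out.
from dataclasses import dataclass

@dataclass
class Message:
    kind: str                 # "voice" or "text"
    payload: bytes | str      # audio bytes or a text string

def transcribe(audio: bytes) -> str:
    """Hypothetical: wraps whatever speech-to-text engine is available."""
    raise NotImplementedError

def present(msg: Message) -> str:
    # Inbound voice becomes on-screen text, because we read faster
    # than we can listen.
    return transcribe(msg.payload) if msg.kind == "voice" else msg.payload

def compose(spoken: bytes | None = None, typed: str = "") -> str:
    # Outbound text comes from dictation, because we talk faster than
    # we type; the keyboard is the quiet-in-public fallback.
    return transcribe(spoken) if spoken is not None else typed
```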
We can do all of these things through today’s computers, of course, but the apps are piecemeal, bolted on, and forced through the funnel of an old-style interface. They’re as awkward as trying to do desktop publishing on a DOS computer (something that people did try to do for years, by the way).
Combine speech with 3D gestures and you’ll start to have a computer that you can control very richly by having a conversation with it, complete with head nods and waves of the hand. Next we’ll add the emerging science of eye tracking. I’m very impressed by how much progress computer scientists are making in this area. It’s now possible to build interfaces that respond to the things we look at, to facial expressions, and even to our emotional response to the things we see. This creates an incredibly rich (and slightly creepy) opportunity to build a computer that responds to your needs almost as soon as you realize them.
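To make that concrete, here is a toy sketch of multimodal fusion at the event level: time-stamped speech, gesture, and gaze events merge into one stream, so a pointing gesture or a spoken “open that” resolves against whatever the user was just looking at. The event sources are hypothetical stand-ins for real sensor drivers.

```python
# Toy multimodal fusion: merge time-stamped events from hypothetical
# speech, gesture, and gaze drivers into one command stream.
import heapq

def fuse(*streams):
    """Merge (timestamp, source, data) events; streams assumed time-sorted."""
    return heapq.merge(*streams)

def interpret(events):
    gazed_at = None
    for _t, source, data in events:
        if source == "gaze":
            gazed_at = data                    # track what the eyes rest on
        elif source == "gesture" and data == "point":
            yield ("select", gazed_at)         # pointing selects the gazed item
        elif source == "speech":
            yield ("command", data, gazed_at)  # "archive that" resolves "that"

# Example: a glance at a chart, a point, then a spoken command.
demo = interpret(fuse(
    [(0.0, "gaze", "sales_chart"), (1.2, "gaze", "inbox")],
    [(0.5, "gesture", "point")],
    [(1.5, "speech", "archive that")],
))
print(list(demo))
# [('select', 'sales_chart'), ('command', 'archive that', 'inbox')]
```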
Once we have fully integrated speech, gesture recognition, and eye tracking, I’m not sure how much we’ll need other input technologies. But I’d still like to have the option to use a touchscreen or stylus when I need precision control or when a task is easier to do manually (for example, selecting a cell in a spreadsheet or drawing something). And as I mentioned, you’ll need a keyboard option for text entry in public places. But these are backups, and a goal of our design should be to make them options rather than a part of the daily usage experience.
The third change is a new paradigm for user interaction. In a word, it’s time to ship cyberspace. The desktop metaphor (files and folders) was driven by the capabilities of the mouse and bitmapped graphics. The icons and panels we use on tablets are an adaptation to the touchscreen. Once we have 3D and gesture recognition on a computer, we can rethink how we manage it. In the real world, we remember things spatially. For example, I remember that I put my keys on my desk, next to the sunglasses. We can tap into that mental skill by creating 3D information spaces that we move through, with recognizable landmarks that help to orient us. Those spaces can zoom or morph interactively depending on what we look at or how we gesture. Today’s interface mainstays such as start screens and desktops will be about as relevant as the flowered wallpaper in grandma’s dining room. Computer scientists and science fiction authors have played with these ideas for decades (link); now is the time to brush off the best concepts and make them real.
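As a toy illustration of spatial filing (all names and coordinates invented): give every document a position in a 3D landscape, and retrieve things the way human spatial memory works, by proximity to a remembered landmark.

```python
# Toy spatial filing system: every document gets a 3D position, and
# retrieval works by closeness to remembered landmarks.
from math import dist

space = {
    # item: (x, y, z) position in the information landscape (made up)
    "Q3 budget":       (2.0, 0.5, 1.0),
    "vacation photos": (8.0, 1.0, 0.0),
    "thesis draft":    (2.5, 0.4, 1.2),
}
landmarks = {"red tower": (2.0, 0.0, 1.0), "archive wall": (9.0, 0.0, 0.0)}

def near(landmark: str, radius: float = 2.0):
    """Everything filed close to a remembered landmark."""
    origin = landmarks[landmark]
    return [item for item, pos in space.items() if dist(pos, origin) <= radius]

print(near("red tower"))  # -> ['Q3 budget', 'thesis draft']
```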
The fourth change is to modernize the computing ecosystem. The personal computer software ecosystem we have today combines 20-year-old operating system technology with a ten-year-old online store model created by Apple to sell music. There’s far more we could do to make software easy to develop, find, and manage. The key changes we need to make are:
—The operating system should seamlessly integrate local and networked resources. Dropbox has the right idea: you shouldn’t have to worry about where your information is, it should just be available all the time. But we should apply that idea to both storage and computer processing. We shouldn’t have web apps and native apps, we should just have apps that take advantage of both local computing power and the vast computational resources of the web. An app should be able to run some code locally and some on a server, with some data stored locally and some on the network, without the user even being aware of it. The OS should enable all of that as a matter of course. (A code sketch of this local-plus-network idea appears after this list.)
In this sense, the advocates of the netbook have it all wrong. The future is not moving your computing onto the network; it’s melding the network and local computer to produce the best of both worlds.
—Discovery needs work. App stores are great for making apps available, but it’s very hard to find the apps that are right for you. Our next-generation app store should learn your interests and usage patterns and automatically suggest applications that might fit your needs. If we do this right, the whole concept of an app store becomes less important. Rather than you going to shop for apps, information about apps will come to you naturally. I think we’ll still have app stores in the future because people like to shop, but they should become much less important: a destination you can visit rather than a bottleneck you must pass through.
—Security should be built in. The smartphone operating systems have this one right: each app should run in a separate virtual sandbox where malicious code can’t corrupt the system. No security system can be foolproof, but we can make personal computers far more secure than they are today.
—Payment should be built in as well. This is the other part of the software and content ecosystem that’s broken today. Although the app and content stores have made progress, we’re still limited to a small number of transaction types and fixed price bands. You can’t easily sell an app for half a cent per use. You can’t easily sell a software subscription with variable pricing based on usage. As an author, you can’t easily sell an ebook for more than $10 or less than 99 cents without giving up 70% of your revenue. And you can’t easily sell a subscription to your next ten short stories. Why? Because the store owners are manipulating their terms in order to micro-manage the market. They mean well, but the effect is like the worst dead-hand socialist market planning of the 1970s. The horrible irony is that it’s being practiced by tech companies that loudly preach the benefits of a free market.
It’s time for us to practice what we preach. The payment system should verify and pass through payments, period. Take a flat cut to cover your costs and otherwise get out of the way. The terms and conditions of the deal are between the buyer and the creator of the app or content. Apple or Google or Amazon has no more business controlling what you buy online than Visa has controlling what you buy in the grocery store. The free market system has been proven to produce the most efficiency and fastest innovation in the real world; let’s put it to work in the virtual world as well.
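Returning to the first item in this list, here’s a minimal sketch of what “run some code locally and some on a server” might look like to a developer. It is illustrative only: the compute-service URL is hypothetical, the memory check is a crude Linux-only stand-in, and a real OS would make this decision below the app layer rather than in a decorator.

```python
# Sketch: one app, two execution sites. Run a function locally when
# there's headroom, ship it to a compute service otherwise.
import pickle
import urllib.request

COMPUTE_SERVICE = "https://compute.example.com/run"  # hypothetical endpoint

def elastic(heavy_threshold_mb: int):
    """Decorator: local execution if memory allows, remote otherwise."""
    def wrap(fn):
        def run(*args, **kwargs):
            if _free_memory_mb() > heavy_threshold_mb:
                return fn(*args, **kwargs)                    # local
            payload = pickle.dumps((fn.__name__, args, kwargs))
            req = urllib.request.Request(COMPUTE_SERVICE, data=payload)
            with urllib.request.urlopen(req) as resp:         # remote
                return pickle.loads(resp.read())
        return run
    return wrap

def _free_memory_mb() -> int:
    # Crude, Linux-only stand-in for a real resource monitor.
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith("MemAvailable"):
                return int(line.split()[1]) // 1024
    return 0

@elastic(heavy_threshold_mb=4096)
def render_walkthrough(scene: dict) -> bytes:
    ...  # imagine an expensive 3D render here
```

The point is the shape of the API: the app states a policy (“this is heavy work”), and the system decides where the work runs and where the data lives.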
Adding it up. Let’s sum up all of these changes. Our next-generation computer now has:
—A 3D screen and 3D printing support built in, with APIs that make it easy for developers to take advantage of them.
—Speech recognition, gesture recognition, and eye tracking built in, with a new user interface that makes use of them.
—A modernized OS architecture that seamlessly blends your computer and the network, while making you more secure against malware.
—An app and content management system that makes it easy for you to find the things you like, and to pay for them in any way you and the developer agree to.
I think this adds up to a new paradigm for computing. It’s at least as revolutionary as the graphical computing revolution of the 1980s. We’ve opened up new usages for your computer, we’ve enabled developers to revolutionize today’s apps through a new interface paradigm, and we’ve made it much easier for you to find apps and content you like.
Why can’t you just do all this with a tablet? You could. Heck, you could do it with a smartphone or a television set. But by the time you finished adding all these new features and reworking the software to make full use of them, you would have completely rebuilt the whole device and operating system. You’d no longer have a cost-efficient tablet, but you’d still have all the flaws and limitations of the old system, jury-rigged into place and adding cost and inefficiency. Windows 8, anyone?
It’ll be faster and cheaper just to design our new system from scratch.
When will we get a sensory computer?
If you agree that we’re overdue for a new computing paradigm, the next question is when it’ll arrive. Unfortunately, the answer is that it may not happen for decades. Major paradigm changes in technology tend to creep forward at a snail’s pace unless some company takes on the very substantial task of integrating and marketing them. Do you think ebooks would be taking off now if Amazon hadn’t done Kindle? Do you think the tablet market would be exploding now if Apple hadn’t done the iPad? I don’t think so, and the proof is that you could have built either product five years earlier, but no one did it.
So the real question is not when we’ll get it, but who might build it. And that’s where I get stuck.
Microsoft could do it, and probably should. But I doubt it will. Microsoft is so tangled up now in tablet computing and Windows 8 that I find it almost impossible to believe that it could take on another “replace the computer” initiative. I think there’s a very good argument that Microsoft should have done a sensory computer instead of Windows 8, but now that the decision’s made, I don’t think it can change course.
Google could do it, but I don’t think it will. Google is heavily invested in the Chrome netbook idea. It’s almost a religious issue for Google: as a web software company, the idea of a computer that executes apps on the web seems incredibly logical, and is emotionally attractive. Google also seems to be hypnotized by the idea that reducing the cost of a PC to $200 will somehow convert hundreds of millions of computer users to netbooks. I doubt it; PC users have been turning up their noses for decades at inexpensive computers that force them to compromise on features. The thing they will consider is something at the same price as a PC but much more capable. But I don’t think Google wants to build that.
One of the PC companies might take a stab at it. Several PC companies have tried from time to time to sell computers with 3D screens. Theoretically, one of those companies could put together all the features of a sensory computer. I think HP is the best candidate. It already plans to build the Leap Motion controller into some of its computers, and I can imagine a beautiful scenario in which HP creates a new ecosystem of sensory computers, low-cost home 3D printers that render in plastic, and service bureaus where you can get your kid’s science fair design converted to aluminum (titanium if you’re rich). It would be glorious.
But it’s not likely. To work right, a sensory computer requires a prodigious amount of new software, very careful integration of hardware and software features, and the courage to spend years kick-starting the ecosystem. I don’t think HP has the focus and patience to do it, not to mention the technical chops, alas. Same thing for the other PC companies.
Meg, please prove me wrong.
Apple is the company that ought to do it, but does it have the will? Apple has the expertise and the market presence to execute a paradigm change, and its history is studded with market-changing products. I love the idea of Apple putting a big 3D printer at the back of every Apple store. Maybe you could let Sensory Mac users sell their designs online, with pickup of the finished goods at any Apple store, and Apple (naturally) taking a 30% cut...
But I’m not sure if today’s Apple has the vision to carry out something like that. The company is heavily invested in smartphone and tablet computing, with an ongoing hobby around reinventing television. There’s not much executive bandwidth left for personal computing. The company’s evolution has taken it steadily toward mobile devices and entertainment, and away from productivity.
Think of it this way: If Apple were really focused on personal computing innovation, would it be letting HP take the lead in integrating the Leap Motion controller? Wouldn’t it have bought Leap Motion a year ago to keep it away from other companies?
I think personal computing is a legacy market to Apple, an aging cash cow it’ll gently milk but won’t lead. I hope I’m wrong.
We’re out of champions, unless... At this point we’ve disposed of most of the companies that have the expertise and clout to drive sensory computing. I could make up scenarios in which an outlier like Amazon would lead, but they’re not very credible. I think the other realistic question is whether a startup could do it.
It’s hard. Conventional wisdom says that you need $50 million to fund a computer system startup, and that sort of capital is absolutely positively not available for a company making hardware. But I think the $50 million figure is outdated. The cost of hardware development has been dropping rapidly, driven by flexible manufacturing processes and the ability to rapidly make prototypes. You could theoretically create a sensory computer and build it in small lots, responding to demand as orders come in. This would help you avoid the inventory carrying costs that make hardware startups so deadly.
The other big barrier to hardware startups has been the need to get into the retail channel, which requires huge investments in marketing, sales support, and even more inventory. Here too the situation has been changing. People today are more willing to buy hardware online, without touching it in a store first. And crowdfunding is making it more possible for a company to build up a market before it ships, including taking orders. That business model actually works pretty well today for a $100 gizmo, but will it work for a $2,000 productivity tool?
Maybe. I’m hopeful that some way or another we’ll get a sensory computer built in this decade. At this point, the best chance of making that happen is to talk up the idea until one or more companies take it on. Which is the point of this post.
[Thanks to Chris Dunphy for reviewing an early draft of this post. He fixed several glaring errors. Any errors that remain are my fault, not his.]
What do you think? Is there an opening for a sensory computer? Would you buy one? What features would you want to see in it? Who do you think could build it? Please post a comment and share your thoughts.
Wednesday, 17 April 2013
The Dell Buyout: Storm Warning for the Tech Industry
Michael Dell is engaged in a lengthy struggle to take his company private, and if you’re focused on the smartphone and tablet markets, you probably don’t care. It’s hard to picture an old PC company like Dell pushing the envelope in tech, so from one perspective it doesn’t really matter who runs the company or whether it stays public or private. But I think Dell’s situation is important because it shows how the decline of Windows is changing the tech industry, and hints at much more dramatic changes that could affect all of us in the future. In this post I’ll talk about what’s happening to Dell, why it matters, and what may happen next.
Why is Dell going private?
I should start with a quick recap of Dell’s situation: Michael Dell and tech investment firm Silver Lake Partners have proposed to take Dell private in a transaction funded in part by a $2 billion loan from Microsoft. The proposal has angered shareholders who believe the company is worth more than what was offered, and two competing proposals have emerged from Carl Icahn and Blackstone Group. Dell now apparently faces an extended period of limbo while the competing proposals are evaluated.
Given how messy this process could be, it’s reasonable to ask why Michael Dell started it in the first place. I’m surprised at how many conflicting explanations have surfaced:
—The deal is largely a tax avoidance scheme, according to Slate (link). Like many tech companies, Dell has accumulated a large pool of profit overseas which it can’t bring back into the United States without paying 35% income tax on it. If Dell takes itself private, it can use that money to pay off the interest from the buyout without paying tax on it.
—It’s a financial shell game according to some financial analysts, including Richard Windsor, formerly of Nomura. His scenario is that after Dell takes the company private, it will sell or spin out the PC half of the company to pay off the buyout. That will leave Michael Dell and his partners owning Dell’s IT services business at low cost (link).
—It’s a way for Michael Dell to get some peace. In this scenario, Michael Dell is a sensitive man who’s grown tired of taking criticism from investors. The buyout is a way to get away from them. This explanation showed up in a large number of press reports immediately after the proposal. For example, here’s PC World: “Michael Dell apparently grew tired of running his company to the whims of a stock market that often favors immediate return over long-term investment.” (link)
—It’s a necessary prelude to broad organizational changes at Dell. The Economist put it this way: “Making the kind of wrenching operational changes Silver Lake typically prescribes would be tricky for a public company anxious not to panic shareholders.” (link)
—Michael Dell did it to save his job. According to BusinessWeek, Michael Dell was afraid that an activist shareholder might take over the company and force him out as CEO. So he proposed the deal as a pre-emptive strike. (link).
The problem with analyzing a company’s motivations is that you tend to assume there’s a logical explanation for the things it did. Often there’s not. Company managers are frequently fearful or misinformed, and sometimes they just make dumb mistakes. It’s possible that’s happening with Dell. But if we assume a basic level of rationality, then we can probably discount some of the proposed explanations. For example, I personally doubt Dell can pay off the deal by selling the PC business, because I don’t think anyone would buy it. It’s not like there’s another Lenovo out there hungry to get into PCs, and Google already bought one floundering hardware company; I doubt it has the appetite for another.
I’m also skeptical that after a lifetime in business Michael Dell is so thin-skinned that he can’t stand shareholder criticism. If you have the ego and drive to build up a company from scratch to the size of Dell, you usually don’t care much about complaints from puny mundane humans.
And I find it hard to believe that Dell had to take the company private in order to reorganize it. If Dell took a machete to the PC business, I think most investors would cheer rather than panicking.
The explanation I lean toward is that Michael Dell was afraid he wouldn’t be left in charge long enough to finish transforming the company. You can make a case that as 15% owner and with a base of investors focused on long-term gains, his position was secure from takeover threats. But after I looked in more detail at the company’s finances, and some market trends, I started to suspect that he felt a lot less secure than you’d expect. There are big storm clouds on the horizon for Dell, and they’re darkening rapidly. Those trends also threaten the rest of the PC industry.
A storm’s a-brewin’
Dell’s problems have been developing for years. The company’s power probably hit its peak in about 2005, when it was the world’s #1 PC vendor with about 17% of the market. Dell was the upstart beast that had dethroned the PC powers like Compaq, HP, and IBM. But after 2005, the PC industry adopted many of the flexible manufacturing practices that had made Dell so powerful. PC sales also shifted toward notebooks, which are much less customizable than the desktop computers that made Dell successful. The company’s market share started to erode. Dell tried for several years to turn around the PC business through innovation and new product categories, with no effect. Then in late 2008 it changed strategy and started evolving itself into an IT services company (like IBM, but supposedly aimed more at small and medium businesses). Starting with Perot Systems, Dell made a long series of IT services acquisitions, a process that has continued to this day.
Throughout this process, Dell gradually lost PC share, dropping to 12% by 2011. But because the PC market was growing, Dell’s actual PC shipments were more or less flat, giving the company a financial cushion to fund its transition to services.
Then in 2012, the situation changed. For the first time in years, overall PC unit sales shrank. What’s more, Lenovo (the new upstart beast in the PC market) was taking share from the other leaders. The combination of a shrinking market and a growing Lenovo caused a big drop in Dell’s PC sales.
Worldwide PC (desktop and notebook) unit sales
This chart shows worldwide PC unit sales for calendar 2006-12. Until 2012, PC sales were growing fairly steadily, and I'm sure the management of Microsoft and the big PC companies found that reassuring. But in 2012, total PC unit sales dropped while Lenovo (the green wedge) continued to grow. This combination put huge pressure on sales of the other PC leaders, including Dell. (Source: Gartner Group)
Dell revenue (fiscal years)
This chart shows what that did to Dell’s revenue. The new parts of the company -- storage, services, and software -- were flat to slightly up last year. Servers grew as well. But they couldn’t grow quickly enough to offset the major declines in desktop and notebook computers. Dell’s total revenue dropped substantially. (The chart shows Dell’s financial years, which are about a year ahead of the calendar. So FY 2013 in this chart is roughly calendar 2012. Note that Dell did not break out its revenue by product line in FY 2010.) (Source: Dell financial reports)
I think the most disturbing thing for Dell about this revenue drop is that it happened in the face of the launch of Windows 8. New Windows launches have traditionally led to a nice uptick in PC sales as customers buy new hardware to go with the new software. Even the unpopular Windows Vista didn’t reduce PC sales. I’m sure Dell was expecting some sort of Windows 8 bounce, or at least a flattening in any decline. Instead, as we learned from the latest PC shipment reports, PC shipments dropped after the launch of Windows 8 (link). That indicates that the channel was probably stuffed with new Windows 8 PCs that have not yet sold through.
People who live in the world of smartphones and tablets are probably saying “so what?” But I doubt that was the reaction at Dell.
If you haven’t worked at a PC company, you’ll have trouble understanding how profoundly disturbing the current sales situation is for Windows licensees. The PC companies married themselves to the Microsoft-Intel growth engine years ago. In exchange for riding the Wintel wave, they long ago gave up on independent innovation and market-building. In many ways, they outsourced their product development brains to Microsoft so they could focus on operations and cost control. They trusted Microsoft to grow the market. Microsoft is now failing to deliver on its side of the bargain. Unless there's a stunning turnaround in Windows 8 demand, I think it’s now looking increasingly likely that we’ll see a sustained year over year drop in PC sales for at least several more quarters.
This is an existential shock for the PC companies. It’s like discovering that your house was built over a vast, crumbling sinkhole.
Prior to the PC sales decline, I think Michael Dell probably assumed that his PC business could continue to fund its growth in services for the foreseeable future. He has probably now reconsidered that assumption. If Lenovo continues to grow and the market continues to shrink, Dell’s revenue will drop further, and the company could be in a world of financial trouble a year from now. It’s the sort of trouble that can get a CEO fired even if he does own 15% of the company.
So here’s the sequence of events: By fall of last year, the troubles with Windows 8 were already becoming clear to the PC companies (remember, the Windows licensees have much better information on customer purchase plans than we get from the analysts). Michael Dell must have realized that he was headed for a significant decline in revenue. At the same time, we now know, one of the company’s major shareholders approached Michael Dell to float the idea of a buyout. That was apparently the trigger that started the whole buyout process.
Put yourself in Michael Dell’s shoes: the shareholders are getting restless already, and you know the situation is likely to get worse in the next year. Proposing a buyout now would be a pre-emptive strike to keep control over the company you founded. That’s what I think happened.
What happens next? After more confusion, someone will eventually win the bidding for Dell. All of the bidders seem to agree that Dell should continue to invest in services, so the real debate is over what happens to the PC business. Michael Dell says if he wins, Dell will re-engage with the PC market (link):
“While Dell's strategy in the PC business has been to maximize gross margins, following the transaction, we expect to focus instead on maximizing revenue and cash flow growth.”
In other words, Dell will cut its PC prices.
It seems strange that Dell would want to refocus on PCs after treating them like a cash cow for years. If the business was unattractive when PC sales were growing, why would it be attractive now? Maybe Dell decided that it needs strong PC sales to get its foot in the door to sell services. That seems like a reasonable idea. But shouldn’t the company have known that years ago?
Or maybe Dell feels that the interest and principal payments on its buyout will be smaller than the profits required of a public company. That might allow Dell to compete more aggressively in PCs while it still invests in services.
Maybe that’s the purpose of Microsoft’s $2 billion loan, to let Dell stay in PCs while it also grows services. It says something sad (and alarming) about Microsoft’s business if it now needs to pay companies to stay in the PC market.
What it means to the rest of us
I think the Dell deal is just the beginning of the Windows 8 fallout. There are several other, bigger, shoes waiting to drop.
What will the other major PC licensees do? If you’re working at a company like HP or Acer, everything about this situation feels ugly. Your faith in Windows has been broken, you’re losing share to Lenovo, and now Microsoft is subsidizing one of your biggest competitors. I’d be tempted to fly out to Redmond and demand my own handout. And I’d also be willing to look at more radical options. There are several possibilities:
—Exit the PC market. HP considered this in 2011, but backed away after a change in CEO. I wonder if the company will think about it again. Meg Whitman says no, that the PC business is important to HP’s other businesses, such as servers, because they buy many of the same parts. Exit PCs and your costs will go up because you won’t have the same purchase volumes. That’s a pretty backward endorsement of the PC business, but I guess it’s possible.
Acer doesn’t really have the option of dumping PCs. They make up most of its business, so it has to stay in computing hardware, one way or another.
—Find a new plough horse. In this option, you replace Windows with a platform that has better growth prospects. That lets you continue to use your clone vendor skills, but in a market that’s growing. Acer and HP are both dabbling in Chrome netbooks (link) and Android tablets. I wouldn’t be surprised to see many more experiments along these lines. But it’s not clear how much market momentum Google can generate for its tablets and netbooks. HP and Acer could easily spend a lot of money for very few sales, and in the meantime create a rift with Microsoft that would be hard to return from if Windows 8 does eventually take off.
—Reinvest in creating differentiated devices. This is the other option: get off the clone treadmill and be more like Apple, a device innovator. The trouble with this is that many years ago, the PC licensees laid off the people who knew how to build new markets and new categories of computing device. Recovering those skills is like trying to grow a new brain – very slow, and hard to do when your head is stuffed with other things. You need to be incredibly patient during the learning process, and accept that there will be failures along the way. It’s hard for public companies to show that sort of patience.
So maybe you buy a company that knows how to make new-category devices. For example, you could have bought Palm. As time goes on, HP’s handling of that transaction looks more and more like a business Waterloo.
There aren’t many other hardware innovators that you could buy. RIM, maybe? Or HTC? But then you’re in a meatgrinder smartphone market dominated by Samsung and Apple. The PC market, even if it’s shrinking, might look more inviting.
Personally, I’d look at buying Nook. Not necessarily because I want to be in the ebook business, but to get a team that knows how to design good mobile devices and is familiar with working on a forked version of Android.
I don’t think any of these three options look very attractive, but the slower the takeoff for Windows 8, the more desperate the Windows licensees will get, and the more likely that they’ll try one or more radical “strategic initiatives” in the next year.
What if Microsoft gave a party and nobody came? The situation for Microsoft is becoming more and more complicated. Windows is not dead. It has an enormous installed base of users who are hooked on Windows applications and won’t go away in the near future. However, Microsoft faces some huge short-term and long-term challenges, and many of its possible responses could make the situation worse rather than better.
I think it’s pretty clear that we’ve entered a period of extended decline in Windows usage, as customers use tablets to replace notebooks in some situations and for some tasks. The tablet erosion may be self-limiting; I don’t think you can use today’s tablets to replace everything a PC does. If that’s the case, Windows sales may eventually stabilize and even resume growing once the tablet devices have taken their pound of flesh.
On the other hand, it’s equally possible that tablets and netbooks will continue to improve, gradually consuming more and more of the Windows market. That’s certainly what Google is hoping to do with Chrome. What would happen if Apple made a netbook and did it right?
Microsoft had hoped to head off all these problems with Windows 8. By combining the best of PCs and tablets, Windows 8 was supposed to stop the tablet cannibalization and also set off a lucrative Windows upgrade cycle. Unfortunately, at least for the moment, Windows 8 is looking like the worst of both worlds – not a good enough tablet to displace the iPad, but different enough to scare away many Windows users.
This puts Microsoft in a nasty dilemma. If it believes that Windows 8 sales will eventually rebound, then Microsoft should invest heavily in keeping its PC partners engaged. In that context, the $2 billion loan to Dell is a reasonable stopgap to prevent the loss of a major licensee.
On the other hand, if Windows sales are entering a long-term period of gradual decline, Microsoft should be doing the exact opposite. Rather than spending money to keep licensees, it should be allowing one or more of them to leave the business, so the vendors that remain will still be profitable and willing to invest. It’s better for Microsoft to have seven licensees who are making money than ten licensees who all want to leave and are investing heavily in Chrome or Android or other crazy schemes.
Microsoft also faces a difficult challenge with Lenovo. Even if Windows sales turn up, Lenovo has been taking share so fast that it will be hard for other Windows licensees to grow. At current course and speed, Lenovo is likely to end up the largest Windows licensee. In the past, Microsoft didn’t care if one licensee replaced another; they were interchangeable. But Lenovo has close ties to the Chinese government, which has repeatedly shown that it’s willing to lean on foreign tech companies. That has to make Microsoft uncomfortable.
In that case, the $2 billion investment in Dell starts to look like a defensive measure to get someone to compete against Lenovo on price. But if Microsoft subsidizes a price war in PCs, that might give the other licensees more reason to disinvest, enabling Lenovo to gain share even faster.
This is the true ugliness of Microsoft’s situation. It is in danger of falling into a series of self-defeating actions:
—To combat tablets, it creates a version of Windows that accelerates the Windows sales decline.
—To keep its licensees loyal, it makes Windows overdistributed, which increases licensees’ incentive to leave.
The situation is becoming more and more fragile. As I said above, I don’t expect Windows to collapse instantly. But many companies are reconsidering their investments in it, a process that is likely to eventually give customers second thoughts as well. We could end up with an unexpected series of events that combine to break the loyalty of Windows users and start a migration away from it that Microsoft couldn’t stop.
The key question is whether Google, Apple, or some other vendor can give Windows customers and licensees an attractive place to run away to. So far they haven’t, but the year is still young. I’ll talk more about the possibilities next time.
Why is Dell going private?
I should start with a quick recap of Dell’s situation: Michael Dell and tech investment firm Silver Lake Partners have proposed to take Dell private in a transaction funded in part by a $2 billion loan from Microsoft. The proposal has angered shareholders who believe the company is worth more than what was offered, and two competing proposals have emerged from Carl Ichan and Blackstone Group. Dell now apparently faces an extended period of limbo while the competing proposals are evaluated.
Given how messy this process could be, it’s reasonable to ask why Michael Dell started it in the first place. I’m surprised at how many conflicting explanations have surfaced:
—The deal is largely a tax avoidance scheme, according to Slate (link). Like many tech companies, Dell has accumulated a large pool of profit overseas which it can’t bring back into the United States without paying 35% income tax on it. If Dell takes itself private, it can use that money to pay off the interest from the buyout without paying tax on it.
—It’s a financial shell game according to some financial analysts, including Richard Windsor, formerly of Nomura. His scenario is that after Dell takes the company private, it will sell or spin out the PC half of the company to pay off the buyout. That will leave Michael Dell and his partners owning Dell’s IT services business at low cost (link).
—It’s a way for Michael Dell to get some peace. In this scenario, Michael Dell is a sensitive man who’s grown tired of taking criticism from investors. The buyout is a way to get away from them. This explanation showed up in a large number of press reports immediately after the proposal. For example, here’s PC World: “Michael Dell apparently grew tired of running his company to the whims of a stock market that often favors immediate return over long-term investment.” (link)
—It’s a necessary prelude to broad organizational changes at Dell. The Economist put it this way: “Making the kind of wrenching operational changes Silver Lake typically prescribes would be tricky for a public company anxious not to panic shareholders.” (link)
—Michael Dell did it to save his job. According to BusinessWeek, Michael Dell was afraid that an activist shareholder might take over the company and force him out as CEO. So he proposed the deal as a pre-emptive strike. (link).
The problem with analyzing a company’s motivations is that you tend to assume there’s a logical explanation for the things it did. Often there’s not. Company managers are frequently fearful or misinformed, and sometimes they just make dumb mistakes. It’s possible that’s happening with Dell. But if we assume a basic level of rationality, then we can probably discount some of the proposed explanations. For example, I personally doubt Dell can pay off the deal by selling the PC business, because I don’t think anyone would buy it. It’s not like there’s another Lenovo out there hungry to get into PCs, and Google already bought one floundering hardware company; I doubt it has the appetite for another.
I’m also skeptical that after a lifetime in business Michael Dell is so thin-skinned that he can’t stand shareholder criticism. If you have the ego and drive to build up a company from scratch to the size of Dell, you usually don’t care much about complaints from puny mundane humans.
And I find it hard to believe that Dell had to take the company private in order to reorganize it. If Dell took a machete to the PC business, I think most investors would cheer rather than panicking.
The explanation I lean toward is that Michael Dell was afraid he wouldn’t be left in charge long enough to finish transforming the company. You can make a case that as 15% owner and with a base of investors focused on long-term gains, his position was secure from takeover threats. But after I looked in more detail at the company’s finances, and some market trends, I started to suspect that he felt a lot less secure than you’d expect. There are big storm clouds on the horizon for Dell, and they’re darkening rapidly. Those trends also threaten the rest of the PC industry.
A storm’s a-brewin’
Dell’s problems have been developing for years. The company’s power probably hit its peak in about 2005, when it was the world’s #1 PC vendor with about 17% of the market. Dell was the upstart beast that had dethroned the PC powers like Compaq, HP, and IBM. But after 2005, the PC industry adapted many of the flexible manufacturing practices that had made Dell so powerful. PC sales also shifted toward notebooks, which are much less customizable than the desktop computers that made Dell successful. The company’s market share started to erode. Dell tried for several years to turn around the PC business through innovation and new product categories, with no effect. Then in late 2008 it changed strategy and started evolving itself into an IT services company (like IBM, but supposedly aimed more at small and medium businesses). Starting with Perot Systems, Dell made a long series of IT services acquisitions, a process that has continued to this day.
Throughout this process, Dell gradually lost PC share, dropping to 12% by 2011. But because the PC market was growing, Dell’s actual PC shipments were more or less flat, giving the company a financial cushion to fund its transition to services.
Then in 2012, the situation changed. For the first time in years, overall PC unit sales shrank. What’s more, Lenovo (the new upstart beast in the PC market) was taking share from the other leaders. The combination of a shrinking market and a growing Lenovo caused a big drop in Dell’s PC sales.
Worldwide PC (desktop and notebook) unit sales
This chart shows worldwide PC revenue for calendar 2006-12. Until 2012, PC sales were growing fairly steadily, and I'm sure the management of Microsoft and the big PC companies found that reassuring. But in 2012, total PC unit sales dropped while Lenovo (the green wedge) continued to grow. This combination put huge pressure on sales of the other PC leaders, including Dell. (Source: Gartner Group)
Dell revenue (fiscal years)
This chart shows what that did to Dell’s revenue. The new parts of the company -- storage, services, and software -- were flat to slightly up last year. Servers grew as well. But they couldn’t grow quickly enough to offset the major declines in desktop and notebook computers. Dell’s total revenue dropped substantially. (The chart shows Dell’s financial years, which are about a year ahead of the calendar. So FY 2013 in this chart is roughly calendar 2012. Note that Dell did not break out its revenue by product line in FY 2010.) (Source: Dell financial reports)
I think the most disturbing thing for Dell about this revenue drop is that it happened in the face of the launch of Windows 8. Traditionally, new Windows launches have usually led to a nice uptick in PC sales as customers buy new hardware to go with the new software. Even the unpopular Windows Vista didn’t reduce PC sales. I’m sure Dell was expecting some sort of Windows 8 bounce, or at least a flattening in any decline. Instead, as we learned from the latest PC shipment reports, PC shipments dropped after the launch of Windows 8 (link). That indicates that the channel was probably stuffed with new Windows 8 PCs that have not yet sold through.
People who live in the world of smartphones and tablets are probably saying “so what?” But I doubt that was the reaction at Dell.
If you haven’t worked at a PC company, you’ll have trouble understanding how profoundly disturbing the current sales situation is for Windows licensees. The PC companies married themselves to the Microsoft-Intel growth engine years ago. In exchange for riding the Wintel wave, they long ago gave up on independent innovation and market-building. In many ways, they outsourced their product development brains to Microsoft so they could focus on operations and cost control. They trusted Microsoft to grow the market. Microsoft is now failing to deliver on its side of the bargain. Unless there's a stunning turnaround in Windows 8 demand, I think it’s now looking increasingly likely that we’ll see a sustained year over year drop in PC sales for at least several more quarters.
This is an existential shock for the PC companies. It’s like discovering that your house was built over a vast, crumbling sinkhole.
Prior to the PC sales decline, I think Michael Dell probably assumed that his PC business could continue to fund its growth in services for the foreseeable future. He has probably now reconsidered that assumption. If Lenovo continues to grow and the market continues to shrink, Dell’s revenue will drop further, and the company could be in a world of financial trouble a year from now. It’s the sort of trouble that can get a CEO fired even if he does own 15% of the company.
So here’s the sequence of events: By fall of last year, the troubles with Windows 8 were already becoming clear to the PC companies (remember, the Windows licensees have much better information on customer purchase plans than we get from the analysts). Michael Dell must have realized that he was headed for a significant decline in revenue. At the same time, we now know, one of the company’s major shareholders approached Michael Dell to float the idea of a buyout. That was apparently the trigger that started the whole buyout process.
Put yourself in Michael Dell’s shoes: the shareholders are getting restless already, and you know the situation is likely to get worse in the next year. Proposing a buyout now would be a pre-emptive strike to keep control over the company you founded. That’s what I think happened.
What happens next? After more confusion, someone will eventually win the bidding Dell. All of the bidders seem to agree that Dell should continue to invest in services, so the real debate is over what happens to the PC business. Michael Dell says if he wins, Dell will re-engage with the PC market (link):
“While Dell's strategy in the PC business has been to maximize gross margins, following the transaction, we expect to focus instead on maximizing revenue and cash flow growth.”
In other words, Dell will cut its PC prices.
It seems strange that Dell would want to refocus on PCs after treating them like a cash cow for years. If the business was unattractive when PC sales were growing, why would it be attractive now? Maybe Dell decided that it needs strong PC sales to get its foot in the door to sell services. That seems like a reasonable idea. But shouldn’t the company have known that years ago?
Or maybe Dell feels that the interest and principal payments on its buyout will be smaller than the profits required of a public company. That might allow Dell to compete more aggressively in PCs while it still invests in services.
Maybe that’s the purpose of Microsoft’s $2 billion loan, to let Dell stay in PCs while it also grows services. It says something sad (and alarming) about Microsoft’s business if it now needs to pay companies to stay in the PC market.
What it means to the rest of us
I think the Dell deal is just the beginning of the Windows 8 fallout. There are several other, bigger, shoes waiting to drop.
What will the other major PC licensees do? If you’re working at a company like HP or Acer, everything about this situation feels ugly. Your faith in Windows has been broken, you’re losing share to Lenovo, and now Microsoft is subsidizing one of your biggest competitors. I’d be tempted to fly out to Redmond and demand my own handout. And I’d also be willing to look at more radical options. There are several possibilities:
—Exit the PC market. HP considered this in 2011, but backed away after a change in CEO. I wonder if the company will think about it again. Meg Whitman says no, that the PC business is important to HP’s other businesses, such as servers, because they buy many of the same parts. Exit PCs and you costs will go up because you won’t have the same purchase volumes. That’s a pretty backward endorsement of the PC business, but I guess it’s possible.
Acer doesn’t really have the option of dumping PCs. They make up most of its business, so it has to stay in computing hardware, one way or another.
—Find a new plough horse. In this option, you replace Windows with a platform that has better growth prospects. That lets you continue to use your clone vendor skills, but in a market that’s growing. Acer and HP are both dabbling in Chrome netbooks (link) and Android tablets. I wouldn’t be surprised to see many more experiments along these lines. But it’s not clear how much market momentum Google can generate for its tablets and netbooks. HP and Acer could easily spend a lot of money for very few sales, and in the meantime create a rift with Microsoft that would be hard to return from if Windows 8 does eventually take off.
—Reinvest in creating differentiated devices. This is the other option: get off the clone treadmill and be more like Apple, a device innovator. The trouble with this is that many years ago, the PC licensees laid off the people who knew how to build new markets and new categories of computing device. Recovering those skills is like trying to grow a new brain – very slow, and hard to do when your head is stuffed with other things. You need to be incredibly patient during the learning process, and accept that there will be failures along the way. It’s hard for public companies to show that sort of patience.
So maybe you buy a company that knows how to make new-category devices. For example, you could have bought Palm. As time goes on, HP’s handling of that transaction looks more and more like a business Waterloo.
There aren’t many other hardware innovators that you could buy. RIM, maybe? Or HTC? But then you’re in a meatgrinder smartphone market dominated by Samsung and Apple. The PC market, even if it’s shrinking, might look more inviting.
Personally, I’d look at buying Nook. Not necessarily because I want to be in the ebook business, but to get a team that knows how to design good mobile devices and is familiar with working on a forked version of Android.
I don’t think any of these three options look very attractive, but the slower the takeoff for Windows 8, the more desperate the Windows licensees will get, and the more likely that they’ll try one or more radical “strategic initiatives” in the next year.
What if Microsoft gave a party and nobody came? The situation for Microsoft is becoming more and more complicated. Windows is not dead. It has an enormous installed base of users who are hooked on Windows applications and won’t go away in the near future. However, Microsoft faces some huge short-term and long-term challenges, and many of its possible responses could make the situation worse rather than better.
I think it’s pretty clear that we’ve entered a period of extended decline in Windows usage, as customers use tablets to replace notebooks in some situations and for some tasks. The tablet erosion may be self-limiting; I don’t think you can use today’s tablets to replace everything a PC does. If that’s the case, Windows sales may eventually stabilize and even resume growing once the tablet devices have taken their pound of flesh.
On the other hand, it’s equally possible that tablets and netbooks will continue to improve, gradually consuming more and more of the Windows market. That’s certainly what Google is hoping to do with Chrome. What would happen if Apple made a netbook and did it right?
Microsoft had hoped to head off all these problems with Windows 8. By combining the best of PCs and tablets, Windows 8 was supposed to stop the tablet cannibalization and also set off a lucrative Windows upgrade cycle. Unfortunately, at least for the moment, Windows 8 is looking like the worst of both worlds – not a good enough tablet to displace the iPad, but different enough to scare away many Windows users.
This puts Microsoft in a nasty dilemma. If it believes that Windows 8 sales will eventually rebound, then Microsoft should invest heavily in keeping its PC partners engaged. In that context, the $2 billion loan to Dell is a reasonable stopgap to prevent the loss of a major licensee.
On the other hand, if Windows sales are entering a long-term period of gradual decline, Microsoft should be doing the exact opposite. Rather than spending money to keep licensees, it should be allowing one or more of them to leave the business, so the vendors that remain will still be profitable and willing to invest. It’s better for Microsoft to have seven licensees who are making money than ten licensees who all want to leave and are investing heavily in Chrome or Android or other crazy schemes.
Microsoft also faces a difficult challenge with Lenovo. Even if Windows sales turn up, Lenovo has been taking share so fast that it will be hard for other Windows licensees to grow. At current course and speed, Lenovo is likely to end up the largest Windows licensee. In the past, Microsoft didn’t care if one licensee replaced another; they were interchangeable. But Lenovo has close ties to the Chinese government, which has repeatedly shown that it’s willing to lean on foreign tech companies. That has to make Microsoft uncomfortable.
In that case, the $2 billion investment in Dell starts to look like a defensive measure to get someone to compete against Lenovo on price. But if Microsoft subsidizes a price war in PCs, that might give the other licensees more reason to disinvest, enabling Lenovo to gain share even faster.
This is the true ugliness of Microsoft’s situation. It is in danger of falling into a series of self-defeating actions:
—To combat tablets, it creates a version of Windows that accelerates the Windows sales decline.
—To keep its licensees loyal, it props up too many of them, which leaves each one unprofitable and increases their incentive to leave.
The situation is becoming more and more fragile. As I said above, I don’t expect Windows to collapse instantly. But many companies are reconsidering their investments in it, a process that is likely to eventually give customers second thoughts as well. We could end up with an unexpected series of events that combine to break the loyalty of Windows users and start a migration away from it that Microsoft couldn’t stop.
The key question is whether Google, Apple, or some other vendor can give Windows customers and licensees an attractive place to run away to. So far they haven’t, but the year is still young. I’ll talk more about the possibilities next time.
Monday, 1 April 2013
The Real Story Behind Google’s Street View Program
This morning Google signed a consent decree with the US Federal Trade Commission to avoid a lawsuit over privacy concerns caused by Google’s Street View program, which covertly collected electronic data including the location of unsecured WiFi access points and the contents of users’ web access logs. As part of the settlement, the FTC released a huge collection of sworn depositions that were taken from Google employees during its investigation. I’ve been wading through the depositions, and they give tantalizing hints of a deeper data collection plan by Google. Here’s what I found...
–We shouldn’t be surprised that there was an Android angle to the data collection. Most smartphones contain motion sensors, which, when tuned to eliminate background noise, can detect the pulse of the user. That was being combined with the phone's location and time data, and automatically reported to a centralized Google server.
Using this data, Google could map the excitement of crowds of people at any place and time. This was intended for use in an automated competitor to Yelp. By measuring biometric arousal of people at various locations, Google could automatically identify the most interesting restaurants, movies, and sporting events worldwide. The system was put into secret testing in Kansas City, but problems arose when it had trouble differentiating between the causes of arousal in crowds. This resulted in several unfortunate incidents in which the Google system routed adventurous diners into 24-hour fitness gyms and knife fights at biker bars. According to the papers I saw, Google is now planning to kill the project, a process that will involve announcing it worldwide with a big wave of publicity and then terminating it nine months later.
–TechCrunch reported about six months ago that Google was renting time on NASA’s network of earth-observation satellites. This was assumed to be a way to increase the accuracy of Google Maps. What wasn’t reported at the time was that Google was also renting time on the National Security Agency’s high resolution photography satellites, the ones that can read a newspaper from low Earth orbit. Apparently the NSA needed money from Google to overcome the federal sequester, and Google wanted a boost for Google+ in its endless battle with Facebook.
Google’s apparent plan was to automate the drudgery of creating status posts for Google+ users. Instead of using your cameraphone to photograph your lunch or something cute you saw on the street, Google would track your smartphone’s location and use the spy satellites to automatically capture and post photographs of any plate-shaped object in front of you, and any dog, cat, or squirrel that passed within ten feet of you. An additional feature would enable your friends to automatically reply “looks yummy” or “awww so cute.” (An advanced option would also insert random comments about Taylor Swift.) Google estimated that automating these functions would add an extra hour and 23 minutes to the average user’s work day, increasing world GDP by three points if everyone switched from Facebook to Google+.
–The other big news to me was the project’s tie-in to Google Glass, the company’s intelligent glasses. Glass doesn’t just monitor everything the user looks at and says; a sensor in Glass also measures pupil dilation, which can be correlated to determine the user’s emotional response to everything around them. This has obvious value to advertisers, who can automatically track brand affinity and reactions to advertisements. What isn’t widely known is that Glass can also feed ideas and emotions into the user’s brain. By carefully modulating the signals from Glass’s wireless transceiver, Google can directly stimulate targeted parts of the brainstem. This can be used to, for example, make you feel a wave of love when you see a Buick, or to feel a wave of nausea when you look at the wrong brand of beer.
This can sometimes cause cognitive problems. For example, during early tests Google found that force-fitting the concepts of “love” and “Buick” caused potentially fatal neurological damage to people under age 40. The papers said Google was working on age filters to overcome this problem.
Although today’s Glass products can only crudely affect emotions, the depositions gave vague hints that Google plans to upgrade the interface to enable full two-way communication with the minds of Glass users. (This explains Google's acquisition of the startup Spitr in 2010, which had been puzzling me.) The Glass-based thought transfer system could enable people to telepathically control Google’s planned fleet of moon-exploring robots. It may also be used to incorporate Glass users into the Singularity overmind when it emerges from Google’s server farms, which is apparently scheduled for sometime in March of 2017.
The ghosts of April Firsts past:
2012: Twitter at Gettysburg
2011: The microwave hairdryer, and four other colossal tech failures you've never heard of
2010: The Yahoo-New York Times merger
2009: The US government's tech industry bailout
2008: Survey: 27% of early iPhone adopters wear it attached to a body piercing
2007: Twitter + telepathy = Spitr, the ultimate social network
2006: Google buys Sprint
Thursday, 21 March 2013
Why it’s Hard to Set a Standard and Maximize Short-Term Profit
After my post yesterday on Kevin Lynch’s move to Apple (link), Infinity Softworks CEO Elia Freedman sent me a follow-up question:
“This line is interesting: ‘You can’t set a standard in tech and maximize short-term profit at the same time.’ Talk more about this?”
He’s right, I did assert that without explaining it. So here goes:
There are a couple of different things in the tech industry that we call standards. The first type is a product that almost everyone uses because it has critical mass: Microsoft Word, Internet Explorer, etc. The second type is a technology or specification that almost everyone builds on or incorporates into relevant products: HTML, JPEG, etc.
Once in a while you can establish a standard without limiting your short-term revenue (Adobe Photoshop is probably an example; it's carried a premium price ever since it was introduced in 1990). But in most cases, to establish a tech standard you have to limit your near-term profitability. Sometimes that means lowering your margins for quarters or years until the standard is established. In other cases it means permanently giving up some revenue streams in order to create a position of power.
A few examples:
—Adobe gradually gave away PDF in order to solidify it as a standard for document interchange. At first Adobe made the PDF reader free of charge, and eventually it gave away the PDF standard itself, enabling other companies to create PDF readers and creators that competed with Adobe. This move enabled PDF to become one of the most resilient standards in computing. Think about it – despite the hostility of much of the Internet community, and full-bore attacks from Microsoft and others, PDF continues to be a standard today.
When Adobe gave up control over PDF, it reduced the near-term revenue it could have earned through selling PDF readers and creator apps. This undoubtedly lowered Adobe’s quarterly revenue for a while, but it enabled PDF to survive as a standard when many other Adobe standards have withered away. Plus Adobe managed to keep a nice business selling PDF management software to large companies.
—Amazon has been selling e-reader devices at cost for several years in order to jumpstart the market for ebooks. I doubt Amazon will ever make much money from its hardware, but it’s willing to make that sacrifice in order to control the ebook transition and establish itself as the standard electronic bookstore.
—Google doesn’t charge license fees to use Android in a smartphone. This played a huge role in the early adoption of Android by phone makers; I think there’s a good chance the OS would never have taken off if Google had tried to charge for it. Google obviously hopes to make the money back through bundled services, but it’s not clear how successful that will be, and in the meantime Android is a huge cost sink for Google.
—Many open source companies operate by giving away their software and then charging for services or other ancillary products related to them. This approach defers revenue until the software becomes established as a widely-adopted standard.
As I explain in Map the Future, strategies like this are very problematic for an analytical company that focuses on logical cost-benefit planning. The benefits of establishing a standard are usually nebulous and risky, while the costs are immediate and painful. Faced with that kind of choice, most analytical companies will focus on tangible near-term opportunities. Thus Adobe made the prudent and logical decision to make money from Flash Lite when it had the chance, rather than sacrificing revenue to possibly make it a standard in the future.
You made your choice, now you have to live with it.
In the tech industry, the road to hell is often paved with prudent business decisions.
Wednesday, 20 March 2013
Kevin Lynch and Adobe: Shooting the Messenger
There’s been some nasty commentary about Apple’s decision to hire Kevin Lynch from Adobe. John Gruber at Daring Fireball has been especially acerbic, and there certainly are some things Lynch has said that look dumb when you read them today. But while I usually agree with the Fireball, in this case I think you need to look beyond Lynch’s statements and understand the situation he was in at Adobe.
Let me start with a little history. Adobe is a software powerhouse, with a long and very successful history in publishing and multimedia. But despite all its successes, I think it deserves to go down in history as the company that choked when it had the opportunity to rule the world, not once but twice.
In the formative years of the Internet, Adobe could have set the standard for formatting web pages. Adobe PostScript was far more sophisticated and capable than HTML, which became the core standard for displaying web pages. HTML is basically a text formatting specification. You give it a bunch of text tagged with suggestions for things like “this should be bold” or “underline this” or “this is a link,” and then the browser does its best to interpret the tags. HTML was derived from a formatting standard used in academia and government publishing, and it’s great for long text-only reports. But it was not designed to mix text and graphics. That’s why we still struggle to fully integrate great graphics with the web even today.
In contrast, PostScript is a programming language designed to mix text and graphics effortlessly. You can use it to control exactly where every pixel and image goes on the screen, and exactly how it looks. It was so powerful and so far ahead of its time that Steve Jobs’ NeXT chose it as the graphics language for its workstations. Using PostScript, you could easily draw things twenty years ago that we still can’t do on web pages today. The nagging incompatibilities and formatting weirdnesses we have to cope with from HTML, the fragile hacks and workarounds that web page designers live with every day...none of that had to happen.
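To make the difference concrete, here’s a tiny illustration of my own (my sketch, not anything from Adobe’s documentation). In HTML, you tag the text and let the browser decide how to render it:

<p>Our <b>new product</b> ships today. <a href="http://example.com">Read more</a>.</p>

In PostScript, you write a small program that places everything itself, down to the point (72 points to the inch):

%!PS
/Helvetica findfont 14 scalefont setfont   % pick the exact typeface and size
72 720 moveto                              % put the pen one inch from the left, one inch from the top of a letter page
(Our new product ships today.) show        % draw the text at exactly that spot
showpage                                   % emit the finished page

Same sentence, two philosophies: HTML makes suggestions; PostScript gives orders.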
Unfortunately, Adobe was so obsessed with making money selling PostScript interpreters that it was unwilling to make PostScript an open standard when it could have made a difference. And so Adobe missed the chance to set the graphics standard for the web.
Fast forward a few years, and Adobe again fumbled the chance for greatness, this time with Flash. This wasn’t just Adobe’s fault; it was a joint project with Macromedia, which Adobe bought in 2005. Flash became the dominant animation and video playback standard for the web because, unlike the situation with PostScript, the player was free. There was no cost for users or tech companies to adopt the standard, and so it spread wildly, boosted by a bundling deal with Microsoft (link).

There was a time in the early 2000s, prior to the iPhone and Android, when the mobile phone world was ripe for a takeover by software that would let you produce great visuals on a smartphone. Palm OS was too weak for the task, Windows CE was a mess, and Symbian was, well, Symbian. Macromedia, and later Adobe, could have set the standard for mobile phone graphics if they had given away the Flash player for mobile phones. But Macromedia had lucked into a licensing deal under which Japan’s NTT DoCoMo paid to put Flash on millions of mobile phones (link). Macromedia and Adobe fell in love with that revenue stream and decided they could extract money from every other mobile phone company in the world by charging for the player.
I’d call that move arrogant, but it was more than that – it was stupid. You can’t set a standard in tech and maximize short-term profit at the same time. For a few years of profit, Adobe sacrificed the opportunity to dominate the mobile phone market for a generation, and in the process fatally weakened Flash on the PC as well.
I could go on and on about the opportunities Adobe squandered: AIR, e-books...it’s a depressing list that reminds me of the stories people tell about Xerox PARC. If I thought Kevin Lynch was the executive responsible for those moves, I’d be shocked that Apple hired him. But as far as I can tell, they were made by other people, and he was stuck playing out the hand he was dealt. I’ve been there, I’ve done that. If you’re part of a team you do the best you can and trust that the folks around you will do theirs. If you want to fault Kevin for something, fault him for staying so long at a company that was putting quarterly profits ahead of long-term investment.
So my reaction to the Lynch hiring depends on what Apple’s going to ask him to do, and we don’t know that yet. If Apple wants him to run business strategy I’ll be worried, because I don’t think he had great role models at Adobe. If Apple wants him to run marketing I’ll be alarmed. But I think Apple has hired him as a technologist. In that role he’s extremely smart and easy to work with; Apple fans, I think he can be an asset to the company.
Disclosure: I did a little bit of consulting for Adobe in the past, and have met Kevin Lynch. This article doesn’t include any confidential or inside information.
Wednesday, 6 March 2013
Coming Soon: My Book on Business Strategy
I’m getting ready to publish my book on business strategy, Map the Future. It’s all about how a business should plan for the future, and how to manage the functions that help you make those plans: competitive analysis, market research, and advanced technology. It’s not a case study book; it’s more like a business cookbook, with detailed how-to instructions on everything from segmenting the market for a new product to influencing people who don't want to listen.
I’ll post more about the book when it ships, but in the meantime I wanted to offer a review copy to any journalists or bloggers who want to look at it. If you’re interested, please write to me at the address here. Be sure to include the URL of your publication or blog.
Now that the book’s finally done, I can get back to blogging. There’s a lot of interesting stuff going on, and I’ve been dying to dig into it.
Thursday, 28 February 2013
Remove Content Advisor Password from Internet Explorer
This guide covers Internet Explorer 6.0 on Windows XP.
First, make a backup of the registry:
1 Click on Start.
2 Choose Run.
3 In the Open field, type regedit.
4 Click the File menu.
5 Then choose Export.
6 In the File name field, type backup.
7 Click Save.
The following steps will remove any password set in the Internet Explorer Content Advisor and allow you to reset the program to its original state.
1 Go to start, run, type "regedit" and press "OK".
2 Expand the folders down the left side, in this order:
HKEY_LOCAL_MACHINE
>Software
>Microsoft
>Windows
>CurrentVersion
>Policies
>Ratings
3 On the right you will see a value named "Key". Click it once, then press Delete, or right-click it and select Delete.
4 Close the registry editor.
5 Go to start, run, type "explorer" and press "OK".
6 Navigate to C:\Windows\System and delete "ratings.pol"
7 Restart the computer and run Internet Explorer again.
8 Choose Tools and then Internet Options.
9 Click on the Content tab and click on Disable. When asked for a password, don't enter anything; just click on OK. This will disable Content Advisor because there's no longer a password.
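If you’re comfortable with the command line, the registry deletion and file cleanup above can be scripted instead of clicked through. Here’s a minimal sketch using Windows’ built-in reg.exe (included with a standard XP installation; if your copy lacks it, stick with the Regedit steps above). Run these lines at a Command Prompt, then restart and finish with steps 7 through 9:

rem Back up the Ratings key to backup.reg in the current folder
reg export "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\Ratings" backup.reg
rem Delete the key that stores the Content Advisor password
reg delete "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\Ratings" /f
rem Delete the ratings policy file, as in step 6 above
del "C:\Windows\System\ratings.pol"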
If you run into problems, here is another way to remove any password set in the Internet Explorer Content Advisor and reset the program to its original state.
1 Go to start, run, type "Notepad" and press "OK".
2 Copy the code below:
Windows Registry Editor Version 5.00

; The leading minus sign deletes the Ratings key, which stores the Content Advisor password.
[-HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\policies\Ratings]
; This line recreates the Ratings key, empty.
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\policies\Ratings]
3 Paste it into Notepad and save it as remove-content-adviser.reg on the Desktop.
4 After saving, go to the Desktop and double-click remove-content-adviser.reg to run it.
5 Click Yes, then OK.
6 You’ve now deleted the original Content Advisor password.