
The Innovators: How a Group of Hackers, Geniuses, and Geeks Created the Digital Revolution. BY WALTER ISAACSON. Simon & Schuster, 2014, 560 pp. $35.00.

Zero to One: Notes on Startups, or How to Build the Future. BY PETER THIEL WITH BLAKE MASTERS. Crown Business, 2014, 224 pp. $27.00.

In the grand scope of human history, technological progress is actually a surprisingly new phenomenon. While there had always been the occasional new invention or technological breakthrough, it wasn’t until the Industrial Revolution that sustained technological progress became a reality—and, with it, the possibility of steadily rising living standards. For most of the past two centuries, that progress was most visible in the industrial and agricultural realms. But over the past 60 years or so, the lion’s share of innovation has come from a single sector: what is now loosely called “information technology.” When thinking about innovation in the United States today, the first (and sometimes only) place that comes to mind is Silicon Valley. In the simplest sense, Walter Isaacson’s The Innovators explains how that happened and, in the process, sheds some interesting light on what drives innovation more generally.


The Innovators doesn’t begin in the Valley, though. It doesn’t even begin in the United States, or in the twentieth century. Instead, Isaacson starts with the story of the visionary British mathematicians Ada Lovelace and Charles Babbage, who, in the 1830s and 1840s, rather miraculously came up with the basic idea for a general-purpose computer much like those of today. From there, Isaacson leaps ahead a century to the years just before World War II, when a series of conceptual breakthroughs led to the construction of what might be considered the first proto-computers. Isaacson’s book, as its title suggests, then becomes a kind of serial biography, as he offers up portrait after portrait of the men and women who turned the computer from a theoretical idea into a daily reality. This approach has its limitations: there are times when it feels as if the reader is hearing at once too much and yet not enough about the characters. But what keeps the book from feeling like a series of potted sketches is that Isaacson is adept at situating his characters in both time and place. He lays out the institutional and organizational contexts in which they functioned, as well as the broader social trends they were influenced by and in turn shaped. The result is a book that offers a remarkably vivid picture of people genuinely making history, even if they did so not entirely on their own terms.


O PIONEERS!


In a schematic sense, the story Isaacson tells can be loosely divided into three periods. The early years culminate in the 1946 debut of ENIAC (the Electronic Numerical Integrator and Computer), which Isaacson considers to be the first true computer, meaning that it was all electronic, was programmable, and “could in theory tackle any task,” rather than being designed for a single goal. Those early years were dominated by the demands of the U.S. military (ENIAC was originally designed to calculate the trajectory of artillery shells), and the work took place largely in university laboratories and military facilities.


The two and a half decades that followed saw private corporations become central to the development of the computer, first at Bell Labs (where the transistor was invented) and soon after in Silicon Valley. Government, however, was still an integral part of the process, since federal research funding was immense—together, the U.S. Defense Department and the National Science Foundation spent as much on basic research as private companies did between the 1950s and the 1980s—and since the military was a crucial customer for the technology industry in its early days, most notably in the case of microchips. Indeed, the real driver of technological change during this era was what Isaacson calls “the military-industrial-academic complex,” which eventually led to the creation of the Internet in the 1970s.


What the advent of the Internet signaled, particularly in combination with the birth of the personal computer, was the mainstreaming of the idea that, as Isaacson puts it, “computers should be personal and interactive.” This has been the central theme of the last three decades or so in computing: the personalization of technology and the transformation of computers from expensive, hard-to-use machines into accessible, affordable household devices that put immense computing power in the hands of individuals. And although corporations have obviously been integral to this process, the last 20 years have also seen the emergence of important, and surprising, new ways of producing software and organizing work—think of open-source software or Wikipedia.


That last point is important, because, as Isaacson makes clear, advances in the digital revolution have taken myriad forms. The easiest innovations to see are the devices that people built—ENIAC, the semiconductor, the microprocessor, the personal computer, the Internet—and the software they wrote: graphical user interfaces, operating systems, word-processing programs, and so on. But there were also important innovations in finance, such as the rise of venture capital (about which Isaacson could have said more). There were important organizational innovations, too, such as the corporate research laboratories at Bell Labs and Xerox PARC, both of which invested heavily in basic research as well as product development and became extraordinary founts of technological breakthroughs (even if the companies that owned those labs didn’t always take advantage of them). And there were innovations in management, such as the nonhierarchical structure that Robert Noyce, Gordon Moore, and Andrew Grove pioneered at Intel. Then there’s what Isaacson calls “the advance that is closest to being revolutionary” in recent years—namely, the way the Internet has “facilitated collaboration not only within teams but also among crowds of people who didn’t know each other.”


Those who have read deeply in the history of technology will find much of this material familiar. But Isaacson has a great way with the telling detail, and he does an excellent job of showcasing the work of innovators such as Douglas Engelbart (who in the 1960s essentially invented a computer that had nearly all the features users take for granted today) and J. C. R. Licklider (who worked at the Defense Department and in some sense shepherded the Internet into existence), people who had a profound impact on modern life but whom most people have never heard of. And while, at heart, Isaacson is telling a story, he’s also trying to use history to investigate how innovation works.


HOW TO SUCCEED IN BUSINESS

This is trickier than one might think. The temptation is to simply look at successful innovators and at failed ones and identify what the former did well and what the latter did poorly. The problem is that the reasons for failure (or for success, for that matter) are not always obvious, and the sample size of truly successful innovators is not that large. As Isaacson writes, “Sometimes the difference between geniuses and jerks hinges on whether their ideas turn out to be right.” That should make one at least a little cautious about believing that it is possible to reliably distinguish the distinctive characteristics of geniuses and jerks.


Still, some common themes do emerge from the history of the digital revolution. Probably the most important is not to rely solely on geniuses in the first place. That may sound odd, since the story of invention is usually told as a story of great inventors. But as Isaacson reveals, the true engine of innovation is collaboration. The pairing of a creative visionary and a more practical engineer (such as John Mauchly and J. Presper Eckert, who created ENIAC, or Steve Jobs and Steve Wozniak at Apple) can be enormously productive. And it isn’t just strong pairs, either; the organizations that have done best at innovating have typically been those that have relied on strong teams made up of diverse thinkers from lots of different disciplines. These teams didn’t try to quash independent thinking; they welcomed it. As Isaacson puts it, “Rugged individualism and the desire to form associations were totally compatible, even complementary, especially when it involved creating things collaboratively.”


One of the reasons diverse teams have tended to be more successful is that they have done a better job of turning ideas into actual products. This is an important theme in Isaacson’s book: genuine innovations are not just about brilliant insights. They’re the result of taking those insights and turning them into things that people will actually use and then finding a way to get those products into people’s hands. One of the more interesting sections of The Innovators is Isaacson’s account of John Atanasoff’s quixotic quest to build a general-purpose computer by himself in the early 1940s. Atanasoff anticipated important aspects of what would become ENIAC and constructed a prototype. But because he worked alone, in Iowa, rather than in a lab with other scientists and engineers, his computer never became fully functional, and he became a footnote to history, eclipsed by Mauchly and Eckert. Isaacson takes Atanasoff’s efforts seriously, but he notes that “we shouldn’t in fact romanticize such loners.” Real innovation isn’t just about an invention. As Eckert put it, “You have to have a whole system that works.” And that’s hard to do when you’re all by yourself.


In the same vein, Isaacson’s book points to the virtues of public financing of basic research, so that what might be called “the knowledge commons” is constantly being replenished and reinvigorated. Although patent rights and intellectual property law are obviously at the core of the way much of the technology industry works today, the history of the last 60 years is, in large part, a history of people building freely on the ideas of others to come up with something new.


Of course, even having all these things can’t guarantee an innovation boom. There are times when Isaacson’s take feels a bit reductionist, as when he concludes that the way the United States has driven innovation historically—relying on government-funded work, the corporate pursuit of profits, and collaborative, open-source labor all at once—is necessarily the best model for how to propel innovation everywhere. But we don’t actually know that. What we do know is that taking this multipronged approach to innovation has worked well. Whether a different approach might have worked better will remain a mystery.


THE VISION THING

Different as Isaacson’s innovators were, they all had one important thing in common: the faith that dramatic change was possible, and that computers and networks could truly transform the way people lived. That kind of ambition is precisely what Peter Thiel, a co-founder of PayPal who is now a well-known venture capitalist and gadfly in Silicon Valley, believes is missing from today’s entrepreneurs. That may sound peculiar, given that nary a day goes by without news of some new start-up raising a big chunk of venture capital. But in his new book, Zero to One, which was born out of a class he taught at Stanford University for budding entrepreneurs, Thiel argues that today, too many start-ups have embraced an overly cautious view of what’s possible. Instead of trying to reinvent the wheel, they are trying to make existing wheels go a tiny bit faster, doing things such as producing yet another social networking app rather than trying to solve genuinely complicated and important problems. What the world needs, Thiel thinks, are more people like Elon Musk, who started Tesla Motors and SpaceX. What it’s getting instead are a lot of start-ups that are doing what everyone else is doing and just hoping to do it a little better.


Thiel thinks that this is a bad strategy for starting a business. The enemy of any business, he argues, is competition, and if a start-up is doing what other companies are already doing, it has guaranteed itself competition. What the founders of companies should really want is a monopoly, and the best way to get that is to build something that others aren’t building (for example, in Musk’s case, a stylish and reasonably affordable electric car). Thiel’s language here is attention-grabbing (aren’t monopolies bad things?), but he’s clearly onto something important: what businesses want to avoid is commoditization, where there is no real distinction between their product and those of their competitors. There is money to be made even in a competitive market, but it’s always difficult, and the profit margins are nearly always slim. Monopolies, by contrast, are lucrative. In Thiel’s mind, that is a good thing, since it is the prospect of earning a monopoly, and the profits that come with it, that encourages people to come up with profound innovations.


Of course, coming up with those isn’t exactly easy. And although Thiel’s book has plenty of interesting and concrete advice for entrepreneurs—Thiel suggests starting off by aiming to dominate a small market rather than trying to take a small slice out of a big market, and he stresses, as Isaacson does, the importance of sales and presentation in making even great products successful—he’s also clear that the vast majority of the economic value that start-ups create is created by a minuscule fraction of the start-ups out there. One could say that this means the chances of success are very small, but Thiel, a libertarian, frames it in different terms. The real question an entrepreneur has to be able to answer, he argues, is, What do you know that the rest of the world doesn’t? If you have a good answer to that question, you should try to turn your idea into reality.


Much of Zero to One is about what it takes, on an organizational and cultural level, for a start-up (even one with a great idea) to succeed. But Thiel’s real concern isn’t so much with individual businesses. Instead, Thiel is writing against what he perceives as a narrowed sense of ambition and a lack of imagination in the business world more generally, and indeed in society as a whole—and it’s this that makes his book an interesting complement to Isaacson’s. The Innovators is deeply optimistic about technological progress and American innovation. Thiel, by contrast, believes that the United States (and the world, for that matter) has been stuck for the past 40 years in a state of what he calls “horizontal progress.” People are getting better at copying what already works. But with few exceptions—most of them in information technology—they are not making the kinds of great leaps (or what he calls “vertical progress”) that have taken place in the past. One might ask, “But what about globalization, which has improved living standards for billions of people?” For Thiel, globalization is a quintessentially horizontal improvement: it’s about developing countries imitating the successes of others. Although that has certainly made people in developing countries better off than before, Thiel argues that it hasn’t actually made the world significantly richer as a whole.


Thiel’s gloomy take on innovation today is surprisingly common, what with worries about “secular stagnation” in the economy and the possibility that the world now faces permanently slower growth. And for all the talk about the rapid pace of change these days, it would be hard to argue that things have changed more in the last 50 years than they did in the previous 50. The years between 1915 and 1965 saw the birth of radio, television, jet airplanes, penicillin, air conditioning, the interstate highway system, the microchip, the semiconductor, and so on. The record of the last five decades isn’t anywhere near as dramatic.


GETTING THERE

At the same time, Thiel’s implicit downplaying of the advances since the 1970s—including the Internet, the personal computer, and the smartphone—understates the enormous benefits they have brought, not just to the way people entertain and educate themselves but also to the way they collaborate and connect. Isaacson, for his part, offers up a wonderful paean to how the Internet allows collective intelligence and collective effort to emerge on a grand scale, in a way that was never possible before. But Thiel has little patience for, or, it seems, interest in, any of this. Thiel has described himself as an outsider. So perhaps it’s not surprising that these extraordinary social networking technologies appear relatively trivial to him. But it is hard to argue that tools that people use so frequently and seem to enjoy so much are not actually important. In interviews, Thiel has pointed to the small number of people employed by Twitter as evidence that the company isn’t transformative. But whatever one thinks of Twitter, a head count is an odd way to determine its social value. That depends instead on how much value it creates for its users.


What is peculiar is that Thiel sort of knows this: he was an early investor in Facebook, and in Zero to One, he cites Google, the iPod, the iPad, and Uber as genuine innovations. Yet at some level, he is clearly dissatisfied with these inventions. They may have remade people’s daily lives. But Thiel is looking for something bigger—something like the Apollo program, perhaps. (Although Thiel is a libertarian, he expresses admiration for the government’s role in orchestrating both the space program and the Manhattan Project. He’s just disappointed that the government’s ambitions, too, have been scaled back in recent years.) Of course, it’s far from obvious that the Apollo program had all that big an impact on the lives of ordinary Americans. But it is the scale of it that Thiel admires.


For all these flaws, it would be a mistake to dismiss Thiel’s critique of today’s narrowed ambitions or his exhortations to entrepreneurs to think big. There are too many start-ups raising money to create another app that will add incremental value at best. Many businesses have become so obsessed with optimizing their current production that they have lost the desire—or maybe even the ability—to seek out truly breakthrough innovations. And Thiel is onto something when he says that the “definite optimism” of the postwar era, which assumed that the future was going to be better because Americans were going to make it so, has been replaced by “indefinite optimism,” a vague notion that Americans are going to keep improving but without any real idea for how to get there.


In some sense, what Thiel is saying to entrepreneurs is not just, “Think about what it takes to get rich,” but also, “How do you want to spend your life?” Do you really want to have developed one of a million apps in the iTunes store or be a consultant who helps some company save a fraction of a penny making widgets? Do you, as so many end up doing, want to muddle along and hope something good happens? Or do you want to try to do something great and transformative? 


Thiel obviously thinks that entrepreneurs should do the latter—if they have genuinely great ideas. (Otherwise, he suggests, they’re better off going to work for someone else who has a great idea.) As he puts it, “better to risk boldness than triviality.” And although he recognizes that luck plays a role in whether or not one succeeds, he contends that entrepreneurs need to “prioritize design over chance.” Even though they may know that the potential outcomes of their actions are uncertain, they need to plan, and not use that uncertainty as a crutch. These are the things that Isaacson’s innovators did. Isaacson’s history suggests that by its very nature, successful innovation requires a leap of faith, a willingness to believe that one can go from zero to one. Or, as the computer scientist Alan Kay has put it, “The best way to predict the future is to invent it.”
