First, check your tickets. In case you’re not sure if you should be here, a digital native is “a person born or brought up during the age of digital technology and therefore familiar with computers and the Internet from an early age.”
If that describes your child, then this is the place for you.
Or is it?
Because this will not be a place where you hear fear-mongering advice about screen time or read research suggesting that you are a bad parent if your child did not spend the entire weekend outside.
Instead, this will be a place where we all boldly face and embrace the “future that’s already here” (or what I fondly call the FUTAH – apply New Yorker accent), and parent our children with eyes and minds open to the unimaginable opportunities that technology will make possible for them.
The first step, in my opinion, is to grab a primer like The Inevitable: Understanding the 12 Technological Forces That Will Shape Our Future, an excellent and thorough guide to the world your children will occupy. It approaches the FUTAH both conceptually and quite literally, and it is not the least bit laden with the conventional fears that rule today’s playground conversations. It paints a vivid picture of your child’s day-to-day life when he or she is our age. And it is written by one of the world’s most intelligent and eclectic human beings, who has also successfully parented a child.
That author, Kevin Kelly, is worth looking into. He’s not your typical “tech type,” which is why I find his observations about tech so compelling. In just the past 10 years, he has written The Inevitable (a 2016 New York Times bestseller), is now finishing a voluminous photographic documentary of the disappearing traditions of Asia, and just 3 years ago published his first fiction (science fiction, of course) after laboring over it for 11 years and completing a successful Kickstarter campaign. Oh, and that book is a graphic novel, beautifully PRINTED as a 6-page fold-out. He also spent a meaningful part of his life as a nomadic photojournalist, once riding a bicycle 5,000 miles across America. There is more, but you can explore him yourself here.
So take my word for it. Kevin Kelly is no Silicon Valley guy blind to the importance of craft, creativity and culture.
But technically (ahem), he is the co-founder of Wired magazine and served as its Executive Editor for 7 years. He was very much a part of the evolution of the Internet, and has the benefit of an intimate perspective on it and all the societal elements it impacted.
So pick up a copy (or show off and read it on a screen), and come back for a fun journey into the 2030s through the 2050s… approximately the time our kids will be getting (or making) jobs and navigating the thing(s) we all once called college… not necessarily in that order.
If you’d like to cheat and see my notes from The Inevitable, I have included them below.
See you soon!
The Inevitable: Understanding the 12 Technological Forces That Will Shape Our Future by Kevin Kelly
We are moving away from the world of fixed nouns and toward a world of fluid verbs.
In the next 30 years we will continue to take solid things—an automobile, a shoe—and turn them into intangible verbs. Products will become services and processes.
Embedded with high doses of technology, an automobile becomes a transportation service. A shoe is no longer a finished product, but an endless process of reimagining our extended feet, perhaps with disposable covers, sandals that morph as you walk, treads that shift, or floors that act as shoes. “Shoeing” becomes a service and not a noun. In the intangible digital realm, nothing is static or fixed. Everything is becoming.
Technology is taking us to protopia. More accurately, we have already arrived in protopia. Protopia is a state of becoming, rather than a destination. It is a process.
Presently, major portions of the digital world can’t be googled. A lot of what happens in Facebook, or on a phone app, or inside a game world, or even inside a video can’t be searched right now. In 30 years it will be. The tendrils of hyperlinks will keep expanding to connect all the bits. The events that take place in a console game will be as searchable as the news. You’ll be able to look for things that occur inside a YouTube video. Say you want to find the exact moment on your phone when your sister received her acceptance to college. The web will reach this. It will also extend to physical objects, both manufactured and natural. A tiny, almost free chip embedded into products will connect them to the web and integrate their data. Most objects in your room will be connected, enabling you to google your room. Or google your house. We already have a hint of that. I can operate my thermostat and my music system from my phone. In three more decades, the rest of the world will overlap my devices. Unsurprisingly, the web will expand to the dimensions of the physical planet. It will also expand in time[…]
The cycle of obsolescence is accelerating (the average lifespan of a phone app is a mere 30 days!).
Every one of us will be endless newbies in the future, simply trying to keep up.
Everything media experts knew about audiences—and they knew a lot—promoted the belief that audiences would never get off their butts and start making their own entertainment. The audience was a confirmed collective couch potato, as the ABC honchos assumed. Everyone knew writing and reading were dead; music was too much trouble to make when you could sit back and listen; video production was simply out of reach of amateurs in terms of cost and expertise. User-generated creations would never happen at a large scale, or if they happened they would not draw an audience, or if they drew an audience they would not matter. What a shock, then, to witness the near instantaneous rise of 50 million blogs
And, of course, the internet is not and has never been a teenage realm. In 2014 the average age of a user was roughly a bone-creaking 44 years old.
As we try to imagine this exuberant web three decades from now, our first impulse is to imagine it as Web 2.0—a better web. But the web in 2050 won’t be a better web, just as the first version of the web was not better TV with more channels. It will have become something new, as different from the web today as the first web was from TV.
In a strict technical sense, the web today can be defined as the sum of all the things that you can google—that is, all files reachable with a hyperlink.
From the moment you wake up, the web is trying to anticipate your intentions. Since your routines are noted, the web is attempting to get ahead of your actions, to deliver an answer almost before you ask a question. It is built to provide the files you need before the meeting, to suggest the perfect place to eat lunch with your friend, based on the weather, your location, what you ate this week, what you had the last time you met with your friend, and as many other factors as you might consider. You’ll converse with the web. Rather than flick through stacks of friends’ snapshots on your phone, you ask it about a friend. The web anticipates which photos you’d like to see and, depending on your reaction to those, may show you more or something from a different friend—or, if your next meeting is starting, the two emails
you need to see. The web will more and more resemble a presence that you relate to rather than a place—the famous cyberspace of the 1980s—that you journey to. It will be a low-level constant presence like electricity: always around us, always on, and subterranean. By 2050 we’ll come to think of the web as an ever-present type of conversation.
But, but . . . here is the thing. In terms of the internet, nothing has happened yet! The internet is still at the beginning of its beginning. It is only becoming. If we could climb into a time machine, journey 30 years into the future, and from that vantage look back to today, we’d realize that most of the greatest products running the lives of citizens in 2050 were not invented until after 2016. People in the future will look at their holodecks and wearable virtual reality contact lenses and downloadable avatars and AI interfaces and say, “Oh, you didn’t really have the internet”—or whatever they’ll call it—“back then.”
Because here is the other thing the graybeards in 2050 will tell you: Can you imagine how awesome it would have been to be an innovator in 2016? It was a wide-open frontier! You could pick almost any category and add some AI to it, put it on the cloud. Few devices had more than one or two sensors in them, unlike the hundreds now. Expectations and barriers were low. It was easy to be the first. And then they would sigh. “Oh, if only we realized how possible everything was back then!”
So, the truth: Right now, today, in 2016 is the best time to start up. There has never been a better day in the whole history of the world to invent something. There has never been a better time with more opportunities, more openings, lower barriers, higher benefit/risk ratios, better returns, greater upside than now. Right now, this minute. This is the moment that folks in the future will look back at and say, “Oh, to have been alive and well back then!”
The first genuine AI will not be birthed in a stand-alone supercomputer, but in the superorganism of a billion computer chips known as the net. It will be planetary in dimensions, but thin, embedded, and loosely connected. It will be hard to tell where its thoughts begin and ours end. Any device that touches this networked AI will share—and contribute to—its intelligence. A lonely off-the-grid AI cannot learn as fast, or as smartly, as one that is plugged into 7 billion human minds, plus quintillions of online transistors, plus hundreds of exabytes of real-life data, plus the self-correcting feedback loops of the entire civilization. So the network itself will cognify into something that uncannily keeps getting better. Stand-alone synthetic minds are likely to be viewed as handicapped, a penalty one might pay in order to have AI mobility in distant places. When this emerging AI arrives, its very ubiquity will hide it. We’ll use its growing smartness for all kinds of humdrum chores, but it will be faceless, unseen. We will be able to reach this distributed intelligence in a million ways, through any digital screen anywhere on earth, so it will be hard to say where it is. And because this synthetic intelligence is a combination of human intelligence (all past human learning, all current humans online), it will be difficult to[…]
At the rate AI technology is improving, a kid born today will rarely need to see a doctor to get a diagnosis by the time they are an adult.
In 2015 researchers at DeepMind published a paper in Nature describing how they taught an AI to learn to play 1980s-era arcade video games, like Video Pinball. They did not teach it how to play the games, but how to learn to play the games—a profound difference.
Everything that we formerly electrified we will now cognify. There is almost nothing we can think of that cannot be made new, different, or more valuable by infusing it with some extra IQ. In fact, the business plans of the next 10,000 startups are easy to forecast: Take X and add AI. Find something that can be made better by adding online smartness to it.
Amid all this activity, a picture of our AI future is coming into view, and it is not the HAL 9000—a discrete machine animated by a charismatic (yet potentially homicidal) humanlike consciousness—or a Singularitan rapture of superintelligence. The AI on the horizon looks more like Amazon Web Services—cheap, reliable, industrial-grade digital smartness running behind everything, and almost invisible except when it blinks off. This common utility will serve you as much IQ as you want but no more than you need. You’ll simply plug into the grid and get AI as if it were electricity. It will enliven inert objects, much as electricity did more than a century past. Three generations ago, many a tinkerer struck it rich by taking a tool and making an electric version. Take a manual pump; electrify it. Find a hand-wringer washer; electrify it. The entrepreneurs didn’t need to generate the electricity; they bought it from the grid and used it to automate the previously manual.
Contemporary phone cameras eliminated the layers of heavy glass by adding algorithms, computation, and intelligence to do the work that physical lenses once did. They use the intangible smartness to substitute for a physical shutter.
There are even designs for a completely flat camera with no lens at all. Instead of any glass, a perfectly flat light sensor uses insane amounts of computational cognition to compute a picture from the different light rays falling on the unfocused sensor.
Take chemistry, another physical endeavor requiring laboratories of glassware and bottles brimming with solutions.
By adding AI to chemistry, scientists can perform virtual chemical experiments.
At first glance, you might think that Google is beefing up its AI portfolio to improve its search capabilities, since search constitutes 80 percent of its revenue. But I think that’s backward. Rather than use AI to make its search better, Google is using search to make its AI better. Every time you type a query, click on a search-generated link, or create a link on the web, you are training the Google AI. When you type “Easter Bunny” into the image search bar and then click on the most Easter Bunny–looking image, you are teaching the AI what an Easter Bunny looks like. Each of the 3 billion queries that Google conducts each day tutors the deep-learning AI over and over again. With another 10 years of steady improvements to its AI algorithms, plus a thousandfold more data and a hundred times more computing resources, Google will have an unrivaled AI.
My prediction: By 2026, Google’s main product will not be search but AI.
As it does, this cloud-based AI will become an increasingly ingrained part of our everyday life. But it will come at a price. Cloud computing empowers the law of increasing returns, sometimes called the network effect, which holds that the value of a network increases much faster as it grows bigger. The bigger the network, the more attractive it is to new users, which makes it even bigger and thus more attractive, and so on. A cloud that serves AI will obey the same law. The more people who use an AI, the smarter it gets. The smarter it gets, the more people who use it. The more people who use it, the smarter it gets.
But here’s the even more surprising part: The advent of AI didn’t diminish the performance of purely human chess players. Quite the opposite. Cheap, supersmart chess programs inspired more people than ever to play chess, at more tournaments than ever, and the players got better than ever. There are more than twice as many grand masters now as there were when Deep Blue first beat Kasparov. The top-ranked human chess player today, Magnus Carlsen, trained with AIs and has been deemed the most computerlike of all human chess players. He also has the highest human grand master rating of all time.
We like to call our human intelligence “general purpose,” because compared with other kinds of minds we have met, it can solve more types of problems, but as we build more and more synthetic minds we’ll come to realize that human thinking is not general at all. It is only one species of thinking.
Let the robots take our jobs, and let them help us dream up new work that matters.
Today most books are predominantly born as ebooks. Even old books have had their texts scanned and blasted into every corner of the internet, encouraging them to flow freely on the superconducting wires of the net. The four fixities are not present in ebooks, at least not in the versions of ebooks we see today. But while book lovers will miss the fixities, we should be aware that ebooks offer four fluidities to counter them: Fluidity of the page—The page is a flexible unit. Content will flow to fit any available space, from a tiny screen in a pair of glasses to a wall. It can adapt to your preferred reading device or reading style. The page fits you. Fluidity of the edition—A book’s material can be personalized. Your edition might explain new words if you are a student, or it could skip a recap of the previous books in the series if you’ve already read them. Customized “my books” are for me. Fluidity of the container—A book can be kept in the cloud at such low cost that it is “free” to store in an unlimited library and can be delivered instantly anywhere on earth[…]
Every now and then a band or artist will experiment in letting fans pay them whatever they wish for a free copy. This scheme basically works. It’s an excellent illustration of the power of patronage. The elusive connection that flows between appreciative fans and the artist is definitely worth something. One of the first bands to offer the option of pay-what-you-want was Radiohead. They discovered they made about $2.26 per download of their 2007 In Rainbows album, earning the band more money than all previous albums released on labels combined and spurring several million sales of CDs. There are many other examples of the audience paying simply because they gain an intangible pleasure from it.
Liquidity offered new powers. Forget the tyranny of the radio DJ. With liquid music you had the power to reorder the sequence of tunes on an album or among albums. You could shorten a song or draw it out so that it took twice as long to play. You could extract a sample of notes from someone else’s song to use yourself. Or you could substitute lyrics in the audio. You could reengineer a piece so that it sounded better on a car woofer. You could—as someone later did—take two thousand versions of the same song and create a chorus from it. The superconductivity of digitalization had unshackled music from its narrow confines on a vinyl disk and thin oxide tape. Now you could unbundle a song from its four-minute package, filter it, bend it, archive it, rearrange it, remix it, mess with it. It wasn’t only that it was monetarily free;
But today more than 5 billion digital screens illuminate our lives. Digital display manufacturers will crank out 3.8 billion new additional screens per year. That’s nearly one new screen each year for every human on earth. We will start putting watchable screens on any flat surface. Words have migrated from wood pulp to pixels on computers, phones, laptops, game consoles, televisions, billboards, and tablets. Letters are no longer fixed in black ink on paper, but flitter on a glass surface in a rainbow of colors as fast as our eyes can blink. Screens fill our pockets, briefcases, dashboards, living room walls, and the sides of buildings. They sit in front of us when we work—regardless of what we do. We are now People of the Screen. This has set up the current culture clash between People of the Book and People of the Screen.
Truth is assembled piece by piece by the audience themselves. People of the Screen make their own content and construct their own truth. Fixed copies don’t matter as much as flowing access. Screen culture is fast, like a 30-second movie trailer, and as liquid and open-ended as a Wikipedia page. On a screen, words move, meld into pictures, change color, and perhaps even change meaning. Sometimes there are no words at all, only pictures or diagrams or glyphs that may be deciphered into multiple meanings. This liquidity is terribly unnerving to any civilization based on text logic. In this new world, fast-moving code—as in updated versions of computer code—is more important than law, which is fixed. Code displayed on a screen is endlessly tweakable by users, while law embossed into books is not. Yet code can shape behavior as much as, if not more than, law. If you want to change how people act online, on the screen, you simply alter the algorithms that govern the place, which in effect polices the collective behavior or nudges people in preferred directions. People of the Book favor solutions by laws, while People of the Screen favor technology as a solution to all problems. Truth is, we are in transition, and the clash between the cultures of books and screens occurs within us as individuals as well. If you are an educated modern person, you are conflicted by these two modes. This tension is the new norm. It all started with the first screens that invaded our living rooms 50 years ago: the big, fat, warm tubes of television. These glowing altars reduced the time we spent reading to such an extent that in the following decades it seemed as if reading and writing were over. Educators, intellectuals, politicians, and parents in the last half of the last century worried deeply that the TV generation would be unable to write. Screens were blamed for an amazing list of societal ills. But of course we all kept watching. And for a while it did seem as if nobody wrote, or could write, and reading scores trended[…]
The literacy rate in the U.S. has remained unchanged in the last 20 years, but those who can read are reading and writing more. If we count the creation of all words on all screens, you are writing far more per week than your grandmother, no matter where you live. In addition to reading words on a page, we now read words floating nonlinearly in the lyrics of a music video or scrolling up in the closing credits of a movie. We might read dialog balloons spoken by an avatar in a virtual reality, or click through the labels of objects in a video game, or decipher the words on a diagram online. We should properly call this new activity “screening” rather than reading. Screening includes reading words, but also watching words and reading images. This new activity has new characteristics. Screens are always on; we never stop staring at them, unlike with books.
The fate of books is worth investigating in detail because books are simply the first of many media that screening will transform. First screening will change books, then it will alter libraries of books, then it will modify movies and video, then it will disrupt games and education, and finally screening will change everything else.
Some scholars of literature claim that a book is really that virtual place your mind goes to when you are reading. It is a conceptual state of imagination that one might call “literature space.” According to these scholars, when you are engaged in this reading space, your brain works differently than when you are screening. Neurological studies show that learning to read changes the brain’s circuitry. Instead of skipping around distractedly gathering bits, when you read you are transported, focused, immersed. One can spend hours reading on the web and never encounter this literature space. One gets fragments, threads, glimpses. That is the web’s great attraction: miscellaneous pieces loosely joined. But without some kind of containment, these loosely joined pieces spin away, nudging a reader’s attention away.
Unlike the libraries of old, which were restricted to the elite, this library would be truly democratic, offering every book in every language to every person alive on the planet. Ideally, in such a complete library we should be able to read any article ever written in any newspaper, magazine, or journal. The universal library should also include a copy of every painting, photograph, film, and piece of music produced by all artists, present and past. Still more, it should include all radio and television broadcasts. Commercials too. Of course, the grand library naturally needs a copy of the billions of dead web pages no longer online and the tens of millions of blog posts now gone—the ephemeral literature of our time. In short, the entire works of humankind, from the beginning of recorded history, in all languages, available to all people, all the time.
Even when the central core of a text is authored by a lone author (as is likely for many fictional books), the auxiliary networked references, discussions, critiques, bibliography, and hyperlinks surrounding a book will probably be a collaboration. Books without this network will feel naked.
Screening encourages pattern making, associating one idea with another, equipping us to deal with the thousands of new thoughts expressed every day. Screening nurtures thinking in real time. We review a movie while we watch it, or we come up with an obscure fact in the middle of an argument, or we read the owner’s manual of a gadget before we purchase it rather than after we get home and discover that it can’t do what we need it to do. Screens are instruments of the now. Screens provoke action instead of persuasion. Propaganda is less effective in a world of screens, because while misinformation travels as fast as electrons, corrections do too. Wikipedia works so well because it removes an error in a single click, making it easier to eliminate a falsehood than to post a falsehood in the first place. In books we find a revealed truth; on the screen we assemble our own myths from pieces.
But today most of us have become People of the Screen. People of the Screen tend to ignore the classic logic of books or the reverence for copies; they prefer the dynamic flux of pixels. They gravitate toward movie screens, TV screens, computer screens, iPhone screens, VR goggle screens, tablet screens, and in the near future massive Day-Glo megapixel screens plastered on every surface. Screen culture is a world of constant flux, of endless sound bites, quick cuts, and half-baked ideas. It is a flow of tweets, headlines, instagrams, casual texts, and floating first impressions. Notions don’t stand alone but are massively interlinked to everything else; truth is not delivered by authors and authorities but is assembled in real time.
But there is no reason an ebook has to be a plank. E-ink paper can be manufactured in inexpensive flexible sheets as thin and supple and cheap as paper. A hundred or so sheets can be bound into a sheaf, given a spine, and wrapped between two handsome covers. Now the ebook looks very much like a paper book of old, thick with pages, but it can change its content. One minute the page has a poem on it; the next it has a recipe. Yet you still turn its thin pages (a way to navigate through text that is hard to improve). When you are finished reading the book, you slap the spine. Now the same pages show a different tome. It is no longer a bestselling mystery, but a how-to guide to raising jellyfish. The whole artifact is superbly crafted and satisfying to hold. A well-designed ebook shell may be so sensual it might be worth purchasing a very fine one covered in soft well-worn Moroccan leather, molded to your hand, sporting the most satiny, thinnest sheets. You’ll probably have several ebook readers of different sizes and shapes optimized for different content.
The conventional vision of the book’s future assumes that books will remain isolated items, independent from one another, just as they are on the shelves in your public library. There, each book is pretty much unaware of the ones next to it. When an author completes a work, it is fixed and finished. Its only movement comes when a reader picks it up to enliven it with his or her imagination. In this conventional vision, the main advantage of the coming digital library is portability—the nifty translation of a book’s full text into bits, which permits it to be read on a screen anywhere. But this vision misses the chief revolution birthed by scanning: dense hyperlinking among books will make every book a networked event.
Brewster Kahle, an archivist who is backing up the entire internet, says that the universal library is now within reach. “This is our chance to one-up the Greeks!” he chants. “It is really possible with the technology of today, not tomorrow. We can provide all the works of humankind to all the people of the world. It will be an achievement remembered for all time, like putting a man on the moon.”
This is a very big library. From the days of Sumerian clay tablets until now, humans have “published” at least 310 million books, 1.4 billion articles and essays, 180 million songs, 3.5 trillion images, 330,000 movies, 1 billion hours of videos, TV shows, and short films, and 60 trillion public web pages. All this material is currently contained in all the libraries and archives of the world. When fully digitized, the whole lot could be compressed (at current technological rates) onto 50-petabyte hard disks. Ten years ago you needed a building about the size of a small-town library to house 50 petabytes. Today the universal library would fill your bedroom. With tomorrow’s technology, it will all fit onto your phone. When that happens, the library of all libraries will ride in your purse or wallet—if it doesn’t plug directly into your brain with thin white cords.
The universal library and its “books” will be unlike any library or books we have known because, rather than read them, we will screen them. Buoyed by the success of massive interlinking in Wikipedia, many nerds believe that a billion human readers can reliably weave together the pages of old books, one hyperlink at a time. Those with a passion for a special subject, obscure author, or favorite book will, over time, link up its important parts. Multiply that simple generous act by millions of readers, and the universal library can be integrated in full, by fans, for fans.
In addition to a link, which explicitly connects one word or sentence or book to another, readers will also be able to add tags. Smart AI-based search technology overcomes the need for elaborate classification systems, so user-generated tags are enough to find things. Indeed, the sleepless smartness in AI will tag text and images automatically in the millions, so that the entire universal library will yield its wisdom.
A nonfiction book will usually have a bibliography and some kind of footnotes. When books are deeply linked, you’ll be able to click on the title in any bibliography or any footnote and find the actual book referred to in the footnote. The books referenced in that book’s bibliography will themselves be available, and so you can hop through the library in the same way we hop through web links, traveling from footnote to footnote to footnote until you reach the bottom of things.
Books were good at developing a contemplative mind. Screens encourage more utilitarian thinking. A new idea or unfamiliar fact uncovered while screening will provoke our reflex to do something: to research the term, to query your screen “friends” for their opinions, to find alternative views, to create a bookmark, to interact with or tweet the thing rather than simply contemplate it. Book reading strengthened our analytical skills, encouraging us to pursue an observation all the way down to the footnote.
Hold an electronic tablet up as you walk along a street—or wear a pair of magic spectacles or contact lenses—and it will show you an annotated overlay of the real street ahead: where the clean restrooms are, which stores sell your favorite items, where your friends are hanging out. Computer chips are becoming so small, and screens so thin and cheap, that in the next 30 years semitransparent eyeglasses will apply an informational layer to reality. If you pick up an object while peering through these spectacles, the object’s (or place’s) essential information will appear in overlay text. In this way screens will enable us to “read” everything, not just text.
For instance, on Pinterest, plentiful tags and categories (“pins”) enable a user to make very quick and specific scrapbooks that are super easy to retrieve and add to. Moreover, other users will benefit from an individual’s tags, pins, and bookmarks, which make it easier for them to find similar material. The more tags an image gets in Pinterest, or likes in Facebook, or hashtags on Twitter, the more useful it becomes for others.
Half of all web pages in the world today are hosted on more than 35 million servers running free Apache software, which is open source and community created.
Nearly 1 million community-designed Arduinos and 6 million Raspberry Pi computers have been built by schools and hobbyists. Their designs are encouraged to be copied freely and used as the basis for new products. Instead of money, the peer producers who create these products and services gain credit, status, reputation, enjoyment, satisfaction, and experience.
The new OS is neither the classic communism of centralized planning without private property nor the undiluted selfish chaos of a free market. Instead, it is an emerging design space in which decentralized public coordination can solve problems and create things that neither pure communism nor pure capitalism can.
Black Duck Open Hub, which tracks the open source industry, lists roughly 650,000 people working on more than half a million projects.
If it were a nation, Facebook would be the largest country on the planet. Yet the entire economy of this largest country runs on labor that isn’t paid.
A billion people spend a lot of their day creating content for free. They report on events around them, summarize stories, add opinions, create graphics, make up jokes, post cool photos, and craft videos. They are “paid” in the value of the communication and relations that emerge from 1.4 billion connected verifiable individuals. They are paid by being allowed to stay on the commune.
The most common motivation for working without pay (according to a survey of 2,784 open source developers) was “to learn and develop new skills.”
No one, it was assumed, would share their medical records. But PatientsLikeMe, where patients pool results of treatments to better their own care, proves that collective action can trump both doctors and privacy scares. The increasingly common habit of sharing what you’re thinking (Twitter), what you’re reading (StumbleUpon), your finances (Motley Fool Caps), your everything (Facebook) is becoming a foundation of our culture. Doing it while collaboratively building encyclopedias, news agencies, video archives, and software in groups that span continents, with people you don’t know and whose class is irrelevant—that makes political socialism seem like the logical next step.
So merely by using Google, the fans themselves made Google better and more economically valuable.
We live in a golden age now. The volume of creative work in the next decade will dwarf the volume of the last 50 years. More artists, authors, and musicians are working than ever before, and they are creating significantly more books, songs, films, documentaries, photographs, artworks, operas, and albums every year. Books have never been cheaper, and more available, than today. Ditto for music, movies, games, and every kind of creative content that can be digitally copied. The volume and variety of creative works available have skyrocketed. More and more of civilization’s past works—in all languages—are no longer hidden in rare-book rooms or locked up in archives, but are available a click away no matter where you live.
The technologies of recommendation and search have made it super easy to locate the most obscure work. If you want 6,000-year-old Babylonian chants accompanied by the lyre, there they are.
Occasionally, unexpectedly popular fan-financed Kickstarter projects may pile on an additional $1 million above the goal. The highest-grossing Kickstarter campaign raised $20 million for a digital watch from its future fans. Approximately 40 percent of all projects succeed in reaching their funding goal.
If you have an idea, you can seek investment from anyone else who sees the same potential as you do. You don’t need the permission of bankers, or the rich.
Loan a poor woman $95 to buy supplies to launch a street food cart and the benefits of her stable income would ripple up through her children, the local economy, and quickly build a base for more complex startups. It was the most efficient development strategy yet invented. Kiva took the next step in sharing and turned microfinancing into peer-to-peer lending by enabling anyone, anywhere to make a microfinance loan. So you, sitting at Starbucks, could now lend $120 to a specific individual Bolivian woman who plans to buy wool to start a weaving business.
Very soon you’ll be able to carry around all the music of humankind in your pants.
Here is a picture of where this force is taking us. My day in the near future will entail routines like this: I have a pill-making machine in my kitchen, a bit smaller than a toaster. It stores dozens of tiny bottles inside, each containing a prescribed medicine or supplement in powdered form. Every day the machine mixes the right doses of all the powders and stuffs them all into a single pill, which I take. During the day my biological vitals are tracked with wearable sensors so that the effect of the medicine is measured hourly and then sent to the cloud for analysis. The next day the dosage of the medicines is adjusted based on the past 24-hour results and a new personalized pill produced. Repeat every day thereafter. This appliance, manufactured in the millions, produces mass personalized medicine.
My personal avatar is stored online, accessible to any retailer. It holds the exact measurements of every part and curve of my body. Even if I go to a physical retail store, I still try on each item in a virtual dressing room before I go because stores carry only the most basic colors and designs. With the virtual mirror I get a surprisingly realistic preview of what the clothes will look like on me; in fact, because I can spin my simulated dressed self around, it is more revealing than a real mirror in a dressing room. (It could be better in predicting how comfortable the new clothes feel, though.) My clothing is custom fit based on the specifications (tweaked over time) from my avatar. My clothing service generates new variations of styles based on what I’ve worn in the past, or on what I spend the most time wishfully gazing at, or on what my closest friends have worn. It is filtering styles. Over years I have trained an in-depth profile of my behavior, which I can apply to anything I desire. My profile, like my avatar, is managed by Universal You. It knows that I like to book inexpensive hostels when I travel on vacation, but with a private bath, maximum bandwidth, and always in the oldest part of the town, except if it is near a bus station. It works with an AI to match, schedule, and reserve the best rates. It is more than a mere stored profile; rather it is an ongoing […]
In my cupboard I find a new kind of cereal with saturated nutrition that my friends are trying this week, so Universal ordered it for me yesterday. It’s not bad. My car service notices where the traffic jams are this morning, so it schedules my car later than normal and it will try an unconventional route to the place I’ll work today, based on several colleagues’ commutes earlier. I never know for sure where my office will be since our startup meets in whatever coworking space is available that day. My personal device turns the space’s screens into my screen. My work during the day entails tweaking several AIs that match doctoring and health styles with clients. My job is to help the AIs understand some of the outlier cases (such as folks with faith-healing tendencies) in order to increase the effectiveness of the AIs’ diagnoses and recommendations. When I get home, I really look forward to watching the string of amusing 3-D videos and fun games that Albert lines up for me. That’s the name I gave to the avatar from Universal who filters my media for me. Albert always gets the coolest stuff because I’ve trained him really well. Ever since high school I would spend at least 10 minutes every day correcting his selections and adding obscure influences, really tuning the filters, so that by now, with all the new AI algos and the friends of friends of friends’ scores, I have the most amazing channel. I have a lot of people who […]
We are in a period of productive remixing
We live in a golden age of new mediums. In the last several decades hundreds of media genres have been born, remixed out of old genres. Former mediums such as a newspaper article, or a 30-minute TV sitcom, or a 4-minute pop song still persist and enjoy immense popularity. But digital technology unbundles those forms into their elements so they can be […]
For instance, behind every bestselling book are legions of fans who write their own sequels using their favorite author’s characters in slightly altered worlds. These extremely imaginative extended narratives are called fan fiction, or fanfic. They are unofficial—without the original authors’ cooperation or approval—and may mix elements from more than one book or author. Their chief audience is other avid fans. One fanfic archive lists 1.5 million […]
Extremely short snips (six seconds or less) of video quickly recorded on a phone can easily be shared and reshared with an app called Vine.
As a percentage of the hundreds of millions of hours of moving images produced annually today, 1,200 hours is minuscule. It is an insignificant rounding error.
YouTube videos are viewed more than 12 billion times in a single month. The most viewed videos have been watched several billion times each, more than any blockbuster movie.
In essence, his films were written pixel by pixel. Indeed, every single frame in a big-budget Hollywood action film today has been built up with so many layers of additional details that it should be thought of as a moving painting rather than as a moving photograph.
Eno told me, “The trouble with computers is that there is not enough Africa in them.” By that he meant that interacting with computers using only buttons was like dancing with only your fingertips, instead of your full body, as you would in Africa. Embedded microphones, cameras, and accelerometers inject some Africa into devices. They provide embodiment in order to hear us, see us, feel us. Swoosh your hand to scroll. Wave your arms with a Wii. Shake or tilt a tablet. Let us embrace our feet, arms, torso, head, as well as our fingertips. Is there a way to use our whole bodies to overthrow the tyranny of the keyboard?
Onscreen heroes catch data like a beach ball, rotating bundles of information as if they were objects. It’s very cinematic, but real interfaces in the future are far more likely to use hands closer to the body. Holding your arms out in front of you for more than a minute is an aerobic exercise. For extended use, interaction will more closely resemble sign language. A future office worker is not going to be pecking at a keyboard—not even a fancy glowing holographic keyboard—but will be talking to a device with a newly evolved set of hand gestures, similar to the ones we now have of pinching our fingers in to reduce size, pinching them out to enlarge, or holding up two L-shaped pointing hands to frame and select something. Phones are very close to perfecting speech recognition today (including being able to translate in real time), so voice will be a huge part of interacting with devices. If you’d like to have a vivid picture of someone interacting with a portable device in the year 2050, imagine them using their eyes to visually “select” from a set of rapidly flickering options on the screen, confirming with lazy audible grunts[…]
“But it’s not a real camera—it doesn’t have the picture on the back.” Another friend had a barely speaking toddler take over his iPad. She could paint and easily handle complicated tasks on apps almost before she could walk. One day her dad printed out a high-resolution image on photo paper and left it on the coffee table. He noticed his toddler came up and tried to unpinch the photo to make it larger. She tried unpinching it a few times, without success, and looked at him, perplexed. “Daddy, broken.” Yes, if something is not interactive, it is broken. The dumbest objects we can imagine today can be vastly improved by outfitting them with sensors and making them interactive. We had an old standard thermostat running the furnace in our home. During a remodel we upgraded to a Nest smart thermostat, designed by a team of ex-Apple execs and recently bought by Google. The Nest is aware of our presence. It senses when we are home, awake or asleep, or on vacation. Its brain, connected to the cloud, anticipates our routines, and over time builds up a pattern of our lives so it can warm up[…]
What could be more intimate and interactive than wearing something that responds to us? Computers have been on a steady march toward us. At first computers were housed in distant air-conditioned basements, then they moved to nearby small rooms, then they crept closer to us perched on our desks, then they hopped onto our laps, and recently they snuck into our pockets. The next obvious step for computers is to lay against our skin. We call those wearables. We can wear special spectacles that reveal an augmented reality. Wearing such a transparent computer (an early prototype was Google Glass) empowers us to see the invisible bits that overlay the physical world. We can inspect a cereal box in the grocery store and, as the young boy suggested, simply click it within our wearable to read its meta-information. Apple’s watch is a wearable computer, part health monitor, but mostly a handy portal to the cloud. The entire super-mega-processing power of the entire internet and World Wide Web is funneled through that little square on your wrist. But wearables in particular mean smart clothes. Of course, itsy-bitsy chips can be woven into a shirt so that[…]
Here is how a day plugged into virtual and augmented realities may unfold in the very near future. I am in VR, but I don’t need a headset. The surprising thing that few people expected way back in 2016 is that you don’t need to wear goggles, or even a pair of glasses, in order to get a basic “good enough” augmented reality. A 3-D image projects directly into my eyes from tiny light sources that peek from the corner of my rooms, all without the need of something in front of my face. The quality is good enough for most applications, of which there are tens of thousands.
I wear a pair of AR glasses outside to get a sort of X-ray view of my world. I use it first to find good connectivity. The warmer the colors in the world, the closer I am to heavy-duty bandwidth. With AR on I can summon earlier historical views layered on top of whatever place I am looking at, a nifty trick I used extensively in Rome. There, a fully 3-D life-size intact Colosseum appeared synchronized over the ruins as I clambered through them. It’s an unforgettable experience. It also shows me comments virtually “nailed” to different spots in the city left by other visitors that are viewable only from that very place. I left a few notes in spots for others to discover as well. The app reveals all the underground service pipes and cables beneath the street, which I find nerdly fascinating. One of the weirder apps I found is one that will float the dollar value—in big red numbers—over everything you look at. Almost any subject I care about has an overlay app that displays it as an apparition. A fair amount of public art is now 3-D mirages[…]
Passwords are easily hacked or stolen. So what is a better solution than passwords? You, yourself. Your body is your password. Your digital identity is you. All the tools that VR is exploiting, all the ways it needs to capture your movements, to follow your eyes, to decipher your emotions, to encapsulate you as much as possible so you can be transported into another realm and believe you were there—all these interactions will be unique to you, and therefore proof of you. One of the recurring surprises in the field of biometrics—the science behind the sensors that track your body—is that almost everything that we can measure has a personally unique fingerprint. Your heartbeat is unique. Your gait when you walk is unique. Your typing rhythm on a keyboard is distinctive. What words you use most frequently. How you sit. Your blinks. Of course, your voice. When these are combined, they fuse into a metapattern that almost can’t be faked. Indeed, that’s how we identify people in the real world. If I were to meet you and was asked if we had met before, my subconscious mind would churn through a spectrum of subtle attributes—voice[…]
Microsoft’s vision for light field AR is to build the office of the future. Instead of workers sitting in a cubicle in front of a wall of monitor screens, they sit in an open office wearing HoloLenses and see a huge wall of virtual screens around them. Or they click to be teleported to a 3-D conference room with a dozen coworkers who live in different cities. Or they click to a training room where an instructor will walk them through a first-aid class, guiding their avatars through the proper procedures. “See this? Now you do it.” In most ways, the AR class will be superior to a real-world class.
Second Life is rebooting itself as a 3-D world in 2016, code-named Project Sansar.
To that end, High Fidelity is exploiting a neat trick. Taking advantage of the tracking abilities of cheap sensors, it can mirror the direction of your gaze in both worlds. Not just where you turn your head, but where you turn your eyes. Nano-small cameras buried inside the headset look back at your real eyes and transfer your exact gaze onto your avatar. That means that if someone is talking to your avatar, their eyes are staring at your eyes, and yours at theirs. Even if you move, requiring them to rotate their head, their eyes continue to lock onto yours. This eye contact is immensely magnetic. It stirs intimacy and radiates a felt presence.
Gaze tracking can be used in many ways. It can speed up screen navigation since you often look at something before your finger or mouse moves to confirm it.
The urinal in the men’s restroom was smarter than his computer because it knew he was there and would flush when he left, while his computer had no idea he was sitting in front of it all day.
Recently researchers at MIT have taught the eyes in our machines to detect human emotions.
Rosalind Picard and Rana el Kaliouby at the MIT Media Lab have developed software so attuned to subtle human emotions that they claim it can detect if someone is depressed.
One answer first premiered in the 2002 movie Minority Report. The director, Steven Spielberg, was eager to convey a plausible scenario for the year 2050, and so he convened a group of technologists and futurists to brainstorm the features of everyday life in 50 years. I was part of that invited group, and our job was to describe a future bedroom, or what music would sound like
and especially how you would work on a computer in 2050. There was general consensus that we’d use our whole bodies and all our senses to communicate with our machines. We’d add Africa by standing instead of sitting. We think differently on our feet. Maybe we’d add some Italy by talking to machines with our hands. One of our group, John Underkoffler, from the MIT Media Lab, was way ahead in this scenario and was developing a working prototype using hand motions to control data visualizations. Underkoffler’s system was woven into the film. The Tom Cruise character stands, raises his hands outfitted with a VR-like glove, and shuffles blocks of police surveillance data, as if conducting music. He mutters voice instructions as he dances with the data. Six years later, the Iron Man movies picked up this theme. Tony Stark, the protagonist, also uses his arms to wield virtual 3-D displays of data projected by computers.
It’s all in the interactive details. Dawns in the territory of Red Dead Redemption are glorious.
Cheap, abundant VR will be an experience factory. We’ll use it to visit environments too dangerous to risk in the flesh, such as war zones, deep seas, or volcanoes. Or we’ll use it for experiences we can’t easily get to as humans—to visit the inside of a stomach, the surface of a comet. Or to swap genders, or become a lobster. Or to cheaply experience something expensive, like a flyby of the Himalayas. But experiences are generally not sustainable. We enjoy travel experiences in part because we are only visiting briefly. VR, at least in the beginning, is likely to be an experience we dip in and out of. Its presence is so strong we may want it only in small, measured doses. But we have no limit on the kind of interacting we crave.
Games deliver the sweet feeling of being part of something large that is moving forward (the game’s narrative) while you still get to steer (the game’s play).
When you halt at a random homestead and chat with the cowhand, his responses are plausible because in his heart beats an AI. AI is seeping into VR and AR in other ways as well. It will be used to “see” and map the physical world you are really standing in so that it can transport you to a synthetic world. That includes mapping your physical body’s motion. An AI can watch you as you sit, stand, move around in, say, your office without the need of special tracking equipment, then mirror that in the virtual world. An AI can read your route through the synthetic environment and calculate interferences needed to herd you in certain directions, as a minor god might do.
Instead of getting A-pluses on daily quizzes, you level up. You get points for picking up litter or recycling. Ordinary life, not just virtual worlds, can be gameified.
Implicit in VR is the fact that everything—without exception—that occurs in VR is tracked. The virtual world is defined as a world under total surveillance, since nothing happens in VR without being tracked first. That makes it easy to gameify behavior—awarding points, or upping levels, or scoring powers, etc.—to keep it fun.
We can use the same interaction techniques that we use in VR. We’ll communicate with our appliances and vehicles using the same VR gestures. We can use the same gameifications to create incentives, to nudge participants in preferred directions in real life. You might go through your day racking up points for brushing your teeth properly, walking 10,000 steps, or driving safely.
The first seconds were awkward and embarrassing. But amazingly, within a few minutes I could kick with my arms and punch with my feet. Jeremy Bailenson, the Stanford professor who devised this experiment and uses VR as the ultimate sociological lab, discovered that it usually took a person only four minutes to completely rewire the feet/arm circuits in their brain. Our identities are far more fluid than we think.
In 2004, Udo Wachter, an IT manager in Germany, took the guts of a small digital compass and soldered it into a leather belt. He added 13 miniature piezoelectric vibrators, like the ones that vibrate your smartphone, and buried them along the length of the belt. Finally he hacked the electronic compass so that instead of displaying north on a circular screen, it vibrated different parts of the belt when it was clasped into a circle. The section of the circle “facing” north would always vibrate. When Udo put the belt on, he could feel northness on his waist. Within a week of always wearing the north belt, Udo had an unerring sensation of “north.” It was unconscious. He could point in the direction without thinking. He just knew. After several weeks he acquired an additional heightened sense of location, of where he was in a city, as if he could feel a map. Here the quantification from digital tracking was subsumed into a wholly new bodily sensation. In the long term this is the destiny of many of the constant streams of data flowing from our bodily sensors. They won’t be numbers; they will be new senses. These[…]
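The belt’s trick is simple enough to sketch in code. Here is a minimal version in Python, under stated assumptions the passage doesn’t supply: the thirteen motors are evenly spaced clockwise around the belt, with motor 0 at the wearer’s front (the actual wiring and spacing of Wachter’s belt may differ).

```python
def motor_for_north(heading_deg: float, n_motors: int = 13) -> int:
    """Return the index of the belt motor closest to true north.

    Assumes motors are evenly spaced clockwise around the belt,
    with motor 0 at the wearer's front. heading_deg is the compass
    heading the wearer faces (0 = north, 90 = east).
    """
    # North sits (-heading) degrees clockwise from the wearer's front.
    bearing_to_north = (-heading_deg) % 360.0
    step = 360.0 / n_motors          # angular gap between adjacent motors
    return round(bearing_to_north / step) % n_motors
```

Facing north buzzes the front motor; facing east, north lies over the wearer’s left shoulder, so a motor about three quarters of the way around the belt vibrates instead.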
First described by the computer scientist David Gelernter in 1999, a lifestream is more than just a data archive. Gelernter conceived of lifestreams as a new organizing interface for computers. Instead of an old desktop, a new chronological stream. Instead of a web browser, a stream browser. Gelernter and his graduate student Eric Freeman define the lifestream architecture like this: A lifestream is a time-ordered stream of documents that functions as a diary of your electronic life; every document you create and every document other people send you is stored in your lifestream. The tail of your stream contains documents from the past (starting with your electronic birth certificate). Moving away from the tail and toward the present, your stream contains more recent documents—pictures, correspondence, bills, movies, voice mail, software. Moving beyond the present and into the future, the stream contains documents you will need: reminders, calendar items, to-do lists. You
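Gelernter’s definition maps cleanly onto a data structure. Below is a minimal sketch in Python (the class and field names are mine, not Gelernter’s): a single list kept in time order, where everything at or before “now” is the diary and everything after it holds reminders and to-dos.

```python
import bisect
from dataclasses import dataclass, field

@dataclass(order=True)
class Doc:
    timestamp: float                   # seconds since the electronic birth certificate
    title: str = field(compare=False)  # ordering uses the timestamp only

class Lifestream:
    """A time-ordered stream of documents: the tail is the past,
    the head is the future (reminders, calendar items, to-dos)."""

    def __init__(self) -> None:
        self._docs: list[Doc] = []     # always kept sorted by timestamp

    def add(self, doc: Doc) -> None:
        bisect.insort(self._docs, doc)  # new documents slot into time order

    def past(self, now: float) -> list[Doc]:
        return [d for d in self._docs if d.timestamp <= now]

    def future(self, now: float) -> list[Doc]:
        return [d for d in self._docs if d.timestamp > now]
```

Every document, past or future, lives in the same stream; “browsing” is just slicing that stream at different moments in time.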
This list, instead, tallies the kind of tracking an average person might encounter on an ordinary day in the United States. Each example has been sourced officially or from a major publication.
Car movements—Every car since 2006 contains a chip that records your speed, braking, turns, mileage, and accidents whenever you start your car.
Highway traffic—Cameras on poles and sensors buried in highways record the location of cars by license plates and fast-track badges. Seventy million plates are recorded each month.
Ride-share taxis—Uber, Lyft, and other decentralized rides record your trips.
Long-distance travel—Your travel itinerary for air flights and trains is recorded.
Drone surveillance—Along U.S. borders, Predator drones monitor and record outdoor activities.
Postal mail—The exterior of every piece of paper mail you send or receive is scanned and digitized.
Utilities—Your power and water usage patterns are kept by utilities. (Garbage is not cataloged, yet.)
Cell phone location and call logs—Where, when, and who you call (metadata) is stored for months. Some phone carriers routinely store the contents of calls and messages for days to years.
Civic cameras—Cameras record your activities 24/7 in most city downtowns in[…]
[…] for patterns that reveal your personality, ethnicity, idiosyncrasies, politics, and preferences.
E-wallets and e-banks—Aggregators like Mint track your entire financial situation from loans, mortgages, and investments. Wallets like Square and PayPal track all purchases.
Photo face recognition—Facebook and Google can identify (tag) you in pictures taken by others posted on the web. The location of pictures can identify your location history.
Web activities—Web advertising cookies track your movements across the web. More than 80 percent of the top thousand sites employ web cookies that follow you wherever you go on the web. Through agreements with ad networks, even sites you did not visit can get information about your viewing history.
Social media—Can identify family members, friends, and friends of friends. Can identify and track your former employers and your current work mates. And how you spend your free time.
Search browsers—By default Google saves every question you’ve ever asked forever.
Streaming services—What movies (Netflix), music (Spotify), and video (YouTube) you consume and when, and what[…]
For eons and eons humans have lived in tribes and clans where every act was open and visible and there were no secrets. Our minds evolved with constant co-monitoring. Evolutionarily speaking, coveillance is our natural state. I believe that, contrary to our modern suspicions, there won’t be a backlash against a circular world in which we constantly track each other because humans have lived like this for a million years, and—if truly equitable and symmetrical—it can feel comfortable. That’s a big if. Obviously, the relation between me and Google, or between me and the government, is inherently not equitable or symmetrical. The very fact they have access to everyone’s lifestream, while I have access only to mine, means they have access to a qualitatively greater thing. But if some symmetry can be restored so that I can be part of holding their greater status to a greater accountability, and I benefit from their greater view, it might work. Put it this way: For sure cops will videotape citizens. That’s okay as long as citizens can videotape cops, and can get access to the cops’ videos, and share them to keep the more powerful accountable. That’s not[…]
The same AI at Google that can already describe what is going on in a random photo could (when it is cheap enough) digest the images from my Narrative shirt cam so that I can simply ask Narrative in plain English to find me the guy who was wearing a pirate hat at a party I attended a couple of years ago.
Gigabytes are on your phone. Terabytes were once unimaginably enormous, yet today I have three terabytes sitting on my desk. The next level up is peta. Petabytes are the new normal for companies. Exabytes are the current planetary scale. We’ll probably reach zetta in a few years. Yotta is the last scientific term for which we have an official measure of magnitude. Bigger than yotta is blank. Until now, any more than a yotta was a fantasy not deserving an official name. But we’ll be flinging around yottabytes in two decades or so.
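The ladder of prefixes this passage climbs is just powers of ten, three at a time. A quick sketch in Python, using decimal (SI) units as storage vendors do (operating systems often count in powers of two instead):

```python
# SI decimal prefixes for bytes, three orders of magnitude per step.
# As the passage notes, yotta (10**24) was the last officially named rung
# at the time of writing.
PREFIXES = ["kilo", "mega", "giga", "tera", "peta", "exa", "zetta", "yotta"]

def bytes_in(prefix: str) -> int:
    """Bytes in one <prefix>byte, e.g. bytes_in("tera") -> 10**12."""
    return 10 ** (3 * (PREFIXES.index(prefix) + 1))
```

So the three terabytes on the author’s desk come to 3 × 10¹² bytes, and a single yottabyte dwarfs a terabyte by a factor of a trillion.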
Navigating zillions of bits, in real time, will require entire new fields of mathematics, completely new categories of software algorithms, and radically innovative hardware. What wide-open opportunities!
The bulk of usable information today has been arranged in forms that only humans understand. Inside a snapshot taken on your phone is a long string of 50 million bits that are arranged in a way that makes sense to a human eye. This book you are reading is about 700,000 bits ordered into the structure of English grammar. But we are at our limits. Humans can no longer touch, let alone process, zillions of bits. To exploit the full potential of the zillionbytes of data that we are harvesting and creating, we need to be able to arrange bits in ways that machines and artificial intelligences can understand.
Data scientists call this stage “machine readable” information, because it is AIs and not humans who will do this work in the zillions. When you hear a term like “big data,” this is what it is about.
We are on our way to manufacturing 54 billion sensors every year by 2020. Spread around the globe, embedded in our cars, draped over our bodies, and watching us at home and on public streets, this web of sensors will generate another 300 zillionbytes of data in the next decade.
Even the most angelic technology can be weaponized, and will be. Criminals are some of the most creative innovators in the world. And crap constitutes 80 percent of everything. But importantly, these negative forms follow exactly the same general trends I’ve been outlining for the positive. The negative, too, will become increasingly cognified, remixed, and filtered. Crime, scams, warring, deceit, torture, corruption, spam, pollution, greed, and other hurt will all become more decentralized and data centered. Both virtue and vice are subject to the same great becoming and flowing forces. All the ways that startups and corporations need to adjust to ubiquitous sharing and constant screening apply to crime syndicates and hacker squads as well. Even the bad can’t escape these trends. Additionally, it may seem counterintuitive, but every harmful invention also provides a niche to create a brand-new never-seen-before good. Of course, that newly minted good can then be (and probably will be) abused by a corresponding bad idea. It may seem that this circle of new good provoking new bad which provokes new good which spawns new bad is just spinning us in place, only faster and faster. That would be true except for[…]
A good question is like the one Albert Einstein asked himself as a small boy—“What would you see if you were traveling on a beam of light?” That question launched the theory of relativity, E = mc², and the atomic age. A good question is not concerned with a correct answer. A good question cannot be answered immediately. A good question challenges existing answers. A good question is one you badly want answered once you hear it, but had no inkling you cared before it was asked. A good question creates new territory of thinking. A good question reframes its own answers. A good question is the seed of innovation in science, technology, art, politics, and business. A good question is a probe, a what-if scenario. A good question skirts on the edge of what is known and not known, neither silly nor obvious. A good question cannot be predicted. A good question will be the sign of an educated mind. A good question is one that generates many other good questions. A good question may be the last job a machine will learn to do. A good question is what humans are for.
Yet the greatest surprise brought by Wikipedia is that we still don’t know how far this power can go. We haven’t seen the limits of wiki-ized intelligence. Can it make textbooks, music, and movies? What about law and political governance?
I am convinced that the full impact of Wikipedia is still subterranean and that its mind-changing force is working subconsciously on the global millennial generation, providing them with an existence proof of a beneficial hive mind, and an appreciation for believing in the impossible.
As far as I can tell, the impossible things happening now are in every case due to the emergence of a new level of organization that did not exist before. These incredible eruptions are the result of large-scale collaboration, and massive real-time social interacting, which in turn are enabled by omnipresent instant connection between billions of people at a planetary scale.
The technium—the modern system of culture and technology—is accelerating the creation of new impossibilities by continuing to invent new social organizations.
On the contrary, I cherish a good wasting of time as a necessary precondition for creativity. More important, I believe the conflation of play and work, of thinking hard and thinking playfully, is one of the greatest things this new invention has done. Isn’t the whole idea that in a highly evolved advanced society work is over?
I’ve noticed a different approach to my thinking now that the hive mind has spread it extremely wide and loose. My thinking is more active, less contemplative. Rather than begin a question or hunch by ruminating aimlessly in my mind, nourished only by my ignorance, I start doing things. I immediately go. I go looking, searching, asking, questioning, reacting, leaping in, constructing notes, bookmarks, a trail—I start off making something mine. I don’t wait. Don’t have to wait. I act on ideas first now instead of thinking on them. For some folks, this is the worst of the net—the loss of contemplation. Others feel that all this frothy activity is simply stupid busywork, or spinning of wheels, or illusionary action. But compared with what? Compared with the passive consumption of TV? Or time spent lounging at a bar chatting? Or the slow trudge to a library only to find no answers to the hundreds of questions I have? Picture the thousands of millions of people online at this very minute. To my eye they are not wasting time with silly associative links, but are engaged in a more productive way of thinking—getting instant answers, researching, responding[…]
The number of scientific articles published each year has been accelerating even faster than this for decades. Over the last century the annual number of patent applications worldwide has risen in an exponential curve.
While the answer machine can expand answers infinitely, our time to form the next question is very limited. There is an asymmetry in the work needed to generate a good question versus the work needed to absorb an answer. Answers become cheap and questions become valuable—the inverse of the situation now. Pablo Picasso brilliantly anticipated this inversion in 1964 when he told the writer William Fifield, “Computers are useless. They only give you answers.”
Future people will envy us, wishing they could have witnessed the birth we saw. It was in these years that humans began animating inert objects with tiny bits of intelligence, weaving them into a cloud of machine intelligences and then linking billions of their own minds into this single supermind. This convergence will be recognized as the largest, most complex, and most surprising event on the planet up until this time. Braiding nerves out of glass, copper, and airy radio waves, our species began wiring up all regions, all processes, all people, all artifacts, all sensors, all facts and notions into a grand network of hitherto unimagined complexity. From this embryonic net was born a collaborative interface for our civilization, a sensing, cognitive apparatus with power that exceeded any previous invention.