Rise of the Robots: Technology and the Threat of a Jobless Future

The ever-growing mountain of data is increasingly viewed as a resource that can be mined for value—both now and in the future. Just as extractive industries like oil and gas continuously benefit from technical advances, it’s a good bet that accelerating computer power and improved software and analysis techniques will enable corporations to unearth new insights that lead directly to increased profitability. Indeed, that expectation on the part of investors is probably what gives data-intensive companies like Facebook such enormous valuations.

Machine learning—a technique in which a computer churns through data and, in effect, writes its own program based on the statistical relationships it discovers—is one of the most effective means of extracting all that value. Machine learning generally involves two steps: an algorithm is first trained on known data and is then unleashed to solve similar problems with new information. One ubiquitous use of machine learning is in email spam filters. The algorithm might be trained by processing millions of emails that have been pre-categorized as either spam or not. No one sits down and directly programs the system to recognize every conceivable typographic butchery of the word “Viagra.” Instead, the software figures this out by itself. The result is an application that can automatically identify the vast majority of junk email and can also continuously improve and adapt over time as more examples become available. Machine learning algorithms based on the same basic principles recommend books at Amazon.com, movies at Netflix, and potential dates at Match.com.
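
To make the spam-filter example concrete, here is a minimal sketch of that train-then-classify workflow using Python’s scikit-learn library. The handful of example emails and labels is invented for illustration; a real filter would be trained on millions of pre-categorized messages, as described above.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    # Step 1: train on messages already labeled spam (1) or not spam (0).
    emails = ["cheap viagra best price", "meeting moved to 3pm",
              "v1agra discount act now", "lunch tomorrow?"]
    labels = [1, 0, 1, 0]

    # Character n-grams let the model generalize across typographic
    # variants like "v1agra" instead of memorizing exact words.
    vectorizer = CountVectorizer(analyzer="char_wb", ngram_range=(3, 5))
    X = vectorizer.fit_transform(emails)
    classifier = MultinomialNB().fit(X, labels)

    # Step 2: unleash the trained model on new, unseen messages.
    new_mail = ["buy viaagra cheap", "see you at the meeting"]
    print(classifier.predict(vectorizer.transform(new_mail)))

Retraining periodically on newly labeled messages is what lets such a filter keep adapting as spammers change tactics.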

One of the most dramatic demonstrations of the power of machine learning came when Google introduced its online language translation tool. Its algorithms used what might be called a “Rosetta Stone” approach to the problem by analyzing and comparing millions of pages of text that had already been translated into multiple languages. Google’s development team began by focusing on official documents prepared by the United Nations and then extended their effort to the Web, where the company’s search engine was able to locate a multitude of examples that became fodder for their voracious self-learning algorithms. The sheer number of documents used to train the system dwarfed anything that had come before. Franz Och, the computer scientist who led the effort, noted that the team had built “very, very large language models, much larger than anyone has ever built in the history of mankind.”[8]

In 2005, Google entered its system in the annual machine translation competition held by the National Institute of Standards and Technology, an agency within the US Commerce Department that publishes measurement standards. Google’s machine learning algorithms were able to easily outperform the competition—which typically employed language and linguistic experts who attempted to actively program their translation systems to wade through the mire of conflicting and inconsistent grammatical rules that characterize languages. The essential lesson here is that, when datasets are large enough, the knowledge encapsulated in all that data will often trump the efforts of even the best programmers. Google’s system is not yet competitive with the efforts of skilled human translators, but it offers bidirectional translation between more than five hundred language pairs. That represents a genuinely disruptive advance in communication capability: for the first time in human history, nearly anyone can freely and instantly obtain a rough translation of virtually any document in any language.

While there are a number of different approaches to machine learning, one of the most powerful, and fascinating, techniques involves the use of artificial neural networks—or systems that are designed using the same fundamental operating principles as the human brain. The brain contains as many as 100 billion neuron cells—and many trillions of connections between them—but it’s possible to build powerful learning systems using far more rudimentary configurations of simulated neurons.

An individual neuron operates somewhat like the plastic pop-up toys that are popular with very young children. When the child pushes the button, a colorful figure pops up—perhaps a cartoon character or an animal. Press the button gently and nothing happens. Press it a bit harder and still nothing. But exceed a certain force threshold, and up pops the figure. A neuron works in essentially the same fashion, except that the activation button can be pressed by a combination of multiple inputs.
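
In code, that threshold behavior is just a weighted sum compared against a cutoff. Here is a minimal sketch in Python; the weights and threshold are invented values, with the weights playing the role of the adjustable finger pressures introduced in the next paragraph.

    def neuron(inputs, weights, threshold):
        """Fire (return 1) only if the combined 'button pressure' is strong enough."""
        pressure = sum(x * w for x, w in zip(inputs, weights))
        return 1 if pressure >= threshold else 0

    # One press alone does nothing; two together exceed the threshold.
    print(neuron([1, 0], [0.4, 0.4], threshold=0.6))  # -> 0
    print(neuron([1, 1], [0.4, 0.4], threshold=0.6))  # -> 1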

To visualize a neural network, imagine a Rube Goldberg–like machine in which a number of these pop-up toys are arranged on the floor in rows. Three mechanical fingers are poised over each toy’s activation button. Rather than having a figure pop up, the toys are configured so that when a toy is activated it causes several of the mechanical fingers in the next row of toys to press down on their own buttons. The key to the neural network’s ability to learn is that the force with which each finger presses down on its respective button can be adjusted.

To train the neural network, you feed known data into the first row of neurons. For example, imagine inputting visual images of handwritten letters. The input data causes some of the mechanical fingers to press down with varying force depending on their calibration. That, in turn, causes some of the neurons to activate and press down on buttons in the next row. The output—or answer—is gathered from the last row of neurons. In this case, the output will be a binary code identifying the letter of the alphabet that corresponds to the input image. Initially, the answer will be wrong, but our machine also includes a comparison and feedback mechanism. The output is compared to the known correct answer, and this automatically results in adjustments to the mechanical fingers in each row, and that, in turn, alters the sequence of activating neurons. As the network is trained with thousands of known images, and the force with which the fingers press down is continuously recalibrated, the network will get better and better at producing the correct answer. When things reach the point where the answers are no longer improving, the network has effectively been trained.
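
A perceptron-style update rule is perhaps the simplest concrete version of that compare-and-recalibrate loop. The sketch below trains a single neuron on a toy task (learning the logical OR function) rather than handwritten letters, and the update rule is far cruder than the backpropagation used in real networks, but the feedback principle is the same.

    # Known examples: inputs paired with the correct answer (logical OR).
    data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
    w = [0.0, 0.0]  # the adjustable "finger pressures"
    b = 0.0         # bias: how easily the button fires overall

    for epoch in range(20):
        for (x1, x2), target in data:
            output = 1 if w[0] * x1 + w[1] * x2 + b >= 0 else 0
            error = target - output  # compare with the known correct answer
            # Recalibrate each pressure in proportion to the error.
            w[0] += 0.1 * error * x1
            w[1] += 0.1 * error * x2
            b += 0.1 * error

    print(w, b)  # the final trained calibrations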

This is, in essence, the way that neural networks can be used to recognize images or spoken words, translate languages, or perform a variety of other tasks. The result is a program—essentially a list of all the final calibrations for the mechanical fingers poised over the neuron activation buttons—that can be used to configure new neural networks, all capable of automatically generating answers from new data.
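
Continuing the toy sketch above, that “program” really is nothing more than the final numbers, which can be written out and used to configure a fresh network. The file name and the specific values below are illustrative.

    import json

    # Illustrative final calibrations from a training run like the one above.
    w, b = [0.1, 0.1], -0.1
    with open("trained_network.json", "w") as f:
        json.dump({"weights": w, "bias": b}, f)

    # A brand-new network is "configured" simply by loading those numbers.
    with open("trained_network.json") as f:
        restored = json.load(f)
    print(restored)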

Artificial neural networks were first conceived and experimented with in the late 1940s and have long been used to recognize patterns. However, the last few years have seen a number of dramatic breakthroughs that have resulted in significant advances in performance, especially when multiple layers of neurons are employed—a technology that has come to be called “deep learning.” Deep learning systems already power the speech recognition capability in Apple’s Siri and are poised to accelerate progress in a broad range of applications that rely on pattern analysis and recognition. A deep learning neural network designed in 2011 by scientists at the University of Lugano in Switzerland, for example, was able to correctly identify more than 99 percent of the images in a large database of traffic signs—a level of accuracy that exceeded that of human experts who competed against the system. Researchers at Facebook have likewise developed an experimental system—consisting of nine levels of artificial neurons—that can correctly determine whether two photographs are of the same person 97.25 percent of the time, even if lighting conditions and orientation of the faces vary. That compares with 97.53 percent accuracy for human observers.[9]
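
“Deep” refers simply to stacking many such layers, each feeding the next. Here is a minimal forward pass with NumPy; the layer sizes and random weights are invented for illustration, and a real system would learn its weights through training, as described earlier, rather than drawing them at random.

    import numpy as np

    rng = np.random.default_rng(0)
    # Three stacked layers: each weight matrix is one full bank of
    # adjustable "finger pressures" connecting one row to the next.
    W1 = rng.normal(size=(64, 784))   # 784 input pixels -> 64 neurons
    W2 = rng.normal(size=(32, 64))
    W3 = rng.normal(size=(10, 32))    # final row: one score per answer

    def relu(z):
        return np.maximum(0.0, z)  # the threshold-like activation

    def forward(pixels):
        h1 = relu(W1 @ pixels)
        h2 = relu(W2 @ h1)
        return W3 @ h2  # the last row's outputs

    print(forward(rng.random(784)).argmax())  # the network's "answer"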

Geoffrey Hinton of the University of Toronto, one of the leading researchers in the field, notes that deep learning technology “scales beautifully. Basically you just need to keep making it bigger and faster, and it will get better.”[10] In other words, even without accounting for likely future improvements in their design, machine learning systems powered by deep learning networks are virtually certain to see continued dramatic progress simply as a result of Moore’s Law.

Big data and the smart algorithms that accompany it are having an immediate impact on workplaces and careers as employers, particularly large corporations, increasingly track a myriad of metrics and statistics regarding the work and social interactions of their employees. Companies are relying ever more on so-called people analytics as a way to hire, fire, evaluate, and promote workers. The amount of data being collected on individuals and the work they engage in is staggering. Some companies capture every keystroke typed by every employee. Emails, phone records, web searches, database queries and accesses to files, entry and exit from facilities, and untold numbers of other types of data may also be collected—with or without the knowledge of workers.[11] While the initial purpose of all this data collection and analysis is typically more effective management and assessment of employee performance, it could eventually be put to other uses—including the development of software to automate much of the work being performed.

The big data revolution is likely to have two especially important implications for knowledge-based occupations. First, the data captured may, in many cases, lead to direct automation of specific tasks and jobs. Just as a person might study the historical record and then practice completing specific tasks in order to learn a new job, smart algorithms will often succeed using essentially the same approach. Consider, for example, that in November 2013 Google applied for a patent on a system designed to automatically generate personalized email and social media responses.[12] The system works by first analyzing a person’s past emails and social media interactions. Based on what it learns, it then automatically writes responses to future emails, tweets, or blog posts, employing the person’s usual writing style and tone. It’s easy to imagine such a system eventually being used to automate a great deal of routine communication.
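
As a loose illustration of the underlying idea (emphatically not Google’s patented method), even a crude word-pair model built from someone’s past messages can echo their phrasing. Everything below, including the sample messages, is invented.

    import random
    from collections import defaultdict

    past_messages = [
        "thanks for the update, sounds good to me",
        "sounds good, let's sync up tomorrow",
        "thanks, let's circle back on this tomorrow",
    ]

    # Record which word tends to follow which in this person's writing.
    chain = defaultdict(list)
    for message in past_messages:
        words = message.split()
        for current, following in zip(words, words[1:]):
            chain[current].append(following)

    def draft_reply(start="thanks,", max_words=8):
        word, reply = start, [start]
        while word in chain and len(reply) < max_words:
            word = random.choice(chain[word])
            reply.append(word)
        return " ".join(reply)

    print(draft_reply())  # e.g., "thanks, let's circle back on this tomorrow"

A production system would use far richer models, but the underlying move is the same: learn statistical regularities from a person’s past writing, then generate new text that follows them.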

Google’s automated cars, which it first demonstrated in 2011, likewise provide important insight into the path that data-driven automation is likely to follow. Google didn’t set out to replicate the way a person drives—in fact, that would have been beyond the current capabilities of artificial intelligence. Rather, it simplified the challenge by designing a powerful data processing system and then putting it on wheels. Google’s cars navigate by relying on precision location awareness via GPS together with vast amounts of extremely detailed mapping data. The cars also, of course, have radars, laser range finders, and other systems that provide a continuous stream of real-time information and allow the car to adapt to new situations, such as a pedestrian stepping off the curb. Driving may not be a white-collar profession, but the general strategy used by Google can be extended into a great many other areas: First, employ massive amounts of historical data in order to create a general “map” that will allow algorithms to navigate their way through routine tasks. Next, incorporate self-learning systems that can adapt to variations or unpredictable situations. The result is likely to be smart software that can perform many knowledge-based jobs with a high degree of reliability.

The second, and probably more significant, impact on knowledge jobs will occur as a result of the way big data changes organizations and the methods by which they are managed. Big data and predictive algorithms have the potential to transform the nature and number of knowledge-based jobs in organizations and industries across the board. The predictions that can be extracted from data will increasingly be used to substitute for human qualities such as experience and judgment. As top managers increasingly employ data-driven decision making powered by automated tools, there will be an ever-shrinking need for an extensive human analytic and management infrastructure. Whereas today there is a team of knowledge workers who collect information and present analysis to multiple levels of management, eventually there may be a single manager and a powerful algorithm. Organizations are likely to flatten. Layers of middle management will evaporate, and many of the jobs now performed by both clerical workers and skilled analysts will simply disappear.

WorkFusion, a start-up company based in the New York City area, offers an especially vivid example of the dramatic impact that white-collar automation is likely to have on organizations. The company offers large corporations an intelligent software platform that, through a combination of crowdsourcing and automation, almost completely manages the execution of projects that were once highly labor-intensive.

The WorkFusion software initially analyzes the project to determine which tasks can be directly automated, which can be crowdsourced, and which must be performed by in-house professionals. It can then automatically post job listings to websites like Elance or Craigslist and manage the recruitment and selection of qualified freelance workers. Once the workers are on board, the software allocates tasks and evaluates performance. It does this in part by asking freelancers questions to which it already knows the answers, as an ongoing test of the workers’ accuracy. It tracks productivity metrics like typing speed, and automatically matches tasks with the capabilities of individuals. If a particular person is unable to complete a given assignment, the system will automatically escalate that task to someone with the necessary skills.
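
A hypothetical sketch of that kind of quality-control logic is shown below; all names, task IDs, and thresholds are invented, and WorkFusion’s actual implementation is not public.

    # Seeded "gold" tasks whose answers the system already knows.
    GOLD_ANSWERS = {"task-17": "B", "task-42": "D"}

    def worker_accuracy(answers):
        """Fraction of seeded gold tasks this worker got right."""
        scored = [answers[t] == correct
                  for t, correct in GOLD_ANSWERS.items() if t in answers]
        return sum(scored) / len(scored) if scored else None

    def route_task(task, workers, threshold=0.9):
        # Send the task to the most accurate qualified worker;
        # if no one clears the bar, escalate it.
        qualified = {name: acc for name, acc in workers.items()
                     if acc is not None and acc >= threshold}
        if not qualified:
            return "escalate-to-specialist"
        return max(qualified, key=qualified.get)

    workers = {"freelancer_a": 0.95, "freelancer_b": 0.60}
    print(route_task("task-99", workers))  # -> "freelancer_a"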

While the software almost completely automates management of the project and dramatically reduces the need for in-house employees, the approach does, of course, create new opportunities for freelance workers. The story doesn’t end there, however. As the workers complete their assigned tasks, WorkFusion’s machine learning algorithms continuously look for opportunities to further automate the process. In other words, even as the freelancers work under the direction of the system, they are simultaneously generating the training data that will gradually lead to their replacement with full automation.
