
BY ERIK BRYNJOLFSSON AND ANDREW MCAFEE | JUL 18, 2017

Machine learning systems have been around since the 1950s, so why are we suddenly seeing breakthroughs in so many diverse areas? Three factors are at play: enormously increased data, significantly improved algorithms, and substantially more-powerful computer hardware. Over the past two decades (depending on the application) data availability has increased as much as 1,000-fold, key algorithms have improved 10-fold to 100-fold, and hardware speed has improved by at least 100-fold. According to MIT’s Tomaso Poggio, these improvements can combine to generate gains of up to a millionfold in applications such as the pedestrian-detection vision systems used in self-driving cars.
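A back-of-the-envelope sketch makes Poggio's figure plausible. Assuming the three gains compound multiplicatively (a simplification, not a claim from the article) and taking conservative values from the ranges quoted above:

```python
# Rough combined improvement from the three factors named above.
# The multipliers come from the article's quoted ranges; treating them
# as independent and multiplicative is a simplifying assumption.
data_gain = 1_000      # up to 1,000-fold more data
algorithm_gain = 10    # lower end of the 10- to 100-fold range
hardware_gain = 100    # at least 100-fold faster hardware

combined = data_gain * algorithm_gain * hardware_gain
print(f"{combined:,}")  # 1,000,000
```

Even with the algorithmic gain at the bottom of its range, the product reaches a millionfold; taking the upper ends pushes it well beyond.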


Let’s look at each factor in turn.
Data. Music CDs, movie DVDs, and web pages have been adding to the world’s stock of digitally encoded information for decades, but over the past few years the rate of creation has exploded. Signals from sensors in smartphones and industrial equipment, digital photos and videos, a nonstop global torrent of social media, and many other sources combine to put us in a totally unprecedented era of data abundance. Ninety percent of the digital data in the world today has been created in the past two years alone. With the burgeoning internet of things (IoT) promising to connect billions of new devices and their data streams, it’s a sure bet we’ll have far more digital data to work with in the coming decade.

Algorithms. The data deluge is important not only because it makes existing algorithms more effective, but also because it encourages, supports, and accelerates the development of better algorithms. The algorithms and approaches that now dominate the discipline — such as deep supervised learning and reinforcement learning — share a vital basic property: Their results improve as the amount of training data they’re given increases. The performance of an algorithm usually levels off at some point, after which feeding it more data has little or no effect. But that does not yet appear to be the case for many of the algorithms being widely used today. At the same time, new algorithms are transferring the learning from one application to another, making it possible to learn from fewer and fewer examples.
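The qualitative difference described above can be visualized with a toy model. The functional forms below are assumptions chosen purely for illustration, not measurements from the article: one curve saturates quickly (extra data barely helps), while a power-law curve, typical of how deep learning results are often reported, keeps improving as training data grows.

```python
import math

def plateau(n):
    # Accuracy that saturates: beyond a few thousand examples,
    # more data has almost no effect.
    return 0.85 * (1 - math.exp(-n / 1_000))

def keeps_improving(n):
    # Power-law error decay: still improving even at very large n.
    return 1 - 0.5 * n ** -0.15

for n in (1_000, 100_000, 10_000_000):
    print(f"{n:>12,}  plateau={plateau(n):.3f}  power-law={keeps_improving(n):.3f}")
```

The plateauing curve is effectively flat between 100,000 and 10,000,000 examples, while the power-law curve keeps gaining, which is why the data deluge favors the algorithms that dominate the field today.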

Computer hardware. Moore’s Law — that integrated circuit capability steadily doubles every 18 to 24 months — celebrated its 50th anniversary in 2015, at which time it was still going strong. Some have commented recently that it’s running up against the limits of physics and so will slow down in the years to come; and indeed, clock speed for standard microprocessors has leveled off. But by a fortuitous coincidence, a related type of computer chip, called a graphics processing unit, or GPU, turns out to be very effective when applied to the types of calculations needed for neural nets. In fact, speedups of 10X are not uncommon when neural nets are moved from traditional central processing units to GPUs. GPUs were initially developed to rapidly display graphics for applications such as computer gaming, which provided scale economies and drove down unit costs, but an increasing number of them are now used for neural nets. As neural net applications become even more common, several companies have developed specialized chips optimized for this workload, including Google’s tensor processing unit, or TPU. According to Shane Legg, a cofounder of Google DeepMind, a training run that takes one day on a single TPU device would have taken a quarter of a million years on an 80486 from 1990. This can generate about another 10-fold improvement.
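The scale of Legg's comparison is easy to check. The conversion below is a back-of-the-envelope calculation (not in the article), turning "one day on a TPU vs. a quarter of a million years on a 1990-era 80486" into a single speedup factor:

```python
# Implied speedup from the DeepMind comparison quoted above.
days_per_year = 365.25                      # average, including leap years
old_runtime_days = 250_000 * days_per_year  # quarter-million years on an 80486
new_runtime_days = 1                        # one day on a single TPU

speedup = old_runtime_days / new_runtime_days
print(f"{speedup:,.0f}x")  # on the order of 90 million-fold
```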
These improvements have a synergistic effect on one another. For instance, the better hardware makes it easier for engineers to test and develop better algorithms and, of course, enables machines to crunch much larger data sets in a reasonable amount of time. Some of the applications being solved today — converting sound waves from speech into meaningful text, for example — would take literally centuries to run on 1990s-vintage hardware. Successes motivate more bright researchers to go into the field and more investors and executives to fund further work.
Further amplifying these synergies are two additional technologies: global networking and the cloud. The mobile internet can now deliver digital technologies virtually anywhere on the planet, connecting billions of potential customers to AI breakthroughs. Think about the intelligent assistants you’re probably already using on your smartphone, the digital knowledge bases that large companies now share globally, and the crowdsourced systems, like Wikipedia and Kaggle, whose main users and contributors are smart people outside your organization.
Perhaps even more important is the potential of cloud-based AI and robotics to accelerate learning and diffusion. Consider a robot in one location that struggles with a task, such as recognizing an object. Once it has mastered that task, it will be able to upload that knowledge to the cloud and share it with other robots that use a compatible knowledge-representation system (Rethink Robotics is working on such a platform). In this way robots, working independently, can effectively gather data from hundreds, thousands, and eventually millions of eyes and ears. By combining their information in a single system, they can learn vastly more rapidly and share their insights almost instantaneously.

Source: https://hbr.org
