
Artificial intelligence is reinventing what computers are


Fall 2021: season of pumpkins, pecan pies, and cool new phones. Every year, right on cue, Apple, Samsung, Google, and others drop their latest releases. These fixtures of the consumer tech calendar no longer inspire the surprise and wonder of those heady early days. But behind all the marketing glitz, something remarkable is going on.

Google’s latest offering, the Pixel 6, is the first phone with a separate chip dedicated to artificial intelligence that sits alongside its standard processor. And the chip that has powered the iPhone for the past couple of years contains what Apple calls a “neural engine,” also dedicated to AI. Both chips are better suited to the kinds of computations involved in training and running machine-learning models on our devices, such as the AI that powers the camera. Almost without our noticing, artificial intelligence has become part of our daily lives. And it is changing how we think about computing.
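To make that concrete, here is a minimal sketch, in Python, of the kind of on-device inference such chips accelerate. It uses TensorFlow Lite, a common runtime for running models on phones; the model file name is a placeholder, and on a real handset it is the runtime, not this script, that decides whether to hand the work to a neural engine or other AI accelerator.

```python
# A minimal sketch of on-device inference (the model file name is a placeholder).
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="camera_model.tflite")
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Feed one (dummy) camera frame and read back the model's prediction.
frame = np.zeros(inp["shape"], dtype=inp["dtype"])
interpreter.set_tensor(inp["index"], frame)
interpreter.invoke()
print(interpreter.get_tensor(out["index"]))
```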

What does that mean? Well, computers haven’t changed much in 40 or 50 years. They’re smaller and faster, but they’re still boxes with processors that run instructions written by humans. Artificial intelligence is changing that on at least three fronts: how computers are made, how they’re programmed, and how they’re used. Ultimately, it will change what they are for.

“The core of computing is changing from number-crunching to decision-making,” says Pradeep Dubey, director of the Intel Parallel Computing Lab. Or, in the words of MIT CSAIL director Daniela Rus, AI is freeing computers from their boxes.

More speed, less precision

The first change concerns how computers, and the chips that control them, are made. The gains in traditional computing came as machines got faster at carrying out one calculation after another. For decades the world benefited from chip speed-ups that arrived with clockwork regularity as chipmakers kept pace with Moore’s Law.

But the deep-learning models that make today’s AI applications work require a different approach: they need vast numbers of less precise calculations to be carried out all at the same time. That calls for a new kind of chip: one that can move data around as quickly as possible, making sure it’s available when and where it’s needed. When deep learning exploded onto the scene a decade or so ago, there were already specialized computer chips that were very good at this: graphics processing units, or GPUs, which are designed to redraw an entire screen of pixels dozens of times a second.
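A toy comparison in plain numpy makes the difference in workload concrete; the array shapes below are arbitrary stand-ins for one neural-network layer, not any particular model.

```python
# Traditional style: precise calculations performed strictly one after another.
import numpy as np

total = 0.0
for x in range(1_000):
    total += x * 1.0000001        # double precision, inherently sequential

# Deep-learning style: huge numbers of lower-precision multiply-adds expressed
# as one matrix product, exactly the shape of work a GPU can run in parallel.
activations = np.random.rand(256, 512).astype(np.float16)   # a batch of inputs
weights = np.random.rand(512, 1024).astype(np.float16)      # one layer's weights
outputs = activations @ weights   # ~134 million multiply-adds in a single call
```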

Anything can become a computer. In fact, most household items, from toothbrushes to light switches to doorbells, already come in a smart version.

Now chipmakers such as Intel, Arm, and Nvidia, which supplied many of the first GPUs, are pivoting to make hardware tailored specifically for AI. Google and Facebook are also forcing their way into this industry for the first time, in a race to gain an AI edge through hardware.

For example, the chip inside the Pixel 6 is a new mobile version of Google’s Tensor Processing Unit, or TPU. Unlike traditional chips, which are geared toward ultra-fast, high-precision calculations, TPUs are designed for the high-volume but low-precision computations required by neural networks. Google has been using these chips in-house since 2015: they process people’s photos and natural-language search queries. Google’s sister company DeepMind uses them to train its AI systems.
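For a feel of what “low precision” means here, the short numpy sketch below (the numbers are arbitrary examples, not anything TPU-specific) stores the same values as 16-bit and 64-bit floats: the 16-bit copy keeps only about three significant digits but needs a quarter of the memory, a trade-off neural networks tolerate remarkably well.

```python
# Low precision in practice: 16-bit floats versus standard 64-bit floats.
import numpy as np

weights64 = np.array([0.123456789, 1.000001, 3.14159265])
weights16 = weights64.astype(np.float16)     # the reduced precision AI chips favor

print(weights64)                             # full 64-bit values
print(weights16)                             # only ~3 significant digits survive
print(weights64.nbytes, weights16.nbytes)    # 24 bytes vs. 6 bytes of storage
```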

In the past two years, Google has made TPUs available to other companies, and these chips — along with similar chips being developed by others — have become the default choice within the world’s data centers.

AI is even helping to design its own computing infrastructure. In 2020, Google used a reinforcement-learning algorithm, a type of artificial intelligence that learns how to solve a task through trial and error, to design the layout of a new TPU. The AI eventually came up with strange new designs that no human would have thought of, but they worked. This kind of AI could one day develop better, more efficient chips.
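As a rough illustration of learning through trial and error, the toy loop below scatters a few hypothetical circuit blocks on a grid and keeps any random change that shortens the total wiring. Google's system used a learned reinforcement-learning policy rather than this kind of blind search, so treat it only as a sketch of the underlying idea.

```python
# Toy trial-and-error layout search (not Google's method): place invented blocks
# on a grid and keep changes that reduce total wire length between them.
import random

GRID = 8                                    # an 8x8 grid of candidate positions
BLOCKS = ["cpu", "cache", "dsp", "io"]      # hypothetical circuit blocks
WIRES = [("cpu", "cache"), ("cpu", "dsp"), ("dsp", "io")]  # hypothetical connections

def wire_length(layout):
    """Sum of Manhattan distances between connected blocks."""
    return sum(abs(layout[a][0] - layout[b][0]) + abs(layout[a][1] - layout[b][1])
               for a, b in WIRES)

layout = {b: (random.randrange(GRID), random.randrange(GRID)) for b in BLOCKS}
best = wire_length(layout)
for _ in range(10_000):                     # trial...
    candidate = dict(layout)
    candidate[random.choice(BLOCKS)] = (random.randrange(GRID), random.randrange(GRID))
    if wire_length(candidate) < best:       # ...and error: keep only improvements
        layout, best = candidate, wire_length(candidate)

print(layout, best)
```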

Show, don’t tell

The second change concerns how computers are told what to do. “For the past 40 years we have programmed computers; for the next 40 we will be training them,” says Chris Bishop, head of Microsoft Research in the UK.

Traditionally, to get a computer to do something like recognize speech or identify objects in an image, programmers first had to write explicit rules for the computer to follow.

With machine learning, programmers no longer write the rules. Instead, they create a neural network that learns these rules on its own. It’s a fundamentally different way of thinking.
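The contrast fits in a few lines of Python. The first function is the old approach, a rule a human wrote by hand; the rest trains a single artificial neuron, using numpy on an invented toy task, to learn roughly the same rule from labeled examples instead.

```python
import numpy as np

# Programming: a human writes the rule explicitly.
def is_bright_rule(pixels):
    return pixels.mean() > 0.5                       # threshold chosen by hand

# Training: the rule is learned from labeled examples.
rng = np.random.default_rng(0)
images = rng.random((1000, 16))                      # 1,000 tiny fake "images"
labels = (images.mean(axis=1) > 0.5).astype(float)   # the behavior we want learned

w, b = np.zeros(16), 0.0                             # learnable parameters
for _ in range(500):                                 # gradient descent on a logistic loss
    pred = 1 / (1 + np.exp(-(images @ w + b)))       # a single sigmoid "neuron"
    grad = pred - labels
    w -= 0.1 * images.T @ grad / len(images)
    b -= 0.1 * grad.mean()

# The learned weights now encode roughly the same rule nobody wrote down.
test = rng.random(16)
print(is_bright_rule(test), 1 / (1 + np.exp(-(test @ w + b))) > 0.5)
```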


