
The story of artificial intelligence as told by its inventors


Welcome to I Was There When, the new oral history project from the In Machines We Trust podcast. It features stories of how breakthroughs in artificial intelligence and computing happened, as told by the people who witnessed them. In this first episode, we meet Joseph Atick, who helped create the first commercially viable facial recognition system.

Credits:

This episode was produced by Jennifer Strong, Anthony Green, and Emma Cillekens with help from Lindsay Muscato. It was edited by Michael Reilly and Mat Honan, and mixed by Garret Lang, with sound design and music by Jacob Gorski.

Full transcript:

[TR ID]

Jennifer: I'm Jennifer Strong, host of In Machines We Trust.

I want to tell you about something we’ve been working on for a little while behind the scenes here.

It's called I Was There When.

It’s an oral history project that presents stories of breakthroughs in artificial intelligence and computing…as told by the people who witnessed them.

JOSEPH ATICK: And as I walked into the room, it spotted my face, extracted it from the background, and said, "I see Joseph." And that was the moment the hair on the back of my neck stood up… It felt like something had happened. We were witnesses.

Jennifer: We're kicking things off with a man who helped create the first commercially viable facial recognition system… back in the '90s.

[IMWT ID]

I'm Joseph Atick. Today, I'm the head of ID4Africa, a humanitarian organization focused on giving people in Africa a digital identity so they can access services and exercise their rights. But I wasn't always in the humanitarian field. After my PhD in mathematics, my colleagues and I made some fundamental breakthroughs that led to the first commercially viable face recognition. That's why people refer to me as a founding father of the facial recognition and biometrics industry. The algorithm for how the human brain might recognize familiar faces became apparent while we were doing research, mathematical research, while I was at the Institute for Advanced Study in Princeton. But we had no idea how to implement such a thing.

It was a long period of months of programming and failure, programming and failure. And one night, early in the morning actually, we had just finalized the algorithm. We submitted the source code for compilation in order to get running code. And we stepped out; I went out to use the bathroom. And when I came back into the room, the source code had been compiled by the machine and returned. And usually after it compiles, it runs automatically, and as I walked in, it spotted a human moving into the room, spotted my face, extracted it from the background, and said, "I see Joseph." And that was the moment the hair on the back of my neck stood up; it felt like something had happened. We were witnesses. And I started calling in the other people who were still in the lab, and each of them would come into the room.

And it would say, "I see Norman. I see Paul. I see Joseph." And we took turns running around the room just to see how many of us it could spot. It was a moment of truth, where several years of work finally led to a breakthrough, even though in theory no additional breakthrough was required. Just the fact that we had figured out how to implement it, and finally saw it working, was very rewarding and satisfying. We then built a team that was more of a development team than a research team, focused on putting all of those capabilities onto a PC platform. And that was the birth, really the birth, of commercial face recognition technology, I would put it, in 1994.

My concern started very quickly. I saw a future with nowhere to hide, as cameras became ubiquitous, computers became commoditized, and their processing power got better and better. And so in 1998, I lobbied the industry and said, we need to establish principles of responsible use. And I felt good for a while, because I felt we had gotten it right. I felt we had put in place a responsible-use code to be followed by whoever implemented the technology. However, that code has not stood the test of time. The reason is that we did not anticipate the emergence of social media. Basically, at the time we established the code in 1998, we said the most important component of a face recognition system was the tagged database of known people. We said, if I'm not in the database, the system is blind.
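The idea that a system is "blind" to anyone not in its tagged database follows from how identification works: a probe face is matched against an enrolled gallery, and if no enrolled entry is close enough, no identity can be returned. A minimal sketch of that one-to-many matching idea (the names, embeddings, and threshold here are all hypothetical, not Atick's actual system):

```python
import math

# Hypothetical gallery: a tagged database mapping known people to
# face embeddings (tiny 3-dimensional vectors for illustration).
GALLERY = {
    "Joseph": [0.9, 0.1, 0.3],
    "Norman": [0.2, 0.8, 0.5],
    "Paul":   [0.4, 0.4, 0.9],
}

def distance(a, b):
    """Euclidean distance between two embeddings."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(probe, threshold=0.5):
    """1:N identification: return the nearest enrolled identity,
    or 'unknown' if no gallery entry is within the threshold."""
    name, best = min(
        ((n, distance(probe, e)) for n, e in GALLERY.items()),
        key=lambda t: t[1],
    )
    return name if best <= threshold else "unknown"

print(identify([0.88, 0.12, 0.28]))  # near Joseph's enrolled embedding
print(identify([0.0, 0.0, 0.0]))     # not enrolled: the system is "blind"
```

The point Atick makes about social media is that the hard part in the 1990s was building the gallery at all; today, billions of self-tagged photos effectively enroll the world.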

It was difficult to build that database. At most we could build a thousand, 10,000, 15,000, 20,000 entries, because each image had to be scanned and entered by hand. Fast-forward to the world we live in today: we have let the beast out of the bag by feeding it billions of faces, and by helping it by tagging ourselves. We're now in a world where any hope of controlling face recognition, and of requiring everyone to be accountable for its use, is slim. And meanwhile, there is no shortage of known faces on the internet, because they can simply be scraped, as some companies have recently done. And so I began to panic in 2011, and I wrote an op-ed saying it was time to press the panic button, because the world is heading in a direction where face recognition will be ubiquitous and faces will be everywhere in databases.

And at the time people said I was an alarmist, but today they realize it's exactly what's happening. And so where do we go from here? I've been lobbying for legislation. I've been pushing for legal frameworks that make it a liability to use someone's face without their consent. So it's no longer a technological issue. We cannot contain this powerful technology through technological means. There has to be some sort of legal framework. We cannot allow the technology to get too far ahead of us. Ahead of our values, ahead of what we think is acceptable.

The issue of consent remains one of the most difficult and challenging matters when dealing with this technology. Simply giving someone notice is not enough. For me, consent has to be informed. People have to understand the consequences of what it means. Not just to say, well, we put up a sign, and that was enough. We told people, and if they didn't want to, they could have gone elsewhere.

I also find it very easy to be seduced by flashy technological features that might give us a short-term advantage in our lives. And then down the line, we realize we've given up something too precious. By that point in time, we have desensitized the population and reached a point where we cannot pull back. That's what worries me. I worry about the fact that face recognition, through the work of Facebook and Apple and others, works. And I'm not saying all of it is illegitimate. A lot of it is legitimate.

We've arrived at a point where the general public may have become blasé, may have become desensitized, because they see it everywhere. And maybe in 20 years, you'll step out of your house and no longer have the expectation that you won't be recognized by the dozens of people you cross along the way. I think at that point the public will be very alarmed, because the media will start reporting on cases where people were stalked. People were targeted, people were even selected in the street based on their net worth, and kidnapped. I think that's a big responsibility on our hands.

And so I think the issue of consent will continue to haunt the industry. And until that question gets addressed, it will probably remain unresolved. I think we need to establish limits on what can be done with this technology.

My career has also taught me that being too far ahead is not a good thing, because face recognition, as we know it today, was actually invented in 1994. But most people think it was invented by Facebook and the machine learning algorithms that are now spreading all over the world. Basically, at one point, I had to step down as CEO because I was curtailing the use of technology that my company was going to promote, out of fear of the negative consequences for humanity. So I feel scientists need to have the courage to project into the future and see the consequences of their work. I'm not saying they should stop making breakthroughs. No, you should go full force, make more breakthroughs, but we should also be honest with ourselves, and basically alert the world and policymakers that a breakthrough has pluses and minuses. And therefore, in using that technology, we need some sort of guidance and frameworks to make sure it's channeled toward positive applications and not negative ones.

Jennifer: I Was There When… is an oral history project featuring the stories of people who have witnessed or created breakthroughs in artificial intelligence and computing.

Do you have a story to tell? Know someone who does? Email us at podcasts@technologyreview.com.

[MIDROLL]

[CREDITS]

Jennifer: This episode was taped in New York City in December of 2020 and produced with help from Anthony Green and Emma Cillekens. It was edited by Michael Reilly and Mat Honan. Our mix engineer is Garret Lang… with sound design and music by Jacob Gorski.

Thanks for listening, I’m Jennifer Strong.

[TR ID]


