(NEW YORK) — OpenAI CEO Sam Altman, whose company developed the widely used AI conversation program ChatGPT, warned federal lawmakers Tuesday that artificial intelligence could pose significant risks to the world if the technology goes wrong.

“We understand that people are anxious about how it could change the way we live. We are, too. But we believe that we can and must work together to identify and manage the potential downsides, so that we can all enjoy the tremendous upsides,” Altman told a Senate committee on tech and privacy.

Altman also acknowledged the risks in a March interview with ABC News’ Rebecca Jarvis, saying he was “a little bit scared” of the type of technology his own company is developing. Despite the dangers, Altman said AI can also be “the greatest technology humanity has yet developed.”

Start Here host Brad Mielke spoke to Gizmodo technology reporter Thomas Germain, who broke down Altman’s testimony, discussing the risks and potential challenges with proposing and implementing regulations on the technology.

BRAD MIELKE: Thomas, can you just help me break down what happened in this hearing?

THOMAS GERMAIN: Yeah, it was a little unusual. Sam Altman went in front of Congress and basically begged lawmakers to protect the public from the technology that he is creating, which can seem a little weird if you take it at face value. But one of the interesting things here is that it’s actually not unusual for the tech industry to ask to be regulated. That’s exactly what we’ve seen on privacy issues.

Some of the biggest proponents of privacy laws are Microsoft and Google and Meta, in fact, because it gives tech companies a huge advantage if there are laws that they can comply with. That way, if something goes wrong, they can just say, ‘Oh well, we were following the rules. It’s the government’s fault for not passing better regulation.’

It was an interesting hearing. And one of the things that was unusual about it was how friendly and positive everything was, for the most part. You know, if you’ve seen any of the hearings with other tech CEOs, they’re usually pretty combative. But Sam Altman managed to buddy up with all of these lawmakers, and they agree on one thing: AI should be regulated. But exactly how? No one really seems to know. There were some very vague proposals thrown around, but it really seems like an open question what AI regulation would even mean.

MIELKE: It felt like the Spider-Man meme, where everyone’s pointing at each other being like, “You, you, me?” Because this will sound like a dumb question, although maybe it’s not, since you just said that. What would you even regulate? When it comes to AI, what are the things that are even on the table right now?

GERMAIN: Yeah, that’s a really good question. And the fact that there’s no good answer says a lot about the state of the technology, right? We have no idea what this technology is capable of. I’ve spoken to people who are heading up companies that are at the forefront of building this technology. And if you ask them how far it’s going to go, they really have no idea.

We don’t know what the hard technical limits of these tools are. We don’t really know whether they can replace all of human labor like we’ve been told we’re supposed to be so afraid of. But there are a couple of things that Congress could do. I think the most important thing when it comes to AI regulation is transparency, right? The public, or at least regulators, need to know what data sets AI models are being trained on, because there can be baked-in problems, right? If you train an AI on a data set that includes all of the internet, for example, you’re going to get a lot of racism and hate speech and other unpleasant things in there, and that’ll be spat back out.

MIELKE: Oh, because the way AI works is like, “Hey, use this model, use all these books to teach yourself how to talk, chatbot.” And yet, if you’re not using the right books, or only a very narrow set of books, all of a sudden it’s trickier.

GERMAIN: Yeah, that’s one of the interesting things about this kind of technology, right? I think people have this image of computers as really smart and better than people. But really, it’s garbage in, garbage out. Whatever you feed an AI, it’s going to spit a similar thing back out. And we’ve seen that with other technologies, where they just repeat the biases that are baked in.
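To make the “garbage in, garbage out” point concrete, here is a minimal sketch in Python. It uses a toy Markov-chain text generator (a far cruder cousin of models like ChatGPT) and an invented two-sentence corpus; both are hypothetical stand-ins, not anything OpenAI actually does. The generator can only ever recombine what its training data contains:

```python
import random
from collections import defaultdict

# A crude Markov-chain text generator: it learns nothing beyond
# which word follows which in its training corpus.
def train(corpus):
    model = defaultdict(list)
    for sentence in corpus:
        words = sentence.split()
        for current, following in zip(words, words[1:]):
            model[current].append(following)
    return model

def generate(model, start, max_words=10):
    word, output = start, [start]
    for _ in range(max_words):
        if word not in model:
            break  # no observed continuation; stop
        word = random.choice(model[word])  # sample from what was seen in training
        output.append(word)
    return " ".join(output)

# Hypothetical training data: whatever appears here, useful or ugly,
# is exactly what the model can reproduce -- nothing more.
corpus = [
    "the internet is full of useful answers",
    "the internet is full of hateful junk",
]
model = train(corpus)
print(generate(model, "the"))  # may echo either phrase; it only remixes its inputs
```

Scale that corpus up to “all of the internet” and the same property holds: whatever is in the data, good or ugly, is what comes back out.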

Another big one is copyright issues. We really have no good answer as to whether an artist whose work goes into training one of these models should be compensated, or who owns the output of a tool like ChatGPT. Congress really needs to answer that question, and if it doesn’t, the body that ends up answering it will be the Supreme Court. And our system is not designed for the court to be writing laws. That’s Congress’ job. So they really need to step up there.

And the last thing I think is picking out individual areas where there are particularly high risks. And this is what AI regulation proposals in the EU have been like. There are specific rules about issues like hiring decisions, banking, health care, the military, police, for example. How can and should cops be allowed to use this technology? That’s something we’ve seen with facial recognition. It’s gone totally off the rails.

MIELKE: I was about to ask for an example because it sounds like when you talk about hiring, it would be like sifting through all these resumes and figuring out who the best person is for your job. And yet, maybe you’re leaving out like a group of people that should not have been dismissed like that.

GERMAIN: That’s exactly right. Or maybe, you know, the person who designs the AI has a preference for male employees, so when the AI goes through and picks the most qualified candidate, it ends up doing that. Then it’s not even about any one person’s bias, right? If you look at the history of employment, men tend to be paid at higher rates and are more likely to be hired for particular kinds of jobs. If you train an AI on a data set of all the existing employees in the world, it’ll end up replicating the problems that already exist in our society.
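As a small, hedged illustration of that replication effect, here is a Python sketch using scikit-learn and an invented, deliberately skewed “historical hires” dataset; the numbers are made up for illustration and reflect no real hiring system. The model is never told to prefer men, but it learns the pattern from the labels alone:

```python
from sklearn.linear_model import LogisticRegression

# Hypothetical historical hiring data: [years_experience, is_male].
# In this invented history, male candidates were hired at every
# experience level, while women were hired only at the higher ones.
X = [
    [2, 1], [3, 1], [4, 1], [5, 1],  # male candidates
    [2, 0], [3, 0], [4, 0], [5, 0],  # female candidates
]
y = [1, 1, 1, 1, 0, 0, 1, 1]         # past hiring decisions (1 = hired)

model = LogisticRegression().fit(X, y)

# Two candidates identical except for gender:
male_candidate = [3, 1]
female_candidate = [3, 0]
print(model.predict_proba([male_candidate])[0][1])    # higher predicted hire probability
print(model.predict_proba([female_candidate])[0][1])  # lower predicted hire probability
```

The model just fits the history it was given, so two otherwise identical candidates get different hire probabilities purely because the training labels encoded that bias.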

MIELKE: So then, you say, here are some things that the government could regulate. Will they regulate it? Because I’ve got to say, listening to the lawmakers here, and we’ve talked before about some of these old senators, it doesn’t sound like they’re particularly keen on getting involved in this.

GERMAIN: No, we’ve been talking about regulating privacy, for example, for the better part of a decade, and we’re no closer to a law today than we were a couple of years ago. And I think that speaks to a broader issue, which is that a lot of the problems of AI are the problems of society. This technology could make the rich richer. It could make the poor poorer. It could replicate a lot of society’s worst impulses, from misinformation to discrimination, any number of issues. And Congress hasn’t addressed those problems as they stand today, let alone what the technology will look like in the future. So in terms of regulating, I would start with the problems we already have, as opposed to some future hypothetical about what this tech will be like. Will Congress do that? I’m not hopeful. They don’t seem to be able to get anything done, let alone wrap their minds around an entirely new technology.

MIELKE: So Congress is sort of like, “Would you like to take the lead on this?” And the head of OpenAI is like, “No, no, I have a job.” All right, Thomas Germain from Gizmodo, thank you so much.

GERMAIN: Thanks for having me on.

Copyright © 2023, ABC Audio. All rights reserved.
