AI regulations can’t be one-size-fits-all: Google legal chief

Kent Walker urges flexibility and shared responsibility in developing new rules

Kent Walker, president of global affairs and chief legal officer of Alphabet and Google, called for a “hub-and-spoke” approach to AI regulations.(Photo by Satoko Kawasaki) 

TOKYO — As governments and companies worldwide debate how best to regulate artificial intelligence, Google executive Kent Walker is calling for a hub-and-spoke approach that allows different sectors and agencies to deal with the technology in their own way.

Walker, president of global affairs and chief legal officer of Google and parent Alphabet, rejected the idea of a one-size-fits-all fix in an interview with Nikkei.

Edited excerpts follow.

Q: Rival AI developer OpenAI has proposed creating a regulatory body for AI. What are your thoughts on regulation in the field?

A: We have supported what we call a hub-and-spoke model. While it’s tempting to believe there is a one-size-fits-all solution, the issues in AI will be very different in banking versus medicine versus transportation versus retail. Each agency, every agency, will need to become an AI agency. There is a need for expertise that helps all the different agencies.

The notion of a single agency, a “department of AI,” is like having a “department of electricity.” There are so many different applications that it will be hard to actually wrestle with those different problems.

Because of the chatbots, people are focused there. But the real promise of AI is far beyond chatbots. It’s progress in the way we do science and technology.

I’m not sure if you’ve heard about our AlphaFold team. They took on the folding of 200 million proteins, each of which would have taken a biology PhD student three or four years, and did them all in a few weeks.

Now we have 1.4 million researchers around the world using that technology.

So we have to understand, what do we mean by these frontier technologies that would be regulated specially? Because regulation is always complex, and there’s a risk that if you do it too soon, or too inflexibly, you can slow innovation.

Google has launched its own AI chatbot, Bard. © AP

There are various products we have not released publicly, because we thought the risk of abuse was too great.

We had a tool for lip reading, which is very helpful for people who are hard of hearing or have speech impediments, but could also be used by authoritarian governments for surveillance of people in the street.

We’ve been very cautious about making that publicly available, because it could be misused by authoritarian societies.

Q: Some have advocated for a framework that does what the International Atomic Energy Agency does for nuclear technology.

A: Internationally, we have supported the notion of joint research into these technologies, and the evolution of best practices. So, more like the IPCC [Intergovernmental Panel on Climate Change], the climate organization at the United Nations, where you’re sharing information and accords.

I don’t want to be critical of other approaches [like the IAEA] — but they’re probably less well suited to this technology.

Q: There is also talk of building government frameworks for screening and licensing AI technology.

A: In the United States, we have industry groups … that look at products and certify that they are safe and appropriate for the market. There’s regulation as well that builds on those frameworks. So if you buy a toaster and it has the Underwriters Laboratories seal of approval, you can be confident that the toaster is not going to explode.

As we start to evolve the technological side, we also have to evolve the regulatory and the government side — research into safety, into explainability. “Why did the system predict the thing it predicted?”

That makes it easier for humans to say, “No, the machines made a mistake,” or, “Yes, I understand why it’s coming to that conclusion.”

[AI has] been created in the private sector, not by governments. It is a multi-use technology, one being used for all kinds of purposes … and it is a very fast-evolving technology, which makes it challenging to regulate.

Flexibility will be very important.

Q: With AI, developers may be in one country and users in another. There’s the question of responsibility for when something bad happens.

A: We need frameworks for understanding what are the responsibilities of companies, to make sure their products are of high quality and serving the right purpose. I think those frameworks will be easiest in specific cases.

Japan will develop a large group of companies that are doing specific applications, using AI to make your economy more productive.

So it should be a shared responsibility to make sure we get this right.

Q: The European Union’s proposed AI act would impose tough penalties and disclosure requirements on corporations.

A: There is a competition in the technology and a competition in the regulation of the technology.

I would say the race should not be for the first regulations, but for the best regulations.

Europe has the GDPR [General Data Protection Regulation]. Japan and the United States have talked about “data transfers with trust” and cross-border privacy regulations, a lighter-touch way of giving people confidence in the security of their data without slowing down innovation. So I think having a variety of regulatory models is actually a good thing.

Q: The U.S. Justice Department has sued Google for alleged anti-competitive practices in the search market.

A: The trial is underway, so I can’t say very much. But we have said people use Google because they want to, not because they have to.

There are lots of different ways people are finding information, whether it’s TikTok or Reddit or Amazon or many other services. And it’s never been easier to switch. If you want to use a different search engine, if you want to use a different browser, competition is a click away.

