Why Google is struggling with AI ethics

Google began formulating its “AI Principles” back in 2017. The document summarizes rules the company intends to follow when developing AI – for example, not to reproduce bias, to take social and cultural contexts into account, and to observe high security standards.

Critics, however, have repeatedly accused the company of keeping these principles too vague for practical use. “We have set out rules in this code of ethics so that we apply the same guidelines each time and can track our progress over time,” said Charina Chou, Global Policy Lead for Emerging Technologies at Google. Chou was part of the team that developed the Google AI Principles.

According to Chou, the concrete ethics efforts focus on data and models, among other things. The team is working to improve the quality of its data sets with various tools. A tool called “Crowdsource”, for example, is meant to ensure that data sets are collected more diversely, including from other parts of the world. Volunteers can help by contributing images or improving labels. One of the resulting data sets is called “Open Images Extended” and, according to Chou, is “much more representative of different countries than previous versions”.
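
To make the idea of “more representative of different countries” concrete, here is a minimal sketch of how one might audit the geographic coverage of an image data set. The file name and column names are hypothetical – Open Images Extended ships its own metadata format, which would need to be adapted here.

```python
# Minimal sketch: auditing geographic coverage of an image data set.
# "image_metadata.csv" and its "country" column are hypothetical stand-ins
# for whatever metadata the actual data set provides.
import csv
from collections import Counter

def country_distribution(metadata_path: str) -> Counter:
    """Count how many labeled images come from each country."""
    counts = Counter()
    with open(metadata_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            counts[row.get("country", "unknown")] += 1
    return counts

if __name__ == "__main__":
    dist = country_distribution("image_metadata.csv")
    total = sum(dist.values())
    for country, n in dist.most_common(10):
        print(f"{country}: {n} images ({n / total:.1%})")
```

A report like this makes over- and under-represented regions visible, which is the kind of signal a crowdsourcing effort would then try to correct.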

More recent approaches work with synthetic data, which is particularly helpful where data is scarce – for example, with rare languages or accents. “How can we expand the data we have and create more data from which a model can learn?” asks Chou. Google is just starting to experiment with this for languages and images.
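
The article does not say which synthetic-data method Google uses. For illustration only, below is a sketch of one generic text-augmentation technique (random deletion and swapping, in the style of “easy data augmentation”) that creates noisy copies of scarce training sentences – not Google’s actual approach.

```python
# Minimal sketch of synthetic text data via simple perturbations.
# This is a generic augmentation technique, not the method Google uses,
# which the article does not detail.
import random

def augment(sentence: str, n_variants: int = 3, p_delete: float = 0.1) -> list[str]:
    """Create noisy copies of a sentence to enlarge a small training set."""
    words = sentence.split()
    variants = []
    for _ in range(n_variants):
        # randomly drop words; fall back to the original if all are dropped
        new = [w for w in words if random.random() > p_delete] or words[:]
        i, j = random.randrange(len(new)), random.randrange(len(new))
        new[i], new[j] = new[j], new[i]  # swap two random words
        variants.append(" ".join(new))
    return variants

print(augment("the quick brown fox jumps over the lazy dog"))
```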

Charina Chou is Global Policy Lead for Emerging Technologies at Google and was part of the core team that developed the company’s “AI Principles”.

As for the models themselves, one measure to reduce bias is to build in special rules. Google Translate, for instance, often exhibits gender bias: doctors tend to be translated as male and nurses as female. The rules are intended to minimize these prejudices and deliver appropriate translations.
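
As a rough illustration of such a rule, the sketch below surfaces both gendered translations for an ambiguous source term instead of silently picking one. The word lists and lookup table are made up for this example; Google’s production system detects gender-ambiguous queries with learned models rather than a hand-written table.

```python
# Minimal sketch of a rule-based layer that returns both gendered
# translations instead of a single biased default. The term lists below
# are illustrative, not Google Translate's actual data.
AMBIGUOUS_EN_TERMS = {"doctor", "nurse", "engineer"}

GENDERED_DE = {  # tiny illustrative English -> German table
    "doctor": ("Arzt", "Ärztin"),
    "nurse": ("Krankenpfleger", "Krankenschwester"),
    "engineer": ("Ingenieur", "Ingenieurin"),
}

def translate_with_gender_rule(text: str) -> list[str]:
    """Return both gender variants when the source term is ambiguous."""
    term = text.strip().lower()
    if term in AMBIGUOUS_EN_TERMS:
        masculine, feminine = GENDERED_DE[term]
        return [f"{masculine} (masculine)", f"{feminine} (feminine)"]
    return [term]  # otherwise, fall through to the normal translation path

print(translate_with_gender_rule("doctor"))  # ['Arzt (masculine)', 'Ärztin (feminine)']
```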

At this year’s Google I/O, the company announced MUM, short for “Multitask Unified Model”. It is trained on 75 languages and is meant to answer even complex questions in Google Search – a sign that the company is thinking about bringing powerful language models, so-called LLMs (Large Language Models), into search.

“In the future, we simply have to be able to have real conversations, just like we do with real people in everyday life. But there are still a lot of hurdles,” explains Chou. This is precisely why the AI guidelines exist: to continuously evaluate new technologies and ensure that the risks do not outweigh the benefits.

The principles also guide the company in deciding how openly it handles its models and data sets. In the case of face recognition, for example, Google decided not to provide an open API – at this point in time, the potential for abuse is too great.

And what about the Timnit Gebru case? “Timnit did a really good job highlighting key points like the importance of women and underrepresented groups. We need diversity. She made these points very clear.” Chou did not want to answer why Gebru was dismissed even though she did a good job.

(bsc)
