When the machine doesn't recognize you

Algorithms are invading more and more areas of our lives. But their transparency and oversight leave a lot to be desired.

In the past, machines rarely cared who we were. Now, more and more often, their algorithms determine what we are and are not allowed to do. Personal characteristics such as name, origin, appearance and gender suddenly play a major role. That is why diverse groups should be involved in developing these systems - not just the typical Silicon Valley demographic.

Imagine you come home and want to get into your apartment, but the door to the building stays closed. Or: you visit an airport restroom, and when you try to wash your hands afterwards, you stop short. The faucet stays dry. Why is this happening to you? Because you are a black woman, and the facial recognition behind the camera that would have opened the door or started the faucet failed to recognize you because of those two characteristics.

These two examples are comparatively harmless. But they show how technology based on algorithms and artificial intelligence (AI) can exclude people from everyday life. This technology is being developed for people all over the world; it is truly globalized. Yet the fact that it is usually developed by a small, homogeneous group is becoming a growing problem.

For a long time, machines did not really care who was using them. A Japanese television worked just as well in Europe or Africa. But machines are no longer merely supposed to be operated; to a certain extent, they are supposed to think along with us. They are supposed to recognize faces and voices and decide whether a door opens for someone or whether the tap runs. AI is also embedded in services: algorithms can decide who gets insurance, a loan, or a job, or calculate the likelihood that a convicted criminal will reoffend.

Good resolutions

It is undisputed - at least in the Western world - that AI and algorithms cannot be deployed without ethics. Researchers in the field grapple with fundamental questions, such as whether a self-driving car that cannot avoid a collision should hit a child or an elderly person. Or whether the energy consumed in training large AI models is justified. Another important part of their job is to scrutinize the quality and selection of the data used to train AI systems. Large tech companies like Google now have their own guidelines and departments meant to prevent their algorithms from causing harm.

In December, Google provided an example of how such efforts can fail spectacularly. Timnit Gebru, co-lead of Google's Ethical Artificial Intelligence team, was fired; the exact circumstances have not been conclusively established. Together with other scientists, Gebru wanted to publish a paper on the ethical problems of so-called natural language processing - a field of AI that deals with language and that Google uses, for example, in its text search. Google's internal review process blocked the paper, whereupon the situation escalated.

What is certain is that Gebru had to vacate her post after she criticized the corporate culture in an email to colleagues and threatened to resign if Google did not meet certain demands. In particular, she wanted to know who had blocked her paper.

Since then, a heated discussion has been raging in the AI scene. The conflict highlights the self-regulation of tech companies and the problems that come with it: a lack of diversity, dependence on corporate research funding, a lack of transparency, and inadequate regulation.

Harsh accusations damage credibility

Timnit Gebru is a renowned researcher who speaks up forcefully about the injustices and prejudices built into AI. She is also a co-founder of "Black in AI", an organization for black AI scientists. Before joining Google, she co-authored a study showing that commercial facial recognition systems misidentify black women far more often than white men, because that group is underrepresented in the data sets used to train the programs.

Google thus knew that Gebru is, on the one hand, a loud advocate for the interests of minorities and, on the other, a figure who stands for credibility. The manner of Gebru's departure has proven to be an own goal for the company. Criticism from the research community working on AI ethics has been severe. And Gebru's outraged comments, in which she also accused Google of racism, triggered a reaction, especially on Twitter, reminiscent of the #MeToo movement: black researchers in the tech industry chimed in and shared their experiences of discrimination inside companies.

In a sensitive area like AI, lip service to diversity and ethical behavior is not enough. That is a big challenge for companies. Flawed data sets that reinforce prejudice can easily slip into algorithms unnoticed. The more diverse a development team is - the more people of different ethnicities, genders, social backgrounds and age groups it includes - the higher the probability that such flaws will be discovered.

There are ways to check whether an algorithm is problematic. Suppose, for example, it decides whether someone gets a loan. If you change nothing but the first name - female instead of male - and the algorithm reaches a different decision, the training data carried a gender bias. If a facial recognition program has little trouble recognizing a white face but great trouble with a black one, that suggests its training data was racially biased. Both checks are sketched below.
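To make these two checks concrete, here is a minimal sketch in Python. It is illustrative only: `credit_score` is a deliberately biased toy model standing in for a real system, and all names, features and thresholds are invented for the example.

```python
# Two simple bias probes. Everything here is hypothetical:
# `credit_score` stands in for a deployed loan-approval model,
# and the data is invented for illustration.

def credit_score(applicant: dict) -> bool:
    """Toy stand-in for a trained model, deliberately biased so the
    probe below has something to find. A real audit would query the
    deployed model instead."""
    score = applicant["income"] / max(applicant["debt"], 1)
    if applicant["first_name"] in {"Anna", "Maria"}:  # crude gender proxy
        score *= 0.8  # the built-in prejudice
    return score > 2.0

def flips_under_name_swap(base: dict, name_a: str, name_b: str) -> bool:
    """Counterfactual test: does changing only the first name flip
    the decision? If so, the model encodes a gender bias."""
    a = {**base, "first_name": name_a}
    b = {**base, "first_name": name_b}
    return credit_score(a) != credit_score(b)

def error_rate_by_group(predictions, labels, groups) -> dict:
    """Disaggregated evaluation: error rate per demographic group.
    A much higher rate for one group points to skewed training data,
    as in the facial recognition studies."""
    rates = {}
    for group in set(groups):
        errors = [p != y for p, y, g in zip(predictions, labels, groups) if g == group]
        rates[group] = sum(errors) / len(errors)
    return rates

applicant = {"income": 80_000, "debt": 35_000, "first_name": ""}
print(flips_under_name_swap(applicant, "Thomas", "Anna"))  # True -> biased
```

In practice, an auditor would run thousands of such swapped pairs through the production model and compute error rates on a labeled benchmark broken down by demographic group; any flipped decision or large gap between groups is evidence of the biases described above.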

Diversity efforts versus dominance over research

The American tech industry is very white, male, politically inclined towards the Democrats, and very well educated. For a few years now, the large companies in particular have been trying to shed this image of homogeneity and have published diversity reports since 2014. The trend has been upward in recent years, but at Google, Microsoft and Facebook the share of black employees is still below six percent. Women remain underrepresented as well; at Google they make up almost a third of the workforce. Silicon Valley companies are also strikingly uniform in their political views: a large share of donations in the last presidential election went to Democrats. And many employees graduated from the same top American universities.

One could ask whether it is even the job of companies to look after the ethical aspects of their AI. But internal research matters: at the moment, it is the only way to examine the algorithms up close. External researchers usually do not know exactly how these systems work or what data they were trained on, and only insiders can test the AI for possible problems.

The big tech companies are also draining talent from academia while at the same time pouring money into university AI research, which many universities gladly accept. Taken as a whole, this development could skew science toward results that benefit the sponsors. Comparisons have been drawn with the era when the tobacco industry funded health research.

Various studies also document a steady flow of AI researchers from academic institutions to tech companies. According to a recently published study, between 2004 and 2018 Google alone hired more tenure-track professors away from American universities than any other company.

External auditors and financial incentives

There are various proposals for breaking this dominance, or at least supplementing corporate self-regulation, especially on questions of ethics. One possibility would be to have algorithms examined by independent external auditors. In addition, financial rewards could create an incentive to identify built-in biases; in cybersecurity, this principle is already established under the name "bug bounties" for uncovering security flaws. These suggestions come from a paper by more than 60 authors, including researchers from Google and Intel.

So far, policymakers have issued few rules for algorithms. How important research into the ethical aspects of AI can be is shown by an example involving Timnit Gebru herself: the facial recognition study she worked on contributed to politicians increasingly questioning the technology and, in some places, even banning it. While the examples of facial recognition mentioned above are comparatively harmless, the same technology takes on a very different power as a comprehensive surveillance tool.

When such systems are used to identify criminal suspects and the error rate for black women is much higher than for white men, that is a serious problem. After the large "Black Lives Matter" demonstrations in the USA last summer, during which the police apparently used facial recognition, IBM decided to withdraw from the business. Amazon followed with a one-year moratorium, and Microsoft stopped selling the technology to American police forces.

These debates may seem far away and somewhere in the future. That is a fallacy. They are of great importance: they concern companies developing technologies that will be used all over the world and will therefore reach us sooner or later. In Switzerland, such technology is already in use in sensitive areas such as police work and criminal justice - without most of us being aware of it.

A broad societal discussion must now begin about how we want to oversee AI and algorithms. Politicians must take responsibility and, as in other industries, set rules for the use of these technologies. Until then, tech companies must take their self-regulation seriously and do everything in their power to be transparent and to create an environment in which groups as heterogeneous as possible can research the ethical aspects of AI independently.