THE RISKS OF LANGUAGE MANIPULATION
Ethics, understood not merely as a part but as a premise of artificial intelligence itself, has made Timnit Gebru one of the most esteemed researchers in the international scientific community. Her work on bias prevention (prejudices and preconceptions encoded in AI systems) gained wide recognition when she identified several biases that penalize Black people, for example in facial recognition systems used by American police.
More recently, the scientist highlighted, in a paper later reported on by MIT Technology Review, the risks posed by large language models, which are fundamental to the business of Google, the company where she worked. Because these systems are trained on enormous volumes of text gathered from the internet, discriminatory and violent content is difficult to detect and track. This creates the risk of legitimizing deviant forms of communication that reinforce established prejudices while ignoring the natural evolution of language, frequently driven by social movements (e.g., Black Lives Matter).
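As a purely illustrative sketch (not Gebru's method; the word list and documents below are invented), the following shows why naive blocklist filtering of web-scale training text is a blunt instrument: it cannot tell hateful usage apart from reclaimed or quoted usage, so it also discards the discourse of the very communities it is meant to protect.

```python
# Hypothetical illustration: a blocklist meant to drop abusive pages from a
# training corpus also discards legitimate community discourse.
# "slur1" and "slur2" are stand-ins for real offensive terms.

BLOCKLIST = {"slur1", "slur2"}

def keep_document(text):
    """Naive filter: drop any document containing a blocklisted token."""
    tokens = {t.strip(".,!?").lower() for t in text.split()}
    return BLOCKLIST.isdisjoint(tokens)

docs = [
    "An abusive rant containing slur1 aimed at a minority group.",
    "A community essay reclaiming the word slur1 as an act of pride.",
    "A neutral article about supermarket promotions.",
]
kept = [d for d in docs if keep_document(d)]
# Both the abusive rant AND the reclaiming essay are dropped: the filter
# cannot distinguish hateful use from reclaimed use, so only the neutral
# article survives.
print(kept)
```

The filter removes two of the three documents, including the legitimate one, which is exactly the homogenizing effect the article describes.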
Gebru also raises the issue of the environmental consequences of the computational capacity these models require, even for a single training run. For example, training Google's BERT language model just once (and in practice training is repeated continuously) emits as much CO2 as a round-trip flight between San Francisco and New York.
Her views generated the controversy that led to her exit from Google on December 2. Now, after protests from many researchers around the world, Congress is also asking Google for explanations, not least because this is not the first such case: a year ago it was Meredith Whittaker who left, after her protests against military contracts and gender discrimination.
THE DISPUTED PRIMACY BETWEEN ETHICS AND TECHNOLOGY
The international resonance of the case should prompt, in my opinion, reflections that go well beyond the news event itself, the accusations of Timnit Gebru, and the defenses mounted by Google CEO Sundar Pichai. The implications of what has happened concern all of us: our way of life and work, the responsibilities that come with it, and the consequences for those who will take our place.
The impact of advanced technologies on GDP stands at 1.7% in Europe, 2.2% in China, and 3.3% in the United States; in Italy it is only 1.2% (*). Our reading of the facts must therefore be forward-looking, because our country's low level of digital development limits our awareness of the nature and extent of the problems.
The ethics of artificial intelligence is far from a settled subject: there is no shared agreement on its definition, its premises, or its limits. The speed of technological development raises enormous concerns about the methods and principles needed to make advanced systems transparent, comprehensible, and correct.
The European Commission’s guidelines for trustworthy artificial intelligence, updated in September by a study from the European Parliament’s research service, offer concrete guidance for developing lawful, ethical AI systems grounded in an anthropocentric perspective.
This scenario includes regulatory initiatives by the US government; by Europe, which seeks autonomy and identity (see the GDPR for data protection); and by China, which aims to export its competing models. We can expect a no-holds-barred fight.
RETAIL, CONSUMERS AND CIVIL SOCIETY
In the near future, information derived from algorithms trained on huge amounts of data originating from Google and the other web giants will condition our work and our status as consumers and members of civil society. This is a simple but important fact, one that should awaken a shared sense of responsibility and raise our level of alertness.
In retail, data control and analysis, increasingly carried out through AI, are the nearest frontier of business and innovation. Taking a clear position on transparency and ethics is essential to the credibility of a brand.
Consider the kiosk at the supermarket entrance, customer assistance handled by a virtual assistant, or a promotional engine driven by artificial intelligence. Think, then, of the problems that incorrect information, linguistic misunderstandings, or mistakenly absorbed preconceptions could cause: discriminating against customers, excluding some from a promotional campaign or, more generally, from marketing communication and other forms of direct or indirect relationship.
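To make the risk concrete, here is a minimal, hypothetical sketch of how a retailer might audit a promotion-targeting system for unequal treatment across customer groups, using the standard demographic-parity measure (selection rate per group and the gap between groups). Everything here (the function, the group labels, the toy data) is invented for illustration, not drawn from any real system.

```python
# Hypothetical audit: measure whether a promotion engine selects customers
# from different groups at very different rates (demographic parity gap).

def demographic_parity_gap(decisions):
    """Return the selection rate per group and the max-min gap.

    `decisions` is a list of (group_label, selected) pairs, where `selected`
    is True if the customer was included in the promotional campaign.
    """
    totals, chosen = {}, {}
    for group, selected in decisions:
        totals[group] = totals.get(group, 0) + 1
        chosen[group] = chosen.get(group, 0) + (1 if selected else 0)
    rates = {g: chosen[g] / totals[g] for g in totals}
    gap = max(rates.values()) - min(rates.values())
    return rates, gap

# Toy data: the engine selects group "A" customers far more often than "B".
audit = [("A", True), ("A", True), ("A", True), ("A", False),
         ("B", True), ("B", False), ("B", False), ("B", False)]
rates, gap = demographic_parity_gap(audit)
print(rates)  # {'A': 0.75, 'B': 0.25}
print(gap)    # 0.5
```

A large gap does not by itself prove discrimination, but it is exactly the kind of signal that would let a retailer notice, and investigate, the exclusions the paragraph above warns about before customers are affected.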
Paying attention to the underlying meaning of Timnit Gebru’s case, after all, costs far less than the price we could pay for failing to foresee and anticipate what is already close to us but that we are not yet accustomed to perceiving.