
Latest developments in the field of artificial intelligence (AI) regulation

Fri Jun 16 15:20:32 CEST 2023

The EU is seeking to regulate AI both within Europe and transatlantically, but AI development in science and research will not yet be restricted, the European Parliament (EP) has decided. MEPs have, however, introduced a total ban on AI systems for biometric tracking, emotion recognition and predictive policing.

As Science|Business notes in its latest reports, the regulatory efforts are motivated by concerns that unregulated AI services like ChatGPT "could be used to spread misinformation or become a hidden part of decision-making processes." The Center for AI Safety has also issued a brief warning about AI: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

In a draft of the EU legislation, MEPs therefore decided to introduce "a total ban on artificial intelligence systems for biometric tracking, emotion recognition and predictive policing". In addition, generative AI systems such as OpenAI's ChatGPT or Google's Bard must disclose whenever any type of content has been AI-generated, "primarily if such a product influences human behavior or causes physical or psychological harm."

At the same time, MEPs call on Member States to "invest more in AI research involving AI developers, academics, experts on inequality and non-discrimination and other representatives of civil society."

The European Commission first proposed the legislation in April 2021, and this step by the EP brings adoption of the so-called AI Act closer: it is expected as early as the end of 2023, though implementation across all Member States may take another two to three years. Given the rapid and dramatic development in AI research and use, it can be assumed that by then it will be too late to tame the potential negative impacts of this technology.

The threat of unregulated AI has also led the EU and the US to "now attempt to organize an international coalition of governments and companies to develop and sign a voluntary code of conduct" on AI to bridge the temporary gap in legislation. The proposal was presented this week at a meeting of the US/EU Trade and Technology Council (TTC) in Luleå, Sweden, by EC Vice President Margrethe Vestager. A roadmap for trustworthy AI and AI risk management was also presented at the meeting. Vestager hopes that Canada, the UK, Japan, India and others will join the code.

As Science|Business further reports, "The TTC has already established three expert groups to work on identifying standards and tools for trustworthy artificial intelligence. This work will now include a focus on generative AI systems. The expert groups have agreed on a taxonomy and terminology for AI and are monitoring emerging risks posed by AI."

Finally, it is worth noting that private-sector representatives also voice concerns and have an interest in regulation. As Dario Amodei, head of the American AI start-up Anthropic, has stated, the risk is that no one knows what will happen when AI is made available to millions of people, and that dangerous AI capabilities are extremely difficult to detect and therefore to mitigate.

In connection with this issue, we also recommend a podcast with Mark Coeckelbergh of the Center for Environmental and Technological Ethics – Prague, a philosopher and expert on the ethical challenges of contemporary technologies who favours the regulation of AI.


Photo: Pixabay
