Sprinter Posted April 2, 2024

Around the world, policymakers are coming to grips with the implications of artificial intelligence and its role in the broader digital and tech ecosystem. While different countries have different priorities and governance structures, all have reason to worry about the technology's potential to cause harm. After serving for a decade as a member of the European Parliament from the Netherlands, Marietje Schaake has become a leading transatlantic voice in debates about technology and the policy responses to digital innovations. Now the international policy director at Stanford University's Cyber Policy Center and international policy fellow at Stanford HAI (Human-Centered Artificial Intelligence), she weighs in regularly on the risks of privately governed AI, social media- and AI-augmented disinformation, and related topics, including as a member of the United Nations AI Advisory Body's Executive Committee.

MS: We have all seen the cut-throat competition between the United States and China in recent years. The differences between democracies and autocracies inevitably also play out in the way governments approach AI. Another division is between countries that can focus on regulation and investment, and the many governments of developing economies that are concerned more with access and inclusion. These distinct economic and social contexts need to be appreciated when we analyze the impact of AI. We are dealing with a technology that can be deployed both to advance scientific breakthroughs and to intensify state surveillance. It would help to see policymakers address more of these specifics.

PS: As always, Europe seems to be ahead of the US when it comes to AI regulation. Could you briefly walk us through the main strengths and weaknesses of the draft AI Act, and what regulators elsewhere might learn from Europe's recent legislative debate?

MS: The European Union has indeed been ahead of the curve. In fact, when work started on the AI Act, many cautioned that it was too soon. But now, with market developments racing ahead, one could say that the recent political agreement on the law came in the nick of time. The AI Act is primarily designed to mitigate the risks associated with how AI applications are used. Only at a later stage did lawmakers add requirements for foundation models (the large models, trained on vast datasets, that power the chatbots and other AI tools being released onto the market). Those later provisions represent an effort to regulate the technology itself. While I see the EU as an important values-based frontrunner in regulating AI, the tension between regulating uses and regulating the technology itself has not been resolved. That is something all regulators will have to deal with sooner or later.

PS: Since Europe's General Data Protection Regulation entered into force to much fanfare in 2018, the law has drawn much criticism for being difficult to interpret, implement, and enforce, and for generally falling short of expectations. Are these concerns justified? What do critics get right, and what are they missing?

MS: With the amount of hype surrounding the GDPR, it could only disappoint. In practice, it will always be challenging to harmonize 27 national data-protection regimes into one framework that applies to everyone – governments and companies alike. The good news is that EU leaders foresaw the need for periodic reviews and improvements, which are now being undertaken.
As with AI, enforcement will need to be shored up to ensure that the GDPR does not go down in history as a paper tiger.

PS: In an ideal world, what kind of disinformation safeguards would you like to see ahead of the European Parliament and US national elections this year? Are we still ultimately left with no choice but to trust figures like Elon Musk and Mark Zuckerberg to police election-interference campaigns?

MS: Unfortunately, companies (with their own changing policies and priorities) are setting the guardrails of our information ecosystem. Many have laid off or substantially downsized their "trust and safety" teams. Even worse, YouTube declared last year that, as a matter of policy, it will no longer remove or take action against videos peddling blatant lies about the 2020 election. It will not have escaped anyone's notice that those lies form the basis of Donald Trump's 2024 election campaign. Not only are disinformation researchers being politically targeted and sidelined, but many recent measures designed to improve the conditions of online debate are being reversed. On top of that, AI – and particularly generative AI – could be a game-changer for elections worldwide, given its ability to generate effectively infinite volumes of disinformation and to target that information more precisely.

We urgently need more transparency so that independent researchers can study the effects of changes in corporate policies. Right now, most of that information and data is shielded behind intellectual-property protections.

For its part, the EU is taking important steps to prevent the abuse of "dark patterns" (deceptive user interfaces designed to trick people into making harmful decisions, such as opting in to invasive levels of data tracking), and to regulate the targeting of political ads (which are not permitted to use sensitive personal information). The EU has also agreed on new rules requiring that all political ads "be clearly labeled as such and must indicate who paid for them, how much, to which elections, referendum, or regulatory process they are linked, and whether they have been targeted." These are important measures. But I fear they will not come in time for the next EU elections, or for elections in other parts of the world. With democracy already under unprecedented strain worldwide, we may soon bear witness to a major experiment in technologically augmented manipulation.

PS: Should TikTok be banned?

MS: Given China's intelligence laws, there is ample reason for concern about how corporate data can end up being used by the state. Many in Europe had deep concerns following revelations about how the US National Security Agency (NSA) could use tech-company data for intelligence gathering and law enforcement. That never led to a ban, however. In general, protecting children from addictive apps is a good idea. I would prefer that decisions about banning an app like TikTok be based on transparent security or child-protection rules that apply equally to all.