Can ‘we the people’ keep AI in check?

Technologist and researcher Aviv Ovadya isn’t sure that generative AI can be governed, but he thinks the most plausible means of keeping it in check might just be entrusting those who will be impacted by AI to collectively decide on the ways to curb it.

That means you; it means me. It’s the power of large networks of individuals to problem solve faster and more equitably than a small group of individuals might do alone (including, say, in Washington). It’s essentially relying on the wisdom of crowds, and it’s happening in many fields, including scientific research, business, politics, and social movements.

In Taiwan, for example, civic-minded hackers in 2015 formed a platform — “virtual Taiwan” — that “brings together representatives from the public, private and social sectors to debate policy solutions to problems primarily related to the digital economy,” as Taiwan’s digital minister, Audrey Tang, explained in the New York Times in 2019. Since then, vTaiwan, as it’s known, has tackled dozens of issues by “relying on a mix of online debate and face-to-face discussions with stakeholders,” Tang wrote at the time.

A similar initiative is Oregon’s Citizens’ Initiative Review, which was signed into law in 2011 and informs the state’s voting population about ballot measures through a citizen-driven “deliberative process.” Roughly 20 to 25 citizens who are representative of the entire Oregon electorate are brought together to debate the merits of an initiative; they then collectively write a statement about that initiative that’s sent out to the state’s other voters so they can make better-informed decisions on election days.

So-called deliberative processes have also successfully helped address issues in Australia (water policy), Canada (electoral reform), Chile (pensions and healthcare), and Argentina (housing, land ownership), among other places.

“There are obstacles to making this work” as it relates to AI, acknowledges Ovadya, who is affiliated with Harvard’s Berkman Klein Center and whose work increasingly centers on the impacts of AI on society and democracy. “But empirically, this has been done on every continent around the world, at every scale” and the “faster we can get some of this stuff in place, the better,” he notes.

Letting large cross sections of people decide on acceptable guidelines for AI may sound outlandish to some, but even technologists think it’s part of the solution. Mira Murati, the chief technology officer of the prominent AI startup OpenAI, tells Time magazine in a new interview, “[W]e’re a small group of people and we need a ton more input in this system and a lot more input that goes beyond the technologies — definitely regulators and governments and everyone else.”

Asked whether she fears that government involvement could slow innovation, or whether she thinks it’s too early for policymakers and regulators to get involved, Murati tells the outlet, “It’s not too early. It’s very important for everyone to start getting involved given the impact these technologies are going to have.”

In the current regulatory vacuum, OpenAI has taken a self-governing approach for now, instituting guidelines for the safe use of its tech and pushing out new iterations in dribs and drabs — sometimes to the frustration of the wider public.

The European Union has meanwhile been drafting a regulatory framework — the AI Act — that’s making its way through the European Parliament and aims to become a global standard. The law would assign applications of AI to three risk categories: applications and systems that create an “unacceptable risk”; “high-risk applications,” such as a “CV-scanning tool that ranks job applicants,” which would be subject to specific legal requirements; and applications not explicitly banned or listed as high-risk, which would largely be left unregulated.

The U.S. Department of Commerce has also drafted a voluntary framework meant as guidance for companies, but there remains no regulation — zilch — when it’s sorely needed. (In addition to OpenAI, tech behemoths like Microsoft and Google — despite being burned by earlier releases of their own AI that backfired — are very publicly racing again to roll out AI-infused products and applications, lest they be left behind.)

Something akin to the World Wide Web Consortium — the international organization created in 1994 to set standards for the web — would seemingly make sense for AI. Indeed, in that Time interview, Murati observes that “different voices, like philosophers, social scientists, artists, and people from the humanities” should be brought together to answer the many “ethical and philosophical questions that we need to consider.”

Maybe the industry starts there, and so-called collective intelligence fills in many of the gaps between the broad brush strokes. 

Maybe some new tools help toward that end. OpenAI CEO Sam Altman, for example, is also a cofounder of WorldCoin, a retina-scanning company in Berlin that wants to make it easy to authenticate a person’s identity. Questions have been raised about the privacy and security implications of WorldCoin’s biometric approach, but its potential applications include distributing a global universal basic income, as well as empowering new forms of digital democracy.

Either way, Ovadya thinks that turning to deliberative processes involving wide swaths of people from around the world is the way to create boundaries around AI while also giving the industry’s players more credibility.

“OpenAI is getting some flak right now from everyone,” including over its perceived liberal bias, says Ovadya. “It would be helpful [for the company] to have a really concrete answer” about how it establishes its future policies.

Ovadya similarly points to Stability.AI, the open-source AI company whose CEO, Emad Mostaque, has repeatedly suggested that Stability is more democratic than OpenAI because its technology is available everywhere, whereas OpenAI’s is currently available only in countries where it can provide “safe access.”

Says Ovadya, “Emad at Stability says he’s ‘democratizing AI.’ Well, wouldn’t it be nice to actually be using democratic processes to figure out what people really want?”

Can ‘we the people’ keep AI in check? by Connie Loizos originally published on TechCrunch
