Is OpenAI CEO Sam Altman the new Mark Zuckerberg?

Nearly two decades ago, a young Mark Zuckerberg launched a social media platform that was, as he described it, less a company and more a “social mission” designed to “get everyone in the world connected.” Lawmakers seemed to buy into this sunny vision, doing little to regulate the company even as it grew from a small college database to a global conglomerate with nearly 3 billion users. Now, Zuckerberg’s “social mission” is a breeding ground for hate and disinformation, a vessel for foreign actors to manipulate elections, and the infrastructure that fomented a violent insurrection against the U.S. government.

The unscrutinized rise of Facebook should serve as a cautionary tale. But as the latest tech darling, Sam Altman, CEO of ChatGPT maker OpenAI, heads to Congress this week, lawmakers seem determined to repeat history. Instead of welcoming Altman with a bipartisan steak dinner in his honor, Congress must subject the new Zuckerberg to the scrutiny his technology merits and the public deserves.

If the story of Facebook has taught us anything, it’s that innovation must come with regulation. Just a few years ago, OpenAI was chartered as a responsible nonprofit whose “primary fiduciary duty is to humanity”—not too far from Zuckerberg’s own sunny vision for Facebook. Now, OpenAI is a $30 billion company leading the high-stakes AI arms race.

To its credit, OpenAI is encouraging regulation. But we’ve seen this story play out before: Facebook and other Big Tech companies call for regulation as a PR facade, then run multimillion-dollar campaigns to defeat it.

Take March 2019, for example, when Zuckerberg penned a Washington Post op-ed begging lawmakers to pass privacy legislation after spending millions to kill a California bill that would have done just that. At the time, Facebook was spending at least $17 million to lobby against tech regulation at the federal level—nearly double what the company spent in 2015.

Four years later, Facebook parent company Meta and other Big Tech firms spent over $230 million on lobbying and attack ads, both directly and through dark-money groups, to defeat antitrust legislation—a move that Republican Sen. Chuck Grassley said left lawmakers “cowered.” And lawmakers continue to cower on this issue, trusting Facebook to self-regulate even after the company blatantly mishandled 2016 election meddling and acknowledged contributing to widespread mental health harms for children and teens.

On the precipice of a new era driven by AI, Congress is missing similar warning signs from Sam Altman. Lawmakers appear poised to trust Altman to self-regulate under the guise of “innovation,” even as the breakneck pace of AI development rings alarm bells for technologists, academics, civil society, and, yes, even lawmakers.

Unfortunately, while AI is very much in its infancy, its potential dangers make Facebook’s damage look like child’s play. Although the potential for AI to advance the common good—from innovations in medicine to more customized educational experiences—is enormous, we’ve already seen chatbots convincingly spread misinformation, validate heinous conspiracy theories, create propaganda, and even coach children toward sexual encounters with predatory adults. Earlier this year, a mother in Arizona picked up a call and heard her daughter’s voice, reproduced by AI, claiming she had been kidnapped while a scammer demanded ransom. And companies are currently testing new generative search functions that will spoonfeed users machine-generated answers to search queries, making it ever harder to find and verify reputable sources online.

The consequences of AI are particularly dire for our democracy. There are no rules governing the use of AI in elections, and the RNC has already released an AI-produced campaign ad. What’s to stop a campaign from recording a robocall using an opponent’s voice? Or manipulating video to show an opponent doing or saying something untrue? 

Earlier this month, the Biden administration convened CEOs of companies developing AI to discuss its risks, and Federal Trade Commission (FTC) Chair Lina Khan committed to “vigorously enforce laws” as needed. But Altman’s testimony before Congress presents yet another opportunity to demand guardrails and bolster safety for the public, especially as the next presidential election looms. 

Among other measures, Congress can use this week’s hearing to lay the foundation for an independent body that regulates and provides critical oversight of generative AI products. Much like the FTC or the Food and Drug Administration (FDA), this body would be tasked with vetting AI tools before they come to market.

By pressing Altman on exactly how AI can be used, Congress can also begin developing basic safeguards around the use of AI in our elections to ensure it won’t be used to fuel the spread of near-infinite disinformation online. And Congress can use this hearing to ensure that the voices and perspectives of young people are included in this critical process. Gen Z, the generation that grew up on social media, comprises some of the savviest—and most affected—users of emerging technology. Lawmakers must harness the experience and input of technologists, mental health experts, ethicists, and others in developing a response to the massive growth in AI.

If the 2010s were shaped by Mark Zuckerberg and the rise of social media, the 2020s will be shaped by generative AI and how it is interwoven into our lives—whether we want it or not. This week, Congress can course-correct from its embrace of Zuckerberg’s reckless mantra of “move fast and break things.” If it fails, what breaks this time will be impossible to repair.
