Martin Shkreli, the convicted fraudster best known for hiking the price of a lifesaving pharmaceutical, has been getting into the AI game—and he’s already making enemies on the internet.
Shkreli was sentenced in 2018 to seven years in prison for two counts of securities fraud and one count of conspiracy to commit securities fraud. He was released early from prison in September 2022, and in April 2023 launched Dr. Gupta, a medical AI chatbot that The Daily Beast called “a medical and legal nightmare.”
Shkreli has been feuding with researchers this week over the validity of his new AI product. After Sasha Luccioni, an AI researcher at Hugging Face, claimed that “[large language models] shouldn’t be used to give medical advice,” Shkreli went on the offensive, calling her an “AI Karen.” He also (seemingly jokingly) threatened critics on Twitter.
The social media scuffles have highlighted broader concerns about both the use of AI in healthcare settings and the risks of a platform that handles personalized health data being run by someone with Shkreli’s checkered past. “Generative language models are, by design, badly suited for medical diagnosis,” Luccioni tells Fast Company. “They simply generate the most plausible text based on user inputs, which can result in entirely false and misleading information being provided. Diagnostic medicine also involves taking into account patient characteristics such as their medical history, which language models simply can’t do in their current form.”
Shkreli did not respond to an emailed interview request made through the Dr. Gupta website, nor a separate follow-up made to his direct email address.
Luccioni’s colleague at Hugging Face, chief ethics scientist Margaret Mitchell, echoes those concerns. “[Shkreli’s] claim that ‘the latest techniques in AI and LLMs can answer any health-related question’ is either ignorance or an intentional lie,” she says. Mitchell is concerned that LLMs, which are trained on internet text to predict the most plausible next word in a sentence, could provide harmful medical misinformation. “These systems can be seen as ‘plausible sentence generators,’ helping folks with stuff like drinking bleach to cure COVID, in fluent English sentences,” she adds.
While any storyline involving Shkreli is sure to raise an eyebrow, it also points to a larger issue within the AI sector: those seeking fame and fortune diving headlong into the gold rush with little care for the consequences. AI remains a relatively unregulated space, with many touting LLM-based systems as far more than the simple pattern-matching tools they are, and capitalizing on the hype around AI.
The healthcare space in particular has already seen issues with AI. “The history of AI in patient-facing tech isn’t very optimistic so far,” says Helen Salisbury, a U.K.-based family doctor, pointing to Babylon Health, an AI-based system trialed in the U.K. over the past five years that has been blighted by accusations of ineffectiveness and data breaches. “I am still very skeptical that a symptom checker can do more than a Google search,” she says.
Shkreli bought the exclusive rights to manufacture Daraprim, a drug that can treat a rare parasitic disease, in 2015 and hiked the price from $13.50 per pill to $750, to much controversy. In January 2022, the entrepreneur was ordered to return $64.6 million in profits made through the price hikes and through creating what the Federal Trade Commission alleged was “a web of anti-competitive restrictions” to prevent rivals from making a cheaper generic version.
“When a convicted fraudster and well-documented rent-seeker like Martin Shkreli offers ‘solutions’ to healthcare access using the latest buzzword technology, our scam alerts should be going off,” says Mike Katell, an ethics fellow at the Alan Turing Institute, the U.K.’s national institute for data science and artificial intelligence. “He has shown us who he is.”