In a surreal twist that could have been lifted from a sci-fi B movie, Elon Musk’s lawyers floated an implausible high-tech explanation to shield their boss in a lawsuit over a fatal crash allegedly caused by Tesla’s half-baked Autopilot. They argued that the many recordings of Musk overstating his cars’ self-driving abilities might not be genuine at all, but could have been faked with generative AI.
But Santa Clara County Superior Court Judge Evette Pennypacker wasn’t having any of it, calling the bizarre defense “deeply troubling.” According to Reuters, Pennypacker wrote that, under Tesla’s theory, “Mr. Musk, and others in his position, can simply say whatever they like in the public domain, then hide behind the potential for their recorded statements being a deepfake to avoid taking ownership of what they did actually say and do.”
As a result, the judge ordered Musk to sit for a three-hour deposition in which he will have to testify under oath about every public promotional statement he has made about Tesla’s Autopilot technology since 2014. Those claims about the cars’ capabilities appeared in media interviews, tweets, and blog posts, including repeated promises of fully autonomous driving that never materialized.
Early in 2023, Bloomberg unearthed evidence showing that Musk oversaw the production of a 2016 video that greatly overstated Autopilot’s capabilities.
The victim’s family believes these claims about Autopilot (already the subject of a federal criminal investigation over multiple crashes) were a key factor in the death of their son, former Apple engineer Walter Huang. Huang died in 2018 when his 2017 Model X crashed into a concrete barrier about 45 minutes south of San Francisco.
An investigation by the National Transportation Safety Board found that Huang was playing a video game on his phone when his Tesla crashed, and the company says he hadn’t touched the wheel for the 19 minutes before impact. The family argues that Autopilot didn’t live up to Musk’s promises and that those promises left the engineer overconfident, hence the importance of establishing that Musk’s public statements are real and not, as his lawyers claim, fake. The trial is scheduled to begin on July 31.
The need to authenticate
While Tesla’s defense is positively ludicrous given the state of today’s generative AI technology, expect to hear more of this argument in the coming years as synthetic video becomes nearly perfect and harder to identify, even for the most dedicated and expert forensic scientists.
Coincidentally, our 10-year timeline on the evolution of generative AI describes exactly this scenario. In that fictional threat projection, 2025 is the year in which voice-synthesis technology becomes so perfect that a team of lawyers manages to get real audio recorded by the police dismissed as evidence in an Italian political corruption case, arguing that it may have been fabricated with AI. Unlike with Musk, the fictional case is dismissed and the defendants are acquitted.
Further down this sci-fi timeline, in 2029, Elon Musk himself falls victim to the technology:
“A hidden camera video, in which he admits that Tesla is going to go bankrupt imminently, causes the company to lose more than half of its value on the stock market. The magnate claims it’s a lie, but only his most ardent fans believe him. Nobody knows if the video is fake or true, but it doesn’t matter: A majority of buyers succumb to fear and cancel their orders, and a few months later, the company declares bankruptcy. Those who believed Musk lose their cars and their money.”
That’s all fiction, but as Tesla’s lawyers have demonstrated, it is a very real possibility. The only way to avoid these scenarios is to establish cryptographic standards that authenticate genuine content at the moment it is recorded by any sensor.
As Ziad Asghar, senior vice president of product management for Snapdragon Technologies and Roadmap at Qualcomm, told me on a video chat, we already have the technology to preserve the authenticity of every pixel and sound wave captured by a device. We only need “multiple layers of security” and file formats that work similarly to NFTs, using blockchain certificates to attest that captured images, videos, and audio files are 100% real and not AI generated or modified (a simplified sketch of the underlying signing idea follows below).
“It’s a big concern for everyone,” he told me. “As these [AI] technologies become more prevalent, this is going to be a challenge.”
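To make the idea concrete, here is a minimal sketch of what sensor-level authentication could look like: the capture device signs a cryptographic digest of the raw media with a private key it never exposes, and anyone can later verify the footage against the maker’s published public key. The key handling, function names, and single-signature design are my illustrative assumptions, not Qualcomm’s actual scheme; real provenance efforts such as the C2PA specification wrap this basic mechanism in signed manifests embedded in the file itself.

```python
# Minimal sketch of sensor-level content signing (illustrative, not a
# real device's or standard's actual scheme). Requires the "cryptography"
# package: pip install cryptography
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# In a real camera this key would live in a secure element and never
# leave the hardware; generating it here keeps the sketch runnable.
device_key = Ed25519PrivateKey.generate()
device_pubkey = device_key.public_key()


def sign_capture(media_bytes: bytes) -> bytes:
    """Sign a SHA-256 digest of the raw sensor output at capture time."""
    digest = hashlib.sha256(media_bytes).digest()
    return device_key.sign(digest)


def verify_capture(media_bytes: bytes, signature: bytes) -> bool:
    """Check footage against the device maker's published public key."""
    digest = hashlib.sha256(media_bytes).digest()
    try:
        device_pubkey.verify(signature, digest)
        return True
    except InvalidSignature:
        return False


video = b"...raw sensor frames..."
sig = sign_capture(video)

assert verify_capture(video, sig)                # untouched footage verifies
assert not verify_capture(video + b"x", sig)     # any modification breaks it
```

Note that nothing in this sketch strictly requires a blockchain: the authenticity guarantee comes from the signature itself, while a public ledger mainly helps with distributing device keys and timestamping captures.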
Fortunately, Apple, Adobe, and other major tech companies are already working on such a standard. Until one is finished and widely adopted, we will just have to stay vigilant and stop anyone trying to weasel out of their responsibility, just as Judge Pennypacker did with Elon Musk.