While researching AI experts, I came across a deepfake. It wasn’t obvious at first, given his seemingly legitimate profile and active presence on social media. However, after seeing the same creepy AI-generated photo of Dr. Lance B. Eliot all over the web, it became clear that he wasn’t a real person. So I followed his trail and learned how the trap works.
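One way to confirm that the same headshot is being recycled across profiles is a quick perceptual-hash comparison of the downloaded images. The sketch below is a hypothetical example using Python’s Pillow and ImageHash libraries; the file names are placeholders, not the actual photos referenced here.

```python
# Minimal sketch: check whether two downloaded profile photos are near-duplicates.
# Assumes Pillow and imagehash are installed (pip install Pillow imagehash);
# the file names below are placeholders for illustration only.
from PIL import Image
import imagehash

def same_photo(path_a: str, path_b: str, max_distance: int = 5) -> bool:
    """Return True if the two images appear to be the same picture."""
    hash_a = imagehash.phash(Image.open(path_a))
    hash_b = imagehash.phash(Image.open(path_b))
    # Hamming distance between perceptual hashes; a small value means the
    # images match even after resizing or recompression.
    return (hash_a - hash_b) <= max_distance

if __name__ == "__main__":
    print(same_photo("profile_linkedin.jpg", "profile_forbes.jpg"))
```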
The ubiquitous Dr. Lance B. Eliot
Eliot has more than 11,000 followers on LinkedIn, and he and I share two connections. Both are people with thousands of LinkedIn followers and decades of experience in AI, with roles spanning investor, analyst, speaker, columnist and CEO. LinkedIn members engage with Eliot even though his posts are repetitive and all funnel readers to his many Forbes articles.
At Forbes, Eliot posts every one to three days under nearly identical headlines. After reading a few articles, it’s obvious that the content is AI-generated tech jargon. One of the bigger problems with Eliot’s extensive Forbes portfolio is that the site limits readers to five free stories a month before prompting them to buy a subscription for $6.99 a month or $74.99 a year. That arrangement is further complicated now that Forbes has officially been put up for sale with a price tag of around $800 million.
Eliot’s content is also available behind a Medium paywall at $5 per month. And a short profile of Eliot appears on Cision, Muck Rack and the Sam Whitmore Media Survey, expensive paid media services trusted by the vast majority of PR professionals.
Then there’s the sale of Eliot’s books online. He sells them through Amazon for a little over $4 per title, though Walmart offers them for less. At ThriftBooks, Eliot’s Pearls of Wisdom sells for around $27, a steal compared to Porchlight’s $28 price. A safe bet is that book sales are driven by fake reviews, though some disappointed humans bought the books and left low ratings, complaining that the content is repetitive.
The damage to big brands and individual identities
After clicking a link to Eliot’s Stanford University profile, I opened the real Stanford website in another browser, where a search for Eliot produced zero results. A side-by-side comparison also shows that the red of the Stanford mark on Eliot’s page is not the same shade as on the authentic site.
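For what it’s worth, that kind of shade mismatch can be verified programmatically rather than by eye. Below is a rough sketch, assuming you have screenshots of both pages saved locally and know roughly where the mark sits; the file names and pixel coordinates are illustrative placeholders, not taken from the actual pages.

```python
# Rough sketch: sample the logo color from two page screenshots and compare.
# File names and pixel coordinates are placeholders for illustration only.
from PIL import Image

def sample_color(path: str, xy: tuple[int, int]) -> tuple[int, int, int]:
    """Return the RGB value of a single pixel inside the logo area."""
    return Image.open(path).convert("RGB").getpixel(xy)

real = sample_color("stanford_real.png", (120, 40))
suspect = sample_color("stanford_suspect.png", (120, 40))

# A noticeable per-channel difference suggests the mark was recreated
# rather than copied from the authentic site.
diff = [abs(a - b) for a, b in zip(real, suspect)]
print(real, suspect, diff)
```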
A similar experience occurred with Cornell’s arXiv site. With just a minor tweak to Cornell’s logo, one of Eliot’s scholarly papers appeared, riddled with typos and more low-quality AI-generated content dressed up in standard research-paper format. The paper cited an extensive list of sources, including Oliver Wendell Holmes, who apparently published in an 1897 edition of the Harvard Law Review, three years after his death.
Those not interested in reading Eliot’s content can head over to his podcasts, where a bot spews out nonsense lingo. An excerpt from a listener’s review reads, “If you like listening to someone read verbatim from a paper script, this is a great podcast for you.”
The URL posted alongside Eliot’s podcasts promotes his self-driving car website, which initially led to a dead end. An update to the same link now points to Techbrium, one of Eliot’s fake employer websites.
It’s amazing how Eliot is able to do all of this and still find time to speak at executive leadership summits hosted by HMG Strategy. The fake events feature big-name tech companies listed as partners, with real advisors and biographies of executives from Zoom, Adobe, SAP, ServiceNow, and the Boston Red Sox, among others.
Attendance at HMG events is free for senior technology executives, provided they register. According to HMG’s terms and conditions, “if for any reason you are unable to attend and cannot send a direct report in your place, a $100 no-show fee will be charged to cover meal and service staff costs.”
The cost of ignoring deepfakes
Further digging into Eliot led to a two-year-old Reddit thread calling him out, one that quickly veered off into hard-to-follow conspiracy theories. Eliot may not be an anagram or linked to the NSA, but he is one of millions of online fakes making money, and they are getting harder to spot.
Watching the financial ripple effects of deepfakes raises questions about who is responsible when they generate revenue for themselves and their partners. And that’s before counting the costs of malware downloads, outreach wasted on fake prospects, and payments for unwanted affiliate marketing links.
Arguably, a keen eye can spot a deepfake by its blurry or missing background, odd hair, uncanny eyes, and a robotic voice that doesn’t sync with its mouth. But if that were universally true, deepfakes wouldn’t be costing billions in losses as they fuel financial scams and impersonate real people.
AI hasn’t yet eliminated all the flaws that make a deepfake detectable, but it is actively working on them; articles like this one help the AI learn and improve. For now, that leaves the responsibility of spotting deepfakes with individuals, who must stay vigilant about who they let into their networks and lives.
Kathy Keating is a real person and the founder of ProsInComms, a public relations consultancy.