The photorealistic capabilities of Sora have taken many of us by surprise.
‘Fake news!’
It’s a bit unsettling that the possible harbinger of truth’s destruction has arrived in the form of golden retriever puppies.
Unfortunately, just as with Adobe Photoshop before it, Sora and other generative AI tools will be used for nefarious purposes.
Trying to deny this is trying to deny human nature.
Sora, much like OpenAI's flagship AI product ChatGPT, probably won't be the tool used to produce this fakery.
For example, prompts that request explicit sexual content or the likeness of others will be rejected.
These knock-offs (much like chatbots based on ChatGPT) won't necessarily have the same safety and security features.
Robocop, meet Robocriminal
AI tools are already being used for a lot of dodgy stuff online.
It only takes one person with malicious intent for an AI tool to become dangerous.
The power of something like Sora could make this even worse, allowing for even more sophisticated fakery.
It's only going to get harder to tell the fakes from reality.
Despite what some AI proponents might tell you, there's currently no reliable way to definitively confirm whether footage is AI-generated.
OpenAI CEO Sam Altman has previously come under fire over the misuse of ChatGPT by malign third-party groups.
Software for this does exist, but it doesn't have a great track record. The most accurate free AI detector, Sapling, offered just 68% accuracy.
Accused of a crime, but you've got a video recording that exonerates you?
The rise of AI has enabled online phishing scams to become faster, easier, and larger in scale than ever before.
I hate this argument, though.
I'm not even blaming the people who make it.