
During its first year, ChatGPT raised as many questions about generative AI as it answered.

Security risks

Take the known security risks.


ChatGPT often generates incorrect yet believable information that can lead to dangerous consequences when its advice is blindly followed.

Cybercriminals use it to improve and scale up their attacks.

Many believe that the greatest risk lies in its further development, or rather in the next generation of generative AI.

With all of generative AI's advancements over the past year, the security risks are becoming more complex.

This is evident when you consider both its plausibility and its perniciousness.

So argues a cryptographic and cybersecurity advisor at Keeper Security.

As skeptical and aware people, we are good at recognizing when something feels off.

ChatGPT can make it feel on.

These models are trained on internet data, and as we know, one shouldn't believe everything one reads online.

Experts can identify the obviously wrong answers, but non-experts won't know the difference.

In this regard, generative AI can be weaponized to overwhelm the world with false information.

This is a fundamental issue with any generative AI solution.

More issues stem from the criminal use of these tools.

For example, cybercriminals are using ChatGPT to generate very realistic phishing emails and URLs for spoofed websites.

Now, however, that email may appear to come from your family, friend or colleague.

What's more, generative AI solutions can generate variants very quickly, and at virtually no cost, to circumvent spam detectors.

By weaponizing generative AI in this manner, malicious actors can rapidly scale up their attacks.

Generative AI will also likely give password-cracking attackers a leg up.

Armed with contextual details about a target, these tools can ramp up the effectiveness of guessing attacks.

Privacy implications

The sharing of sensitive data is another security risk associated with generative AI tools.

ChatGPT is available to anyone and everyone, even those with no understanding of cybersecurity or best practices.

Cybercriminals sell sensitive information like this on the dark web or use the information to target their victims.

Is it possible to legislate the use and development of generative AI?

Will the industry collaborate enough to create guardrails that can be universally followed?

Generative AI has made regulating, censoring and legislating technology more complex than ever.

Jurisdictional boundaries make research moratoriums ineffective and can even exacerbate the problem.

For example, imagine what would happen if the U.S. mandated a temporary pause on generative AI research.

Researchers outside the U.S. would not have to comply.

Any powerful technology can be wielded for good or evil.

ChatGPT and its counterparts are no different.

If it's powerful, it also has a dark side.

But should this lead to restricting further R&D?

Suppose a jurisdiction temporarily bans research on the large language models (LLMs) on which ChatGPT is built. Development would simply shift to jurisdictions without such a ban.

Beyond that, any moratorium disproportionately favors those using AI for nefarious purposes.

Remember that criminals don't follow congressional regulations.


The views expressed here are those of the author and are not necessarily those of TechRadar Pro or Future plc.
