
Humans are fundamentally social creatures.

And language lies at the heart of how we socialise and communicate.

It is the basis of understanding and therefore coexistence.

If GenAI can master not just languages but regional dialects, it could further bolster threat actors' efforts to socially engineer victims, and to conduct convincing fraud and disinformation campaigns.


A message written in our own dialect creates a sense of empathy with the person writing it.

Even when we see it being artificially generated by GenAI, it can have a similar impact.

However, there are unfortunately also opportunities here for threat actors.

Take phishing: threat actors must convince victims that a message comes from a legitimate source, and they might do so by using official logos and sender domains.

But language also plays a key role.

This is where GenAI could give opportunistic threat actors a leg-up.

Now, a dialect-based lure is unlikely to work in an enterprise setting.

But it could be used in scams targeting consumers.

GenAI is already predicted to supercharge phishing by generating grammatically perfect content in multiple languages.

Why not multiple dialects too?

It's a cybercrime that already cost victims $734m in 2022, according to the FBI.

But the bad guys are always looking for innovative ways to increase their haul.

Building bombs and faking news

Another threat looms large this year: misinformation/disinformation.

First, dialect is not widely used in written content online.

That means we may pay more attention to content written in a specific dialect.

If it's our own dialect, we might feel instantly closer to the person or machine that posted it.

Politicians and cybersecurity experts may warn us about election interference from foreigners.

But what could be less foreign than an account posting in a local or regional dialect close to home?

Finally, consider how dialects may allow threat actors to jailbreak GenAI systems.

Researchers at Brown University in the US used rarely spoken languages like Gaelic to do exactly this to ChatGPT.

But we must remember that although GenAI seems intelligent, it can sometimes have the naivety of a four-year-old.

Time to educate

So what's the solution?

Certainly, AI developers must build better protections against abuse of GenAI's dialect-generating capabilities.

Companies should include dialect in their anti-phishing/fraud training programs.

And governments and industry bodies may want to run broader public awareness campaigns.

That isn't where we are right now.

But as cybersecurity professionals, we have to acknowledge that it could be soon.


The views expressed here are those of the author and are not necessarily those of TechRadar Pro or Future plc.
