
These vulnerabilities include privilege escalation, backdoor credentials, possible injection exposure, and unencrypted data.
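To make one of these classes concrete, here is a minimal, hypothetical sketch of an injection exposure and its remediation, the kind of before-and-after an AI remediation assistant might propose. The function names and the SQLite table are illustrative assumptions, not taken from any specific tool's output.

```python
import sqlite3

def find_user_vulnerable(conn, username):
    # Injection exposure: user input is concatenated directly into SQL.
    # A payload like "x' OR '1'='1" matches every row in the table.
    return conn.execute(
        "SELECT id FROM users WHERE name = '" + username + "'"
    ).fetchall()

def find_user_fixed(conn, username):
    # Remediated: a parameterized query treats the input as data, not SQL.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()
```

With an in-memory database holding two users, passing the payload `"x' OR '1'='1"` to the vulnerable version returns every row, while the parameterized version returns none.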

Such tools can strengthen developers' defensive capabilities, easing the path to a security-first mindset.


But like any new and potentially impactful innovation, they also raise issues that teams and organizations should explore.

In addition, teams cannot blindly trust the output of AI coding and remediation assistants.

Hallucinations, or incorrect answers, are quite common, and they are typically delivered with a high degree of confidence.
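One practical guard against blindly trusting such output is to run any AI-suggested fix through human-written checks before merging. The sketch below is a hypothetical example: the "AI-suggested" path validator and the test cases are assumptions for illustration, not real tool output.

```python
# Hypothetical scenario: an AI assistant proposed this remediation for a
# path-handling helper. A reviewer checks it against known-good and
# known-bad inputs instead of accepting it on confidence alone.

def ai_suggested_is_safe_path(path: str) -> bool:
    # Assumed AI suggestion: reject traversal sequences and absolute paths.
    return ".." not in path and not path.startswith("/")

def review_suggestion(candidate, cases):
    """Return every case where the candidate disagrees with expectations."""
    return [(path, expected) for path, expected in cases
            if candidate(path) != expected]

cases = [
    ("docs/readme.txt", True),    # ordinary relative path: allow
    ("../etc/passwd", False),     # directory traversal: reject
    ("/etc/shadow", False),       # absolute path: reject
]
```

If `review_suggestion` returns an empty list, the suggestion passes this particular gauntlet; a non-empty list pinpoints exactly where the confident-sounding answer is wrong.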

Ultimately, we will always need the human perspective to anticipate and protect code from today's sophisticated attack techniques.

2) How should training evolve to maximize the benefits of AI remediation?

3) How can DevSecOps providers add value to teams that use AI remediation?

It all comes down to innovation.

LLMs are trained on publicly available data, and they are only as good as that dataset.

This is, again, a key area in which security-aware developers fill a void.


The views expressed here are those of the author and are not necessarily those of TechRadar Pro or Future plc.
