7 common mistakes CTOs make when implementing AI code tools

Emilien Coquard
3 min read · May 7, 2024


AI is unleashing a new industrial revolution.

Just as the steam engine and the factory drove an exponential increase in production, AI tools will do the same in the workplace. It’s no surprise that so many CTOs, including our offshore development partners, are exploring how they can best leverage these technologies.

But while AI is going to revolutionise business, like any tool it needs to be used in the right way. AI code tools are no exception: if you don’t adopt them properly, they can cause your company financial and reputational damage.

Here are seven common mistakes to avoid so you can leverage AI code tools effectively.

Selecting the first tool you come across

Not all AI code tools are the same.

They are built on different underlying LLMs, support different programming languages, and offer different feature sets. Just because everyone is talking about the hot tool of the week doesn’t mean it’s right for your team. There might be a better option that is more tailored to your setup.

So instead of rushing in and buying licences for eight different tools whenever a team member requests one, take a moment to do some proper research and get familiar with the lay of the land.

Action: Conduct an internal review of existing tools and new tool candidates.

Implementing without a clear strategy

How will these tools fit into your development process?

Will you run your AI security checking tool first or last in your testing process? Are developers free to run code-generating tools as they wish, or should they limit their use to smaller blocks of code?

These questions might seem insignificant, but they are critical for ensuring these tools are helpful, not harmful.

Action: Prepare a strategy document outlining good and bad practices for using AI coding tools and where they fit within your development strategy.

Failing to mitigate data and security risks

A recent Stanford University study of AI coding tools revealed a troubling finding.

Developers who used the assistants introduced more security vulnerabilities than those who didn’t.

And a survey of software engineers by Snyk agrees: more than 50% reported increased security issues in AI-generated code.

Part of this was behavioural: coders trusted the AI to write quality code, and because the generation process is more opaque, the output was harder to review than human-written code. But the other factor is training data.

AI models are trained on legacy code and open-source codebases that don’t always follow best practices. These often contain vulnerabilities, so the tools end up reproducing the same weaknesses in their suggestions.
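
To make that concrete, here is a minimal illustrative sketch (not taken from the studies above): an assistant trained on older codebases might suggest the first pattern, which builds a SQL query by string formatting and is open to injection, when the parameterised version is the safer habit.

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Pattern common in legacy and open-source code: the query is built by
    # string interpolation, so a crafted username can inject arbitrary SQL.
    query = f"SELECT id, email FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Safer pattern: a parameterised query lets the driver handle escaping.
    return conn.execute(
        "SELECT id, email FROM users WHERE name = ?", (username,)
    ).fetchall()
```

A code review, or an automated scanner run as part of your testing process, should catch the first pattern regardless of whether a human or an AI wrote it.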

Then there’s data.

Many AI tools train on the data they are fed, and most send your prompts back to the vendor’s servers. That creates the potential for data leaks. It’s not enough to trust these tech companies to stick to data regulations; you need to make sure you aren’t disclosing real data in the first place.
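
One way to reduce that risk is to sanitise prompts before they leave your network. The sketch below is purely illustrative (the patterns and function name are assumptions, not any vendor’s API): it strips obvious emails, card numbers, and credentials from a prompt before it is sent to an external AI service.

```python
import re

# Illustrative patterns only; a real deployment would use a vetted
# data-loss-prevention tool and rules tuned to your own data.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "<EMAIL>"),
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "<CARD_NUMBER>"),
    (re.compile(r"(?i)(api[_-]?key|secret|token)\s*[:=]\s*\S+"), r"\1=<REDACTED>"),
]

def sanitise_prompt(prompt: str) -> str:
    """Replace obviously sensitive values before text is sent to an external AI tool."""
    for pattern, replacement in REDACTIONS:
        prompt = pattern.sub(replacement, prompt)
    return prompt

# Example: only the sanitised prompt is forwarded to the vendor's API.
print(sanitise_prompt("Debug this: api_key=sk-12345 for user jane@example.com"))
```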

Neither issue is insurmountable, but both need to be mitigated to prevent costly damage.

Action: In your strategy document, note the risks associated with each tool and how you will mitigate them.

Read the full article at: https://thescalers.com/7-common-mistakes-ctos-make-when-implementing-ai-code-tools/
