
Assessing the risk of AI in enterprise IT

We speak to security experts about how IT departments and security leaders can ensure they run artificial intelligence systems safely and securely

Given the level of tech industry activity in artificial intelligence (AI), most IT leaders, if they have not already, will have to consider the security implications of such systems running freely in their organisations. The real risks of AI lie in the easy access it gives employees to powerful tools, and in the implicit trust many place in AI-generated outputs.

Javvad Malik, security awareness advocate at KnowBe4, urges IT and security leaders to address both. While the possibility of an AI system compromise might seem remote, Malik warns that the bigger immediate risk comes from employees making decisions based on AI-generated content without proper verification.

“Think of AI as an exceptionally confident intern. It’s helpful and full of suggestions, but requires oversight and verification,” he says.


In a podcast on the topic of AI and security, Gartner analyst Nader Heinen tells Computer Weekly that one of the big risks to corporate IT security is AI’s access to corporate data, including data that is already protected by access controls designed to prevent unauthorised access.

“There’s internal data leakage – oversharing – which occurs when you ask the model a question and it gives an internal user information that it shouldn’t share. And then there’s external data leakage,” says Heinen.

“If you think of an AI model as a new employee who has just come into the company, do you give them access to everything? No, you don’t. You trust them gradually over time as they demonstrate the trust and capacity to do tasks,” he says.

Heinen recommends taking this same approach when deploying AI systems across the organisation.
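In practice, that gradual-trust principle often translates into filtering what a retrieval layer hands to the model based on the requesting user’s existing entitlements, so answers can only draw on data that user could already read. The sketch below is a minimal illustration of the idea in Python; the clearance labels and document store are hypothetical, not a reference to any particular product.

```python
# Minimal sketch: filter retrieved documents by the requesting user's
# clearance before they reach the model, so answers can only draw on data
# the user is already entitled to see. Labels and clearances are hypothetical.
from dataclasses import dataclass

CLEARANCE_ORDER = {"public": 0, "internal": 1, "restricted": 2}

@dataclass
class Document:
    doc_id: str
    text: str
    classification: str  # "public", "internal" or "restricted"

def filter_for_user(documents: list[Document], user_clearance: str) -> list[Document]:
    """Return only the documents the user is cleared to read."""
    level = CLEARANCE_ORDER[user_clearance]
    return [d for d in documents if CLEARANCE_ORDER[d.classification] <= level]

# Usage: only the filtered set is passed to the model as context.
corpus = [
    Document("hr-001", "Salary bands for 2025", "restricted"),
    Document("kb-042", "How to reset your VPN token", "internal"),
]
context_docs = filter_for_user(corpus, user_clearance="internal")
print([d.doc_id for d in context_docs])  # ['kb-042']
```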


Similarly, to avoid malicious injections, Sood advises IT leaders to make sure AI-generated outputs are sanitised and validated before being presented to users or used in downstream systems. Wherever feasible, he says systems should be deployed with explainable AI capabilities, allowing for transparency into how decisions are made.
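What that sanitisation step looks like will vary, but a simple version is to escape markup and reject outputs that are too long or contain characters unsafe for downstream use. The sketch below is only an illustration of the pattern; the specific rules and limits are assumptions rather than a recommended policy.

```python
# Minimal sketch of sanitising and validating model output before it is shown
# to users or passed downstream. The rules (escape markup, cap length, reject
# shell metacharacters) are illustrative assumptions, not a recommended policy.
import html
import re

MAX_OUTPUT_CHARS = 4000
FORBIDDEN_PATTERN = re.compile(r"[;&|`$]")  # crude guard for text bound for a shell

def sanitise_output(raw: str) -> str:
    """Escape markup and normalise whitespace in model output."""
    return re.sub(r"\s+", " ", html.escape(raw)).strip()

def validate_output(cleaned: str) -> bool:
    """Reject output that is too long or contains characters unsafe downstream."""
    return len(cleaned) <= MAX_OUTPUT_CHARS and not FORBIDDEN_PATTERN.search(cleaned)

response = sanitise_output("<b>Reset the router</b> and then run `rm -rf /`")
if validate_output(response):
    print(response)
else:
    print("Output rejected: failed validation")
```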

Bias is one of the most subtle and dangerous risks of AI systems. As Fox points out, skewed or incomplete training data bakes in systemic flaws. Enterprises are deploying powerful models without fully understanding how they work or how their outputs could impact real people. Fox warns that IT leaders need to consider the implications of deploying opaque models, which make bias hard to detect and nearly impossible to fix.

“If a biased model is used in hiring, lending or healthcare, it can quietly reinforce harmful patterns under the guise of objectivity. This is where the black box nature of AI becomes a liability,” he says.
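Detecting that kind of quiet reinforcement usually starts with straightforward disparity measurements on the model’s decisions. As a rough illustration, the sketch below computes selection rates per group and flags any group that falls below 80% of the highest rate (the familiar four-fifths heuristic); the data and threshold are illustrative only.

```python
# Minimal sketch: compare a model's selection rates across groups and flag
# disparities using the four-fifths heuristic. The decisions are made-up data.
from collections import defaultdict

def selection_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """decisions: (group, was_selected) pairs -> selection rate per group."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, picked in decisions:
        totals[group] += 1
        selected[group] += int(picked)
    return {g: selected[g] / totals[g] for g in totals}

def flag_disparity(rates: dict[str, float], threshold: float = 0.8) -> list[str]:
    """Return groups whose rate falls below threshold * best rate."""
    best = max(rates.values())
    return [g for g, r in rates.items() if best > 0 and r < threshold * best]

decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
rates = selection_rates(decisions)
print({g: round(r, 2) for g, r in rates.items()}, flag_disparity(rates))
# {'A': 0.67, 'B': 0.33} ['B']
```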


For high-stakes decisions, Sood urges CIOs to mandate human oversight wherever AI systems handle sensitive data or perform irreversible operations, providing a final safeguard against compromised AI output.
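One common way to operationalise that safeguard is to gate particular classes of action behind an explicit approval step, so the system can act autonomously on routine tasks but must wait for a person before doing anything irreversible. The sketch below shows the gating logic only; the action categories and approval mechanism are assumptions for illustration.

```python
# Minimal sketch: route irreversible or sensitive AI-proposed actions to a
# human approver instead of executing them automatically. The action
# categories and approval callback are illustrative assumptions.
from typing import Callable

HIGH_STAKES_ACTIONS = {"delete_records", "transfer_funds", "send_offer_letter"}

def execute_with_oversight(action: str, payload: dict,
                           approve: Callable[[str, dict], bool]) -> str:
    """Run low-risk actions directly; ask a human before high-stakes ones."""
    if action in HIGH_STAKES_ACTIONS and not approve(action, payload):
        return f"{action}: held for human review"
    return f"{action}: executed"

# Usage with a stand-in approver that rejects everything by default.
print(execute_with_oversight("summarise_ticket", {"id": 42}, approve=lambda a, p: False))
print(execute_with_oversight("delete_records", {"table": "customers"}, approve=lambda a, p: False))
```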

Alongside securing data and AI training, IT leaders should also work on establishing resilient and secure AI development pipelines.

“Securing AI development pipelines is paramount to ensuring the trustworthiness and resilience of AI applications integrated into critical network infrastructure, security products and collaborative solutions. It necessitates embedding security throughout the entire AI lifecycle,” he says.

This includes the code for generative artificial intelligence (GenAI), where models and training datasets are part of the modern software supply chain. He urges IT leaders to secure AI for IT operations (AIOps) pipelines using continuous integration/continuous delivery (CI/CD) best practices, code signing and model integrity checks. This needs to include scanning training datasets and model artefacts for malicious code or trojaned weights, and vetting third-party models and libraries for backdoors and licence compliance.
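A model integrity check of the kind described often comes down to verifying that the artefact entering the pipeline matches a digest recorded when it was vetted. The snippet below is a minimal sketch of that step; the manifest structure, file names and placeholder digest are assumptions.

```python
# Minimal sketch: verify a model artefact against a pinned SHA-256 digest
# recorded at approval time, as one step in a CI/CD integrity check.
# The manifest structure and file path are illustrative assumptions.
import hashlib
from pathlib import Path

APPROVED_ARTEFACTS = {
    # artefact filename -> digest recorded when the model was vetted
    "sentiment-model-v3.onnx": "9f2c...e41a",  # placeholder digest, not real
}

def sha256_of(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artefact(path: Path) -> bool:
    """Fail the pipeline if the artefact's digest does not match the approved one."""
    expected = APPROVED_ARTEFACTS.get(path.name)
    return expected is not None and sha256_of(path) == expected

if __name__ == "__main__":
    artefact = Path("models/sentiment-model-v3.onnx")
    if not artefact.exists() or not verify_artefact(artefact):
        raise SystemExit("Model artefact missing or failed integrity check")
    print("Model artefact verified")
```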

While the growing openness of AI models fosters transparency, collaboration and faster iteration across the AI community, Fox notes that AI models are still software, often comprising extensive codebases, dependencies and data pipelines. “Like any open source project, they can harbour vulnerabilities, outdated components, or even hidden backdoors that scale with adoption,” he warns.


In Fox’s experience, many organisations don’t yet have the tools or processes to detect where AI models are being used in their software. Without visibility into model adoption, whether embedded in applications, pipelines or application programming interfaces (APIs), governance is impossible. “You can’t manage what you can’t see,” he says. As such, Fox suggests that IT leaders should establish visibility into AI usage.
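A low-effort starting point for that visibility is scanning dependency manifests for packages that indicate AI or large language model usage, so teams at least know which applications pull in model-serving or LLM client libraries. The sketch below checks Python requirements files against a watchlist; the package list and repository layout are assumptions.

```python
# Minimal sketch: scan Python requirements files for packages that indicate
# AI/ML or LLM usage, as a first step towards an inventory of where models
# are used. The watchlist and repo layout are illustrative assumptions.
from pathlib import Path

AI_PACKAGE_WATCHLIST = {"openai", "anthropic", "transformers", "torch",
                        "tensorflow", "langchain", "onnxruntime"}

def scan_requirements(repo_root: Path) -> dict[str, set[str]]:
    """Map each requirements file to the watched AI packages it declares."""
    findings: dict[str, set[str]] = {}
    for req_file in repo_root.rglob("requirements*.txt"):
        names = set()
        for line in req_file.read_text().splitlines():
            pkg = line.split("#")[0].strip().split("==")[0].split(">=")[0].lower()
            if pkg in AI_PACKAGE_WATCHLIST:
                names.add(pkg)
        if names:
            findings[str(req_file)] = names
    return findings

if __name__ == "__main__":
    for path, packages in scan_requirements(Path(".")).items():
        print(f"{path}: {', '.join(sorted(packages))}")
```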

Overall, IT and security leaders are advised to implement a comprehensive AI governance framework (see Tips for a secure AI strategy box).

Elliott Wilkes, CTO at Advanced Cyber Defence Systems, says: “CISOs must champion the creation of an enterprise-wide AI governance framework that embeds security from the outset.”

He says AI risks need to be woven into enterprise-wide risk management and compliance practices.


The governance framework needs to define explicit roles and responsibilities for AI development, deployment and oversight to establish an AI-centric risk management process. He recommends putting in place a centralised inventory of approved AI tools, which should include risk classifications.
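Such an inventory need not start as anything more elaborate than a structured register recording each approved tool, its owner, its use case and its risk tier. The sketch below shows one possible shape for such a record; the fields, tiers and example entries are illustrative rather than a prescribed schema.

```python
# Minimal sketch: a structured register of approved AI tools with owners
# and risk classifications. Field names, tiers and entries are illustrative.
from dataclasses import dataclass

@dataclass
class ApprovedAITool:
    name: str
    vendor: str
    owner: str                      # accountable business owner
    use_case: str
    risk_tier: str                  # e.g. "low", "medium", "high"
    data_classes: tuple[str, ...]   # data the tool is allowed to process

REGISTER = [
    ApprovedAITool("Code assistant", "ExampleVendor", "Head of Engineering",
                   "code suggestions", "medium", ("source code",)),
    ApprovedAITool("CV screening model", "ExampleVendor", "Head of HR",
                   "candidate shortlisting", "high", ("personal data",)),
]

high_risk = [t.name for t in REGISTER if t.risk_tier == "high"]
print("High-risk AI tools needing extra oversight:", high_risk)
```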

“The governance framework helps substantially in managing the risk associated with shadow AI – the use of unsanctioned AI tools or services,” he adds.

And finally, IT teams need to mandate that only approved AI tools are run in the organisation. All other AI tools and services should be blocked.
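Enforcement typically sits at the network or proxy layer, where requests to AI services are checked against the approved list and anything else is refused. The sketch below shows only the decision logic, with made-up domains; it is not a complete proxy implementation.

```python
# Minimal sketch of allowlist-based blocking: only domains belonging to
# approved AI services are permitted; everything else is refused.
# The domains below are made up for illustration.
from urllib.parse import urlparse

APPROVED_AI_DOMAINS = {"api.approved-ai.example.com", "assistant.internal.example.com"}

def is_request_allowed(url: str) -> bool:
    """Allow a request only if its host is on the approved AI service list."""
    host = urlparse(url).hostname or ""
    return host in APPROVED_AI_DOMAINS

for url in ("https://api.approved-ai.example.com/v1/chat",
            "https://unsanctioned-chatbot.example.net/ask"):
    verdict = "allow" if is_request_allowed(url) else "block"
    print(f"{verdict}: {url}")
```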

Gartner’s Heinen recommends that CISOs take a risk-based approach. Tools such as malware detection or spell checkers are not high risk, whereas HR or safety systems carry a much greater risk.

“Just like with everything else, not every bit of AI operating in your environment is a critical component or a high risk,” he says. “If you’re using AI to hire people, that’s probably an area you want to pay attention to,” he adds. “If you’re using AI to monitor safety in a factory, then you may want to pay more attention to it.”


Source: Kofi Acquah
