New data raises the question: is an AI power struggle in the C-suite imminent?

More than 69% of executives expect to spearhead their organisation’s AI efforts.


Recent research from Dialpad found that over 69% of C-suite executives reporting to the CEO - commonly known as CxOs - expect to spearhead their organisation’s AI efforts. The research also found that three quarters (75%) of respondents believe AI will have a significant impact on their roles in the next three years, and more than 69% are already using the technology. Does this mean that an AI power struggle is imminent in the C-suite?

It’s no surprise that AI is top of mind for the C-suite, but with everyone expecting to lead the charge, does this risk confusion, duplication of effort, and even a power struggle over who holds the keys to an organisation’s AI-related decisions? And what can help tackle these challenges and cut out the confusion?

The case for Chief AI Officers

Whilst most executives (69%) are already using AI, this doesn’t mean they all feel comfortable with the technology. In fact, there are some notable concerns, with 54% of leaders worried about AI regulation and 38% moderately to extremely concerned about AI in general. In light of this, the case for Chief AI Officers (CAIOs) is a strong one - something the Biden administration in the US has called for recently. In theory, a CAIO will be able to take on these concerns and act as the main point of contact and the final authority on all things AI. It will be their responsibility to understand regulatory developments, to shield other executives and teams, and to dictate how AI is used across the business. Crucially, a CAIO can also stop any AI ‘power struggle’ from forming by acting as a figurehead for AI-related decisions and planning in a business.

“We will likely begin to see other governments echo the Biden administration’s call for more Chief AI Officer roles to be created,” said Jim Palmer, CAIO of Dialpad. “The CAIO role is one that can manage and mitigate much of the risk that comes with AI development, ensuring privacy and security standards are met and that customers understand how and why data is being used. The specifics of the CAIO role are far from fully mapped out, but as the AI boom continues, this will no doubt change in the coming months and years.”

Too many tools

The research also found that leaders across the business are often using multiple AI solutions - 33% of executives are using at least two AI solutions, 15% are using three, and 10% are using four or more. If multiple different AI solutions are in use across a business, there is a real risk of duplication between departments - meaning companies are often paying unnecessarily for tools with the same functionality. Disparate tools and solutions can also undermine a single source of truth, making it harder to align teams around the same data, goals and objectives - fuelling the potential AI power struggle among executives even further.

This is another area where the role of a CAIO can be so important, reducing duplication and ensuring the business invests smartly in AI.

Data protection and practices

Half (50%) of executives are moderately to extremely concerned about the possibility of data leakage. On top of this, they have security concerns (22%), accuracy worries (18%), and fears over the cost of models, compute power, and expertise (12%). Additionally, 91% of companies will determine that they do not have enough data to achieve a level of precision their stakeholders will trust.

It’s clear that, for all its value, AI is still full of unknowns for many leaders. A CAIO can, again, take on the responsibility of tackling these concerns head on. Alongside this, partnering with AI-native companies that develop their own proprietary AI services can offer much reassurance. Companies built upon their own proprietary AI stack will have greater control over data, privacy, security, performance and cost - key topics of discussion for organisations and regulators currently. Additionally, partnering with an AI native should, in theory, ensure greater accuracy. Why? Because, while public AI is trained on the entire world wide web of data, AI natives can train their models on data that is relevant - making them faster and more accurate. It’s like writing a history book with guaranteed facts versus a combination of facts and lies, with no way to distinguish between the two.

By establishing clear AI leadership, fostering alignment, and standardising AI tools, businesses can navigate the evolving landscape of AI with confidence and clarity.
