Data Integrity in AI: Navigating the Path to Trust and Transparency in Tech

By Barb Hyman, CEO and founder of Sapia.ai.


It's hard to think of a technology that has generated as much buzz and discourse in so short a time. Within just 12 months, AI has evolved from a fringe technology issue to a board-level talking point in organizations. Given its rapid development over the past year, it's hard to predict where this trend will be by 2024. But I would hazard a guess that it will revolve around two key points: data and trust.

First off, data is everything with AI. An AI business is only as valuable as the data pool it has access to. In this regard, the EU has recently taken steps to mandate that AI tools disclose the data sources they use.

This should prompt a broader discussion about what data should be used to fuel and build various models. The long-running debate about data ownership and individuals' rights to their own data has largely been ignored.

This is because, for the individual, there has been little consequence to having loose controls over one's personal data. However, the unauthorized use of that data in AI models is likely to change opinions on the matter. This feeds into my second point: trust. While regulation may not come into effect for at least a few years, AI companies should be building trust with their customers now by being transparent about their data.

We're at a crucial juncture in AI adoption, where building and maintaining trust is everything. Many of the questions we receive about our platform concern data usage. After all, much of the information applicants provide during the hiring process is personal. For instance, applicants often ask whether the company will retain their responses and whether they have the right to request removal or deletion after the hiring process. Our policy is that while we store the data, users retain ownership and can manage it as they see fit.

I also strongly emphasize to our team the importance of ensuring our platform does not use external data sources, such as web searches or social media, in our models. These decisions weren't made to pre-empt regulation; we put them in place early to ensure there's trust in our platform. Regardless of how sophisticated our technology is, if we don't have that trust, it won't be used.

Right now, we are scratching the surface of what AI is capable of doing and in turn are testing ethical boundaries. The ordinary person is also warming up to the idea of actively engaging with an AI instead of a person. My hope is that AI companies will go into 2024 with these points in mind, already acting in the best interests of society ahead of any potential regulation. It will be a fascinating talking point as we progress into 2024 and continue to find new and exciting ways to leverage AI technology.
