Co-Founder & CEO of TheMathCompany
With more and more leaders recognizing AI's usefulness in unlocking actionable insights and solving problems effectively, AI models are increasingly becoming key value drivers for large enterprises. However, these models are not without risk. Gartner estimates that through 2022, 85% of AI projects will deliver erroneous outcomes due to bias in data, algorithms or the teams responsible for managing them.
As AI is still in its early stages globally, risks such as learning biases, cyberattacks and a lack of user understanding lead to trust issues. This not only limits organizations' ability to scale AI but also creates gray areas in the quest to align AI efforts with business strategy, slowing digital transformation.
Since successful AI adoption requires human trust, the trust gap that currently exists between AI systems and decision-makers must be bridged. The only way teams can do this is by developing trust-optimized models that balance interpretability with accuracy. But before we get into how to make AI systems more accessible and accountable, let's take a look at what this fundamental "issue of trust" is.
Data often arrives incomplete or skewed, and the biases it carries get "baked" into algorithms. Machine learning (ML) models are prone to algorithmic and cognitive biases that snowball into analytical errors, skewed results and compromised accuracy. In real-world scenarios, this translates into missteps such as the infamous in-house AI recruiting engine that, fed with historical recruiting data, chose a candidate pool that was 60% male, reflecting a bias against female applicants.
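To make the bias risk concrete, here is a minimal, hypothetical sketch of how a team might audit historical hiring data for disparate impact before training on it. The data, function names and the "four-fifths" threshold are illustrative assumptions, not details from the case above:

```python
from collections import Counter

def selection_rates(candidates):
    """Selection rate per group: hired / total for each gender label."""
    totals = Counter(c["gender"] for c in candidates)
    hired = Counter(c["gender"] for c in candidates if c["hired"])
    return {g: hired[g] / totals[g] for g in totals}

def disparate_impact(rates, privileged, protected):
    """Ratio of the protected group's rate to the privileged group's rate.
    Values below ~0.8 are a common red flag (the "four-fifths rule")."""
    return rates[protected] / rates[privileged]

# Hypothetical historical recruiting data
candidates = [
    {"gender": "M", "hired": True}, {"gender": "M", "hired": True},
    {"gender": "M", "hired": True}, {"gender": "M", "hired": False},
    {"gender": "F", "hired": True}, {"gender": "F", "hired": False},
    {"gender": "F", "hired": False}, {"gender": "F", "hired": False},
]

rates = selection_rates(candidates)
print(rates)  # {'M': 0.75, 'F': 0.25}
print(disparate_impact(rates, "M", "F"))  # 0.333... -> far below the 0.8 rule of thumb
```

A check like this catches skew in the training data itself, before a model can learn and amplify it.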
The accuracy of AI models is often inversely related to their interpretability. Add to this "black-box" ML algorithms, and it becomes difficult for teams to understand why and how a model generates a result, damaging user trust.
Lack of traceability
This lack of transparency leads to another risk: a lack of traceability. As shadow IT services proliferate and teams adopt unvetted SaaS applications, threats to API security have increased: attackers can execute malicious code remotely, making it nearly impossible for teams to trace a model's output back to the inputs it actually received.
This can have serious consequences for data integrity. For example, a compromised classification application that builds customer segments through social listening may misclassify customer cohorts, undermining customer-centric decision-making.
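One practical mitigation is to make every model call traceable. The sketch below is a hypothetical wrapper (the audit-log structure and the stand-in segmentation "model" are invented for illustration) that records a hash of the exact inputs alongside each prediction, so a misclassified cohort can later be traced back to what the model actually saw:

```python
import hashlib
import json
import time

AUDIT_LOG = []

def traceable_predict(model, features):
    """Wrap a model call so every prediction is logged with a
    fingerprint of its inputs, enabling after-the-fact tracing."""
    payload = json.dumps(features, sort_keys=True)
    record = {
        "input_hash": hashlib.sha256(payload.encode()).hexdigest(),
        "features": features,
        "timestamp": time.time(),
    }
    record["prediction"] = model(features)
    AUDIT_LOG.append(record)
    return record["prediction"]

# Hypothetical segmentation "model": bucket customers by mention volume
segment = lambda f: "engaged" if f["mentions"] >= 10 else "dormant"

print(traceable_predict(segment, {"customer_id": 42, "mentions": 17}))  # engaged
print(AUDIT_LOG[0]["input_hash"][:8])  # stable fingerprint of the exact inputs
```

If a downstream decision looks wrong, the audit log answers "what did the model see, and when?" without relying on the SaaS vendor's internals.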
Why (and how) should we rely on AI?
Until now, business leaders have evaluated AI along three dimensions: performance (how well it performs), process (the functions it serves) and purpose (the value it delivers). However, the risks above make it clear that a new benchmark is needed to assess the usefulness of AI: trust.
Building trust in AI solutions is crucial. However, doing so stands at a crossroads: apply simple algorithms for the sake of transparency, or opt for opaque models that offer greater accuracy. This dilemma can be resolved if trust in AI is optimized on a few key levels.
Integrating ethics at the heart of AI
The concept of ethical AI goes beyond implementing best practices during model development: it involves changing the fabric of AI. To empower AI with ethical values at its core, governance bodies should be created and enterprise-level AI ethics programs introduced that align with corporate and industry regulations.
Operationalizing ethics across all systems – for example, considering impacts on society, climate and resources, and using responsible AI-driven technology to optimize supply chains and minimize waste – is a step companies can take in this regard. Institutionalizing ethics in AI in this way will not only help companies solve data bias and transparency issues but will also actively put people at the center of long-term policies, enhancing customer trust.
Centering on humanization and empathy
For us humans to trust an AI system, it must be reliable, and for a system to be seen as reliable, it must be people-oriented.
AI already performs near-human functions, recognizing speech through natural language processing (NLP) and recognizing images through computer vision; however, it lacks an important human quality: empathy. Bringing empathy into AI would mean developing algorithms with humanized decision-making capabilities and more "sensible" data practices that account for the accuracy, reliability and confidentiality of data. For example, AI-based learning platforms with capabilities to observe the stress, confidence and difficulties of students can help develop personalized course recommendations and promote individualized learning.
By leveraging AI coded with empathy, companies can obtain granular, individualized data, enabling hyper-personalized experiences alongside improvements in data quality and completeness, which is essential for increasing trust.
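As a rough illustration of the learning-platform example, here is a hypothetical rule-based sketch. The signal names and thresholds are invented for illustration; a production system would learn such mappings from data rather than hard-coding them:

```python
def recommend(signals):
    """Map observed learner signals (0.0-1.0 scales) to a course
    adjustment. Rules fire in priority order: stress first."""
    stress, confidence = signals["stress"], signals["confidence"]
    if stress > 0.7:
        return "shorter sessions with review material"
    if confidence < 0.3:
        return "guided exercises with worked examples"
    if confidence > 0.8 and stress < 0.3:
        return "advanced modules"
    return "continue current track"

print(recommend({"stress": 0.9, "confidence": 0.5}))  # shorter sessions with review material
print(recommend({"stress": 0.1, "confidence": 0.9}))  # advanced modules
```

Even this toy version shows the idea: the system responds to the learner's observed state, not just their test scores.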
Reinforcing transparency with explainability
As AI systems become more complex, it has become almost impossible to understand the reasoning behind their decisions. However, teams can better understand such systems by introducing "explainability" methods at various levels of a model and extending them with machine reasoning (MR), an area of AI that computationally mimics abstract thinking to draw conclusions about uncertain data.
High-impact use cases for explainable AI (XAI) include context-aware systems in hospitals, where models can analyze location data, staff availability and patient data – including vital signs, medical history and imaging reports linked to electronic health records – to issue "reasoned" alerts about a patient's condition, mobilize staff and improve patient outcomes. For leaders, explainable AI provides a better understanding of such systems' behavior and risks, giving them the confidence to take greater responsibility for a system's actions and, in turn, encouraging further confidence in AI adoption.
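A minimal sketch of the "reasoned alert" idea: the system returns not just an alert but the per-feature contributions that drove it. The feature names, weights and threshold below are hypothetical assumptions for illustration, not a clinical model:

```python
# Hypothetical risk weights; a real model would learn these from data
WEIGHTS = {"heart_rate_dev": 0.5, "spo2_drop": 0.3, "staff_nearby": -0.2}

def explained_alert(observations, threshold=0.5):
    """Score a patient's state and return the decision together with
    per-feature contributions, so clinicians see *why* it fired."""
    contributions = {k: WEIGHTS[k] * observations[k] for k in WEIGHTS}
    score = sum(contributions.values())
    return {
        "alert": score >= threshold,
        "score": round(score, 3),
        # Largest absolute contribution first: the main driver of the decision
        "drivers": sorted(contributions.items(), key=lambda kv: -abs(kv[1])),
    }

result = explained_alert({"heart_rate_dev": 0.9, "spo2_drop": 0.8, "staff_nearby": 0.2})
print(result["alert"])    # True
print(result["drivers"])  # heart-rate deviation is the top driver
```

Because each alert ships with its drivers, a clinician can sanity-check the model's reasoning instead of taking an opaque score on faith.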
A future based on trust
Across all sectors, AI is rewriting the rules of engagement, and we can only trust it if we trust its inner workings. Imbuing a technology with trust will not only reduce the risks of innovation but also inspire responsible innovation. Right now, both AI developers and incubators need to make sure they build systems that meet not only legal but also ethical and emotional criteria. Ultimately, AI underpinned by trust, transparency and traceability will enable unambiguous and robust models, reinforcing confidence in a secure future.
Forbes Technology Council is an invite-only community for world-class CIOs, CTOs, and technology executives.