Analytics-driven applications that rely on artificial intelligence (AI) have grown in importance over the last decade. In fact, they touch almost every aspect of our lives, from healthcare and transportation to food, banking and much more. As AI solutions proliferate and increasingly influence human behavior, the need for transparent, explainable and fair models has never been greater.

"Responsible AI" is a practice that endeavors to ensure AI products are designed, developed and deployed with good intentions and the interests of impacted stakeholders positioned firmly at the forefront.

Historically, it has been common for businesses to solely consider their own interests in shaping what they produce and how. "What is the most profitable product? What is the market opportunity? Will this help build our brand?"

Responsible AI, however, aims to consider the interests of and impact on all stakeholders, not just internal ones. "Does the product violate any regulations or professional standards? Who might be harmed — financially, psychologically or socially? Who might benefit?"

Due to the scale at which AI is poised to impact society, and the complex interplay of interests, ethical norms and laws, Responsible AI is not a silver bullet that guarantees AI products won't have a negative impact. However, the normalization of its practice can improve our ability to anticipate how AI products might affect society, enabling us to modify them to prevent or mitigate harm, or even fully cancel development if warranted.

Risks of increasing AI adoption to attain ESG goals

Environmental, social and governance (ESG) initiatives have the potential to shape business practices, sustainability and social responsibility. In relation to an organization's ESG goals, AI product development likewise has the potential to impact business performance — both positively and negatively.

Environmental impact

One way AI might negatively impact a firm's ESG goals relates to environmental impact. Many companies are actively aiming to decrease energy usage and minimize their carbon footprints. However, AI development can consume significant amounts of energy, resulting in larger carbon footprints. In fact, an oft-cited study shows that training a single large AI model can produce as much carbon as several cars emit over their entire lifetimes.

Social impact

Regarding social impact, if a firm's ESG goals include improving the equitable treatment of customers, AI might undermine this goal if not approached in a responsible manner. Why? Because by relying primarily on historical data, AI tends to absorb historical bias and perpetuate, if not amplify, unequal treatment. Common examples include higher mortgage rates for minority borrowers and the underrepresentation of minority healthcare needs. Despite these clear social risks, a recent McKinsey survey found that fewer than one in four respondents actively checked their work for skewed or biased data during ingestion.

According to Gartner, trustworthy, purpose-driven AI innovations are estimated to have a 35+ percent better chance of success than innovations that are not. That positions Responsible AI as a common-sense best practice for long-term business viability and profitability.

If environmental and social risks are not identified early enough to be prevented or mitigated, the business risks losses or reduced profitability in the form of regulatory fines, repeatedly stopped and restarted AI projects, or lost future engagements due to a tarnished reputation.

Three ways Responsible AI can accelerate ESG goals


1. Increasing transparency

A Responsible AI program can improve alignment between an organization's development and use of AI products and its pursuit of ESG goals. It does this by making the impact of AI systems observable and transparent: capturing information about how AI is being developed, as well as metrics about its usage and behavior.

One metric that can be captured when developing AI is its expected carbon footprint. One example of a software package that can calculate an organization's carbon production is CodeCarbon. It shows the estimated CO2 produced by executing the code, plus how developers can improve the code to reduce the carbon footprint.
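To illustrate the idea behind such tools, the sketch below estimates a training run's emissions by multiplying hardware power draw, wall-clock time and a grid carbon-intensity factor. The constants are illustrative assumptions, not CodeCarbon's actual methodology or measured values.

```python
# Back-of-envelope CO2 estimate for a model-training run.
# All constants here are illustrative assumptions, not measured values.

def training_co2_kg(power_watts: float, hours: float,
                    grid_kg_co2_per_kwh: float = 0.4) -> float:
    """Estimate kilograms of CO2 emitted by a training job.

    power_watts: average power draw of the hardware (e.g. GPUs).
    hours: wall-clock training time.
    grid_kg_co2_per_kwh: carbon intensity of the local grid
        (0.4 kg/kWh is a rough placeholder, assumed here).
    """
    energy_kwh = (power_watts / 1000.0) * hours
    return energy_kwh * grid_kg_co2_per_kwh

# Example: eight 300 W GPUs running for 72 hours.
estimate = training_co2_kg(power_watts=8 * 300, hours=72)
print(f"Estimated emissions: {estimate:.1f} kg CO2")
```

Tools like CodeCarbon refine this same basic calculation with measured power draw and region-specific grid data, which is why the same training job can have a very different footprint depending on where it runs.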

2. Incorporating fairness

Since Responsible AI considers the impact of AI products on individuals and society, it's a natural fit for advancing the social component of ESG. Responsible AI can help identify in advance who may be impacted and how, and determine relevant metrics that can be tracked throughout the development and deployment of the product to ensure people are treated equitably.

For example, an aspect of Responsible AI is the consideration of fairness in the output of AI products. As part of an AI R&D project, WWT worked to identify a lack of fairness within an AI system's predictions relative to users' ethnicity, then helped improve the system's fairness. This was accomplished by identifying relevant fairness-related metrics, improving the data the AI model was trained on, and tracking the impact on the metrics.
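As a sketch of what one such fairness metric can look like (the specific metrics WWT used are not detailed in this article), the function below computes the demographic parity gap: the largest difference in positive-prediction rates between groups.

```python
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rate between groups.

    predictions: iterable of 0/1 model outputs.
    groups: iterable of group labels (e.g. ethnicity), same length.
    A gap near 0 suggests groups receive positive outcomes at
    similar rates; a large gap flags potentially unequal treatment.
    """
    positives = defaultdict(int)
    totals = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

# Example: group "a" gets a positive outcome 75% of the time,
# group "b" only 25% of the time.
preds  = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_gap(preds, groups))  # 0.5
```

Tracking a metric like this before and after retraining on improved data is one concrete way to verify that a fairness intervention actually moved the needle.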

3. Improving "explainability"

When AI systems influence how people are treated, responsibility often entails providing those impacted with an explanation of how the AI system reached its conclusion. Enabling the provision of this explanation often requires a deliberate effort by developers to build in "explainability."

For instance, WWT built an AI product to help a health agency better predict which patients are at greater risk of opioid abuse. Because of the significance of the issue and its impact on patients, the development team created a way to view which features most influenced the predictions, using metrics such as model gain, which measures how much a particular variable contributes to the final prediction (see Figure 1 below). The higher the percentage, the more influence the variable has on the predicted value.

[Figure: table listing variables considered by the healthcare AI]
Figure 1: Adding "explainability" by providing visibility into important factors considered.
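A simplified sketch of the idea (the agency's actual model and features are not public, and the feature names below are made up): given raw per-feature gain values like those a gradient-boosted tree model reports, normalizing them to percentages shows each feature's share of influence, as in Figure 1.

```python
def gain_percentages(gains):
    """Convert raw per-feature gain values into percentage shares.

    gains: dict mapping feature name -> raw gain (e.g. from a
    gradient-boosted tree model's feature-importance report).
    Returns (feature, percent) pairs sorted by influence.
    """
    total = sum(gains.values())
    shares = {f: 100.0 * g / total for f, g in gains.items()}
    return sorted(shares.items(), key=lambda kv: kv[1], reverse=True)

# Illustrative (made-up) gains for a handful of patient features.
example = {"prior_prescriptions": 40.0, "age": 25.0,
           "num_er_visits": 25.0, "zip_code": 10.0}
for feature, pct in gain_percentages(example):
    print(f"{feature}: {pct:.1f}%")
```

A view like this lets clinicians and auditors sanity-check the model: if an irrelevant or sensitive variable dominates the gain, that is a signal to revisit the training data before the system influences patient care.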

Conclusion

Responsible AI can help organizations achieve business outcomes in addition to ESG goals. Our experience working with clients on myriad AI/ML use cases has taught us that taking the time and effort to examine the downstream impact of an AI product not only improves its overall value but also enables more sustainable business outcomes. Moreover, by considering the needs and desires of all stakeholders, organizations can better tailor products to benefit all parties.

WWT recently introduced Responsible AI into our rapidly growing suite of ESG product and service offerings. Contact us to learn how AI and ESG are ready to help you drive more sustainable business outcomes in your industry.
