To achieve success, any entrepreneur, executive, or business owner needs to bear in mind the enormous potential that artificial intelligence (AI) offers for innovation and customized service, as well as its capacity for unintended consequences. Rather than rushing to jump on the AI bandwagon, businesses should think carefully before proceeding and develop a plan for the ethical use of AI. The technology can have substantial impacts on a business’s data privacy and security, transparency, fairness, interpretability, and social responsibility commitments.

Data privacy and security

The European Union recognizes privacy, including data privacy, as a fundamental right and protects it through its General Data Protection Regulation (GDPR). In the United States, the picture is less clear-cut and less comprehensive, although various state and federal regulations address the issue to some extent.

California, Virginia, and several other states have adopted their own comprehensive privacy laws similar to the GDPR. In states without such laws, companies can share or sell personal information without notifying the consumers concerned.

Experts and the general public are increasingly recognizing data privacy as essential in a world in which cybercrime, cyberstalking, and cyber-harassment continue to proliferate, affecting the daily lives of everyone from schoolchildren and senior citizens to celebrities and government officials.

AI systems typically store and process large datasets containing highly sensitive personally identifiable information (PII). Because of the enormous potential for intentional or unintentional misuse, companies and governments with access to this data cannot maintain public confidence unless they take ongoing steps to ensure its security.

Data security measures must encompass the storage, retrieval, handling, transmission, and modification of any PII, along with transparency about the data’s authenticity, how it is used, and with whom it may be shared. Companies should give consumers clearly articulated options for choosing whether and how their own data is used.
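As a purely hypothetical sketch of what such consumer choice can look like in code, the snippet below gates every use of personal data on an explicit opt-in; the field names and purposes are invented for illustration and not drawn from any particular law or product.

```python
# Hypothetical sketch of consent-gated access to personal data.
# Field names, purposes, and the policy logic are illustrative only.
from dataclasses import dataclass, field

@dataclass
class ConsentRecord:
    user_id: str
    # Purposes the user has explicitly opted into, e.g. "analytics".
    allowed_purposes: set[str] = field(default_factory=set)

def can_use(record: ConsentRecord, purpose: str) -> bool:
    """Default-deny: data may be used only for purposes the user opted into."""
    return purpose in record.allowed_purposes

consent = ConsentRecord(user_id="u-123", allowed_purposes={"service_delivery"})
print(can_use(consent, "service_delivery"))  # True
print(can_use(consent, "model_training"))    # False: no opt-in recorded
```

The design point is default-deny: absent a recorded opt-in, the data is simply unavailable for that purpose.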

Fairness

Because algorithms and AI networks are built on, and constantly grow through, machine learning, they are dependent on the data that goes into them. The early computing acronym GIGO, “garbage in, garbage out,” expresses this concept well. If the data sets on which an AI system is trained contain biased, incomplete, or inaccurate information, the system’s answers will be equally flawed.
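As a minimal illustration of GIGO, the sketch below trains a simple model on synthetic, deliberately skewed hiring data; the features, numbers, and scenario are all invented, and the point is only that the model faithfully reproduces the bias it was fed.

```python
# Illustrative sketch: a model trained on skewed data reproduces the skew.
# All data here is synthetic and hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Feature 0: qualification score; feature 1: group membership (0 or 1).
qualification = rng.normal(size=n)
group = rng.integers(0, 2, size=n)

# Biased historical labels: group-1 applicants were hired less often
# even at identical qualification levels ("garbage in").
logits = qualification - 1.5 * group
hired = rng.random(n) < 1 / (1 + np.exp(-logits))

X = np.column_stack([qualification, group])
model = LogisticRegression().fit(X, hired)

# The trained model penalizes group membership, mirroring its training
# data ("garbage out"): equally qualified applicants get unequal scores.
same_score = np.array([[0.0, 0.0], [0.0, 1.0]])
print(model.predict_proba(same_score)[:, 1])
```

Nothing in the training process flags the problem on its own: the model is “accurate” with respect to its biased history, which is exactly why guardrails and oversight matter.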

If AI systems lack proper guardrails and oversight, they are capable of perpetuating factually wrong and harmful stereotypes, and even of creating situations that violate human rights.

Transparency and interpretability

This point brings us to the next important ethical considerations: transparency and interpretability, as well as the related concept of explainability.

AI systems have a “black box” problem: their decision-making lacks transparency and traceability. Humans have no way to directly follow how such a system interprets data and arrives at its predictions. Most deep neural networks and similar AI systems involve so many layers and such dense webs of interactions that they become functionally impenetrable to human inspection.

Interpretability tools currently on the market are designed to help make these processes intelligible, thus increasing transparency and confidence.

Experts often link the concept of interpretability to that of explainability. “Interpretability” refers to the more demanding task of understanding how an AI system works, including the pathways it follows to arrive at a response, while “explainability” is a simpler process focused on providing a basic, more intuitive account of an AI’s decisions. Achieving explainability does not require understanding every nuance of the system’s inner workings.
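As one concrete, hedged example of an explainability technique, the sketch below uses permutation feature importance from scikit-learn on a stock dataset. It is one simple model-agnostic approach among many (tools such as SHAP and LIME offer richer analyses), not a full interpretability solution.

```python
# Minimal sketch: permutation feature importance, one simple model-agnostic
# explainability technique. A real audit would pair this with richer tools.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops:
# large drops flag the inputs this "black box" actually relies on.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)

ranked = sorted(zip(X.columns, result.importances_mean),
                key=lambda pair: pair[1], reverse=True)
for name, score in ranked[:5]:
    print(f"{name:30s} {score:.3f}")
```

Even this coarse ranking gives stakeholders something checkable: a claim about which inputs drove the model’s behavior, stated in terms a non-specialist can follow.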

Clear attribution is one important aspect of transparency in AI deployment. It’s essential for companies to publicly and consistently note which aspects of their products, research, and written content are AI-generated, human-generated, or a hybrid. Human experts should independently verify AI-produced data and monitor its usage along the information chain.
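As a hypothetical illustration of what consistent attribution could look like in practice, content might carry a machine-readable provenance label; the categories and fields below are invented for this sketch, not an established standard.

```python
# Hypothetical sketch: a machine-readable provenance label for published
# content. The categories and fields are invented for illustration.
from dataclasses import dataclass
from enum import Enum

class Provenance(Enum):
    HUMAN = "human-generated"
    AI = "AI-generated"
    HYBRID = "human-edited AI draft"

@dataclass(frozen=True)
class ContentLabel:
    content_id: str
    provenance: Provenance
    reviewed_by_human: bool  # independent human verification, per the text

label = ContentLabel("post-042", Provenance.HYBRID, reviewed_by_human=True)
print(f"{label.content_id}: {label.provenance.value}, "
      f"human-reviewed={label.reviewed_by_human}")
```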

Social responsibility

Social responsibility encompasses all of these aspects of ethical AI deployment, and goes further to uphold the human rights of workers, individual end-users, and communities. As companies in a wide range of sectors deploy AI systems, concerns are mounting about their potential to increase inequalities and destabilize society by replacing human workers.

In a 2023 survey, one-third of American companies reported that AI had replaced some portion of their workforce. But that’s only part of the picture: experts also point out that as the nature of traditional work shifts, new opportunities are opening up in jobs that require insight and creativity to evaluate and responsibly deploy AI.

In addition, AI systems continue to demonstrate their capacity to spread misinformation, disinformation, and hatred at scale. Social media and other AI-driven communication platforms have helped topple governments, fomented social divisiveness, and enabled authorities to more easily surveil and track their citizens. Deepfakes, in which AI systems ever more realistically recreate the appearance and voices of real people in falsified scenarios, further erode public trust in media and institutions.

Responsible companies can ensure they are on the right side of the social equation: channel innovation in service of an ethic of responsibility and transparency, stay in compliance with local and national regulations on AI usage, and take a public leadership role in the corporate world by putting human well-being first in every aspect of operations.