Ethics and governance are getting lost in the AI frenzy

Mike Ananny is an assistant professor of Communication and Journalism at the USC Annenberg School for Communication and Journalism. Taylor Owen is an assistant professor of Digital Media and Global Affairs at the University of British Columbia.

On Thursday, Prime Minister Justin Trudeau announced the government's pan-Canadian artificial intelligence strategy.

This initiative, which includes a partnership with a consortium of technology companies to create a non-profit hub for artificial intelligence called the Vector Institute, aims to put Canada at the centre of an emerging gold rush of innovation.

There is little doubt that AI is transforming the economic and social fabric of society. It influences stock markets, social media, elections, policing, health care, insurance, credit scores, transit, and even drone warfare. AI may make goods and services cheaper and markets more efficient, and it may discover new patterns that optimize much of life. From deciding which movies get made to judging which voters are valuable, virtually no area of life is untouched by the promise of efficiency and optimization.


Yet while significant research and policy investments have created these technologies, the short history of their development and deployment also reveals serious ethical problems in their use. Any investment in the engineering of AI must therefore be coupled with substantial research into how it will be governed. This means asking two key questions.

First, what kind of assumptions do AI systems make?

Technologies are not neutral. They contain the biases, preferences and incentives of their makers. When technologists gather to analyze data, they leave a trail of assumptions about which data they think is relevant, what patterns are significant, which harms should be avoided and which benefits should be prioritized. Some systems are so complex that not even their designers fully understand how they work when deployed "in the wild."
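To see how such assumptions travel from data into decisions, consider a deliberately tiny sketch in Python, using entirely invented records: a model that simply mirrors past hiring outcomes, and that contains no explicit rule against any group, still reproduces the bias baked into its training labels.

```python
# A minimal, hypothetical sketch: a model trained on biased historical
# decisions reproduces those decisions. All records here are invented.
from collections import defaultdict

# Invented records: (years_experience, group, was_hired).
# The labels encode past bias: group "B" candidates were not hired.
history = [
    (5, "A", True), (5, "B", False),
    (7, "A", True), (7, "B", False),
    (2, "A", False), (2, "B", False),
    (6, "A", True), (6, "B", False),
]

# "Training" here is just tallying outcomes for similar past cases.
outcomes = defaultdict(list)
for years, group, hired in history:
    outcomes[(years >= 4, group)].append(hired)

def predict(years, group):
    """Recommend by the majority outcome of comparable past candidates."""
    past = outcomes[(years >= 4, group)]
    return sum(past) > len(past) / 2

# Two candidates identical except for group membership:
print(predict(6, "A"))  # True
print(predict(6, "B"))  # False -- the data's bias, not any coded rule
```

Nothing in the code singles out group "B"; the discrimination lives entirely in which past outcomes the system's makers chose to record and treat as ground truth.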

Opacity compounds the problem. Google cannot explain why certain search results appear above others, Facebook cannot give a detailed account of why your news feed looks different from one day to the next, and Netflix cannot explain exactly why you received one movie recommendation rather than another.
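Part of the reason is architectural. In a typical latent-factor recommender, sketched below with invented numbers, a score is the dot product of learned vectors whose individual dimensions correspond to no concept a person would recognize, so there is no faithful one-sentence answer to "why this movie?"

```python
# A minimal sketch of why recommendations resist explanation: the score
# is a dot product of learned latent vectors. All values are invented.
import numpy as np

rng = np.random.default_rng(0)
user_vec = rng.normal(size=8)   # one user's learned "taste" factors
movie_vec = rng.normal(size=8)  # one movie's learned factors

score = float(user_vec @ movie_vec)
print(f"predicted affinity: {score:.2f}")

# Per-dimension contributions can be listed, but no dimension maps to a
# human-readable reason ("factor 3" is not "likes comedies"):
for i, contribution in enumerate(user_vec * movie_vec):
    print(f"latent factor {i}: {contribution:+.2f}")
```

In production, the same structure is scaled to millions of users and items and layered with other models, which only widens the gap between what the system computes and what anyone can narrate about it.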

While the opacity of movie choices may seem innocuous, these same AI systems can have serious ethical consequences. When a self-driving car chooses the life of a driver over that of a pedestrian; when skin colour or religious affiliation influences criminal-sentencing algorithms; when insurance companies set rates using an algorithm's guess about your genetic make-up; or when people and behaviours are flagged as 'abnormal' by algorithms, AI is making an ethical judgment.

This leads to a second question: how should we hold AI accountable?

The data and algorithms driving AI are largely hidden from public view. They are proprietary and protected by corporate law, classified by governments as essential for national security, and often not fully understood even by the technologists who make them. This is important because the existing ethics that are embedded in our governance institutions place human agency at their foundation. As such, it makes little sense to talk about holding computer code accountable. Instead, we should see AI as a people-machine hybrid, a combination of human choices and automated decisions.

Who or what can be held accountable in this cyborg mix? Is it the individual engineers who write the code, the companies that employ them and deploy the technology, the police force that arrests someone on an algorithm's suggestion, or the government that uses it to make policy? An unwanted movie recommendation is nothing like an unjust criminal sentence. It makes little sense to talk about holding such systems accountable in the same way when such different types of error, injustice, consequence and freedom are at stake.

This reveals a troubling disconnect between the rapid development of AI technologies and the static nature of our governance institutions. It is difficult to imagine how governments will regulate the social implications of an AI that adapts in real time, based on flows of data that technologists do not foresee or understand. It is equally challenging for governments to design safeguards that anticipate human-machine action and that can trace consequences across multiple systems, datasets and institutions.

We have a long history of holding human actors accountable to Canadian values, but we are largely ignorant about how to manage the emerging ungoverned space of machines and people acting in ways we don't understand and cannot predict.

We welcome the government's investment in the development of AI technology, and expect it will put Canadian companies, people and technologies at the forefront of AI. But we also urgently need substantial investment in the ethics and governance of how artificial intelligence will be used.


