The Problem With AI Ethics

Is Big Tech’s embrace of AI ethics boards actually helping anyone?

Last week, Google announced that it is creating a new external ethics board to guide
its “responsible development of AI.” On the face of it, this seemed like an admirable
move, but the company was hit with immediate criticism.

Researchers from Google, Microsoft, Facebook, and top universities objected to the board’s
inclusion of Kay Coles James, the president of right-wing think tank The Heritage
Foundation. They pointed out that James and her organization campaign against anti-
discrimination laws for LGBTQ groups and sponsor climate change denial, making her unfit
to offer ethical advice to the world’s most powerful AI company. An open petition
demanding James’ removal was launched (it currently has more than 1,700 signatures),
and as part of the backlash, one member of the newly formed board resigned.

Google has yet to say anything about all of this (it didn’t respond to multiple requests for
comment from The Verge), but to many in the AI community, it’s a clear example of Big
Tech’s inability to deal honestly and openly with the ethics of its work.

ETHICS BOARDS AND CHARTERS AREN’T CHANGING HOW COMPANIES OPERATE

This might come as a surprise if you’ve followed recent debates over AI ethics. In the past
few years, tech companies certainly seem to have embraced ethical self-scrutiny:
establishing ethics boards, writing ethics charters, and sponsoring research in topics like
algorithmic bias. But are these boards and charters doing anything? Are they changing how
these companies work or holding them accountable in any meaningful way?

Academic Ben Wagner says tech’s enthusiasm for ethics paraphernalia is just “ethics
washing,” a strategy to avoid government regulation. When researchers uncover new ways
for technology to harm marginalized groups or infringe on civil liberties, tech companies can
point to their boards and charters and say, “Look, we’re doing something.” The gesture deflects
criticism, and because the boards have no real power, the companies don’t have to change.

“Most of the ethics principles developed now lack any institutional framework,” Wagner tells
The Verge. “They’re non-binding. This makes it very easy for companies to look [at ethical
issues] and go, ‘That’s important,’ but continue with whatever it is they were doing
beforehand.”

Think of it like CEO Jack Dorsey’s repeated assurances that he’s thinking hard about
Twitter’s problems with abuse, harassment, and neo-Nazis. He keeps thinking, and things
on the site stay pretty much the same. At a certain point, all of this contemplation looks like
a substitute for actual policy change.

AN ETHICS EXPLOSION

Google isn’t the only company with an ethics board and charter, of course. Its London AI
subsidiary DeepMind has one, too, though it’s never revealed who’s on it or what they’re up
to. Microsoft has its own AI principles, and it founded its AI ethics committee in 2018.
Amazon has started sponsoring research into “fairness in artificial intelligence” with the help
of the National Science Foundation, while Facebook has even co-founded an AI ethics
research center in Germany.

Despite their proliferation, these programs tend to share fundamental weaknesses, says
Rumman Chowdhury, a data scientist and lead for responsible AI at management
consultancy Accenture. Chief among them is a lack of transparency.

Chowdhury notes that many of society’s most important institutions, from universities to
hospitals, have, over time, developed effective review boards that represent the institutions’
values in the public interest. But in the case of Big Tech, it’s unclear whose interests are
being represented.

“THIS BOARD CANNOT MAKE CHANGES, IT CAN JUST MAKE SUGGESTIONS.”

“It’s not that people are against governance bodies, but we have no transparency into how
they’re built,” Chowdhury tells The Verge. With regard to Google’s most recent board, she
says, “This board cannot make changes, it can just make suggestions. They can’t talk
about it with the public. So what oversight capabilities do they have?”

When these boards do intervene, only the company that runs them really knows why.
When Microsoft set up its AI ethics oversight committee Aether, for example, the company
said that “significant sales” had been cut off because of the group’s recommendations. But
it’s never explained what customers or applications were vetoed. Where exactly does
Microsoft draw the line on unethical uses of AI? Only the company itself knows.

A report last year from research institute AI Now said there’s been a “rush to adopt” ethical
codes, but there’s no corresponding introduction of mechanisms that can “backstop these
… commitments.” The report also notes that there is scant evidence, at either the corporate or
the individual level, that these codes and charters have much effect.

One study from 2018 tried to test whether codes of conduct could influence programmers’
ethical decision-making. It quizzed two groups of developers with a set of hypothetical
problems they might face at work. Before answering, one group was told to consider a code
of ethics issued by the Association for Computing Machinery, while the other group was
simply told the fictional firm they worked for had strong ethical principles. The study found
that priming test subjects with the code of ethics “had no observed effect” on their answers.

Things are equally discouraging when considering how companies act. Google, for
example, only created its AI ethics charter after employees objected to its work in helping
the Pentagon design analytics tools for drones. After this, it continued to develop its
censored Chinese search engine, a project that will probably involve AI to some degree and
that many think infringes human rights. (Google says work on this project has stopped, but
reports from employees say otherwise.)

IBM offers similar evidence. The company has been vocal about its ethical efforts in AI, and
last year, it created an ethnically diverse dataset to help remove racial bias from facial
recognition systems. But reports from The Intercept have detailed how, as recently as 2016,
the company’s surveillance tech was used by police forces in the Philippines where
thousands have been killed in “extrajudicial executions” as part of a brutal war on drugs. An
interest in ethical algorithms doesn’t stop companies from assisting deeply unethical
causes.

None of this means AI ethics boards should be done away with entirely. These
are hard problems, and discussing best practices for algorithmic systems raises awareness
of their potential flaws. With so much expertise working for big tech companies, it would be
foolish to shut these firms out of the conversation. But if we are to believe that these
projects will be enough to keep society safe from the most harmful effects of new
technology, we need to go further.

“TRUST US, WE’RE PLENTY ETHICAL”

Part of the problem is that Silicon Valley is convinced that it can police itself, says
Chowdhury.

“It’s just ingrained in the thinking there that, ‘We’re the good guys, we’re trying to help,’” she
says. The cultural influences of libertarianism and cyberutopianism have made many
engineers distrustful of government intervention. But now these companies have as much
power as nation states without the checks and balances to match. “This is not about
technology; this is about systems of democracy and governance,” says Chowdhury. “And
when you have technologists, VCs, and business people thinking they know what
democracy is, that is a problem.”
