2017 Was The Year We Fell Out of Love with Algorithms

Fears of bias, election hacking, and damaged children have earned algorithms a bad reputation.
Ben Bours

We owe a lot to the ninth-century Persian scholar Muhammad ibn Musa al-Khwarizmi. Centuries after his death, al-Khwarizmi's works introduced Europe to decimals and algebra, laying some of the foundations for today’s techno-centric age. The Latinized version of his name has become a common word: algorithm. In 2017, it took on some sinister overtones.

Take this exchange from the US House Intelligence Committee last month. In a hearing about Russian interference in the 2016 election, the panel’s top Democrat, Adam Schiff, threw this accusation at Facebook’s top lawyer, Colin Stretch: “Part of what made the Russia social media campaign successful is that they understood algorithms you use that tend to accentuate content that is either fear-based or anger-based.”

Algorithms that amplify fear and help foreign powers put a finger on the scale of democracy? These things sound dangerous! That’s a shift from just a few years ago, when “algorithm” primarily signified modernity and intelligence, thanks to the roaring success of tech companies such as Google---an enterprise founded upon an algorithm for ranking web pages. This year, growing concern about the power of technology companies---a cause uniting some unlikely fellow travelers---has lent al-Khwarizmi’s eponym a newly negative aura.
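For a sense of what that founding algorithm actually does, here is a minimal sketch of PageRank-style ranking by power iteration. The tiny link graph and damping value are illustrative stand-ins, nothing like Google's production system:

```python
# A minimal sketch of PageRank-style ranking via power iteration.
# The link graph and damping factor are invented for illustration.

# Each page maps to the set of pages it links to.
links = {
    "a": {"b", "c"},
    "b": {"c"},
    "c": {"a"},
    "d": {"c"},
}

damping = 0.85          # probability a "random surfer" follows a link
pages = list(links)
rank = {p: 1.0 / len(pages) for p in pages}  # start with uniform scores

for _ in range(50):     # iterate until the scores settle
    new_rank = {p: (1 - damping) / len(pages) for p in pages}
    for page, outlinks in links.items():
        share = damping * rank[page] / len(outlinks)
        for target in outlinks:
            new_rank[target] += share  # each page passes rank to its targets
    rank = new_rank

# Pages with more (and better-ranked) inbound links score higher.
print(sorted(rank.items(), key=lambda kv: -kv[1]))
```

The page with the most inbound links, “c,” floats to the top: the core intuition that made early Google search feel smart.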

In February, the congregation of digital elite at TED received a warning about “algorithmic overlords” from mathematician Cathy O’Neil, author of the book Weapons of Math Destruction. Algorithms used by Google’s YouTube to curate videos for children earned hostile headlines for censoring inoffensive LGBT content and steering kids toward disturbing videos. Meanwhile, academic researchers demonstrated how machine-vision algorithms can pick up stereotyped views of gender, and how governments using algorithms in areas such as criminal justice shroud them in secrecy.
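The mechanism behind that kind of learned stereotype is easy to demonstrate. Below is a toy sketch, with invented numbers and nothing like the researchers’ actual experiments, of how a model trained on skewed data can turn a correlation into a certainty:

```python
# A toy illustration (not the researchers' actual setup) of how a
# model trained on skewed data can amplify a stereotyped association.

from collections import Counter

# Hypothetical training labels: 2 out of 3 "cooking" images in the
# data show a woman.
training = [("cooking", "woman")] * 20 + [("cooking", "man")] * 10

counts = Counter(gender for _, gender in training)
print("training data:", counts)  # woman: 20, man: 10 (a 67/33 split)

# The "model": always predict the majority label for the scene.
prediction = counts.most_common(1)[0][0]
print("model predicts:", prediction)  # 'woman', 100% of the time

# A 67/33 correlation in the data becomes a 100/0 rule in the model:
# the bias is not just learned, it is amplified.
```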

No wonder that when David Axelrod, formerly President Obama’s chief strategist, spoke to the Nieman Journalism Lab last week about his fears for the future of media and politics, the A-word sprang to his lips. “Everything is pushing us toward algorithm-guided, customized offerings,” he said. “That worries me.”

Frank Pasquale, a professor at the University of Maryland, gives Facebook special credit for dragging algorithms through the mud. “The election stuff really got people understanding the implications of the power of algorithmic systems,” he says. The concerns are not entirely new---the debate about Facebook enclosing users in thought-muffling “filter bubbles” started in 2011. But Pasquale says there’s now a stronger feeling that algorithms can and should be questioned and held to account. One watershed, he says, was a 2014 decision by the European Union’s highest court that granted citizens a “right to be forgotten” by search engines like Google. Pasquale calls that an early “skirmish about the contestability and public obligation of algorithmic systems.”

Of course the accusations fired at Facebook and others shouldn’t really be aimed at algorithms or math, but at the people and companies who create them. That’s why Facebook’s chief counsel appeared on Capitol Hill, not a cloud server. “We can’t view machine learning systems as purely technical things that exist in isolation,” says Hanna Wallach, a researcher at Microsoft and a professor at UMass Amherst who works to bring consideration of ethics into AI. “They become inherently sociotechnical things.”

There’s evidence that some of those toiling in Silicon Valley’s algorithmic mines understand this. Nick Seaver, an anthropologist at Tufts, has embedded himself inside tech companies to learn how workers think about what they create. “‘Algorithms are humans too,’ as one of my interlocutors put it,” Seaver writes in a paper on the term’s fuzziness, “drawing the boundary of the algorithm around himself and his co-workers.”

Yet the pressure being brought to bear on Facebook and others sometimes falls into the trap of letting algorithms become a scapegoat for human and corporate failings. Some complaints that taint the word imply, or even state, that algorithms have a kind of autonomy. That’s unfortunate, because allowing “Frankenstein monster” algorithms to take the blame can deflect attention from the responsibilities, strategies, and choices of the companies crafting them. It reduces our chance of actually fixing the problems laid at algorithms’ feet.

Letting algorithms become bogeymen can also blind us to the reason they are so ubiquitous. They are the only way to make sense of the blizzard of data the computing era dumps on us. Algorithms provide an elegant and efficient way to get things done---even to make the world a better place.

Audrey Nasar, who teaches math at the Borough of Manhattan Community College, points to applications like matching kidney donors with recipients as a reminder that algorithms aren’t all about sinister manipulation. “To me an algorithm is a gift, it’s a means for finding a solution,” says Nasar, who has published research on how to encourage algorithmic thinking in high schoolers.
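To see the appeal, consider a bare-bones version of the matching idea Nasar describes. This sketch treats donor-recipient matching as bipartite maximum matching via augmenting paths; the compatibility table is invented, and real kidney-exchange programs use far richer models, with chains and cycles of paired donors:

```python
# A minimal sketch of donor-recipient matching as bipartite maximum
# matching (Kuhn's augmenting-path algorithm). The compatibility
# table below is invented for illustration.

compatible = {
    "donor1": ["rec_a", "rec_b"],
    "donor2": ["rec_b"],
    "donor3": ["rec_b", "rec_c"],
}

match = {}  # recipient -> donor currently assigned to them

def try_assign(donor, seen):
    """Assign `donor` to a compatible recipient, reshuffling earlier
    assignments along an augmenting path if needed."""
    for rec in compatible[donor]:
        if rec in seen:
            continue
        seen.add(rec)
        # Take the recipient if they are free, or if their current
        # donor can be moved to someone else.
        if rec not in match or try_assign(match[rec], seen):
            match[rec] = donor
            return True
    return False

for donor in compatible:
    try_assign(donor, set())

print(match)  # {'rec_a': 'donor1', 'rec_b': 'donor2', 'rec_c': 'donor3'}
```

The reshuffling step is the whole trick: a naive first-come, first-served assignment could strand a recipient whom a simple swap would have saved.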

It’s a sentiment that may have resonated with al-Khwarizmi. He wrote in the introduction to his famous tract on algebra that it would help with the tasks “men constantly require in cases of inheritance, legacies, partition, lawsuits, and trade, and in all their dealings with one another.” We need algorithms. In 2018, let’s hope we can hold the companies, governments, and people using them to account, without letting the word take the blame.