Social media has revolutionised how we communicate. This article by Professor Jonathan Albright is part of a series published by The Conversation that looks at how social media has changed the media, politics, health, education and the law.
By Jonathan Albright, assistant professor of communications
What’s the problem?
As algorithms become entrenched in society, the debate about their effects rages on.

And problems we have.
On the one hand, algorithms play the role of villain: they were blamed for the UK pound’s recent Brexit-induced flash crash, they are used for political manipulation on social networks, and they are part of what Harvard Professor Shoshana Zuboff calls “surveillance capitalism.”
On the other hand, algorithms make our lives easier: they help us find information, connect us to friends and family, show us products we’re likely to be interested in, and direct us around traffic delays, saving us valuable time and money.
Algorithms are everywhere
Much has been written on what algorithms do and how they work. This includes how search results are ranked, what types of information rise to the top of our social media feeds, and the thousands of calculated outcomes that shape our everyday online experience.
This piece isn’t about these issues, or about breaking down the complex nature of how algorithms work.
The relevance of algorithms at the moment isn’t simply that they power Google’s search, maps, autocomplete, and photo services; Facebook’s News Feed and Trends; Twitter’s timeline; Netflix’s recommendations; Amazon’s prices and product suggestions; or the calculation of credit scores and home and car insurance liabilities. Nor is it that most computer software and mobile applications are, at their core, collections of algorithms.
To return to my very first point, algorithms are important because they automate a fundamental human activity: decision-making.
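To make that concrete, here is a minimal sketch of how a human judgment call becomes an encoded, automated decision. The scenario, function name, and thresholds are all invented for illustration; no real system is being described:

```python
# A toy decision rule: one person's screening heuristic, frozen into code.
# The rule and its thresholds are hypothetical, for illustration only.

def screen_applicant(years_experience: int, gap_in_resume: bool) -> bool:
    """Return True if the applicant passes the automated screen."""
    # The designer's assumption: a career gap signals risk, so it raises
    # the bar. Once encoded, that judgment is applied uniformly, at scale.
    if gap_in_resume:
        return years_experience >= 5
    return years_experience >= 2

print(screen_applicant(3, gap_in_resume=False))  # True
print(screen_applicant(3, gap_in_resume=True))   # False: the encoded bias at work
```

The point of the sketch is that the “decision” was made once, by a person, and the algorithm merely repeats it, at a speed and scale no person could match.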
AI_gorithms
Algorithms, in a sense, are the “nervous system” of AI. They are the models that underpin machine learning, prediction, and problem solving. Yet, as many researchers argue, because they are designed by humans, algorithms inevitably inherit their creators’ assumptions and biases.
As Vint Cerf, co-inventor of the Internet Protocol, Turing Award winner, and Google VP, recently pointed out:
“We need to remember that [AI systems] are made out of software. And we don’t know how to write perfect software … the consequence is that however much we might benefit from these devices …, they may not work exactly the way they were intended to work or the way we expect them to. And the more we rely on [AI systems], the more surprised we may be when they don’t work the way we expect.”
“The way we expect” is key here, because algorithms are a computer-simulated reflection of encoded human expectations.
Engineering memories
Facebook’s famous “On This Day” feature involves algorithmically resurfacing your past posts and photos as memories. Likewise, Instagram algorithmically sorts its timeline so that you see the posts it predicts you will care about most first.
The more we, as humans, rely on algorithms, the more our reality becomes encoded with other people’s flawed expectations. As more AI-powered systems come online, this type of calculated bias will permeate every level of our lives — even our memories and past experiences.
Take, for instance, Google Photos, which uses AI-powered “deep learning” to organise people’s photos beyond normal metadata (GPS, time, date, lens, etc.). It uses advanced computer vision to classify material objects, facial expressions, and emotional relevance.
The robotic “assistant” can even touch up images, suggest creative filters, and automatically assemble photos into albums, collages, and animations.
Biased learning, troubled future?
As algorithms “learn” more about us through our financial data, location history, biometric features, social networks, stored memories, and “smart home” devices, we move towards a reality shaped by systems that try to understand us through other people’s expectations and sets of “rules.”
Algorithms are the literal manifestation of “playing by someone else’s rules.” Take dating app Tinder’s algorithmic “Smart Photos” feature: it tests each of a user’s profile photos on other users, measures their responses, and reorders the photos accordingly. The rules are defined, measured, and enforced on users.
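The general idea behind that kind of feature can be sketched in a few lines. This is not Tinder’s actual algorithm; it is a hedged illustration, with made-up names and data, of reordering by other people’s measured responses, so that the crowd’s preferences, not the owner’s, decide what is shown first:

```python
# Sketch of a "most-responded-to photo first" reordering, in the spirit of
# features like Tinder's Smart Photos. All names and data are illustrative.

def reorder_photos(stats: dict[str, tuple[int, int]]) -> list[str]:
    """Order photo IDs by positive-response rate (likes / times shown)."""
    def rate(item):
        photo, (likes, shown) = item
        return likes / shown if shown else 0.0
    return [photo for photo, _ in sorted(stats.items(), key=rate, reverse=True)]

# (likes, times shown) recorded for each photo of one hypothetical profile
stats = {"beach.jpg": (12, 100), "dog.jpg": (30, 100), "suit.jpg": (5, 100)}
print(reorder_photos(stats))  # ['dog.jpg', 'beach.jpg', 'suit.jpg']
```

Note that the user’s own preference never appears anywhere in the code: the ordering is entirely a function of how strangers reacted.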
Does this mean we are all living inside someone else’s simulation? I’ll leave that question to Elon Musk, who has said, “there’s a billion to one chance we’re living in base reality”. Cerf, however, warns that it’s a mistake to “imbue artificial intelligences with a breadth of knowledge that they don’t actually have, and also with social intelligence that they don’t have.”
The algorithmic end game, AI, will get better with time, but it will always be flawed. Even in straightforward applications like a game of chess, algorithms can leave people clueless as to how they arrived at a certain outcome.
Great expectations
Cerf talked about a scenario in which IBM’s “Deep Blue” supercomputer, playing world chess champion Garry Kasparov, made a move that Kasparov could not understand.
“I mean, it made no sense whatsoever. And he was clearly concerned about it, because he thought for quite a long time and had to play the endgame much faster …
It was just a mistake. The computer didn’t know what it was doing. But Kasparov assumed that it did, and lost the game as a result.”
The implications of bias today might mean unfair decisions about individuals because of predictive data modeling; tomorrow, it will mean people die when the algorithms controlling self-driving cars are programmed to make life-or-death trade-offs.
Bad or good?
Is the social use of algorithms inherently “bad,” given that they form the basis of “intelligence” in AI? David Lazer, a computer scientist at Northeastern University, is sceptical.
It does mean that companies, governments, and institutions that employ algorithms, and soon AI-powered deep learning “agents,” need to be more transparent in showing us how their decisions are reached.
Given how closely guarded proprietary algorithms tend to be, this is doubtful, even despite current pressure for accountability.
A recent SSRN piece maintains the need for greater algorithmic accountability. Some scholars go so far as to argue that algorithms should face regulatory scrutiny. According to Cerf:
“It’s a little unnerving to think that we’re building machines that we don’t understand … Not only in the technical sense, like what’s it going to do or how is it going to behave, but also in the social sense, how is it going to impact our society?”
Just like us
So algorithms, the underlying decision-making machinery of artificial intelligence systems, are imperfect, prone to bias, and liable to make unpredictable decisions that shape the future.
Sound familiar?
—
This article was originally published on The Conversation. Read the original article.
