We’ve Sold Out to the Algorithms – Are We Ready for the Consequences?

Written by Graeme Murray

Algorithms are part of our everyday lives, whether we like it or not. They diligently work in the background, solving problems, aiding decision-making and optimising processes. 

Remember those step-by-step mathematical procedures from school? They are algorithms. 

The languages we use to program computers are algorithmic.

Algorithms power the internet and all internet searches. 

Everything we see, hear and read on social media is served up via algorithms. 

Facebook, Instagram, YouTube, TikTok and LinkedIn use advanced algorithms to personalise content.

Talking to Alexa or unlocking your smartphone with your face both rely on algorithmic help. 

Modern-day life would grind to a halt if we suddenly stopped using them. 

The reality is that algorithms will become even more embedded into our lives, driven by the relentless growth of Artificial Intelligence (AI), machine learning and deep learning.

For all the hype about algorithms saving lives and making things more accessible and convenient, they also have a dark side that we often overlook.

Algorithms dictate what we see on our social media feeds, decide the type of news we receive, select the results of our online searches, greenlight whether we qualify for a loan, screen our job applications before a human ever sees our CV, and help determine medical diagnoses. They shape many decisions in our lives, often without our consent or knowledge.

Yet algorithms are not infallible; they can and do make systematic mistakes and errors. They can perpetuate biases baked into their design and training data, discriminating against certain groups of people. 

They create filter bubbles where we only see things from a particular point of view whilst discarding the broader perspective of a story or an issue (all in the name of personalisation and convenience). They also eliminate serendipity. Algorithms can quickly spread fake news and disinformation. They can distort narratives and alter our view of history. Think of the growth of fringe viewpoints and conspiracy theories populating social media today.

We are seeing powerful corporations at play in this new AI arms race – OpenAI, Google, Microsoft, Meta, Apple, et al. all want a slice of the AI cake. It represents the new gold rush as these companies scramble to bring out products that entice users and take over the world through what I call ‘silent osmosis’.

But we must be careful not to be so easily seduced by this golden apple we are being offered. As the author and social activist Naomi Klein recently highlighted in her article, ‘AI machines aren’t hallucinating. But their makers are’ (The Guardian Opinion, 8 May 2023), we have already witnessed the Silicon Valley powerplay, with its lofty promises proclaiming to change the world for the better.

Firstly, these tech giants make an attractive product – they develop fancy names and logos.

They hype it to the max with swanky product demonstrations and slick videos. Often, those claims and demonstrations are not what they appear to be.

The makers proclaim how this new technology will improve the world. They focus on the superlatives, never mention the dark side, and their PR depicts them as responsible, customer-focused companies.

They design easy-to-use, convenient products and give them away for ‘free’. 

They make excuses when the product doesn’t always work quite as envisioned or fails to live up to early expectations – it’s all part of the so-called development process, and they apologise profusely and promise to iron out the kinks. But they carry on regardless, as the end goal is too important: the holy grail of even greater profits and further market domination.

They high-five and rejoice as their product becomes the fastest in history to reach a million users. As the product scales and gains traction worldwide, a million becomes hundreds of millions. In the end, it reaches a billion people or more.

Sometimes, if things don’t quite go to plan, these corporations will rebrand their product with a fancy new name, logo and a fresh lick of paint. It is better not to dwell on the past when the future (with this new product) is so rosy.

They rub their hands in glee as their competitors drop like flies, one by one, until they have a clear monopoly. Or they form a cosy oligopoly. Shareholders and investors get excited.

Once they have the playing field all to themselves, they introduce targeted ads, new data policies, constant surveillance, new fees and the like. By this time, we have become so dependent on these products, and they are so commonplace in everyday life, that they are almost impossible to regulate, curtail or shut down.

Finally, the genie is out of the bottle, and the world must live with the consequences. Corporations make billions in the name of so-called human progress. Then, they move on to the next ‘big thing’ and repeat their play.

The Center for Humane Technology (C.H.T.), in its thought-provoking presentation ‘The A.I. Dilemma’, pointed to the harmful by-products of social media’s development as tech companies obsessed over maximising user engagement: information overload, addiction, doom-scrolling, influencer culture, sexualisation of kids, QAnon, shortened attention spans, polarisation, bots/deep fakes, cult factories, fake news, and the breakdown of democracy. Indeed, it is not the panacea they initially promised.

C.H.T. also highlighted the benefits the A.I. narrative promises – the promised land where AI will make us more efficient, help us write faster, make our code quicker, solve impossible scientific challenges, solve climate change, and make certain people a great deal of money.

Yet, like social media, C.H.T. believes that AI will open a Pandora’s box of harmful by-products: reality collapse, fake everything, trust collapse, automated loopholes in the law, automated fake religions, exponential blackmail, automated cyberweapons, automated exploitation of code, automated lobbying, biology automation, exponential scams, A-Z testing of everything, and synthetic relationships.

Yet the people who develop these algorithms often have a distorted view of reality and are impervious to the potential trail of destruction they leave. Their code perpetuates biases, lacks transparency, creates filter bubbles, raises privacy concerns, creates societal division (especially for the poor and the uneducated), and spreads misinformation. The rise of the algorithm will also have a profound effect on employment, with far-reaching implications.

It’s essential for the companies developing products and services using algorithms to be aware of these potential negative impacts and to work collaboratively to address them, ensuring a fair, safe and transparent experience. 

Is it too much to expect corporations to focus on societal good rather than profit? Probably, but ultimately, it’s on these corporations to take full responsibility for what they unleash on the world.

Policing the algorithms will require strong governance, regulatory oversight, and clear accountability. Consumers must be better informed and educated on algorithms and have more control over their use.

So, what does the future hold? We are living in the age of the algorithm. Algorithms’ influence will undoubtedly continue to grow, spurred on by the current AI arms race. They are here to stay, for better or worse.

We would be wise to heed the words of Karel Čapek, the famous Czech writer and intellectual. He was deeply sceptical of the utopian notions of science and technology. “The product of the human brain has escaped the control of human hands,” Čapek told the London Saturday Review following the premiere of his play “R.U.R.” (Rossum’s Universal Robots) in Prague in January 1921. 

One hundred years later, Čapek’s critique of mechanisation, the rise of robots and how they can dehumanise people remains remarkably prescient and accurate.
