Algorithms, bubbles and social manipulation


Whether you’re aware of it or not, your life is almost certainly influenced by a range of algorithms all the time - many times a day if you’re a frequent internet or social media user.

The dictionary definition (always a fun place to start) of algorithm is “a process or set of rules to be followed in calculations or other problem-solving operations, especially by a computer.” That's a pretty innocuous sounding idea, but we are now in the age of the super-star algorithm.

You’ve heard of Google, right? A billion-dollar organisation born of a single algorithm: PageRank, a way of ranking webpages by the number and importance of the other pages linking to them. Google gave the world PageRank, the world gave Google their personal details (the 21st century equivalent of gold), Google took over the world.
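To give a flavour of the idea, here’s a minimal sketch of PageRank using power iteration. This is an illustration of the published algorithm only - the pages, link structure and parameter values are invented, and Google’s production system is vastly more sophisticated.

```python
def pagerank(links, damping=0.85, iterations=50):
    """Toy PageRank: `links` maps each page to the pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start with equal rank everywhere
    for _ in range(iterations):
        # Every page keeps a small base rank (the "random surfer" jump)...
        new_rank = {p: (1 - damping) / n for p in pages}
        # ...and passes the rest of its rank along its outgoing links.
        for page, outgoing in links.items():
            for target in outgoing:
                new_rank[target] += damping * rank[page] / len(outgoing)
        rank = new_rank
    return rank

# Hypothetical four-page web: everyone links to "c", so it ranks highest.
web = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
ranks = pagerank(web)
```

The key intuition: a link is a vote, and votes from highly-ranked pages count for more - which is exactly why being linked to (and appearing on that first results page) matters so much.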

Anyone who has ever built or owned a website will know just how much power Google’s golden algorithm has - want visitors to find your site? You better be on the first page of Google’s results or things aren’t going to go well for you.

Another algorithm we’re all familiar with is Facebook’s News Feed - it’s hard to overstate just how much influence, how much power, this algorithm has, not just on an individual level but on a global scale. Let’s take a look at a couple of examples of how influential the things we see on our news feeds can be:

  • In 2012, Facebook’s Core Data Science Team, along with researchers at Cornell University and the University of California, San Francisco, experimented with “emotional priming” on Facebook. Without seeking any form of explicit user approval, they manipulated the news feeds of approximately 700,000 users to see whether the things those users saw would affect the positivity or negativity of their own posts. It worked.

  • In the 2016 US presidential election, Facebook came under fire from large swathes of the media for facilitating the propagation of ‘fake news’ - content created with the specific intent of spreading misinformation (there is a larger debate about exactly what constitutes fake news, but that’s not important here). Many argued that these stories had a very real impact on the result of the presidential election of the most powerful country on Earth.

In the latter instance, Facebook’s position was that it is neutral - it is not the gatekeeper of what people can and can’t see - yet through what we can only assume are the best of intentions, that is exactly what it has become. It all starts with an innocent idea: “why don’t we create an algorithm which will help people to find other cool content based on what we know they like?” Great idea! However, we very quickly find ourselves being filtered down a very specific route, and all of a sudden we’re inhabiting our own personal bubble. Read an article about climate change being a lie? Your feed is then full of other ‘suggested’ articles which other people who liked that climate change denial article also read. All of a sudden it’s hard to believe that anybody could possibly think anything other than the ‘fact’ that climate change is a fabrication in the minds of a small group of crazy scientists.

That’s a simplified example, but you get the picture.
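The “people who liked that article also read…” logic described above can be sketched as simple co-occurrence counting. This is a deliberately naive illustration - the article names and reading histories are invented, and real recommender systems are far more elaborate - but it shows how the bubble forms: the more one-sided your history, the more one-sided the suggestions.

```python
from collections import Counter

# Hypothetical reading histories, one set of articles per user.
histories = [
    {"climate-is-a-lie", "deep-state-exposed"},
    {"climate-is-a-lie", "deep-state-exposed", "vaccines-truth"},
    {"climate-is-a-lie", "gardening-tips"},
]

def also_read(article, histories, top_n=2):
    """Suggest the articles most often co-read with `article`."""
    counts = Counter()
    for history in histories:
        if article in history:
            # Count everything else this reader also read.
            counts.update(history - {article})
    return [a for a, _ in counts.most_common(top_n)]

suggestions = also_read("climate-is-a-lie", histories)
# The top suggestion is whatever the article's readers most often share -
# here that's more of the same worldview, never a dissenting voice.
```

Notice there is nothing malicious in the code itself: it simply optimises for “more of what similar people engaged with”, and the bubble falls out as a side effect.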

Serendipity becomes increasingly important: algorithms must evolve to break people out of their bubbles, their comfort zones. You liked that song? Try this one - it’s totally different, but equally brilliant. Enjoyed that article about cats? This one’s about dogs; they’re kind of cool too. But even here we have to be careful - the positive reinforcement offered by our personal bubble creates the perfect breeding ground for extreme views, emphasising that anyone who thinks differently is very clearly wrong. Just showing someone the other side of the argument isn’t always helpful: a vegan reading about the evils of the dairy industry isn’t going to be happy when offered a story about how great burgers are, but perhaps an article about ethical farming would offer an alternative view that would be of value to them.

It’s not just algorithms (or their creators) that are having to tread a fine ethical line here - the internet is far from entirely automated, and there are legions of people now tasked with moderating what we see online (at least in the more well-lit areas of cyberspace). The rapid, exponential growth of the internet has resulted in a whole range of people finding themselves in positions of huge, ethically complex power. Take the YouTube moderation team: they often have to tread an invisible line between censorship and freedom of information. What do they do if someone uploads a graphically violent video which is of great political or cultural importance? Who makes the decision that this bit of content is OK, but the next isn’t?

The internet is still in its teenage years (metaphorically speaking): it’s beginning to know what it wants to be and how to interact with other things, but it’s still a bit emotionally unstable. Let’s just hope it doesn’t become a boring adult who plays by all the rules, but rather an ethical rogue who helps us to see the full spectrum of life around us.

Further reading: Invisible Manipulators of your Mind | The Secret Rules of the Internet | Social Networking & Ethics
