Hey, Thoughty2 here. Have you ever called a customer support line and heard the phrase “this call may be recorded for training and quality assurance purposes”? Well, the truth is your recorded phone call will most likely never be used for training; only an extremely small number of calls are. So why are all these companies recording millions of phone calls on a daily basis? Data. All these recorded phone calls are automatically sent to companies that develop algorithms, and as soon as you hang up the phone your conversation is being analysed by computer code. One such company is Chicago-based Mattersight. Mattersight and other such companies have developed intelligent machine learning algorithms that analyse billions of phone conversations every single day, sent to them directly by large companies; Mattersight’s largest client alone sends them over 250 million recorded phone calls a day. Their computers examine what you say, how you say it, the words you choose and the tone you use to determine your “personality” and put you into a group of other people with the same personality.
You may, for example, respond well to hard facts and figures, or you may respond better to personal sentiments and compliments. You may be short and aggressive in tone, or you may be patient. You might be shy, sarcastic, blunt, outgoing. Within minutes of your first call to a call centre, Mattersight’s algorithm has attached a personality label to your phone number. The next time you call that company, or any other company that buys Mattersight’s services, you are automatically routed to a customer service agent with a personality similar to yours: a person who can better tap into your psyche to sell you products more efficiently, or to solve your problem quicker. The result is shorter, more pleasant phone calls for everyone involved, happier clients, and boosted sales for the company. This has happened to you hundreds of times over hundreds of phone calls without you ever realising it.
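Mattersight’s actual system is proprietary, but the core idea, label a caller from their word choice, then route them to a matching agent, can be sketched in a few lines. Everything here (the personality labels, the keyword lists, the agent names) is invented for illustration, not anyone’s real routing table.

```python
# Hypothetical sketch of personality-based call routing. Labels, keywords
# and the agent table are all invented; a real system would use a trained
# classifier over far richer features (tone, pacing, phrasing).

PERSONALITY_KEYWORDS = {
    "facts": {"exactly", "data", "numbers", "statistics", "specification"},
    "feelings": {"feel", "frustrated", "sorry", "appreciate", "love"},
}

def classify_caller(transcript: str) -> str:
    """Attach a crude personality label based on word choice alone."""
    words = set(transcript.lower().split())
    scores = {label: len(words & kws) for label, kws in PERSONALITY_KEYWORDS.items()}
    return max(scores, key=scores.get)

# Each agent is tagged with the personality type they handle best.
AGENTS = {"facts": "agent_amy", "feelings": "agent_ben"}

def route_call(transcript: str) -> str:
    return AGENTS[classify_caller(transcript)]

print(route_call("I want to know exactly what the data and numbers say"))
# routes to the facts-oriented agent
```

The real algorithms are vastly more sophisticated, but the pipeline shape, transcript in, label out, label drives routing, is the same.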
So how do you feel about this? Should you feel like your trust has been betrayed, your privacy invaded? Well, you could do, but in reality these are what’s known as black-box algorithms: no real humans are listening to your phone call, and the data these algorithms extract from millions of aggregated phone conversations isn’t even visible to the engineers who create them. It’s all just maths. Your words are converted into numbers and then transformed millions of times by computer software; the end result is so obfuscated and so vastly complex that all the people behind the scenes actually see is a black box of billions of numbers that takes phone calls as an input and outputs a personality label. No one is listening in to you complain that the new pyjamas you just purchased have a tear in the crotch, because no one cares. And in the end, thanks to these algorithms, you will be put through to someone you click with really well, and you’ll probably hang up having had a shorter, happier conversation with a better resolution.
Algorithms are not a new thing; we have been living by algorithms and using them to enhance our lives for literally thousands of years. A simple recipe is an algorithm: you take an input, the ingredients, follow a set of pre-defined instructions, and you get an output, a tasty meal, potentially. The Ancient Greeks, Babylonians and Ancient Egyptians all developed mathematical algorithms to accomplish a variety of tasks and make life simpler. But today algorithms define your life like never before. Your whole life, everything and everyone in it, has already been shaped by an algorithm without you even realising it.
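The recipe analogy maps directly onto code: a fixed procedure applied to whatever input you hand it. A throwaway example, with entirely invented steps:

```python
# A recipe really is an algorithm: ingredients in, fixed steps, meal out.
# The steps below are invented purely to make the input/steps/output shape visible.

def make_omelette(eggs: int, fillings: list[str]) -> str:
    steps = [f"crack {eggs} eggs into a bowl and whisk",
             "heat butter in a pan"]
    for filling in fillings:          # same instructions, whatever the input
        steps.append(f"add {filling}")
    steps.append("fold and serve")
    return " -> ".join(steps)

print(make_omelette(3, ["cheese", "mushrooms"]))
```

Change the input and the same instructions produce a different output, which is all an algorithm is.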
Whether you can get a credit card, a loan or a mortgage to buy your dream home has been decided by an algorithm. Which school you go to or your children go to, your exam scores, your university degree, are determined by algorithms. When you sit down to watch a film on Netflix or Amazon, statistics show that you’re most likely to watch a film that has been recommended to you by an algorithm. The very same goes for the products you buy. But it gets a lot more personal than that.
Since 2010 online dating has been the most popular way for new couples to get together, and today the majority of new couples meet online. Now, if you’re the type of person who believes in fate and one true loves, then maths would like to have a word with you. If you met your partner online, a series of steps had to happen to lead up to that point. Both you and your partner had to first discover and sign up to the dating website or app. That most likely happened because you both saw an ad or search listing for said dating service. An algorithm, whether it was Google’s or another, decided that you, based on your search and web browsing history, should see that ad or search result at that point in time.
Once you’ve signed up to the dating website you will be asked to fill out your profile and answer a never-ending series of questions about your personality and whether you enjoy long walks on the beach, which is a really strange question because the vast majority of people don’t live near a beach. Based on your answers and the words you put on your profile, an algorithm will decide which matches should be shown to you. You may then take it upon yourself to flirt with one of said matches and become romantically entangled.
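The heart of that matching step can be sketched very simply, assuming answers are scored on a 1-to-5 scale: represent each profile as a vector of answers, then rank candidates by similarity. Real dating sites use far richer models; the names and numbers here are made up.

```python
# Minimal sketch of questionnaire-based matching: each profile is a vector
# of questionnaire answers (1-5), and candidates are ranked by cosine
# similarity. Profiles and scores are invented for illustration.
import math

def similarity(a: list[int], b: list[int]) -> float:
    """Cosine similarity between two answer vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

you = [5, 1, 3, 4]                                   # your four answers
candidates = {"alex": [5, 2, 3, 4], "sam": [1, 5, 2, 1]}

best = max(candidates, key=lambda name: similarity(you, candidates[name]))
print(best)  # "alex": the closest answer vector wins
```

Swap the toy vectors for a few hundred questionnaire dimensions and you have, in miniature, the maths that decides who appears at the top of your match list.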
So maths, not fate, has narrowed down a small handful of people from your area out of a few thousand possibilities. Maths has decided who you might spend the rest of your life with and the children you may have. So if you think about it, today, computers are breeding humans. Now that’s trippy.
You know what else is trippy? Skillshare’s online learning community, with thousands of classes in design, business, technology and so much more. Premium membership gives you unlimited access to high-quality classes on must-know topics, so you can improve your skills, unlock new opportunities and do the work you love. If, like me, you find the power of algorithms and artificial intelligence fascinating, then you should definitely watch the Skillshare course “Deep Learning and Neural Networks with Python”. It’s a must-watch if you want to create your own powerful machine learning algorithms that can even rival those of the big tech companies. Don’t know how to program with Python? Well, there are also loads of great courses on Skillshare to teach you to be a skilled coder in just a few days.
Skillshare is also more affordable than most learning platforms out there: an annual subscription is less than $10 a month. I don’t need no algorithm to tell me that’s good value. And since Skillshare is sponsoring this video, the first 1000 people to use the promo link in the description will get their first 2 months for 99¢.
Algorithms have already been running the world’s financial systems for over five years now. Not only do algorithmic systems that use big data decide automatically whether you should be accepted for a credit card, loan or mortgage, but the world’s biggest markets, the financial markets, are today traded autonomously by computers. Traders used to leverage algorithms as guidance, but the final decision on whether to buy or sell stocks, bonds, options or futures would be made by a human, an experienced trader. Over the past five years, though, a huge shift has occurred, due to rapid advancements in computer processing power and speed. Today around 80% of financial trades are made by black-box algorithms, thousands of times every second; in 2006 only 30% of trades were made algorithmically. But the scary part is that humans are not making these trades. The City of London and Wall Street are no longer employing experienced traders; they’re employing mathematicians and physicists to create algorithms that make trades automatically, every millisecond, with no human input or decision process whatsoever.
Trading floors have been emptied of people and replaced with servers. Generally these lightning-fast algorithms make money, or move vast sums of money around in the most cost-efficient way. But we don’t know how they work; we just rely on the fact that, over a period of time, they’re more accurate, faster and make far fewer mistakes than humans. That’s great, but the cost is that when an algorithm does go wrong, when it does make a mistake, not only can it be devastating, but we have no way of knowing why or how it happened, because they are black boxes. That’s exactly what happened in the Flash Crash of 2010. On May 6th 2010, all was running smoothly when suddenly, without warning, at 2:45pm, the world’s largest financial indices, such as the S&P 500 and the Dow Jones, collapsed within minutes; a plunge steeper than that of the infamous Black Monday or the Wall Street Crash of 1929. Within minutes, around a trillion dollars was wiped from the US and world markets. A trillion dollars, vanished in less than the time it takes to make a cup of tea.
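The real trading systems are proprietary black boxes, but the shape of an automated rule can be sketched. This toy strategy, a moving-average crossover, a classic textbook example and not anyone’s real system, emits buy/sell decisions from price data with no human in the loop:

```python
# Toy automated trading rule: a moving-average crossover. Invented prices,
# textbook logic; real HFT systems are vastly faster and more complex,
# but the "data in, decision out, no human" shape is the same.

def moving_average(prices: list[float], window: int) -> float:
    return sum(prices[-window:]) / window

def signal(prices: list[float]) -> str:
    """Buy when the short-term average rises above the long-term one."""
    if len(prices) < 5:
        return "hold"                       # not enough data yet
    short = moving_average(prices, 2)
    long_ = moving_average(prices, 5)
    if short > long_:
        return "buy"
    if short < long_:
        return "sell"
    return "hold"

print(signal([100, 101, 102, 104, 107, 111]))  # rising prices -> "buy"
```

Run a rule like this every millisecond across thousands of instruments, and you can see both why it scales beyond any human trader and why, when the rule misfires at that speed, no one can intervene in time.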
When it comes to predicting future trends and events, algorithms today are far more accurate than humans. In recent years several mathematicians have created algorithms for predicting which films will win Oscars each year, based purely on their plot and actors. These algorithms have been able to predict, with 85 to 100 per cent accuracy, the winners of all major categories at the past two Academy Awards.
London-based company Epagogix has made these predictive algorithms commercial and uses them to advise large film studios on which future releases will make them the most money. Epagogix’s machine learning algorithms can accurately predict which film scripts will make a studio money and which will be a flop. But, interestingly, they have also computed which film stars are worth hiring. This may come as a surprise, but one of their algorithms computed that there are only three actors in existence who add value to a film; that is to say, their name appearing on the movie poster will increase the box office numbers. Every other actor makes no difference whatsoever, no matter how much of a household name they may be.
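Epagogix’s model is secret, but the basic mechanics of “predict a film’s takings from its features” are ordinary statistics. As a stand-in, here is the simplest possible version: a one-feature linear regression fitted by least squares, on invented data.

```python
# Stand-in for a box-office prediction model: ordinary least squares on a
# single invented feature (budget -> takings, in $ millions). A real model
# like Epagogix's would use many script-level features, not just budget.

def fit_line(xs: list[float], ys: list[float]) -> tuple[float, float]:
    """Least-squares fit of y = slope * x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

budgets = [50, 100, 150, 200]      # invented training data
takings = [120, 210, 320, 400]

slope, intercept = fit_line(budgets, takings)
print(round(slope * 250 + intercept, 1))  # predicted takings for a $250M film: 500.0
```

Replace “budget” with hundreds of script features (genre, plot beats, cast) and fit on decades of box-office history, and you get the kind of model that can also be interrogated for how much any single feature, such as a star’s name, actually adds.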
According to the algorithm, the only three actors in the world who can make a film more money are Brad Pitt, Johnny Depp and, strangely, Will Smith. The algorithm also worked out that there is one actress, a huge household name, who actually causes every film she appears in to make less money. For obvious reasons the company refused to say who she is.
But when algorithms start to be utilised for more serious issues, things get really controversial. One public sector that has dived head first onto the algorithmic hype-train is the police. In most major countries police forces are now trialling sci-fi-like algorithms that can do crazy stuff such as predict when and where a crime is likely to happen, before it does. And the reason police forces are getting so hyped up about these Minority Report-style algorithms is that they work really well.
Using thousands of data points, such as the current weather conditions, traffic, and statistics about neighbourhoods like average income, social background and education, the algorithms in use by the police today can predict with uncanny precision where and when a crime might happen, down to the minute. Police use this system to allocate their officers to specific areas of a city more effectively. But this isn’t new: so-called predictive policing has been in use for the past few years now and is becoming more common all the time; in fact I talked about it in more depth in a video last year. But something is new, something even more controversial.
New algorithms have recently been developed that can calculate any civilian’s “Threat Score”. What is a threat score? Well, billions of data points are taken into account within seconds. Obviously this includes a person’s arrest records, but worryingly it gets a lot more personal than that: the algorithm looks at a person’s property records, education history, commercial databases from companies that person has used or been associated with and, most personally, their social media network: who they’re friends with, and every single one of their tweets, posts, images, videos and even their web searches. It uses all this information to algorithmically calculate a threat score for that person, similar to a credit score. The higher the threat score, the more of a potential risk that individual could be to the police or other civilians.
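The real scoring formula is not public. But to make the idea concrete, here is a hypothetical version showing how wildly different kinds of data can be collapsed into one number via a weighted sum; every feature and weight below is invented.

```python
# Hypothetical threat-score calculation. The features and weights are
# entirely invented to illustrate how heterogeneous data points get
# collapsed into a single number; the real formula is proprietary.

WEIGHTS = {
    "prior_arrests": 25.0,
    "flagged_social_posts": 10.0,
    "associates_with_flags": 5.0,
}

def threat_score(person: dict) -> float:
    """Weighted sum over whichever features are known about a person."""
    return sum(WEIGHTS[feature] * person.get(feature, 0) for feature in WEIGHTS)

suspect = {"prior_arrests": 1, "flagged_social_posts": 3, "associates_with_flags": 2}
print(threat_score(suspect))  # 25 + 30 + 10 = 65.0
```

Notice what the number hides: a sarcastic tweet and a genuine threat can land in the same “flagged_social_posts” bucket, and the officer reading the final score has no way to tell which it was, which is precisely the concern raised below.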
So in practice, when a call comes in, an officer can tap that person’s name into their in-car computer and the system will instantaneously reveal the suspect’s threat score, before they even arrive on the scene. If the suspect has a high threat score they will be treated far more cautiously, tasers and guns at the ready, as their threat score indicates they are more likely to be carrying a weapon and more likely to use it against officers. When the algorithm gets it right, that is. Currently being trialled by many police forces in the US, this kind of social rating system opens up a whole can of controversial worms.
How would you feel if you were held at gunpoint, and potentially fired at, after a minor fist fight, just because you tweeted five years ago that “you thought Hitler had a good dress sense for an evil dictator”, causing your threat score to skyrocket; because of course that must indicate you’re a neo-Nazi. Whereas someone with a “clean” social history might be treated with more compassion and calmly arrested with no guns involved.
And this is essentially the glaring issue with algorithms when they are used for life-or-death situations. There is no doubt algorithms are frighteningly accurate in our modern age of big data, but what if that tweet you made about Hitler’s dress sense was meant ironically, sarcastically or satirically, with no real reflection on your general character and empathy towards other humans? This is what separates algorithms from humans: computers are not yet able to tell the difference between genuine intention and sarcasm or humour; they don’t yet fully understand and appreciate the subtleties of human conversation and interaction.
Algorithms are being used today by courtrooms to mathematically determine a convicted person’s punishment: how much jail time they will serve and where they will serve it, using similar data points to the threat-score algorithm. Is this right? One has to ask whether a computer is fairer at sentencing than a human would be.
And this is just the tip of the mathematical iceberg when it comes to punishment and law enforcement. It’s no secret, thanks to whistleblowers like Snowden, that certain countries have long been collecting vast amounts of data about their citizens in a dragnet approach: phone calls, emails, photos, videos. But it’s how they have been using that data in combination with algorithms that’s really powerful. The vast amounts of data the NSA, GCHQ and other agencies collect on citizens, such as phone records, mobile GPS locations, online searches, Facebook activity and so much more, are sifted through, organised and tagged by an algorithm, so that intelligence service operatives can run a simple text search on anybody and see exactly where they have been and who they have spoken to. And because of the predictive power of algorithms and mass data, their systems can even predict what we might be planning to do in the future, launching a terrorist attack for example. If you like a certain page on Facebook, have recently shopped at a particular hardware store and happen to be friends with someone from a particular country, then the automated algorithms in use at the NSA and other agencies may have put you on a watch list, even if you don’t actually have any negative intentions. My god, for all the stuff I’ve searched over the years making these videos, I would be surprised if I wasn’t on every watch list going.
A study at Cambridge University found that algorithms could build a remarkably accurate character and personality description of someone, including their sexuality, political views and ideologies, based solely on the Facebook pages they have liked. Could a human do this given the same information? No. Humans don’t have access to billions of data points to compare that information against and identify the subtle trends that the algorithms pick up on, and even if we did, we are computationally unable to process such vast amounts of data.
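The Cambridge study used far more sophisticated statistics, but the underlying trick, compare your likes against people whose traits are already known, can be shown with a toy nearest-neighbour predictor. All the pages and trait labels below are invented.

```python
# Toy likes-to-traits predictor: find the known person whose page likes
# overlap most with yours (Jaccard similarity) and borrow their label.
# Pages and labels are invented; the real study fitted statistical models
# over millions of users, not three.

TRAINING = [
    ({"page_a", "page_b"}, "trait_x"),
    ({"page_a", "page_c"}, "trait_x"),
    ({"page_d", "page_e"}, "trait_y"),
]

def predict(likes: set[str]) -> str:
    """1-nearest-neighbour on shared likes."""
    def overlap(other: set[str]) -> float:
        return len(likes & other) / len(likes | other)
    _, best_label = max(TRAINING, key=lambda pair: overlap(pair[0]))
    return best_label

print(predict({"page_b", "page_c"}))  # closest neighbours share trait_x
```

With billions of like-trait pairs to compare against, correlations far too faint for any human to notice become strong enough to predict sexuality or politics, which is exactly the point the study made.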
But whether this dragnet, 1984-style mass surveillance is a good or a bad thing is a highly philosophical question. There were four terrorist attacks in the UK in 2017, but according to MI5 there were also nine other planned terrorist attacks that were prevented. It’s highly likely that these algorithmically driven mass surveillance systems were at least partially responsible for discovering and stopping said attacks. This can only be a good thing, right? Lives have without a doubt been saved, but at the cost of our freedom, privacy and anonymity. Is the trade-off between a lack of privacy and state security worth it? That’s not for me to decide for you, but it’s a question we should all consider.
The more we integrate algorithms into our lives, the more our lives are enhanced in many ways. We have cheaper airline tickets because prices are more efficiently calculated by algorithms. We arguably find better relationship matches today thanks to our personalities being evaluated, rather than relying on whoever smiled at us from across the bar. We get better shopping recommendations than ever, we visit better and more interesting cafes and restaurants thanks to recommendation algorithms, we go on better holidays. But the dark side of integrating all these mathematical systems into our lives is that huge decisions that affect us in irreparable ways, such as mortgage approvals, prison sentences and surveillance, have lost their human touch. We are now just numbers waiting to be crunched. Thanks for watching.