Currently reading this! It's engaging from the start; I read nearly a fifth of it in one sitting. A “just in time” read, since I’ve been thinking about digital footprints.
Ultimately, we must keep a certain critical distance when assessing the impact of algorithms on human lives. We have a tendency to overreact when an algorithm makes a mistake, and a tendency to under-appreciate it when it works well.
Highlights
Power
-
All around us, algorithms provide a kind of convenient source of authority. An easy way to delegate responsibility; a short cut that we take without thinking.
-
…there’s a distinction that needs making here. Because trusting a usually reliable algorithm is one thing. Trusting one without any firm understanding of its quality is quite another.
-
…there’s a paradox in our relationship with machines. While we have a tendency to over-trust anything we don’t understand, as soon as we know an algorithm can make mistakes, we also have a rather annoying habit of over-reacting and dismissing it completely, reverting instead to our own flawed judgement. It’s known to researchers as algorithm aversion. People are less tolerant of an algorithm’s mistakes than of their own – even if their own mistakes are bigger.
-
If we’re going to get the most out of technology, we’re going to need to work out a way to be a bit more objective. We need to learn from Kasparov’s mistake and acknowledge our own flaws, question our gut reactions and be a bit more aware of our feelings towards the algorithms around us. On the flip side, we should take algorithms off their pedestal, examine them a bit more carefully and ask if they’re really capable of doing what they claim. That’s the only way to decide if they deserve the power they’ve been given.
Data
-
All around the world, people have free and easy access to instant global communication networks, the wealth of human knowledge at their fingertips, up-to-the-minute information from across the earth, and unlimited usage of the most remarkable software and technology, built by private companies, paid for by adverts. That was the deal that we made. Free technology in return for your data and the ability to use it to influence and profit from you. The best and worst of capitalism in one simple swap.
-
Whenever we use an algorithm – especially a free one – we need to ask ourselves about the hidden incentives. Why is this app giving me all this stuff for free? What is this algorithm really doing? Is this a trade I’m comfortable with? Would I be better off without it?
Justice
-
Algorithms can’t decide guilt. They can’t weigh up arguments from the defence and prosecution, or analyse evidence, or decide whether a defendant is truly remorseful. So don’t expect them to replace judges any time soon. What an algorithm can do, however, incredible as it might seem, is use data on an individual to calculate their risk of re-offending. And, since many judges’ decisions are based on the likelihood that an offender will return to crime, that turns out to be a rather useful capacity to have.
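(My own aside, not the book's: to make “calculate their risk of re-offending” a little less abstract, here is a toy sketch of what such a score can look like under the hood: a weighted sum of facts about the person, squashed into a probability. The feature names and weights below are invented for illustration; this is not how COMPAS or any real tool is actually specified.)

```python
import math

def reoffending_risk(prior_convictions: int, age: int, years_since_last_offence: float) -> float:
    """Toy risk score; the features and weights are invented for illustration."""
    # Hypothetical weights -- a real tool would fit these to historical data.
    linear_score = (
        0.45 * prior_convictions
        - 0.04 * age
        - 0.30 * years_since_last_offence
        + 0.5  # intercept
    )
    # Logistic squashing turns the weighted sum into a 0-1 "probability".
    return 1 / (1 + math.exp(-linear_score))

print(f"{reoffending_risk(prior_convictions=3, age=22, years_since_last_offence=0.5):.2f}")   # ~0.69
print(f"{reoffending_risk(prior_convictions=0, age=45, years_since_last_offence=10.0):.2f}")  # ~0.01
```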
-
‘There are good guys and bad guys,’ he told me. ‘Your algorithm is effectively asking: “Who are the Darth Vaders? And who are the Luke Skywalkers?”’
Letting a Darth Vader go free is one kind of error, known as a false negative. It happens whenever you fail to identify the risk that an individual poses.
Incarcerating Luke Skywalker, on the other hand, would be a false positive. This is when the algorithm incorrectly identifies someone as a high-risk individual.
These two kinds of error, false positive and false negative, are not unique to recidivism. They’ll crop up repeatedly throughout this book. Any algorithm that aims to classify can be guilty of these mistakes.
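(A quick note of my own on the terminology, using the book's Star Wars framing. The labels and predictions below are made up purely to show how the two error counts differ.)

```python
# Made-up ground truth and algorithm output, just to illustrate the two error types.
actual = ["vader", "skywalker", "vader", "skywalker", "skywalker"]
flagged_high_risk = [False, True, True, False, False]

# False negative: a real "Darth Vader" the algorithm failed to flag (let go free).
false_negatives = sum(1 for truth, flag in zip(actual, flagged_high_risk)
                      if truth == "vader" and not flag)

# False positive: a "Luke Skywalker" the algorithm wrongly flagged as high-risk.
false_positives = sum(1 for truth, flag in zip(actual, flagged_high_risk)
                      if truth == "skywalker" and flag)

print(false_negatives, false_positives)  # 1 1
```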
-
The size of the chunks of time or money or volume we perceive follows a very simple mathematical expression known as Weber’s Law.
Put simply, Weber’s Law states that the smallest change in a stimulus that can be perceived, the so-called ‘Just Noticeable Difference’, is proportional to the initial stimulus. Unsurprisingly, this discovery has also been exploited by marketers. They know exactly how much they can get away with shrinking a chocolate bar before customers notice, or precisely how much they can nudge up the price of an item before you’ll think it’s worth shopping around.
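(My note: the law itself is just a proportionality, ΔI = k · I, where k is a constant “Weber fraction”. A quick sketch of the chocolate-bar point, assuming a made-up Weber fraction of 10%.)

```python
WEBER_FRACTION = 0.10  # made-up value: assume a 10% change is the smallest we notice

def just_noticeable_difference(initial_stimulus: float) -> float:
    """Weber's Law: the smallest perceptible change is proportional to the starting stimulus."""
    return WEBER_FRACTION * initial_stimulus

# A 200 g chocolate bar could quietly lose ~20 g, but a 50 g bar only ~5 g,
# before the shrinkage crosses the "just noticeable" threshold.
print(just_noticeable_difference(200.0))  # 20.0
print(just_noticeable_difference(50.0))   # 5.0
```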
-
But it’s not enough to simply point at what’s wrong with the algorithm. The choice isn’t between a flawed algorithm and some imaginary perfect system. The only fair comparison to make is between the algorithm and what we’d be left with in its absence.