Illustration: a small stick figure labelled "You" happily texting on a smartphone, while a much larger stick figure labelled "Big Tech" spies on the screen through a telescope.

The notion of big tech using big data to predict human behaviour has been an ethical concern for a while now. In 2012, one of the first controversial stories that highlighted this issue came to the limelight. A frustrated father walked into a Target retail store to complain about the fact that the retail giant had sent coupon codes advertising pregnancy/baby products to his teenage daughter.

The store manager was surprised by the complaint and apologized for the mistake. Later on, though, the manager received a call from the father, who revealed that his teenage daughter was indeed pregnant and apologized for his earlier outburst (source: Forbes).

You see, Target had been using big data analytics models on their customers’ data. The end effect was that Target was able to “know” that the teenage girl was pregnant before her own father did. Fast forward to 2022 (the time of writing this essay), and it is not uncommon to hear people complain that they saw advertisements for stuff they mentioned in a casual conversation with friends at a party.

With the increase in connected artificial intelligence systems such as Alexa, Cortana, Siri, etc., people are getting more and more concerned about big tech using big data to know more about them than themselves!

In this essay, we will be covering some basic mathematics behind such big data analytics algorithms to see why they are overrated. Don’t get me wrong; I’m not saying that you should not be worried at all. But these models are probably not as clever as you think they are. This point will get clearer as you read along. Let us begin.


Big Tech Using Big Data for Pandemic Infection Prediction

Let us consider a hypothetical case where one of the big tech companies has developed a big data analytics algorithm to predict pandemic infection cases. For all we know, this might have actually happened in reality. So, I’d like to explicitly clarify beforehand that any resemblance of this hypothetical example to real-world events is purely coincidental.

Now, you may wonder why such a model would be necessary in the first place. Imagine an alternate reality where people are secretive about pandemic infections (for fear of some sort of penalty like quarantines) and a regulatory body has funded the development of such a model. Or the big tech company could be planning to sell the analytics information to a pandemic drug/vaccine manufacturer.

Now, let us say that the model is used on a population of 200 million people. Such a model would use the notion of a “degree of confidence” to describe a potential infection. Contrary to what most people assume, big data analytics algorithms cannot predict with 100% accuracy. They always operate with a measure of confidence (a probability based on statistics) that the prediction is true. In our case, the results of applying the model to a population of 200 million people would look like this:

The big data model’s prediction list compared with reality:

                            On the predicted list     Not on the list
Actually infected                              10               9,990
Actually not infected                      99,990         199,890,010

Table created by the author

On the left half of the four-chambered number-square, we have the total number of people in the list predicted by the model to be infected. On the right half, we have the total number of people who are not on the list. The top half holds the actual number of people who are infected in reality, while the bottom half holds the actual number of people who are not infected in reality.
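For readers who prefer code to tables, here is a minimal Python sketch of the same number-square. Every figure in it is a hypothetical value from the table above, not real data:

```python
# Hypothetical counts from the number-square above (not real data).
infected_on_list = 10            # actually infected and on the predicted list
infected_off_list = 9_990        # actually infected but not on the list
healthy_on_list = 99_990         # not infected but on the list
healthy_off_list = 199_890_010   # not infected and not on the list

# The four quadrants add up to the whole population of 200 million people.
population = infected_on_list + infected_off_list + healthy_on_list + healthy_off_list
print(f"Population:        {population:,}")                            # 200,000,000
print(f"On predicted list: {infected_on_list + healthy_on_list:,}")    # 100,000
print(f"Actually infected: {infected_on_list + infected_off_list:,}")  # 10,000
```

The two subtotals (100,000 people on the list and 10,000 people actually infected) are the denominators used in the analysis below.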


Analysing the Big Data Analytics Model

Now, imagine that you work at this big tech company and learn that your neighbour has made it to the list. You are concerned whether he could really be infected. That is, you wish to confirm if he belongs to the upper-left quadrant of the number-square.

The first thing you note is that of all the people who are on the predicted infected list (left half of the number-square), just 0.01% (10/100,000) are actually infected; in other words, almost nobody. On the other hand, of all the people who are actually infected, only 0.1% (10/10,000) made it to the predicted list. This means that there is a 99.99% chance that your neighbour is not infected.
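Here is the same calculation as a short Python sketch, again using only the hypothetical numbers from the table:

```python
# Conditional probabilities implied by the hypothetical table (not real data).
infected_on_list = 10
infected_off_list = 9_990
healthy_on_list = 99_990

on_list_total = infected_on_list + healthy_on_list      # 100,000 people on the list
infected_total = infected_on_list + infected_off_list   # 10,000 people actually infected

p_infected_given_listed = infected_on_list / on_list_total   # P(infected | on the list)
p_listed_given_infected = infected_on_list / infected_total  # P(on the list | infected)

print(f"{p_infected_given_listed:.2%}")      # 0.01% -- almost nobody on the list is infected
print(f"{1 - p_infected_given_listed:.2%}")  # 99.99% -- chance your neighbour is NOT infected
print(f"{p_listed_given_infected:.2%}")      # 0.10% -- most infected people never make the list
```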

Well, this looks like a shoddy model, right? Hold on. There is another perspective we are missing here. Let us say that you pose the null hypothesis that any given person is not infected. Given that the null hypothesis is true, what is the probability that this person would end up on the list purely by chance?

From the data we have, we see that a total of 99,990 un-infected people ended up on the list by chance. The total number of un-infected people is 199,990,000. So, the answer to our question is:

Probability that any given un-infected person would make it to the list by chance = 99,990/199,990,000 ≈ 0.05%.

Any un-infected person has only a 1 in 2000 chance (approximately) of being mispredicted. If we apply R.A. Fisher’s cut-off of 1 in 20 for statistical significance (for details, refer to my essay on how to really understand statistical significance), we could merrily reject the null hypothesis. This also means that we could state that your neighbour is infected with a chance of misprediction of 0.05%.
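The contrast between the two perspectives becomes obvious when you put them side by side. This short Python sketch (hypothetical numbers only) computes both the Fisher-style “by chance” probability and the probability that a listed person is actually infected:

```python
# Two readings of the same hypothetical table (not real data).
infected_on_list = 10
healthy_on_list = 99_990
healthy_total = 199_990_000   # all un-infected people in the population

# P(on the list | not infected): how often an un-infected person lands on the
# list purely by chance -- roughly 1 in 2,000, well below Fisher's 1-in-20 cut-off.
p_listed_by_chance = healthy_on_list / healthy_total
print(f"{p_listed_by_chance:.4%}")   # ~0.0500%

# P(infected | on the list): the question you actually care about for your neighbour.
p_infected_given_listed = infected_on_list / (infected_on_list + healthy_on_list)
print(f"{p_infected_given_listed:.2%}")   # 0.01%
```

Passing a significance test and actually being likely infected are two very different statements, and the gap between them is exactly what the rarity of the event creates.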

Surely, this poor big data model cannot be how actual big tech companies operate, right? Well, let us see what the reality has to say.

Big Tech Using Big Data — A Real World Example

In 2006, Netflix held a competition with a prize of $1 million, challenging participants to develop a recommendation algorithm that performed 10% better than Netflix’s own algorithm at the time. For this purpose, the participants were provided with a huge dataset of roughly 100 million anonymized ratings covering about 17,770 movies.

It took three years before anyone beat Netflix’s algorithm by 10%. And in order to do this, several teams had to band together and combine their pretty good (but not good enough) models. Even after all of this, Netflix did not end up using the winning algorithm.

Why? Because by the time the winning algorithm arrived, Netflix was transitioning from DVDs to online streaming. And in the world of online streaming, poor recommendations are not as big a deal as they were with physical DVDs.

This story has a lot of important details about how big tech uses big data to predict human behaviour. Firstly, why was Netflix willing to pay $1 million for an algorithm just 10% better than its own? The answer is that in the big data analytics prediction market, a 10% improvement over the state-of-the-art is worth a lot of money (a lot more than $1 million).

Secondly, why did it take competitors three years to crack the problem? The answer is that this is a very hard problem that involves a lot of resources to solve. Besides, even after the improvement, the truth is that there exist hard limits on how well such models can predict using statistical/probabilistic methods.

So, what’s the deal here? What is the thread linking the Target story, our hypothetical example, and the Netflix story? Let’s get to that next.


Big Tech Using Big Data Struggles with Fat Tails

The fundamental difference between our hypothetical example and the Target/Netflix stories is that the former deals with relatively rare events, whereas the latter deal with relatively frequent events. Without going into cumbersome technical details, I’ll just say that the rarer the event we are trying to predict, the less useful big data analytics tends to be.

Let us say that in our hypothetical example, the big tech company pours a lot of resources into doubling the accuracy of the prediction model. Doubling a very, very small number still results in a small number. It just transforms a ‘very bad’ model into a ‘bad’ model.
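As a purely illustrative assumption, suppose the improved model catches twice as many truly infected people while the predicted list stays (roughly) the same size. The probability that a listed person is actually infected barely moves:

```python
# Illustrative only: doubling the number of true positives on a list of
# (roughly) the same size, using the hypothetical numbers from earlier.
healthy_on_list = 99_990

for true_positives in (10, 20):
    p = true_positives / (true_positives + healthy_on_list)
    print(f"{true_positives} true positives -> P(infected | on the list) = {p:.3%}")

# 10 true positives -> P(infected | on the list) = 0.010%
# 20 true positives -> P(infected | on the list) = 0.020%
```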

One of the real-world companies that is attacking this class of problems very innovatively happens to be Palantir. Palantir originally aimed to solve the problem of predicting criminal activity (among others) using big data analytics. According to Peter Thiel, their breakthrough came when Palantir implemented an augmented ‘big-data-plus-human-expert’ approach.

Because of the sensitive nature of these problems, it is hard to get proof of the efficacy of Palantir’s models. Even assuming that they are as good as their makers claim them to be, it is clear that big data on its own has its limits. So, what does all of this say about big tech spying on our day-to-day lives?

Big Tech Using Big Data for Perfect Prediction/Spying is a Lie

To be fair, our hypothetical example is not representative of the state-of-the-art in terms of big data analytics. With the advent of machine learning & co., big data analytics algorithms have been steadily improving over time. With all that being said, our hypothetical example highlights very well the inherent issues with big data analytics algorithms, especially how they struggle with fat-tailed rare events.

“So what? That just means that these models predict correctly most of the time. So, they actually ARE very good at prediction.”

If you are thinking along these lines, I’d argue that you are mistaken. You see, rare events are much more common than you or I would like to intuitively admit. One single rare event is, by definition, rare. But SOME rare event happens to someone, somewhere, ALL the time.

Given the same set of circumstances, the same human being has the potential to disrupt the observed pattern on any single occasion (which is precisely what makes such an event rare). Such is human behaviour: inherently chaotic and hard to predict.

If you wish to fact-check this in real life, go to any business owner who has bought advertising services from Google or Facebook and ask them how well those models worked. You’d be surprised to hear how goofy the results can be. Why go even that far? When was the last time you were frustrated with a Netflix or Amazon recommendation? I bet it happens more often than you’d like to admit.

Final Remarks

To conclude, your ‘smart’ artificial intelligence companions (Alexa, Siri, Cortana, etc.) are in the business of out-predicting their competition by just enough of a margin (say, 1%) to corner, say, 10% more of the advertising market. Such services are not in the business of perfectly predicting your next step.

Even if they wanted to, current approaches are incapable of predicting every aspect of human behaviour. The notion of big tech using big data for perfect prediction/spying is a lie!


Reference and credit: Jordan Ellenberg.

If you’d like to get notified when interesting content gets published here, consider subscribing.

Further reading that might interest you: How To Really Understand The Philosophy Of Inferential Statistics? and How To Really Benefit From Curves Of Constant Width?

If you would like to support me as an author, consider contributing on Patreon.
