By now, most of us are aware of the unexpected ramifications the Age of Information brings. Whether it is the Cambridge Analytica psychometrics scandal in the 2016 election, the teen girl whose father learned she was pregnant because of Target’s too-perceptive ad targeting, or the vast, intractable problem of bias in machine learning, the moral conundrums of a new technological age are already upon us, and we are struggling to catch up.
The Target case is a prime starting example of the novel ethical considerations that come with our new technological era. Using data from purchases (lotion, supplements, cotton balls), Target statistician Andrew Pole was able to predict not only whether a customer was pregnant, but how close to her due date she was, with startling accuracy.
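To make the mechanics concrete, here is a minimal sketch of how a purchase-based prediction like Pole’s might work: a logistic score over basket features. The product names, weights, and baseline are invented for this illustration; Target’s actual model was never published.

```python
import math

# Hypothetical weights linking purchases to a pregnancy prediction.
# The products echo the article's examples (lotion, supplements,
# cotton balls); the numbers are invented for this sketch.
WEIGHTS = {
    "unscented_lotion": 1.2,
    "calcium_supplement": 0.9,
    "zinc_supplement": 0.7,
    "large_cotton_balls": 0.5,
}
BIAS = -2.0  # baseline log-odds: most shoppers are not expecting

def pregnancy_score(basket):
    """Return a logistic score in (0, 1) for a list of purchased items."""
    z = BIAS + sum(WEIGHTS.get(item, 0.0) for item in basket)
    return 1.0 / (1.0 + math.exp(-z))

# A basket with several signal products pushes the score well above
# that of a shopper who buys none of them.
signal = pregnancy_score(["unscented_lotion", "calcium_supplement",
                          "zinc_supplement"])
baseline = pregnancy_score(["batteries"])
```

A real system would fit such weights from historical purchase data rather than hand-pick them, and would go further to estimate a due date; the point here is only that individually innocuous purchases, combined, can yield a confident and deeply personal inference.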
Naturally enough, when women suspected that Target’s baby-themed fliers were arriving a little too coincidentally, they were creeped out. It reminds me a little of this scene in Community, when Abed’s preternatural talent for observation proves a bit too sharp for comfort.
A common response to cases like this is to throw up our hands: “it’s very complicated!” I find that answer unsatisfactory. Indeed, I would go so far as to call it morally and intellectually lazy. The problem is neither too complicated for the wondrous minds we human beings are fortunate to call our own to unravel, nor too abstract for us to appreciate the human impacts and costs at play. The philosophy of ethics may be complex, but only insomuch as our world is complex. The purpose of that complexity is to render the dilemmas of novel situations down to what is simple: right and wrong, good and bad.
We cannot, of course, simply hold up one piece of this situation and pass judgement based on that. Certainly we cannot do so with the kind of doubt-free clarity I seek to find in ethical dilemmas. What we can do, however, is break the situation down into smaller, simpler components.
Posing straightforward questions with clear answers is one way to begin. Was harm done here? I think we can safely answer in the affirmative. First, to the girl whose secret was revealed to her father without her consent. Then to her family, in the resulting media exposure. And even to Target’s reputation, in the end! This raises our next question: whose action precipitated these events? Target’s, in its use of purchase data.
As we dig deeper, the questions become more fraught, but also more crucial to our eventual judgement. Did Target have any way of knowing they might be tapping into private information here? It certainly seems clear they did not anticipate this particular situation. But surely it is equally clear that a pregnancy inferred from purchase data was never something the customers in question intended to share with Target.
This brings us to the crucial question: was Target aware that they might be violating their customers’ consent? In the abstract, from the basic information of the case, it might be easy to construct plausible arguments supporting either perspective – Target is at fault, or Target is blameless. But, fortunately for us, there is additional detail available, and it is quite telling:
“If we send someone a catalog and say, ‘Congratulations on your first child!’ and they’ve never told us they’re pregnant, that’s going to make some people uncomfortable,” Pole told me. “We are very conservative about compliance with all privacy laws. But even if you’re following the law, you can do things where people get queasy.”
The article goes on to describe tactics Target used to make their targeted advertisements seem more random, and to allay customers’ (correct) suspicions that they were being spied upon.
So that gives us violation of consent, in the form of extraction of private information. It gives us intentional deception of their customers regarding that invasion of privacy. And it gives us clear intent to do both of those things. In the end, this is actually quite simple, no? Right and wrong are timeless – it is merely a matter of knowing your way through the forest of trappings layered on top of them to the simple heart of the matter.
Let’s take a look at a more speculative example, something that is not part of the world we live in, at least not yet. Have you seen the movie “Minority Report”? To briefly summarize: Tom Cruise plays an officer in a “pre-crime” unit that makes arrests based on predicted criminal behavior, stopping crime before it starts.
Obviously, this kind of precognition is not a common feature of our world today — though it may not be as far-fetched as it sounds. Given the staggering power of machine learning-based predictive models, the question of “what if” we could stop terrible things before they happen may be more than a theoretical one before we know it.
So what are we really asking ourselves when we grapple with that question? On the one hand, we can weigh the obvious good of stopping a bad thing against letting it happen. It would be immoral to stand idly by when we know harm will be done.
At the same time, can it be right to punish someone for something they have not actually done? Perhaps the point could be argued, but let us not conflate prediction with proof: a likelihood, no matter how high, is not a certainty. Furthermore, any argument that stopping a harmful action before it occurs is the right thing to do must rest on an assumption that there is intent to do harm. I am confident most would agree that, in the end, the answer to our question is no.
Even supposing such a predictive system could be infallible, or at least demonstrably less fallible than human judges and courts, I think most of us experience a twinge at the thought. Something hard to articulate feels wrong about a system that uses machines to pass judgement on human beings. And I say we are right to feel that hesitation.
After all, our justice system is built on rules and laws, which by their nature tend to be inflexible, absolute, rigid. But our concept of justice is not, and surely even the most fervent advocate of the rule of law must concede the importance of a human element in anything claiming the title. We have never accepted humans presiding over the fate of other humans simply because we believe humans are the most accurate predictive system. We accept it because we know that they, like ourselves and anyone on either side of the law, are possessed of humanity: empathy, needs and fears, hopes and foibles, in addition to reason.
No machine can replicate empathy or compassion. So as I focus on learning the skills of manipulating, analyzing, and understanding the oceans of data in which our whole society now swims in this new age of information, those will be my touchstones. However much of the phenomenal power big data promises we come to wield, let us never forget to be kind.