The Future of Privacy
By Mikkel Krenchel and Brendan Muha
Privacy has become a hot topic in the media, and for good reason – the technological advances that give people, companies, and governments ever-greater access to what we do, say, and even think raise fundamental questions about our future. But beyond the heated debates among pundits, tech watchdogs, conspiracy theorists, regulators, and so on, is privacy something normal people actually care about? On the surface it’s hard to tell. People say they care, but they don’t really do anything about it. Over the past year or so, ReD has seen the privacy discourse work its way from the headlines to the dinner table, and through our fieldwork we’ve listened to everyday families discuss what it means for them. This memo shares our perspective, synthesized from a number of studies, on a few key questions: Do people even care? Is it a passing fad? Do we need to care about this? Is there anything we can do?

For the purposes of this document, we define privacy as the ability to control who knows what about you, why, and through which means. In just the past two years, ReD has conducted 25+ studies that touch on people’s relationship to privacy, on topics ranging from how Gen Z approaches digital, to how families’ financial behaviors are evolving, to how new technologies achieve social acceptability. We have conducted research with people from all walks of life, with geographically, ethnically, religiously, and economically diverse groups. Our research spans the globe, but for the purposes of this document we are focusing on findings relevant to the United States.
Do people really care about privacy?
If you ask them, most people will say yes – “of course I care about privacy!” But at the same time, very few people are changing their behaviors in response to new information about how their personal data is being collected and used – they say they want to delete Facebook after the Cambridge Analytica scandal, but they don’t; they get outraged over a New York Times article about location tracking, but they don’t change the settings on their iPhones. Many observers have interpreted this lack of action as evidence that people, in fact, don’t care – perhaps that they’re just bending to social pressure to say they care – or that they willingly accept the trade-offs required to use the products and services without which their daily lives would be impossible.
We would argue that people do care, that privacy is in fact a deeply held value – in other words, we believe them when they tell us they care. But just because they care does not mean they act – yet. To us, the situation looks like a textbook example of what social scientists call the “intention-action gap,” where an individual’s values don’t translate into action – or, simply, where they say one thing and do another. For most people, meaningfully increasing their digital privacy feels out of reach. Even understanding what’s at stake is phenomenally difficult; the New York Times found most companies’ privacy policies an “incomprehensible disaster,” with several of them requiring a higher reading level than Kant’s “Critique of Pure Reason.” And existing solutions – e.g., changing settings – feel incomplete and fiddly, and require trust that the company on the other side will actually follow through. The only real action most people see is giving up the platforms altogether – deleting Facebook, not using Gmail, staying off the internet – but for most people, most of the time, this trade-off is simply too big.
We think it’s a mistake to assume that people are happily making this trade-off. While people are quick to explain away their lack of action – “they already have my data anyway,” “I’m not important enough,” “if they want me, they’ll figure out how to get me” – we see mounting frustration and resentment. People don’t feel the situation is fair, and they feel trapped by platforms that seem indispensable. At the same time, they’re losing interest and confidence; The Guardian reported that Facebook likes, shares, and posts fell 20% after the Cambridge Analytica scandal.
The intention-action gap in privacy reminds us a lot of the way people approached sustainability in the early 2000s. Like privacy, sustainability is a big, abstract topic where, for a long time, only people on the fringe took action. But eventually, after record-breaking hurricanes, heat waves, wildfires, and the like, climate change started to become more tangible. People are responding, and they’re expecting action from those they consider the most culpable and capable: the private sector.
Likewise, a steady stream of privacy scandals – Equifax, Polar, Exactis, and so on – means more and more people are experiencing the repercussions of privacy failures, and as a result they increasingly feel that their data is being “weaponized” and used against them. Today, weaponized data tends to hurt people in a few ways: damaging their reputation (e.g., when sensitive materials are leaked online), creating a material loss (e.g., fraudulent charges or compromised credit), stealing time and attention (e.g., robo-callers, invasive advertising), and robbing them of agency and integrity (e.g., when data is used to aid political campaigns an individual doesn’t support). All of this fuels a larger sentiment that, according to our research, seems to be growing – that new waves of technology are working against people.
In some communities where privacy is at a premium – such as radical political activists, digital fraudsters, and marginalized groups living under repressive regimes – there is a growing trend of ‘going dark’ for private conversations: meeting off the grid, without any digital devices.
Will the focus on privacy fade?
You might think that the uproar around privacy is just a fad – a phase society goes through as we adapt to ever more digital lives. It is of course impossible to say anything about the future with certainty, but our research strongly suggests that privacy concerns will only grow in the coming years. First, most people’s latent needs only become activated – meaning they take action – once they experience a violation. For many of the folks we met, privacy has thus far been a mostly theoretical concern. They may have heard stories of other people getting ‘hacked’, of people who had compromising photos leaked online, or of the ‘other people’ who had probably fallen victim to data-driven political advertising – which they themselves, of course, would be smart enough to see right through. But already, this picture is starting to change. More and more people feel that the growing volume of targeted advertising and other data-based disturbances – pop-up notifications, spam phone calls, and the like – is becoming a real, felt nuisance in their lives. This constant sense of being sought out and targeted is fueling an aspirational “unplugging” culture we’ve seen in many studies, where people are trying to impose stricter limits on companies’ access to their attention. And with privacy scandals happening more and more frequently, a growing number of people will likely feel the effects and begin to demand more of the companies that handle their data.
Second, not all data is equal. People are fairly nonchalant about reputable companies collecting information on their online behaviors; they feel that these digital footprints are fuzzy-at-best representations of who they are, and don’t find them particularly intimate. Right now, many companies are collecting mainly this type of data. However, the idea of being “listened to” or “watched” via audio and video recordings is deeply unsettling to people, especially when they don’t realize, or forget, that it’s happening. These types of data create a picture of people’s lives that feels highly intimate, and new technologies – eye tracking, fingerprinting, facial recognition, and so on – promise to extend this even further. At the same time, data sharing and increasingly sophisticated analytics are allowing companies to triangulate data sources and build rich pictures of people’s behavior that cross between the physical and the digital. We’ve seen people react with shock and fear at even the most basic demonstrations of triangulation, such as Uber maps of their travel or Strava’s global heatmap. As digital technologies become more advanced, we predict people will feel more and more intimately violated by the collection and use of their data.
Do we need to care about this?
Just as every company now needs to deal with sustainability, we believe every company will soon need strong privacy controls as table stakes, and some will need a plan for privacy that starts in the boardroom. But not everyone needs to become a privacy crusader. It all comes down to how you make (or plan to make) money. While many people have trouble making sense of what data companies collect and how, they are surprisingly adept at sniffing out why. We were surprised to see how strongly common perceptions of how companies make money dictate the extent to which people trust them with their data.

When businesses are built on collecting and monetizing people’s data – like Facebook, Google, and many other companies in Silicon Valley – people are constantly suspicious of their motives, believing they will always want more data, collect more than they admit, be deliberately opaque about how it’s used, out-fox regulators, and constantly find new ways to use and abuse data in the future. These companies will face an uphill battle to show that a stance on privacy is more than another ploy to retain their audiences and preserve the flow of advertising dollars.

For the vast majority of companies, who collect little data and don’t monetize it, privacy is likely to remain a hygiene factor; it will be enough to keep data secure and keep up with privacy trends.

For a third type of company – businesses that process lots of data but whose business models are recognized not to depend on it, such as telcos like Comcast or Liberty Global, hardware providers like Samsung or Apple, or even banks and insurers – privacy can be an opportunity to differentiate. People believe these companies don’t need to exploit their data to survive, which creates an opening to build more trustworthy offerings than the competition – an opportunity Apple has recently begun to seize. These opportunities are not without risk: just as companies touting their environmentalism have frequently been accused of greenwashing, people are likely to poke holes in ill-conceived privacy attempts, exposing the overall brand to harm, and anyone who stakes their brand on privacy should be extremely careful to avoid directly monetizing the data they collect.
Is there anything we can do?
Just as very few people have given up air travel or gone vegan to save the environment, it will never be feasible for most people to change their passwords every week, completely abstain from Facebook or Google, or elude every video camera. For their latent need for privacy to be activated, people need strong alternatives that allow them to feel in control without the trade-offs. Our data suggests two key principles for making those alternatives a reality.
Simple feels safe
Our research has revealed that fears around privacy are stoked by opacity. When people give away intimate data without knowing how it will be used, learn about a technology like facial recognition, or even see a camera pointed in their direction, they begin to worry: What is being captured? What do they intend to do with it? What could they learn from my data? One of the most surprising findings from our research is that people respond better to situations where a company’s bad intent is known than to situations where a company’s intent is opaque. Whereas opacity leads people to assume the worst, knowing at least gives them the power of choice.
But the antidote to opacity isn’t just transparency – it’s transparency that is clear and simple. Complexity can be just as bad as opacity; when people are inundated with technical information, it can raise as many concerns as having no information at all. Long policies feel like they’re written to protect companies, not to explain things to people. Kill the privacy policy and product descriptions – people need simple maxims that help them understand what your company and technology can and can’t do. Overcomplicate privacy at your own peril.
Provide easy ways to act
In the physical world, privacy is intuitive, simple, and tangible – it’s easy to adjust with clothing, doors, curtains, and the like. Privacy in the digital realm needs to feel the same way, and to move privacy away from an all-or-nothing choice, people need the tools to take small, easy actions.
We don’t think companies need to completely solve privacy to tap into this latent need; they could start by offering people the opportunity to take small symbolic actions. For example, lots of people still tape over their laptop cameras – where could similar behaviors be enabled in your products? Snapchat is an interesting example as well: it differentiated itself by giving people a perceived added layer of privacy (through ephemerality), and it took off because it felt private. Neither of these is a full solution – there are still plenty of other cameras trained on us, and there’s nothing stopping people from taking screenshots in Snapchat – but they represent simple and tangible ways for people to (symbolically) take back some control.
Privacy isn’t rational; it’s a feeling. The core of privacy – what can be captured, by whom, and how it can be used – should be felt in the product experience: visible in the interface, controlled through everyday intuitive interactions, or core to the product concept itself. Privacy, moreover, needs new value propositions beyond paranoia. Tesla made electric cars fun – what experiences will make privacy more than a worry? A privacy strategy that isn’t intuitive and tangible in the product will remain hollow words.
Aren’t people going to get used to it?
Some would argue that norms around privacy will change – that sooner or later, we’ll get used to the surveillance, and the fuss will subside. We will certainly adjust to some things, just as we’ve adjusted to other technological advances – wearing a watch, carrying a mobile phone, having security cameras in stores. It could be that privacy norms will adjust and people will continue to accept the trade-offs; it could also be the opposite, with culture taking a sharp turn toward privacy and swiftly rejecting new technologies, as happened with Google Glass. The reality is that it’s tough to predict how future norms will evolve. But norms change slowly. And technology does seem to be advancing more quickly than people can absorb it – as evidenced by people’s shock and horror when they, for example, learn exactly what data Google has about them, realize that their iPhone is storing (and transmitting) every location they’ve visited over the past several years, or learn that their OkCupid profile photos and video taken surreptitiously in restaurants are being used to train facial recognition software, with absolutely no oversight.[1] As more and more people begin to feel the tangible consequences of the collection of their digital traces (from annoying notifications, to advertising that makes incorrect assumptions about them, to data triangulations that feel creepy), along with the constant race to collect ever more intimate types of data, we believe concerns around privacy will only grow stronger in the years to come.
Sources & Notes
New York Times, “We Read 150 Privacy Policies. They Were an Incomprehensible Disaster.” https://www.nytimes.com/interactive/2019/06/12/opinion/facebook-google-privacy-policies.html
The Guardian, “Facebook Usage Falling After Privacy Scandal, Data Suggests.” https://www.theguardian.com/technology/2019/jun/20/facebook-usage-collapsed-since-scandal-data-shows
Several journalists have made this argument, including Alan Henry at the New York Times (https://www.nytimes.com/2018/12/19/technology/online-privacy-climate-change.html) and Shoshana Zuboff at The Atlantic (https://www.theatlantic.com/ideas/archive/2019/05/crazygenius-season-three-privacy-internet/589078/).
The Verge, “Polar Suspends Its Global Activity Map After Privacy Concerns.” The fitness-tracking app had inadvertently exposed the exact locations of military and intelligence personnel. https://www.theverge.com/2018/7/8/17546224/polar-flow-smart-fitness-company-privacy-tracking-security
Wired, “Marketing Firm Exactis Leaked a Personal Info Database with 340 Million Records” https://www.wired.com/story/exactis-database-leak-340-million-records/
New York Times, “Facial Recognition is Growing Stronger, Thanks to Your Face.” https://www.nytimes.com/2019/07/13/technology/databases-faces-facial-recognition-technology.html