The age of consumer surveillance capitalism

We’re living through the most profound transformation in our information environment since Johannes Gutenberg’s invention of printing around 1439. And the problem with living through a revolution is that it’s impossible to take the long view of what’s happening. Printing shaped and transformed societies over the following four centuries, but nobody in Mainz (Gutenberg’s home town) in 1495 could have imagined that this technology would fuel the Reformation and enable the rise of what we now recognize as modern science; create unheard-of professions and industries; change the shape of our brains; and even recalibrate our conceptions of childhood. And yet printing did all this and more.

Why look at 1495? Because it gives us some perspective on our own revolution, the one kicked off by digital technology and networking. Although it’s now gradually dawning on us that this really is a big deal and that epochal social and economic changes are under way, we’re as clueless about where it’s heading and what’s driving it as the citizens of Mainz were in 1495.

In 1988, Shoshana Zuboff, one of the first female professors at Harvard Business School to hold an endowed chair, published a landmark book, In the Age of the Smart Machine: The Future of Work and Power, which changed the way we thought about the impact of computerization on organizations and on work. It provided the most insightful account up to that time of how digital technology was changing the work of both managers and workers. The first hint of what was to come next was a pair of startling essays – one in an academic journal in 2015, the other in a German newspaper in 2016. What they revealed was that Zuboff had come up with a new lens through which to view what Google, Facebook and other companies were doing: nothing less than spawning a new variant of capitalism. Those essays promised a more comprehensive expansion of this Big Idea.

And now it has arrived: The Age of Surveillance Capitalism, the most ambitious attempt yet to paint the bigger picture and to explain how the effects of digitization that we are now experiencing as individuals and citizens have come about.

The headline story is that it’s not so much about the nature of digital technology as about a new mutant form of capitalism that has found a way to use tech for its purposes. The name given to the new variant is “surveillance capitalism”. It works by providing free services that billions of people cheerfully use, enabling the providers of those services to monitor the behavior of those users in astonishing detail – often without their explicit consent.

Surveillance capitalism, Zuboff writes, “unilaterally claims human experience as free raw material for translation into behavioral data. Although some of these data are applied to service improvement, the rest are declared as a proprietary behavioral surplus, fed into advanced manufacturing processes known as ‘machine intelligence’, and fabricated into prediction products that anticipate what you will do now, soon, and later. Finally, these prediction products are traded in a new kind of marketplace that I call behavioral futures markets. Surveillance capitalists have grown immensely wealthy from these trading operations, for many companies are willing to lay bets on our future behavior.”

The best measure of whether a company cares about privacy is what it does by default. Many apps and products are initially set up to be public: Instagram and Facebook accounts are open to everyone until you lock them, and on Venmo the whole world can see whom you split utilities with until you make your transactions private. Even when companies announce convenient shortcuts for enhancing security, their products can never become truly private. Strangers may not be able to see your selfies, but you have no way to untether yourself from the larger ad-targeting ecosystem.

That lack of definitive privacy has come to the fore over the past few weeks as Google and Amazon each announced shortcuts to mitigate “always on” tracking, provided users choose to enable them. Amazon Echo users can clear 24 hours of stored voice commands by saying, “Alexa, delete what I said today.” Google announced at its I/O conference in early May that Google Maps will eventually feature an “incognito mode” that turns off tracking. While it’s enabled, users’ locations won’t be added to their profiles or used to inform ad targeting.

While the packaging for these features is shiny and new, the substance behind them is not. Users could always use Maps while logged out of their Google profiles, and could manually delete stored Echo recordings by logging in to their Amazon accounts. The recently announced features are faster and require less searching through menus, but privacy and ethical-design experts are unimpressed by the measures. While useful, they argue, the new controls do very little to move the needle.

Mona Sloane is a professor at the Tandon School of Engineering at New York University, where she researches ethical-design principles in engineering and artificial intelligence. The first problem with privacy features that can only pause, not stop, surveillance, she argues, is that users have to enable these features themselves. “Outsourcing that is a way of circumnavigating and avoiding responsibility,” Sloane says. “It’s a way of maintaining the core business model, which is generating as much data as possible.”

For both of these products, the default remains: You can pause tracking, but you can’t stop it. Users can’t tell Alexa never to store their Echo commands to begin with, nor can they preemptively tell Google to never track them while they’re logged in to the Maps app.
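Sloane’s point is, at bottom, about a default. Here is a minimal sketch, in Python with entirely hypothetical names (neither Amazon nor Google publishes such an API), of the gap between the “pause” model these shortcuts implement and a privacy-by-default alternative:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

# Hypothetical sketch only: these classes model the design choice under
# discussion, not any real Amazon or Google API.

@dataclass
class PauseOnlyTracking:
    """The model behind the new shortcuts: collection is on by default."""
    collection_enabled: bool = True         # the contested default
    paused_until: Optional[datetime] = None

    def pause(self, hours: int = 24) -> None:
        # What an "incognito mode" offers: a temporary suspension,
        # after which collection silently resumes.
        self.paused_until = datetime.now() + timedelta(hours=hours)

    def is_collecting(self) -> bool:
        if self.paused_until and datetime.now() < self.paused_until:
            return False
        return self.collection_enabled      # still True: the default never moved

@dataclass
class ConsentFirstTracking:
    """A privacy-by-default alternative: nothing is stored until the user opts in."""
    collection_enabled: bool = False

    def opt_in(self) -> None:
        self.collection_enabled = True
```

In the first model the burden of action sits with the user and collection always resumes; in the second, a user who does nothing is never tracked. The announced features keep both companies firmly in the first model.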

The new shortcuts seem targeted at a user base that’s fed up after two years of big-tech privacy breaches and surveillance scandals. In April, Amazon Echo users were shocked to learn that recordings of their voices are collected and sent to human contract workers for analysis. In February, many Google Nest owners learned from a tweet that their devices had shipped with undisclosed microphones. And Facebook is still rehabilitating its public image after news broke last year that Cambridge Analytica had harvested user data for a national influence campaign.

Amazon’s and Google’s feature announcements, one privacy expert argues, are part of a Silicon Valley campaign to regain trust. “It’s privacy as a promotional tool,” says Albert Fox Cahn, the executive director of the Surveillance Technology Oversight Project at the Urban Justice Center. Cahn says that consumers’ perceptions of risk and reward when they’re shopping for smart products are changing. With each new scandal, wary shoppers become more convinced that it’s not worth wagering their privacy to use tracking software. These incognito-style moves are, in Cahn’s opinion, designed as damage control to get back in buyers’ good graces.

Shoppers’ new caution amounts to what industry experts describe as an “existential threat” to tech companies. “They’re trying to win back our trust with these measures that create the illusion of privacy, but don’t threaten their core business model,” Cahn says.

Those business models often become tech companies’ ethics, shaping how they design their products. Ben Wagner, a professor at the Vienna University of Economics and Business, has found in his research that companies do embrace ethical principles after privacy backlashes, but those principles don’t necessarily reflect the values that consumers hold. Wagner wrote about the risks of this disconnect in 2018, noting that firms regularly engage in “ethics washing” and “ethics shopping”, phrases borrowed from an earlier European Union report on governing AI.

Tech companies are moving to become even more deeply embedded in our lives, which will only amplify the risks posed by security breaches. Google has filed patents for speakers that listen to you brushing your teeth and smart cameras that scan your clothes to make product recommendations. Amazon has patented technology that would infer your health and emotional state from what it overhears and offer product recommendations accordingly.

With these devices designed to collect and store more data than ever, it’s important to spotlight what’s working well, especially tools that are developed in consultation with users and reflect a “notion of care for all participants.” Good privacy tools will let users orient themselves within the data ecosystem with their own ethical principles as a guide. They will leave it up to users whether they want their data collected, how they want their data reused or appropriated and, most crucially, whether they want to exist in the ecosystem at all.
