Now That We Have Your Attention

Abstract

In a traditional business model, companies provide a product or service to their customers. In today’s digital marketplace, however, the users of a company’s app, website, or game are not the true customers. Users can access these apps for free, and the companies then sell those users’ data and attention to advertisers and other interested parties. But in a world now dominated by surveillance capitalism, is the use of user data exploitation, or just good business? Tech companies must be more transparent about how they collect and use data, and developers should refuse to use manipulative or addictive methods to capitalize on users.


In the summer of 2016, the fiercely popular mobile app “Pokémon GO” emerged. The “ostensibly harmless” scavenger hunt game was a successful blend of ’90s nostalgia, social media sharing, and old-school geocaching marketed with a hip 2010s flair [1]. Its short-lived but undeniable day in the sun produced predictable mainstream news reports of unassuming teens losing track of their whereabouts and walking into traffic or dangerous neighborhoods while hunting virtual magic pets. It is unclear exactly how many of the millions of Pokémon GO players actually were run over or mugged while distracted by the game, but it is safe to assume that some of those stories were blown out of proportion. What has become clear, yet is far less publicized, is the business model of the free-to-play game. At some point, after harnessing the power to effortlessly herd mobile gamers much like a farmer would his goats, the app started clustering hidden Pokémon inside fast-food establishments. Indeed, once the game’s developers knew they had something valuable – their gamers’ undivided attention – and the ability to move them around physically, they worked out deals with McDonald’s and Starbucks that would guarantee getting customers through the doors [2]. If our attention can be bought and used to lure us into restaurants, how much could it influence the next brand to which we pledge our loyalty? How about our taste in art or music? Could it affect our political views?

It is increasingly evident that our behaviors are being studied, our locations tracked, and our records collected as we move out of the realm of the physical and into the virtual. Given the extreme imbalance between our power as individuals and the powers that control our data, it is worth assessing whether this upward flow of information has come to hurt us more than help us. Greater pressure must be placed first on the businesspeople, investors, and marketers who peddle our data and push exploitative, attention-seeking practices to their legal limit. But as is the nature of their work, they will always remain incentivized by the income potential of what is now being called “surveillance capitalism.” Therefore, the main responsibility falls to the engineers, the designers, and the planners of the technology, who are obligated by the first principle of the IEEE-CS/ACM Software Engineering Code of Ethics to “act consistently with the public interest” [3]. If software engineers were able to build these incredible tools in the first place, surely they have the power to fix them.

“Surveillance capitalism” is a term coined by academic Shoshana Zuboff and has since been used predominantly by journalists to refer to the practices tech corporations employ to extract user data for profit. The collection of massive amounts of data is decried as a violation of privacy, but according to Zuboff, what is more concerning are the “aims to predict and modify human behavior as a means to produce revenue and market control” [2]. Zuboff identifies four major types of big data readily available: digital transactions, sensor-collected data (the tracking of a phone’s presence and location), camera data, and stored records such as bank records, tax returns, and insurance co-pays [2]. “Big data” is extractable at massive scale and with diverse scope, meaning that it can be sourced from a broad spectrum of people (virtually anyone with an internet presence) and can cover a wide range of informational categories about any person or population: your age and hair color, your location, your economic status, your fashion sense, your favorite artists, your average commute, your health status, your political views, and, most importantly, your relationships with other people, people whose data is being mined just the same. It follows that the gatekeepers of all this information can very accurately predict the behavior of large portions of the population. Worse still, they can use the tools at their disposal to make countless individuals behave in directed ways, all in the name of profit.

A company’s revenue now tracks closely with how fully it has adopted surveillance capitalism as a business model. Three of the most powerful tech giants (Amazon, Facebook, and Google) have turned surveillance and data accumulation into their bread and butter. Google’s revenue spiked 3,590% in the four years before it went public, as the company pioneered its targeted-advertising algorithms; that increase is what Zuboff refers to in her writing as a “surveillance dividend” [2]. Facebook, meanwhile, has borrowed this model and pushed it further. Struggling to price a digital “space” that is limitless, the company began to rely on metrics such as “clicks” and “screen time” to increase demand for well-performing ads [4]. In just a few years, these giants have become so ingrained in the fabric of our society that they have even settled disputes amongst themselves; Microsoft and Google simply dropped their complaints against one another, seemingly because their fortunes are so intertwined that one’s success is bound to perpetuate the other’s [5]. Comparing the top three Silicon Valley companies of 2014 with the top three Detroit automakers of 1990 reveals that the former’s combined market capitalization was almost 30 times greater, achieved with only a fraction of the employees [2]. What is behind this mechanism that has flipped our understanding of economics upside down?

Surveillance capitalism in effect produces a feedback loop: first an enticing software product acquires initial input data from users, and then algorithms use that data to disseminate attention-grabbing outputs on a large scale [5]. This leads to ad clicks and more time spent in the app, generating advertising revenue while simultaneously gathering still more input data for the company, which in turn enables even more precise targeting, and so on. Users have been led to believe that their participation in these apps makes them the customers, but the true customers are the marketers vying for space on the platform, and attention is the product [2].
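
To make this loop concrete, here is a deliberately simplified sketch in Python of the dynamic described above. The function, weights, and numbers are invented purely for illustration and do not represent any platform’s actual code; the point is only that better targeting produces engagement, and engagement produces the data that improves targeting further.

```python
# A hypothetical, highly simplified model of the engagement/data feedback loop.
import random

random.seed(42)

def targeting_accuracy(data_points: int) -> float:
    """More accumulated data yields better targeting, with diminishing returns."""
    return min(0.9, 0.1 + 0.05 * data_points ** 0.5)

data_points = 10      # data the platform starts with on one user
total_ad_clicks = 0

for day in range(1, 8):
    accuracy = targeting_accuracy(data_points)
    # The better the targeting, the more of the 20 ads shown get clicked.
    clicks = sum(random.random() < accuracy for _ in range(20))
    total_ad_clicks += clicks
    # Every click produces new data points that feed tomorrow's targeting.
    data_points += clicks * 3
    print(f"day {day}: accuracy={accuracy:.2f}, clicks={clicks}, data={data_points}")

print("total ad clicks:", total_ad_clicks)
```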

Facebook is one company that has gone to particular extremes in exploiting user data. There is evidence that Facebook “buys data from third party brokers” in order to cross-reference it with user data on its own platform for more effective targeting [4]. Additionally, the company’s ownership of various apps has allowed it to traffic in the data of people who do not even have a Facebook account. This became horrifyingly clear to a number of users of the app “Flo Period & Ovulation Tracker,” who were targeted with ads for birth control or pregnancy preparedness based on data that Facebook had collected from that app and shared with marketers [6]. Subsequent reports have shown that there are “at least 11 popular apps…sharing sensitive data entered by user,” among them “six of the top 15 health and fitness apps” [6]. Yet Facebook and similar companies are able to get away with this because “collection of health data by non-health entities is legal in most U.S. states” as long as it is clearly disclosed in the “Terms of Service” agreement [6]. Some representatives from Facebook expressed dismay at the reports and declared a need to recommit to user privacy, but perhaps this is a case of a company grown so large that the left hand doesn’t know what the right hand is doing. When corporate operators can acquire sensitive and valuable information so easily, they will hardly think twice about moving forward, especially when any repercussions are diffused broadly under the company’s umbrella.
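
The “cross-referencing” described above can be pictured as a simple join between two datasets. The sketch below uses hashed email addresses as the matching key, a technique commonly described in ad-tech reporting; the broker, the data fields, and the matching method are assumptions for illustration, not details confirmed by the reports cited here.

```python
# Illustrative only: merging purchased broker records into platform profiles
# by matching on a hashed identifier, so raw emails never need to be compared.
import hashlib

def hashed(email: str) -> str:
    """Normalize and hash an identifier so two datasets can be joined."""
    return hashlib.sha256(email.strip().lower().encode()).hexdigest()

# Data the platform already holds about its own users (hypothetical).
platform_users = {hashed("alice@example.com"): {"user_id": 1, "interests": ["running"]}}

# Records purchased from a hypothetical third-party broker.
broker_records = [
    {"email": "Alice@Example.com", "recent_purchase": "prenatal vitamins"},
    {"email": "bob@example.com", "recent_purchase": "gaming console"},
]

# Attach broker data to platform profiles wherever the hashes match.
for record in broker_records:
    key = hashed(record["email"])
    if key in platform_users:
        platform_users[key]["recent_purchase"] = record["recent_purchase"]

print(platform_users)
```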

While the targeting and exploitation of sensitive data may seem intrusive enough, the very tactics that companies use to segment consumer data hint at even more worrisome future consequences. As everyone’s information is split and subdivided into algorithmically polished portfolios for targeting, we are likely to witness the spread of “personalized pricing,” in which two individuals are offered the same consumer product at different price points based on their respective financial data. Personalized pricing is already present in some places: one study demonstrated a Google search in which the price of a pair of headphones differed by a factor of four depending on the participant’s past searches [1]. Not only would the proliferation of personalized pricing endanger some of the reliable principles that accompany a free-market economy, but these pricing algorithms are likely to target the fiscally reckless at least as much as the affluent [7]. Certain data subgroups may become more attractive to marketers than others, leading to a new type of discrimination in which people are ranked on information they neither control nor consented to share, and are then selectively offered or denied opportunities. What this could look like in practice is demonstrated by a Carnegie Mellon finding in which “male users were shown high-paying job ads six times more often than their female counterparts” [7]. All this “mass personalization” may do serious damage to our collective interests, and to any transcendent sense of shared identity, when algorithms are built to cater to “the individual’s idiosyncratic tastes, preferences, views, and inclinations” [7]. In other words, why would someone dress like their friends when they can go to Instagram models for fashion tips, or ask their dad for advice when they can pose the same question to Reddit? Some of the engineers of these algorithms may counter that their targeting code can serve the uniqueness of every individual more accurately than the default environment of their upbringing. But for children and teens who are not yet fully formed individuals, a blank slate of personality is an invitation for marketers to present mediocrity, supplying the averages that work best across populations. It could be argued that an identity molded on the internet is no identity at all, and that comes with many consequences.
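
As a purely hypothetical illustration of personalized pricing, the sketch below charges two users different amounts for the same product based on traits inferred from their data. The profile fields, weights, and base price are invented for demonstration and are not drawn from the study cited above or from any real retailer’s system.

```python
# Hypothetical personalized pricing: same product, different price per profile.
from dataclasses import dataclass

@dataclass
class UserProfile:
    searched_luxury_brands: bool          # signals lower price sensitivity
    price_comparison_sites_visited: int   # signals higher price sensitivity
    estimated_income_bracket: int         # 1 (low) to 5 (high), inferred from data

BASE_PRICE = 50.00  # list price of the headphones

def personalized_price(user: UserProfile) -> float:
    """Adjust the base price by an inferred willingness-to-pay multiplier."""
    multiplier = 1.0
    if user.searched_luxury_brands:
        multiplier += 0.5
    multiplier += 0.1 * user.estimated_income_bracket
    multiplier -= 0.1 * min(user.price_comparison_sites_visited, 3)
    return round(BASE_PRICE * max(multiplier, 0.7), 2)

print(personalized_price(UserProfile(True, 0, 5)))    # e.g. 100.0
print(personalized_price(UserProfile(False, 3, 1)))   # e.g. 40.0
```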

Data is the gold buried in the mountain of our collective attention. To extract as much of it as possible, software engineers, working with social scientists, have devised techniques that grab attention and dig in deep. The momentary lag that occurs when a Twitter feed is refreshed is not your phone catching up, but rather an “intentional delay written into the code,” inducing a calibrated psychological effect that suspends the desire before relieving it [1]. YouTube (a Google subsidiary) is notorious for letting viewers fall down deep rabbit holes by lining up data-driven recommendations for video content. YouTube once built its recommendation algorithm on records of which videos a user clicked, but once it switched to optimizing for “watch time,” it began predicting our preferred content much more accurately. Estimates from 2017 had users “watching a collective 1 billion hours of YouTube videos per day, more than 70 percent of which had been served to us in the form of algorithmic recommendations” [1]. This figure is astounding and a testament to the subtle power software engineers have in snaring users’ attention. Snapchat captivates its young user base with a point system that rewards messages sent through the app, especially when the activity is consistent. Dating apps such as Tinder must do very little to keep their sex-starved users hooked, but they are known to feature newly added profiles more prominently, reeling in new users with a few early matches that are hardly representative of how those users will fare for the rest of their time on the app. The net result, with all of these addiction engines running on one device, is an average person who touches their smartphone 2,617 times per day [1]. Additionally, 93% of people tend to sleep within arm’s reach of their device and 75% bring their phones with them to the bathroom [1]. If smartphones indeed made us all smarter, these numbers might be encouraging, but the prevailing evidence suggests only that they make us sadder.
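
The consequence of YouTube’s reported switch from click-based to watch-time-based recommendations can be illustrated with a toy ranking example. The candidate videos and predicted values below are entirely made up, and the one-line scoring rules are a bare-bones stand-in for the far more complex models actually in use; the sketch shows only why optimizing for expected minutes watched favors content that keeps viewers on the platform longer.

```python
# Toy comparison of two ranking objectives: click probability vs. expected watch time.
candidates = [
    # (title, predicted click probability, predicted minutes watched if clicked)
    ("shocking thumbnail, shallow content", 0.30, 2.0),
    ("long video essay",                    0.10, 25.0),
    ("music mix",                           0.15, 40.0),
]

# Click-optimized ranking picks whatever is most likely to be clicked.
by_clicks = max(candidates, key=lambda v: v[1])

# Watch-time-optimized ranking picks the highest expected minutes (probability x duration).
by_watch_time = max(candidates, key=lambda v: v[1] * v[2])

print("click-optimized pick:     ", by_clicks[0])
print("watch-time-optimized pick:", by_watch_time[0])
```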

Attention mining is an environmental hazard whose byproducts include an appreciable decline in users’ mental health. Studies of self-reported well-being consistently point to social media use as having a starkly negative impact. A Gallup survey conducted between 2013 and 2015, for example, shows an association between increased Facebook activity and a decrease in self-reported mental health [8]. Likewise, a study of British social media users between the ages of 14 and 24 links use of Instagram, Snapchat, Facebook, and Twitter to negative changes in body image, depression, anxiety, and especially sleep [9]. The same study also makes evident that the users most satisfied with these platforms are the ones able to limit their time on them. Studies such as these barely scratch the surface of the impact that addictive programming may have on young users. What could possibly be gained from a service that wastes our time and crushes our self-esteem?

Of course, some argue that the benefits of surveillance technology warrant its use. One argument is that surveillance marketing supports businesses and therefore stimulates the economy as a whole. That is certainly a factor, and to reiterate Zuboff’s “surveillance dividend,” it is clear that surveillance has driven unprecedented growth in the tech sector [2]. To do away with surveillance capitalism in short order would be to “diminish commerce” and disrupt the economy [10]. There are also proponents of surveillance in our home lives. The Amazon Echo device—also known as “Alexa,” which to many skeptics is an Orwellian nightmare come to roost—collects data from its owner to share with Amazon, but it also uses that data constructively to serve the owner’s needs. The Echo is unique as a tool that can “remember context and past interactions, as well as knowing a customer’s location and meaningful details in order to maintain familiarity and be more efficient in future exchanges,” generating a new kind of “intimacy” between man and machine [11]. And it is not just Alexa; assistive technologies for the home have become a rising boon for the gerontological community. Researchers share a very promising outlook on surveillance tech such as wearables that monitor cardio data in elders to reveal undiagnosed medical concerns, or “Ambient Assistive Living” devices that use wireless frequencies to detect the “movement, respiration, and activities of daily living” of human bodies [12]. But the question remains whether this valuable data, which can be used to extend life spans, is also exploitable by commercially minded actors willing to target the elderly at their most vulnerable.

It has become abundantly clear that surveillance tech, while not entirely nefarious, poses a deep ethical quandary. However, because the problem has turned malignant only relatively recently, the governmental bodies responsible for preventing such unethical business practices are still struggling to understand it. There have been slow movements toward regulating Big Tech. The Supreme Court issued a landmark ruling protecting digital privacy, stating that the “claim that searching a cell phone is ‘materially indistinguishable’ from searches of ordinary material items, is ‘like saying a ride on horseback is materially indistinguishable from a flight to the moon’” [13]. Silicon Valley alum Andrew Yang ran a surprisingly successful presidential campaign with a proposal for a “Department of the Attention Economy” on his platform to “deal with the issues generated by new online technologies” [13]. And just this week at the time of writing, the House Judiciary Committee finalized its 16-month investigation with a scathing report declaring that Facebook, Amazon, Apple, and Google have engaged in illegal monopoly practices [14]. Still, these measures hardly cover the breadth of the issues raised by surveillance capitalism. Regulatory action alone cannot provide a solution, especially when the combined annual budget of the Federal Trade Commission and the Department of Justice is less than what Facebook can collect in three days [15]. Federal enforcement against monopolization is simply not cutting it. The most troubling trend is the absence of software and technology experts anywhere near regulatory positions. Silicon Valley businesses are paying large sums to snatch up as many lawyers from the public sector as they can [15]. For many engineers, the pay and benefits of these companies must simply be too much to turn down; of 2016’s American computer science Ph.D. graduates, for example, 57% took industry jobs and “only 11% became tenure-track faculty” [16]. A shrinking number of capable engineers, analysts, and developers are willing to stand up to their ballooning corporate overlords.

Such software engineers and analysts are reckoning with a growing set of responsibilities toward the public. Back in 1984, Rev. Canon George Tolley wrote that engineers, who often carry out the tasks of others, need to question their “authority” and the “extent of their responsibility” [17]. Seeking plausible deniability is not the way of an engineer; in fact, “professional status cannot be claimed by those seeking to evade the moral consequences of their work” [17]. The engineer may feel pressured to obey their employer, and according to the Software Engineering Code of Ethics they should, unless “a higher ethical concern is being compromised; in that case, inform the employer or another appropriate authority of the ethical concern” [3]. There is not just a general imperative but a professional obligation to refuse to participate in the development of software that aims to corrupt user attention, and likewise to refuse to write algorithms that extract and sort data in exploitative ways. If abstinence feels like a futile measure, the engineer is further obligated to warn the public of harmful software and to report to the authorities any questionable behavior by individuals involved in its development. Furthermore, there must be more pressure within the engineering community to take less desirable or voluntary positions working with regulators in the public sector in order to limit the powers of Big Tech. Finally, software engineers need to consciously create better systems that are less addictive and more transparent to users. Improved apps and platforms should be vetted by social scientists and should prevent solicitor programming and social programming from sharing the same digital “spaces,” so people can enjoy the benefits of their social media engagement without being endlessly pursued. These are not impossible tasks, but they will require brave individual engineers who are willing to fight for positive change.

Surveillance capitalism is only in its early stages, and it must be slowed before it creates an even more damaging future. Our valuable time is drained away and discarded in exchange for our data. A young generation may be forever damaged by attention-sapping software and by self-perceptions formed on social media platforms. Corporations may have more power over our purchasing agency than we do ourselves, and these concerns will only worsen as more data is compiled. The government will be required to break apart the biggest monopolies in tech, which over time will lead to less exploitation. At the same time, technical experts need to be more deliberate in devising algorithms that suit the needs of not just their clients but the public as well. The world is counting on its engineers to save the users from being used.

By Noah Samuels, School of Architecture, University of Southern California


About the Author

At the time of writing this paper, Noah was in his fifth and final year of the Bachelor of Architecture program and planned to work in architecture and design after graduating.

References

[1] M. MacGuineas, “Capitalism’s Addiction Problem,” The Atlantic, April 2020. [Online]. Available: https://www.theatlantic.com/magazine/archive/2020/04/capitalisms-addiction-problem/606769/

[2] S. Zuboff, The Age of Surveillance Capitalism : The Fight for a Human Future at the New Frontier of Power, First ed. New York: PublicAffairs, 2019. [E-book] Available: https://uosc.primo.exlibrisgroup.com/permalink/01USC_INST/hs9vaa/alma991043289886303731

[3] “Software Engineering Code of Ethics and Professional Practice: IEEE-CS/ACM Joint Task Force on Software Engineering Ethics and Professional Practices,” Science and Engineering Ethics, vol. 7, no. 2, pp. 231–238, June 2001.

[4] C. Doctorow, “How to Destroy Surveillance Capitalism,” One Zero: Medium, August 25, 2020. [Online]. Available: https://onezero.medium.com/how-to-destroy-surveillance-capitalism-8135e6744d59

[5] J. Powles, “Google and Microsoft Have Made a Pact to Protect Surveillance Capitalism,” The Guardian, May 2, 2016. [Online]. Available: https://www.theguardian.com/technology/2016/may/02/google-microsoft-pact-antitrust-surveillance-capitalism

[6] S. Schechner and M. Secada, “You Give Apps Sensitive Personal Information. Then They Tell Facebook,” The Wall Street Journal, February 22, 2019. [Online]. Available: https://www.wsj.com/articles/you-give-apps-sensitive-personal-information-then-they-tell-facebook-11550851636

[7] K. Yeung, “Five Fears About Mass Predictive Personalization in an Age of Surveillance Capitalism,” International Data Privacy Law, vol. 8, no. 3, pp. 258-269, August 2018. [Online]. Available: https://uosc.primo.exlibrisgroup.com/permalink/01USC_INST/273cgt/cdi_proquest_journals_2239229342

[8] H.B. Shakya and N.A. Christakis, “Association of Facebook Use With Compromised Well-Being: A Longitudinal Study,” American Journal of Epidemiology, vol. 185, no. 3, pp. 203-, 2017. [Online]. Available: https://uosc.primo.exlibrisgroup.com/permalink/01USC_INST/273cgt/cdi_gale_infotracacademiconefile_A490132138

[9] “How Heavy Use of Social Media is Linked to Mental Illness,” The Economist, May 18, 2018. [Online]. Available: https://www.economist.com/graphic-detail/2018/05/18/how-heavy-use-of-social-media-is-linked-to-mental-illness

[10] G. Whelan, “Trust in Surveillance: A Reply to Etzioni,” Journal of Business Ethics, vol. 156, no. 1, pp. 15-19, April 30, 2019. [Online]. Available: https://uosc.primo.exlibrisgroup.com/permalink/01USC_INST/273cgt/cdi_proquest_journals_199318018

[11] E. West, “Amazon: Surveillance as a Service,” Surveillance & Society, vol. 17, no. 1/2, pp. 27-33, 2019. [Online]. Available: https://uosc.primo.exlibrisgroup.com/permalink/01USC_INST/273cgt/cdi_proquest_journals_2208649271

[12] L. Carver and D. Mackinnon, “Health Applications of Gerontechnology, Privacy, and Surveillance: A Scoping Review,” Surveillance & Society, vol. 18, no. 2, pp. 216-230, June 16, 2020. [Online]. Available: https://uosc.primo.exlibrisgroup.com/permalink/01USC_INST/273cgt/cdi_proquest_journals_2427317719

[13] J. Torpey, “A Sociological Agenda for the Tech Age,” Theory and Society, August 1, 2020. [Online]. Available: https://doi.org/10.1007/s11186-020-09406-0

[14] J. Clayton, “US Tech Giants Accused of ‘Monopoly Power’,” BBC News, October 6, 2020. [Online]. Available: https://www.bbc.com/news/business-54443188

[15] A. Kantrowitz, “‘It’s Ridiculous’: Underfunded U.S. Regulators Can’t Keep Fighting the Tech Giants Like This,” One Zero: Medium, September 17, 2020. [Online]. Available: https://onezero.medium.com/its-ridiculous-underfunded-u-s-regulators-can-t-keep-fighting-the-tech-giants-like-this-3b57487b4d63

[16] S. Zuboff, “You Are Now Remotely Controlled,” New York Times (Online), New York: New York Times Company, January 24, 2020. [Online]. Available: https://www.nytimes.com/2020/01/24/opinion/sunday/surveillance-capitalism.htm

[17] C.G. Tolley, “Engineering and Morality,” Electronics + Power, vol. 30, no. 6, pp. 477–479, June 1984.

[18] Z. Graves and R. Cook-Deegan, “Incorporating Ethics Into Technology Assessment,” Issues in Science and Technology, vol. 36, no. 1, Fall 2019. [Online]. Available: https://link.gale.com/apps/doc/A604716156/AONE?u=usocal_main&sid=AONE&xid=81037665

[19] D. Thompson, “Why Surveillance Is the Climate Change of the Internet,” The Atlantic, May 9, 2019. [Online]. Available: https://www.theatlantic.com/ideas/archive/2019/05/crazygenius-season-three-privacy-internet/589078/

Links for Further Reading

For an in-depth look at surveillance capitalism: https://onezero.medium.com/how-to-destroy-surveillance-capitalism-8135e6744d59

For more about how Facebook gets data from other apps: https://www.wsj.com/articles/you-give-apps-sensitive-personal-information-then-they-tell-facebook-11550851636

For more about the link between social media and mental illness: https://www.economist.com/graphic-detail/2018/05/18/how-heavy-use-of-social-media-is-linked-to-mental-illness