Datafication is a term coined in 2013 by Viktor Mayer-Schönberger and Kenneth Cukier to describe the process by which human activity and interactions are digitally mapped into data, which businesses and governments then feed back into the real world to optimise their services based on the information gathered. This manifests in ways both seemingly benign, such as businesses and cities changing the allocation of their resources to suit the activities of their customers and citizens, and more concerning, such as employers and banks using an individual's data to determine their employability or assess an application for a loan.
This trend carries enormous implications for privacy and freedom, particularly when systems are either faulty, such as facial recognition software used by law enforcement misidentifying black faces at far higher rates, or outright abused, as in China's use of mass surveillance to monitor and control every aspect of its citizens' lives. As enormous as these issues are, the focus of this piece is not on how datafication has changed the way data presents us, but rather on how we have allowed datafication to change the way we perceive ourselves.
Social media is often blamed for the polarisation of modern discourse. An oft-identified cause is the way algorithms track the preferences of users and present them with other users and information aligned with those preferences, insulating them from alternative points of view and stifling healthy conversation. This is all true, though it shifts the burden of responsibility from the user solely onto the system. The system is reinforcing beliefs the user expressed to begin with: in other words, had the user originally been seeking out a wide range of information and sources, the 'bubble' created for them would be far wider.
It is the individual who bears ultimate responsibility for their behaviour online and in the real world, and for ensuring their actions and beliefs are based on the most accurate information available. This is not to absolve the system of all responsibility, however, but to suggest that its most pernicious influence is exerted not after the user has signed up, but at the moment they do.
Upon joining social media such as Facebook, or less obvious forms such as dating apps, users are asked to input information about their lives. Most of this is unremarkable stuff, such as one's name, education, gender and birthday, all of which have been used as identifiers in everyday life since human interaction began. There are also options for religious and political views, which represent a clearer indication of where the system begins to exert its influence. Neither is mandatory, but their presence creates an implicit equivalence between one's personal identifiers and one's beliefs.
In the real world, personal identifiers have value because of their stability: you offer people your name in the knowledge that it is unlikely to change the next time you meet, or any time after that. In some cases, it is also useful in identifying you as part of a specific family and ancestry. Similarly, an employer might request your education history as a measure of your prior achievements. Short of the aspiring employee telling a lie, a person's education is a fixed part of their history and thus a stable measure from which to gauge their suitability for a job.
A healthy belief system, on the other hand, must be flexible. As one moves through life, acquiring skills, knowledge and experience, it would be a feat of extraordinary closed-mindedness if one's perspectives and opinions did not change at all. This does not mean 180° lurches from one extreme of the political spectrum to the other, but small, nuanced shifts accumulating over time into an intellectual tapestry unique to every person.
Brain Machine Interface, 2018, by Drs. Greg Dunn and Brian Edwards
By requiring prospective users to plant a flag in their beliefs, however, such systems solidify what should be fluid and personal aspects of human growth into fixed categories. Having to go back into one's profile to reflect a change in belief is a psychologically disruptive obstruction, as is the threat of isolation and condemnation from followers accrued with the help of the system's algorithm, should the change of mind be made obvious. That is to say nothing of those whose beliefs do not comfortably fit into the available categories at all.
In this way, complex individuals are trained to simplify their sense of self into packets of predictable, easily managed data. This happens in myriad ways, few as literal as that of Facebook. As social media has risen to become a digital form of public gathering, so the format of each service encourages users to alter their behaviour from the beginning to make themselves more likeable, more visible and more predictable. Even if you are not concerned with gathering a large public following, Twitter's short-message format encourages complex thoughts to be reduced into simpler, more easily digestible ones, while the threat of widespread condemnation deters any risky or experimental thoughts which do not slip easily into the most popular orthodoxies. Instagram requires you to identify moments from your life deemed important or relevant enough to present to the world, a process of self-filtering that reduces the messiness and complexity of existence into a series of predetermined snapshots. In other words, data.
One could argue that this form of filtering is simply a digital version of how we present ourselves to friends and family in real life. There is always a selection process in place when discussing the events of our lives with others. The difference is that real-life interactions are individualised, whereas those on social media are generalised. The information we offer a person in real life depends on our relationship with that person. A conversation with a casual acquaintance will in all likelihood be non-specific and sanitised, whereas a conversation about the same topics with a close friend, a lover or a family member will be more intimate and personal.
Social media treats all relationships as identical: that of the follower and the followee. Because these services are now among the primary ways in which people interact, they have a subtly reductive effect on the way we see ourselves and others. The consequences can be seen not only in the polarisation of political and social views, but also in our popular entertainment - 'character work' in television and film has shifted from who a character is to the external events they have been through - in the horror at 'causing offence', the obsession with identity (perhaps the most succinct expression yet of the reduction of the self to pre-determined data points) and the infestation of buzzwords and platitudes in modern speech.
In a sense, what has happened is the mirror image of datafication. If datafication is real life being turned into data and then back again, what I am describing is the process of human beings diminishing themselves into forms and categories determined by data systems, then having their data fed back to them to reinforce their new, compartmentalised selves. The principle nevertheless remains the same: the complex and the individual have been reduced to the simple and the general by our relationship with data and the systems now using it to shape our lives both online and in the real world.
One of the biggest problems with this trend towards hyper-simplified categorisation is that it makes differences between people seem starker and amplifies problems, creating the false perception that diminishing social issues are endemic. If the datafication trend is to be broken, it will be through rediscovering an appreciation for what a widely prosperous, just and civilised world we have built for ourselves, despite its continuing flaws, and how that world was built on deepening our knowledge of the complexity of human beings, individually and as part of communities and societies, not simplifying it.