Millennials today occupy a unique position of power. We straddle the social media revolution, and many of technology’s most rapid advances are within our living memory. We have witnessed the rise and fall of the mighty Bebo, hold fond nostalgia for MSN, and recall life before and after Facebook took up residence in our joint consciousness. We are cultural brokers for generations who have lived most of their lives before or after the advent of social media, and as such, we have a unique ability and responsibility to make informed choices about the role of technology in our lives.
Technology asks us to surrender our identities: are we making conscious decisions?
The encroachments of technology upon the privacy of daily life have made me cringe for years. The impulse that caused me to refuse to sign up to Facebook in the face of peer pressure as a fourteen-year-old makes me intensely uneasy walking past the facial-recognition eye at Birmingham New Street Station. As with many intrusions on personal privacy, the fact that the all-seeing eye clocks me is practically immaterial; I’m not out to enact some dastardly plan in Birmingham’s transport system. However, such invasions deprive the average person of the ability to choose whether to be seen and recorded. Intelligent technology has become so established in the fabric of our lives that our relationship with it is now one of impulse; in this process, we have lost some awareness of the choices we must make regarding it.
That social media and notification culture is worryingly addictive is hardly news, but it is often dismissed with a defeated sigh as an inevitable drawback of modern life. Consumers have become used to blaming themselves for hours procrastinated away on banal apps or Netflix binges, but in reality, much of the blame sits with the people behind the screens. As Max Stossel has publicised with his ‘Time Well Spent’ campaign, tech companies generally seek to maximise time spent on their site or device and measure success in these terms, whether or not this is what consumers actually want or benefit from. Consequently, equipped with some neuroscience and access to a market, designers can effectively programme how our brains respond to technology, bombarding the user with teasing alerts and episodes that play one after another into oblivion. Ever felt a sense of disproportionate panic upon thinking your phone is lost, or even that you have left it at home for the day? This crippling dependence is not half so worrying as the fact that it is hardly of our making; at no point in signing up to a ‘free’ platform is one asked to give informed consent to paying in other ways. This payment is more real than money; it is debited in time wasted, attention diverted, relationships commoditised, and news misunderstood. Because these losses don’t stack up in a bank account, they are easier to miss. As no traditional transaction has taken place, it is understandable that we chalk these experiences up to the nature of modern life, or our own personal weakness.
This payment is more real than money; it is debited in time wasted… these losses don’t stack up in a bank account
More existential challenges plague the development of artificial intelligence as it moves at breakneck speed. So-called personal assistants such as ‘Google Home’ have brought AI to the consumer market and sit on kitchen counters across the world listening out for commands, ordering groceries, and controlling home facilities. Even the most inconsequential websites request login data, and Google’s ‘DeepMind’ trawls the online universe, collecting millions of unsuspecting NHS patients’ records in the name of advancing ‘deep learning’. Repeatedly, what the industry likes to peddle as advances in a narrative of human ‘progress’ are fundamentally inflected with the concepts and biases of their creators. Designers of pretty harrowing sex robots are unwilling to acknowledge that the predominantly ‘female’ machines they manufacture are objectification at its most extreme, and promote deeply damaging attitudes.
Concerns about something as seemingly innocuous as an intelligent personal assistant can resemble the ramblings of someone wearing a tinfoil hat, clutching a copy of Orwell’s 1984. We are well versed in how social media helps bring people together, and artificial intelligence appears to be the exciting future sci-fi has been promising us for decades. But there is more than meets the eye in these developments. Companies such as Google, Amazon, and Facebook may have ethics boards, but they are obscure and surprisingly unbridled by external regulations. Beyond being able to develop dicey practices and products, lack of restraint affords such companies control over the conversation about our technological future. They are not challenged by the application of external standards of much substance, and consumers have become accustomed to a passive relationship. Consequently, dialogue about mechanization becomes an echo chamber in which concerns about privacy, quality of life, and the creation of robot overlords are conspicuously quiet.
There is more afoot here than a well-worn narrative of corporate greed. Silicon Valley is a movement quite unlike other artefacts of modernisation, and has a culture that borders on religious fanaticism. The seemingly limitless possibilities of the internet opened the way to a utopian vision, in which technology could offer solutions to any human problem. This sounds handy at first, but it has extended to recasting fundamental questions of human existence in technological terms. An offshoot of Google now specialises in attempting to make death ‘optional’, casting the process of aging as a ‘code to be cracked and hacked’ rather than a defining aspect of what it means to be human. With regard to DNA this is technically true; but it establishes a precedent that even the metaphysical can be reduced, and ultimately controlled, by the privileged few who have access to the necessary codes.
Dialogue about mechanization becomes an echo chamber
So far, these risks sound practical and containable. But at the crux of my cringing are the sinister implications the technological revolution has for concepts of identity, autonomy, and self-ownership. Of course, personal information has been recorded by others throughout human history, but the scale and intimacy of the data now collected is wholly unprecedented. The sum of the personal data that we deposit online amounts to an online identity, a virtual version of ourselves that mirrors our real preferences and habits. But from the moment we enter them into a social media site or online account, other people gain a degree of ownership over these constituent pieces of our identity. This could be the appropriation of profile pictures by others, or the selling-on of data by one company to another without consent. In a sense, when external parties exercise control over aspects of our identity, we lose some ownership of ourselves.
This need not paint a picture of despair. As consumers, we have an enormous amount of power to mould how technology develops and our responses to it; it was consumer power alone that catapulted Facebook to what it is today. However, the challenge of our information age is to be more than passive consumers. It is imperative that we make the inconvenient effort to consider the everyday choices about our data privacy that technology calls upon us to make, and not dismiss them as insignificant, or worse, lose awareness that there are choices to be made at all. Participation in dialogue about the future of technology is also crucial, to prevent tech companies from becoming shielded behind a wall of specialist understanding that grows progressively more insurmountable for the average person. By making conscious decisions and observations about its presence, we can cultivate better control over the tech in our lives, and in doing so, keep our identities our own.