By David Brake
Hardly a day goes by without a story in the news about politicians, celebrities or ordinary members of the public revealing embarrassing or discrediting details about themselves and those around them online. On a smaller scale, we also see this happening among our friends, family and co-workers, as images or comments are seen by people for whom they were never intended, or are misinterpreted and taken out of context. Many see this as nothing more than a source of occasional entertainment. Those who seemingly inadvertently wash their dirty linen in public are often painted as either technologically clueless (if older) or, if they are younger, attention seekers with no sense of privacy.
I've been researching online self-revelation for a decade and I would suggest this is a much more complicated phenomenon, morally and socially. The tendency to blame the victim – the person whose information leaks out – overlooks the ever-increasing difficulty of controlling information about ourselves and the commercial and technological drivers which make the problem worse. Greater individual vigilance about how and to whom we post information about ourselves is indeed part of the solution, but a single-minded concentration on an individual's responsibility to, for example, use privacy controls lets social media companies and governments off the hook prematurely.
I and other researchers have found that while users of a social network may be aware in principle that what they post is readable by hundreds, thousands or even millions of people, it is a truth which is nearly impossible to fully grasp. The majority of Facebook users, for example, restrict postings to 'friends only'. However, the median Facebook user has 200 'friends', and more than a quarter of Facebook users under 30 have more than 500 'friends'. Almost one in five Facebook users post to 'friends of friends' – and a study done by Facebook's own researchers suggests that with even 100 friends you would on average have 27,500 friends of friends.
Another study suggests the average Facebook user reaches almost four times as many people as they think with their posts. In cases like this, the privacy tools provided by Facebook and other similar organisations are worse than useless – they don't restrict people's audiences in the way that they believe they do, but they do give a false impression of privacy and thus encourage people to reveal more about themselves than they should.
Which, arguably, is exactly what social media companies want us to do. They need us to sign up for their services in ever greater numbers and to spend an ever-increasing amount of time on their services. This can only happen if we become accustomed to revealing more and more about ourselves. Mark Zuckerberg, Facebook's founder, told a journalist: "In a decade 1,000 times more information about each individual member may flow through Facebook". He also predicted in 2009 that "people are going to have a device with them at all times that is [automatically] sharing".
Modern mobile phones are already capable of this and there are already "life logging" tools that broadcast your location and/or take pictures of your surroundings continually – hence the privacy concerns over, for example, Google Glass. In the coming years technology will become ever cheaper and all manner of small personal devices we carry will network together with the sensors which surround us into an "internet of things". It will be harder and harder to avoid leaving a digital trail behind us wherever we go.
This digital trail can last a lot longer than we think. Social media posters tend to think of what they post as akin to a conversation rather than as contributing to a permanent record. Social media services like Facebook and Twitter don't make it easy to find older postings, which reinforces that impression of transience. But of course very often material posted months or years earlier is in fact still findable using search engines. Behaviour which we tolerate from, for example, a teenager may not be seen the same way years later. Consider, for example, Boris Johnson's Bullingdon Club antics.
Is it possible to protect yourself by withdrawing from social media? Certainly in principle, but there are great difficulties in practice. First of all, our social lives are increasingly being managed online. If you aren't part of the dominant social networking services but your friends are, then you risk missing out, and there is already evidence that those who have attempted to disconnect from online social networks are often seen as 'holier than thou' or 'standoffish'. Moreover, even without a social media presence ourselves we can end up leaving a revealing 'shadow' profile, made up of pictures of us or things written about us, over which we often have little control.
When this causes us harm or embarrassment it is not generally because people mean us harm but because many of us live lives that are to some extent compartmentalised. Those who encounter us in one context may not be aware that what they are sharing could be damaging in another.
What is to be done? Well, better education is certainly part of the answer. At least in the UK, where there is an 'internet safety' policy agenda, it appears to be focused primarily on children and protecting them from the twin perils of porn and paedophiles. This is far too narrow a view. Firstly, young people are much more likely to find their self-revelation on social media a problem because it loses them friends or job prospects than they are to find themselves stalked by a stranger. Secondly, it overlooks the extent to which social media use has spread across all of society (with the exception of many of the elderly). For older people, the consequences of harm to their employment, reputations or personal relationships may be even more severe than for less settled younger people.
Governments should also offer support, from primary school all the way through to universities, in the development of a critical 'digital literacy' agenda. This would focus not merely on preventing the most high-profile forms of harm but also on maximising the benefits of internet use and enabling a critical understanding of the commercial and other institutional contexts in which digital communication takes place.
Social media companies generally offer users control over what they share but don't help users really visualise their audiences as they post. Perhaps, for example, any public posting you make on a social media service could be given a light pink background in or around the posting box, changing to yellow for posts which reach (or potentially reach) 50 friends or more, and green for fewer? Postings whose recipients include parents, employers or partners could be similarly colour-coded (even if only as an option).
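To make the idea concrete, here is a minimal sketch of how such a cue might be computed. The thresholds, colour names and function are my own illustrative assumptions; no existing platform works exactly this way.

```python
# Illustrative sketch only: a hypothetical rule for colour-coding a posting
# box by likely audience size, along the lines proposed above.

def audience_colour(visibility: str, estimated_reach: int) -> str:
    """Return a background colour hinting at how many people may see a post."""
    if visibility == "public":
        return "light pink"   # anyone on the web could see this
    if estimated_reach >= 50:
        return "yellow"       # a sizeable audience that is hard to picture
    return "green"            # a small, relatively familiar audience
```

For instance, a 'friends only' post reaching an estimated 30 people would come back "green", while any public post would come back "light pink" regardless of reach.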
Perhaps social media organisations could automatically check posts before they are placed online and if the content matches key phrases like 'I hate my job' they could alert the poster that what they are about to post could get them in trouble. Thinking about the extent to which social media posts can linger, better tools for control over digital 'remembering' may be called for.
Social media providers could, by default, request that archives of postings be automatically purged from search engines after a certain interval of time has elapsed unless the owner of such data chooses otherwise, and providers could give tools that would make it easier for users to hide large numbers of their postings at once using certain criteria (their date or the presence of keywords, for example).
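The bulk-hiding tools described above amount to a filter over a user's archive using the two criteria mentioned, date and keywords. A minimal sketch, with field names and the decision rule assumed for illustration:

```python
import datetime as dt

# Illustrative sketch only: a hypothetical rule deciding whether an old post
# matches a user's chosen hide-in-bulk criteria (age or keywords).

def should_hide(posted_on: dt.date, text: str,
                older_than: dt.date, keywords: list[str]) -> bool:
    """True if the post predates the cutoff or contains any chosen keyword."""
    if posted_on < older_than:
        return True
    lowered = text.lower()
    return any(k.lower() in lowered for k in keywords)
```

A provider could run such a rule across a whole archive at once, so that a user need not hunt down years-old postings one by one.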
This is also at the heart of the European Court of Justice's ruling that people should have a 'right to be forgotten'. There are certainly problems with the loose definition provided of what 'deserves' to be forgotten and there are obviously ways for 'forgotten' information to be found regardless. But the principle that even true-but-discreditable facts about people should at some point be forgotten is enshrined in law already through the notion of spent convictions in the Rehabilitation of Offenders Act.
Social media providers and other big internet companies may question the need for such measures in the first instance and may be reluctant to implement anything like what I've described. But the government, educators and members of the public all have a role in persuading social media providers to behave more responsibly. Social media companies wield a great deal of power over their users, whether they seek to or not, and just as we regulate conventional media organisations because of the public interest in a properly-run media so we should have the right to regulate social media – not in order to inhibit free speech but in order to empower users to make better decisions.
We will all eventually come to terms with the way that social media use is causing social friction. The danger is that at the moment the rate of change is outpacing our ability to adapt. Individual reflection and the exercise of personal responsibility will help to solve these problems in the longer term, but measured intervention by governments, regulators and educators also has a role in easing the transition.
Dr David R Brake is author of Sharing Our Lives Online: Risks and Exposure in Social Media, published by Palgrave.
The opinions in Politics.co.uk's Comment and Analysis section are those of the author and are no reflection of the views of the website or its owners.