Panic Button Added to Social Networking Sites
From Tech Watch 07/12/09:
Facebook and other social networking sites have agreed to adopt recommendations drawn up by the government to provide a panic button on their web pages.
Bebo has already introduced this measure, which basically consists of a highly visible button that kids can click on to report offensive or inappropriate material.
The idea, which is something that the Child Exploitation and Online Protection Centre has been recommending for some time, was put forward by the government’s adviser on online safety, Tanya Byron.
Other guidelines will stress the need for sites to provide parental control options so that parents can better supervise their offspring’s online activities, and will also apply to chat rooms, instant messaging services and the like.
I’ve been advocating something along these lines for years. However, in my opinion, a button which simply provides a means of reporting abuse does not go far enough: it should first and foremost act as a ‘stop-this-and-get-me-out-of-here-now’ function. Reporting the alleged abuse is surely secondary to stopping it in its tracks.
So, in addition to reporting abuse, pressing the button should also have more immediate functionality. For instance, a single press could bring up a window which overrides any window activity below it and freezes input of new communication to the user’s account (be that messages, a chat conversation, items posted on a ‘wall’ and so forth). This window could give the victim a chance to select, quickly and easily, the contacts they wanted to stop all communication with – based on whom they had been in contact with most recently.
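As a rough illustration of the mechanism described above, here is a minimal sketch in Python. All class and method names are hypothetical – no real platform exposes this API – but it shows the two key behaviours: a single press freezes inbound communication, and the victim is shown their most recent contacts to choose from.

```python
from collections import deque

class PanicHandler:
    """Hypothetical sketch of a panic-button handler: one press freezes
    all inbound communication and surfaces the most recently active
    contacts for one-click blocking. Names are illustrative only."""

    def __init__(self, recent_limit=10):
        self.frozen = False
        # Most recently heard-from contacts, oldest first.
        self.recent_contacts = deque(maxlen=recent_limit)

    def record_incoming(self, sender):
        """Called for each inbound message, chat line, wall post, etc.
        Returns False (message held, not delivered) once frozen."""
        if self.frozen:
            return False
        if sender in self.recent_contacts:
            self.recent_contacts.remove(sender)
        self.recent_contacts.append(sender)
        return True

    def press_panic(self):
        """Single press: freeze the account and return recent contacts,
        most recent first, for the victim to select from."""
        self.frozen = True
        return list(reversed(self.recent_contacts))
```

The point of keeping only a short recent-contacts list is that a victim in distress should not have to search their whole friends list – the likely perpetrator is almost certainly among the last few people to message them.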
This would be far preferable to simply hitting the close button in the appropriate window: while that would stop the abuse for the moment, the victim would still be subjected to witnessing it the next time they logged back in.
The window could prevent the perpetrators from successfully contacting them again at that point, either permanently or for a pre-defined cooling-off period – perhaps 24-48 hours. Either way, if the block is made permanent and the victim wishes to do so, the abuse could be reported to the service provider at this stage.
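The two kinds of block just described – a temporary cooling-off period versus a permanent block that can trigger a report to the provider – could be sketched as follows. Again, this is an illustrative Python sketch under assumed names, not any site’s actual implementation; the 48-hour default is one choice from the 24-48 hour range suggested above.

```python
from datetime import datetime, timedelta

class BlockList:
    """Hypothetical sketch: temporary (cooling-off) versus permanent
    blocks, with optional escalation to the service provider when a
    block is made permanent."""

    COOLING_OFF = timedelta(hours=48)  # assumed default within 24-48h

    def __init__(self, report_callback=None):
        # sender -> expiry datetime, or None for a permanent block
        self.blocks = {}
        self.report_callback = report_callback

    def block(self, sender, permanent=False, report=False, now=None):
        now = now or datetime.utcnow()
        self.blocks[sender] = None if permanent else now + self.COOLING_OFF
        if permanent and report and self.report_callback:
            self.report_callback(sender)  # report abuse to the provider

    def is_blocked(self, sender, now=None):
        now = now or datetime.utcnow()
        if sender not in self.blocks:
            return False
        expiry = self.blocks[sender]
        if expiry is None:
            return True  # permanent block
        if now < expiry:
            return True  # still within the cooling-off period
        del self.blocks[sender]  # cooling-off period has lapsed
        return False
```

Separating the two cases matters: the cooling-off period lets a heated exchange defuse without a lasting rupture, while the permanent route bundles blocking and reporting into one decision the victim makes once, calmly, after the immediate threat has been stopped.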
In this way, a stop or ‘panic’ button on social networking services could operate in a similar way to what I advocated in my dissertation for virtual worlds which allow for sexual activity between avatars:
Through the modality of code, avatars capable of engaging in sexual activities could be required to have a ‘stop’ function – essentially allowing teleportation or some other means of escape from the unwanted attention of another avatar. This means of regulation represents an ex ante rather than ex post solution – the victim need never be subjected to the trauma in the first place.
Surely this is the right way of regulating the problem of online abuse (in whatever form)? After all, it makes sense to take advantage of the fact that the environment is an artificial binary construct, which allows a level of regulatory efficiency to be achieved that real-world regulators can only dream of.