Many teachers may be wondering what to do if cyberbullying takes place in their classroom, especially in the aftermath of a recent tragic case in which a young girl in Ireland died by suicide following cyberbullying. Some are calling for legislation to make cyberbullying illegal, and there have been significant recent efforts in the UK and EU to regulate social media platforms’ liability for harmful content on their sites. As I have argued previously, it is necessary to ensure that platforms take responsibility for assisting on this matter and that the tools and responses they provide are effective. But we also need to be careful about how we design such regulation, lest it fail to address the problem, as some past legislation has.
Regulation alone will not solve bullying and cyberbullying, which are broader cultural and social problems. Australia introduced legislation under which a government-appointed e-Safety Commissioner has the authority to review online content that was reported to a company but on which the company took no action. If the Commissioner establishes that the content needs to be taken down, the company faces the prospect of a fine if it does not remove the content within a designated deadline. Last year’s review of the 2015 legislation shows that the Commissioner worked effectively with the companies to remove much of the reported content relatively quickly. Something similar has recently been proposed in Ireland too.
Regardless of the outcome of these calls for regulation, most of the major social media platforms, such as Facebook, Snapchat, Instagram and Twitter, currently offer tools aimed at addressing cyberbullying, and teachers need to be familiar with them. It is the industry’s responsibility to ensure that these tools are well known to children and to teachers alike. While most children and teens, and perhaps many teachers, will be familiar with blocking and reporting, which allow users to prevent abusive users from contacting them further or to report abusive content to the platform, fewer may have heard of Safety or Help Centers, which provide advice on what to do when cyberbullying happens and how to assist the child both on and off the platform itself.
The EU Kids Online network, of which I am a member, has recently completed surveys with internet-using children aged 9 to 17 in 14 European countries so far. The results from Norway, for instance, where children’s independent use of digital media is very high, show that almost two thirds have used the blocking button, while about 10% said they did not know what it was or had not seen it. Two fifths have used a reporting button, and about 15% did not know what a reporting button was or had otherwise not seen it. Meanwhile, only 10% have actually used Safety or Help Centers, although more than half say they know what they are; 35% did not know what these were or had never seen them. Results of the same survey from Serbia also indicate that children are less familiar with Safety and Help Centers than with reporting and blocking tools: there, 84% of children have seen blocking options, 76% have seen reporting, and 59% have seen Help or Safety Centers.
Some bigger companies, like Facebook, have developed more nuanced tools such as social reporting, which is based on neuroscience research and allows users to reach out to those who they feel have bullied them. Users can contact those who offended them with pre-made messages, designed to trigger empathy, and ask them to take the hurtful posts down. In an earlier small-scale study I conducted in Norway, I found that only 13% of the 152 children surveyed had actually seen this tool.
Awareness is, of course, only part of the equation; the other part is ensuring that these tools are actually helpful, and they may not always be. As my focus group respondents underscored, taking down content may not solve the offline component of the problem, nor address the relational conflict between two or more teens. Further, blocking or muting content may leave victims wondering what is now being said about them behind their backs. Reporting to the company may provoke more mocking or even retaliation against the victim: reporting may be perceived as a sign of weakness, and the child may fear that the perpetrator will find out who reported them once the content is taken down. Children may therefore avoid reporting even when they know it is available. Telling teachers that they feel bullied online or excluded from a community is not an easy step for a young person to take, and adult involvement is not always deftly or skillfully handled. My respondents also pointed out that it might be difficult to persuade perpetrators to listen to advice in a Safety Center telling them to be good digital citizens by avoiding hurting others and refraining from bullying. In my small-scale survey in Norway, 48% of respondents thought that social media platforms were able to help children “a little” when bullying happened; 11% thought platforms were able to help “a lot,” whereas 23% thought they were not able to help children at all. These findings underscore the point that bullying and cyberbullying require multi-stakeholder responses and sustained investment in capacity building with everyone involved.