Over the past year, Twitter has come under fire for its lackluster responses to hate speech. These issues came to the forefront in July, when Leslie Jones faced racist and abusive tweet attacks instigated by conservative tech journalist Milo Yiannopoulos, and again throughout a deeply divided presidential campaign season. While Twitter has long prided itself on fostering free speech, the recent safety updates from Instagram and Facebook left us wondering if and when the tweet engine would take action. Today, the company finally responded with critical changes of its own.

"Because Twitter happens in public and in real-time, we've had some challenges keeping up with and curbing abusive conduct," the company said in the blog post announcing today's updates. "We took a step back to reset and take a new approach, find and focus on the most critical needs, and rapidly improve."

So, how will they do that? First, Twitter is expanding its mute features so that you can silence keywords, hashtags, phrases, and conversations in your notifications. In the past, users could mute individual accounts but had no protection against individual terms. Now, if you want a break from election-related tweets, for example, you can temporarily mute any conversations containing the words "Trump," "Clinton," and "election."
While the expanded mute tools are a nice addition, they don't actually do much to combat hate speech on Twitter. Muting lets you opt to ignore hateful tweets, which is the exact opposite of taking action. It's not a solution, just a way of distancing yourself from the problem.

The more promising part of Twitter's announcement is a change to how users report issues. Now, when you report a tweet that violates the network's hateful conduct policy, you can specify whether that tweet directs hate at a specific religion, gender, orientation, or race. In the blog post, Twitter says this change "will improve our ability to process these reports, which helps reduce the burden on the person experiencing the abuse, and helps to strengthen a culture of collective support on Twitter."

Whether Twitter will execute on this promise is the key question. Yes, a user can report a violation, but will the company take action in a timely, effective manner? Twitter says it has "retrained support teams" to be better equipped to handle issues as they arise. Nevertheless, a "culture of collective support" sounds more dreamy than realistic. Hopefully, that dream can become a reality. But for now, we're not holding our breath; Twitter will have to prove itself first.