At this moment, we're in limbo. There's no other way to describe it. It's a period of purgatory for democracy that's filled with the constant humming of cable news and punctuated with stress-filled punches to the refresh button on our cramped collection of open browser tabs. We're desperate for any bit of news that might get us even an inch closer to knowing who will be the next President of the United States, so of course, we're devouring every morsel shared on social media. But basic internet literacy tells us it's dangerous to believe everything you see on any given social media platform. Just because it's posted online doesn't mean it's fact — that much we're clear on. What's less clear is which social media nuggets can be trusted and which are undoubtedly misinformation. During this critical time, being able to identify false claims on social media — including those made by our current president — is more important than ever, so we spoke to an expert for help.
"The best way to avoid misinformation altogether would be to stay away from social media," Dr. Filippo Trevisan, a professor at American University's School of Communication, tells Refinery29 over the phone. "But that's just not a fair thing to ask of people, because this is a time in which we have a lot of questions and social media is going to be another source of useful information. There is a lot of misinformation out there, but there's a lot of useful information too." Since the reality is that most people will be obsessively scouring Twitter, Instagram, Facebook, and even TikTok for election updates, it's lucky that Dr. Trevisan has some strategies that can help in deciphering which information is accurate and which is fake.
Is the news coming from a person or a bot?
The first thing you should do after you read a piece of information floating around on Facebook or Twitter (or, you know, Instagram or TikTok) is evaluate the source. Given that we live in the era of internet bots, that means first examining whether or not the info is being shared by an actual human. From your side of your screen, this might feel like an impossible thing to prove, but Dr. Trevisan lays out a few concrete things to look for when it comes to verifying if the poster is a person.
"You should look at what that particular account has been posting, and if it has been repeatedly posting about the same type of information, if the information always comes out from the same angle, if there are spelling or grammar mistakes, if that account looks like it was set up rather recently, or if it just started tweeting or posting about the election and nothing else, then those are pretty clear indications that it's not a person that's doing that work," he explains. "And even if it is a human, those sorts of patterns should raise suspicion."
If the info is coming from a human, who are they and where are they located?
This particular juncture of the election-counting process is all about developments in very specific parts of the country. In other words, if you follow your aunt's friend from Hamilton County, Tennessee on Facebook, you might not want to automatically believe everything she has to say about what's going on in Douglas County, Nevada.
"If you're fairly confident that it's a human doing the posting or sharing, ask yourself, 'Who are they? Are they likely to really have local knowledge about what's happening on the ground?' We've moved now to a moment in the race where a lot of what matters really will happen at the very local level," says Dr. Trevisan. "So is that person really well-positioned to be providing a perspective that is trustworthy in this case?" Not every bad piece of news is coming from someone with bad intentions, but that doesn't mean it's reliable.
Is the information verifiable?
If you're fairly confident that your aunt's friend isn't in the know about what's happening in Nevada or Pennsylvania or wherever, but you don't totally know her life, take the time to look up the information she shared. Dr. Trevisan recommends looking at other outlets to confirm if claims being made on social media are true. You might want to go beyond referencing just your go-to national news outlets and instead check in with local ones. (You can also do this with information being tweeted by, say, President Trump.)
Another suggested strategy from the professor for verifying things that are happening with the election right now is to compare the information with what official sources within local jurisdictions are saying. "Some counties will have information about the process on a website or the website of the secretary of state, depending on what state we are talking about," he shares.
Does the information spark a particularly strong reaction from you? If so, take a pause.
We're likely all familiar with the experience of scrolling through our feeds and being stopped by something that fills us with outrage or perhaps extreme excitement. That, according to Dr. Trevisan, should actually be a red flag. "Be suspicious of content, in particular, that seems to be designed to arouse strong passions — not just negative ones, in some cases, positive ones as well," he explains. "If it looks like content that really speaks to our pre-existing biases, that is something to be particularly careful with and take a little time to consider." When you see one of those jaw-dropping posts, don't let that passion make you immediately hit share, repost, or retweet. Take a beat.
How to spot fake info on Facebook and Twitter
Dr. Trevisan says that each of the above action items is effective across platforms; however, there are specific things to look for on specific social channels. According to the expert, Twitter has recently done some good work when it comes to cracking down on false information being shared. For one, the platform has actually built in that pause we talked about taking before sharing content. You may have noticed that Twitter now automatically prompts you to add your own comment before you can retweet something. Users aren't required to add a comment, but no longer allowing immediate retweets does at least make users take the time to consider whether they really want to share the content or whether they should perhaps provide more context for their followers. "We don't know if Twitter is going to keep that particular innovation after the election, but at least for this very sensitive time, it's an innovation that's designed to help people follow those steps we were talking about — take some time to do the research or at least reflect on the content they're seeing and the source it's coming from," Dr. Trevisan says.
Additionally, the professor highlights Twitter's recent decision to overlay tweets that include misleading information with a warning before they're actually visible to users — anyone who has looked at Donald Trump's feed lately is aware of this feature. "I think that's a really good step, and it's also removing the opportunity for people to like those posts straight away," he explains. "This ensures that whoever it is that's doing the posting still has an opportunity to say something, but at the same time, it limits the speed at which those claims are broadcast and shared with others, which slows the spread of misinformation."
Facebook's approach to tamping down the spread of fake news is to label all political content as potentially biased, and in Dr. Trevisan's opinion, that strategy is less effective. If you're not afraid of election-related Facebook drama between hometown friends or older relatives and have actually been brave enough to log onto this platform in the past few days, you may have noticed the post that's currently at the top of all news feeds. It reads, "Votes are being counted. The winner of the 2020 US Presidential Election has not been projected yet." These types of general warnings are not effective, according to Dr. Trevisan, because they don't speak to the specifics of an individual piece of information. "I'm not saying that Twitter flags everything that's potential misinformation — that's probably impossible to do — but in terms of creating opportunities for people to take the time to think more critically about what it is that they're engaging with and potentially sharing with others, I think Twitter is doing a better job."
What should you do when you see misinformation? Report it and don't engage
Once you've taken your time to research and reflect on the information that you're seeing on social media and who or what is sharing it, and you decide it's potentially false, Dr. Trevisan says you should report that tweet or post ASAP. "To their credit, all platforms have expanded their focus on misinformation and they've hired additional staff to look into these issues," he says. "So, although Facebook is taking more of a general approach that I don't see as being as effective, it still does have people who are looking into those reports, and it's very important that people submit them."
If the false claims are coming from someone you know, your instinct might be to leave a passive-aggressive comment that says "where exactly are you getting this info?" along with the thinking emoji, or even an aggressive one that calls them out for sharing fake news. But the expert doesn't recommend this tactic — at least not at this time. "I think engaging with the information isn't going to lead to a useful outcome. It may even cause people more stress than they need to be experiencing at this moment." We're all for reducing stress as much as possible while we wait to find out our country's fate.