Last week, Stanford University released a study on an artificial intelligence facial recognition program that could supposedly tell whether a person is gay based on their facial features alone.
Essentially, the research claims to show that faces can have more indicators for sexual orientation than the human eye can see, which is where the AI technology comes in. The study authors wrote that their computer algorithm could distinguish between gay and straight men 81% of the time, and 74% of the time with women.
"Gay faces tended to be gender atypical," the researchers said. "Gay men had narrower jaws and longer noses, while lesbians had larger jaws."
Not only is that dangerously reductive, but as GLAAD and HRC have pointed out, the study also fails to acknowledge bisexuality or any other sexuality on the spectrum, instead incorrectly assuming that "heterosexuality" and "homosexuality" are the only two sexual orientations that exist. The study also hasn't been peer-reviewed, included only white participants, and assumed that there is no difference between sexual orientation and sexual activity, which isn't always true.
"Technology cannot identify someone's sexual orientation," Jim Halloran, GLAAD's chief digital officer, said in a statement. "What their technology can recognize is a pattern that found a small subset of out white gay and lesbian people on dating sites who look similar. Those two findings should not be conflated."
Beyond the flaws in the research itself, many worry that its findings will be taken out of context and used to further perpetuate stereotypes about LGBTQ people and even straight people.
"At a time where minority groups are being targeted, these reckless findings could serve as weapon to harm both heterosexuals who are inaccurately outed, as well as gay and lesbian people who are in situations where coming out is dangerous," Halloran said.