A recent technological development gives literal meaning to the expression, “there’s more to this than meets the eye”: two researchers have embarked on a controversial journey to raise awareness of potential artificial intelligence (AI) abuse.
Last month, Stanford University researchers Michal Kosinski and Yilun Wang posted a heavily debated draft paper describing an AI that could potentially indicate a subject’s sexual orientation. The researchers intended the project to expose the privacy risks of facial recognition. Using a standard facial analysis program, Kosinski and Wang demonstrated just how far facial recognition could go, emphasizing how easy it would be to re-create programs designed to forecast something as intimate as sexual preference, a capability that could prove harmful in the wrong hands.
Kosinski and Wang developed the AI using faces from thousands of online dating profiles, primarily of Caucasian subjects, whose profiles identified them as gay or straight. The images were fed into an artificial neural network that reduced each face to numbers by measuring the distances between facial features. The program then used these numbers to predict the sexual orientation of each subject.
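The pipeline the article describes (distances between facial features as numbers, fed to a classifier) can be sketched in a few lines. This is a toy illustration only, not the researchers’ actual code; the landmark names, coordinates, and classifier weights below are all made up for demonstration.

```python
import math

def pairwise_distances(landmarks):
    """Flatten a face into a feature vector of distances between landmarks."""
    names = sorted(landmarks)
    feats = []
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            (x1, y1), (x2, y2) = landmarks[names[i]], landmarks[names[j]]
            feats.append(math.hypot(x2 - x1, y2 - y1))
    return feats

def logistic_score(features, weights, bias=0.0):
    """Turn the feature vector into a 0-to-1 score (logistic regression)."""
    z = bias + sum(w * f for w, f in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical (x, y) pixel positions for one face's landmarks
face = {"left_eye": (30, 40), "right_eye": (70, 40),
        "nose_tip": (50, 60), "mouth": (50, 80)}
feats = pairwise_distances(face)  # 4 landmarks -> 6 pairwise distances
prob = logistic_score(feats, weights=[0.01] * len(feats), bias=-2.0)
```

In a real system the weights would be learned from thousands of labeled examples; the point here is only that a face becomes a list of numbers, and the numbers become a prediction.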
Establishing the program was half the battle; the next step Kosinski and Wang needed to take was testing it. The researchers compared the program’s accuracy to that of the human eye, and the results were astonishing.
During the trial, humans shown a photo correctly distinguished gay from straight subjects 54 percent of the time for women and 61 percent of the time for men, while the AI, given five photos to analyze, was accurate for 83 percent of women and 91 percent of men.
Although the results were promising under controlled conditions, the AI would most likely be unable to achieve the same level of accuracy in the real world.
Critics, including William T.L. Cox, a psychologist at the University of Wisconsin-Madison, raise valid points about the program’s accuracy and its ability to operate successfully in the real world. In a broader, more diverse population, where only a small fraction of people are gay, even an algorithm operating at 91 percent accuracy would stumble over the base rate. As Cox puts it, “Almost two-thirds of the times it says someone is gay, it would be wrong.”
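Cox’s two-thirds figure follows from simple base-rate arithmetic. A minimal sketch, assuming for illustration that the 91 percent figure applies as both the true-positive and true-negative rate, and that roughly 5 percent of the population is gay (both are assumptions, not numbers from the study):

```python
def precision_at_prevalence(sensitivity, specificity, prevalence):
    """Fraction of positive calls that are actually correct (Bayes' rule)."""
    true_pos = prevalence * sensitivity            # gay, flagged as gay
    false_pos = (1 - prevalence) * (1 - specificity)  # straight, flagged as gay
    return true_pos / (true_pos + false_pos)

p = precision_at_prevalence(0.91, 0.91, 0.05)
wrong = 1 - p  # about 0.65: roughly two-thirds of "gay" calls would be wrong
```

Because straight subjects vastly outnumber gay subjects in the assumed population, even a small false-positive rate produces more false alarms than true detections.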
Nicholas Rule, a psychology professor at the University of Toronto, reluctantly concurs with the AI’s results, saying, “I still personally sometimes feel uncomfortable, and I have to reconcile this as a scientist — but this is what the data shows.”
Kosinski risked his reputation to warn that this kind of technology, placed in a political system where homosexuality is criminalized, could prove detrimental and “expose a threat to the privacy and safety” of gay women and men in the future.