This week, Twitter admitted that the algorithm behind its automatic photo-cropping feature was biased, and removed the feature.
The social networking platform said in a blog post on Wednesday that it had audited the artificial intelligence algorithm that crops images before they appear in a user’s timeline. The review came after users complained last year that the system favored White people over Black people and showed gender bias.
The test results are in, and according to the platform, the algorithm did indeed show “unequal treatment based on demographic differences.” According to Twitter, photos of women were favored 8 percent more often than photos of men, and photos of White people were favored 4 percent more often than photos of Black people. Within those demographics, White women were favored 7 percent more often than Black women, and White men 2 percent more often than Black men.
The social media company also looked to see if the AI had any signs of objectification bias, or if it was more focused on specific parts of women’s bodies.
Twitter’s director of software engineering, Rumman Chowdhury, wrote in the post: “We didn’t find evidence of objectification bias — in other words, our algorithm did not crop images of men or women on areas other than their faces at a significant rate.”
Algorithms often influence what we see online and how often we see it. These systems decide which posts are surfaced and which are removed, potentially forever.
External AI researchers frequently conduct algorithm audits and report their findings. However, Twitter’s information provides a rare and detailed acknowledgment from a social media platform of how unfair its automated systems may be. People were already aware that something was wrong, but Twitter’s offering of its own analysis and findings was, at the very least, unusual.
Casey Fiesler, assistant professor of technology ethics in the Department of Information Science at the University of Colorado at Boulder, said, “I was pleasantly surprised to see this level of transparency.” “It not only made their decision process transparent, but the contribution to the science is helpful for other people and companies thinking about these things.”
Facebook and Instagram, too, have auto-cropping features. When a photo appears in your Facebook feed, you must tap it to see the full image. On people’s profiles, Instagram reduces full-sized images to squares. Facebook did not immediately return a request for comment on how those features work.
Twitter claims to have started using the “saliency algorithm” in 2018 to keep photo sizes consistent across the platform.
According to Twitter, the software was trained with human eye-tracking data and was designed to estimate which part of a photo would be considered most “salient,” or important to see first. The algorithm scans a picture, then predicts and scores which areas are most likely to attract users’ attention. Then, according to Twitter, the part of the image with the highest score becomes the center of the tech-generated crop.
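The scoring-and-cropping process described above can be sketched in a few lines of code. This is only an illustration of the general idea: the `predict_saliency` function below is a hypothetical stand-in using local contrast, whereas Twitter’s actual system used a neural network trained on eye-tracking data.

```python
import numpy as np

def predict_saliency(image: np.ndarray) -> np.ndarray:
    """Toy saliency model: scores each pixel by its deviation from the
    image's mean brightness. A real system would use a model trained
    on human eye-tracking data."""
    gray = image.mean(axis=2)  # collapse RGB channels to grayscale
    return np.abs(gray - gray.mean())

def saliency_crop(image: np.ndarray, crop_h: int, crop_w: int) -> np.ndarray:
    """Crop the image around its highest-scoring (most 'salient') point."""
    scores = predict_saliency(image)
    # Locate the pixel with the highest saliency score.
    y, x = np.unravel_index(np.argmax(scores), scores.shape)
    # Center the crop on that point, clamped to the image bounds.
    top = min(max(y - crop_h // 2, 0), image.shape[0] - crop_h)
    left = min(max(x - crop_w // 2, 0), image.shape[1] - crop_w)
    return image[top:top + crop_h, left:left + crop_w]

# Example: crop a random 100x200 "photo" down to 50x50.
img = np.random.rand(100, 200, 3)
crop = saliency_crop(img, 50, 50)
print(crop.shape)  # (50, 50, 3)
```

The key design point is that the crop window follows the maximum of the score map rather than the image center, which is exactly why any bias in the scoring model translates directly into biased crops.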
The tool allowed a user to see the full-sized photo by tapping on it and expanding it, revealing the parts of the image that the AI had hidden. The reduced-size image, however, was the source of user complaints.
Twitter said it stopped cropping standard-sized images on its mobile app “as a direct result of feedback we received last year that the way our algorithm cropped images wasn’t equitable.” On Twitter.com, the algorithm is still in effect.
According to Twitter, “Even if the saliency algorithm were adjusted to reflect perfect equality across race and gender subgroups, we’re concerned by the representational harm of the automated algorithm when people aren’t allowed to represent themselves as they wish on the platform.”
According to the company, “deciding how to crop an image is a decision best made by people.”
The incident is the most recent example of how algorithmic and machine learning biases can infiltrate mainstream technology. Humans are flawed by nature, and we make judgments that, knowingly or unknowingly, can be seen in the AI that powers decision-making products.
In 2019, researchers discovered that artificial intelligence (AI) used on more than 200 million people in U.S. hospitals incorrectly concluded that Black patients were healthier than equally sick White patients. In the same year, a facial recognition study found that the technology was more likely to misidentify people of color than White people when it was used by law enforcement.