
Encoding Bias: Facial Recognition and the Maintenance of the Status Quo

In 2018, it came to light that Amazon had built a hiring tool that ranked job candidates’ applications based on the company’s past hiring preferences. The AI hiring tool scanned resumes and sought out keywords that matched ideal candidates according to past hiring pools, streamlining the process of sorting through viable applicants. Although this sped up the application process, it quickly became apparent that the system had developed a bias: candidates who attended all-women’s colleges, and resumes that included the word “women’s,” were downgraded in its ranking system. Amazon discovered the bias within the system, but ultimately decided to abandon the tool because it was “unable to ensure that the algorithm would not be biased against women” in the future.

The systematic practice of Amazon hiring mostly male candidates was picked up by the resume-ranking tool without this bias ever being explicitly encoded. Amazon’s hiring software detected an implicit bias in the fundamental hiring structure of the company. The fact that the tool was discarded entirely after Amazon’s engineers tried and failed to make it neutral demonstrates how difficult it is to override a pattern once a system has learned it. With so many of the instructions we give to technology entrenched in our own personal biases, it seems inevitable that technology will assume the responsibility of upholding the power-based classification systems we preserve in our everyday lives.
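
To make the mechanism concrete, here is a deliberately tiny, hypothetical sketch of how a resume ranker can absorb this kind of bias from its training labels alone. The resumes, labels, and model below are invented for illustration and are not Amazon’s system.

```python
# A toy resume ranker trained to imitate past hiring decisions. No rule about
# gender is written anywhere; the skew lives entirely in the historical labels.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical past outcomes: 1 = hired, 0 = rejected.
resumes = [
    "captain of chess club, software engineer",
    "women's chess club captain, software engineer",
    "software engineer, hackathon winner",
    "women's coding society lead, hackathon winner",
]
hired = [1, 0, 1, 0]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# Inspect the learned weight for the token "women" (the vectorizer drops the
# apostrophe-s); with these skewed labels, the weight comes out negative.
idx = vectorizer.vocabulary_["women"]
print(model.coef_[0][idx])
```

Nothing in the code says anything about women; the model simply learns that a word correlated with past rejections should lower a resume’s score.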

This paper will serve as a space to explore the influence humanity has on technology, specifically as it pertains to facial recognition technology and its implementation as a new staple of society (particularly in the realm of surveillance). Broken down into four parts, this paper starts by looking at the very human act of classification, moves on to investigate what occurs in the friction between technology and power, closely inspects facial recognition technology, and ends with a recommendation for how we might one day create a more impartial collaboration between humans and technology.

Categorization and Power 

Classification is defined as “a systematic arrangement in groups or categories according to established criteria.” According to Geoffrey Bowker and Susan Star in their book Sorting Things Out: Classification and Its Consequences, “to classify is human.” They continue, stating:

“We all spend large parts of our days doing classification work, often tacitly, and we make up and use a range of ad hoc classifications to do so. We sort dirty dishes from clean, white laundry from colorfast, important email to be answered from e-junk.”

Whether conscious or not, according to Bowker and Star, a large part of human life is spent sorting the things around us under “implicit labels.” Humans like to categorize; it is something we all do in order to compartmentalize the items in our lives, and for the most part these classifications are invisible, yet “their impact is indisputable… and inescapable.” But what happens when classification and power meet?

Classification not only happens on the local, individual level, but on a larger, bureaucratic scale as well. Bowker and Star assert that classification goes unnoticed until it happens on a governmental level with real consequences: “Try the simple experiment of ignoring your gender classification and use instead whichever toilets are the nearest.” This example demonstrates that classification is not just an arbitrary pastime we all happen to engage in; deciding how to divide one’s emails to make them more manageable has a relatively low impact on the rest of society. Classification systems, however, carry far more weight when they are backed bureaucratically. The consequences of Bowker and Star’s bathroom example demonstrate the weight of classification in all parts of life, including our own emotions. Walking into a bathroom can bring up feelings ranging from embarrassment, when you realize you have accidentally walked into the wrong bathroom (the one that doesn’t coincide with your gender), to an extreme sense of anxiety, when, for example, using the bathroom that matches your gender identity clashes with how others classify you (a debate currently being tackled in the United States’ highest courts).

These bureaucratic systems are put into place by those in power, people who have themselves been categorized by others as the ones who should hold power. Those placed in positions of power are put there because of personality traits others have deemed suitable for leadership. For all great leaders (presidents, for example, and historical figures still revered years and even centuries later), there is a combination of respect and liking from those at the bottom, as “while appointments to positions come from above, affirmation of position comes from below.” Once a person is categorized as someone who can lead, it is up to them to keep meeting the needs of the ones who put them in that position. And after a period in which these needs are met, the people in power are able to turn the tables on the ones who put them there, doing everything they can to stay at the top by creating standards and laws that widen the gap between the ones who make the decisions and the ones who follow them.

Technology and Power 

Technology can be broadly defined as “the practical application of knowledge especially in a particular area.” Technology can refer to anything from a pencil to a self-driving car. Because of this wide range of meanings, this paper narrows the definition of technology to digital technology: the “electronic tools, systems, devices and resources that generate, store or process data” which are used to improve life.

Technology is commonly seen as a neutral tool. As far as categorization goes, most people would place it in the bin of impartiality. Phrases like “guns don’t kill people, people kill people” cast technology as simply doing its job regardless of who pulls the trigger. Technology is seen as the impartial decider (as I, too, have asked Siri to pick a number when trying to decide with my friends where to eat) and is often used as such.

Political theorist Langdon Winner would disagree with the sentiment that technology is neutral. According to Winner in his article “Do Artifacts Have Politics?”, although technologies might not be created with political intentions in mind, their existence is often inherently political. He writes, “the very process of technical development is so thoroughly biased in a particular direction that it regularly produces results counted as a wonderful breakthrough by some social interests and crushing setbacks for others.” Though technology might be created with the advancement of some parts of society in mind, it often leaves certain people disproportionately affected by its existence. This echoes the idea that standards, which are formed from classifications, “valorize” some viewpoints over others. A striking example of this power can be found in how posts and videos on the internet are censored. Who is making these decisions, and how does this decision-making reflect power?

The Cleaners, a documentary-thriller directed by Hans Block and Moritz Riesewieck, brings to light the dark world of content moderation on major online platforms like Facebook, YouTube and Twitter. The film focuses on content moderators in the Philippines who must decide whether a post should be “deleted” from a social media platform or “ignored.” It juxtaposes the polished image of the social media companies with the darkness of the moderators’ job, and asks how the outsourcing of this labor to workers in foreign countries (the companies themselves all originating from the United States) might shape decisions about what stays on a platform and what gets deleted.

One thing the film makes clear is the political impact the moderators’ decisions have on people around the world. The film bounces between interviews with artists and activists from around the globe (with particular emphasis on Syria and Turkey) and their beliefs about how social media censorship can be dangerous, as it allows governments “a straight pass without anyone challenging them.” Because social media is so important as a platform for challenging government officials deemed corrupt, it is crucial that the information that is “deleted” or “ignored” is properly categorized. As artist and activist Khaled Barakeh states in The Cleaners, “by logic, the background of who [content moderators] are affects how they think.” In order to place content flagged for moderation into either the delete or the ignore pile, the moderator viewing that content has to decide whether it goes against the platform’s standards. These standards are not always clearly defined, leaving moderators to rely on their own beliefs about what is allowed and what is not.

Illma Gore, “Make America Great Again” 2016

One example of this censoring in the film is an artistic rendering of Donald Trump. A 2016 painting by artist Illma Gore, titled “Make America Great Again” and depicting Trump in the nude with small genitalia, was originally posted on Facebook and then shared across multiple platforms, garnering over 50 million shares in 3 days. The picture of the painting was eventually taken down. When asked by the filmmakers whether the painting should be ignored or deleted, a content moderator answered, “it’s delete. Why? It degrades Donald Trump’s personality, so it must be deleted.” This decision to delete is heavily influenced by power. Traditional art, like Renaissance paintings with nude models, is allowed on the platform despite its nudity. The issue, then, is not nudity in and of itself, but who is depicted nude. A nude depiction of someone politically powerful, made as criticism, gets deleted. Power is affecting classification.

Tristan Harris, a former Google design ethicist, echoes Winner’s assertion about the neutrality of technology in The Cleaners. Harris states:

“One of the misconceptions is that human nature is human nature and technology is just a neutral tool… But this is not true because technology does have a bias… it has a goal and the goal it’s seeking is: ‘what will get the most number of people’s attention? What tends to work on billions of people and successfully extracting their attention out of them?’… And it turns out that outrage is really good at doing that. Whether Facebook wants to admit it or not, they actually benefit… when they show feeds that are filled with outrage and amplifies that which is most divisive… the whole environment tuned to offer us the worst of ourselves.”

The algorithms on these social media platforms are used to maximize outrage in order to keep users clicking on the site (which means more money from advertisements). Facebook’s algorithm is just a classification system that, carrying the biases of the people who built it, swaps live human judgment for code in a computer. Because of this, the technology cannot truly be neutral: it reinforces the ideals it was programmed to uphold.
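
To illustrate Harris’s point in miniature, here is a hypothetical sketch of a feed ranker whose only objective is predicted engagement. The post fields and scores below are invented; this is not Facebook’s actual system.

```python
# A toy feed ranker, invented for illustration. Its only objective is predicted
# engagement: nothing in the code penalizes divisive content, so if outrage
# drives clicks, outrage rises to the top as a side effect of the objective.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_engagement: float  # assumed output of some upstream click model

def rank_feed(posts: list[Post]) -> list[Post]:
    # Sort purely by what the model expects to hold attention.
    return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)

feed = rank_feed([
    Post("Calm local news update", predicted_engagement=0.2),
    Post("Outrage-bait political post", predicted_engagement=0.9),
])
print([p.text for p in feed])  # the outrage-bait post is shown first
```

The bias here is not a line of code that says “promote outrage”; it is the choice of objective, made by people, that the system then pursues automatically.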

“Geo-blocking” and “IP blocking” are other forms of technology currently used to carry out human bias on social media sites such as Facebook. Geo-blocking classifies people by geographic location and makes content unavailable to people in certain regions. If a government has laws restricting the kind of content people in its country are allowed to see, Facebook can block that content so it is not visible in that country while remaining available on the website everywhere else. Although geo-blocking is an incredible advancement for some, like the Turkish government, this censoring becomes a crushing setback for people like Yaman Akdeniz, a professor of law at Istanbul Bilgi University. In The Cleaners, Akdeniz emphasizes how Facebook’s ability to geo-block content in countries like Turkey is dangerous: Turkey already has highly censored traditional media, and a highly censored social media means that potentially valuable information fails to reach the citizens who need it. Any information that opposes the current government regime can be stifled before it is even disseminated, leaving Turkish citizens in the dark about things the government does not want coming to light.

Geo-blocking is not only a problem because it allows government censorship to reach into the world of social media; it becomes more of a problem when this technology allows companies to “decide what’s lawful and what’s not.” Instead of waiting for government officials to come to them and ask for content to be blocked in their country, social media companies are taking the power into their own hands and classifying content as permissible or impermissible in a region based on decisions about previous content. The algorithmic geo-blocking technology is fed this bias, which comes from people in power (i.e., the government officials in the region), and eventually becomes so attuned to blocking certain data that it does so without needing to be told. The classification technology reinforces the power structures (a country’s censorship laws) that already exist.
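
A minimal sketch of the mechanism, under assumed names: the country lookup, content IDs, and per-country blocklist below are invented for illustration and are not any platform’s real API.

```python
# Illustrative geo-blocking logic. The post stays on the platform globally; it
# is only hidden from viewers whose IP address resolves to a country where it
# has been blocked.
BLOCKED_BY_COUNTRY = {
    "TR": {"post_1234", "post_5678"},  # hypothetical content IDs blocked in Turkey
    "DE": {"post_9012"},
}

def lookup_country(ip_address: str) -> str:
    # Stand-in for a real GeoIP lookup; maps a couple of example addresses only.
    return {"203.0.113.7": "TR", "198.51.100.2": "US"}.get(ip_address, "US")

def is_visible(post_id: str, viewer_ip: str) -> bool:
    country = lookup_country(viewer_ip)
    return post_id not in BLOCKED_BY_COUNTRY.get(country, set())

print(is_visible("post_1234", "203.0.113.7"))   # False: hidden for a viewer in Turkey
print(is_visible("post_1234", "198.51.100.2"))  # True: still visible elsewhere
```

Once a blocklist like this is filled in automatically from past takedown decisions, the censorship runs on its own: no new government request is needed for the next, similar post.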

Social media moderation is not the only instance in which technology takes on human bias. Another example of bias-influenced technology is artificial intelligence (AI). AI allows machines to learn from past experience in order to better complete tasks. Much like humans, the more exposure AI technology has to certain experiences, the better it is at handling similar experiences in the future. Because of this process of learning, AI is vulnerable to the biases of the people who program it and of the data it learns from.

Facial Recognition and the Performance of Bias 

Facial recognition is a form of AI that uses multiple methods of identification to verify a person’s identity. It works by taking note of the many nodal points of the human face, the peaks and valleys that make up facial features. With a face having about 80 nodal points, the features most commonly measured by facial recognition technology include the distance between the eyes, the depth of the eye sockets, the length of the jawline, the width of the nose, and the shape of the cheekbones. Once these nodal points are measured, a numerical code called a “faceprint” is placed in a database to represent the face, so that when the face is scanned again, that specific code comes up and the identity is verified.
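
As a rough illustration of the faceprint idea: the feature names, numbers, and matching threshold below are invented, and real systems use far richer representations than five hand-picked measurements.

```python
# A toy version of "measure nodal points, store a faceprint, match by distance."
import math

def faceprint(measurements: dict[str, float]) -> list[float]:
    # Fixed feature order so every faceprint lines up component by component.
    features = ["eye_distance", "eye_socket_depth", "jaw_length",
                "nose_width", "cheekbone_shape"]
    return [measurements[f] for f in features]

def match(probe: list[float], database: dict[str, list[float]], threshold: float = 2.0):
    # Return the enrolled identity whose stored faceprint is closest to the
    # probe, provided the distance falls under the (arbitrary) threshold.
    best_id, best_dist = None, float("inf")
    for person_id, stored in database.items():
        dist = math.dist(probe, stored)
        if dist < best_dist:
            best_id, best_dist = person_id, dist
    return best_id if best_dist <= threshold else None

database = {"alice": faceprint({"eye_distance": 6.2, "eye_socket_depth": 2.4,
                                "jaw_length": 11.8, "nose_width": 3.1,
                                "cheekbone_shape": 4.7})}
probe = faceprint({"eye_distance": 6.3, "eye_socket_depth": 2.4,
                   "jaw_length": 11.7, "nose_width": 3.0,
                   "cheekbone_shape": 4.8})
print(match(probe, database))  # "alice": the prints are nearly identical
```

The important design point for the argument that follows is that both the measurements and the threshold are choices made by people, and those choices determine whose faces the system matches well.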

Like much modern technology, facial recognition research has its origins tied to the military. In the 1960s, Panoramic Research Inc. was one of many Cold War-era companies started in the United States to conduct government-funded research in computer science. Funded mostly by the U.S. Department of Defense, facial recognition technology appealed for its contribution to the “logistics of military perception,” possibly one day giving the military the ability to locate enemies from a distance. Beyond its military implications, facial recognition was also appealing to researchers in the sciences, as “the techniques being developed for computer recognition of faces promised to address a set of vaguely defined problems concerning how to automatically process images and handle an expanding volume of visual information in medicine, science, and military intelligence.” Because of this broad appeal, facial recognition research has remained highly funded from the 1960s to the present for its promise to streamline operations of all kinds.

Early facial recognition technology compared two-dimensional images against other two-dimensional images in a database. Any slight variance in lighting or facial expression from the stored image could render the technology ineffective, leaving it unable to positively match the two images and confirm a person’s identity. With the switch to three-dimensional captures of a person’s facial structure, facial recognition technology now relies on distinct facial features (nodal points) to confirm identity, as they are unique and do not change over time.

Facial recognition technology is used widely today, from unlocking phones to confirming travelers’ identities at airports. It ranges in scale from memorizing a single face to unlock a device to searching a Department of Motor Vehicles (DMV) database to identify a robber captured on CCTV.

Currently, the form of facial recognition technology people are most consistently exposed to is the kind that unlocks smartphones. Using a 3D scan of the face stored on the phone as a “password” that unlocks the device and confirms payments (from debit or credit cards connected to the phone), this technology verifies the identity of the person holding the device by measuring nodal points and matching the faceprint against what is in its database. Many consumers feel safer with this technology in place because, unlike a manually entered password that can be memorized by someone sitting nearby and typed in later to break into the phone, facial recognition seems foolproof and completely unique. With many tech companies, Apple for example, going the extra mile to ensure a phone cannot be unlocked unless its owner is looking at the screen, the days when one could easily steal and unlock another person’s device seem to be gone.

Facial data encoding is not limited to device security; it has also become a key feature of many social media apps. On Snapchat and Instagram Stories, face filters are commonly used, from adding lipstick and lashes that move in sync each time a person talks or blinks to morphing users into puppy dogs. More recently, user-made Instagram filters have stormed the platform, with some even boasting the ability to use facial recognition to scan the user’s face and guess their ethnicity or ancestral background. Some of these filters are so well made that they genuinely seem to be scanning the nodal points of the user’s face before returning a percentage breakdown of their ethnicity based on how they look.

Photographer Denis Korobov is the creator of one of these popular Instagram filters. His filter, named “DNA Test,” takes a total of five seconds to “analyze” the user’s face before returning four ethnicities with percentages, a “DNA breakdown” of the user based on how they look. However, each time the filter is used, a different ethnic breakdown is given. When asked in an interview about this variance in results, Korobov stated, “actually this filter is just a random”: the filter uses basic face detection to locate a face in the frame, but does not actually use facial recognition technology to guess the user’s ethnicity. As Korobov continued, “Instagram doesn’t allow deep face recognition now.”

It is no surprise that a simple Instagram filter cannot dig into hundreds of years of a person’s ancestral roots and spit out a comprehensive breakdown in under five seconds. Still, these “pseudo facial recognition” filters reveal how ready many social media users seem to be for these apps to actually incorporate such technologies into their platforms. Although Instagram currently does not use deep facial recognition, its parent company Facebook does, using facial recognition to auto-tag users in pictures they are not already tagged in but that appear to contain their faces. For filter creators like Korobov, the technology would be welcome on the platform: “I think development of face recognition is necessary, like a natural part of a progress. At least I’m waiting for hand/hair/body/ pet tracking — it will make filters much more interesting and give us a lot of new capabilities.”

But social media is not the only place where facial recognition has become the norm; many countries are starting to adopt facial-recognition-based surveillance systems throughout their cities. A recent implementation of this technology can be found in China’s “Social Credit System.”

Monitors display a video showing facial recognition software in use at the headquarters of the artificial intelligence company Megvii, in Beijing, May 10, 2018. Beijing is putting billions of dollars behind facial recognition and other technologies to track and control its citizens. (Gilles Sabrié/The New York Times)

Started in 2015 with the intention of complete implementation by 2020, China’s social credit system uses artificial intelligence and about 200 million surveillance cameras to track citizens’ movements and give them a score based on their actions. With a range from 350 to 950, the scoring takes into account citizens’ habits and behaviors, adding points for purchases of “good” items (clothes and diapers) and subtracting points for “bad” shopping (alcohol and video games). These behaviors and others (such as getting into fights with neighbors, littering or helping a stray animal) are tracked by the surveillance cameras, and facial recognition technology is used to identify the people involved and assign them scores that carry real-world consequences. People with a low social credit score may even be prohibited from purchasing transportation tickets.
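
In schematic form, and only schematic form, the scoring logic described above looks something like the sketch below; the specific behaviors, point values, and ticket cutoff are invented for illustration and are not the actual system.

```python
# A hypothetical point-based social scoring sketch, clamped to the reported
# 350-950 range. Each recognized face is assumed to be linked to a citizen
# record whose score is nudged up or down by observed behaviors.
MIN_SCORE, MAX_SCORE = 350, 950

POINT_ADJUSTMENTS = {
    "buys_diapers": +5,
    "buys_alcohol": -10,
    "plays_video_games": -5,
    "helps_stray_animal": +10,
    "fights_with_neighbor": -20,
    "litters": -15,
}

def update_score(score: int, observed_behaviors: list[str]) -> int:
    for behavior in observed_behaviors:
        score += POINT_ADJUSTMENTS.get(behavior, 0)
    return max(MIN_SCORE, min(MAX_SCORE, score))

def can_buy_ticket(score: int, cutoff: int = 600) -> bool:
    # Hypothetical cutoff: below it, transportation purchases are blocked.
    return score >= cutoff

score = update_score(650, ["buys_alcohol", "fights_with_neighbor", "litters"])
print(score, can_buy_ticket(score))  # 605 True, but one more infraction changes that
```

Every consequence downstream of a sketch like this depends on the facial recognition step correctly identifying who did what, which is exactly where the bias problems discussed below come in.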

For the citizens of China, and potentially soon for everyone in the world, facial recognition is the new wave of constant government surveillance. So what could go wrong?

According to Sarah West in “Discriminating Systems: Gender, Race and Power in AI,” AI systems reinforce inequality and reflect “historical patterns of discrimination.” Artificial intelligence cannot be separated from the biases of human nature, because humans are the ones who create these systems. West asserts, “AI systems function as systems of discrimination: they are classification technologies that differentiate, rank and categorize. But discrimination is not evenly distributed… [with] a persistent problem of gender and race-based discrimination.”

Much like the discrimination humans direct at people of a certain race or gender, AI technology mimics this discrimination. Racial and gender bias, for example, have so thoroughly penetrated AI systems that facial recognition systems have a difficult time recognizing the faces of dark-skinned women while being “most proficient at detecting light-skinned men.” Because AI development is a primarily white, male-dominated field, there is a bias toward people who fit that category. In her report, West cites instances of a feedback loop forming between discriminatory practices and discriminatory AI. Because of this racial bias within the technology field, facial recognition AI is best at recognizing white men, which reinforces the existing power structure in which white men are seen as the demographic holding the greatest amount of power.

For people who identify as transgender, the technology once again comes up short. In a study of four major providers of facial recognition technology (Amazon, IBM, Clarifai and Microsoft) and their ability to classify Instagram images, trans men were categorized as women 38 percent of the time, and gender-nonbinary people were missed entirely. In comparison, cisgender women were correctly identified 98.3 percent of the time and cisgender men 97.6 percent of the time. Once again, this discrepancy comes from AI systems being trained in labs on images labeled along a gender binary rather than a gender spectrum.
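
That nonbinary people are missed entirely falls directly out of how these classifiers are built: a model can only ever output labels it was trained on. A toy sketch, with invented data, features, and model, not any vendor’s system:

```python
# A classifier trained only on the labels "man" and "woman" can never emit any
# other label, no matter whose face the input represents.
from sklearn.linear_model import LogisticRegression

# Hypothetical training set: feature vectors labeled along a gender binary.
X_train = [[0.2, 0.9], [0.3, 0.8], [0.8, 0.1], [0.7, 0.2]]
y_train = ["woman", "woman", "man", "man"]

model = LogisticRegression().fit(X_train, y_train)

print(model.classes_)               # ['man' 'woman']: the only possible outputs
print(model.predict([[0.5, 0.5]]))  # always 'man' or 'woman', never anything else
```

The label set is a classification decision made in the lab, and the deployed system simply performs it, over and over, on everyone it sees.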

With AI having so much difficulty correctly taking stock of minority bodies, what would happen if a more racially diverse country, such as the United States, implemented a social credit system like China’s? How would the lives of those already marginalized be affected by biased AI?

Although complete citizen surveillance with the intention of social scoring is not (yet) present in the United States, law enforcement has used systems like Clearview to “identify perpetrators and victims of crimes,” and currently the FBI’s Next Generation Identification system contains a database of over 117 million Americans drawing on DMV driver’s license records. The era “in which the slightest movements are supervised, in which all events are recorded… in which each individual is constantly located, examined and distributed” is almost upon us. Because of this, it is crucial that the technology be up to par, able to make clear, unbiased detections of citizens’ bodies as they move through society.

In 2018, as a means of convincing Congress to join the fight against law enforcement’s use of facial recognition technology, the ACLU conducted an experiment (using Amazon’s Rekognition software) in which it ran lawmakers’ pictures against a database of 25,000 arrest photos. The results were troubling: 28 lawmakers were incorrectly identified as people who had been arrested for a crime, and lawmakers of color were disproportionately represented among the incorrect matches. This matters because, for a citizen stopped by a police officer relying on this technology to solve a case, the system’s conclusion can mean the difference between freedom and incarceration.

A Call for the Unbiased 

Facial recognition is not going anywhere. It is a technology that will probably be as much a part of our lives in the future as fingerprint identification has been for the past 120-plus years. Because of this, it is important for us to proceed with caution while these systems remain so entrenched in human bias.

It is important for the makers of these systems to be aware of how their own biases might be encoded, intentionally or accidentally, in the technology they create. It is important for the companies that build these systems to understand the value of diversifying their own workforces, if not for the sake of providing equal opportunity, then at least for the sake of ensuring they have the most accurate technology on the market, understanding that in some instances even a one percent margin of error is far too great. And most importantly, it is crucial that we, the consumers of technology, understand that it may never be truly neutral.

Facial recognition is not an inherently evil technology that should be banned from use worldwide. But we should be cautious about treating facial recognition and AI as a one-stop shop for all human needs. Just as humans are biased, so is technology, because it is we humans who program it. Using technology with an understanding of this inevitable encoded bias, while working hard to address our own biases, is necessary if we are ever to build more impartial technology in the future.


For a copy of the “Works Cited” page for this essay (or a PDF version), please email me at tocallmyselfanartist@gmail.com.
