Computer science major Becky Lytle ’18 knew from the start that she wanted to write her thesis on something that had real-world applications. Or, as she puts it, “I wanted to research something that could have a clear, tangible impact on people.”
She found the perfect subject in a program analysis technique called “abstract interpretation,” which can be used to determine whether the decisions computer programs make are fair or not.
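Very roughly, abstract interpretation runs a program over *sets* of possible inputs at once instead of single concrete values, so an analyzer can reason about every execution without enumerating them. The thesis itself is not quoted here, but a minimal, invented sketch using the classic interval domain gives the flavor: each variable is tracked as a range `[lo, hi]`, and arithmetic is lifted to operate on whole ranges.

```python
# Minimal illustrative sketch of abstract interpretation over the
# interval domain. (Invented for illustration -- not code from the thesis.)

from dataclasses import dataclass


@dataclass(frozen=True)
class Interval:
    """Abstract value: the set of all reals between lo and hi."""
    lo: float
    hi: float

    def __add__(self, other):
        # [a, b] + [c, d] = [a + c, b + d]
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # The product's bounds come from the extreme corner products.
        corners = [self.lo * other.lo, self.lo * other.hi,
                   self.hi * other.lo, self.hi * other.hi]
        return Interval(min(corners), max(corners))


# Abstractly evaluate score = 2 * income + credit for ALL inputs at once.
income = Interval(0, 100)    # every possible income in [0, 100]
credit = Interval(-50, 50)   # every possible credit adjustment
score = Interval(2, 2) * income + credit
print(score)  # Interval(lo=-50, hi=250)
```

One run of this abstract program proves a bound on the output for every concrete input, which is the kind of guarantee a fairness analysis can build on.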
“As many of us may know,” Lytle says, “algorithms are beginning to make more and more societally impactful decisions; these algorithms are used for welfare allocation, hiring, policing, and more. Data used to train these algorithms is often biased with racism, sexism, etc., and therefore, the decisions that these algorithms make are at risk of being discriminatory. Therefore, there is a moral imperative to prevent algorithms from making these biased decisions, especially when they are being given more power in society.”
With the completion of her thesis, “Abstract Interpretation of Algorithmic Fairness,” one of Lytle’s greatest dreams—to use her love of math and technology to better the prospects of those around her—has been realized.

“I chose to major in computer science because I wanted to combine my love of mathematics with my desire to build things that would have a direct impact on human lives,” she says.
As for her plans for the future? Lytle, who is enrolled in Haverford’s 4+1 engineering program with the University of Pennsylvania, is interning at Google’s Manhattan headquarters this summer before she resumes her studies—which will hopefully culminate in a master’s degree in engineering with a concentration in computer and information science—in the fall.
What did you learn from working on your thesis?
At a very high level, I learned that the issue of ensuring algorithmic fairness is incredibly complex for a variety of reasons. First off, the definition of “fairness” is not mathematically agreed upon among prominent researchers. This, I feel, is the most difficult problem involved in algorithmic fairness, because there is no true “absolute” definition of what fairness is. For example, one paper I read for my research argued that the way to ensure fairness was to make sure a protected attribute, such as race, was not affecting the outcome of an algorithm. However, this ignores the fact that one’s race has been shown to affect one’s opportunities in terms of education, etc. Another paper, therefore, argued that we must analyze all attributes of a person that may be affected in some way by a protected attribute.
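To make the first definition concrete: one common (and, as the answer above notes, contested) formalization is that a protected attribute should not change an algorithm's selection rate between groups. The toy hiring data and helper below are invented purely for illustration; they show how a gap in selection rates can be measured, not how the thesis itself measures it.

```python
# Hedged sketch: measuring the selection-rate gap between two groups on
# invented toy hiring data. (Illustrative only -- not the thesis's method.)

def selection_rate(decisions, group_key, group_value):
    """Fraction of applicants in a given group who received a positive decision."""
    members = [d for d in decisions if d[group_key] == group_value]
    return sum(d["hired"] for d in members) / len(members)


# Invented toy data: 1 = hired, 0 = rejected.
applicants = [
    {"race": "A", "hired": 1}, {"race": "A", "hired": 1},
    {"race": "A", "hired": 0}, {"race": "A", "hired": 1},
    {"race": "B", "hired": 1}, {"race": "B", "hired": 0},
    {"race": "B", "hired": 0}, {"race": "B", "hired": 0},
]

gap = abs(selection_rate(applicants, "race", "A")
          - selection_rate(applicants, "race", "B"))
print(f"selection-rate gap: {gap:.2f}")  # 0.75 - 0.25 -> gap of 0.50
```

The limitation Lytle describes applies directly here: even if the `race` field were deleted, correlated attributes (neighborhood, school, and so on) could act as proxies for it, which is why the second paper she cites argues for analyzing everything a protected attribute may influence.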
The difficulty of formalizing definitions of fairness definitely makes me feel that a well-rounded education in ethics, power structures, and, more broadly, a nuanced understanding of how our society functions is necessary to produce meaningful research in this area.
What are the implications of your research?
As machine-learning algorithms become more and more prevalent in our day-to-day lives, it is imperative that we prevent these algorithms from discriminating against marginalized groups of people. I read a lot of machine-learning-related content in the news, and many of the articles I read are centered around the question of how we will verify that our algorithms are making fair choices. Therefore, this research not only helps researchers and academics in the field of algorithmic fairness, but also potentially offers a practical tool to people in industry who develop algorithms in their day-to-day jobs.
“What They Learned” is a blog series exploring the thesis work of recent graduates.
Photo: Current Google intern Becky Lytle ’18 smiles on a trip to the company’s Sunnyvale, Calif., campus. Photo courtesy of Becky Lytle ’18.