What They Learned: Dylan Slack ’19

The computer science major explored the development of different forms of machine learning in his thesis.

Computer science is a constantly growing area of study, and Dylan Slack’s thesis in the department takes an exploratory step into the vast and emergent territory of machine learning. His thesis, “Expert Assisted Transfer Reinforcement Learning,” applies machine learning, the study of how computational models learn to make decisions, in service of human needs.

His thesis advisor was Assistant Professor of Computer Science Sorelle Friedler. “We’re both interested in how machine learning models can better interface with people,” said Slack, “both in the sense of how usable they are and how well people can understand the motivations behind their decisions.”

Machine learning models learn to solve problems through repeated interactions that generate data, which in turn refines the model’s decision-making process. Slack’s thesis focuses on transfer learning in particular, which he said “deals with how we can use a model trained in one environment to achieve better performance in another.”
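The transfer idea can be illustrated with a minimal, hedged sketch: tabular Q-learning (a standard reinforcement learning algorithm, used here purely as an illustration and not as the thesis’s actual method) trained on a toy “chain” environment, with its learned table copied over as the starting point for a second environment. All environment details and reward values below are invented for the example.

```python
import random

def train_q(goal_reward, n=5, q=None, episodes=200, alpha=0.5, gamma=0.9, eps=0.2):
    """Tabular Q-learning on an n-state chain: actions 0 = left, 1 = right.
    Reaching the last state ends the episode and pays goal_reward."""
    if q is None:
        q = [[0.0, 0.0] for _ in range(n)]  # start from scratch
    for _ in range(episodes):
        s = 0
        while s != n - 1:
            if random.random() < eps:
                a = random.randrange(2)                 # explore
            else:
                a = 0 if q[s][0] >= q[s][1] else 1      # exploit current estimates
            s2 = max(0, s - 1) if a == 0 else s + 1     # chain dynamics
            r = goal_reward if s2 == n - 1 else 0.0
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

random.seed(0)
source_q = train_q(goal_reward=1.0)                       # learn in the source task
warm_q = [row[:] for row in source_q]                     # transfer: copy the table
target_q = train_q(goal_reward=2.0, q=warm_q, episodes=20)  # brief fine-tuning in the target
```

Because the target environment shares the source’s structure, the transferred table already encodes “move right,” so far less training is needed the second time around.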

Computer science research depends on communication among peers to produce the most effective outcomes, and Slack’s thesis is notably in dialogue with the broader CS community and its work.

“Part of the mandate in the CS department is that the thesis can be read by anyone with the background given in the CS curriculum,” said Slack. “I learned how to balance relevant background with forward-looking results and experiments. It was difficult but interesting to figure out how to strike a good balance between the two.”

Slack’s work will be of continued relevance, not just to the computer science community, but to his future work as well: he will pursue a Ph.D. in computer science at UC Irvine starting this fall.

How did your advisor help you develop your thesis topic, conduct your research, and/or interpret your results?

[Sorelle and I] thought it would be interesting to develop a method to allow a human expert to interpret the knowledge moved from one environment to another and edit it to better suit another environment. This method is applied in the case of “deep reinforcement learning,” which uses models called neural networks. These types of models have been really successful for things like video games. When you see a headline like “AI beats human at X game,” a lot of the time it’s a model that uses a neural network. I came up with a couple of simple toy scenarios and found that the method I developed worked pretty well in achieving better performance across environments.
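The expert-editing idea can be hinted at with a toy sketch. Everything below is invented for illustration (the state names, values, and table representation are not the thesis’s actual method): knowledge transferred from a source task is stored in a table, and a human expert who knows how the target task differs overwrites entries before the model continues learning.

```python
# Hypothetical transferred knowledge: preferences the source model learned.
source_knowledge = {
    ("hallway", "go_right"): 0.9,  # the source task's goal lay to the right
    ("hallway", "go_left"): 0.1,
}

# Transfer: start the target model from a copy of the source's table.
target_knowledge = dict(source_knowledge)

# Expert edit: a human who knows the goal moved in the new environment
# overwrites the stale entries before any further training happens.
target_knowledge[("hallway", "go_right")] = 0.1
target_knowledge[("hallway", "go_left")] = 0.9

# The edited model now starts out preferring the correct direction.
best = max(target_knowledge, key=target_knowledge.get)
```

The payoff is that the target model begins training from expert-corrected knowledge rather than having to unlearn the source task’s habits through trial and error.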

What are the implications of your thesis research?

Methods in machine learning for the most part don’t pull from the advice of human experts. I think my thesis indicates that more work is needed on incorporating domain expertise into machine learning models in more significant ways. Because my work was exploratory, hopefully in the future people can find more robust real-world applications. I’ll be working on a more robust application to chemical-reaction exploration over the summer.

“What They Learned” is a blog series exploring the thesis work of recent graduates.