Class name: “Speech Synthesis and Recognition”
Taught by: Assistant Professor of Linguistics Jane Chandlee
Here’s what Chandlee had to say about her class:
This class provides an overview of the automated recognition and generation (synthesis) of human speech, two technologies that are increasingly part of daily life (e.g., Siri, Amazon Echo, and GPS systems). We first look at human speech itself, from the perspectives of both articulation and acoustics, and then review the algorithms and methods currently used by speech technology developers and researchers. I hope students come away with a greater appreciation for how challenging it is to develop this kind of technology, as well as for how amazing it is that humans perform the same tasks seemingly effortlessly.
I created this class because courses like it are very rarely offered at the undergraduate level, and I thought Haverford was an ideal place to try it out. There is growing interest among students here in both computer science and linguistics, as well as their intersection, and speech technology is an excellent and prominent example of what can happen when these two fields meet. I also thought it would be a fun and rewarding experience for the students to tackle very high-level computational problems in an interactive way. Throughout the semester, the students get hands-on experience building the components of working recognition and synthesis systems.
Photo of students exploring digital representations of speech by Wanyi Yang ’20.
Cool Classes is a series that highlights interesting, unusual, and unique courses that enrich the Haverford experience.