This course explores the intersection of generative AI and Human-Computer Interaction (HCI), equipping students to design, critique, and ethically deploy text-to-image (T2I) technologies in real-world applications. Through hands-on labs, case studies, and debates, students will bridge technical foundations with HCI principles, focusing on usability, creativity, and societal impact.
By the end of this course, students will be able to:
Explain how diffusion models and text-image alignment models (e.g., CLIP) work at a conceptual level.
Compare the strengths and weaknesses of major T2I tools for design tasks.
Apply prompt engineering techniques to generate usable outputs for design scenarios.
Design AI-augmented workflows and evaluate their efficacy.
Diagnose biases in T2I outputs and propose mitigation strategies.
Develop a functional prototype that integrates T2I into a user-facing application.
TBD
TBD
Tuesday, 8:00-10:00 PM
Tuesday, 13:30-15:30, Office Room 4321
Paper - 30%
Report - 20%
Project - 40%
Participation - 10%