Seminar: Deep Semi-Supervised Learning by Prof. Jee-Hyong Lee
Abstract:
In this seminar, I will explore how combining a small amount of labeled data with a large amount of unlabeled data can substantially improve machine learning models. We will discuss the key assumptions underlying Semi-Supervised Learning (SSL), namely the smoothness, low-density separation, and manifold assumptions, and examine basic techniques such as consistency regularization, entropy minimization, and self-training. I will then introduce recent advanced methods such as MixMatch, FixMatch, and FlatMatch, which combine these ideas, and show how they can achieve performance comparable to fully supervised models. We will also address the challenges SSL faces, including computational cost and instability when labeled data is scarce.
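To make the consistency-regularization and pseudo-labeling ideas mentioned above concrete, below is a minimal PyTorch sketch in the style of FixMatch. It is an illustrative assumption, not the speaker's implementation: the model, the weakly and strongly augmented batches, and the threshold and weight hyperparameters are all placeholders chosen for demonstration.

    import torch
    import torch.nn.functional as F

    def fixmatch_loss(model, x_labeled, y_labeled, x_weak, x_strong,
                      threshold=0.95, lambda_u=1.0):
        # Supervised cross-entropy on the small labeled batch.
        loss_sup = F.cross_entropy(model(x_labeled), y_labeled)

        # Pseudo-labels from weakly augmented unlabeled data (no gradients).
        with torch.no_grad():
            probs = F.softmax(model(x_weak), dim=-1)
            conf, pseudo = probs.max(dim=-1)
            mask = (conf >= threshold).float()  # keep only confident predictions

        # Consistency: predictions on strongly augmented views must match
        # the pseudo-labels obtained from the weakly augmented views.
        per_sample = F.cross_entropy(model(x_strong), pseudo, reduction="none")
        loss_cons = (per_sample * mask).mean()

        return loss_sup + lambda_u * loss_cons

    # Toy usage with random stand-in data (10 classes, 32-dim inputs):
    model = torch.nn.Linear(32, 10)
    x_l, y_l = torch.randn(4, 32), torch.randint(0, 10, (4,))
    x_w, x_s = torch.randn(16, 32), torch.randn(16, 32)  # weak/strong augmented views
    loss = fixmatch_loss(model, x_l, y_l, x_w, x_s)
    loss.backward()

The confidence threshold implements a form of entropy minimization: only unlabeled examples on which the model is already confident contribute to the loss, which pushes decision boundaries into low-density regions, one of the SSL assumptions the seminar covers.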
Biography:
Professor Jee-Hyong Lee has been with the Department of Computer Science and Engineering at Sungkyunkwan University, Korea, since March 2002. Since April 2019, he has led the Graduate School Program in Artificial Intelligence, funded by the Korean government. He served as Vice President of the Artificial Intelligence Society of the Korean Institute of Information Scientists and Engineers from January 2017 to December 2019. He has presented his research at prestigious conferences such as CVPR, ICCV, ECCV, SIGIR, ACL, EMNLP, and COLING. His research interests include semi-supervised learning, learning with noisy labels, program and language generation, program vulnerability detection, and adversarial attacks on natural language models.