Seminar by Dr Maggie Liu, RMIT University, Australia, on 19 Feb 2025, RTP Harvard Room

Time: 19 Feb 2025, 3pm to 4pm
Venue: Research Techno Plaza, Level 2, Harvard Room
Title: SIGuard: Guarding Secure Inference with Post Data Privacy
Bio: Dr Xiaoning (Maggie) Liu is a Lecturer (equivalent to Assistant Professor) at the School of Computing Technologies, RMIT University, Australia. Her research centres on data privacy and security in machine learning, cloud computing, and digital health. Her current focus is on designing practical secure multiparty computation protocols and systems, and on applying them to privacy-preserving machine learning. In recent years, her work has appeared in prestigious venues in computer security, including the USENIX Security Symposium, NDSS, the European Symposium on Research in Computer Security (ESORICS), IEEE Transactions on Dependable and Secure Computing (TDSC), and IEEE Transactions on Information Forensics and Security (TIFS). Her research has been supported by the Australian Research Council and CSIRO. She is the recipient of the Best Paper Award at ESORICS 2021.
Abstract: Secure inference is designed to enable encrypted machine learning model prediction over encrypted data. It eases privacy concerns when models are deployed in Machine Learning as a Service. For efficiency, most recent secure inference protocols are constructed using secure multi-party computation (MPC) techniques. However, MPC-based protocols do not hide the information revealed by their outputs. In the context of secure inference, prediction outputs (i.e., inference results on encrypted user inputs and models) are revealed to the users. As a result, adversaries can compromise the output privacy of secure inference by launching Membership Inference Attacks (MIAs) through queries to the encrypted model, just like MIAs in plaintext inference. In this talk, I will first share our observations on the vulnerability of MPC-based secure inference to MIAs, even though it yields perturbed predictions due to approximations. Then I will report on our recent research effort in guarding the output privacy of secure inference from being exploited by MIAs. I will also discuss future research directions along the line of privacy-preserving machine learning and deep learning.
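
For readers unfamiliar with MIAs, the sketch below (not from the talk or the SIGuard paper) illustrates the basic idea the abstract refers to: an attacker who only sees the prediction vectors returned to the querying user, exactly what a secure inference protocol reveals, can guess training-set membership with a simple confidence threshold. A plain scikit-learn classifier stands in for the secure inference service, and the 0.9 threshold is an arbitrary assumption for illustration.

    # Minimal confidence-threshold membership inference sketch (illustrative only).
    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.preprocessing import StandardScaler
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True)
    X = StandardScaler().fit_transform(X)
    # "Members" are the training samples; "non-members" are held out.
    X_in, X_out, y_in, y_out = train_test_split(X, y, test_size=0.5, random_state=0)

    model = LogisticRegression(max_iter=1000).fit(X_in, y_in)

    def max_confidence(samples):
        # The attacker only needs the revealed prediction vector per query,
        # which is what a secure inference protocol returns to the user.
        return model.predict_proba(samples).max(axis=1)

    threshold = 0.9  # assumed attack threshold; in practice tuned, e.g. via shadow models
    print("flagged as members (true members):     %.2f" % (max_confidence(X_in) > threshold).mean())
    print("flagged as members (true non-members): %.2f" % (max_confidence(X_out) > threshold).mean())

A gap between the two printed rates indicates membership leakage through the revealed outputs alone; guarding this output channel is the problem SIGuard targets.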