The 2023 International Workshop on

Privacy-Preserving Machine Learning

Co-located with IEEE TPS 2023

(November 1-3, 2023, Atlanta, Georgia, USA)

About

The primary purpose of the workshop is to strengthen collaborations between the machine learning and security-and-privacy communities. The workshop will bring together experts from both areas to discuss and exchange ideas on privacy-preserving techniques for training, inference, and disclosure. The event consists of invited talks and contributed papers focused on privacy-aware data analysis. We believe the workshop will give researchers an opportunity to develop innovative solutions to the challenges of privacy-aware machine learning and will foster cross-domain collaboration.

Specific topics of interest for the workshop include (but are not limited to) theoretical and empirical works in:

  • Privacy-enhancing technologies for (distributed) machine learning
    • multi-party computation, e.g., garbled circuits and secret sharing
    • advanced cryptographic primitives, e.g., homomorphic encryption and functional encryption
    • differential privacy (a minimal sketch follows this list)
    • architecture design oriented toward privacy preservation
  • Privacy-preserving federated learning
    • attacks on and defenses for both horizontal and vertical FL
    • asynchrony in privacy-preserving federated learning (PPFL)
    • model effectiveness in vertical FL
  • Adversarial machine learning
  • Fairness and accountability for machine learning
  • Measurement and usability for privacy-preserving machine learning
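
As one concrete instance of the techniques listed above, here is a minimal sketch of the Laplace mechanism for differential privacy; the function name, bounds, and toy data are illustrative assumptions, not part of the call:

    import numpy as np

    def laplace_mean(values, lo, hi, epsilon, rng=None):
        # Release the mean of `values` with epsilon-differential privacy.
        # Clipping each value to [lo, hi] bounds the effect of any single
        # record on the mean by (hi - lo) / n, i.e., its sensitivity.
        rng = rng or np.random.default_rng()
        clipped = np.clip(values, lo, hi)
        sensitivity = (hi - lo) / len(clipped)
        noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
        return clipped.mean() + noise

    # Example: privately estimate an average age with budget epsilon = 1.
    ages = np.array([23, 35, 41, 29, 52, 60, 31, 47])
    print(laplace_mean(ages, lo=0, hi=100, epsilon=1.0))

Clipping to a known range bounds the sensitivity of the mean, which in turn calibrates the Laplace noise scale to the chosen privacy budget epsilon.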

Invited Talk

(Co-Keynote with IEEE TPS)

Can Federated Learning be Responsible?

Prof. Ling Liu, Georgia Institute of Technology, USA

Abstract: Federated learning (FL) is an emerging distributed collaborative learning paradigm that decouples the learning task from a centralized server and distributes it across a decentralized population of edge clients. An attractive feature of federated learning is its default client privacy: clients jointly learn a global model while keeping their sensitive training data local, sharing only local model updates with the federated server(s). However, FL does not by itself guarantee responsible distributed learning. First, clients are heterogeneous, with diverse computing resources, which may prevent thin clients with fewer resources from participating in federated learning. Second, recent studies have revealed that the default privacy in FL is insufficient for protecting the confidentiality of clients' training data and the safety of the global model. Finally, federated learning is vulnerable to trojan attacks, which may poison both the data and the local model updates. This keynote will describe model leakage and model poisoning risks in distributed collaborative learning systems, ranging from image understanding and video analytics to large language models (LLMs), and provide insights into risk mitigation methods and techniques for responsible federated learning.
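
The aggregation pattern the abstract describes, with clients training locally and a server averaging their model updates, can be sketched in a few lines. The following toy federated-averaging example on a linear-regression task is illustrative only; all names, hyperparameters, and the synthetic data are assumptions, not material from the talk:

    import numpy as np

    def local_update(weights, X, y, lr=0.1, epochs=5):
        # One client's local training: full-batch gradient descent on
        # squared error. The raw data (X, y) never leaves the client;
        # only the updated weights are returned to the server.
        w = weights.copy()
        for _ in range(epochs):
            grad = 2 * X.T @ (X @ w - y) / len(y)
            w -= lr * grad
        return w

    def fed_avg(global_w, client_data):
        # Server step: collect each client's update and average them,
        # weighted by local dataset size (the FedAvg aggregation rule).
        updates, sizes = [], []
        for X, y in client_data:
            updates.append(local_update(global_w, X, y))
            sizes.append(len(y))
        return np.average(updates, axis=0, weights=sizes)

    # Toy run: three clients, each holding a private local dataset.
    rng = np.random.default_rng(0)
    true_w = np.array([2.0, -1.0])
    clients = []
    for _ in range(3):
        X = rng.normal(size=(50, 2))
        clients.append((X, X @ true_w + 0.01 * rng.normal(size=50)))

    w = np.zeros(2)
    for _ in range(20):
        w = fed_avg(w, clients)
    print(w)  # approaches true_w without pooling any raw data

The sketch also makes the abstract's risks tangible: the server sees every client's weights, which is exactly the surface that model-leakage and poisoning attacks exploit.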

Bio: Ling Liu is a Professor in the School of Computer Science at the Georgia Institute of Technology. She directs the research programs in the Distributed Data Intensive Systems Lab (DiSL), examining various aspects of large-scale, big-data-powered artificial intelligence (AI) systems and machine learning (ML) algorithms and analytics, including performance, availability, privacy, security, and trust. Prof. Liu is an elected IEEE Fellow, a recipient of the IEEE Computer Society Technical Achievement Award (2012), and a recipient of best paper awards at numerous top venues, including IEEE ICDCS, WWW, ACM/IEEE CCGrid, IEEE Cloud, and IEEE ICWS. She has served on the editorial boards of over a dozen international journals, including as editor-in-chief of IEEE Transactions on Services Computing (2013-2016) and editor-in-chief of ACM Transactions on Internet Technology (since 2019). Prof. Liu is a frequent keynote speaker at top-tier venues in big data, AI and ML systems and applications, cloud computing, services computing, and the privacy, security, and trust of data-intensive computing systems. Her current research is primarily supported by the USA National Science Foundation under CISE programs, IBM, and Cisco.

Call for Contributions

We seek contributions from different research areas of computer science, information science, and information security.

Authors are invited to submit a short paper describing their work. Submissions are single-blind (non-anonymized) and should be 6 to 8 pages in the standard two-column IEEE proceedings format. Submissions will undergo a lightweight review process and will be judged on originality, relevance, interest, and clarity. Accepted papers will be presented at the workshop, co-located with the joint conference, either as a talk or a poster. The workshop proceedings will be included with those of the joint conference and published by IEEE.

Submit your work via EasyChair.

Submission deadline: September 15, 2023

Organization (in alphabetical order of surname)

Ka-Ho Chow

IBM Research - Almaden

Chao Li

School of Computer and Information Technology, Beijing Jiaotong University

Yanzhao Wu

Knight Foundation School of Computing and Information Sciences, Florida International University

Runhua Xu

School of Computer Science and Engineering, Beihang University

Tianyu Chen (Student Volunteer, Ph.D. Candidate)

School of Computer Science and Engineering, Beihang University