Sitemap

A list of all the posts and pages found on the site. For the robots out there, an XML version is available for digesting as well.

Pages

Posts

Blog Post 1

portfolio

publications

research

Model Inversion Attacks against Secure Federated Learning Systems

Research Problem

  • Federated learning (FL) is a distributed learning paradigm that enables participants to collaboratively train machine learning models without sharing their local datasets. For this reason, it is widely regarded as privacy-preserving.
  • However, recent optimization-based model inversion attacks show that a curious server can invert the model updates shared by FL participants to reconstruct their local training samples, challenging this privacy guarantee.
  • To address this, a multi-party computation mechanism named secure aggregation has been proposed: it hides individual model updates behind cryptographic masks while keeping the aggregated result identical to the sum of the unmasked updates, so system utility is preserved (see the sketch after this list). Because the server no longer observes individual updates, optimization-based attacks are effectively blocked.
  • In this research, we investigate whether the parameter server can still reconstruct clients' local training samples from model updates when the secure aggregation protocol is in place.
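
A minimal sketch of the pairwise-masking idea behind secure aggregation (not the full cryptographic protocol, and not the specific system studied here); the client count, update dimension, and random values are purely illustrative.

    import numpy as np

    rng = np.random.default_rng(0)
    num_clients, dim = 4, 6

    # Hypothetical local model updates (in practice, gradients or weight deltas).
    true_updates = [rng.normal(size=dim) for _ in range(num_clients)]

    # Pairwise masks: mask (i, j) is shared secretly by clients i and j (i < j).
    pair_masks = {
        (i, j): rng.normal(size=dim)
        for i in range(num_clients)
        for j in range(i + 1, num_clients)
    }

    def masked_update(i: int) -> np.ndarray:
        """What client i actually sends to the server."""
        masked = true_updates[i].copy()
        for (a, b), mask in pair_masks.items():
            if a == i:      # lower-indexed peer adds the shared mask
                masked += mask
            elif b == i:    # higher-indexed peer subtracts it
                masked -= mask
        return masked

    # The server only observes masked updates, so no individual update leaks...
    server_view = [masked_update(i) for i in range(num_clients)]

    # ...yet every mask cancels in the sum, so the aggregate is unchanged.
    assert np.allclose(sum(server_view), sum(true_updates))

Because each mask is added by one client and subtracted by its peer, all masks cancel in the server-side sum, which is exactly the utility-preserving property described above.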

Protecting Network Timing from Byzantine Attacks within Time-Sensitive IoT Networks

Research Problem

  • Ensuring network time synchronization is critical for time-sensitive IoT networks such as 5G and automotive Ethernet: these networks impose stringent synchronization requirements, and even a millisecond-level desynchronization can cause significant system performance degradation.
  • The Precision Time Protocol (PTP) is the widely used network timing protocol that provides time synchronization for many such distributed networks; its basic offset computation is sketched after this list.
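
A minimal sketch of the standard PTP offset and path-delay computation from the four exchange timestamps; the example timestamps (and the assumption of a symmetric path delay) are illustrative, not measurements from this research.

    def ptp_offset_and_delay(t1: float, t2: float, t3: float, t4: float):
        """Two-step PTP estimate, assuming a symmetric network path delay.

        t1: master sends Sync          (master clock)
        t2: slave receives Sync        (slave clock)
        t3: slave sends Delay_Req      (slave clock)
        t4: master receives Delay_Req  (master clock)
        """
        offset = ((t2 - t1) - (t4 - t3)) / 2.0  # slave clock minus master clock
        delay = ((t2 - t1) + (t4 - t3)) / 2.0   # one-way path delay estimate
        return offset, delay

    # Example: slave clock runs 1.5 ms ahead of the master, one-way delay is 0.5 ms.
    offset, delay = ptp_offset_and_delay(t1=0.000, t2=0.002, t3=0.003, t4=0.002)
    print(f"offset = {offset * 1e3:.2f} ms, delay = {delay * 1e3:.2f} ms")
    # -> offset = 1.50 ms, delay = 0.50 ms

A Byzantine participant that can skew these timestamps directly skews the estimated offset, which is why protecting this exchange matters in time-sensitive networks.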

talks

teaching