I want to talk about a rabbit hole I have fallen down since reading a paper on *The Promises and Predicaments of Federated Learning in Healthcare*.
Last year, I had the privilege of working with an incredible team focused on applying machine learning techniques to tackle data interoperability challenges within our healthcare system. We grappled with disparate data formats, strict privacy regulations, and the sheer volume of sensitive patient information scattered across multiple institutions. These hurdles sparked my curiosity about how emerging technologies could help, and ultimately led me to dig deeper into federated learning and the privacy-enhancing technologies, such as homomorphic encryption and differential privacy, that can be integrated into it.
Federated Learning
Federated learning partitions the learning process into smaller units that run locally, while model parameters or gradients are shared with a central server for aggregation. This decentralisation ensures that raw data never leaves the originating device or organisation, reducing the risk of exposure. Multiple entities (e.g., mobile devices, hospitals, or organisations) collaboratively train a shared global model without exchanging their local datasets.
Imagine a teacher who wants to teach math to a group of students. Each student lives in a different city, so they cannot meet in one classroom at the same time. The teacher sends the lesson to each student; they practice at home and send their test results back. The teacher doesn’t see their workbooks, just the grades, and uses those grades to update the lesson for next time. In traditional AI training, data is collected in one place (like all the students coming to one classroom). In federated learning, each “student” (or user) keeps their data private and helps improve the model locally. Only updates are shared, never the original data.
**Federated Learning:**
- ML paradigm that allows a model to be trained across decentralised devices or servers holding local data points without exchanging them.
- Local models are trained on local data, while gradients (model updates) are shared with a central server.
- The central server aggregates these updates to improve the global model.
- This is relevant where the training data must be kept private and it is not desirable or feasible to centralise it (healthcare, edge computing, mobile devices, distributed networks); a minimal sketch of the training loop follows below.
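To make this concrete, here is a minimal sketch of federated averaging (the FedAvg-style aggregation described above) in Python with NumPy. The linear model, the three synthetic “hospital” datasets, and the hyperparameters are illustrative assumptions, not a production implementation:

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """Train a linear model locally with plain gradient descent."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

def federated_averaging(global_w, clients):
    """Aggregate client models, weighted by local dataset size."""
    total = sum(len(y) for _, y in clients)
    new_w = np.zeros_like(global_w)
    for X, y in clients:
        local_w = local_update(global_w, X, y)
        new_w += (len(y) / total) * local_w
    return new_w

# Illustrative data: three "hospitals", each with a private local dataset.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for n in (50, 80, 120):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    clients.append((X, y))

global_w = np.zeros(2)
for _ in range(20):  # each round: local training, then aggregation
    global_w = federated_averaging(global_w, clients)
print(global_w)  # approaches true_w without ever pooling the raw data
```

Note that only the trained local weights cross the network; the raw `(X, y)` pairs stay on each client, which is the whole point of the paradigm.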
Privacy-Enhancing Technologies
While federated learning reduces the need to share raw data, it isn’t impervious to all threats. Model updates can sometimes be reverse-engineered to reveal sensitive information, a class of attacks known as model inversion.
This is where privacy-enhancing technologies like differential privacy and homomorphic encryption become crucial: they enable AI systems to learn from data without exposing it in raw form.
Differential Privacy introduces carefully calibrated noise to the data or model updates, making it mathematically improbable to identify any individual’s information from the aggregated result. The noise masks the contribution of any single data point. The concept is not limited to ML; it is a general technique for protecting datasets.
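Here is a minimal sketch of the Gaussian mechanism (the same one written out in appendix A.2) applied to a model update before it leaves a client. The clipping norm and the (ε, δ) values are illustrative assumptions:

```python
import numpy as np

def gaussian_mechanism(update, clip_norm=1.0, epsilon=1.0, delta=1e-5):
    """Clip an update to bound its sensitivity, then add calibrated noise."""
    # Clipping caps each client's contribution, so sensitivity <= clip_norm.
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    # Noise scale for (epsilon, delta)-DP under the Gaussian mechanism.
    sigma = clip_norm * np.sqrt(2 * np.log(1.25 / delta)) / epsilon
    return clipped + np.random.normal(0.0, sigma, size=update.shape)

noisy_update = gaussian_mechanism(np.array([0.8, -0.3]))
```

Smaller ε means stronger privacy but more noise; in practice the clipping norm and ε are tuned against model accuracy.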
Homomorphic Encryption extends privacy protection to the computational level by allowing computations on encrypted data. For example, sensitive patient data can remain encrypted while computations (such as training updates) are performed on it. The result, when decrypted, matches the outcome of operations performed on the plaintext data.
- Partially Homomorphic Encryption (PHE): Supports either addition or multiplication.
- Fully Homomorphic Encryption (FHE): Supports arbitrary computations, both additions and multiplications, on encrypted data.
- **Application in Federated Learning:** Homomorphic encryption allows a central server to aggregate encrypted model updates without learning anything about the original data, further reducing the risk of data breaches. Data can remain confidential while contributing to a collective learning process.
This allows a computer to work with data without ever being able to see what’s inside: it can calculate on encrypted values, so the original data stays secure and hidden throughout. AI systems can therefore process sensitive data, like health records, without the data itself being revealed.
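As a short illustration of additive homomorphism, here is a sketch using the python-paillier library (`phe`, installed via `pip install phe`); the scalar “model updates” are made up for the example. The server-side sum happens entirely on ciphertexts:

```python
from phe import paillier

# Each client encrypts its (scalar) model update with the shared public key.
public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)
client_updates = [0.12, -0.05, 0.30]
encrypted = [public_key.encrypt(u) for u in client_updates]

# The server adds ciphertexts directly; it never sees a plaintext update.
encrypted_sum = encrypted[0] + encrypted[1] + encrypted[2]

# Only the private key holder can decrypt the aggregate.
print(private_key.decrypt(encrypted_sum))  # ≈ 0.37
```

Because Paillier is only additively homomorphic (PHE), this covers exactly the operation federated aggregation needs: summing updates.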
Combining DP and HE in FL
Process Flow:
- Local Model Training: Each entity trains the model on its local data.
- Applying Differential Privacy: Entities add random noise to their model updates according to differential privacy mechanisms.
- Encrypting Updates with HE: The differentially private updates are then encrypted using a homomorphic encryption scheme.
- Secure Aggregation by Server: The server aggregates the encrypted updates without decrypting them, thanks to HE.
- Updating Global Model: The aggregated result is decrypted (if necessary) and used to update the global model.
- Model Distribution: The updated global model is sent back to the participating entities for the next training round (a sketch of the full loop follows).
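Here is a hedged end-to-end sketch of one such round, again assuming the python-paillier (`phe`) library. Local training is stubbed out with precomputed updates, and the DP parameters are illustrative (at these strong privacy settings the noise is substantial):

```python
from functools import reduce
import numpy as np
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair()

def dp_noise(update, clip_norm=1.0, epsilon=1.0, delta=1e-5):
    """Step 2: clip the update and add Gaussian noise for differential privacy."""
    clipped = update * min(1.0, clip_norm / max(np.linalg.norm(update), 1e-12))
    sigma = clip_norm * np.sqrt(2 * np.log(1.25 / delta)) / epsilon
    return clipped + np.random.normal(0.0, sigma, size=update.shape)

def client_round(local_update):
    """Steps 1-3: train locally (stubbed here), privatise, then encrypt."""
    private = dp_noise(local_update)
    return [public_key.encrypt(float(x)) for x in private]

# Step 1 stub: pretend each hospital has already computed a local update.
updates = [np.array([0.5, -0.2]), np.array([0.4, -0.1]), np.array([0.6, -0.3])]
encrypted_updates = [client_round(u) for u in updates]

# Step 4: the server sums ciphertexts coordinate-wise without decrypting.
aggregated = [reduce(lambda a, b: a + b, coords)
              for coords in zip(*encrypted_updates)]

# Step 5: the key holder decrypts only the aggregate and averages it.
avg_update = np.array([private_key.decrypt(c) for c in aggregated]) / len(updates)
print(avg_update)  # Step 6: broadcast back to clients for the next round
```

The server never sees an individual update in the clear, and even the decrypted aggregate is differentially private, so neither threat described above survives the pipeline intact.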
Appendix
A.1 Federated Learning Optimization
The objective in FL is to minimize the global loss function, which in the standard FedAvg formulation is:

$$\min_{w} F(w) = \sum_{k=1}^{K} \frac{n_k}{n} F_k(w), \qquad F_k(w) = \frac{1}{n_k} \sum_{i \in P_k} \ell(w; x_i, y_i)$$

where $K$ is the number of clients, $P_k$ is client $k$'s local dataset of size $n_k$, and $n = \sum_k n_k$.
A.2 Differential Privacy Mechanisms
- Gaussian Mechanism: For a function $f$ with sensitivity $\Delta f$, the mechanism is:

$$M(x) = f(x) + \mathcal{N}(0, \sigma^2 I), \qquad \sigma \ge \frac{\Delta f \sqrt{2 \ln(1.25/\delta)}}{\varepsilon}$$

which satisfies $(\varepsilon, \delta)$-differential privacy.
A.3 Homomorphic Encryption Schemes
- Paillier Cryptosystem: An additive homomorphic encryption scheme where:

$$E(m_1) \cdot E(m_2) \bmod n^2 = E(m_1 + m_2 \bmod n)$$

so multiplying two ciphertexts yields an encryption of the sum of their plaintexts.