Flower Labs is continuously looking to integrate new baselines into the Flower framework. There's always an opportunity to make a difference and contribute to open source by adding another cool baseline to the repository.
Below is a list of baselines waiting to be implemented (a minimal implementation sketch follows the list):
- FedDF - Ensemble Distillation for Robust Model Fusion in Federated Learning
- FedNTD - Preservation of the Global Knowledge by Not-True Distillation in Federated Learning
- FedSAM - Improving Generalization in Federated Learning by Seeking Flat Minima
- FedSMOO - Dynamic Regularized Sharpness Aware Minimization in Federated Learning: Approaching Global Consistency and Smooth Landscape
- q-FFL - Fair Resource Allocation in Federated Learning
- FedMinMax - Minimax Demographic Group Fairness in Federated Learning
- Power-of-Choice - Towards Understanding Biased Client Selection in Federated Learning
- Oort - Efficient Federated Learning via Guided Participant Selection
- AdaS - Private Adaptive Optimization with Side Information
- Per-FedAvg - Personalized Federated Learning with Theoretical Guarantees: A Model-Agnostic Meta-Learning Approach
- FedPAC - Personalized Federated Learning with Feature Alignment and Classifier Collaboration
- pFedHN - Personalized Federated Learning using Hypernetworks
- FedFomo - Personalized Federated Learning with First Order Model Optimization
- FedPM - Sparse Random Networks for Communication-Efficient Federated Learning
- Scaffnew - ProxSkip: Yes! Local Gradient Steps Provably Lead to Communication Acceleration! Finally!
- EDEN - Communication-Efficient and Robust Distributed Mean Estimation for Federated Learning
- FedCVAE - Data-Free One-Shot Federated Learning Under Very High Statistical Heterogeneity
- FedDC - Federated Learning from Small Datasets
- MoCoSFL - Enabling Cross-Client Collaborative Self-Supervised Learning
- FedAUX - Leveraging Unlabeled Auxiliary Data in Federated Learning
- FedDM - Iterative Distribution Matching for Communication-Efficient Federated Learning
- ZeroFL - Efficient On-Device Training for Federated Learning with Local Sparsity
- FedSM - Closing the Generalization Gap of Cross-Silo Federated Medical Image Segmentation
- FedMix - Approximation of Mixup under Mean Augmented Federated Learning
- SSFL + KWS - Semi-Supervised Federated Learning for Keyword Spotting
- EF21 - A New, Simpler, Theoretically Better, and Practically Faster Error Feedback
- FedHE - Heterogeneous Models and Communication-Efficient Federated Learning
- Ditto - Fair and Robust Federated Learning Through Personalization
- FedHarmony - Unlearning Scanner Bias with Distributed Data
- FedSim - Similarity Guided Model Aggregation for Federated Learning
- FedExP - Speeding Up Federated Averaging via Extrapolation
- SCAFFOLD - Stochastic Controlled Averaging for Federated Learning
- FedGen - Data-Free Knowledge Distillation for Heterogeneous Federated Learning
- FedBABU - Towards Enhanced Representation for Federated Image Classification
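
To give a rough idea of what contributing a baseline involves, here is a minimal, hypothetical sketch of a Flower simulation using the built-in FedAvg strategy. It assumes a Flower release that ships the `start_simulation` entry point (installed with `pip install flwr[simulation]`); the `ToyClient`, its synthetic data, and all hyperparameters are illustrative placeholders, not part of any paper above. A real baseline would swap in the paper's model, dataset, and (often custom) strategy.

```python
import numpy as np
import flwr as fl


class ToyClient(fl.client.NumPyClient):
    """Illustrative client: 'trains' a single weight vector on synthetic data."""

    def __init__(self, cid: str):
        rng = np.random.default_rng(int(cid))
        self.weights = np.zeros(10, dtype=np.float32)
        self.data = rng.normal(size=(32, 10)).astype(np.float32)

    def get_parameters(self, config):
        return [self.weights]

    def fit(self, parameters, config):
        # Load the global model, then take one mock "local training" step
        # (move the weights toward the local data mean).
        self.weights = parameters[0]
        self.weights = self.weights + 0.1 * (self.data.mean(axis=0) - self.weights)
        return [self.weights], len(self.data), {}

    def evaluate(self, parameters, config):
        # Report the mean squared distance between local data and global weights.
        loss = float(np.mean((self.data - parameters[0]) ** 2))
        return loss, len(self.data), {}


def client_fn(cid: str):
    # Newer Flower releases expect `ToyClient(cid).to_client()` here instead.
    return ToyClient(cid)


if __name__ == "__main__":
    fl.simulation.start_simulation(
        client_fn=client_fn,
        num_clients=2,
        config=fl.server.ServerConfig(num_rounds=3),
        strategy=fl.server.strategy.FedAvg(min_available_clients=2),
    )
```

Most baselines differ from this skeleton mainly in the strategy (for instance, SCAFFOLD replaces plain averaging with stochastic controlled averaging) and in the client-side training loop, so those two pieces are usually where the implementation work goes.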
Please express your interest in one of these baselines. Also, feel free to connect with us here and on Slack.