How to Manage Learning Rate Scheduling (and similar operations) in Flower?

Hello,
I recently started using Flower for a federated learning project I am working on. As a sanity check, I implemented a centralized training baseline on CIFAR-10 using a ResNet-18 model. The training recipe in this case involves learning rate scheduling, optimizer momentum, etc. I have achieved a test accuracy of around 90%, which is as expected.

On the other hand, when I train the same model in the federated setting using Flower, I can barely approach this performance. Based on advice I have received, it seems learning rate scheduling is necessary to close the gap. But because Flower clients are stateless by design, it is not immediately clear how to implement LR scheduling (or similar stateful operations) in Flower.

Any help with this would be greatly appreciated!

Hi @mabduaguye! If you sample clients (i.e., you don't select every client every round), I'd recommend controlling the learning rate (and optimizer state) via the server-side logic (Strategy or main loop). You can send the learning rate to the clients using the config dict. Each client (in fit or evaluate) then reads the LR from the config dict and sets it accordingly.
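
A minimal sketch of what this could look like on the server side, assuming a step-decay schedule (the initial LR, decay factor, and milestone are placeholders, not values from this thread). In Flower, a function like this can be passed to a strategy via `FedAvg(on_fit_config_fn=fit_config)`; the framework calls it once per round and forwards the returned dict to every sampled client's `fit`:

```python
def fit_config(server_round: int) -> dict:
    """Build the config dict sent to clients for this round.

    Implements a simple step decay: the LR is divided by 10
    every 30 rounds. All constants are placeholder choices.
    """
    initial_lr = 0.1
    lr = initial_lr * (0.1 ** ((server_round - 1) // 30))
    return {"lr": lr, "local_epochs": 1}

# Plugged into a strategy (sketch, requires flwr):
#
# strategy = fl.server.strategy.FedAvg(on_fit_config_fn=fit_config)
```

Because the schedule lives entirely on the server, clients stay stateless: they just apply whatever LR arrives in the config each round.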

Hi @daniel,
Thanks for the response. That sounds good! I have around 10 clients (and frequently fewer). I imagine a number that small is well within what the framework can handle, so I could simply sample all clients every round?

Scalability-wise, from a framework perspective, you can sample far larger numbers of clients.

From an implementation perspective, I'd generally recommend sending the learning rate (and other hyperparameters) from the ServerApp to the ClientApp.
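
On the client side, the counterpart is a small helper that pulls the hyperparameters out of the config dict that arrives in `fit`. This is a hedged sketch: the key names (`lr`, `momentum`, `local_epochs`) and default values are illustrative assumptions, not a fixed Flower contract, so they just need to match whatever your server-side config function emits:

```python
def parse_fit_config(config: dict) -> tuple:
    """Extract the hyperparameters the server sent via the config dict.

    Defaults are placeholders used only when the server omits a key.
    """
    lr = float(config.get("lr", 0.01))
    momentum = float(config.get("momentum", 0.9))
    local_epochs = int(config.get("local_epochs", 1))
    return lr, momentum, local_epochs

# Inside a NumPyClient's fit(), this would typically feed the optimizer
# (sketch, requires torch):
#
# def fit(self, parameters, config):
#     lr, momentum, local_epochs = parse_fit_config(config)
#     optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=momentum)
#     ...
```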

Makes sense. Thanks!
