Hello,
I recently started using Flower for a federated learning project I am working on. As a sanity check, I implemented a centralized training baseline on CIFAR-10 using a ResNet-18 model. The training recipe involves learning-rate scheduling, optimizer momentum, etc., and I reach a test accuracy of around 90%, as expected.
However, when I train the same model in the federated setting with Flower, I fall well short of this accuracy. From the advice I have received so far, learning-rate scheduling seems necessary to close the gap. But because Flower clients are stateless by design, it is not clear to me how to implement LR scheduling (or similar stateful operations) across rounds.
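To make the question concrete, here is a minimal sketch of the direction I have been considering: since clients are stateless, the schedule would have to live on the server as a pure function of the round number, with the resulting LR shipped to clients in the per-round config (I believe strategies accept something like an `on_fit_config_fn` for this, but I am not sure this is the intended pattern). The constants and function names below are my own, not from Flower:

```python
import math

# Assumed hyperparameters for this sketch (not from Flower itself).
INITIAL_LR = 0.1
TOTAL_ROUNDS = 100

def cosine_lr(server_round: int) -> float:
    """Cosine-annealed LR computed purely from the round number,
    so no state needs to survive on the client between rounds."""
    progress = (server_round - 1) / max(TOTAL_ROUNDS - 1, 1)
    return 0.5 * INITIAL_LR * (1.0 + math.cos(math.pi * progress))

def fit_config(server_round: int) -> dict:
    # Config dict that (I assume) the server could send to clients
    # each round; each client would then set this LR on its local
    # optimizer before training.
    return {"lr": cosine_lr(server_round)}
```

Is something along these lines the recommended way, or is there a better-supported mechanism for stateful operations?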
Any help with this would be greatly appreciated!