Hi everyone,
I’m working on a federated learning setup using Flower and would like to train each client on a fixed number of mini-batches per round instead of a full epoch. I noticed that the `pyproject.toml` file allows setting `local_epochs`, and the Flower documentation mentions that training can be limited to “as little as a few steps (mini-batches)” in each round.
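
For reference, here’s the direction I was exploring on the config side. This is only a sketch based on my reading of the Flower app setup, where values under `[tool.flwr.app.config]` in `pyproject.toml` show up in `context.run_config`; the `num-batches` key and `load_model_and_data()` are placeholders I made up, and `FlowerClient` is sketched further down:

```python
from flwr.client import ClientApp
from flwr.common import Context


def client_fn(context: Context):
    # "local-epochs" is the key the Flower app template already defines in
    # pyproject.toml; "num-batches" is a hypothetical custom key I would add
    # under the same [tool.flwr.app.config] table
    num_batches = int(context.run_config.get("num-batches", 10))
    # load_model_and_data() is a stand-in for my own model/data setup;
    # FlowerClient is sketched in the second snippet below
    model, train_loader = load_model_and_data()
    return FlowerClient(model, train_loader, num_batches).to_client()


app = ClientApp(client_fn=client_fn)
```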
However, I’m unsure how to implement this efficiently:
- Is there a built-in way to specify a fixed number of mini-batches per round via `pyproject.toml` or the client configuration?
- Or would I need to manually define an iterator within the client class to handle this? (I’ve sketched what I mean below.)
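
To make the second option concrete, this is roughly what I have in mind: a `NumPyClient` whose `fit` uses `itertools.islice` to stop after a fixed number of mini-batches. I’m assuming a PyTorch `DataLoader` here, and all the names are my own:

```python
from itertools import islice

import torch
from flwr.client import NumPyClient


class FlowerClient(NumPyClient):
    def __init__(self, model, train_loader, num_batches, lr=0.01):
        self.model = model
        self.train_loader = train_loader
        self.num_batches = num_batches
        self.optimizer = torch.optim.SGD(model.parameters(), lr=lr)
        self.criterion = torch.nn.CrossEntropyLoss()

    def set_parameters(self, parameters):
        # Load the aggregated weights from the server into the local model
        keys = self.model.state_dict().keys()
        state_dict = {k: torch.tensor(v) for k, v in zip(keys, parameters)}
        self.model.load_state_dict(state_dict, strict=True)

    def get_parameters(self, config):
        return [v.detach().cpu().numpy() for v in self.model.state_dict().values()]

    def fit(self, parameters, config):
        self.set_parameters(parameters)
        self.model.train()
        num_examples = 0
        # islice stops the loop after num_batches mini-batches, regardless
        # of how many batches a full pass over the data would contain
        for x, y in islice(self.train_loader, self.num_batches):
            self.optimizer.zero_grad()
            loss = self.criterion(self.model(x), y)
            loss.backward()
            self.optimizer.step()
            num_examples += len(x)
        return self.get_parameters(config), num_examples, {}
```

One thing I’m unsure about with this approach: a fresh `DataLoader` iterator starts from the beginning every round, so with `shuffle=True` each round sees different batches, while resuming mid-epoch across rounds would presumably require keeping a persistent iterator on the client.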
Any guidance or best practices would be greatly appreciated!
Thanks in advance!