Random number of clients in every round of federated training

*This question was migrated from GitHub Discussions.*

Original questions:

  1. Given a total of N clients, I want to randomly sample M (M < N) clients for each round of federated learning, so the set of M clients used in two consecutive rounds can be different. Is there a way to do that? Currently, I am using the `fraction_fit` parameter of `FedAvg`, but it doesn't seem to sample clients randomly.
  2. Also, is it possible to sample different numbers of clients in each round of federated training? For example, we use 20 clients for the first round and 30 clients for the second round.
  3. Also, is there a way to disconnect clients in every round of training?

“Hi, you can totally implement the sampling mechanism you describe. One way of doing it is by having a slightly customised strategy. In particular, you want to look into the `configure_fit()` method and add some logic to call `client_manager.sample()` with your M, drawing M for instance from a uniform distribution (e.g. with `numpy.random.randint`).”
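The mechanism suggested in the answer can be sketched without the full Flower stack. The `SimpleClientManager` below is an illustrative stand-in for the client manager Flower passes to a strategy (the class and method names in this sketch are assumptions for demonstration, not Flower's exact API); in real code, the selection logic would go inside the `configure_fit()` method of a `FedAvg` subclass:

```python
import random

class SimpleClientManager:
    """Illustrative stand-in for a Flower-style client manager."""

    def __init__(self, cids):
        self.cids = list(cids)

    def num_available(self):
        # Number of currently connected clients
        return len(self.cids)

    def sample(self, num_clients):
        # Sample without replacement: each client appears at most once per round
        return random.sample(self.cids, num_clients)

def select_round_clients(client_manager, min_m, max_m):
    """Draw a fresh round size M uniformly from [min_m, max_m], capped at the
    number of available clients, then sample M clients for this round."""
    upper = min(max_m, client_manager.num_available())
    m = random.randint(min_m, upper)
    return client_manager.sample(m)

# 50 clients total; each round uses between 20 and 30 of them
manager = SimpleClientManager(f"client_{i}" for i in range(50))
round_1 = select_round_clients(manager, 20, 30)
round_2 = select_round_clients(manager, 20, 30)
```

Because both M and the membership are redrawn on every call, two consecutive rounds will generally differ in both size and composition, which addresses questions 1 and 2 above.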