Server as job coordinator?

*This question was migrated from GitHub Discussions.*

Original Question:
It appears that the current design assumes that a single strategy is passed to the Server at initialization. Is it possible to instead use the server as a job coordinator, such that multiple FL jobs are submitted and managed by a single central server?

In a many-to-many topology, where many data scientists submit multiple FL jobs to a distributed pool of workers, each worker in the pool should only need to know about a single central server, and each data scientist should be able to configure their client with that same central server.

Is this either possible today or planned on the Flower roadmap?

Answer:
Hi, thanks for the question! Job coordination / multi-tenancy is not yet supported in a user-friendly way. It's one of the top features on our roadmap, and we're already exploring different design approaches for it; our goal is (as always) to provide a very user-friendly implementation.

That being said, there are a couple of workarounds that people have successfully used:

  1. Use a custom strategy that orchestrates the different jobs (fairly easy to implement)
  2. Use MLCube in combination with a custom strategy
  3. Start multiple Flower servers and orchestrate them through an external script
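To make workaround 3 concrete, here is a minimal, framework-agnostic sketch of the coordinator idea: an external script tracks submitted jobs and assigns each one its own server address, so workers and data scientists only need to know the coordinator. The `FLJob` and `JobCoordinator` names are hypothetical illustrations, not Flower APIs; in a real deployment, `submit` would actually launch a Flower server process (e.g. via `subprocess`) at the assigned address.

```python
import itertools
from dataclasses import dataclass, field


@dataclass
class FLJob:
    """Hypothetical description of one federated learning job."""
    name: str
    num_rounds: int


class JobCoordinator:
    """Sketch of an external coordinator: each submitted job gets its
    own server address, drawn from a pool of ports on one host."""

    def __init__(self, base_port: int = 8080):
        self._ports = itertools.count(base_port)
        self.jobs: dict[str, str] = {}  # job name -> server address

    def submit(self, job: FLJob) -> str:
        # In a real setup this is where you would start a Flower
        # server for the job (e.g. as a subprocess) and point the
        # data scientist's client plus a subset of workers at it.
        address = f"0.0.0.0:{next(self._ports)}"
        self.jobs[job.name] = address
        return address


coordinator = JobCoordinator()
addr_a = coordinator.submit(FLJob("job-a", num_rounds=3))
addr_b = coordinator.submit(FLJob("job-b", num_rounds=5))
```

Each job runs against its own server, and the coordinator remains the single endpoint everyone configures; this is essentially what a custom orchestration script does until first-class multi-tenancy lands.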