Announcing Flower 1.11

The Flower Team is excited to announce the release of Flower 1.11 stable, and it’s packed with updates!
Flower is a friendly framework for collaborative AI and data science.
It makes novel approaches such as federated learning, federated evaluation,
federated analytics, and fleet learning accessible to a wide audience of researchers and engineers.

Thanks to our contributors

We would like to give our special thanks to all the contributors who made the new version of Flower possible (in git shortlog order):

Adam Narozniak, Charles Beauville, Chong Shen Ng, Daniel J. Beutel, Daniel Nata Nugraha, Danny, Edoardo Gabrielli, Heng Pan, Javier, Meng Yan, Michal Danilowski, Mohammad Naseri, Robert Steiner, Steve Laskaridis, Taner Topal, Yan Gao

What’s new?

  • Deliver Flower App Bundle (FAB) to SuperLink and SuperNodes (#4006, #3945, #3999, #4027, #3851, #3946, #4003, #4029, #3942, #3957, #4020, #4044, #3852, #4019, #4031, #4036, #4049, #4017, #3943, #3944, #4011, #3619)

    Dynamic code updates are here! flwr run can now ship and install the latest version of your ServerApp and ClientApp to an already-running federation (SuperLink and SuperNodes).

    How does it work? flwr run bundles your Flower app into a single FAB (Flower App Bundle) file. It then ships this FAB file, via the SuperExec, to both the SuperLink and those SuperNodes that need it. This allows you to keep SuperExec, SuperLink and SuperNodes running as permanent infrastructure, and then ship code updates (including completely new projects!) dynamically.

    flwr run is all you need.
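As a minimal sketch: shipping a new app version to a running federation is a single command (the federation name local-deployment is an assumption here, matching the deployment examples later in this thread):

```shell
# Bundle the Flower app in the current directory into a FAB and ship
# it, via the SuperExec, to the federation named in pyproject.toml.
flwr run . local-deployment
```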

  • Introduce isolated ClientApp execution (#3970, #3976, #4002, #4001, #4034, #4037, #3977, #4042, #3978, #4039, #4033, #3971, #4035, #3973, #4032)

    The SuperNode can now run your ClientApp in a fully isolated way. In an enterprise deployment, this allows you to set strict limits on what the ClientApp can and cannot do.

    flower-supernode supports three --isolation modes:

    • Unset: The SuperNode runs the ClientApp in the same process (as in previous versions of Flower). This is the default mode.
    • --isolation=subprocess: The SuperNode starts a subprocess to run the ClientApp.
    • --isolation=process: The SuperNode expects an externally-managed process to run the ClientApp. This external process is not managed by the SuperNode, so it has to be started beforehand and terminated manually. The common way to use this isolation mode is via the new flwr/clientapp Docker image.
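The three modes above can be sketched as command invocations (the SuperLink address 127.0.0.1:9092 is a placeholder taken from the deployment examples later in this thread):

```shell
# Default: the ClientApp runs in the SuperNode process (no flag needed).
flower-supernode --superlink 127.0.0.1:9092 --insecure

# Subprocess isolation: the SuperNode spawns the ClientApp itself.
flower-supernode --superlink 127.0.0.1:9092 --insecure --isolation=subprocess

# Process isolation: the ClientApp runs in an externally managed process
# (e.g. the flwr/clientapp Docker image), started and stopped separately.
flower-supernode --superlink 127.0.0.1:9092 --insecure --isolation=process
```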
  • Improve Docker support for enterprise deployments (#4050, #4090, #3784, #3998, #4094, #3722)

    Flower 1.11 ships many Docker improvements that are especially useful for enterprise deployments:

    • flwr/supernode comes with a new Alpine Docker image.
    • flwr/clientapp is a new image to be used with the --isolation=process option. In this mode, SuperNode and ClientApp run in two different Docker containers. flwr/supernode (preferably the Alpine version) runs the long-running SuperNode with --isolation=process. flwr/clientapp runs the ClientApp. This is the recommended way to deploy Flower in enterprise settings.
    • A new all-in-one Docker Compose setup enables you to easily start a full Flower Deployment Engine on a single machine.
    • Completely new Docker documentation: Flower Framework main
  • Improve SuperNode authentication (#4043, #4047, #4074)

    SuperNode authentication has been improved in several ways, including better logging, testing, and error handling.

  • Update flwr new templates (#3933, #3894, #3930, #3931, #3997, #3979, #3965, #4013, #4064)

    All flwr new templates have been updated to show the latest recommended use of Flower APIs.

  • Improve Simulation Engine (#4095, #3913, #4059, #3954, #4071, #3985, #3988)

    The Flower Simulation Engine comes with several updates, including improved run config support, verbose logging, simulation backend configuration via flwr run, and more.

  • Improve RecordSet (#4052, #3218, #4016)

    RecordSet is the core object to exchange model parameters, configuration values and metrics between ClientApp and ServerApp. This release ships several smaller improvements to RecordSet and related *Record types.

  • Update documentation (#3972, #3925, #4061, #3984, #3917, #3900, #4066, #3765, #4021, #3906, #4063, #4076, #3920, #3916)

    Many parts of the documentation, including the main tutorial, have been migrated to show new Flower APIs and other new Flower features like the improved Docker support.

  • Migrate code example to use new Flower APIs (#3758, #3701, #3919, #3918, #3934, #3893, #3833, #3922, #3846, #3777, #3874, #3873, #3935, #3754, #3980, #4089, #4046, #3314, #3316, #3295, #3313)

    Many code examples have been migrated to use new Flower APIs.

  • Update Flower framework, framework internals and quality infrastructure (#4018, #4053, #4098, #4067, #4105, #4048, #4107, #4069, #3915, #4101, #4108, #3914, #4068, #4041, #4040, #3986, #4026, #3961, #3975, #3983, #4091, #3982, #4079, #4073, #4060, #4106, #4080, #3974, #3996, #3991, #3981, #4093, #4100, #3939, #3955, #3940, #4038)

    As always, many parts of the Flower framework and quality infrastructure were improved and updated.

Deprecations

  • Deprecate accessing Context via Client.context (#3797)

    Now that both client_fn and server_fn receive a Context object, accessing Context via Client.context is deprecated. Client.context will be removed in a future release. If you need to access Context in your Client implementation, pass it manually when creating the Client instance in client_fn:

    def client_fn(context: Context) -> Client:
        return FlowerClient(context).to_client()
    
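For illustration, here is a self-contained sketch of the pattern. The Context class below is a stand-in for flwr.common.Context, and FlowerClient is a hypothetical client, both simplified for this example:

```python
from dataclasses import dataclass, field

# Stand-in for flwr.common.Context, just to illustrate the pattern.
@dataclass
class Context:
    node_config: dict = field(default_factory=dict)

class FlowerClient:
    # Receive the Context explicitly instead of relying on the
    # deprecated Client.context attribute.
    def __init__(self, context: Context):
        self.context = context

def client_fn(context: Context) -> FlowerClient:
    return FlowerClient(context)

client = client_fn(Context(node_config={"partition-id": 0}))
print(client.context.node_config["partition-id"])  # → 0
```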

Incompatible changes

  • Update CLIs to accept an app directory instead of ClientApp and ServerApp (#3952, #4077, #3850)

    The CLI commands flower-supernode and flower-server-app now accept an app directory as argument (instead of references to a ClientApp or ServerApp). An app directory is any directory containing a pyproject.toml file (with the appropriate Flower config fields set). The easiest way to generate a compatible project structure is to use flwr new.
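A minimal sketch of the relevant pyproject.toml fields, assuming an app package named myapp (the field names follow the flwr new templates; treat them as an assumption and compare against a freshly generated project):

```toml
[tool.flwr.app.components]
serverapp = "myapp.server_app:app"
clientapp = "myapp.client_app:app"
```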

  • Disable flower-client-app CLI command (#4022)

    flower-client-app has been disabled. Use flower-supernode instead.

  • Use spaces instead of commas for separating config args (#4000)

    When passing configs (run config, node config) to Flower, you now need to separate key-value pairs using spaces instead of commas. For example:

    flwr run . --run-config "learning-rate=0.01 num_rounds=10"  # Works
    

    Previously, you could pass configs using commas, like this:

    flwr run . --run-config "learning-rate=0.01,num_rounds=10"  # Doesn't work
    
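To make the format concrete, here is an illustrative parser for the space-separated key=value syntax (a sketch of the format only, not Flower's actual config parser):

```python
# Split a config string on whitespace, then split each pair on the
# first "=". Values are kept as strings in this sketch.
def parse_config(s: str) -> dict:
    out = {}
    for pair in s.split():
        key, _, value = pair.partition("=")
        out[key] = value
    return out

print(parse_config("learning-rate=0.01 num_rounds=10"))
# → {'learning-rate': '0.01', 'num_rounds': '10'}
```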
  • Remove flwr example CLI command (#4084)

    The experimental flwr example CLI command has been removed. Use flwr new to generate a project and then run it using flwr run.


Hi
I was wondering, when will the advanced PyTorch example be updated to show how to run it with the Deployment Engine and TLS certificates, or with Docker? I would really find this useful!

Kind Regards
Johan


Hello @johanrubak,

You can find instructions on how to run our quickstart examples with our Docker Compose setup here: Run Flower Quickstart Examples with Docker Compose - Flower Framework

These instructions also work for the advanced-pytorch example. The only changes you need to make are in Step 1, where you clone advanced-pytorch instead of quickstart-pytorch:

git clone --depth=1 https://github.com/adap/flower.git \
    && mv flower/examples/advanced-pytorch . \
    && rm -rf flower && cd advanced-pytorch

Run without W&B

You might also want to disable W&B if you don’t have an API key. In the advanced-pytorch directory, run the following command to disable W&B:

flwr run . local-deployment --run-config use-wandb=false

If you have a W&B API key, you can set the WANDB_API_KEY environment variable in the compose.yml file:

services:
  # ...
  # create a SuperExec service
  superexec:
    # ...
    environment:                      # <- add
      WANDB_API_KEY: <YOUR_API_KEY>   # <- add
  # ...

Run with TLS

To run the example with TLS, you will need to download the with-tls.yml and certs.yml files into the advanced-pytorch directory:

curl https://raw.githubusercontent.com/adap/flower/refs/heads/main/src/docker/complete/with-tls.yml -o with-tls.yml
curl https://raw.githubusercontent.com/adap/flower/refs/heads/main/src/docker/complete/certs.yml -o certs.yml

Then, follow the steps described in Quickstart with Docker Compose - Flower Framework.

However, the path of the root-certificate is different from what’s shown in the tutorial, as it assumes a different directory structure.

Instead, use root-certificates = "superexec-certificates/ca.crt".

Run with Docker

If you want to run it with just Docker, you can follow the steps on Quickstart with Docker - Flower Framework.

However, instead of setting up a new project, navigate to the advanced-pytorch directory and follow the instructions in the tutorial.

I hope that helps!
If you have any further questions, feel free to ask.


Hi, this is great! But I also wonder how we can run the deployment engine directly in the terminal with the commands flower-supernode, flower-superexec, flower-server-app, flower-superlink, and flwr run, for example with one server in one terminal and two clients in two other terminals. Thank you.


Hey @paula-delgadodesanto,
you can run the deployment engine directly in the terminal with the following commands:

Terminal 1: Start the SuperLink

flower-superlink --insecure

Terminal 2 and 3: Start two SuperNodes

flower-supernode --superlink 127.0.0.1:9092 --insecure

Terminal 4: Start the SuperExec

flower-superexec --executor flwr.superexec.deployment:executor \
    --executor-config 'superlink="127.0.0.1:9091"' --insecure 

Before you can run your project, you need to add the following configuration to your pyproject.toml file:

[tool.flwr.federations.local-deployment]
address = "127.0.0.1:9093"
insecure = true

This configuration tells the flwr CLI where to run your project.

Important: The commands above start the SuperLink, SuperNodes, and SuperExec in insecure mode (without TLS).

Terminal 5: Run the project

flwr run . local-deployment

You can then follow the ServerApp logs in Terminal 4 (SuperExec).


Additional note

If your ClientApp needs to access the node_config like context.node_config["partition-id"], you will need to set the config values when starting the SuperNode.

For example:

flower-supernode --superlink 127.0.0.1:9092 --insecure \
    --node-config "partition-id=1"
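Inside the ClientApp, those values arrive via context.node_config. A defensive sketch of reading them with a fallback, using a plain dict as a stand-in for the real node_config:

```python
# Read "partition-id" from a node_config-style dict, falling back to 0
# when the SuperNode was started without --node-config. int() handles
# both string and already-typed values.
def get_partition_id(node_config: dict) -> int:
    return int(node_config.get("partition-id", 0))

print(get_partition_id({"partition-id": "1"}))  # → 1
print(get_partition_id({}))                     # → 0
```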

If you have any further questions, please let me know.


Hi Robert, thanks for your answer! It works perfectly.
If I now want to run it with one server and two Jetsons as clients, do I need to change anything, such as the pyproject.toml?

[tool.flwr.federations.local-deployment]
address = "127.0.0.1:9093"
insecure = true

Thank you again,
Paula


Hi @paula-delgadodesanto, glad to hear it worked for you!

To run the server components (SuperLink and SuperExec) on a separate server, you can use the same commands as above. However, you’ll need to update the pyproject.toml file and the SuperNode command.

Specifically, you’ll need to update the address in the [tool.flwr.federations.local-deployment] section to the IP address of the server and also update the --superlink flag when starting the SuperNode to the same IP address.

For example, if the server has the IP address 192.168.2.100, you would start the SuperNode with:

flower-supernode --superlink 192.168.2.100:9092 --insecure

And update the pyproject.toml file to:

[tool.flwr.federations.local-deployment]
address = "192.168.2.100:9093"
insecure = true

If you encounter any issues, feel free to let me know!


Hi Robert, when I try to run the Opacus example following your steps, I run into the following ERROR:

ERROR :     ClientApp raised an exception
...
File "/home/opacus_fl/client_app.py", line 81, in client_fn
    partition_id=partition_id, num_partitions=context.node_config["num-partitions"]
KeyError: 'num-partitions'

So I think there is no problem passing context.node_config["num-partitions"] in simulation, but when we use the deployment engine directly we get the error above.

Please, how can I solve this?

Thank you so much again,
Paula


Hi Paula,

It looks like the issue arises when using the deployment engine because the node config isn’t being set correctly.

In simulation, the engine knows how many nodes will exist and can set partition-id and num-partitions automatically, but with the deployment engine, these need to be manually configured.

You’ll need to set the node config using the --node-config flag when starting the SuperNode.

For example, if you have two nodes, you can start the first one with (note that partition IDs are zero-based, running from 0 to num-partitions - 1):

flower-supernode --superlink <SUPERLINK_IP>:9092 --insecure \
    --node-config "partition-id=0 num-partitions=2"

And the second one with:

flower-supernode --superlink <SUPERLINK_IP>:9092 --insecure \
    --node-config "partition-id=1 num-partitions=2"

Let me know if that helps!
