Simple secure summation

I got an answer from ChatGPT on how to do secure aggregation by summing two clients' vectors on the server side, but it only succeeds in the first round; every subsequent round reports failures. I have two clients and one server.

import flwr as fl
from Crypto import Random
from Crypto.PublicKey import RSA
from Crypto.Cipher import PKCS1_OAEP
import random

# Function to combine shares (element-wise across clients, for this example)
def combine_shares(shares):
    # Combine the shares element-wise (note: this averages rather than sums)
    return [sum(x) / len(x) for x in zip(*shares)]

# Define the server logic
class Strat(fl.server.strategy.FedAvg):
    def aggregate_fit(self, rnd, results, failures):
        # Get shares from each client (result[0] contains the shares)
        client_shares = []
        for client_proxy, fit_res in results:
            client_shares.append(fl.common.parameters_to_ndarrays(fit_res.parameters)[0])
        # Combine the shares (in this case, we just sum them)
        aggregated_update = combine_shares(client_shares)
        
        # Return the aggregated update
        print(f"Aggregated Model Update: {aggregated_update}")
        
        return aggregated_update, {}


# Start the Flower server on localhost:8080
strat = Strat()
fl.server.start_server(server_address="localhost:8080", strategy=strat, config=fl.server.ServerConfig(num_rounds=3))
import flwr as fl
from Crypto import Random
from Crypto.PublicKey import RSA
from Crypto.Cipher import PKCS1_OAEP

import random


# Function to split a model update (vector) into shares using a simple secret-sharing method.
def secret_share(model_update, n_shares=3, threshold=2):
    # Dummy secret sharing for demonstration (not true Shamir's Secret Sharing;
    # the threshold argument is unused here)
    shares = []
    for _ in range(n_shares):
        # Here we just create random noise to simulate secret sharing
        noise = [random.uniform(-0.5, 0.5) for _ in model_update]
        share = [a + b for a, b in zip(model_update, noise)]  # Simulating secret sharing
        shares.append(share)
    return shares

class Client(fl.client.NumPyClient):

    def get_parameters(self, config):
        # Simulate model update (e.g., a vector of weights or gradients)
        model_update = [1.0, 2.0, 3.0]

        # Split the model update into shares
        shares = secret_share(model_update, n_shares=3, threshold=2)
        return shares

    def fit(self, parameters, config):
        # This function simulates fitting a model (no actual training here)
        return parameters, len(parameters), {}

    def evaluate(self, parameters, config):
        # Dummy evaluation function
        return 0.0, len(parameters)


# Start the client with Flower (we'll start it on localhost:8080)
client1 = Client()
fl.client.start_numpy_client(server_address="localhost:8080", client=client1)

Output from client:
INFO :
INFO : Received: get_parameters message 59ea6d8a-5c81-467a-8f27-2aef566d9e25
INFO : Sent reply
INFO :
INFO : Received: train message e58cacec-259b-40e8-a374-3ad6210a00b0
INFO : Sent reply
INFO :
INFO : Received: reconnect message d660d6d2-58c9-466f-b400-83cd84400ab2
INFO : Disconnect and shut down

Output from server:
INFO : [INIT]
INFO : Requesting initial parameters from one random client
INFO : Received initial parameters from one random client
INFO : Starting evaluation of initial global parameters
INFO : Evaluation returned no results (None)
INFO :
INFO : [ROUND 1]
INFO : configure_fit: strategy sampled 2 clients (out of 2)
INFO : aggregate_fit: received 2 results and 0 failures
Aggregated Model Update: [np.float64(0.8757696645739083), np.float64(1.7642662273751237), np.float64(3.330408813842628)]
INFO : configure_evaluate: strategy sampled 2 clients (out of 2)
INFO : aggregate_evaluate: received 0 results and 2 failures
INFO :
INFO : [ROUND 2]
INFO : configure_fit: strategy sampled 2 clients (out of 2)
INFO : aggregate_fit: received 0 results and 2 failures
Aggregated Model Update:
INFO : configure_evaluate: strategy sampled 2 clients (out of 2)
INFO : aggregate_evaluate: received 0 results and 2 failures
INFO :
INFO : [ROUND 3]
INFO : configure_fit: strategy sampled 2 clients (out of 2)
INFO : aggregate_fit: received 0 results and 2 failures
Aggregated Model Update:
INFO : configure_evaluate: strategy sampled 2 clients (out of 2)
INFO : aggregate_evaluate: received 0 results and 2 failures
INFO :
INFO : [SUMMARY]
INFO : Run finished 3 round(s) in 2.85s
INFO :

It seems to work only in round one, combining the shares received, but all the later rounds are failures somehow?
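As a side note, running the combiner on its own (standalone, outside Flower) shows it actually averages the shares rather than summing them:

```python
# Same combiner as in the server code above
def combine_shares(shares):
    return [sum(x) / len(x) for x in zip(*shares)]

print(combine_shares([[1.0, 2.0], [3.0, 4.0]]))  # [2.0, 3.0] -- the element-wise mean, not the sum
```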


Hi, I see that there might be some issues with your SA implementation, and I’m not able to identify all of them here. Since SA is a complex algorithm, ChatGPT might not be the best tool for this. However, if you’re interested in experimenting with SA, we offer an SA feature in Flower. Would you mind checking the SA example here? Secure aggregation with Flower (the SecAgg+ protocol) - Flower Examples 1.15.2
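In the meantime, for intuition about what secure summation needs, here is a minimal additive-masking sketch in plain Python (no Flower; all the names here are my own invention, not part of any library). The key property the dummy scheme above is missing is that each client's shares must sum exactly to its vector, so the random masks cancel when the server adds everything up:

```python
import random

def additive_shares(vec, n_shares):
    # Split vec into n_shares random vectors that sum exactly to vec
    # (requires n_shares >= 2)
    shares = [[random.uniform(-1.0, 1.0) for _ in vec] for _ in range(n_shares - 1)]
    last = [v - sum(col) for v, col in zip(vec, zip(*shares))]
    shares.append(last)
    return shares

def combine(all_shares):
    # Element-wise sum over every share the server received
    return [sum(col) for col in zip(*all_shares)]

# Two clients, each holding [1.0, 2.0, 3.0] as in the thread
client_a = additive_shares([1.0, 2.0, 3.0], n_shares=3)
client_b = additive_shares([1.0, 2.0, 3.0], n_shares=3)

# Summing all shares recovers the exact sum of the two vectors,
# while individual shares look like masked noise.
print(combine(client_a + client_b))  # ~[2.0, 4.0, 6.0], up to float rounding
```

This is only the arithmetic core; real secure aggregation (like SecAgg+) additionally handles share distribution between clients, dropouts, and authenticated channels.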


Hello, yes, I tried to implement the SecAgg+ protocol with Flower but could not get it to work. It felt too complicated. Anyway, I switched to TensorFlow Federated and got it working there.