Choosing parameters for Local Differential Privacy

Hi everyone,

I implemented a Federated Learning simulation in which I train a Logistic Regression model, and I added local differential privacy with the LocalDpMod, which adds noise to the model parameters on the client side, since the server is regarded as an untrusted party. I have a question about how to choose its parameters.

For delta I chose a small value, 0.001, and for epsilon I will try several values between 1 and 10 to compare the results.
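For context, here is a minimal sketch of how I am wiring the mod into my client (this assumes the LocalDpMod constructor takes clipping_norm, sensitivity, epsilon, delta in that order; the clipping_norm and sensitivity values are placeholders, since those are exactly the ones I am unsure about):

```python
from flwr.client import ClientApp
from flwr.client.mod import LocalDpMod

# Local DP mod: clips each client's model update and adds noise on the client
# side, so the (untrusted) server never sees the raw update.
local_dp_mod = LocalDpMod(
    1.0,    # clipping_norm -- placeholder, one of the values I'm unsure about
    1.0,    # sensitivity   -- placeholder, the other value I'm unsure about
    1.0,    # epsilon       -- I will sweep values between 1 and 10
    0.001,  # delta         -- the small value I chose
)

# Attached to my existing ClientApp (client_fn is defined elsewhere in my code):
# app = ClientApp(client_fn=client_fn, mods=[local_dp_mod])
```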

I am having trouble finding out what 'good' values are for clipping_norm and sensitivity. I have found multiple resources explaining what they do, but I cannot find anything on which values to choose.

Does anyone have advice on this, or references that explain how to decide these values for a simulation given the dataset and model?

Thanks!