Hello,
I am brand new to flower.ai, but I have a question that might lend itself to intermediate or advanced users. Given my limited understanding of how Flower works, I may well be barking up the wrong tree, especially since I don't see this topic addressed anywhere.
I work as an enterprise architect for Generative AI; before that I was a Product Manager for Gen AI R&D, and before that a PM for ML Governance. I come from an ML background but now find myself in the LLM ecosystem. 'Governance' is a loaded term: yes, it includes GDPR and data privacy, but from my perspective it also covers model security, responsible AI, ethics, sustainability, fairness/bias, hallucination, guardrails, everything.
What I am personally researching is an idea to implement Gen AI governance that is 'decentralized' or 'federated,' not in the way we usually think of federated data, but in the sense that you could implement policy-as-code across a large organization and make it easier to:
- track internal Gen AI approvals, reducing redundant governance reviews
- automate/'federate'/distribute guidelines that would otherwise slip through the cracks, or that teams might sidestep as a loophole to get their use case approved, and enforce requirements that nobody is incentivized to meet (no one hates the environment, but if your performance isn't measured against it, it probably won't make your limited list of what you can feasibly achieve)
- policy-as-code: from what I can tell, this is all 'guardrails' really are, but it's rarely framed that way since most guardrails are developed as proprietary code; some of my peers scoff when I say 'policy-as-code' because they think of it only in the context of traditional software development (see the sketch after this list for what I mean)
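
To make that last bullet concrete, here is a minimal sketch of what I mean by policy-as-code as an LLM guardrail. It assumes a local OPA instance running on its default port, with a hypothetical `llm.guardrails` Rego package already loaded; the package name and input fields are made up, but the `/v1/data/...` query API is OPA's real REST interface:

```python
"""Minimal sketch: policy-as-code as an LLM guardrail via OPA.

Assumes a local OPA instance (https://www.openpolicyagent.org/) on port
8181 with a hypothetical Rego package `llm.guardrails` loaded that exposes
an `allow` rule. The package path and input fields are illustrative.
"""
import requests

# OPA's data API: POST /v1/data/<package path>/<rule>
OPA_URL = "http://localhost:8181/v1/data/llm/guardrails/allow"

def check_request(user_prompt: str, use_case_id: str) -> bool:
    """Ask OPA whether this LLM call complies with internal policy."""
    payload = {"input": {"prompt": user_prompt, "use_case": use_case_id}}
    resp = requests.post(OPA_URL, json=payload, timeout=2)
    resp.raise_for_status()
    # OPA returns {"result": true/false}; "result" is absent if undefined.
    return resp.json().get("result", False)

allowed = check_request("Summarize this customer record", "cust-support-01")
print("proceed with LLM call" if allowed else "block and route for manual review")
```

The point is that the approval logic lives in a versioned Rego policy, not buried in each team's application code, so a governance team can change the rules without touching every use case.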
When I came across flower.ai, I thought of 'federated' in the sense that you could empower Gen AI development teams/use cases to be compliant with whatever internal standards were set without even trying, reducing manual governance reviews. But the content seems focused on shared data usage and regulatory concerns.
I'm happy to continue diving into this solo, but since I am SO new and found this discussion board, I figured I would ask in case someone further along than me could save me some time if flower.ai isn't suited for this.
I have been exploring OPA (https://www.openpolicyagent.org/) to prove my peers wrong about policy-as-code for enforcing LLM policies, and now I am wondering if flower.ai is what I need to 'federate' OPA (though maybe that's overkill, and OPA is already federated in the sense I'm imagining).
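
For what it's worth, here is a rough sketch of the distribution pattern I have in mind, assuming a hypothetical central policy registry. Each team's service runs a local OPA, and a small sidecar loop pulls the org's latest Rego policies and loads them through OPA's real policy API (`PUT /v1/policies/<id>`). My understanding is that OPA's built-in bundle feature does essentially this natively, which is part of why I suspect Flower may be unnecessary for the policy-distribution half:

```python
"""Rough sketch: 'federating' OPA by syncing policies from a central source.

POLICY_REGISTRY_URL and the {policy_id: rego_source} response shape are
hypothetical. The OPA call itself (PUT /v1/policies/<id> with raw Rego,
Content-Type text/plain) is OPA's real policy API. OPA's bundle mechanism
provides roughly this out of the box.
"""
import time
import requests

POLICY_REGISTRY_URL = "https://governance.example.com/policies"  # hypothetical
LOCAL_OPA_URL = "http://localhost:8181/v1/policies"

def sync_policies() -> None:
    # Fetch a map of {policy_id: rego_source} from the (hypothetical) registry.
    policies = requests.get(POLICY_REGISTRY_URL, timeout=5).json()
    for policy_id, rego_source in policies.items():
        # Create or update the policy module in the local OPA instance.
        requests.put(
            f"{LOCAL_OPA_URL}/{policy_id}",
            data=rego_source,
            headers={"Content-Type": "text/plain"},
            timeout=5,
        ).raise_for_status()

if __name__ == "__main__":
    while True:
        sync_policies()
        time.sleep(300)  # re-sync every 5 minutes
```

If that is all 'federation' means here, a plain pull model covers it; where I imagine Flower might add something is if teams also needed to share learned signals (e.g., locally observed violations) back without sharing data, but that's the part I can't evaluate yet.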
If you got this far and think I am insane, you can just forget you ever saw this.
If you think you have helpful direction on my train of thought either way, I'd love to chat and brainstorm. I am working independently on GitHub because I feel that, industry-wide across LLMs, no one can clearly answer the governance question, and it's in all of our interests to develop mature capabilities. So that's my skin in the game. I'm not attempting to build my own software solution; I'm just a nerd with an idea, and probably only half the technical skills needed to get there, or to know if I'm crazy.