We read this, which is an excellent starting point for CUDA GPUs:
Is it possible for you to provide another version that supports MLX (Apple Silicon) and LLMs?
Thank you.
Hi @chrischaochn,
Great to hear our example has been helpful! You can check out our quickstart-mlx example to get started. While creating an MLX LLM example is on our roadmap, it's likely that it will be implemented in Swift for iOS.
In the meantime, feel free to try it out yourself. If you encounter any issues or have questions along the way, don’t hesitate to reach out.
Hello @chrischaochn, you can find an adaptation of FlowerTune LLM that uses MLX-LM here: GitHub - ethicalabs-ai/BlossomTuneLLM-MLX: Federated Fine-Tuning of LLMs on Apple Silicon with Flower.ai and MLX-LM