6 Open-Source Libraries to Fine-Tune LLMs

1. Unsloth
GitHub: https://github.com/unslothai/unsloth
• Fast fine-tuning of LLMs locally
• Optimized for low VRAM (even laptops)
• Plug-and-play with Hugging Face models
3. TRL (Transformer Reinforcement Learning)
GitHub: https://github.com/huggingface/trl
• RLHF, DPO, PPO for LLM alignment
• Built on the Hugging Face ecosystem
• Essential for post-training optimization
4. DeepSpeed
GitHub: https://github.com/microsoft/DeepSpeed
• Train massive models efficiently
• Memory + speed optimization
• Industry standard for scaling
6. PEFT
GitHub: https://github.com/huggingface/peft
• Fine-tune with minimal compute
• LoRA, adapters, prefix tuning
• Best for cost-efficient training
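To see why LoRA-style PEFT is so cost-efficient, here is a back-of-the-envelope parameter count for adapting a single linear layer. This is a minimal pure-Python sketch; the 4096×4096 layer size and rank 8 are illustrative values, not taken from any specific model.

```python
# Trainable parameters: full fine-tuning vs. a LoRA adapter
# on one d_in x d_out linear layer.

def full_params(d_in: int, d_out: int) -> int:
    # Full fine-tuning updates the entire weight matrix.
    return d_in * d_out

def lora_params(d_in: int, d_out: int, r: int) -> int:
    # LoRA freezes the base weight and trains two low-rank
    # factors: A (d_in x r) and B (r x d_out).
    return d_in * r + r * d_out

# Illustrative 4096x4096 projection with rank-8 adapters.
full = full_params(4096, 4096)     # 16,777,216 trainable params
lora = lora_params(4096, 4096, 8)  #     65,536 trainable params
print(f"full: {full:,}  lora: {lora:,}  ratio: {full // lora}x")
```

For this layer, the LoRA adapter trains 256× fewer parameters than full fine-tuning, which is why rank and target modules are the main knobs when trading quality against compute.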
Since everyone liked my previous announcement post (https://huggingface.co/posts/Tonic/338509028435394) so much, I'm back with more high-quality procedural datasets in the geospatial domain for SFT training!
This is a great set of AI and ML books plus a full guide to learning machine learning from the ground up. It's the study material I used myself, so I thought it would be helpful to share it with others. Like, share, and add it to your collection at Ujjwal-Tyagi/ai-ml-foundations-book-collection.
We are hiring at Shirova AI. We need AI researchers and engineers to work in our research lab. Shirova AI is a research lab based in India, so we can help our researchers relocate to nearby workspaces, or they can work fully from home without ever coming to the lab. We're building our founding team, so the pay will be good and there is a lot to learn. Don't hesitate to mail us at: [email protected]
I am sharing my study material for AI & ML. These books are a real "bible" and give you a very strong foundation. I've also included guidance, an introduction, and my master notes in the dataset repo card! I hope you find them helpful; if you have any queries, just start a discussion and I'm always there to help you out! Ujjwal-Tyagi/ai-ml-foundations-book-collection