Fine-Tuning


The Main Functions
Forward_Backward
Run a forward pass and backward gradient propagation in one call, automatically accumulating the resulting gradients.
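As an illustration of the forward/backward-with-accumulation semantics, here is a minimal plain-PyTorch sketch; the toy model, batch shapes, and accumulation count are assumptions made for the example, not the SDK's actual interface.

```python
import torch
import torch.nn as nn

# Toy model and data; stand-ins for the model and batches you would fine-tune with.
model = nn.Linear(16, 4)
loss_fn = nn.CrossEntropyLoss()

accumulation_steps = 4
for _ in range(accumulation_steps):
    inputs = torch.randn(8, 16)
    targets = torch.randint(0, 4, (8,))
    loss = loss_fn(model(inputs), targets)
    # Scale so the accumulated gradient matches one large-batch pass;
    # backward() adds this micro-batch's gradients into each parameter's .grad.
    (loss / accumulation_steps).backward()
```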
Optim_Step
Update the model weights with the accumulated gradients to advance training.
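The effect of the optimizer step can likewise be sketched in plain PyTorch; the AdamW optimizer and learning rate below are arbitrary choices for illustration, not the SDK's defaults.

```python
import torch
import torch.nn as nn

model = nn.Linear(16, 4)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

# Populate .grad with one (or several accumulated) backward passes.
loss = nn.functional.mse_loss(model(torch.randn(8, 16)), torch.randn(8, 4))
loss.backward()

# Apply the accumulated gradients to the weights, then clear them
# so the next accumulation round starts from zero.
optimizer.step()
optimizer.zero_grad()
```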
Save_State
Save the current training state so a run can be resumed seamlessly at a later stage.
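Conceptually, saving state means persisting the model weights, optimizer state, and step counter together, as in this plain-PyTorch sketch; the file name and step value are placeholders, not part of the SDK.

```python
import torch
import torch.nn as nn

model = nn.Linear(16, 4)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

# Saving the model and optimizer state together (plus the step counter)
# is what lets a later resume pick up exactly where training stopped.
torch.save(
    {
        "model": model.state_dict(),
        "optimizer": optimizer.state_dict(),
        "step": 1200,
    },
    "checkpoint.pt",
)
```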
Server Module
Provides the complete server module architecture, including test suites and support for checkpoint-based resume.
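Taken together, the three calls above form the loop that checkpoint-based resume relies on. The following plain-PyTorch sketch shows the general pattern of resuming from the latest checkpoint and saving periodically; it is illustrative only and does not reflect the server module's actual interface.

```python
import os
import torch
import torch.nn as nn

model = nn.Linear(16, 4)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
ckpt_path = "checkpoint.pt"

# On startup, resume from the last checkpoint if one exists.
start_step = 0
if os.path.exists(ckpt_path):
    ckpt = torch.load(ckpt_path)
    model.load_state_dict(ckpt["model"])
    optimizer.load_state_dict(ckpt["optimizer"])
    start_step = ckpt["step"]

for step in range(start_step, 100):
    loss = nn.functional.mse_loss(model(torch.randn(8, 16)), torch.randn(8, 4))
    loss.backward()            # forward + backward
    optimizer.step()           # apply gradients
    optimizer.zero_grad()
    if (step + 1) % 20 == 0:   # periodic checkpoint
        torch.save(
            {
                "model": model.state_dict(),
                "optimizer": optimizer.state_dict(),
                "step": step + 1,
            },
            ckpt_path,
        )
```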
Major Advantages
Underlying Acceleration: Powered by Colossal-AI, delivering substantial improvements in distributed training efficiency.
Flexible Fine-Tuning: Supports both LoRA (lightweight and cost-effective) and full fine-tuning (optimized for complex scenarios).
Core-Focused Workflow: Eliminates the complexity of distributed training, allowing users to concentrate on data and algorithm design.
Usable & Controllable: Features a streamlined API and full control over training loops, with support for custom loss functions (see the sketch after this list).
Reliable Training Experience: Transparently handles hardware failures, supports checkpoint resumption, and enables model weight export.
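As a rough illustration of the custom-loss point above, any callable that returns a scalar tensor can drive the backward pass in a PyTorch-style loop; the weighted token loss below is a made-up example, not part of the SDK.

```python
import torch
import torch.nn as nn

def weighted_token_loss(logits, labels, weights):
    # Per-token cross entropy scaled by caller-supplied weights, then averaged.
    per_token = nn.functional.cross_entropy(
        logits.view(-1, logits.size(-1)), labels.view(-1), reduction="none"
    )
    return (per_token * weights.view(-1)).mean()

logits = torch.randn(2, 5, 10, requires_grad=True)   # (batch, seq_len, vocab)
labels = torch.randint(0, 10, (2, 5))
weights = torch.ones(2, 5)
weighted_token_loss(logits, labels, weights).backward()
```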
Supported Models
Qwen3-0.6B
Qwen3-1.7B
Qwen3-4B
Qwen3-8B
Qwen3-14B
Qwen3-32B
Frequently Asked Questions
Is there a free trial?
Yes, a free trial quota is now available when you access the Fine-Tuning SDK, allowing you to test the core features.