Introduction

The fine-tuning feature provides a no-code/low-code UI for customizing base models and supports both LoRA and full-parameter fine-tuning. Using extensible templates, users configure the model, data sources, compute resources, and hyperparameters, then submit training tasks and monitor them through logs, metrics, and training curves. Training outputs can be registered in a model repository for deployment, and custom templates extend support to most model types and training workflows.
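
To give a concrete sense of what such a template captures, the sketch below models a hypothetical LoRA fine-tuning task that bundles the model, data source, resources, and hyperparameters described above. All class names, fields, and the serialization step are illustrative assumptions, not the platform's actual API.

```python
# Hypothetical sketch of a fine-tuning task template; names and fields are
# illustrative assumptions, not the platform's actual API.
from dataclasses import dataclass, field, asdict
import json


@dataclass
class LoraHyperparameters:
    """Typical knobs exposed by a LoRA fine-tuning template."""
    rank: int = 8                # LoRA adapter rank
    alpha: int = 16              # LoRA scaling factor
    learning_rate: float = 2e-4
    epochs: int = 3
    batch_size: int = 8


@dataclass
class FineTuningTask:
    """Bundles model, data, resources, and hyperparameters into one task."""
    base_model: str                       # base model to customize
    dataset_uri: str                      # training data source
    method: str = "lora"                  # "lora" or "full"
    gpus: int = 1                         # compute resources
    hyperparameters: LoraHyperparameters = field(default_factory=LoraHyperparameters)

    def to_template(self) -> str:
        """Serialize the task to a JSON template for submission."""
        return json.dumps(asdict(self), indent=2)


if __name__ == "__main__":
    task = FineTuningTask(
        base_model="llama-3-8b",
        dataset_uri="s3://example-bucket/train.jsonl",
    )
    # In the real workflow this template would be submitted through the UI
    # and the resulting run monitored via logs, metrics, and training curves.
    print(task.to_template())
```

A full-parameter run would use the same structure with `method="full"` and a correspondingly larger resource request; the point is that one template carries everything needed to reproduce and monitor a training task.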