This is a private QLoRA adapter for google/gemma-4-31B-it, fine-tuned on a cleaned 2,025-row subset of the Crownelius/Opus-4.6-Reasoning-2100x-formatted dataset, focused on math (1,899 rows) and code (126 rows).
Training used 4-bit NF4 quantization, BF16 compute precision, a 4096-token max sequence length, 2 epochs, and NVIDIA GH200 hardware, with LoRA applied to selected linear projection modules. Final validation shows an eval loss of 3.6018, i.e. a perplexity of exp(3.6018) ≈ 36.66. Load the adapter via PEFT on top of the base model, as sketched below.
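A minimal loading sketch, assuming the standard `transformers`/`peft`/`bitsandbytes` stack. The adapter repo ID is a hypothetical placeholder (the adapter is private), and the quantization settings below simply mirror the 4-bit NF4 / BF16 configuration used for training.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

BASE_MODEL = "google/gemma-4-31B-it"
ADAPTER_REPO = "your-username/your-qlora-adapter"  # hypothetical placeholder; the real repo is private

# 4-bit NF4 quantization with BF16 compute, mirroring the training setup
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
base = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL,
    quantization_config=bnb_config,
    device_map="auto",
)

# Attach the QLoRA adapter weights on top of the quantized base model
model = PeftModel.from_pretrained(base, ADAPTER_REPO)
model.eval()
```

`device_map="auto"` lets Accelerate place the quantized weights across whatever devices are available; at 4-bit precision the base model's memory footprint should be a fraction of its BF16 size, which is what makes single-GPU loading practical.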