# Nemotron 3 Nano 4B — Power Apps Formula Expert
A fine-tuned version of NVIDIA Nemotron 3 Nano 4B specialized in Microsoft Power Apps formula generation, error diagnosis, and SharePoint integration patterns.
## Model Details
| Attribute | Details |
|---|---|
| Developed by | DevNexAI |
| Base model | NVIDIA Nemotron 3 Nano 4B (Hybrid Mamba-Transformer MoE) |
| Fine-tuning method | QLoRA (4-bit) via Unsloth |
| LoRA rank | 16 |
| LoRA alpha | 32 |
| Training hardware | NVIDIA A100-SXM4-40GB (Google Colab Pro) |
| Precision | BF16 |
| License | Apache 2.0 |
| Languages | English, Spanish |
## What This Model Does
This model is trained to assist Power Apps developers with:
- Formula generation — Describe what you need in natural language, get the correct Power Apps formula
- Error diagnosis — Paste your broken formula, get the root cause and fix
- SharePoint patterns — Delegation-aware queries, Patch operations, Choice/Lookup column handling
- Regional syntax — Understands separator differences between English (`,`) and Spanish (`;`) environments
- Best practices — Named Formulas, Concurrent loading, batch Patch, role-based visibility
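The regional-syntax point can be illustrated with a small sketch. The helper below is a hypothetical utility (not part of the model): it rewrites a formula written with English separators into the Spanish convention, where the argument separator becomes `;` and the statement-chaining operator becomes `;;`, leaving string literals untouched. Decimal-separator conversion is ignored for brevity.

```python
def to_spanish_syntax(formula: str) -> str:
    """Convert an English-locale Power Fx formula to Spanish-locale syntax.

    English locales use ',' as the argument separator and ';' for chaining;
    Spanish (comma-decimal) locales use ';' for arguments and ';;' for
    chaining. String literals are left untouched; decimal-separator
    conversion is out of scope for this sketch.
    """
    out = []
    in_string = False
    i = 0
    while i < len(formula):
        ch = formula[i]
        if ch == '"':
            # Power Fx escapes quotes inside strings by doubling them ("")
            if in_string and formula[i + 1 : i + 2] == '"':
                out.append('""')
                i += 2
                continue
            in_string = not in_string
            out.append(ch)
        elif not in_string and ch == ';':
            out.append(';;')  # chaining operator
        elif not in_string and ch == ',':
            out.append(';')   # argument separator
        else:
            out.append(ch)
        i += 1
    return ''.join(out)


print(to_spanish_syntax('If(x > 1, "Sí", "No")'))  # → If(x > 1; "Sí"; "No")
```

This is exactly the transformation behind the "copied a formula from the internet and it doesn't work" class of errors the model is trained to diagnose.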
## Training Data
The model was fine-tuned on 202 curated examples across 5 categories:
| Category | Examples | % |
|---|---|---|
| Error/Debugging | 71 | 35.1% |
| Formula generation | 61 | 30.2% |
| Patterns/How-to | 56 | 27.7% |
| SharePoint specific | 10 | 5.0% |
| Explanation | 4 | 2.0% |
Dataset: Devnexai/Nemotron_PowerApps_FineTunning
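The exact schema is defined by the dataset on the Hub; as a hypothetical illustration only, a chat-style SFT record for the error-debugging category might look like this (field names and the sample answer are assumptions, not quoted from the dataset):

```python
import json

# Hypothetical illustration of one JSONL training record; the actual schema
# of Devnexai/Nemotron_PowerApps_FineTunning may differ.
record = {
    "category": "error_debugging",
    "messages": [
        {"role": "system",
         "content": "Eres un experto en Power Apps."},
        {"role": "user",
         "content": 'Error: Filter(Lista, Status = "Activo") me marca error de tipo'},
        {"role": "assistant",
         "content": 'Status is likely a Choice column; compare against its '
                    'value instead: Filter(Lista, Status.Value = "Activo").'},
    ],
}

line = json.dumps(record, ensure_ascii=False)  # one JSONL line per example
```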
## Usage
### With Unsloth (recommended)
```python
from unsloth import FastLanguageModel

# Load the fine-tuned model in 4-bit for inference
model, tokenizer = FastLanguageModel.from_pretrained(
    "Devnexai/nemotron3-powerapps-formulas",
    load_in_4bit=True,
    trust_remote_code=True,
)
FastLanguageModel.for_inference(model)

messages = [
    {"role": "system", "content": "Eres un experto en Power Apps. Genera fórmulas correctas, explica errores y ayuda a resolver problemas de sintaxis."},
    {"role": "user", "content": "Filtra la lista Empleados donde el departamento sea Ventas y el estatus sea Activo"},
]

inputs = tokenizer(
    tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True),
    return_tensors="pt",
).to("cuda")

outputs = model.generate(
    **inputs,
    max_new_tokens=512,
    do_sample=True,  # required for temperature/top_p to take effect
    temperature=0.7,
    top_p=0.9,
)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```
### With vLLM (API server)
```bash
vllm serve unsloth/NVIDIA-Nemotron-3-Nano-4B-BF16 \
  --enable-lora \
  --lora-modules powerapps=Devnexai/nemotron3-powerapps-formulas
```
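Once the server is up, the adapter is addressed by the name registered via `--lora-modules` through vLLM's OpenAI-compatible API. A minimal payload sketch (the `localhost:8000` endpoint is vLLM's default and an assumption here; the sketch only builds the request body, since sending it requires a running server):

```python
import json

# Request body for vLLM's OpenAI-compatible /v1/chat/completions endpoint.
# "powerapps" is the LoRA adapter name registered with --lora-modules above.
payload = {
    "model": "powerapps",
    "messages": [
        {"role": "system",
         "content": "Eres un experto en Power Apps."},
        {"role": "user",
         "content": "Filtra pedidos con monto mayor a 10000 y estatus Pendiente"},
    ],
    "max_tokens": 512,
    "temperature": 0.7,
}

body = json.dumps(payload)
# POST `body` to http://localhost:8000/v1/chat/completions
# with header Content-Type: application/json
```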
## Example Prompts
| Prompt | Category |
|---|---|
| Filtra pedidos con monto mayor a 10000 y estatus Pendiente | Formula generation |
| Error: Filter(Lista, Status = "Activo") me marca error de tipo | Error diagnosis |
| Cómo hago una búsqueda delegable en SharePoint por múltiples campos | SharePoint pattern |
| Mi Patch no guarda datos, no da error pero no se actualiza | Silent error debugging |
| Copié una fórmula de internet y no funciona: If(x > 1, "Sí", "No") | Regional syntax |
## Limitations
- Trained on only 202 examples; expanding the dataset toward 500+ examples is expected to improve quality
- Optimized for SharePoint as datasource; Dataverse patterns are limited
- GGUF export not available yet (Mamba-Transformer hybrid not supported by llama.cpp)
- Requires Linux or WSL2 for local deployment (mamba_ssm dependency)
## About DevNexAI
DevNexAI is an AI consulting and products venture focused on the Mexican and LATAM enterprise market, specializing in applied AI for enterprise platforms.
## Training Framework
This model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth).