How to change model behavior! Context engineering, fine-tuning and more
John Savill breaks down practical ways to change an AI model's behavior, from prompt and context techniques to retrieval-augmented generation (RAG) and fine-tuning approaches such as LoRA.
Overview
The video explains the main levers available to influence how a model responds, starting with foundational concepts (parameters, layers, embeddings, and the training phase) and then moving into behavior-shaping techniques.
Model fundamentals (“Model 101”)
- What model parameters represent.
- How hidden layers and dimensions relate to model capability.
- What embeddings are used for.
- The difference between the training phase and the prompt/response (inference) phase.
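To make the embeddings point concrete, here is a toy sketch (not a real model): an embedding maps each token to a vector, and cosine similarity measures how close two vectors point. The words and 4-dimensional vectors below are hand-picked for illustration only.

```python
import math

# Hypothetical hand-made embedding table; a real model learns these
# vectors during the training phase, typically with hundreds or
# thousands of dimensions rather than 4.
embeddings = {
    "cat": [0.9, 0.1, 0.0, 0.2],
    "dog": [0.8, 0.2, 0.1, 0.2],
    "car": [0.1, 0.9, 0.7, 0.0],
}

def cosine(a, b):
    """Cosine similarity: 1.0 means the vectors point the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# With these made-up vectors, "cat" lands closer to "dog" than to "car".
print(cosine(embeddings["cat"], embeddings["dog"]))
print(cosine(embeddings["cat"], embeddings["car"]))
```

The same idea, with real learned vectors, is what powers the retrieval step in RAG later in the video.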
Prompt-based behavior changes
- How changing the prompt changes outputs.
- Zero-shot, one-shot, and few-shot prompting as ways to steer behavior.
- Using a system prompt to set higher-level instructions and constraints.
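The prompting variants above differ only in how many worked examples precede the user's query. A minimal sketch in the common chat-message format (role/content dicts); the classification task, labels, and tickets are invented for illustration:

```python
# Hypothetical system prompt setting higher-level instructions.
system = {"role": "system",
          "content": "You are a support agent. Classify tickets as BUG or FEATURE."}

def few_shot_prompt(examples, query):
    """Build a message list: system prompt, worked examples, then the query."""
    messages = [system]
    for ticket, label in examples:
        messages.append({"role": "user", "content": ticket})
        messages.append({"role": "assistant", "content": label})
    messages.append({"role": "user", "content": query})
    return messages

# Zero-shot: no examples, just instructions plus the query.
zero_shot = few_shot_prompt([], "The app crashes on login.")
# One-shot: a single worked example steers the output format.
one_shot = few_shot_prompt([("Add dark mode please.", "FEATURE")],
                           "The app crashes on login.")
```

Few-shot prompting is the same pattern with more example pairs; no weights change, only the input the model conditions on.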
Retrieval-augmented generation (RAG)
- How RAG can change model behavior by supplying relevant external context at query time.
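A minimal RAG sketch, assuming a toy document store: retrieve the most relevant snippet (here scored by naive word overlap rather than real embeddings) and inject it into the prompt before the model ever sees the question.

```python
# Hypothetical knowledge-base snippets; in practice these would be
# chunks retrieved from a vector store via embedding similarity.
docs = [
    "Resets: passwords can be reset from the account settings page.",
    "Billing: invoices are emailed on the first day of each month.",
]

def retrieve(query, documents):
    """Return the document sharing the most lowercase words with the query."""
    q = set(query.lower().split())
    return max(documents, key=lambda d: len(q & set(d.lower().split())))

def build_prompt(query, documents):
    """Prepend the retrieved context so the model answers from it."""
    context = retrieve(query, documents)
    return f"Context:\n{context}\n\nQuestion: {query}"

prompt = build_prompt("How do I reset my password?", docs)
```

The model's behavior changes at query time because its context now contains facts it was never trained on, without touching any weights.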
Context engineering
- How to shape and manage the context provided to the model to influence responses.
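One common context-engineering tactic is keeping the prompt within a budget: always retain the system message, then fill the remaining room with the most recent turns. A sketch under made-up assumptions (a 50-word budget, word counts standing in for tokens):

```python
BUDGET = 50  # hypothetical limit, counted in words for simplicity

def size(msg):
    return len(msg["content"].split())

def trim_context(system, turns, budget=BUDGET):
    """Keep the system message; add turns newest-first until budget is spent."""
    kept, used = [], size(system)
    for turn in reversed(turns):
        if used + size(turn) > budget:
            break
        kept.append(turn)
        used += size(turn)
    return [system] + list(reversed(kept))  # restore chronological order

system = {"role": "system", "content": "Answer briefly."}
turns = [{"role": "user", "content": "word " * 30},  # oldest turn, dropped
         {"role": "user", "content": "word " * 10},
         {"role": "user", "content": "word " * 10}]
trimmed = trim_context(system, turns)
```

Real systems refine this with token counters, summarization of dropped turns, and pinning of key facts, but the principle is the same: what you choose to keep in context shapes the response.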
Fine-tuning and LoRA
- Fine-tuning as a way to change behavior by updating model weights.
- LoRA (Low-Rank Adaptation) as an approach for adapting models.
- How these techniques can be combined with prompting, RAG, and context engineering.
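The core of LoRA can be shown in a few lines. Instead of updating a full weight matrix W (d_out x d_in), you train two small matrices B (d_out x r) and A (r x d_in) with rank r much smaller than the dimensions, and apply W + B @ A at inference. The 2x2 numbers below are made up; the parameter savings only become real at the large dimensions of actual model layers.

```python
def matmul(X, Y):
    """Plain nested-list matrix multiply (no external dependencies)."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

W = [[1.0, 0.0], [0.0, 1.0]]  # frozen pretrained weights (2x2)
B = [[0.5], [0.0]]            # learned low-rank factor, r = 1
A = [[0.0, 1.0]]              # learned low-rank factor

delta = matmul(B, A)          # low-rank update B @ A, same shape as W
W_adapted = [[w + d for w, d in zip(wr, dr)] for wr, dr in zip(W, delta)]
```

Because only B and A are trained while W stays frozen, LoRA adapts behavior cheaply, and the result still composes with prompting, RAG, and context engineering at inference time.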