Zero-Cost n8n + MCP with Docker | Smarter Agents in Minutes
Alireza Chegini (of the AI Skills for Your Career channel) demonstrates how to run n8n locally with Docker and use MCP (Model Context Protocol) to build a tool-using AI agent workflow, including web search and memory, without writing code.
Full summary based on the transcript
What the video builds (n8n + MCP + Docker)
The presenter walks through creating a local, “zero-cost” setup where:
- n8n runs locally via Docker Desktop
- n8n uses MCP community nodes to act as an MCP client
- An AI agent connects to external tools through MCP
- The workflow is tested end-to-end locally
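At a high level, the pieces described above connect like this (a simplified sketch of the workflow, not a screenshot from the video):

```
Chat trigger --> AI Agent
                  |-- Chat model: OpenAI
                  |-- Tool: MCP client --> Brave Search (MCP server)
                  `-- Memory: conversation context
```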
MCP (Model Context Protocol): what it is and why it matters
The video explains MCP at a practical level:
- MCP is a standard protocol for connecting an AI agent to external tools and services
- The goal is to make “tool use” easier to wire up in workflows (in this case, inside n8n)
Running n8n locally with Docker Desktop
The presenter demonstrates setting up n8n locally using Docker Desktop so it’s easy to:
- Spin up a local n8n instance
- Iterate on workflows quickly
- Test agent workflows without relying on hosted n8n
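A minimal local setup following n8n's documented Docker commands looks like this (the volume and container names are just examples; the video's exact invocation may differ):

```shell
# Create a named volume so workflows and credentials persist across restarts
docker volume create n8n_data

# Start n8n; the editor UI becomes available at http://localhost:5678
docker run -it --rm \
  --name n8n \
  -p 5678:5678 \
  -v n8n_data:/home/node/.n8n \
  docker.n8n.io/n8nio/n8n
```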
Installing and using MCP community nodes in n8n
The video covers:
- Finding/installing MCP community nodes for n8n
- A quick overview of what the MCP node provides in the workflow builder
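Community nodes are typically installed from the n8n UI (Settings → Community Nodes) by package name; `n8n-nodes-mcp` is a commonly used MCP client package, though the exact package used in the video is an assumption. For a community node to be offered to an AI agent as a tool, n8n also requires an explicit opt-in environment variable, which can be passed to the container:

```shell
# Allow community-node tools (such as an MCP client) to be used by AI agents.
# Without this flag, n8n does not list community nodes as agent tools.
docker run -it --rm \
  --name n8n \
  -p 5678:5678 \
  -e N8N_COMMUNITY_PACKAGES_ALLOW_TOOL_USAGE=true \
  -v n8n_data:/home/node/.n8n \
  docker.n8n.io/n8nio/n8n
```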
Building an MCP client workflow from scratch
The presenter builds an n8n workflow step-by-step, including:
- Creating the workflow structure
- Adding the MCP node into the workflow
- Wiring the agent so it can call external tools through MCP
Connecting an OpenAI model
The workflow is configured to use an OpenAI model as the LLM behind the agent.
Integrating Brave Search via MCP
The presenter demonstrates connecting the Brave Search API through MCP so the agent can:
- Perform web search
- Use search results as context for responses
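One common way to wire this up (an assumption; the video's exact settings may differ) is to point the MCP client node's credentials at the reference Brave Search MCP server, which runs over stdio via npx:

```shell
# Values for the MCP client credential (STDIO transport):
#   Command:              npx
#   Arguments:            -y @modelcontextprotocol/server-brave-search
#   Environment variable: BRAVE_API_KEY=<your Brave Search API key>

# Equivalent standalone invocation, useful as a quick sanity check that
# the server starts and the API key is accepted:
BRAVE_API_KEY="<your key>" npx -y @modelcontextprotocol/server-brave-search
```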
How system prompts and context flow through the workflow
The video explains how:
- The system prompt is used to steer the agent
- Context is passed through the workflow steps so the agent can use tool outputs effectively
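A hypothetical system prompt in this spirit (illustrative only, not taken from the video):

```
You are a research assistant. When a question needs current information,
call the Brave Search tool, then answer using the returned results and
cite the sources you used. If no tool is needed, answer directly.
```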
Adding memory to improve agent quality
The presenter adds memory to the agent so it can:
- Retain relevant context across steps/interactions
- Produce higher-quality results compared to a stateless setup
Testing the full pipeline
The video ends by testing the complete workflow:
- n8n (local) + Docker
- OpenAI model
- MCP tool connection
- Brave Search
- Memory
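If the workflow is exposed through a webhook-style trigger, it can also be exercised from the command line; the path and payload field below are hypothetical and depend on how the trigger is configured:

```shell
# Send a test message to the locally running workflow
curl -X POST http://localhost:5678/webhook-test/agent \
  -H "Content-Type: application/json" \
  -d '{"chatInput": "Find the latest news about the Model Context Protocol"}'
```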
Wrap-up and next steps
The presenter summarizes what was built and suggests continuing to expand MCP-based workflows for “smarter” agent behavior.