# Model Node
Connect to any major LLM provider with a unified interface.
## Overview
The Model Node allows you to send prompts to AI language models and receive responses. It supports multiple providers including OpenAI, Anthropic, Google, Meta, and Mistral.
## Configuration
| Field | Description | Required |
|---|---|---|
| Model | Select the LLM model to use (GPT-4o, Claude 3.5, Gemini, etc.) | Yes |
| System Instructions | The system prompt that defines the AI's behavior and context | No |
| User Prompt | The user input variable (default: `{{prompt}}`) | Yes |
| Output Variable | Variable name to store the response for use in other nodes | No |
| Images | For vision models, select an image variable from previous nodes | No |
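To make the field mapping concrete, here is a minimal sketch of how these settings might translate into a chat-completion style request. The function name and payload shape are assumptions for illustration, not the node's actual internals:

```python
def build_request(model, system_instructions, user_prompt,
                  temperature=0.7, max_tokens=2048):
    """Assemble a chat-completion style payload from the node's fields.

    Hypothetical sketch: the real node may use a different wire format.
    """
    messages = []
    if system_instructions:  # System Instructions is optional
        messages.append({"role": "system", "content": system_instructions})
    messages.append({"role": "user", "content": user_prompt})
    return {
        "model": model,
        "messages": messages,
        "temperature": temperature,
        "max_tokens": max_tokens,
    }

req = build_request("gpt-4o", "You are a helpful assistant.",
                    "Summarize the page in one sentence.")
```

Note that when System Instructions is left empty, no system message is sent at all, matching the field's optional status.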
## Advanced Options
| Field | Description | Default |
|---|---|---|
| Temperature | Controls randomness; higher values are more creative, lower values more focused (0–2) | 0.7 |
| Max Tokens | Maximum response length in tokens (100–8192) | 2048 |
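The ranges in the table can be checked before a request is sent. A minimal validation sketch, assuming out-of-range values should be rejected rather than silently clamped (the node's actual behavior may differ):

```python
def validate_options(temperature=0.7, max_tokens=2048):
    """Check Advanced Options against the documented ranges."""
    if not 0 <= temperature <= 2:
        raise ValueError("temperature must be in [0, 2]")
    if not 100 <= max_tokens <= 8192:
        raise ValueError("max_tokens must be in [100, 8192]")
    return {"temperature": temperature, "max_tokens": max_tokens}

opts = validate_options(temperature=0.2, max_tokens=1024)
```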
## Using Variables
You can use variables from previous nodes in your prompts with the `{{variable_name}}` syntax:

    Analyze the following data:
    {{scraped_content}}
    Provide a summary in 3 bullet points.
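Under the hood, this kind of substitution amounts to a simple template render. A sketch of the idea, assuming unknown placeholders are left untouched (the real node may raise an error instead):

```python
import re

def render_prompt(template, variables):
    """Substitute {{variable_name}} placeholders with values from earlier nodes.

    Leaving unknown placeholders as-is is an assumption for this sketch.
    """
    return re.sub(
        r"\{\{(\w+)\}\}",
        lambda m: str(variables.get(m.group(1), m.group(0))),
        template,
    )

prompt = render_prompt(
    "Analyze the following data:\n{{scraped_content}}\n"
    "Provide a summary in 3 bullet points.",
    {"scraped_content": "Q1 revenue up 12%"},
)
```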
## Output
The LLM response is stored in the output variable you specify. Access it in subsequent nodes using `{{llm_response}}` (or whatever name you set in the Output Variable field).
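Chaining nodes this way can be pictured as a shared context dictionary that each node writes into and later nodes read from. A hypothetical sketch (the variable name `llm_response` and the dictionary model are assumptions):

```python
# Shared context: each node stores its output variable here.
context = {}

# The Model Node writes its response under the configured Output Variable.
context["llm_response"] = "Revenue grew steadily across all three quarters."

# A subsequent node references it via the same {{variable_name}} syntax.
next_prompt = "Translate into French:\n{{llm_response}}".replace(
    "{{llm_response}}", context["llm_response"]
)
```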
## Supported Models
- OpenAI: GPT-4o, GPT-4, GPT-3.5 Turbo
- Anthropic: Claude 3.5 Sonnet, Claude 3 Opus, Claude 3 Haiku
- Google: Gemini Pro, Gemini Ultra
- Meta: Llama 3
- Mistral: Mistral Large, Mistral Medium