n8n-nodes-nemotron
This community node integrates NVIDIA's powerful Nemotron Ultra 253B model with n8n, the workflow automation platform. Access state-of-the-art AI capabilities directly in your workflows and agents.
Overview
This package adds the NVIDIA Nemotron Ultra 253B model to n8n's AI toolkit, allowing you to:
- Use one of the most powerful open-source AI models in your n8n workflows
- Connect Nemotron Ultra to n8n's AI Agents and AI Chains
- Tune generation parameters (temperature, top-p, maximum tokens, penalties) to shape responses
- Process natural language within your automation workflows
Installation
Follow these steps to install this custom node:
In n8n v1.0+:
- Go to Settings > Community Nodes
- Click on Install a community node
- Enter n8n-nodes-nemotron in the "Install npm package" input field
- Click on Install
Manual installation:
# Navigate to your n8n installation folder
cd ~/.n8n
# Install the package
npm install n8n-nodes-nemotron
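After a manual install, n8n needs a restart to pick up the new node. A minimal sketch assuming n8n was installed globally via npm (for Docker or PM2 setups, restart the container or process instead):
# Restart n8n so it loads the newly installed community node
n8n start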
Configuration
To use the Nemotron Ultra model, you need to set up your NVIDIA API credentials; a quick curl check for the key is sketched after these steps:
- Go to your n8n instance
- Open the Credentials page
- Click on Create New Credentials
- Search for "NVIDIA API" and select it
- Enter your NVIDIA API Key
- Set the Base URL to https://integrate.api.nvidia.com/v1
- Click on Save
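Once saved, the node sends requests to NVIDIA's OpenAI-compatible chat-completions endpoint at that base URL. A minimal curl sketch to confirm the key works outside n8n (the NVIDIA_API_KEY variable and the model identifier are assumptions; check the exact model name in NVIDIA's API catalog):
# Quick credential check against the OpenAI-compatible endpoint
# export NVIDIA_API_KEY=<your key> before running
curl https://integrate.api.nvidia.com/v1/chat/completions \
  -H "Authorization: Bearer $NVIDIA_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
        "model": "nvidia/llama-3.1-nemotron-ultra-253b-v1",
        "messages": [{"role": "user", "content": "Say hello in one sentence."}],
        "max_tokens": 64
      }'
A 200 response containing a short completion means the key and base URL are configured correctly.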
Usage
The NVIDIA Nemotron node can be used in two primary ways:
1. With AI Agent
- Add the Bro AI Agent node to your workflow
- Add the NVIDIA Nemotron node to your workflow
- Connect the Nemotron node to the Agent node
- Configure the Agent as needed
- Run your workflow
2. With AI Chain
- Add the AI Chain node to your workflow
- Add the NVIDIA Nemotron node to your workflow
- Connect the Nemotron node to the Chain node
- Configure your prompt and other settings
- Run your workflow
Available Options
The Nemotron Ultra model can be configured with these parameters (the sketch after this list shows how they map onto the underlying request):
- Temperature - Controls response randomness (default: 0.6)
- Top P - Controls token diversity via nucleus sampling (default: 0.95)
- Maximum Tokens - Limits the length of generated responses (default: 4096)
- Frequency Penalty - Reduces repetition in responses (default: 0)
- Presence Penalty - Encourages the model to talk about new topics (default: 0)
- System Message - Sets the behavior/role of the assistant
- Response Format - Select between regular text or structured JSON output
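These options map onto fields of the OpenAI-compatible chat-completions request. A minimal curl sketch using the defaults listed above (the model identifier and the NVIDIA_API_KEY variable are assumptions; substitute the exact model name from NVIDIA's API catalog):
# Request fields mirror the node options above, with their default values
curl https://integrate.api.nvidia.com/v1/chat/completions \
  -H "Authorization: Bearer $NVIDIA_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
        "model": "nvidia/llama-3.1-nemotron-ultra-253b-v1",
        "messages": [
          {"role": "system", "content": "You are a helpful assistant."},
          {"role": "user", "content": "Summarize n8n in two sentences."}
        ],
        "temperature": 0.6,
        "top_p": 0.95,
        "max_tokens": 4096,
        "frequency_penalty": 0,
        "presence_penalty": 0
      }'
For structured output, OpenAI-compatible endpoints typically accept a response_format field such as {"type": "json_object"}; whether the Nemotron endpoint honours it should be verified against NVIDIA's documentation.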
Example Workflow
Here's a simple example workflow using the Nemotron node with an AI Agent:
[Manual Trigger] → [Bro AI Agent] ← [NVIDIA Nemotron]
                          ↓
                        [Set] → [Respond to Webhook]
Compatibility
- Requires n8n version 1.0.0 or later
- Works with Bro AI Agent and standard n8n AI Chain nodes
- Compatible with browsers supporting modern JavaScript features
Contributing
Contributions are welcome! If you encounter any issues or have suggestions for improvements:
- Check the GitHub repository
- Submit an issue describing your problem or suggestion
- Create a pull request if you have a solution to propose
License
This project is licensed under the MIT License - see the LICENSE file for details.
Credits
- Developed by Bro Lendario
- Email: contato.nfelipe@gmail.com
- Powered by NVIDIA's Nemotron Ultra 253B LLM technology
- Built for n8n workflow automation platform
FAQ
Q: Do I need an NVIDIA account to use this node?
A: Yes, you need to register for NVIDIA's API services and get an API key.
Q: Is the NVIDIA Nemotron Ultra model free to use?
A: You'll need to check NVIDIA's current pricing and usage terms.
Q: Can I use this with multiple LLMs in the same workflow?
A: Yes, you can use multiple LLM nodes in an n8n workflow and choose which one to connect to your agents or chains.
Q: Does this support streaming responses?
A: Yes, streaming is enabled by default for a better user experience.
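Under the hood this corresponds to the stream flag on the OpenAI-compatible chat-completions request; a minimal curl sketch (model identifier assumed, as above):
# Tokens arrive incrementally as "data: ..." server-sent events; -N disables output buffering
curl -N https://integrate.api.nvidia.com/v1/chat/completions \
  -H "Authorization: Bearer $NVIDIA_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
        "model": "nvidia/llama-3.1-nemotron-ultra-253b-v1",
        "messages": [{"role": "user", "content": "Stream a short haiku."}],
        "stream": true
      }'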
If you find this node useful, please consider starring the repository and sharing it with others who might benefit from using NVIDIA's Nemotron Ultra model with n8n.