LLM-enhanced nodes to assist your Houdini Python or VEX workflows.
Vex.Small.Model.Example.mp4
Python.Large.Model.Example.mp4
- Create Code
  - Enter a prompt and press 'Send Prompt' to generate the required code
- Modify Code
  - Enter your request in the 'Modify' tab and press 'Send Modification' to change an existing snippet
- Fix Errors
  - Roll the dice and ask the LLM to fix the error on the node
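All three operations come down to sending a text prompt to the model. The sketch below shows one plausible way a node could frame them; the wording and function names are illustrative assumptions, not the actual prompts the HDAs send.

```python
def build_prompt(mode, user_text, existing_code="", error=""):
    """Frame the three node operations as a single LLM prompt.

    Illustrative only -- the real prompts inside the HDAs may differ.
    """
    if mode == "create":
        return f"Write a Houdini snippet that does the following:\n{user_text}"
    if mode == "modify":
        return (f"Modify this snippet as requested.\n"
                f"Request: {user_text}\n\nCode:\n{existing_code}")
    if mode == "fix":
        return (f"Fix the error in this snippet.\n"
                f"Error: {error}\n\nCode:\n{existing_code}")
    raise ValueError(f"unknown mode: {mode}")

print(build_prompt("create", "orient points along a curve"))
```

Whatever the exact wording, the key point is that 'Modify' and 'Fix Errors' include the node's existing code in the prompt, so the model edits rather than starts from scratch.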
From the releases page, download and unzip Houdini.AI.Code.Nodes.zip, then place the HDAs in your otls folder.
The first time you open Houdini after adding the nodes, you will see some output in the console as the openai Python package dependency is installed. When this finishes, restart Houdini.
This is the standard API for communicating with LLMs. The openai package should install automatically via pip the first time the nodes are added; you will see some output in the console. When this is done, restart Houdini.
If you need to add the Python package manually, use the version of Python that matches your Houdini installation and run:
pip install openai
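To confirm that a given interpreter can actually see the package (the pip on your PATH may belong to a different Python than Houdini's), a quick check:

```python
import importlib.util

def has_openai() -> bool:
    """Return True if the openai package is importable by this interpreter."""
    return importlib.util.find_spec("openai") is not None

# Run this with Houdini's own Python (e.g. hython) to test the right interpreter.
print("openai installed:", has_openai())
```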
The Settings dropdown contains the API Key, URL, and Model parameters. Update these to match your preferred service.
After updating the parameters, right-click each property and choose 'Make Current Value Default' so you don't have to set them every time you create a new node.
The default parameters are set up to work with LM Studio.
These nodes should work with any LLM service, local or remote, that uses the OpenAI API.
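As a sketch of what "uses the OpenAI API" means, any compatible service accepts a request like the one below at its chat-completions endpoint. The nodes themselves go through the openai package; this standard-library version just shows how the three settings (API Key, URL, Model) map onto the wire format. The function names are illustrative, not part of the nodes.

```python
import json
import urllib.request

def build_payload(model, prompt):
    """Build the JSON body every OpenAI-compatible chat endpoint accepts."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def chat_request(api_key, base_url, model, prompt):
    """POST a chat-completion request and return the generated text."""
    req = urllib.request.Request(
        base_url.rstrip("/") + "/chat/completions",
        data=json.dumps(build_payload(model, prompt)).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# Example call (requires a running server, e.g. LM Studio on its default port):
# chat_request("lm-studio", "http://localhost:1234/v1",
#              "model-identifier", "Write a VEX snippet")
```

Because every service speaks this same format, switching providers only means changing the three settings, never the node logic.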
LM Studio has a great interface for downloading and using models. To use it with the nodes, go to the Developer tab, load a model, and start the server.
Settings Example:
API Key: lm-studio
URL: http://localhost:1234/v1
Model: model-identifier
Settings Example:
API Key: ollama
URL: http://localhost:11434/v1/
Model: llama2
- gemma 3 27b
Here is a good list of providers.
OpenRouter is a great service that offers free models you can use. The free Gemini models are quite good.
Settings Example:
API Key: generated-api-key
URL: https://openrouter.ai/api/v1/
Model: google/gemini-2.0-flash-lite-preview-02-05:free
- gemini 2.0 flash
- Larger models produce much better results; smaller models often return code full of errors
- Models work better with Python than VEX
- These nodes can be very useful for getting started on a snippet

