Used to send a message to OpenAI's Responses API and receive the generated answer back into the flow. The node also supports function calling, conversation state, and image/file attachments.

The text sent to ChatGPT is taken from the upstream Message payload (msg.payload.content); the response is forwarded out of the first pin and can be sent back to the user as a regular Message.

The prompt design (model, instructions, tools, variables, …) is not configured field by field in the editor: instead, the JSON exported from the Prompts Playground is pasted into the Prompt text area. Any change made in the Playground (a new tool, a different model, a longer instruction) is then propagated by simply copying and pasting the updated JSON.
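For reference, an exported Playground JSON looks roughly like the sketch below. The field names follow the OpenAI Responses API (function tools carry name, description and parameters at the top level); the exact export shape may vary with the Playground version, and the get_temperature tool here mirrors the example flow:

```json
{
  "model": "gpt-4o",
  "instructions": "You are a helpful weather assistant.",
  "tools": [
    {
      "type": "function",
      "name": "get_temperature",
      "description": "Get the current temperature for a city",
      "parameters": {
        "type": "object",
        "properties": {
          "city": { "type": "string" }
        },
        "required": ["city"]
      }
    }
  ]
}
```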

The example flow ChatGPT example.json ships with a working setup: one function tool (get_temperature), a Telegram receiver/sender pair and a typing indicator.

Using ChatGPT tools

ChatGPT Responses dynamically grows its output pins based on the tools array declared in the Prompt JSON: one pin per function tool is created, plus the standard response pin (first) and error pin (last).

(Screenshot: the ChatGPT Responses node with one output pin per declared function tool, plus the response and error pins.)

To answer a function call, the downstream node must:

  1. compute the result and place it on msg.payload (any JSON-serializable value)
  2. leave msg['chatgpt-function-call'] untouched — it carries the call_id and previous_response_id the node needs to resume the conversation with the model
  3. wire its output back into the input of the same ChatGPT Responses node

For example, the Get temperature function node reads the city argument provided by the model and writes the temperature back:

// msg.payload contains the arguments parsed from the model, e.g. { city: 'Milan' }
const city = msg.payload.city;

msg.payload = { city, temperature: 21, unit: 'C' };
return msg;

The same pattern scales to any number of tools: declare them in the prompt JSON, wire one branch per tool and loop each branch back to the ChatGPT Responses input.
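As a sketch, a second hypothetical tool (say, get_forecast, declared the same way in the prompt JSON) would get its own pin and its own function node. The handler follows exactly the same shape as Get temperature; it is wrapped in a named function here only so it can be shown self-contained, and the forecast values are placeholders:

```javascript
// Hypothetical handler for a second function tool (get_forecast),
// written in the same shape as a Node-RED function-node body.
function onGetForecast(msg) {
  // msg.payload carries the arguments parsed from the model,
  // e.g. { city: 'Milan', days: 3 }
  const { city, days } = msg.payload;

  // Compute (or fetch) the result and place it on msg.payload;
  // msg['chatgpt-function-call'] is left untouched so the node
  // can resume the conversation with the model.
  msg.payload = {
    city,
    forecast: Array.from({ length: days }, (_, i) => ({
      day: i + 1,
      temperature: 20 + i, // placeholder value
      unit: 'C'
    }))
  };
  return msg;
}
```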

Conversation state

ChatGPT Responses keeps the conversation alive across messages by storing the response.id of the latest OpenAI call into the flow context, scoped by chatId. The next call automatically reuses it as previous_response_id, so the model sees the full history without the chatbot having to resend it.
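Conceptually, the bookkeeping amounts to a small map from chatId to the latest response id. The sketch below is illustrative, not the node's actual internals (the function names and context key are assumptions, and a Map stands in for the flow context):

```javascript
// Illustrative sketch of per-chat conversation state.
const store = new Map(); // stands in for Node-RED flow context

// Read the previous_response_id to send with the next call, if any.
function getLastResponseId(chatId) {
  return store.get(`chatgpt-last-response-${chatId}`);
}

// Remember the response.id returned by the latest OpenAI call.
function rememberResponse(chatId, responseId) {
  store.set(`chatgpt-last-response-${chatId}`, responseId);
}
```

On the first message for a chatId there is no stored id, so the call starts a fresh conversation; once a reply arrives, its response.id is stored and reused as previous_response_id on the next call.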

If the upstream message does not carry a chatId (for example a broadcast or a manual Inject), the node will log a warning and the conversation will be stateless.
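When testing from a manual Inject, state can still be exercised by setting a chatId upstream, for example in a small function node. The payload location (msg.payload.chatId) is an assumption based on the RedBot message format; adapt it to where your flow carries the chat id:

```javascript
// Give an injected test message a stable chatId so the conversation
// stays stateful. msg.payload.chatId is assumed here; adjust as needed.
function withChatId(msg, chatId) {
  msg.payload = Object.assign({}, msg.payload, { chatId });
  return msg;
}
```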