Used to send a message to OpenAI's Responses API and receive a generated answer back into the flow. The node also supports function calling, conversation state and image / file attachments.
The text sent to ChatGPT is taken from the upstream Message payload (msg.payload.content); the response is forwarded out of the first pin and can be sent back to the user as a regular Message.
The prompt design (model, instructions, tools, variables, …) is not configured field by field in the editor: instead, the JSON exported from the Prompts Playground is pasted into the Prompt text area. Any change made in the Playground (a new tool, a different model, a longer instruction) is therefore propagated by simply copying and pasting the updated JSON.
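For reference, a prompt JSON with one function tool typically looks like the sketch below. The field values are illustrative; the authoritative format is whatever the Playground exports for your prompt:

```json
{
  "model": "gpt-4o",
  "instructions": "You are a helpful weather assistant.",
  "tools": [
    {
      "type": "function",
      "name": "get_temperature",
      "description": "Get the current temperature for a city",
      "parameters": {
        "type": "object",
        "properties": {
          "city": { "type": "string" }
        },
        "required": ["city"]
      }
    }
  ]
}
```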
The example flow ChatGPT example.json ships with a working setup: one function tool (get_temperature), a Telegram receiver / sender pair and a typing indicator.
ChatGPT Responses dynamically grows its output pins based on the tools array declared in the Prompt JSON: one pin per function tool is created, plus the standard response pin (first) and error pin (last).

In the screenshot above:

- The Telegram receiver is wired to the ChatGPT Responses node and to a Waiting… message that gives the user immediate feedback while the model is thinking.
- The ChatGPT Responses node exposes three pins because the prompt declares one function tool named get_temperature:
  - response: forwarded to the Message / Params / Telegram Sender chain.
  - get_temperature: fires whenever the model decides to call the get_temperature tool. The downstream Get temperature function node resolves the value (for example by calling an external weather API) and loops the result back into the input of ChatGPT Responses so the model can produce the final natural-language answer.

To answer a function call, the downstream node must:

- write the result into msg.payload (any JSON-serializable value);
- leave msg['chatgpt-function-call'] untouched: it carries the call_id and previous_response_id the node needs to resume the conversation with the model;
- send the message back to the input of the ChatGPT Responses node.

For example, the Get temperature function node reads the city argument provided by the model and writes the temperature back:
```javascript
// msg.payload contains the arguments parsed from the model, e.g. { city: 'Milan' }
const city = msg.payload.city;
// static value for the example; a real flow would query a weather API here
msg.payload = { city, temperature: 21, unit: 'C' };
return msg;
```
The same pattern scales to any number of tools: declare them in the prompt JSON, wire one branch per tool and loop each branch back to the ChatGPT Responses input.
ChatGPT Responses keeps the conversation alive across messages by storing the response.id of the latest OpenAI call into the flow context, scoped by chatId. The next call automatically reuses it as previous_response_id, so the model sees the full history without the chatbot having to resend it.
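The mechanism can be sketched in plain JavaScript, using a Map in place of the flow context (names are illustrative, not the node's actual internals):

```javascript
// Per-chat conversation state, with a Map standing in for Node-RED's
// flow context. Key and field names are illustrative.
const store = new Map();

// Build the next Responses API payload for a chat: reuse the stored
// response id, if any, as previous_response_id so the model sees the
// full history without the chatbot resending it.
function nextRequest(chatId, userText) {
  const previousResponseId = store.get(chatId); // undefined on first turn
  return {
    input: userText,
    ...(previousResponseId && { previous_response_id: previousResponseId })
  };
}

// After each OpenAI call, remember response.id for this chat.
function rememberResponse(chatId, response) {
  store.set(chatId, response.id);
}
```

On the first turn nextRequest produces no previous_response_id; once rememberResponse has stored the latest response.id, every later turn for the same chatId resumes the conversation.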
If the upstream message does not carry a chatId (for example a broadcast or a manual Inject), the node will log a warning and the conversation will be stateless.