lua-openai

Bindings to the OpenAI HTTP API for Lua. Compatible with any HTTP library that supports LuaSocket's http request interface, and with OpenResty via lapis.nginx.http. This project implements both the classic Chat Completions API and the modern Responses API.

AI Generated Disclaimer

The large majority of this library was written using generative AI models like ChatGPT and Claude Sonnet. Human review and guidance are provided where needed.

Install

Install using LuaRocks:

luarocks install lua-openai

Quick Usage

Using the Responses API:

local openai = require("openai")
local client = openai.new(os.getenv("OPENAI_API_KEY"))

local status, response = client:create_response({
  {role = "system", content = "You are a Lua programmer"},
  {role = "user", content = "Write a 'Hello world' program in Lua"}
}, {
  model = "gpt-4.1",
  temperature = 0.5
})

if status == 200 then
  -- the JSON response is automatically parsed into a Lua object
  print(response.output[1].content[1].text)
end

Using the Chat Completions API:

local openai = require("openai")
local client = openai.new(os.getenv("OPENAI_API_KEY"))

local status, response = client:create_chat_completion({
  {role = "system", content = "You are a Lua programmer"},
  {role = "user", content = "Write a 'Hello world' program in Lua"}
}, {
  model = "gpt-3.5-turbo",
  temperature = 0.5
})

if status == 200 then
  -- the JSON response is automatically parsed into a Lua object
  print(response.choices[1].message.content)
end

Chat Session Example

A chat session instance can be created to simplify managing the state of a back-and-forth conversation with the Chat Completions API. Note that chat state is stored locally in memory: each new message is appended to the list of messages, and the model's output is automatically appended to the list for the next request.

local openai = require("openai")
local client = openai.new(os.getenv("OPENAI_API_KEY"))

local chat = client:new_chat_session({
  -- provide an initial set of messages
  messages = {
    {role = "system", content = "You are an artist who likes colors"}
  }
})

-- returns the string response
print(chat:send("List your top 5 favorite colors"))

-- the chat history is sent on subsequent requests to continue the conversation
print(chat:send("Excluding the colors you just listed, tell me your favorite color"))

-- the entire chat history is stored in the messages field
for idx, message in ipairs(chat.messages) do
  print(message.role, message.content)
end

-- You can stream the output by providing a callback as the second argument
-- the full response concatenated is also returned by the function
local response = chat:send("What's the most boring color?", function(chunk)
  io.stdout:write(chunk.content)
  io.stdout:flush()
end)

Streaming Response Example

Under normal circumstances the API waits until the entire response is available before returning it. Depending on the prompt, this may take some time. The streaming API can be used to read the output one chunk at a time, allowing you to display content in real time as it is generated.

Using the Responses API:

local openai = require("openai")
local client = openai.new(os.getenv("OPENAI_API_KEY"))

client:create_response({
  {role = "system", content = "You work for Streak.Club, a website to track daily creative habits"},
  {role = "user", content = "Who do you work for?"}
}, {
  stream = true
}, function(chunk)
  -- Raw event object from API: check type and access delta directly
  if chunk.type == "response.output_text.delta" then
    io.stdout:write(chunk.delta)
    io.stdout:flush()
  end
end)

print() -- print a newline

Using the Chat Completions API:

local openai = require("openai")
local client = openai.new(os.getenv("OPENAI_API_KEY"))

client:create_chat_completion({
  {role = "system", content = "You work for Streak.Club, a website to track daily creative habits"},
  {role = "user", content = "Who do you work for?"}
}, {
  stream = true
}, function(chunk)
  -- Raw event object from API: access content via choices[1].delta.content
  local delta = chunk.choices and chunk.choices[1] and chunk.choices[1].delta
  if delta and delta.content then
    io.stdout:write(delta.content)
    io.stdout:flush()
  end
end)

print() -- print a newline

Documentation

The openai module returns a table with the following fields:

  • OpenAI: A client for sending requests to the OpenAI API.
  • new: An alias for OpenAI to create a new instance of the client.
  • ChatSession: A class for managing chat sessions and history with the OpenAI API.
  • VERSION = "1.5.0": The current version of the library.

Classes

OpenAI

This class initializes a new OpenAI API client.

new(api_key, config)

Constructor for the OpenAI client.

  • api_key: Your OpenAI API key.
  • config: An optional table of configuration options, with the following shape:
    • http_provider: A string specifying the HTTP module name used for requests, or nil. If not provided, the library will automatically use "lapis.nginx.http" in an ngx environment, or "socket.http" otherwise.

local openai = require("openai")
local api_key = "your-api-key"
local client = openai.new(api_key)
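
For example, to select the HTTP provider explicitly rather than relying on auto-detection:

```lua
local openai = require("openai")

-- force LuaSocket's HTTP module even where another provider could be auto-detected
local client = openai.new(os.getenv("OPENAI_API_KEY"), {
  http_provider = "socket.http"
})
```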

client:new_chat_session(...)

Creates a new ChatSession instance. A chat session is an abstraction over the chat completions API that stores the chat history. You can append new messages to the history and request completions to be generated from it. By default, the completion is appended to the history.

client:new_responses_chat_session(...)

Creates a new ResponsesChatSession instance for the Responses API. Similar to ChatSession, but uses OpenAI's Responses API, which handles conversation state server-side via previous_response_id.

  • opts: Optional configuration table
    • model: Model to use (defaults to client's default_model)
    • instructions: System instructions for the conversation
    • tools: Array of tool definitions
    • previous_response_id: Resume from a previous response

client:create_chat_completion(messages, opts, chunk_callback)

Sends a request to the /chat/completions endpoint.

  • messages: An array of message objects.
  • opts: Additional options for the chat, passed directly to the API (e.g. model, temperature, etc.): https://platform.openai.com/docs/api-reference/chat
  • chunk_callback: A function to be called for each raw event object when stream = true is passed in opts. Each chunk is the parsed API response (e.g. {object = "chat.completion.chunk", choices = {{delta = {content = "..."}, index = 0}}}).

Returns HTTP status, response object, and output headers. The response object will be decoded from JSON if possible, otherwise the raw string is returned.

client:chat(messages, opts, chunk_callback)

Legacy alias for create_chat_completion with filtered streaming chunks. When streaming, the callback receives parsed chunks in the format {content = "...", index = ...} instead of raw event objects.
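
For example, streaming with the legacy alias (a sketch; the chunk shape follows the filtered format described above):

```lua
local openai = require("openai")
local client = openai.new(os.getenv("OPENAI_API_KEY"))

client:chat({
  {role = "user", content = "Count to three"}
}, {stream = true}, function(chunk)
  -- filtered chunk: {content = "...", index = ...}
  io.stdout:write(chunk.content)
  io.stdout:flush()
end)

print() -- print a newline
```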

client:completion(prompt, opts)

Sends a request to the /completions endpoint.

Returns HTTP status, response object, and output headers. The response object will be decoded from JSON if possible, otherwise the raw string is returned.

client:embedding(input, opts)

Sends a request to the /embeddings endpoint.

Returns HTTP status, response object, and output headers. The response object will be decoded from JSON if possible, otherwise the raw string is returned.
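
A minimal embedding request might look like the sketch below. The model name is illustrative, and the response shape (response.data[1].embedding) follows the OpenAI embeddings API:

```lua
local status, response = client:embedding("The food was delicious", {
  model = "text-embedding-3-small"
})

if status == 200 then
  -- each entry in response.data carries an array of floats
  local vector = response.data[1].embedding
  print("dimensions: " .. #vector)
end
```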

client:create_response(input, opts, stream_callback)

Sends a request to the /responses endpoint (Responses API).

  • input: A string or array of message objects (with role and content fields)
  • opts: Additional options passed directly to the API (e.g. model, temperature, instructions, tools, previous_response_id, etc.): https://platform.openai.com/docs/api-reference/responses
  • stream_callback: Optional function called for each raw event object when stream = true is passed in opts (e.g. {type = "response.output_text.delta", delta = "Hello"})

Returns HTTP status, response object, and output headers. The response object will be decoded from JSON if possible, otherwise the raw string is returned.

client:response(response_id)

Retrieves a stored response by ID from the /responses/{id} endpoint.

  • response_id: The ID of the response to retrieve

Returns HTTP status, response object, and output headers.

client:delete_response(response_id)

Deletes a stored response.

  • response_id: The ID of the response to delete

Returns HTTP status, response object, and output headers.

client:cancel_response(response_id)

Cancels an in-progress streaming response.

  • response_id: The ID of the response to cancel

Returns HTTP status, response object, and output headers.

client:moderation(input, opts)

Sends a request to the /moderations endpoint to check content against OpenAI's content policy.

  • input: A string or array of strings to classify
  • opts: Additional options passed directly to the API

Returns HTTP status, response object, and output headers.
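
For example (the response.results[1].flagged field follows the OpenAI moderations API):

```lua
local status, response = client:moderation("I want to do something harmful")

if status == 200 then
  local result = response.results[1]
  print(result.flagged) -- true if the input violates the content policy
end
```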

client:models()

Lists available models from the /models endpoint.

Returns HTTP status, response object, and output headers.
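
For example, printing the ID of every available model (the response.data array follows the OpenAI models API):

```lua
local status, response = client:models()

if status == 200 then
  for _, model in ipairs(response.data) do
    print(model.id)
  end
end
```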

client:files()

Lists uploaded files from the /files endpoint.

Returns HTTP status, response object, and output headers.

client:file(file_id)

Retrieves information about a specific file.

  • file_id: The ID of the file to retrieve

Returns HTTP status, response object, and output headers.

client:delete_file(file_id)

Deletes a file.

  • file_id: The ID of the file to delete

Returns HTTP status, response object, and output headers.

client:image_generation(params)

Sends a request to the /images/generations endpoint to generate images.

Returns HTTP status, response object, and output headers.
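
Since params is passed directly to the API, a request might look like the sketch below. The prompt, n, and size fields, and the data[1].url field in the response, follow the OpenAI image generation API (the url field assumes the default response format):

```lua
local status, response = client:image_generation({
  prompt = "A watercolor painting of a fox",
  n = 1,
  size = "1024x1024"
})

if status == 200 then
  print(response.data[1].url)
end
```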

ResponsesChatSession

This class manages chat sessions using OpenAI's Responses API. Unlike ChatSession, conversation state is maintained server-side via previous_response_id. Typically created with new_responses_chat_session.

The field response_history stores an array of response objects from past interactions. The field current_response_id holds the ID of the most recent response, used to maintain conversation continuity.

new(client, opts)

Constructor for the ResponsesChatSession.

  • client: An instance of the OpenAI client.
  • opts: An optional table of options.
    • model: Model to use (defaults to client's default_model)
    • instructions: System instructions for the conversation
    • tools: Array of tool definitions
    • previous_response_id: Resume from a previous response

session:send(input, stream_callback)

Sends input and returns the response, maintaining conversation state automatically.

  • input: A string or array of message objects.
  • stream_callback: Optional function for streaming responses.

Returns a response object on success (or accumulated text string when streaming). On failure, returns nil, an error message, and the raw response.

Response objects have helper methods:

  • response:get_output_text(): Extract all text content as a string
  • response:get_images(): Extract generated images (when using image_generation tool)
  • tostring(response): Converts to text string

The stream_callback receives two arguments: the delta text string and the raw event object. Each call provides an incremental piece of the response text.
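
A short sketch of a Responses API session (the instructions string is illustrative):

```lua
local openai = require("openai")
local client = openai.new(os.getenv("OPENAI_API_KEY"))

local session = client:new_responses_chat_session({
  instructions = "You are a concise assistant"
})

-- non-streaming: returns a response object with helper methods
local response = session:send("Name the primary colors")
if response then
  print(response:get_output_text())
  print(session.current_response_id) -- used to continue the conversation
end

-- streaming: the callback receives the delta text and the raw event object
session:send("Now name the secondary colors", function(delta, event)
  io.stdout:write(delta)
  io.stdout:flush()
end)

print() -- print a newline
```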

session:create_response(input, opts, stream_callback)

Lower-level method to create a response with additional options.

  • input: A string or array of message objects.
  • opts: Additional options (model, temperature, tools, previous_response_id, etc.)
  • stream_callback: Optional function for streaming responses.

Returns a response object on success. On failure, returns nil, an error message, and the raw response.

ChatSession

This class manages chat sessions and history with the OpenAI API. Typically created with new_chat_session.

The field messages stores an array of chat messages representing the chat history. Each message object must conform to the following structure:

  • role: A string representing the role of the message sender. It must be one of the following values: "system", "user", or "assistant".
  • content: A string containing the content of the message.
  • name: An optional string representing the name of the message sender. If not provided, it should be nil.

For example, a valid message object might look like this:

{
  role = "user",
  content = "Tell me a joke",
  name = "John Doe"
}

new(client, opts)

Constructor for the ChatSession.

  • client: An instance of the OpenAI client.
  • opts: An optional table of options.
    • messages: An initial array of chat messages
    • functions: A list of function declarations
    • temperature: temperature setting
    • model: Which chat completion model to use, e.g. gpt-4, gpt-3.5-turbo

chat:append_message(m, ...)

Appends a message to the chat history.

  • m: A message object.

chat:last_message()

Returns the last message in the chat history.

chat:send(message, stream_callback=nil)

Appends a message to the chat history, triggers a completion with generate_response, and returns the response as a string. On failure, returns nil, an error message, and the raw request response.

If the response includes a function_call, then the entire message object is returned instead of a string of the content. You can return the result of the function by passing a role = "function" message object to the send method.

  • message: A message object or a string.
  • stream_callback: (optional) A function to enable streaming output.

By providing a stream_callback, the request will run in streaming mode. This function receives chunks as they are parsed from the response.

These chunks have the following format:

  • content: A string containing the text of the assistant's generated response.

For example, a chunk might look like this:

{
  content = "This is a part of the assistant's response.",
}

chat:generate_response(append_response, stream_callback=nil)

Calls the OpenAI API to generate the next response for the stored chat history. Returns the response as a string. On failure, returns nil, an error message, and the raw request response.

  • append_response: Whether the response should be appended to the chat history (default: true).
  • stream_callback: (optional) A function to enable streaming output.

See chat:send for details on the stream_callback.
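
For example, building the history manually and generating a completion without appending it:

```lua
-- append a message by hand instead of using chat:send
chat:append_message({role = "user", content = "Summarize our conversation"})

-- generate a response without storing it in the chat history
local response = chat:generate_response(false)
print(response)
```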

Appendix

Chat Session With Functions

Note: Functions are the legacy format for what are now known as tools; this example is left here as a reference.

OpenAI allows sending a list of function declarations that the LLM can decide to call based on the prompt. The function calling interface must be used with chat completions and the gpt-4-0613 or gpt-3.5-turbo-0613 models or later.

See https://github.com/leafo/lua-openai/blob/main/examples/example5.lua for a full example that implements basic math functions to compute the standard deviation of a list of numbers.

Here's a quick example of how to use functions in a chat exchange. First you will need to create a chat session with the functions option containing an array of available functions.

The functions are stored in the functions field on the chat object. If the functions need to be adjusted for future messages, the field can be modified.

local chat = openai:new_chat_session({
  model = "gpt-3.5-turbo-0613",
  functions = {
    {
      name = "add",
      description = "Add two numbers together",
      parameters = {
        type = "object",
        properties = {
          a = { type = "number" },
          b = { type = "number" }
        }
      }
    }
  }
})

Any prompt you send will be aware of all available functions, and may request any of them to be called. If the response contains a function call request, then an object will be returned instead of the standard string return value.

local res = chat:send("Using the provided function, calculate the sum of 2923 + 20839")

if type(res) == "table" and res.function_call then
  -- The function_call object has the following fields:
  --   function_call.name --> name of function to be called
  --   function_call.arguments --> A string in JSON format that should match the parameter specification
  -- Note that res may also include a content field if the LLM produced a textual output as well

  local cjson = require "cjson"
  local name = res.function_call.name
  local arguments = cjson.decode(res.function_call.arguments)
  -- ... compute the result and send it back ...
end

You can evaluate the requested function and arguments and send the result back to the model so it can resume operation with a role = "function" message object:

Since the LLM can hallucinate every part of the function call, you'll want to do robust type validation to ensure that function name and arguments match what you expect. Assume every stage can fail, including receiving malformed JSON for the arguments.

local name, arguments = ... -- the name and arguments extracted from above

if name == "add" then
  local value = arguments.a + arguments.b

  -- send the response back to the chat bot using a `role = function` message

  local cjson = require "cjson"

  local res = chat:send({
    role = "function",
    name = name,
    content = cjson.encode(value)
  })

  print(res) -- Print the final output
else
  error("Unknown function: " .. name)
end
