Interface: LlaminateConfig

Represents the configuration options for the Llaminate service.
Properties:
Name Type Attributes Description

endpoint string
  The endpoint URL for the LLM service to use.
key string
  The API key for the LLM service.
model string <optional>
  The model name to use with the LLM service.
attachments Array.<{type: string, url: string}> <optional>
  Attachments to include with the request. A tool call can also return attachments in its response using {"@attachments": [...]}, which will be included in the context for subsequent messages, if supported by the LLM service.
headers Object <optional>
  Additional headers to include in the API requests to the LLM service.
history Array.<LlaminateMessage> <optional>
  An array of messages to include as part of the conversation history for the request.
limits Object.<{tokens: number, attachments: number, recursions: number}> <optional>
  An object specifying various limits to enforce when processing requests and responses.
options Object <optional>
  Additional options to include in the request body sent to the LLM service.
rpm number <optional>
  The maximum number of requests per minute (RPM) to allow when making API requests.
schema Object <optional>
  An optional JSON schema specifying the expected structure of the response.
system Array.<string> <optional>
  System prompts to include with every request. A tool call can also return system prompts in its response using {"@system": [...]}, which will be included in the context for that completion.
tools Array.<{function: {name: string, description: string, parameters: Object, strict: boolean}, handler: function()}> <optional>
  Tool definitions to include with the request.
window number <optional>
  The number of recent user messages to include in the context window for each request.
fetch function <optional>
  A custom fetch function to use for making API requests to the LLM service.
Default Value:

  {
    attachments: [],
    headers: {},
    history: [],
    limits: { attachments: 8, recursions: 5 },
    options: {},
    rpm: Infinity,
    system: [],
    tools: [],
    window: 12,
    fetch: fetch,
    handler: async (name, args) => {
      throw new Error(`No \`handler\` was provided for \`${name}\` in the Llaminate configuration.`);
    }
  }

Examples

{ endpoint: Llaminate.MISTRAL }
{ endpoint: "https://api.example.com/v1/chat/completions" }
{ key: "12345-abcde-67890-fghij-klm" }
{ model: "mistral-small-latest" }
{ attachments: [
  {
    type: Llaminate.JPEG,
    url: "https://example.com/image.jpg"
  },
  {
    type: Llaminate.PDF,
    url: "data:application/pdf;base64,JVBERi0xLjcKJcfs..."
  }
] }
{ headers: {
  "OpenAI-Organization": "org-12345-abcde-67890",
  "OpenAI-Project": "project-abcde-12345-fghij",
} }
{ history: [
  { role: "user", content: "What is the capital of France?" },
  { role: "assistant", content: "The capital of France is Paris." },
] }
{ limits: {
  tokens: 1000,
  attachments: 5,
  recursions: 3,
} }
{ options: {
  temperature: 0.7,
  max_tokens: 150,
} }
{ rpm: 720 }
{ schema: {
  type: "object",
  properties: {
    reply: {
      type: "string",
      description: "Your response to the user's query."
    },
    thoughts: {
      type: "string",
      description: "Your internal thoughts about the user's query."
    },
  },
  required: ["reply", "thoughts"],
  additionalProperties: false,
} }
{ system: [
  "You are an assistant who answers questions about movies.",
  "You are always excited about movie trivia and love sharing fun facts.",
] }
{ tools: [
  {
    function: {
      name: "get_current_time",
      description: "Returns the current time in ISO format.",
      parameters: {
        type: "object",
        properties: {},
        required: [],
      }
    },
    handler: async () => {
      return new Date().toISOString();
    }
  }
] }
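
Per the attachments and system property descriptions above, a tool handler's return value can carry {"@attachments": [...]} and {"@system": [...]} keys to enrich the context. The sketch below is illustrative only: the tool name, its parameters, the plain MIME-type string (in place of the Llaminate.* type constants shown above), and the assumption that a per-tool handler receives the parsed arguments directly are not confirmed by this page.

```javascript
// Hypothetical tool whose handler returns an attachment and a system
// prompt alongside its regular result.
const chartTool = {
  function: {
    name: "get_sales_chart",  // illustrative name, not part of the API
    description: "Returns a sales chart for the given quarter.",
    parameters: {
      type: "object",
      properties: { quarter: { type: "string" } },
      required: ["quarter"],
    },
  },
  handler: async (args) => ({
    result: `Chart generated for ${args.quarter}.`,
    // Included in the context for subsequent messages, if supported by
    // the LLM service (see the `attachments` property above).
    "@attachments": [
      { type: "image/png", url: "https://example.com/q1-sales.png" },
    ],
    // Included in the context for this completion (see `system` above).
    "@system": ["Describe the attached chart when answering."],
  }),
};
```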
{ window: 5 }
{ fetch: async (url, options) => {
  // Example fetch using Node's https module
  // (assumes `const https = require("node:https");` is in scope)
  return new Promise((resolve, reject) => {
    const request = https.request(url, options, (response) => {
      let data = "";
      response.on("data", (chunk) => data += chunk);
      response.on("end", () => resolve({
        ok: response.statusCode >= 200 && response.statusCode < 300,
        status: response.statusCode,
        json: async () => JSON.parse(data),
        text: async () => data,
      }));
    });
    request.on("error", reject);
    if (options.body) request.write(options.body);
    request.end();
  });
} }
{ handler: async (name, args) => {
  console.log(`Tool called: ${name} with arguments:`, args);
} }