Constructor
new Llaminate(config)
Constructs a new instance of the Llaminate class.
Parameters:
| Name | Type | Description |
|---|---|---|
| config | LlaminateConfig | The configuration options for the Llaminate instance. |
Throws:
Will throw an error if the provided configuration is invalid.
Example
const mistral = new Llaminate({
endpoint: Llaminate.MISTRAL,
key: "12345-abcde-67890-fghij-klm",
model: "mistral-small-latest",
system: ["You are a sarcastic assistant who answers very briefly and bluntly."],
rpm: 720
});
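Because the constructor throws on an invalid configuration, callers may want to guard construction. This is an illustrative sketch, not part of Llaminate; the `makeClient` helper takes the constructor as a parameter so it works with any client class:

```javascript
// Sketch: wrap construction so an invalid configuration yields null
// instead of an uncaught exception. `Ctor` stands in for Llaminate.
function makeClient(Ctor, config) {
  try {
    return new Ctor(config);
  } catch (err) {
    console.error("Client configuration rejected:", err.message);
    return null;
  }
}
```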
Methods
clear()
Resets the chat history. This does not affect the configuration or any
other settings.
Returns:
void
Example
// EXAMPLE: Populating and then clearing the history
mistral.complete("What's your name?"); // "John"
mistral.complete("How do you spell that?"); // "J-O-H-N"
mistral.clear();
mistral.complete("Tell me again?"); // "Tell you what again?"
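clear() pairs naturally with export() (documented below) when the history may be needed later: snapshot first, then reset. A sketch; `archiveAndReset` is an illustrative helper, not part of Llaminate:

```javascript
// Sketch: capture the history before wiping it, using the documented
// export() and clear() methods on any Llaminate-like client.
function archiveAndReset(client) {
  const snapshot = client.export(); // full history
  client.clear();                   // history reset; config untouched
  return snapshot;
}
```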
(async) complete(prompt, [config]) → {Promise.<LlaminateResponse>}
Sends a prompt to the LLM service and returns a chat completion response.
Parameters:
| Name | Type | Attributes | Description |
|---|---|---|---|
| prompt | string \| Array.<LlaminateMessage> | | The input prompt or messages to send to the service. |
| config | LlaminateConfig | <optional> | Optional configuration settings for this completion. |
Throws:
Will throw an error if the prompt is invalid, the response is
unsuccessful, or the response does not conform to the expected format.
Returns:
A promise resolving to a
LlaminateResponse from the service.
- Type
- Promise.<LlaminateResponse>
Examples
// EXAMPLE 1: Simple prompt with default configuration
const response = await mistral.complete("What's the capital of France?");
console.log(response.message); // "Paris"
// EXAMPLE 2: System messages set through configuration
const response = await mistral.complete("What's the capital of France?",
{ system: [
"You are a children's geography tutor.",
"Always reply as if you are explaining to a child."
] } );
// EXAMPLE 3: An image attachment (support depends on the LLM model)
const response = await mistral.complete(
"Generate a helpful HTML `alt` tag for this image.", {
attachments: [ { type: Llaminate.JPEG, url: "https://example.com/image.jpg" } ]
});
// EXAMPLE 4: Prompt with a pre-rolled conversation history
const history = [
{ role: "user", content: "What's a good name for a houseplant?" },
{ role: "assistant", content: "How about Fernie Sanders?" },
{ role: "user", content: "Nice. Any other suggestions?" },
{ role: "assistant", content: "How about Leaf Erickson?" }
];
const prompt = "Great. What could be its nickname?";
const response = await mistral.complete(prompt, { history });
// EXAMPLE 5: Rolling the conversation history into the prompt
const messages = history.concat({ role: "user", content: prompt });
const response = await mistral.complete(messages);
export([window]) → {Array.<LlaminateMessage>}
Exports the chat history. By default, the entire chat history is
exported. If a window is specified, only the most recent messages are
returned: the window counts back the given number of most recent user
messages and includes those messages, the assistant responses to them,
and any system prompts set in the global configuration.
Parameters:
| Name | Type | Attributes | Description |
|---|---|---|---|
| window | number | <optional> | The number of most recent user messages to include. |
Returns:
An array of chat history messages.
- Type
- Array.<LlaminateMessage>
Examples
// EXAMPLE 1: Exporting the entire history of messages
const history = mistral.export();
localStorage.setItem('mistral-history', JSON.stringify(history));
// EXAMPLE 2: Getting a window of recent messages
// Returns the last 5 user-assistant interactions and all system prompts
const history = mistral.export(5);
console.log(history.length);
// e.g. 14 = 5 user messages + 5 assistant replies + 2 system prompts,
// plus 2 tool-call messages from a reply that invoked a tool
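Because export() returns plain LlaminateMessage objects, a saved history survives a JSON round trip and can later be replayed through complete()'s documented history option. A sketch; the messages and the commented-out replay call are illustrative:

```javascript
// Sketch: messages exported as in Example 1 round-trip through JSON intact.
const saved = JSON.stringify([
  { role: "user", content: "What's a good name for a houseplant?" },
  { role: "assistant", content: "How about Fernie Sanders?" }
]);
const restored = JSON.parse(saved);
// Replay the restored messages on a fresh page load, e.g.:
// const response = await mistral.complete("Any more ideas?", { history: restored });
```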
(async, generator) stream(prompt, [config]) → {LlaminateResponse}
Sends a prompt to the LLM service and streams the response.
Parameters:
| Name | Type | Attributes | Description |
|---|---|---|---|
| prompt | string \| Array.<LlaminateMessage> | | The input prompt or messages to send to the service. |
| config | LlaminateConfig | <optional> | Optional configuration settings for this completion. |
Throws:
Will throw an error if the prompt is invalid, the response is
unsuccessful, or the response does not conform to the expected format.
Yields:
Successive LlaminateResponse objects from the service as the
completion streams in.
- Type
- LlaminateResponse
Examples
// EXAMPLE 1: Streaming the response to a simple prompt
const stream = mistral.stream("Tell me a joke and explain it.");
for await (const response of stream) {
console.log(response.message);
}
// EXAMPLE 2: Streaming a response with a structured output
const stream = mistral.stream("Tell me a joke and explain it.", {
schema: {
type: "object",
properties: {
joke: {
type: "string",
description: "The joke itself."
},
explanation: {
type: "string",
description: "An explanation of why the joke is funny."
},
},
required: ["joke", "explanation"],
additionalProperties: false,
}
});
for await (const response of stream) {
// Initially streams as a string until the JSON schema can be validated
console.log(response.message);
console.log(response.message?.joke);
console.log(response.message?.explanation);
}
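If, as Example 2's inline comment suggests, each yielded LlaminateResponse carries the message accumulated so far, the final response can be obtained by simply draining the stream. A sketch under that assumption; `streamToFinal` is an illustrative helper, not part of Llaminate:

```javascript
// Sketch: drain an async-iterable stream and keep only the last response.
async function streamToFinal(stream) {
  let last = null;
  for await (const response of stream) {
    last = response; // each yield supersedes the previous partial message
  }
  return last;
}
```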