Chat

The ChatCompletion endpoint lets you hold a conversation with a model by sending a list of messages.

const chat = await anthropic.ChatCompletion.create({
  messages: [
    { role: "user", content: "Hello!" },
    // { role: "system", content: "You are a helpful assistant." },
  ],
  model: "anthropic.claude-v2",
  max_tokens_to_sample: 300,
});
 
console.log(chat.messages);

Usage

import AnthropicBedrock from "anthropic-bedrock";
 
const anthropic = new AnthropicBedrock({
  access_key: process.env["AWS_ACCESS_KEY"],
  secret_key: process.env["AWS_SECRET_KEY"],
  region: process.env["AWS_REGION"],
});
 
async function main() {
  const chat = await anthropic.ChatCompletion.create({
    messages: [
      { role: "user", content: "Hello!" },
      // { role: "system", content: "You are a helpful assistant." },
    ],
    model: "anthropic.claude-v2",
    max_tokens_to_sample: 300,
  });
 
  console.log(chat.messages);
}
 
main();
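
To continue the conversation, append the model's reply to the messages array and call the endpoint again. Below is a minimal sketch, assuming the returned messages array holds the full transcript with the assistant's reply appended (as the chat.messages output above suggests):

async function multiTurn() {
  // First turn: a single user message.
  let messages = [{ role: "user", content: "Hello!" }];

  const first = await anthropic.ChatCompletion.create({
    model: "anthropic.claude-v2",
    max_tokens_to_sample: 300,
    messages,
  });

  // Assumption: first.messages includes the assistant's reply, so it
  // can be reused directly as the history for the next turn.
  messages = [
    ...first.messages,
    { role: "user", content: "Tell me more about that." },
  ];

  const second = await anthropic.ChatCompletion.create({
    model: "anthropic.claude-v2",
    max_tokens_to_sample: 300,
    messages,
  });

  console.log(second.messages);
}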

Configuration

model

The model that will complete your prompt. Refer to the models page.

const chat = await anthropic.ChatCompletion.create({
  model: "anthropic.claude-v2",
  max_tokens_to_sample: 300,
  messages: [
    { role: "user", content: "Hello!" },
    // { role: "system", content: "You are a helpful assistant." },
  ],
});

messages

The list of messages that make up the conversation so far. Each message is an object with a role and a content string.

const chat = await anthropic.ChatCompletion.create({
  model: "anthropic.claude-v2",
  max_tokens_to_sample: 300,
  messages: [
    { role: "user", content: "Hello!" },
    // { role: "system", content: "You are a helpful assistant." },
  ],
});
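
For reference, a sketch of the message shape as it appears in these examples (assumed from this page, not an official type; the package may export its own type definitions):

// Assumed from the examples on this page, not an official type.
type ChatMessage = {
  // Roles shown in these docs; the SDK may accept others (e.g. "assistant").
  role: "user" | "system";
  content: string;
};

const history: ChatMessage[] = [
  { role: "system", content: "You are a helpful assistant." },
  { role: "user", content: "Hello!" },
];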

max_tokens_to_sample (optional)

The maximum number of tokens to generate before stopping. Note that the model may stop before reaching this maximum.

  • Default: 256
  • Range depends on the model; refer to the models page.

const chat = await anthropic.ChatCompletion.create({
  model: "anthropic.claude-v2",
  max_tokens_to_sample: 300,
  messages: [
    { role: "user", content: "Hello!" },
    // { role: "system", content: "You are a helpful assistant." },
  ],
});

stop_sequences (optional)

Sequences that will cause the model to stop generating completion text.

  • Default: []

const chat = await anthropic.ChatCompletion.create({
  model: "anthropic.claude-v2",
  max_tokens_to_sample: 300,
  messages: [
    { role: "user", content: "Hello!" },
    // { role: "system", content: "You are a helpful assistant." },
  ],
  stop_sequences: ["sequence"],
});

temperature (optional)

Amount of randomness injected into the response. Use a value closer to 0 for analytical tasks and closer to 1 for creative and open-ended tasks.

  • Default: 1
  • Range: 0-1

const chat = await anthropic.ChatCompletion.create({
  model: "anthropic.claude-v2",
  max_tokens_to_sample: 300,
  temperature: 0.7,
  messages: [
    { role: "user", content: "Hello!" },
    // { role: "system", content: "You are a helpful assistant." },
  ],
});

top_p (optional)

Use nucleus sampling: for each subsequent token, the model considers options in decreasing probability order and cuts the distribution off once its cumulative probability reaches top_p. You should usually adjust either temperature or top_p, but not both.

  • Default: 1
  • Range: 0-1

const chat = await anthropic.ChatCompletion.create({
  model: "anthropic.claude-v2",
  max_tokens_to_sample: 300,
  top_p: 0.7,
  messages: [
    { role: "user", content: "Hello!" },
    // { role: "system", content: "You are a helpful assistant." },
  ],
});

top_k (optional)

Only sample from the top K most likely options for each subsequent token. Useful for trimming long-tail, low-probability responses.

  • Default: 250
  • Range: 0-500

const chat = await anthropic.ChatCompletion.create({
  model: "anthropic.claude-v2",
  max_tokens_to_sample: 300,
  top_k: 250,
  messages: [
    { role: "user", content: "Hello!" },
    // { role: "system", content: "You are a helpful assistant." },
  ],
});
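
The options above can be combined in a single call. A sketch with illustrative values (not recommendations); temperature is used here rather than top_p, per the note above:

const chat = await anthropic.ChatCompletion.create({
  model: "anthropic.claude-v2",
  max_tokens_to_sample: 300, // cap the length of the reply
  stop_sequences: ["END"],   // illustrative stop string, not a convention
  temperature: 0.7,          // moderate randomness
  top_k: 250,                // sample only from the 250 most likely tokens
  messages: [
    { role: "user", content: "Hello!" },
  ],
});

console.log(chat.messages);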