Streaming
Streaming is an extension of the Completion endpoint: set `stream: true` on the request and the response is delivered incrementally using Server-Sent Events (SSE).
```javascript
const completion = await anthropic.Completion.create({
  model: "anthropic.claude-v2",
  prompt: "In one sentence, what is good about the color blue?",
  max_tokens_to_sample: 300,
  stream: true,
});
```
Usage
```javascript
import AnthropicBedrock from "anthropic-bedrock";

const anthropic = new AnthropicBedrock({
  access_key: process.env["AWS_ACCESS_KEY"],
  secret_key: process.env["AWS_SECRET_KEY"],
  region: process.env["AWS_REGION"],
});

async function main() {
  // With stream: true, create() returns an async iterable of chunks.
  const stream = await anthropic.Completion.create({
    model: "anthropic.claude-v2",
    prompt: "In one sentence, what is good about the color blue?",
    max_tokens_to_sample: 300,
    stream: true,
  });

  for await (const completion of stream) {
    console.log(completion["completion"]);
  }
}

main();
```
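Each chunk carries only a fragment of text, so the full completion can be recovered by concatenating the `completion` fields as they arrive. A minimal sketch, using a mocked stream in place of a real API call (`fakeStream` and `collectCompletion` are illustrative names, not SDK functions):

```javascript
// fakeStream stands in for the async iterable returned by
// Completion.create when stream: true is set; each chunk mirrors
// the shape iterated above: an object with a "completion" field.
async function* fakeStream() {
  yield { completion: "Blue evokes " };
  yield { completion: "calm and clarity." };
}

// Accumulate the streamed fragments into the full completion text.
async function collectCompletion(stream) {
  let text = "";
  for await (const chunk of stream) {
    text += chunk["completion"];
  }
  return text;
}
```

The same loop works unchanged on the real stream, since both expose the async-iterable protocol.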
Configuration
The configuration parameters are exactly the same as for completion, except that `stream: true` must be passed to the completion function.
model
The model that will complete your prompt. Refer to the models page.
```javascript
const completion = await anthropic.Completion.create({
  model: "anthropic.claude-v2",
  prompt: "In one sentence, what is good about the color blue?",
  max_tokens_to_sample: 300,
  stream: true,
});
```
prompt
The prompt you want to use.
- Type: string
```javascript
const completion = await anthropic.Completion.create({
  model: "anthropic.claude-v2",
  prompt: "In one sentence, what is good about the color blue?",
  max_tokens_to_sample: 300,
  stream: true,
});
```
max_tokens_to_sample (optional)
The maximum number of tokens to generate before stopping.
- Default: 256
- Range: depends on the model; refer to the models page.
```javascript
const completion = await anthropic.Completion.create({
  model: "anthropic.claude-v2",
  prompt: "In one sentence, what is good about the color blue?",
  max_tokens_to_sample: 300,
  stream: true,
});
```
stop_sequences (optional)
Sequences that will cause the model to stop generating completion text.
- Default: []
```javascript
const completion = await anthropic.Completion.create({
  model: "anthropic.claude-v2",
  prompt: "In one sentence, what is good about the color blue?",
  max_tokens_to_sample: 300,
  stop_sequences: ["sequence"],
  stream: true,
});
```
temperature (optional)
Amount of randomness injected into the response. Lower values give more deterministic output; higher values give more varied output.
- Default: 1
- Range: 0-1
```javascript
const completion = await anthropic.Completion.create({
  model: "anthropic.claude-v2",
  prompt: "In one sentence, what is good about the color blue?",
  max_tokens_to_sample: 300,
  temperature: 0.7,
  stream: true,
});
```
top_p (optional)
Use nucleus sampling: the model considers tokens in decreasing probability order and cuts off once their cumulative probability reaches `top_p`.
- Default: 1
- Range: 0-1
```javascript
const completion = await anthropic.Completion.create({
  model: "anthropic.claude-v2",
  prompt: "In one sentence, what is good about the color blue?",
  max_tokens_to_sample: 300,
  top_p: 0.7,
  stream: true,
});
```
top_k (optional)
Only sample from the top K options for each subsequent token.
- Default: 250
- Range: 0-500
```javascript
const completion = await anthropic.Completion.create({
  model: "anthropic.claude-v2",
  prompt: "In one sentence, what is good about the color blue?",
  max_tokens_to_sample: 300,
  top_k: 250,
  stream: true,
});
```