VectorStackAI's Embeddings API Reference
Embeddings Operations
vectorstackai.Client.embed
Generates embeddings for a batch of text inputs using the specified model.
This method encodes a batch of text documents or queries into dense vector representations using the selected embedding model. It supports both document and query embeddings, with an optional instruction for instruction-tuned models.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`texts` | `List[str]` | Batch of text strings to be embedded, as a list of strings. Each string represents either a document or a query. | required |
`model` | `str` | The name of the embedding model to use (e.g., `"vstackai-law-1"`). | required |
`is_query` | `bool` | A flag indicating whether the input texts are queries (`True`) or documents (`False`). | `False` |
`instruction` | `str` | An optional instruction to guide the model when embedding queries. Recommended for instruction-tuned models. Defaults to an empty string. | `''` |
Returns:

Name | Type | Description |
---|---|---|
`EmbeddingsObject` | `EmbeddingsObject` | An object that holds embeddings for the batch of texts. The embeddings are stored as a NumPy array of shape `(num_texts, embedding_dim)`. |
Raises:

Type | Description |
---|---|
`ValueError` | If the input parameters are invalid. |
Example

```python
import vectorstackai

client = vectorstackai.Client(api_key="your_api_key")
texts = [
    "The defendant was charged with violation of contract terms.",
    "Consumers have 30 days to return a defective product."
]
embeddings = client.embed(texts=texts, model="vstackai-law-1", is_query=False)
print(embeddings.embeddings.shape)  # (2, 1536)
```
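Since the returned `EmbeddingsObject` stores its vectors as a NumPy array, standard NumPy operations apply directly to the result. As an illustrative sketch (using small synthetic vectors in place of real API output, since the call above requires an API key), a query embedding can be ranked against a batch of document embeddings by cosine similarity:

```python
import numpy as np

def rank_by_cosine_similarity(query_vec: np.ndarray, doc_vecs: np.ndarray) -> np.ndarray:
    """Return document indices sorted from most to least similar to the query."""
    # Normalize both sides so the dot product equals cosine similarity.
    query_norm = query_vec / np.linalg.norm(query_vec)
    doc_norms = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    similarities = doc_norms @ query_norm
    # Negate so argsort yields descending similarity order.
    return np.argsort(-similarities)

# Synthetic stand-ins for embeddings.embeddings, shape (num_docs, dim).
docs = np.array([[1.0, 0.0, 0.0],
                 [0.0, 1.0, 0.0],
                 [0.9, 0.1, 0.0]])
query = np.array([1.0, 0.0, 0.0])

print(rank_by_cosine_similarity(query, docs))  # → [0 2 1]
```

In practice the `docs` array would be `client.embed(..., is_query=False).embeddings` and `query` a single row of `client.embed(..., is_query=True).embeddings`.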