Complete Pinecone integration: vector operations, search, index management, AI embeddings, and document reranking.
Use Pinecone as an action to perform vector operations automatically in your workflow.
No triggers available
Upsert vectors into a namespace. If a new value is upserted for an existing vector ID, it overwrites the previous value. Important: Vector dimensions must match your index configuration (e.g., 1024 or 1536).
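The equivalent call with the Pinecone Python SDK looks roughly like this; the index name, namespace, IDs, and 1536-dimension values are placeholders.

```python
from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")
index = pc.Index("example-index")  # hypothetical index name

# Upsert two vectors into a namespace; the values must match the index dimension.
index.upsert(
    vectors=[
        {"id": "doc1#chunk1", "values": [0.1] * 1536, "metadata": {"source": "doc1"}},
        {"id": "doc1#chunk2", "values": [0.2] * 1536},
    ],
    namespace="example-namespace",
)
```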
Look up and return vectors by ID from a single namespace. The returned vectors include the vector data and/or metadata.
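A minimal fetch sketch with the Python SDK, assuming the same placeholder index and namespace; the response maps each found ID to its stored values and metadata.

```python
from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")
index = pc.Index("example-index")  # hypothetical index name

# Fetch vectors by ID from one namespace and print their metadata.
response = index.fetch(ids=["doc1#chunk1", "doc1#chunk2"], namespace="example-namespace")
for vector_id, vector in response.vectors.items():
    print(vector_id, vector.metadata)
```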
Update a vector in a namespace. If a value is included, it overwrites the previous value. If set_metadata is included, the specified metadata fields are added or overwrite their previous values.
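A sketch of the update call with the Python SDK; the ID, values, and metadata fields are placeholders.

```python
from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")
index = pc.Index("example-index")  # hypothetical index name

# Overwrite the stored values and merge a new metadata field into the existing metadata.
index.update(
    id="doc1#chunk1",
    values=[0.15] * 1536,
    set_metadata={"reviewed": True},
    namespace="example-namespace",
)
```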
Delete vectors by ID from a single namespace.
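The corresponding SDK call, again with placeholder IDs and namespace:

```python
from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")
index = pc.Index("example-index")  # hypothetical index name

# Delete specific vectors by ID from one namespace.
index.delete(ids=["doc1#chunk1", "doc1#chunk2"], namespace="example-namespace")
```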
List the IDs of vectors in a single namespace. An optional prefix can be passed to limit the results to IDs with a common prefix. Note: Only supported for serverless indexes.
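A sketch assuming a serverless index; in the Python SDK, list() yields pages of IDs, and the prefix restricts results to IDs sharing that prefix.

```python
from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")
index = pc.Index("example-index")  # hypothetical serverless index

# Iterate over pages of vector IDs that start with "doc1#".
for id_page in index.list(prefix="doc1#", namespace="example-namespace"):
    print(id_page)
```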
Search a namespace using a query vector. Retrieves the IDs of the most similar items in the namespace, along with their similarity scores.
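A minimal query sketch with the Python SDK; the query vector and top_k are placeholders, and the dimension must match the index.

```python
from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")
index = pc.Index("example-index")  # hypothetical index name

# Return the IDs and similarity scores of the 3 nearest vectors.
results = index.query(
    vector=[0.1] * 1536,
    top_k=3,
    namespace="example-namespace",
    include_metadata=True,
)
for match in results.matches:
    print(match.id, match.score)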
Search a namespace with a query text, query vector, or record ID and return the most similar records. Text search requires indexes with integrated embedding models.
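A sketch of a text search, assuming a recent Python SDK version and an index configured with an integrated embedding model; the exact method name and query shape may differ by SDK version, so treat this as an approximation.

```python
from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")
index = pc.Index("example-text-index")  # hypothetical index with an integrated embedding model

# Search with query text; Pinecone embeds the text with the index's hosted model.
results = index.search(
    namespace="example-namespace",
    query={"inputs": {"text": "disease prevention"}, "top_k": 5},
)
print(results)
```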
List all indexes in a project.
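Listing indexes with the Python SDK is a single control-plane call:

```python
from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")

# Print the names of all indexes in the project.
print(pc.list_indexes().names())
```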
Create a Pinecone index. This is where you specify the similarity metric, the dimension of vectors to be stored in the index, the cloud provider you would like to deploy with, and more.
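A sketch of creating a serverless index with the Python SDK; the name, dimension, metric, cloud, and region are placeholder choices.

```python
from pinecone import Pinecone, ServerlessSpec

pc = Pinecone(api_key="YOUR_API_KEY")

# Create a serverless index: choose the metric, vector dimension, cloud, and region.
pc.create_index(
    name="example-index",
    dimension=1536,
    metric="cosine",
    spec=ServerlessSpec(cloud="aws", region="us-east-1"),
)
```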
Get a description of an index.
Delete an existing index.
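Minimal sketches of describing and then deleting an index with the Python SDK; the index name is a placeholder, and deletion is irreversible.

```python
from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")

# Inspect the index configuration and status, then remove the index.
description = pc.describe_index("example-index")
print(description)

pc.delete_index("example-index")
```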
Return statistics about the contents of an index, including the vector count per namespace, the number of dimensions, and the index fullness. Serverless indexes scale automatically as needed, so index fullness is relevant only for pod-based indexes.
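The stats call with the Python SDK, using the same placeholder index name:

```python
from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")
index = pc.Index("example-index")  # hypothetical index name

# Per-namespace vector counts, dimension, and (for pod-based indexes) fullness.
stats = index.describe_index_stats()
print(stats)
```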
Generate vector embeddings for input data using Pinecone's hosted embedding models. Note: Requires access to Pinecone's inference API, which may not be available on all plans.
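A sketch of embedding text with the Python SDK's inference client; the model name and input text are illustrative, and availability depends on your plan.

```python
from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")

# Embed a passage with a Pinecone-hosted embedding model.
embeddings = pc.inference.embed(
    model="multilingual-e5-large",
    inputs=["Pinecone stores and searches vector embeddings."],
    parameters={"input_type": "passage"},
)
print(embeddings[0].values[:5])
```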
Rerank results according to their relevance to a query using Pinecone's reranking models. Note: Requires access to Pinecone's inference API, which may not be available on all plans.
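A reranking sketch with the Python SDK's inference client; the model name, query, and documents are illustrative.

```python
from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")

# Rerank candidate documents against a query with a hosted reranking model.
reranked = pc.inference.rerank(
    model="bge-reranker-v2-m3",
    query="What is a vector database?",
    documents=[
        "Pinecone is a managed vector database.",
        "Bananas are a good source of potassium.",
    ],
    top_n=1,
    return_documents=True,
)
for row in reranked.data:
    print(row.index, row.score)
```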
List the embedding and reranking models hosted by Pinecone; these models can be used for embedding generation and reranking. Note: Requires access to Pinecone's inference API, which may not be available on all plans.
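A sketch assuming a recent Python SDK version that exposes a model-listing method on the inference client; if your SDK version does not include it, the same information is available from Pinecone's inference API.

```python
from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")

# Assumed method name on recent SDK versions: list the hosted embedding and reranking models.
for model in pc.inference.list_models():
    print(model)
```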