
Reviews from AWS customers

33 AWS reviews

External reviews

50 reviews

External reviews are not included in the AWS star rating for the product.


3-star reviews

    reviewer2812962

Vector search has transformed brand root-cause analysis but pricing and GPU controls need work

  • March 30, 2026
  • Review provided by PeerSpot

What is our primary use case?

My main use case for Pinecone is managed vector search over high-dimensional data, which is ideal for AI applications such as semantic search and RAG. I identify the reasons for brand de-growth for big pharma brands through a sales agent and a process agent, relying on Pinecone for straightforward RAG and vector-embedding work with high-dimensional data.

In my workflow, I have used Pinecone in agentic AI and RAG pipelines that need to scale quickly without infrastructure management. It aligns well with Python workflows and is similar in spirit to the PGVector extension.

What is most valuable?

In my brand de-growth analysis, Pinecone stands out as a fully managed, cloud-native vector database, in contrast to libraries such as FAISS or self-hosted options such as Milvus. It prioritizes ease of use for production AI applications: it deploys as a fully managed serverless service with auto-scaling clusters and pay-per-usage costs, which makes it ideal for production RAG and AI chatbots that use guided search to retrieve results from the Pinecone vector database.

The best feature Pinecone offers is its scalability, since it auto-scales clusters, and its fully managed serverless deployment is another highlight. Additionally, Pinecone integrates easily with Python, and its ease of use there is phenomenal.

Pinecone's scalability allows it to handle billions of vectors with auto-sharding, a capability other databases do not provide. Pinecone is stable, excelling in managed production scaling.

Pinecone has positively impacted my organization by enabling fast similarity searches using metrics such as cosine similarity or Euclidean distance on billions of vectors, with low latency of around 20 to 100 milliseconds. Key capabilities include hybrid search (semantic plus keyword), real-time updates, filtering, and re-ranking.
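For illustration, the two similarity metrics mentioned above can be computed locally in plain Python. This is a minimal sketch with made-up vectors; in Pinecone itself the metric is chosen when the index is created, not per query:

```python
import math

def cosine_similarity(a, b):
    # Dot product divided by the product of the magnitudes; 1.0 means same direction.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def euclidean_distance(a, b):
    # Straight-line distance; 0.0 means identical vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

q = [1.0, 0.0, 1.0]
doc = [0.5, 0.0, 0.5]
print(round(cosine_similarity(q, doc), 4))   # same direction -> 1.0
print(round(euclidean_distance(q, doc), 4))  # -> 0.7071
```

Cosine ignores vector magnitude, which is why it is the common default for text embeddings; Euclidean distance is sensitive to magnitude as well.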

The low latency and hybrid search from Pinecone have significantly improved my team's productivity. Coupled with the RAG pipeline, they have improved solution accuracy and reduced query response time to around 10 to 15 seconds, compared to 40 to 60 seconds without RAG.

What needs improvement?

From a cost perspective, I believe Pinecone is a bit expensive compared to other solutions: FAISS and Milvus are free and open source, while Weaviate is more cost-effective at scale. I would therefore ask for improvement in Pinecone's pricing structure.

Furthermore, for GPU-accelerated experiments that require control over indexing strategies, I would prioritize FAISS for its cost-free prototyping, extreme customization, and high-performance local computation, as Pinecone lacks the custom GPU support and fine-tuned algorithms that FAISS offers.

For how long have I used the solution?

I have been using Pinecone for around two years.

What do I think about the stability of the solution?

Pinecone is stable, excelling in managed production scaling.

What do I think about the scalability of the solution?

Pinecone's scalability allows it to handle billions of vectors with auto-sharding, a capability other databases do not provide, and I have experienced no issues with scalability.

How are customer service and support?

Customer support for Pinecone is tied to billing plans, generally starting with standard tier access through console tickets, although I feel free support is lacking.

Which solution did I use previously and why did I switch?

Before adopting Pinecone, we used a Power BI dashboard to identify brand RCA, but it involved many manual friction points in navigating boards and did not provide clear insights. Pinecone's multi-agent architecture has cut the analysis time from around one week or ten days to just one day.

I evaluated ChromaDB before implementing Pinecone.

How was the initial setup?

Pinecone is deployed in my organization on a private cloud.

What about the implementation team?

We utilize enterprise licensing for Pinecone.

What was our ROI?

We have seen a return on investment: we have reduced the work of 10 FTEs, and the Salesforce analytics team can now self-serve the data they formerly depended on other business analysts to pull, effectively consolidating the work into one person with this integration.

What's my experience with pricing, setup cost, and licensing?

We utilize enterprise licensing for Pinecone, and while I cannot specify the exact costs, it should be approximately $100 to $150 per month.

Which other solutions did I evaluate?

For GPU-accelerated experiments that require control over indexing strategies, I would prioritize FAISS for its cost-free prototyping, extreme customization, and high-performance local computation, as Pinecone lacks the custom GPU support and fine-tuned algorithms that FAISS offers.

What other advice do I have?

I advise those looking to use Pinecone to consider it for building a serverless, scalable solution. It achieves millisecond searches across billions of vectors using optimized indexing such as HNSW, and its operational simplicity, being fully managed and serverless, means it can be upgraded without infrastructure operations, unlike FAISS or ChromaDB.
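As a point of reference for what indexes such as HNSW optimize, here is a minimal exact (brute-force) top-k search in Python. This is an illustrative baseline with made-up vectors, not Pinecone's implementation: it scans every vector in O(n) per query, whereas an ANN index such as HNSW returns approximately the same ids in sub-linear time:

```python
import math

def top_k(query, vectors, k=2):
    """Exact nearest-neighbor search by cosine similarity over a dict of id -> vector."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

    # Score every stored vector against the query, then keep the k best ids.
    scored = sorted(vectors.items(), key=lambda kv: cos(query, kv[1]), reverse=True)
    return [vid for vid, _ in scored[:k]]

corpus = {
    "doc-a": [1.0, 0.0],
    "doc-b": [0.9, 0.1],
    "doc-c": [0.0, 1.0],
}
print(top_k([1.0, 0.0], corpus))  # -> ['doc-a', 'doc-b']
```

HNSW trades a small amount of recall for that speedup by navigating a layered proximity graph instead of scanning the full corpus.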

Overall, I feel Pinecone excels in operational simplicity and scalability, making it a flexible solution ideal for real-time RAG or agentic systems. I would rate this product a 7 out of 10.


    Pavan Javed

Vector chatbots have delivered fast, accurate replies but pricing still needs major improvement

  • March 28, 2026
  • Review from a verified AWS customer

What is our primary use case?

My main use case for Pinecone is making chatbots for custom solutions, and I use it as a primary vector database for my AI-powered chatbots.

Pinecone fits into my chatbot solutions by storing customer-related knowledge bases completely in vectors.

I have a few additional insights about my main use case and how Pinecone helps my chatbot solutions. It is a low-latency database that meets industry standards, although Pinecone is a bit expensive.

What is most valuable?

I find Pinecone's low latency and adherence to industry standards valuable. I also appreciate its simplicity: we can install it from the terminal and start coding right away, and I can ingest my files with curl directly from my terminal into Pinecone.

I find Pinecone very good at scalability. I have handled over 100 gigabytes of data previously for different customers of mine.

Pinecone has positively impacted my organization. Compared to any other vector databases, it is a little ahead due to its latency, scalability, and robust architecture.

What needs improvement?

I have not seen a specific outcome or metric of reduced costs since I started using Pinecone because it is very expensive compared to any other vector databases.

I think Pinecone can be improved by potentially reducing some costs.

Other than cost, there are no further improvements I would suggest for Pinecone.

For how long have I used the solution?

I have been working in my current field for about four years.

What do I think about the stability of the solution?

Pinecone is stable.

What do I think about the scalability of the solution?

Regarding scalability, I find Pinecone very good at it. I have handled over 100 gigabytes of data previously for different customers of mine.

Pinecone's scalability is fine, and I would rate it an eight out of ten.

Which solution did I use previously and why did I switch?

Previously, I tried Qdrant just to test it out and also tried Weaviate. I felt Pinecone performed better, but I had to switch to Qdrant because of Pinecone's expensive pricing.

What was our ROI?

I have seen a return on investment. The efficiency of my bot has increased, and I might have spent about $50 a month, but the revenue I got was about 50 times greater than that.

What's my experience with pricing, setup cost, and licensing?

My experience with the pricing, setup cost, and licensing of Pinecone is that it is a gray area. I would like them to work on the pricing.

Which other solutions did I evaluate?

Before choosing Pinecone, I did evaluate options such as Qdrant and Weaviate.

What other advice do I have?

My advice to others looking into using Pinecone is to test it thoroughly in local environments and then push everything into Pinecone for production because Pinecone is a bit pricey.

Which deployment model are you using for this solution?

Private Cloud

If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?

Amazon Web Services (AWS)


    Tushar Prasad

Chatbots have transformed document search and now need lower costs and more flexible deployment

  • March 24, 2026
  • Review from a verified AWS customer

What is our primary use case?

I have been using Pinecone for two years, starting with agents and RAG models. My main use case for Pinecone is to build a RAG model to create chatbots for enterprise.

We created a chatbot and used Pinecone to store the embeddings generated for that RAG model. This chatbot helps people understand more about their documentation: users can ask queries, and it retrieves the nearest vector embedding, passes it to the LLM, and returns the closest available context.

Primarily, I am using Pinecone for chatbots only. However, there are some additional use cases. Pinecone helps with RAG, semantic search, and a combination of hybrid search with hybrid vector search plus semantic search. Some of my teammates are also using it for creating recommendation systems for our customers.
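The retrieval flow described above can be sketched with a tiny in-memory stand-in. This is illustrative only, with made-up embedding vectors and chunk texts; the real Pinecone client exposes a similar upsert/query shape but runs against a managed service:

```python
import math

class TinyIndex:
    """In-memory illustration of a vector index's upsert/query flow (not the Pinecone SDK)."""

    def __init__(self):
        self.vectors = {}   # id -> embedding vector
        self.metadata = {}  # id -> metadata (e.g. the source chunk text)

    def upsert(self, items):
        # Insert-or-update each (id, vector, metadata) triple.
        for vid, vec, meta in items:
            self.vectors[vid] = vec
            self.metadata[vid] = meta

    def query(self, vector, top_k=1):
        def cos(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))
        ranked = sorted(self.vectors, key=lambda vid: cos(vector, self.vectors[vid]), reverse=True)
        return [{"id": vid, "metadata": self.metadata[vid]} for vid in ranked[:top_k]]

index = TinyIndex()
index.upsert([
    ("chunk-1", [0.9, 0.1], {"text": "How to reset your password"}),
    ("chunk-2", [0.1, 0.9], {"text": "Billing and invoices"}),
])
# Embed the user's question (here a made-up vector), fetch the nearest chunk,
# then pass its text to the LLM as grounding context.
matches = index.query([0.8, 0.2], top_k=1)
print(matches[0]["metadata"]["text"])  # -> How to reset your password
```

The retrieved chunk text (plus its citation metadata) is what gets injected into the LLM prompt in the chatbot described above.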

What is most valuable?

Pinecone offers fully managed infrastructure, so there is no need to manage servers, sharding, indexing, or scaling, which reduces DevOps overhead significantly. It has high performance and low latency.

Pinecone's high performance and low latency have made a difference for my team since I am able to drastically reduce the retrieval time. It provides millisecond-level similarity search across billions or millions of vectors and uses optimized approximate nearest neighbor algorithms to provide the results, which really reduces the overall response time.

The developer experience with Pinecone is also good, with very clear, well-maintained documentation and minimal setup required, and it is perfectly built for handling AI use cases.

Pinecone has positively impacted my organization by helping us build those RAG models. The chatbots help because users and specialists previously had to go to the documentation and refer to it manually; with the Pinecone-backed retrieval model, I can ask the chatbot queries and it provides the appropriate context along with citations. This helps organizations transition from keyword-based systems to semantic systems.

What needs improvement?

Pinecone is not open source, and the cost can escalate under the pay-as-you-go pricing: with high volumes of large embeddings, costs rise automatically. Additionally, there is no on-premises deployment available; it is cloud-only, which becomes a problem for highly regulated industries. Because it is purely a vector store, there is no support for joins or structured queries, which is limiting. A system that could automatically translate into structured SQL queries would help increase overall acceptance.

For how long have I used the solution?

I have been using Pinecone for two years, starting when I began working with agents and RAG models.

What do I think about the stability of the solution?

I have not faced any issues with Pinecone; the reliability is there. It withstands enormous data loads and manages them effectively. So far, I have not experienced any downtime. Pinecone is stable.

What do I think about the scalability of the solution?

Pinecone handles scaling as my data grows by providing good response time even though I have enormous amounts of data. It uses horizontal scaling, which helps, and it also does automatic sharding; it splits vector data into shards, and each shard can be independently indexed and queried, helping with parallel query execution. I would rate Pinecone's scalability an eight or nine out of ten.
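The sharding idea described here can be sketched locally. This is an assumption-laden illustration, not Pinecone's internals: each shard answers a top-k query over only its own slice of the data (in production, in parallel), and the per-shard results are merged into a global top-k:

```python
import math

def cos(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def query_shard(shard, query, k):
    # Each shard scans only its own id -> vector slice.
    scored = [(cos(query, vec), vid) for vid, vec in shard.items()]
    return sorted(scored, reverse=True)[:k]

def sharded_query(shards, query, k=2):
    # Fan out the query, then merge the per-shard top-k lists into a global top-k.
    partials = [hit for shard in shards for hit in query_shard(shard, query, k)]
    return [vid for _, vid in sorted(partials, reverse=True)[:k]]

shards = [
    {"a": [1.0, 0.0], "b": [0.0, 1.0]},  # shard 0
    {"c": [0.9, 0.1], "d": [0.2, 0.8]},  # shard 1
]
print(sharded_query(shards, [1.0, 0.0]))  # -> ['a', 'c']
```

Because each shard returns its own top-k, the merge step stays correct no matter how the vectors were partitioned, which is what makes the parallel execution safe.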

How are customer service and support?

Pinecone's customer support is good. I would rate the customer support a nine out of ten.

Which solution did I use previously and why did I switch?

I did not use a different solution before Pinecone; I started with it after reading reviews on Trustpilot and G2, where I understood it is designed to be much easier to use than FAISS and Weaviate.

How was the initial setup?

The integration of Pinecone with my existing tech stack was a very good experience. The developer documents were up to the mark and clearly written, and it exposes clean REST and SDK interfaces. The core operations of creating an index, upserting a vector, or querying a vector are minimal, making it a plug-and-play experience within the LLM ecosystem. It works seamlessly with LangChain, LlamaIndex, and other embedding frameworks.

What was our ROI?

Overall, the time spent going through documentation has dropped drastically. I have achieved a 30 to 40% reduction because I can now ask the chatbot a query and it returns the result with the appropriate source link. Pinecone saves me about two to three hours daily of manual effort going through documentation; information is now fast and at my fingertips.

What's my experience with pricing, setup cost, and licensing?

Pricing was handled by the procurement team, but it follows a usage-based model: I pay for storage, read operations, and write operations. Sometimes usage exceeds expectations, so having a quota or spending limit would help.

Which other solutions did I evaluate?

I evaluated FAISS and Weaviate before choosing Pinecone.

What other advice do I have?

If you are looking for a highly scalable, performance-oriented, highly reliable system, go for Pinecone. It is especially designed for handling AI use cases. I would give Pinecone a rating of seven out of ten.

I feel Pinecone is secure for most enterprise use cases, with strong controls around data isolation, encryption, and access management. It uses HTTPS and TLS encryption to protect data during API calls, and data at rest is also encrypted. It follows a multi-tenant isolation model, managed through indexes and namespaces. The security posture is at the highest level, which is what I need from an enterprise point of view. The documentation is top-notch: high quality, developer-friendly, and production-oriented, especially for use cases like RAG and semantic search. It is designed to get teams from zero to a working system quickly, with clear getting-started guides, explanations of impact, and strong code examples.


    Y

Fine Product, Tough Debugging

  • November 16, 2023
  • Review from a verified AWS customer

The product overall is fine, but the GUI is quite disappointing when I try to debug. I have to write queries in its odd query language, which makes the entire process frustrating.

