
    Redis Cloud: Real-Time Cache, Vector Search & AI Agent Memory

    Sold by: Redis 
    Deployed on AWS
    Free Trial
    Vendor Insights
    AWS Free Tier
    Redis Cloud: sub-millisecond cache, semantic caching, vector search, and AI agent memory - fully managed, AWS-native, 99.999% uptime. $500 free trial included.

    Overview


    Redis Cloud is a fully managed real-time data platform that powers applications at any scale, from a single developer database to hundreds of millions of operations per second globally. Built on Redis 8, it delivers sub-millisecond latency with every read and write happening in memory.

    It combines in-memory data storage, vector search, semantic caching, agent memory, full-text search, JSON, time series, pub-sub messaging, and rate limiting in a single subscription, eliminating the need for multiple tools or data movement.

    For AI workloads, semantic caching reduces LLM costs by handling similar queries before they reach the model. A high performance vector database enables accurate RAG pipelines with hybrid search. Agent Memory Server allows AI agents to retain context across sessions.
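    The semantic-caching idea above can be sketched in a few lines: compare a new query's embedding against embeddings of previously answered queries, and reuse the stored response when similarity clears a threshold. The sketch below is a self-contained illustration; the toy embeddings and the 0.9 threshold are made up, and a real deployment would store the vectors in Redis and use its vector search rather than a Python list.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

class SemanticCache:
    """Return a cached LLM response when a new query's embedding is
    close enough to one answered before (illustrative sketch only)."""
    def __init__(self, threshold=0.9):
        self.threshold = threshold
        self.entries = []  # list of (embedding, response) pairs

    def get(self, embedding):
        best, best_sim = None, 0.0
        for vec, response in self.entries:
            sim = cosine(vec, embedding)
            if sim > best_sim:
                best, best_sim = response, sim
        return best if best_sim >= self.threshold else None

    def put(self, embedding, response):
        self.entries.append((embedding, response))

cache = SemanticCache(threshold=0.9)
cache.put([1.0, 0.0, 0.2], "Paris is the capital of France.")
# A near-identical query embedding hits the cache; an unrelated one misses
# and would fall through to the LLM.
hit = cache.get([0.98, 0.05, 0.21])
miss = cache.get([0.0, 1.0, 0.0])
```

The cost saving comes from the hit path: a similar-enough question never reaches the model at all.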

    Active-Active geo-replication enables simultaneous reads and writes across regions with automatic conflict resolution, backed by a 99.999% uptime SLA. Auto-tiering extends performance to large datasets, while data persistence ensures durability.

    Redis Cloud meets enterprise-grade security standards, including SOC 2 Type 2, ISO 27001, PCI DSS, GDPR, and HIPAA, with encryption, role based access control, and private endpoints.

    Use cases include session stores, real-time personalisation, fraud detection, feature stores, RAG pipelines, IoT, and agent memory, all from one platform.

    A 14-day free trial includes $500 of full-feature usage. Canceling the AWS Marketplace subscription will delete resources, so ensure a backup payment method is added to preserve data.

    Highlights

    • Redis Cloud is a unified real-time data platform for caching, vector search, and multi-model data, powering everything from high-performance apps to AI workloads. Build faster, respond instantly, and scale without re-architecting. Start with a 14-day free trial including $500 in credits, then grow with your workloads.
    • 99.999% uptime SLA with Active-Active geo-replication - read and write simultaneously from any region with sub-millisecond latency and automatic failover. Dedicated Pro infrastructure with no throughput caps for production workloads.
    • Session stores, leaderboards, fraud detection, real-time personalisation, ML feature stores, and IoT time series, all without switching platforms. Multi-cloud across AWS, GCP, and Azure with no vendor lock-in.

    Details

    Sold by: Redis
    Delivery method: Software as a Service (SaaS), deployed on AWS

    Features and programs

    Trust Center

    Access real-time vendor security and compliance information through the vendor's Trust Center, powered by Drata. Review certifications and security standards before purchase.

    Buyer guide

    Gain valuable insights from real users who purchased this product, powered by PeerSpot.

    Financing for AWS Marketplace purchases

    AWS Marketplace now accepts line of credit payments through the PNC Vendor Finance program. This program is available to select AWS customers in the US, excluding NV, NC, ND, TN, & VT.

    Vendor Insights

    Skip the manual risk assessment. Get verified and regularly updated security info on this product with Vendor Insights.
    Security credentials achieved: 3

    Pricing

    Free trial

    Try this product free according to the free trial terms set by the vendor.

    Redis Cloud: Real-Time Cache, Vector Search & AI Agent Memory

    Pricing is based on actual usage, with charges varying according to how much you consume. Subscriptions have no end date and may be canceled at any time. Additional AWS infrastructure costs may apply; use the AWS Pricing Calculator to estimate your infrastructure costs.

    Usage costs (2)

    Dimension                     Cost/unit
    Redis Cloud Usage             $0.01
    Redis Cloud Data Transfer     $0.01

    Vendor refund policy

    Please contact the seller's support team for refund details.

    Custom pricing options

    Request a private offer to receive a custom quote.

    How can we make this page better?

    We'd like to hear your feedback and ideas on how to improve this page.

    Legal

    Vendor terms and conditions

    Upon subscribing to this product, you must acknowledge and agree to the terms and conditions outlined in the vendor's End User License Agreement (EULA).

    Content disclaimer

    Vendors are responsible for their product descriptions and other product content. AWS does not warrant that vendors' product descriptions or other product content are accurate, complete, reliable, current, or error-free.

    Usage information


    Delivery details

    Software as a Service (SaaS)

    SaaS delivers cloud-based software applications directly to customers over the internet. You can access these applications through a subscription model. You will pay recurring monthly usage fees through your AWS bill, while AWS handles deployment and infrastructure management, ensuring scalability, reliability, and seamless integration with other AWS services.

    Resources

    Support

    Vendor support

    For help setting up your Redis® account through the AWS Marketplace, or for questions on contract terms and pricing, please contact aws@redis.com. For additional training, please check out Redis® University.

    AWS infrastructure support

    AWS Support is a one-on-one, fast-response support channel that is staffed 24x7x365 with experienced and technical support engineers. The service helps customers of all sizes and technical abilities to successfully utilize the products and features provided by Amazon Web Services.

    Product comparison

    Updated weekly

    Accolades

    Top 25 in Databases & Analytics Platforms, Databases, Generative AI
    Top 10 in Analytic Platforms, Databases & Analytics Platforms, Databases

    Customer reviews

    Sentiment is AI-generated from actual customer reviews on AWS and G2.

    Reviews: 0 reviews
    Functionality: Insufficient data
    Ease of use: Insufficient data
    Customer service: Insufficient data
    Cost effectiveness: Insufficient data

    Overview

    AI generated from product descriptions
    In-Memory Data Storage with Sub-Millisecond Latency
    Sub-millisecond latency for all read and write operations with in-memory data storage built on Redis 8
    Vector Search and Semantic Caching
    High-performance vector database enabling accurate RAG pipelines with hybrid search capabilities and semantic caching to reduce LLM costs
    Active-Active Geo-Replication
    Simultaneous read and write access across multiple regions with automatic conflict resolution and 99.999% uptime SLA
    Multi-Model Data Support
    Unified platform combining in-memory data storage, vector search, full-text search, JSON, time series, pub-sub messaging, rate limiting, and agent memory in a single subscription
    Enterprise Security Compliance
    SOC 2 Type 2, ISO 27001, PCI DSS, GDPR, and HIPAA compliance with encryption, role-based access control, and private endpoints
    In-Memory Data Structure Store
    Supports sub-millisecond latencies with in-memory storage of various data types including strings, hashes, lists, sets, and sorted sets for real-time data processing.
    Persistence and Replication
    Configurable persistence models with replication capabilities to maintain data durability while benefiting from in-memory speed and enabling data redundancy.
    High Availability and Failover
    Redis Sentinel provides automatic failover and monitoring to ensure uninterrupted service and high availability for critical applications.
    Horizontal Scalability
    Redis Cluster implementation enables horizontal scaling to handle high volumes of requests and support growing data demands.
    Security Features
    Built-in access control lists and encryption capabilities to safeguard data and comply with industry security best practices.

    Security credentials

    Validated by AWS Marketplace: FedRAMP, GDPR, HIPAA, ISO/IEC 27001, PCI DSS, SOC 2 Type 2

    Contract

    Standard contract: No

    Customer reviews

    Ratings and reviews

    4.5 out of 5 stars, 69 ratings

    5 star: 65%
    4 star: 35%
    3 star: 0%
    2 star: 0%
    1 star: 0%

    11 AWS reviews | 58 external reviews
    External reviews are from G2 and PeerSpot.
    Rituraj NSIT

    Caching has improved response times and reduces database load for high-traffic applications

    Reviewed on Apr 08, 2026
    Review from a verified AWS customer

    What is our primary use case?

    My main use case for Redis is caching to improve application performance and reduce database load.

    One specific example from my backend services is using Redis to cache frequently accessed data like product details. Instead of querying the database every time, the application first checks Redis. If the data is present, it returns instantly, which significantly reduces the database load and improves response time.
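    The cache-aside flow described here can be sketched as follows. The `FakeRedis` class is a tiny in-process stand-in (so the example runs without a server), and the key scheme and 5-minute TTL are illustrative; real code would make the same `get`/`setex` calls through a client such as redis-py.

```python
import time

class FakeRedis:
    """Minimal in-process stand-in for a Redis client (illustration only)."""
    def __init__(self):
        self.store = {}  # key -> (value, expires_at)

    def get(self, key):
        item = self.store.get(key)
        if item is None:
            return None
        value, expires_at = item
        if time.time() >= expires_at:   # lazily drop expired entries
            del self.store[key]
            return None
        return value

    def setex(self, key, ttl_seconds, value):
        self.store[key] = (value, time.time() + ttl_seconds)

DB_CALLS = 0

def fetch_product_from_db(product_id):
    global DB_CALLS
    DB_CALLS += 1  # stands in for an expensive database query
    return {"id": product_id, "name": "widget"}

def get_product(cache, product_id):
    key = f"product:{product_id}"
    cached = cache.get(key)
    if cached is not None:   # cache hit: skip the database entirely
        return cached
    product = fetch_product_from_db(product_id)
    cache.setex(key, 300, product)   # cache for 5 minutes
    return product

cache = FakeRedis()
get_product(cache, 42)   # first request misses and queries the database
get_product(cache, 42)   # repeat request is served from the cache
```

After the two calls only one database query has run; every further read within the TTL is answered from memory.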

    Apart from cache, I have also used Redis for session storage and rate limiting. It helps in managing user sessions efficiently and controlling traffic spikes, which improves overall system reliability.
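    The rate-limiting use case mentioned here is commonly built on Redis's atomic INCR plus EXPIRE: count requests per user in a fixed time window and reject once the count passes a limit. The sketch below mimics those two commands with an in-process stand-in so it is runnable as-is; the key scheme, limit, and window are illustrative.

```python
import time

class FakeRedis:
    """Stand-in mimicking Redis INCR/EXPIRE semantics (illustration only)."""
    def __init__(self):
        self.counters = {}  # key -> [count, expires_at]

    def incr(self, key):
        now = time.time()
        entry = self.counters.get(key)
        if entry is None or (entry[1] is not None and now >= entry[1]):
            entry = [0, None]          # fresh window
            self.counters[key] = entry
        entry[0] += 1
        return entry[0]

    def expire(self, key, ttl_seconds):
        if key in self.counters:
            self.counters[key][1] = time.time() + ttl_seconds

def allow_request(r, user_id, limit=5, window_seconds=60):
    """Fixed-window limiter: one atomic INCR per request, per user."""
    key = f"ratelimit:{user_id}"
    count = r.incr(key)
    if count == 1:
        r.expire(key, window_seconds)  # start the window on first request
    return count <= limit

r = FakeRedis()
# Seven requests from the same user inside one window: the first five
# pass, the rest are rejected until the window key expires.
results = [allow_request(r, "alice", limit=5) for _ in range(7)]
```

Because INCR is atomic in real Redis, concurrent requests cannot double-count, which is what makes this safe across multiple application instances.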

    What is most valuable?

    Redis stands out for its extremely fast in-memory performance, support for rich data structures such as strings, hashes, and lists, and features such as TTL for automatic expiration. It is also very useful for caching, session management, and rate limiting. I rely mostly on the fast in-memory performance combined with caching, which helps reduce database load and improve response time for frequently accessed data.

    Redis has played a key role in improving system scalability and performance. By offloading frequent reads from the database and enabling fast in-memory cache access, it reduced latency, improved throughput, and helped maintain stability during peak loads.

    What needs improvement?

    Redis is very reliable, but it could be improved in areas such as monitoring, debugging, and visibility into memory usage. Better built-in tools for observability would help teams manage it more effectively at scale. Managing memory efficiently and troubleshooting issues can sometimes require additional tooling, so these areas could also be improved.

    One practical challenge I experienced is managing memory efficiently. Since Redis is in-memory, we need to carefully configure eviction policies and monitor usage. Debugging cache-related issues such as stale data or cache invalidation can sometimes be tricky. Additionally, tuning memory usage and eviction policies needs to be planned very carefully.

    For how long have I used the solution?

    I have been using Redis for the last two years.

    What do I think about the stability of the solution?

    Redis is quite stable.

    What do I think about the scalability of the solution?

    Redis is very scalable. It supports both vertical and horizontal scaling, and with features such as clustering and replication, it can handle high traffic and large datasets very effectively.

    How are customer service and support?

    The customer support I have experienced has been good overall. Since Redis is quite stable and well-documented, we have not needed much support, but when required, the response has been helpful.

    Which solution did I use previously and why did I switch?

    Before choosing Redis, we mainly relied on database-level caching or direct queries. As the application scaled, it started impacting performance, so we switched to Redis for its speed and better caching capabilities.

    Before Redis, we relied on the normal database. Before we committed to Redis, we also looked at a few alternatives such as Memcached. Redis stood out because of its richer data structures and additional capabilities such as persistence and pub/sub.

    What was our ROI?

    We have seen a strong ROI after implementing Redis. We reduced the database read load by around 30 to 40 percent and improved API response time by 20 to 30 percent, specifically for frequently accessed endpoints.

    What's my experience with pricing, setup cost, and licensing?

    The pricing is reasonable for the performance provided. Since we use it as a managed service, there is no licensing complexity, and setup costs were minimal. Most of the cost depends on the use cases and scaling, which was beneficial for us.

    What other advice do I have?

    Redis is very reliable and easy to integrate. Its simplicity combined with the performance makes it a great choice for backend developers.

    My advice would be to first clearly define your use cases, specifically for caching or real-time scenarios, and also pay attention to memory management. Choose the right eviction policies and implement proper monitoring from the beginning. Plan for memory optimization, set appropriate TTLs, and implement strong monitoring and alerting for stability at any scale.

    Redis is a powerful and reliable tool for improving application performance. Its speed and flexibility make it a great choice for modern backend systems. It significantly improves performance and scalability with proper planning. It works very effectively for high-traffic applications. I would rate this product an 8 out of 10.

    Which deployment model are you using for this solution?

    Public Cloud

    If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?

    Varuns Ug

    Caching has accelerated complex workflows and delivers low latency for high-traffic microservices

    Reviewed on Apr 03, 2026
    Review from a verified AWS customer

    What is our primary use case?

    I have used Redis for around four years. I have completed several projects using Redis. At Paytm, I used it for caching and performance optimization, and then I used it at MakeMyTrip for a multi-layer caching architecture.

    At MakeMyTrip, I am using Redis for a multi-layer caching architecture. In one of my recent projects, I used Redis as a distributed L2 cache for storing frequently accessed data and reducing downstream service calls, which significantly improves latency and system throughput. In a hotel cancellation policy system, I aggregate data from several microservices including inventory, partner system, and internal policy service. These calls are expensive and add latency. I cache the final computed policy response in Redis with a TTL of around five minutes.

    For the five-minute TTL on the cache, the decision was based on balancing data freshness and performance. The cancellation policy does not change very frequently, but when it does, the change must be reflected quickly. I analyzed the update frequency versus the request volume, and five minutes provided a good trade-off: most reads can be served from cache while keeping the data sufficiently fresh. I complemented the TTL with event-driven invalidation for critical updates, so when the policy changes, I do not have to wait for the TTL to expire.
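    The TTL-plus-event-invalidation combination can be sketched as follows. The `FakeRedis` stand-in, key names, and policy text are illustrative; a real implementation would issue the same SETEX and DEL commands against Redis when the policy-change event arrives.

```python
import time

class FakeRedis:
    """In-process stand-in for the illustration; real code would use a
    Redis client issuing SETEX / GET / DEL."""
    def __init__(self):
        self.store = {}  # key -> (value, expires_at)

    def setex(self, key, ttl_seconds, value):
        self.store[key] = (value, time.time() + ttl_seconds)

    def get(self, key):
        item = self.store.get(key)
        if item and time.time() < item[1]:
            return item[0]
        self.store.pop(key, None)
        return None

    def delete(self, key):
        self.store.pop(key, None)

cache = FakeRedis()

def cache_policy(hotel_id, policy):
    # Normal path: cache the computed policy with a 5-minute TTL,
    # so most reads never touch the downstream services.
    cache.setex(f"policy:{hotel_id}", 300, policy)

def on_policy_changed(hotel_id):
    # Event-driven invalidation: drop the key immediately instead of
    # waiting up to 5 minutes for the TTL to expire.
    cache.delete(f"policy:{hotel_id}")

cache_policy(7, "free cancellation within 24h")
on_policy_changed(7)
stale = cache.get("policy:7")   # None: the next read recomputes fresh data
```

The TTL bounds staleness in the common case, while the event handler removes it entirely for the updates that matter.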

    Apart from caching, I have implemented several other use cases of Redis at MakeMyTrip. One was rate limiting, where I use Redis to control traffic at a per-user or per-partner level to protect downstream services. I leverage Redis's fast atomic operations to maintain counters and enforce limits without adding latency. I also use Redis for temporary state management, especially in scenarios where I need to store short-lived intermediate data between multi-step flows. As an in-memory solution, Redis is very fast. Another aspect I focus on is cache design and observability, ensuring proper key structuring, monitoring cache hit/miss ratios, and tuning TTLs based on traffic patterns. This helps me continuously optimize performance and avoid issues such as stale data or cache stampedes.

    What is most valuable?

    A few features of Redis that I use on a day-to-day basis and feel are among the best are extremely low latency and high throughput. Since Redis is in-memory, it makes it ideal for cases such as caching and rate limiting where response time is critical. TTL expiry support is very useful in Redis as it allows me to automatically evict stale data without manual cleanup, which is something I use heavily in my caching strategy. Another point I can mention is that the rich data structures such as strings, hashes, and even sorted sets are very powerful. I have used strings for caching responses and counters, whereas I have used hashes for storing structured objects. One more feature I can tell you about is atomic operations. Redis guarantees atomicity for operations such as incrementing a counter, which is very useful for rate limiting and avoiding race conditions in distributed systems. Finally, I want to emphasize that Redis is easy to scale and integrate, whether through clustering or using a distributed cache across microservices.

    Redis has impacted my organization positively by providing default support that is very useful. For metrics, in one of my core systems, introducing Redis as a distributed cache helped me achieve around an 80% cache hit rate, which reduced repeated downstream services. Real API latency also improved from around two seconds to approximately 450 milliseconds for P99. It also helped reduce the load on dependent services and databases, which improved overall system reliability.

    What needs improvement?

    There are some points where I feel Redis can be improved. One issue is cache invalidation. Keeping cache data consistent with the source of truth can be tricky, especially in distributed systems. I address this using a combination of TTL-based expiry and event-driven invalidation, but it still requires careful design. Another point I want to add is memory management. Since Redis is in-memory, storing large and improperly structured data can quickly increase memory usage and costs. I had to optimize key design, data size, and eviction policies such as LRU to manage it effectively.

    For how long have I used the solution?

    I have been working in my current field for around four and a half years.

    What do I think about the stability of the solution?

    In my experience, Redis is highly stable.

    What do I think about the scalability of the solution?

    Redis scalability in my environment is quite good. It is highly scalable. I scale Redis horizontally using clustering and sharding, where data is distributed across multiple nodes to handle higher traffic and larger data sets. This helps avoid bottlenecks and ensures consistent performance even as load increases. I use replica nodes to handle read traffic and improve availability. For high throughput scenarios, this allows me to offload reads from the primary node and maintain low latency.

    How are customer service and support?

    Regarding customer support, I have not directly engaged with Redis customer support very often, mainly because I use it as a managed service and most operational issues are handled internally by my infrastructure team. From an application perspective, Redis has proven to be quite stable and predictable. Most issues I encounter, such as cache misses or memory pressure, I handle through monitoring, tuning, and design improvements. The documentation and community support for Redis are very strong, making troubleshooting quicker. For deeper infrastructure-level issues, my platform team typically coordinates with cloud provider support.

    Which solution did I use previously and why did I switch?

    Before Redis, I primarily relied on direct database queries and some in-memory caching solutions such as Guava. The main issue was that this approach increased latency and added higher loads on downstream services and databases, especially for frequently accessed or aggregated data. In some cases, repeated calls to multiple microservices made APIs slow and less reliable during peak traffic. Switching to Redis solved these issues effectively.

    What was our ROI?

    The return on investment with Redis is clearly evident. For example, from a system perspective, Redis helped me achieve around an 80% cache hit rate, which reduces repeated downstream calls, as I mentioned earlier. It improved API latency from two seconds to 450 milliseconds for P99. From a productivity standpoint, it significantly reduced manual troubleshooting and performance firefighting. Many latency and load issues were absorbed by the caching layer, and in some workflows, automation and caching together reduced manual intervention by about 60 to 80%. This allowed my team to focus on building features instead of handling operational issues.

    What's my experience with pricing, setup cost, and licensing?

    I have not been directly involved in the pricing aspect, but I have seen that the costs are primarily driven by memory consumption and cluster size, since Redis operates in-memory. Because of that, I am quite careful about optimizing data size and choosing appropriate TTLs to avoid unnecessary cache bloat. I was not directly involved in pricing decisions, but I did contribute to cost efficiency through better cache design and memory optimization.

    Which other solutions did I evaluate?

    I had a few options to consider before choosing Redis, but one option was to rely more on database-level optimizations such as indexing or query tuning, which did not solve the problems related to repeated reads and high latency. In-memory caches such as Guava worked well locally but do not scale across multiple instances since they are not sharded. As for distributed caching, I also considered Memcached. However, Redis stood out because of its richer data structures, built-in TTL support, atomic operations, and better flexibility for use cases such as rate limiting and structured caching.

    What other advice do I have?

    My advice for others looking into using Redis is to design caching carefully. Focus on good key data structures, appropriate TTLs, and a clear invalidation strategy because cache consistency is often the biggest challenge I face in Redis. Be mindful of memory use since Redis is in-memory, and optimize data size and eviction policies accordingly.

    I have shared most of my experience with Redis previously. Overall, I want to say that Redis truly adds value, especially for low-latency and high-throughput use cases. Redis is extremely powerful, but to realize its full potential, it requires careful design around data and traffic patterns. I would rate this solution an 8 out of 10.

    Which deployment model are you using for this solution?

    Hybrid Cloud

    If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?

    Ravi Raushan Kumar

    Caching and session design has improved performance and now supports high-traffic workloads

    Reviewed on Mar 27, 2026
    Review from a verified AWS customer

    What is our primary use case?

    My main use case for Redis is caching frequently accessed data to improve performance and reduce database load. For example, I cache API responses and user-related data so that repeated requests can be served quickly without hitting the database every time. I use TTL to automatically expire stale data and ensure cache freshness. In some cases, I also use Redis for session management and handling short-lived data efficiently.

    I have used Redis for session management in a back-end system, where the main idea was to store user session data in Redis instead of keeping it in memory on a single server, which helps me scale across multiple instances. When a user logs in, we generate a session ID or token and store session-related data like user ID and metadata in Redis, and this session is associated with a TTL, so it automatically expires after a set period or after a period of inactivity. On each request, the session ID is validated by fetching data from Redis, which is very fast due to its in-memory nature, ensuring low latency and allowing us to handle high traffic efficiently. This approach helps us achieve horizontal scalability and avoids session-stickiness issues. Additionally, we ensure security by expiring inactive sessions and periodically refreshing the TTL for active users.

    Apart from caching and session management, I worked on interesting challenges using Redis, particularly around cache consistency and handling stale data. Initially, we faced issues where cached data would become outdated after database updates; to solve this, we implemented a cache-aside strategy where we explicitly invalidated or updated the cache whenever the underlying data changed. Another scenario was handling cache misses during high traffic, to avoid multiple requests hitting the database simultaneously; there we introduced techniques such as tuned TTLs and, in some cases, locking to ensure only one request rebuilds the cache. We also tuned eviction policies and memory usage to ensure Redis remains performant under load. These experiences helped me understand how to use Redis not just as a cache, but as a critical component in system performance and scalability. For maintaining a high-traffic system, we also explored using Redis for rate limiting and short-lived counters, which further reduced the load on our core systems.
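    The "only one request rebuilds the cache" technique is typically implemented with Redis's `SET key value NX`, which succeeds only for the first caller and so acts as a lightweight lock against cache stampedes. A minimal sketch, using an in-process stand-in for the NX semantics (class and key names are illustrative):

```python
class FakeRedis:
    """Stand-in mimicking `SET key value NX` (illustration only)."""
    def __init__(self):
        self.store = {}

    def set_nx(self, key, value):
        # Like SET ... NX: returns True only if the key did not exist.
        if key in self.store:
            return False
        self.store[key] = value
        return True

    def delete(self, key):
        self.store.pop(key, None)

r = FakeRedis()

# Two requests miss the cache at the same moment. Only the one that
# wins the lock rebuilds the expensive entry; the other backs off and
# serves stale data or retries the cache shortly after.
winner = r.set_nx("lock:product:42", "1")
loser = r.set_nx("lock:product:42", "1")

# Once the rebuild finishes, the winner releases the lock so future
# misses can rebuild again.
r.delete("lock:product:42")
```

In production this lock would also carry a short expiry (the EX option) so a crashed rebuilder cannot leave the lock held forever.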

    What is most valuable?

    The best features Redis offers are the ones that stand out most based on real-world usage. First is its in-memory performance: Redis is extremely fast, making it ideal for caching and session management where low latency is critical. Second, it supports multiple data structures such as strings, hashes, lists, and sets, which are very powerful; I have used hashes for storing session data and structured objects efficiently. Another key feature is TTL, which allows automatic expiration of keys; this is very useful for managing sessions and keeping the cache fresh, as stale data gets cleaned up without manual intervention. I also find Redis very useful for distributed systems because it acts as a centralized store that multiple services can access consistently. Overall, its simplicity, speed, and flexibility make it a very effective tool for performance and scalability improvement.

    Using data structures such as hashes in Redis made the implementation much cleaner and more efficient. For session management, instead of storing the entire session as a serialized object, we used a Redis hash where each field represents a session attribute such as user ID, login time, and roles. This allowed us to update specific fields without rewriting the whole object, which improved performance and flexibility. Hashes are also memory-efficient compared to storing multiple keys, helping us optimize memory usage when handling a large number of sessions. A specific scenario where TTL helped was session expiration; instead of building a separate cleanup process to remove inactive sessions, we simply set a TTL on each session key, allowing Redis to automatically remove expired sessions. This reduces operational overhead and avoids stale session buildup. Without TTL, we would have needed a background scheduler or a cron job to clean up expired sessions, which adds complexity and potential failure points. Redis handled it natively and very efficiently.
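    The hash-per-session layout with a TTL can be sketched as follows. The stand-in class mimics the HSET/HGET/EXPIRE commands so the example runs without a server; the session fields and 30-minute TTL are illustrative, and real code would make the same calls through a Redis client.

```python
import time

class FakeRedis:
    """Stand-in mimicking Redis hashes (HSET/HGET) with per-key TTL
    (illustration only)."""
    def __init__(self):
        self.hashes = {}  # key -> {field: value}
        self.expiry = {}  # key -> expires_at

    def _alive(self, key):
        exp = self.expiry.get(key)
        if exp is not None and time.time() >= exp:
            self.hashes.pop(key, None)   # session expired: drop it
            self.expiry.pop(key, None)
            return False
        return key in self.hashes

    def hset(self, key, field, value):
        self.hashes.setdefault(key, {})[field] = value

    def hget(self, key, field):
        return self.hashes[key].get(field) if self._alive(key) else None

    def expire(self, key, ttl_seconds):
        self.expiry[key] = time.time() + ttl_seconds

r = FakeRedis()
key = "session:abc123"
# Each session attribute is a hash field, so one field can be updated
# without rewriting the whole session object.
r.hset(key, "user_id", "42")
r.hset(key, "roles", "admin")
r.expire(key, 1800)                      # session lives 30 minutes
r.hset(key, "last_page", "/checkout")    # update one field in place
user = r.hget(key, "user_id")
```

Refreshing the TTL on each authenticated request turns this into a sliding inactivity timeout, with Redis doing the cleanup natively.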

    Using Redis has had a specific positive impact on our system performance and scalability. The biggest improvement is in response time; by caching frequently accessed data, we reduce the API latency from database level milliseconds to sub-millisecond responses in many cases. It also helps significantly reduce the database load, especially during peak traffic, improving overall system stability and preventing bottlenecks. From a scalability perspective, Redis enables us to handle higher traffic without needing to scale the database proportionally, making the system more cost-efficient.

    What needs improvement?

    Overall, Redis is a powerful and reliable tool, but there are a few areas for improvement. One limitation is that Redis is memory-based, so scaling can become expensive compared to disk-based systems. While it offers persistence options, it is not always ideal for large datasets where cost efficiency is critical. Another area is cache consistency; Redis itself does not enforce consistency with the primary database, so developers need to carefully design cache invalidation strategies. More built-in mechanisms or patterns to simplify this would be helpful.

    Additional areas where Redis could improve include monitoring, security, and ease of use in large-scale ecosystems. From a monitoring perspective, while Redis provides basic metrics, deep visibility into issues such as memory fragmentation, hot keys, or latency spikes often requires external tools; more built-in, user-friendly options would make diagnosing production issues quicker. Regarding security, Redis has improved over time, but historically, it required careful configurations; features such as authentication and encryption exist but are not always enabled by default, posing a risk if not properly set up. A strong, secure by default configuration would be beneficial. In terms of ease of use, while Redis is straightforward for basic use cases, managing clusters and persistence strategies can become complex at scale, so better abstractions or tooling for distributed setups and operations would make it more developer-friendly.

    For how long have I used the solution?

    I have been using Redis for the last three years, and it is a part of my back-end development work where I mainly use it as a caching layer to improve my application's performance and reduce database load.

    What other advice do I have?

    My main advice for those looking into using Redis is to focus on the use case; Redis excels where low latency is critical, such as caching, session management, or real-time features, rather than serving as a primary database for everything. Pay close attention to caching design, especially cache invalidation and TTL strategies; poorly designed caches can lead to stale data or inconsistency. Plan for scalability and failure scenarios early; decide how you will handle Redis downtime. If possible, consider using a managed service such as those from Amazon Web Services to reduce operational overhead and focus more on application logic.

    I find Redis particularly valuable because of how versatile it is. Many people think it is only a key-value pair cache, but its support for atomic operations and different data structures makes it useful for solving various real-world problems. For example, features such as atomic increment operations are extremely useful for building things such as rate limiting or counters without worrying about race conditions. Another underrated aspect is how simple yet powerful TTL and expiration handling are, eliminating the need for complex cleanup logic, which can otherwise introduce bugs or operational overhead. I also think more people should leverage Redis for lightweight distributed coordination, such as using Redis for distributed locks or request deduplication, which can simplify system design when multiple services are involved.
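    The atomic-increment rate limiting mentioned above is commonly built from the Redis INCR + EXPIRE idiom: increment a per-client counter and give it a TTL of one window. The sketch below simulates those semantics in pure Python so it runs without a server; in production, `allow` would issue INCR on the counter key (atomic on the Redis server, so concurrent clients cannot race) and set EXPIRE on the first hit of each window:

```python
import time

class FixedWindowRateLimiter:
    """Fixed-window rate limiter mimicking the Redis INCR + EXPIRE idiom.

    A dict stands in for Redis here. In production:
      - `count += 1`            -> INCR key   (atomic on the server)
      - starting a new window   -> EXPIRE key <window> on the first hit
    """

    def __init__(self, limit, window_seconds):
        self.limit = limit
        self.window = window_seconds
        self.counters = {}  # key -> (count, window_expiry_timestamp)

    def allow(self, key, now=None):
        now = time.time() if now is None else now
        count, expiry = self.counters.get(key, (0, 0.0))
        if now >= expiry:
            # Window elapsed (Redis: key TTL expired): start a fresh window.
            count, expiry = 0, now + self.window
        count += 1  # Redis: INCR key
        self.counters[key] = (count, expiry)
        return count <= self.limit
```

    For example, `FixedWindowRateLimiter(limit=100, window_seconds=60)` allows each client key at most 100 requests per minute, with the counter resetting automatically when the window expires, the same way a Redis TTL would expire the key.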

    Using Redis has definitely helped us improve cost efficiency. One of the main impacts was reducing the load on primary databases: since a large portion of read requests is served from Redis, we did not need to scale the database as aggressively, which saved costs on computing and storage. We also observed fewer database connections and queries, leading to lower CPU and I/O usage, which reduced the need for high-end database instances. For example, during peak traffic, instead of increasing database capacity, Redis absorbed most of the repeated requests, helping us delay or even avoid additional infrastructure provisioning, which directly reduced costs. Of course, Redis itself adds some cost since it requires memory, but the overall savings from reduced database load and improved efficiency outweigh that cost in our case.

    Overall, my experience with Redis has been very positive, and it has played a key role in improving performance, scalability, and system responsiveness in our back-end system. What stands out to me is its simplicity combined with powerful capabilities; it is easy to get started with but also flexible enough to handle more advanced uses such as caching, session management, and real-time processing. The key is to use it thoughtfully, specifically regarding caching design and understanding its potential. When used correctly, it delivers significant value, and it is definitely a tool I would continue to use in future systems. I would rate my overall experience with Redis as a nine out of ten.

    Computer & Network Security

    Redis Cloud Delivers Speed and Reliability with Hassle-Free Managed Scaling

    Reviewed on Mar 12, 2026
    Review provided by G2
    What do you like best about the product?
    What I like most about Redis Cloud is its speed and reliability. It makes caching and real-time data processing extremely fast, and the managed infrastructure removes the hassle of handling scaling, backups, and failover manually.
    What do you dislike about the product?
    The biggest downside of Redis Cloud is the cost as you scale up. The managed service is convenient, but pricing can rise quickly as memory usage increases. I’d also like to see more flexible configuration options, along with stronger monitoring capabilities.
    What problems is the product solving and how is that benefiting you?
    Redis Cloud helps address performance bottlenecks in applications that need fast access to frequently requested data. By caching responses and session data in Redis, we can reduce database calls and improve response times, making the application faster and more scalable overall.
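    The pattern this reviewer describes, serving session data from cache to avoid repeated database calls, is typically a read-through cache with a TTL. A minimal sketch under the same stand-in approach (a dict simulating Redis, with the equivalent SETEX/GET commands noted in comments):

```python
import time

class TTLCache:
    """Read-through session cache with expiry, mimicking Redis GET + SETEX.

    A dict stands in for Redis here. With a real Redis backend, expiry is
    enforced server-side, so no application-side cleanup logic is needed.
    """

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.store = {}  # key -> (value, expires_at)

    def get_or_load(self, key, loader, now=None):
        now = time.time() if now is None else now
        entry = self.store.get(key)
        if entry is not None and now < entry[1]:
            return entry[0]                        # cache hit: no database call
        value = loader(key)                        # cache miss: hit the database
        self.store[key] = (value, now + self.ttl)  # Redis: SETEX key <ttl> value
        return value
```

    Here `loader` is a hypothetical database-fetch callback; only cache misses and expired entries reach it, which is exactly how the cache cuts database calls and improves response times.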
    KarimGarchi

    Performance shines with seamless session caching and minimal configuration

    Reviewed on Jul 10, 2025
    Review provided by PeerSpot

    What is our primary use case?

    Redis is used for a part of a booking engine for travel, specifically for the front end, to get sessions and information about those sessions. If a customer or user is using the site in different parts, we use Redis to get this session information from cache.

    What is most valuable?

    The best features of Redis, from my personal perspective, are the performance, which is very quick, and the simplicity of implementation.

    Since I started using Redis, I feel that the product is saving me some performance tuning time. It's very easy, I have few parameters to tune, and it performs well without a lot of tuning work, compared to Cassandra, where you have to configure the memory and many other settings.

    The integration capability of Redis is excellent.

    Redis is very affordable because it's free.

    What needs improvement?

    The disadvantage of Redis is that it's a little bit hard to manage many clusters or many nodes and to create the clusters. The sync between the nodes is easier to implement with Couchbase, for example, and this is the only problem, the only disadvantage for me.

    For how long have I used the solution?

    I started using Redis this year.

    What do I think about the stability of the solution?

    The stability of Redis rates nine out of ten, with one being not stable and ten being very stable.

    What do I think about the scalability of the solution?

    The scalability of Redis rates eight out of ten, with one being not scalable and ten being very scalable.

    How are customer service and support?

    Technical support rates at three out of ten.

    Which solution did I use previously and why did I switch?

    We started using Redis this year when we switched from Couchbase at the beginning of the year.

    I have decommissioned Couchbase, which was not my database but my customer's database. They decommissioned it this year and chose Redis for the cache data parts, so I'm not using Couchbase anymore.

    What about the implementation team?

    We use community support and we don't have a provider for support, but to be honest, we have not needed support since we implemented it, and I hope it will continue this way.

    What was our ROI?

    I see about 40% savings since using Redis.

    Which other solutions did I evaluate?

    In my projects, we basically use documents, so all the NoSQL databases can be mapped with an API to provide a kind of independence from Redis or any other tool. If tomorrow we want to move from Redis to something better, we remain independent.

    What other advice do I have?

    If Redis has questions or comments related to my review, they can reach me via email to clarify anything.

    I am interested in being a reference for Redis.

    On a scale of 1-10, I rate Redis a 10.
