Caching has improved response times and reduced database load for high-traffic applications
What is our primary use case?
My main use case for Redis is caching to improve application performance and reduce database load.
One specific example from my backend services is using Redis to cache frequently accessed data like product details. Instead of querying the database every time, the application first checks Redis. If data is present, it returns instantly, which significantly reduces the database load and improves response time.
Apart from cache, I have also used Redis for session storage and rate limiting. It helps in managing user sessions efficiently and controlling traffic spikes, which improves overall system reliability.
What is most valuable?
Redis stands out for its extremely fast in-memory performance, support for rich data structures such as strings, hashes, and lists, and features such as TTL for automatic expiration. It is also very useful for caching, session management, and rate limiting. I rely mostly on the fast in-memory performance combined with caching, which helps reduce database load and improve response times for frequently accessed data.
Redis has played a key role in improving system scalability and performance. By offloading frequent reads from the database and enabling fast in-memory cache access, it reduced latency, improved throughput, and helped maintain stability during peak loads.
What needs improvement?
Redis is very reliable, but it could be improved in areas such as monitoring, debugging, and visibility into memory use. Better built-in tools for observability would help teams manage it more effectively at scale. Managing memory efficiently and troubleshooting issues can sometimes require additional tooling, so these areas could also be improved.
One practical challenge I experienced is managing memory efficiently. Since Redis is in-memory, we need to carefully configure eviction policies and monitor usage. Debugging cache-related issues such as stale data or cache invalidation can sometimes be tricky. Additionally, tuning memory usage and eviction policies needs to be planned very carefully.
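For reference, the eviction behavior mentioned here is typically controlled through `maxmemory` and `maxmemory-policy` in `redis.conf`; the values below are illustrative, not the reviewer's configuration:

```
# Cap memory use and evict least-recently-used keys across the keyspace
maxmemory 2gb
maxmemory-policy allkeys-lru
```

The policy can also be changed at runtime with `CONFIG SET maxmemory-policy allkeys-lru`, which is useful when tuning under live traffic.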
For how long have I used the solution?
I have been using Redis for the last two years.
What do I think about the stability of the solution?
Redis is quite stable.
What do I think about the scalability of the solution?
Redis is very scalable. It supports both vertical and horizontal scaling, and with features such as clustering and replication, it can handle high traffic and large datasets very effectively.
How are customer service and support?
The customer support I have experienced has been good overall. Since Redis is quite stable and well-documented, we have not needed much support, but when required, the response has been helpful.
Which solution did I use previously and why did I switch?
Before choosing Redis, we mainly relied on database-level caching or direct queries. As the application scaled, it started impacting performance, so we switched to Redis for its speed and better caching capabilities.
Before Redis, we relied on the database alone. Before committing to Redis, we also looked at a few alternatives such as Memcached. Redis stood out because of its richer data structures and additional capabilities such as persistence and pub/sub.
What was our ROI?
We have seen a strong ROI after implementing Redis. We reduced the database read load by around 30 to 40 percent and improved API response time by 20 to 30 percent, specifically for frequently accessed endpoints.
What's my experience with pricing, setup cost, and licensing?
The pricing is reasonable for the performance provided. Since we use it as a managed service, there is no licensing complexity, and setup costs were minimal. Most of the cost depends on the use cases and scaling, which was beneficial for us.
What other advice do I have?
Redis is very reliable and easy to integrate. Its simplicity combined with the performance makes it a great choice for backend developers.
My advice would be to first clearly define your use cases, specifically for caching or real-time scenarios, and also pay attention to memory management. Choose the right eviction policies and implement proper monitoring from the beginning. Plan for memory optimization, set appropriate TTLs, and implement strong monitoring and alerting for stability at any scale.
Redis is a powerful and reliable tool for improving application performance. Its speed and flexibility make it a great choice for modern backend systems. It significantly improves performance and scalability with proper planning. It works very effectively for high-traffic applications. I would rate this product an 8 out of 10.
Which deployment model are you using for this solution?
Public Cloud
If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?
Caching has accelerated complex workflows and delivers low latency for high-traffic microservices
What is our primary use case?
I have used Redis for around four years. I have completed several projects using Redis. At Paytm, I used it for caching and performance optimization, and then I used it at MakeMyTrip for a multi-layer caching architecture.
At MakeMyTrip, I am using Redis for a multi-layer caching architecture. In one of my recent projects, I used Redis as a distributed L2 cache for storing frequently accessed data and reducing downstream service calls, which significantly improves latency and system throughput. In a hotel cancellation policy system, I aggregate data from several microservices including inventory, partner system, and internal policy service. These calls are expensive and add latency. I cache the final computed policy response in Redis with a TTL of around five minutes.
For the five-minute TTL on the cache, the decision was based on balancing data freshness and performance. The cancellation policy does not change very frequently, but when it does, the change must be reflected quickly. I analyzed the update frequency versus the request volume, and five minutes provided a good trade-off: most reads can be served from cache while keeping the data sufficiently fresh. I complemented the TTL with event-driven invalidation for critical updates, so when a policy changes, I do not have to wait for the TTL to expire.
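The TTL-plus-event-driven-invalidation approach described above can be sketched as follows. A dict stands in for Redis, and `PolicyCache` and its method names are hypothetical; real code would map `put` to a SET with an EX option and `invalidate` to a DEL issued from the policy-change event handler:

```python
import time

class PolicyCache:
    """Sketch of TTL plus event-driven invalidation (illustrative)."""
    def __init__(self, ttl_seconds=300):   # 5-minute TTL as described
        self.ttl = ttl_seconds
        self._store = {}

    def get(self, hotel_id):
        entry = self._store.get(hotel_id)
        if entry and time.monotonic() < entry[1]:
            return entry[0]                # fresh: serve from cache
        return None                        # expired or missing

    def put(self, hotel_id, policy):
        self._store[hotel_id] = (policy, time.monotonic() + self.ttl)

    def invalidate(self, hotel_id):
        # Called from an event handler when the policy changes,
        # so readers never have to wait out the full TTL.
        self._store.pop(hotel_id, None)
```

The event-driven `invalidate` path is what keeps critical updates visible immediately while the TTL handles the common case.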
Apart from caching, I have implemented several other use cases of Redis at MakeMyTrip. One was rate limiting, where I use Redis to control traffic at a per-user or per-partner level to protect downstream services. I leverage Redis's fast atomic operations to maintain counters and enforce limits without adding latency. I also use Redis for temporary state management, especially in scenarios where I need to store short-lived intermediate data between multi-step flows; as an in-memory store, Redis is very fast for this. Another aspect I focus on is cache design and observability: proper key structuring, monitoring cache hit/miss ratios, and tuning TTLs based on traffic patterns. This helps me continuously optimize performance and avoid issues such as stale data or cache stampedes.
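The atomic rate-limiting counters mentioned above are commonly built on INCR with an EXPIRE set on the first hit of each window. Below is a sketch of the fixed-window variant with an in-process dict standing in for Redis; the class, key names, and limits are illustrative, not the reviewer's exact implementation:

```python
import time

class FixedWindowLimiter:
    """Fixed-window rate limiter sketch (illustrative stand-in for
    Redis INCR + EXPIRE; not atomic across processes like Redis is)."""
    def __init__(self, limit, window_seconds):
        self.limit = limit
        self.window = window_seconds
        self._counters = {}  # key -> (window_start, count)

    def allow(self, key, now=None):
        now = time.monotonic() if now is None else now
        start, count = self._counters.get(key, (now, 0))
        if now - start >= self.window:     # window expired: reset counter
            start, count = now, 0
        if count >= self.limit:
            return False                   # over the limit: reject
        self._counters[key] = (start, count + 1)
        return True
```

In Redis the reset happens automatically because the counter key expires, and INCR's atomicity avoids the race conditions a local counter would have across instances.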
What is most valuable?
A few features of Redis that I use on a day-to-day basis and feel are among the best are extremely low latency and high throughput. Since Redis is in-memory, it makes it ideal for cases such as caching and rate limiting where response time is critical. TTL expiry support is very useful in Redis as it allows me to automatically evict stale data without manual cleanup, which is something I use heavily in my caching strategy. Another point I can mention is that the rich data structures such as strings, hashes, and even sorted sets are very powerful. I have used strings for caching responses and counters, whereas I have used hashes for storing structured objects. One more feature I can tell you about is atomic operations. Redis guarantees atomicity for operations such as incrementing a counter, which is very useful for rate limiting and avoiding race conditions in distributed systems. Finally, I want to emphasize that Redis is easy to scale and integrate, whether through clustering or using a distributed cache across microservices.
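As a small illustration of the hash usage described above, a structured object can be stored field by field. The class below mimics HSET and HGETALL with a nested dict; the class name, key, and fields are illustrative assumptions, not the reviewer's code:

```python
class StructuredObjectStore:
    """Dict-backed sketch of storing structured objects as Redis
    hashes (illustrative; real code would call HSET / HGETALL)."""
    def __init__(self):
        self._hashes = {}

    def hset(self, key, mapping):
        # Like HSET with multiple field/value pairs: merges fields
        # into the existing hash instead of replacing it wholesale.
        self._hashes.setdefault(key, {}).update(mapping)

    def hgetall(self, key):
        # Like HGETALL: returns all fields of the hash as a dict.
        return dict(self._hashes.get(key, {}))
```

The advantage over a serialized string value is that individual fields can be read or updated without rewriting the whole object.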
Redis has impacted my organization positively. In one of my core systems, introducing Redis as a distributed cache helped me achieve around an 80% cache hit rate, which reduced repeated downstream service calls. API latency also improved from around two seconds to approximately 450 milliseconds at P99. It also helped reduce the load on dependent services and databases, which improved overall system reliability.
What needs improvement?
There are some points where I feel Redis can be improved. One issue is cache invalidation. Keeping cache data consistent with the source of truth can be tricky, especially in distributed systems. I address this using a combination of TTL-based expiry and event-driven invalidation, but it still requires careful design. Another point I want to add is memory management. Since Redis is in-memory, storing large and improperly structured data can quickly increase memory usage and costs. I had to optimize key design, data size, and eviction policies such as LRU to manage it effectively.
For how long have I used the solution?
I have been working in my current field for around four and a half years.
What do I think about the stability of the solution?
In my experience, Redis is highly stable.
What do I think about the scalability of the solution?
Redis scalability in my environment is quite good. It is highly scalable. I scale Redis horizontally using clustering and sharding, where data is distributed across multiple nodes to handle higher traffic and larger data sets. This helps avoid bottlenecks and ensures consistent performance even as load increases. I use replica nodes to handle read traffic and improve availability. For high throughput scenarios, this allows me to offload reads from the primary node and maintain low latency.
How are customer service and support?
Regarding customer support, I have not directly engaged with Redis customer support very often, mainly because I use it as a managed service and most operational issues are handled internally by my infrastructure team. From an application perspective, Redis has proven to be quite stable and predictable. Most issues I encounter, such as cache misses or memory pressure, I handle through monitoring, tuning, and design improvements. The documentation and community support for Redis are very strong, making troubleshooting quicker. For deeper infrastructure-level issues, my platform team typically coordinates with cloud provider support.
Which solution did I use previously and why did I switch?
Before Redis, I primarily relied on direct database queries and some in-memory caching solutions such as Guava. The main issue was that this approach increased latency and added higher loads on downstream services and databases, especially for frequently accessed or aggregated data. In some cases, repeated calls to multiple microservices made APIs slow and less reliable during peak traffic. Switching to Redis solved these issues effectively.
What was our ROI?
The return on investment with Redis is clearly evident. From a system perspective, Redis helped me achieve around an 80% cache hit rate, which reduced repeated downstream calls, as I mentioned earlier, and improved API latency from two seconds to 450 milliseconds at P99. From a productivity standpoint, it significantly reduced manual troubleshooting and performance firefighting. Many latency and load issues were absorbed by the caching layer, and in some workflows, automation and caching together reduced manual intervention by about 60 to 80 percent. This allowed my team to focus on building features instead of handling operational issues.
What's my experience with pricing, setup cost, and licensing?
I have not been directly involved in the pricing aspect, but I have seen that the costs are primarily driven by memory consumption and cluster size, since Redis operates in-memory. Because of that, I am quite careful about optimizing data size and choosing appropriate TTLs to avoid unnecessary cache bloat. I was not directly involved in pricing decisions, but I did contribute to cost efficiency through better cache design and memory optimization.
Which other solutions did I evaluate?
I had a few options to consider before choosing Redis. One was to rely more on database-level optimizations such as indexing or query tuning, which did not solve the problems of repeated reads and high latency. In-memory caches such as Guava worked well locally but do not scale across multiple instances since they are not distributed. For distributed caching, I also considered Memcached. However, Redis stood out because of its richer data structures, built-in TTL support, atomic operations, and better flexibility for use cases such as rate limiting and structured caching.
What other advice do I have?
My advice for others looking into Redis is to design caching carefully. Focus on good key design and data structures, appropriate TTLs, and a clear invalidation strategy, because cache consistency is often the biggest challenge with Redis. Be mindful of memory use since Redis is in-memory, and optimize data size and eviction policies accordingly.
I have shared most of my experience with Redis previously. Overall, I want to say that Redis truly adds value, especially for low-latency and high-throughput use cases. Redis is extremely powerful, but to realize its full potential, it requires careful design around data and traffic patterns. I would rate this solution an 8 out of 10.
Which deployment model are you using for this solution?
Hybrid Cloud
If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?
Redis Cloud Delivers Speed and Reliability with Hassle-Free Managed Scaling
What do you like best about the product?
What I like most about Redis Cloud is its speed and reliability. It makes caching and real-time data processing extremely fast, and the managed infrastructure removes the hassle of handling scaling, backups, and failover manually.
What do you dislike about the product?
The biggest downside of Redis Cloud is the cost as you scale up. The managed service is convenient, but pricing can rise quickly as memory usage increases. I’d also like to see more flexible configuration options, along with stronger monitoring capabilities.
What problems is the product solving and how is that benefiting you?
Redis Cloud helps address performance bottlenecks in applications that need fast access to frequently requested data. By caching responses and session data in Redis, we can reduce database calls and improve response times, making the application faster and more scalable overall.
Redis key deployment
What do you like best about the product?
Each entry in the Redis cache maintains a TTL for as long as we request.
What do you dislike about the product?
The maintenance windows, which are managed by the Redis team.
What problems is the product solving and how is that benefiting you?
Redis Cloud is solving several critical problems that developers and businesses face when managing high-performance, real-time applications.
Optimize AI projects with reliable data processing while addressing scaling challenges
What is our primary use case?
We use Redis for several purposes, including ranking, counting, saving, sharing, caching, and setting time-to-live notifications. These functionalities are employed across various AI projects and in data processing tools, where Redis helps with the ongoing data pipeline process.
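The ranking and counting use cases mentioned here are typically built on Redis sorted sets via ZINCRBY and ZREVRANGE. The class below imitates that with a plain dict; the class and member names are illustrative, not this team's code:

```python
class Leaderboard:
    """Dict-backed sketch of a sorted-set ranking (illustrative;
    in Redis this maps to ZINCRBY and ZREVRANGE WITHSCORES)."""
    def __init__(self):
        self._scores = {}

    def incr(self, member, amount=1):
        # Like ZINCRBY: add to the member's score, creating it at 0.
        self._scores[member] = self._scores.get(member, 0) + amount

    def top(self, n):
        # Like ZREVRANGE 0 n-1: highest scores first.
        return sorted(self._scores.items(), key=lambda kv: -kv[1])[:n]
```

In Redis the sorted set keeps members ordered by score on every update, so the top-N query is cheap even for large member counts.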
What is most valuable?
Redis has multiple valuable features such as being a free and reliable open-source tool. It functions similarly to a foundational building block in a larger system, enabling native integration and high functionality in core data processes. Despite its limitations, Redis provides valuable performance enhancement through system fine-tuning and multi-thread handling.
What needs improvement?
There are a few areas where Redis could improve. The pub/sub capabilities could be optimized to handle network sessions better, as there are challenges with maintaining sessions between clients and systems. Data persistence and recovery face compatibility issues across major versions, making upgrades possible but downgrades impractical. There is a need for better migration tools to support data movement in a hybrid environment. Concerns also exist about licensing and community engagement due to changes in Redis and its forks.
For how long have I used the solution?
I have been working with Redis for maybe ten years.
What was my experience with deployment of the solution?
We encountered several challenges during the deployment process. Redis required a comprehensive setup process, with attention to hosting parameters, environment preparation, and network rules configuration. It is particularly complex in high-performance scalability contexts, taking us around one week to deploy initially.
What do I think about the stability of the solution?
Redis is fairly stable, although improvements are needed concerning user load and direct answering time, which sometimes results in downtime on the user side.
What do I think about the scalability of the solution?
Redis is somewhat limited in scalability in our environment; I would rate it around four or five out of ten. Data migration and changes to application-side configurations are challenging due to the lack of automatic migration tools in a non-clustered legacy system.
Which solution did I use previously and why did I switch?
We have been using Redis since before I joined the company, so I am unaware of any previous solutions.
How was the initial setup?
The initial setup of Redis was difficult; I would rate it two or three out of ten. A deep understanding of Redis's core and a high level of technical knowledge were required, making the process lengthy and complex.
What about the implementation team?
Our implementation was handled internally by a small team. Typically, deploying Redis requires participation from around two or three people.
What's my experience with pricing, setup cost, and licensing?
Since we use an open-source version of Redis, we do not experience any setup costs or licensing expenses. The solution is integrated and utilized internally without financial investment.
Which other solutions did I evaluate?
We did not evaluate other solutions before selecting Redis, as it was already decided by the time I joined the company.
What other advice do I have?
I rate Redis seven out of ten overall. While it's a powerful open-source tool, it has areas needing improvement in terms of scalability and certain functionalities. Despite this, the tool provides reliability for our needs. I recommend considering these aspects before adopting Redis for large-scale operations, especially if high technical competencies are needed.
Fast performance with scalable and seamless deployment
What is our primary use case?
I use Redis as a cache to store user sessions with login details and also some current status of the devices.
What is most valuable?
The performance of Redis is very fast. Deployment is pretty easy when using it on ElastiCache, and I did not need to worry about scalability on AWS. It's pretty scalable and stable.
What needs improvement?
For the PubSub feature, we had to create our own tools to monitor the events.
For how long have I used the solution?
I have been using Redis for about six years.
What do I think about the stability of the solution?
ElastiCache is pretty stable.
What do I think about the scalability of the solution?
I did not need to worry about it on AWS, so it's pretty scalable.
How are customer service and support?
I have never contacted the Redis support team.
How would you rate customer service and support?
What other advice do I have?
I would probably advise learning how to use command-line tools.
I'd rate the solution eight out of ten.
Which deployment model are you using for this solution?
Public Cloud
If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?
Amazon Web Services (AWS)