External reviews
191 reviews
External reviews are not included in the AWS star rating for the product.
Total control and visibility: efficient optimization of costs, CPU, and memory with Cast AI
What do you like best about the product?
With Cast AI, I can clearly identify the number of CPUs and the amount of memory that my cluster is using, as well as the cost it is generating. Additionally, the tool allows for very efficient and secure resource optimization, providing more control and visibility over the environment's usage.
What do you dislike about the product?
No disadvantages. Since we started using Cast AI in our environments, it has helped us a lot in cost optimization.
What problems is the product solving and how is that benefiting you?
Cast AI has been helping us control and reduce the costs of our environments by offering the best cluster instances in terms of cost-effectiveness and automatically optimizing our workloads.
Smarter Kubernetes Optimization with Real Cost Impact
What do you like best about the product?
What I like best about Cast AI is how effectively it combines cost optimization with operational simplicity. It continuously analyzes our Kubernetes workloads and automatically right-sizes nodes, scales clusters, and leverages spot instances without requiring constant manual tuning from our DevOps team. The visibility into resource utilization and savings is clear and actionable, which makes it easier to justify infrastructure decisions internally. Beyond the cost savings, the real value is the time saved and the confidence that the cluster is always running in an optimized state without daily intervention.
What do you dislike about the product?
One downside is that some of the more advanced configuration and optimization features require a deeper understanding of Kubernetes and cloud infrastructure to fully leverage. While the basics are easy to set up, fine tuning policies and understanding the impact of certain automation decisions can take time. In addition, more granular cost reporting and forecasting capabilities would be helpful for organizations that need detailed financial breakdowns across teams or projects.
What problems is the product solving and how is that benefiting you?
Cast AI is solving the problem of inefficient Kubernetes resource utilization and unpredictable cloud costs. Before using it, we were overprovisioning to avoid performance risks, which resulted in wasted spend and constant manual monitoring. Cast AI automates cluster scaling, right sizing, and spot instance management, which reduces overprovisioning while maintaining reliability. This directly benefits us by lowering infrastructure costs, improving resource efficiency, and freeing our engineering team from repetitive operational tasks so they can focus on higher value initiatives.
Cost-Saving Automation with Robust Support
What do you like best about the product?
I really like the way CAST AI is designed because it aligns well with our company culture. Their support is also very helpful, which I appreciate. I find the AI support integration features beneficial for addressing simple questions and debugging scenarios. Enabling features like vertical autoscaling and scheduled rebalancing is quite user-friendly. I also value the early-access feature that lets us hibernate our development and staging clusters during off hours, which is very helpful for us. And one big feature that helped us a lot was the ability to allocate part of our workload to spot machines.
What do you dislike about the product?
There's definitely a learning curve to the platform, especially understanding the concepts and how it scales costs for you. The billing aspect also takes some time to understand, and some guarantees on billing (for example, if the cost reduction doesn't happen as expected) would reduce customers' initial hesitation to adopt the system. For us personally, another challenge was figuring out how to measure CAST AI's performance once we started using the system while our own system kept growing. There are indicators in CAST AI for this (like the cluster score and the CPU and memory over-provisioning percentages), and tracking them helps us understand whether CAST AI's optimizations remain as good as when we started. Documenting these as indicators of CAST AI's efficiency would help customers a lot.
What problems is the product solving and how is that benefiting you?
We use CAST AI mainly for cost optimization and DevOps. It has successfully brought down our costs and helped with automation to optimize for cost and stability. Features like vertical autoscaling, rebalancing, splitting workloads across on-demand and spot machines, and hibernating clusters during off hours are very helpful.
A very effective tool for reducing costs on Kubernetes and optimizing resources
What do you like best about the product?
Easy installation and quick setup. I noticed a significant reduction in overprovisioning and cloud costs. Resource utilization is better, with fewer workloads lacking resources.
What do you dislike about the product?
The learning module does not yet allow for the definition of business days. Consequently, weekend activity can disproportionately influence resource calculations.
What problems is the product solving and how is that benefiting you?
It allows you to no longer have to perform a manual review of resources, which becomes humanly impossible on a large cluster. Bin packing helps reduce the number of nodes needed and, consequently, the bill.
Cuts Cluster Costs with Super Support and Easy Helm Deployment
What do you like best about the product?
It reduces the cost of clusters, and there is also super good support from the CAST AI team. It's easily deployed to the cluster via Helm charts.
What do you dislike about the product?
Right now, the only issue has been user onboarding. No other comments.
What problems is the product solving and how is that benefiting you?
Right now we're trying to resolve streaming jobs.
Cut k8s Cluster Costs by ~60% and Freed Budget to Scale
What do you like best about the product?
It helped me reduce our k8s cluster cost by approximately 60%. The money saved helped us build and scale up other clusters with better services.
What do you dislike about the product?
The UI could be a bit more intuitive. The AI-based auto-sizing also seems extremely aggressive, which has caused some operational issues for us. In the end, we had to turn it off.
What problems is the product solving and how is that benefiting you?
Cost for my infra
Autoscaler scheduling
Flexible API with Responsive Engineering Support
What do you like best about the product?
What I have been involved with most is the API, so I like the flexibility it provides. We used it to build the cost dashboards that we need for our internal cost monitoring.
Also we had chats with the engineering team for some of our requests and they were very responsive which made collaboration smooth and helped us get our results in time.
What do you dislike about the product?
My primary interaction was with the API, and overall I'm very happy with it; some more examples in the documentation would be nice.
What problems is the product solving and how is that benefiting you?
We use the Cast AI cost-exporting API to allocate costs at the workload level, live. This level of detail is very important to us: it gives us better cost visibility and enables us to take action based on it.
Enhancing Cluster Visibility and Reducing Costs with CAST AI
What do you like best about the product?
I’m genuinely impressed with the way CAST AI presents its user interface. The layout feels clean, intuitive, and thoughtfully designed, which makes it incredibly easy to navigate and understand without needing extensive documentation or onboarding. This intuitive experience allows me to make data‑driven decisions with confidence and quickly follow through with corrective actions whenever necessary.
Since adopting CAST AI, I’ve seen an almost 80% reduction in the manual effort previously required for continuous monitoring. Tasks that once demanded constant attention have now become streamlined and largely automated.
One feature I especially appreciate is the clear visibility into cost analytics. CAST AI distinctly highlights the actual cost versus the optimized effective cost, making it simple to understand the financial impact of its automation. The platform also provides transparent insights into savings achieved through right‑sizing and resource allocation based on real usage patterns. This level of clarity significantly helps me with planning, forecasting, and overall execution.
Additionally, the initial setup process was remarkably quick and hassle‑free, allowing me to start leveraging its capabilities almost immediately.
What do you dislike about the product?
I’ve noticed that during initial pod initialization, CAST AI doesn’t really keep up with the metrics. The details follow.
Key Observations About Pod Initialization Metrics in CAST AI
Initial pod‑startup metrics are not fully captured
During the very first phase of pod initialization, CAST AI appears to miss short‑lived spikes in resource demand. This leads to incomplete or inaccurate metric collection for that specific window.
Short bursts of CPU requirements go unreported
If a pod briefly requires a full 1 core at startup—even for a fraction of a second—CAST AI currently does not record this spike. As a result, the platform overlooks an important requirement needed for successful initialization.
Reported CPU utilization does not reflect real startup needs
When the pod’s average CPU usage settles around, say, 300 millicores, CAST AI reports only that average. It does not reflect that the pod initially needed 1 full core to boot successfully.
This leads to misleading CPU insights
Since CAST AI displays only the averaged metrics, it suggests that the pod’s CPU requirement is consistently low. However, operationally the pod still cannot start without that initial 1‑core burst.
Practical implication: startup failures despite “adequate” reported CPU
Even though the dashboard may show that 300 millicores is sufficient, the absence of a guaranteed 1‑core burst at initialization can cause pod startup delays or failures—none of which the current reporting highlights.
Overall effect on capacity planning and rightsizing
This gap in visibility can cause confusion during rightsizing exercises, as CAST AI does not reflect the full picture. Teams might allocate too little CPU based on averaged metrics, unaware of the critical startup requirement.
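The averaging problem described above can be illustrated with a toy calculation (this is not CAST AI code; the 1-core and 300-millicore figures come from the example in this review): averaging a usage series over a window hides a brief startup spike.

```python
# Toy illustration: why averaged CPU metrics can hide a startup burst.
# A pod briefly needs 1 core (1000m) at boot, then settles around 300m.
samples_m = [1000] + [300] * 59  # one 1-second spike in a 60-second window, in millicores

avg_m = sum(samples_m) / len(samples_m)   # what an averaged dashboard would show
peak_m = max(samples_m)                   # what the pod actually needed to boot

print(f"average: {avg_m:.0f}m")  # ~312m -- looks like 300m is nearly enough
print(f"peak:    {peak_m}m")     # 1000m was actually required at startup
```

Rightsizing from the ~312m average while ignoring the 1000m peak is exactly how the under-allocation described above happens.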
What problems is the product solving and how is that benefiting you?
I use CAST AI extensively for end‑to‑end cluster management, including monitoring, analyzing resource utilization, and optimizing both cost and performance. The platform has significantly streamlined my operations by automating many of the routine oversight tasks that previously required continuous manual effort. In fact, it has reduced my manual monitoring workload by nearly 80%, allowing me to focus more on strategic improvements rather than day‑to‑day checks.
The intuitive and thoughtfully designed UI plays a major role in this efficiency. It presents complex metrics and optimization insights in a clear, easy‑to‑interpret manner, enabling me to make informed, data‑driven decisions with confidence. Additionally, CAST AI highlights cost savings transparently—showing both actual and optimized spending—which makes it much easier to track financial impact and justify optimization initiatives.
Overall, CAST AI has become an essential part of my workflow for maintaining efficient, cost‑effective, and high‑performing Kubernetes environments.
CastAI Optimizes Kubernetes Cost & Performance with Best-in-Class Support
What do you like best about the product?
CastAI delivers strong value in optimizing both Kubernetes cluster cost and performance. We’ve been using it in production and can confidently vouch for its impact based on our experience. Their support is best-in-class—the CastAI team works with us daily to resolve issues, plan ahead, and improve our cluster architecture. It’s also easy to implement and operate. We’re in the process of integrating CastAI across all of our clusters to drive additional savings and performance, and we use it regularly.
What do you dislike about the product?
We’d like to see a dashboard that highlights cost savings and performance optimizations at both the enterprise and org levels. This would help our leadership quickly understand, at a glance, how well CastAI is working for us.
What problems is the product solving and how is that benefiting you?
It has helped us save money while also improving the performance of our clusters, which in turn supports the solutions we deliver to our clients.
Major Cost Savings with Rapid Provisioning, Spot Instances & Autoscaling
What do you like best about the product?
Cost savings through rapid provisioning, Spot instance usage, and workload autoscaling.
It was also easy to change Node settings, making it easy to switch from Intel to Graviton.
What do you dislike about the product?
Thanks to the great support, I was able to onboard without any problems.
What problems is the product solving and how is that benefiting you?
The advantages are the same as mentioned above. As the service grows, concerns about cost and stability arise, and CAST AI addresses both.