Centralized data pipelines have reduced daily log volumes and optimized observability workflows
What is our primary use case?
I use Cribl to optimize Splunk data. For example, I have approximately 10 TB of daily data integrations. I route the data through Cribl, optimize it, and index it into Splunk, reducing it by 30 to 40 percent, so 10 TB of integrations becomes roughly 6 to 7 TB after Cribl optimization. I use Cribl for firewall logs, event logs, Windows logs, metrics logs, and EDR logs.
What is most valuable?
The feature I appreciate is the connection between Splunk and Cribl, which is very useful for routing data and pipeline filtering. Cribl has a central management system that controls all data pipelines and configurations.
Cribl works centrally by using the main Cribl instance and managing configurations, pipelines, routing routes, and all worker nodes. The leader nodes act as a central node and manage pipelines, route packs, and configurations while distributing them to the worker nodes. The worker nodes process actual logs and send the processed logs to destinations such as Splunk, S3, and other SIEM tools.
What needs improvement?
Cribl pricing is a concern. Cribl Streams is very powerful but costly as it scales with data volumes. For large and heavy systems, it becomes pricey compared to other similar tools. While it is flexible, it is not beginner-friendly. Pipeline routes and transforms can feel complex at first.
For how long have I used the solution?
I have been using Cribl for my business for the last 1.5 years.
What do I think about the stability of the solution?
Sometimes Cribl goes down, and we miss logs during that time. Downtime is the only issue I face; otherwise we have no problems. When Cribl is down, logs do not reach Splunk, and the alerts based on those logs keep triggering repeatedly, creating multiple incidents and sending emails to our customers, which is very problematic.
What do I think about the scalability of the solution?
Cribl is excellent for scalability. It is good overall for pipeline maintenance, horizontal scaling, distributed architecture, parallel pipelines, and load balancing. We handle real-time data ranging from several GB up to a TB per day, which is a very high volume for observability pipelines. Multiple pipelines run at once, and different data sources are processed independently. There are no single bottlenecks, and managing configuration is straightforward. Overall, it is long-lasting and good for stability and scalability.
Which solution did I use previously and why did I switch?
As of now, I do not use any alternative to Cribl.
How was the initial setup?
The initial setup is moderate: not too hard and not too easy. For beginners it is moderate, while for experienced people it is very easy. One person is enough for a Cribl deployment unless you have a very large environment, in which case you need people in different roles.
What about the implementation team?
All the nodes and components can be deployed from start to end within a predictable timeframe. A quick setup following the official documentation takes approximately one hour. A normal production setup takes a few days: approximately two days for deployment and configuration, then another day or two for pipelines and testing. A full enterprise deployment takes one to four weeks, depending on the difficulties and architecture involved.
What's my experience with pricing, setup cost, and licensing?
For a small-scale user, the pricing is good, and at a large scale it is not too heavy. The main pricing model is based on data integrations at approximately $0.32 per GB (an enterprise estimate). This is neither too high nor too low, falling within a medium range.
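As a rough illustration of what a flat per-GB model implies, the sketch below multiplies daily volume out to a monthly bill. The rate is the one quoted above; the volume is a made-up example, not a figure from any Cribl price list.

```python
# Back-of-envelope ingestion cost at a flat per-GB rate.
# The rate comes from the review above; volumes are hypothetical.
RATE_PER_GB = 0.32  # USD per GB, enterprise estimate

def monthly_cost(daily_gb: float, days: int = 30, rate: float = RATE_PER_GB) -> float:
    """Estimated cost for `days` of ingestion at a flat per-GB rate."""
    return daily_gb * days * rate

# Example: 500 GB/day over a 30-day month.
print(round(monthly_cost(500), 2))  # 4800.0
```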
Filtering has reduced daily data volumes and central routing now simplifies log management
What is our primary use case?
We work on Splunk, so we use Cribl. Approximately 12 to 15 TB of data arrives daily for Splunk. We don't store the data directly in Splunk; instead, we route it through Cribl first, which removes unnecessary data and keeps the important data, reducing the volume.
What is most valuable?
My favorite feature is that Cribl is connected with Splunk very easily and it routes the data. The filtering is the most important feature because it removes unwanted logs, and the central control manages everything from one place. Cribl provides pipelines, which process the data step-by-step, so all the features are very useful.
What needs improvement?
It is very difficult to learn as a beginner.
For how long have I used the solution?
I have been using Cribl for four months.
What do I think about the stability of the solution?
I sometimes experience downtime, during which we miss logs. This creates problems, but not for long, and we face these issues only occasionally.
How are customer service and support?
I have a very good experience with customer support. When we are in trouble, they give us fast, helpful responses, which is very useful for us.
How was the initial setup?
The initial deployment when I first started using Cribl was not that difficult. For a beginner it is a little difficult, but once you start learning and become experienced, it is very easy. One person can handle the whole setup without needing a large team.
What other advice do I have?
Cribl's interface is very good, and it is easy to understand how to use Cribl. When I started to use Cribl, it wasn't that difficult to learn. I learned how to pass the data into Cribl, so it is easy. Cribl has a good user interface, which makes work easier for me. I would rate this product a 9 out of 10.
Log optimization has reduced ingestion costs and simplifies routing of critical security data
What is our primary use case?
Cribl is used daily in our main use case. For example, our client had application logs where 60 to 70% of the entries were just debug messages that were not required for any customer use case. We use Cribl to drop those logs, keeping only error and warning logs. This alone reduces their ingestion by about 20 to 30%. Mostly, this is the customer's use case: reducing ingestion to lower the cost of the target platform or SIEM tool.
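The severity filter described above can be sketched in a few lines. This is a minimal illustration of the idea, not Cribl's actual drop function; the event shape and field names are assumptions.

```python
# Keep only error/warning events; drop debug noise before the SIEM.
KEEP_LEVELS = {"ERROR", "WARN", "WARNING"}

def filter_events(events):
    """Return only events whose severity level we want to ingest."""
    return [e for e in events if e.get("level", "").upper() in KEEP_LEVELS]

events = [
    {"level": "DEBUG", "msg": "cache hit"},
    {"level": "ERROR", "msg": "db timeout"},
    {"level": "INFO",  "msg": "service started"},
    {"level": "WARN",  "msg": "slow query"},
]
kept = filter_events(events)
print(len(kept), "of", len(events), "events kept")  # 2 of 4 events kept
```

In a real Cribl pipeline this would be a drop/filter function with a severity expression; the Python version just makes the volume reduction concrete.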
Cribl's ability to contain data cost and complexity is excellent. Cribl Stream is the product I have used extensively; I have not used Cribl Edge and the other products, because Cribl Stream is what the client requires right now. We can also modify the logs. Suppose a long, complex log arrives with many fields that are not needed, and only the error message field and the log's origin are required. We can reduce the log's complexity and send only the required fields. In this way, Cribl Stream is useful for my client's use case: reducing the complexity of the log before sending it to the SIEM tool.
My view of Cribl's ability to handle high volumes of diverse data types, such as metrics and logs, is positive. As in the client use case above, Windows Event logs arrive with many lines and fields that are not required for any use case; they are just messages noting where the log came from. With Cribl Stream pipelines, we can shorten such long logs and send them to the SIEM tool, which reduces both the complexity of the logs and the licensing cost of the SIEM tool. We can also reduce noise and duplicate logs: if one event sends two or three identical logs, we can de-duplicate them and forward only one.
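The de-duplication idea mentioned above, forwarding only the first occurrence of identical events, can be sketched like this. The key fields are illustrative; this is not Cribl's suppress function, just the underlying logic.

```python
# Forward only the first occurrence of events that share the same key.
def dedupe(events, key_fields=("host", "msg")):
    seen = set()
    out = []
    for e in events:
        key = tuple(e.get(f) for f in key_fields)
        if key not in seen:       # first time we see this combination
            seen.add(key)
            out.append(e)
    return out

logs = [
    {"host": "web1", "msg": "login failed"},
    {"host": "web1", "msg": "login failed"},  # duplicate, dropped
    {"host": "web2", "msg": "login failed"},
]
print(len(dedupe(logs)))  # 2
```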
What is most valuable?
Cribl is user-friendly, which I think is one of its most valuable features. When I learned from my senior and from Cribl University, it was user-friendly at first sight. The UI is very easy for new users, a client, or anybody to understand. The pipeline feature is excellent: if a security team only needs security logs, we can send security logs to Splunk and send application logs to, say, AWS.
In other words, when lots of logs arrive on one platform, you can separate them and send them to different destinations. This is the feature of Cribl I appreciate the most.
The Search in Place feature of Cribl is very impressive. Suppose we have Kubernetes logs coming from a Kubernetes cluster or an AWS server, and we send them via Cribl to Splunk. We can also search the logs at the source. There are three stages in this flow: the source, the destination, and the pipeline between them. The source carries the original logs, which we modify through the pipeline. If we want to debug and check how a log is being modified, the search feature applies to both source and destination, so we can see both the original and the modified log. This feature is excellent.
What needs improvement?
Cribl could improve in certain areas, such as the learning curve, which is a drawback. They don't provide labs; they just provide video lectures from Cribl University. The user needs to work everything out themselves through trial and error: if one thing is not working, they must try another. That learning curve should be improved. Next is debugging of pipelines. If a pipeline becomes complex with multiple rules, it is sometimes hard to figure out exactly what is breaking and where. I am not saying pipelines should be less capable; I am saying that when a pipeline is very complex, includes many functions, and one function has an error, debugging is very difficult. We should be able to find errors in the pipeline more easily.
In a high-volume environment, Cribl needs proper CPU and infrastructure sizing. They should provide proper infrastructure documentation. Cribl Cloud is great, but for on-prem deployments, such as on an AWS server, they should document clearly that if your ingestion is 10 to 50 GB, you should use this many CPUs and this much memory.
For how long have I used the solution?
I have been working with Cribl for about seven months.
What do I think about the stability of the solution?
I find Cribl very stable and reliable; it has been developed in the Go language and is a very stable product. It provides fast ingestion. There are many products like Cribl, but it is known for its fast ingestion response.
What do I think about the scalability of the solution?
I believe Cribl is very scalable. A client with only one employee can handle Cribl, taking consulting services when needed. Suppose I am from Data Elicit and a client takes consulting services from us to convert their whole infrastructure from ONUM to Cribl. At that stage, they only need the ingestion migrated; after the ingestion and infrastructure are built, one or two of their engineers can handle the pipelines for debugging and additional ingestion. They don't need a large workforce for Cribl, and if they take consulting services, we also handle it for them. It is easy to adopt and has a wide range of integrations, about 1,000 to 1,500 plus, so it can be adopted by any client. It also supports custom integrations, so any client use case can be covered.
How are customer service and support?
I have communicated with Cribl's technical support in one or two cases where I could not resolve an issue in a pipeline. I raised a support case, and they provided support within 24 to 48 hours. Their support system is excellent.
What was our ROI?
I have seen a decrease in firewall logs with Cribl, and in terms of cost, the customers have benefited a lot. Customers had 60 to 70% of logs that were not useful for their use cases. Those were dropped through Cribl, and only the errors, warnings, and important logs were sent to their SIEM tool. If the SIEM tool is Splunk, the ingestion cost is reduced, and therefore the overall Splunk licensing cost is reduced.
What other advice do I have?
I started my journey with Cribl under a senior colleague who gave me a roadmap and access to Cribl University, advising me to start with the Cribl User certification before moving to Admin. Right now, I am about to complete the Cribl User certification. Cribl University gives an overview of Cribl, and whenever I got stuck, my senior could help me, so I faced fewer difficulties. Someone starting with just Cribl University would face more problems, because there is no proper lab environment: the university material covers overall knowledge, but practical knowledge requires implementing everything yourself. They should provide a lab infrastructure so learners can practice and learn how to debug. In my case it was less difficult because I had a proper roadmap. My overall rating for Cribl is nine out of ten.
Data pipelines have reduced log noise and now route critical observability events efficiently
What is our primary use case?
My primary use case for Cribl is to manage and optimize observability data before sending it to different destinations, such as routing. I deal with a very large volume of logs coming from multiple sources, including large log systems. This includes system logs, application logs, and security-related logs. Using Cribl, I can filter unnecessary logs and transform that data as required, and I can route important data to the appropriate destinations. This is very helpful to me and helps me reduce data volume and improve performance. I also use pipeline configurations to control how logs flow through the entire system. This makes it very easy for me to maintain data consistency and manage large log systems across different environments.
What is most valuable?
The most valuable thing or feature for me in Cribl is data routing and pipeline flexibility. Cribl allows me to define how data should be processed, filtered, and routed to different destinations. One of the things I also find very useful is edge processing, which allows me to process data closer to the source, which helps reduce unnecessary data and improve performance. Overall, flexibility and control over observability data are the things I appreciate most about Cribl.
Cribl handles large logs very efficiently by using its pipeline-based architecture, which I find most useful. It allows me to transform data through routing and filtering before sending it to downstream systems. When dealing with large volumes of logs, I can define pipelines that drop unnecessary fields and remove duplicate logs. There can be so many duplicates and redundancies that filtering them out significantly reduces the overall data volume. Another helpful capability is routing, which helps me route different types of logs to different destinations and prioritize fields that I want. For example, critical logs can be sent to one destination while lowering the priority of other logs, which are stored elsewhere. This helps me in large-scale log environments very effectively. Cribl also supports horizontal scaling, where I can add more worker nodes to handle increasing log volumes. This ensures my performance remains stable, even as log ingestion increases.
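The priority-based routing described above, critical logs to one destination and the rest to cheaper storage, can be sketched as follows. The destination names and severity field are assumptions for illustration, not a Cribl configuration.

```python
# Route events by severity: critical/error to the SIEM, the rest to archive.
def route(event: dict) -> str:
    if event.get("severity") in ("critical", "error"):
        return "siem"
    return "archive"

events = [
    {"severity": "critical", "msg": "ids alert"},
    {"severity": "info",     "msg": "heartbeat"},
    {"severity": "error",    "msg": "auth failure"},
]
routed = {}
for e in events:
    routed.setdefault(route(e), []).append(e)
print(sorted(routed))  # ['archive', 'siem']
```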
I have seen a decrease in logs by using pipelines, which filter and optimize data before sending it downstream. For firewall logs specifically, Cribl helps reduce volume by filtering unnecessary or repetitive events. A firewall device generates a large number of allow and deny logs, many of which are repetitive or not always useful; Cribl filters out the low-priority entries such as allowed traffic and routine events. I also remove unnecessary fields from firewall logs, which reduces the log size.
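The field removal mentioned above can be sketched by keeping only an allow-list of fields on each firewall event. The field names here are invented for the example, not any vendor's schema.

```python
# Trim firewall events down to an allow-list of fields worth keeping.
KEEP_FIELDS = {"timestamp", "src_ip", "dst_ip", "action"}

def trim(event: dict) -> dict:
    """Drop every field not in the allow-list."""
    return {k: v for k, v in event.items() if k in KEEP_FIELDS}

fw = {
    "timestamp": "2024-01-01T00:00:00Z",
    "src_ip": "10.0.0.5",
    "dst_ip": "8.8.8.8",
    "action": "deny",
    "rule_uuid": "abc123",   # dropped
    "hw_serial": "XY-9001",  # dropped
}
print(len(trim(fw)), "fields kept of", len(fw))  # 4 fields kept of 6
```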
What needs improvement?
The main downside of Cribl is that it is not very beginner-friendly. They could include tutorials or something more interactive for beginners. For experienced users, it works well. The learning curve is significant; learning Cribl from the initial stage for someone who doesn't have any background knowledge may be difficult. Since it offers lots of flexibility with pipelines and routing, it can take time for beginners to understand how everything works properly and to complete the configuration. The initial setup is also a little complex. Additionally, Cribl has limited built-in analytics compared to dedicated monitoring tools.
For how long have I used the solution?
I have been working with Cribl for between one and one and a half years.
How are customer service and support?
Technical support is very helpful, and my experience with Cribl support has always been positive; they do not delay responses. The documentation covers almost every use case, especially all the major features. I have been able to resolve most issues using documentation and community resources without contacting support directly. When technical clarification is needed, the available guides and best-practice examples are quite helpful. The support ecosystem around Cribl is very good, and most issues are resolved quickly.
Which solution did I use previously and why did I switch?
I was previously using Splunk. Splunk was mostly used for storing, searching, and analyzing logs. Once I discovered Cribl, I found it more useful. Cribl helped me with managing, filtering, pipeline routing, and flexibility before sending data to destinations or monitoring tools. Cribl sits between a data source and an analytics tool, which helps me reduce my flow, save time, and optimize data volume. If I had to choose between Splunk and Cribl for filtering and routing, I would obviously choose Cribl. For analyzing and searching, I continue to use Splunk.
How was the initial setup?
The initial deployment of Cribl is not very beginner-friendly; beginners have to study it first. Once they get used to it, they will find it a very useful tool. For an experienced user who knows the relevant terms, it is very easy.
What's my experience with pricing, setup cost, and licensing?
For cost optimization, Cribl's pricing is moderate. I will not say it is too high or too low.
Which other solutions did I evaluate?
For something similar to Cribl, I have used Splunk.
What other advice do I have?
The maintenance for Cribl is relatively minimal. Most of the time, I focus on monitoring pipelines, which is manual work. I check the data flow and make small adjustments as needed, and adding new log sources is also manual work. I review pipeline configurations to ensure logs are being filtered and routed correctly, and if log formats change or new data sources appear, I update the pipelines accordingly. I always monitor system performance and ensure the worker nodes are running properly, and if log volume increases, I scale the nodes to handle the load. Overall, maintenance on my side is minimal: once the pipelines and configurations are done, Cribl runs very smoothly with minimal manual intervention. I would rate this solution a nine out of ten.
Data routing has improved precision and flexibility while pricing and alerting still need work
What is our primary use case?
I use Cribl as our data ingestion source, with Cribl Edge agents installed across all servers. Cribl is used at the pipeline or routing level to send data to our SIEM platform.
Firewall logs are sent to Cribl, and Cribl routes specific logs to our SIEM tool while sending others to archive storage. This segregation and separation capability is not possible with any other tool, which makes me very satisfied. However, Cribl charges us for all firewall logs that it observes, not just what it processes and outputs.
What is most valuable?
Cribl performs parsing and field reduction exceptionally well, cutting down unnecessary fields and delivering only the right data. However, Cribl charges for everything it sees rather than just what it parses. We might ingest a large volume of data but only process about forty percent of it, yet we are charged for one hundred percent of the data ingested into Cribl.
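To make the billing concern above concrete, the sketch below compares a charge on everything observed against a hypothetical charge on only the processed share. The rate and volumes are illustrative, not Cribl's actual pricing.

```python
# Compare billing on observed volume vs. billing on processed output.
# All figures are hypothetical.
def observed_vs_processed_cost(observed_gb, processed_fraction, rate_per_gb):
    billed = observed_gb * rate_per_gb                      # charged on all data seen
    if_output_billed = observed_gb * processed_fraction * rate_per_gb
    return billed, if_output_billed

# 1000 GB observed, only 40% processed onward, $0.32/GB.
billed, wished = observed_vs_processed_cost(1000, 0.40, 0.32)
print(round(billed, 2), round(wished, 2))  # 320.0 128.0
```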
The ability to bifurcate or trifurcate data and send it to multiple destinations is a feature we love. I have been a Splunk user for over eight years, and this is something Splunk did not have until Cribl introduced it specifically for this purpose.
Cribl handles logs, metrics, and various data sources really well. I have ingested up to fifty terabytes of data per day, and Cribl has never failed or caused trouble from that perspective. Cribl handles huge volumes of data exceptionally well.
What needs improvement?
A feature I would want Cribl to add in future releases is the ability to create a greater number of fleets. Currently, Cribl has a limitation on the number of fleets that can be created. In an enterprise environment, different types of servers belong to different applications and should be organized accordingly, as each has a different change management cycle and upgrade cycle. Cribl cannot be upgraded all at once, so we want to separate fleets so we can perform upgrades in batches rather than all in one shot. Increasing the number of fleets would be greatly appreciated.
Data cost is a concern, as Cribl charges for everything it sees rather than everything it processes. I do not see much cost-effectiveness from this approach. If we could do pre-processing before sending data to Cribl, then Cribl would be cheaper than other tools, but if we could do that, we would not need Cribl at all. This costing model has been concerning for a while. Better options based on user base, enterprise size, or data volume would be beneficial. More options to choose from for pricing tiers are needed, as the current offerings are very limited.
I have used Splunk previously and have been using Palo Alto XSIAM. Palo Alto XSIAM has integrated features from Cribl, Splunk, and Sentinel into one comprehensive tool, taking the best features from all three. Another concern is that there is not much default alerting available for Cribl metrics, and custom alerting is also difficult to configure. For example, backpressure monitoring has only very limited out-of-the-box use cases when monitoring Cribl environment health. Cribl could increase the number of use cases and add guardrails around how much volume can be ingested. Options to create custom alerting would be helpful, such as alerts when certain metrics go down or up, or when the catch-all is filling up. These options exist but are very complicated to set up. Even for someone who used Splunk for ten years before transitioning to Cribl, it is very difficult to navigate and create alerts in Cribl. Ease of use could be improved by providing default options that can be leveraged and customized as needed.
Cribl's initial deployment was easy, but for large enterprise networks, Cribl does not support operating systems earlier than Windows Server 2012. This creates a problem; a package that works as expected should be available for anything below 2012. Currently, Cribl only approves packages for 2012 and above, but some organizations must run applications on legacy servers. Without that option, we cannot install Cribl and must find alternatives or go back to using Splunk to pull data and then stream it to Cribl. This causes significant operational challenges, and a version that supports everything below 2012 would be greatly appreciated.
Cribl is deployed both on-premises and in the cloud. Cribl placed sample data in one of its YAML files containing examples of personal data such as social security numbers and credit card information. Because this YAML file was included in the Cribl package itself, vulnerability scanners flagged it as a non-compliance or data-loss concern, even though no actual personal information, API keys, or sensitive data was present; these were just examples provided by Cribl. Cribl fixed this issue in the latest version after we brought it to their attention. Going forward, I would like Cribl to think from a bigger enterprise perspective, as endpoint security tools will detect all of these concerns. It is not just about processing data but also about the problems faced when deploying it in a large enterprise, and this thought process needs to grow on Cribl's side.
For how long have I used the solution?
I have used Cribl for over a year.
How are customer service and support?
A dedicated support portal is available, and support cases are usually raised through a dedicated email. Responses are received at reasonable times, so this has not been a problem. I would give support a rating of seven out of ten.
Log management has become efficient and now trims and enriches massive enterprise log data
What is our primary use case?
My use case involves analyzing very large log files coming from middleware and system log files for both functional and non-functional errors. To perform this analysis effectively, we fetch these logs into tools such as Splunk or Dynatrace, but since those tools charge based on the volume of logs ingested, it is crucial to filter out unnecessary log data. Cribl helps us by trimming irrelevant logs and enriching the data as needed based on input from different teams, allowing us to streamline our log files before sending them to analytical tools.
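The enrichment step mentioned above, tagging each log with context such as its region before it reaches the analytics tool, can be sketched like this. The host-to-region mapping and field names are invented for the example.

```python
# Enrich each event with a region derived from its host name prefix.
REGION_BY_PREFIX = {"eu": "europe", "us": "americas"}

def enrich(event: dict) -> dict:
    """Add a 'region' field based on the first two characters of the host."""
    prefix = event.get("host", "")[:2]
    event["region"] = REGION_BY_PREFIX.get(prefix, "unknown")
    return event

print(enrich({"host": "eu-app-01", "msg": "timeout"})["region"])  # europe
```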
What is most valuable?
The best features of Cribl include its ability to handle logs, allowing us to avoid redundant data input while ensuring that we send only the information we need to analytical tools for insights. This tool excels at performing tasks on the fly and lets us run different pipelines for our logs, combining data from various sources, such as application logs, intra logs, and network logs, and customizing it according to our data center or region.
I appreciate the twenty-four seven availability of Cribl, which is essential for ensuring our data is always accessible, even during downtime. This is a significant challenge, and maintaining that availability is crucial for operational continuity.
With Cribl Edge, the centralized fleet management has simplified how we deploy, upgrade, and manage agents across our environment. We automate configuration files based on regional needs and have developed a naming convention to categorize our configurations in a way that is easily manageable through the GUI.
Cribl handles high volumes of diverse data, including logs and metrics, exceptionally well, which is why we continue using it. With large amounts of data from enterprises such as Vodafone, it is essential to trim and enrich this data to achieve good results and avoid sending garbage data to analytics tools.
Managing log processing tasks through Cribl's user interface is quite intuitive, making it user-friendly.
What needs improvement?
There is room for improvement in Cribl, as managing data from around forty thousand servers can become complex. Automating the upgrade process for the Cribl agent would significantly improve usability, especially since we sometimes experience issues when using BladeLogic for updates.
I would appreciate more automation in the processes, and I have not yet explored the AI features that Cribl offers, such as ChatGPT-style assistance.
For how long have I used the solution?
I have been working with Cribl for three years, or three and a half years to be precise.
What do I think about the stability of the solution?
Cribl is a scalable product. We face challenges integrating data from forty thousand servers across various platforms while maintaining stability and scalability, and I would rate its scalability at nine.
How are customer service and support?
From my experience, I would rate Cribl's technical support at around eight or eight and a half out of ten. There is room for improvement, especially regarding urgent issues in production environments.
How was the initial setup?
The deployment was initially complex, but it is now stable and functional, largely because of the thorough documentation and excellent certifications provided by Cribl.
What about the implementation team?
In my company, approximately twenty-five to thirty specialists work with Cribl.
What was our ROI?
The solution saves a significant amount of time and resources. I would estimate the return on investment to be double or triple the investment we made.
What other advice do I have?
The unified management provided by Cribl Edge has dramatically reduced the time and effort needed for maintaining endpoint telemetry collection. Once the handshake occurs on the server side, any issues can be quickly identified from the GUI, and we only need to configure what information we want to fetch from the agent.
For firewall logs, we define and open specific firewall ports in our configurations to either collect bidirectional or unidirectional information, depending on the server's security requirements.
I have used Cribl Search primarily for our log patterns, but my involvement has largely been from an operational perspective, with limited usage of this feature.
I find Cribl to be cheaper compared to other solutions and believe it will become a leading product in the industry due to its fast performance and excellent results. When considering log ingestion, it allows us to extract only the necessary parameters from a larger dataset, which contributes to reduced data handling and effective dashboard creation.
Maintenance is necessary, especially for upgrades, but Cribl allows for these modifications on the fly without requiring system reboots, ensuring that production is not disrupted.
I would certainly recommend this product, emphasizing its effectiveness and potential to become a leader in the field, as its marketing presence is currently less than that of competitors such as Splunk and Dynatrace. I rate this product at nine overall.
Data routing has become simpler and costs are reduced with flexible log aggregation
What is our primary use case?
Our main use case for Cribl is reducing the amount of data that goes into our CM solution: we filter the data that flows through and send only the important data on to the CM solution.
With Cribl, I have seen a decrease in firewall log volume. We send a lot of firewall logs into Cribl and reduce the log size by aggregation or by removing unwanted data, which works smoothly. Anything with logs—firewall logs, network logs, DNS logs—works fine.
Cribl does a great job at containing data costs, which is our major use case to reduce data costs for the CM solution, and we do that quite efficiently with Cribl by aggregating the data, masking unnecessary parts, and changing the structure into key-value pairs, thus reducing the cost significantly.
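As a rough illustration of the key-value restructuring described above, here is a minimal JavaScript sketch. It is not Cribl's actual function API, and the field names (`srcIp`, `dstIp`, `action`, `bytes`, `vendorPadding`) are hypothetical stand-ins for a real firewall schema:

```javascript
// Sketch: reduce a verbose firewall event to compact key=value pairs.
// KEEP lists the hypothetical high-value fields worth indexing.
const KEEP = ['srcIp', 'dstIp', 'action', 'bytes'];

function toKeyValue(event) {
  // Keep only the listed fields, drop everything else in the payload.
  return KEEP
    .filter((k) => event[k] !== undefined)
    .map((k) => `${k}=${event[k]}`)
    .join(' ');
}

const raw = {
  srcIp: '10.0.0.1',
  dstIp: '8.8.8.8',
  action: 'allow',
  bytes: 512,
  vendorPadding: 'x'.repeat(200), // noise we do not want to index
};

const reduced = toKeyValue(raw);
console.log(reduced); // srcIp=10.0.0.1 dstIp=8.8.8.8 action=allow bytes=512
```

The cost saving comes from the size difference between `raw` and `reduced`: the padded vendor fields never reach the downstream index.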
What is most valuable?
What I like about Cribl is that it is quite easy to use because everything is done via the UI, so there is no coding involved; it is more like drag-and-drop functionality to add your items. It is an easy tool to learn and handy, allowing a lot more to be done without requiring extensive coding.
Cribl UI feels quite intuitive based on my experience after using Cribl for four years with my team and other vendors. It is easy to use, allowing many people to work at the same time, and versioning is already integrated. The same packs can be used with different machines and different workflows, which is also a good part. Cribl provides free education, unlike other tools, allowing us to learn the necessary skills and implement them in the actual production environment.
Cribl brings significant benefits like cost-effectiveness, reducing CM costs, and making our data vendor-agnostic since data flows through Cribl. If I decide to change my CM solution later, it will be an easy switch. Complex data can be simplified into easier formats like key-value pairs, making our current use cases streamlined.
What needs improvement?
I would like to see improvements in metrics and traces, as Cribl is currently more geared toward logs, making it hard to view very long traces in the UI when they are quite big. I have not used metrics much because I am aware of the issues Cribl has with handling metrics properly, particularly multi-metrics with multiple dimensions in a single metric. We use Cribl nearly 99.9% for logs only, not for metrics and traces, but I hope to see improvements in the future.
On the other hand, I would like to see improvements in pack management, which is currently a mess with no way to manage packs differently across worker groups. I also wish Cribl would introduce more functions, as sometimes we have to create more JavaScript functions ourselves. Aside from that, everything is going well, especially with recent AI integrations.
For how long have I used the solution?
I have been working with Cribl for four years.
What do I think about the stability of the solution?
Cribl is pretty stable, with me experiencing only minor hiccups and no major alarms. Previous data loss issues have been resolved over the past two and a half years, making it a stable option.
What do I think about the scalability of the solution?
I consider Cribl scalable as we are using the Kubernetes version, and I have seen that scaling is manageable. We have also checked on-prem and found similar results, confirming it to be a scalable solution.
How are customer service and support?
Cribl technical support is generally good, albeit sometimes inconsistent. The U.S. team is excellent once a ticket is escalated, while the support in Germany or Europe could be improved. I would rate the technical support at a seven on a scale of one to ten.
Which solution did I use previously and why did I switch?
Prior to Cribl, I had not used any other product of the same kind, which is an advantage for Cribl. While a few products are emerging now, the last time I checked, they were not equivalent to Cribl.
How was the initial setup?
Cribl's initial setup was not complex because Cribl is very similar to another product we used for multiple years, allowing us to extend scripts easily. Installation is pretty straightforward, and the documentation and education provided by Cribl greatly aid the process.
What about the implementation team?
Our deployment was primarily in-house, with initial assistance from Cribl engineers. We have managed it internally for the last three and a half years.
What was our ROI?
Regarding ROI, Cribl reduces our CM cost by about twenty to twenty-five percent by reducing the overall amount of data flowing in.
Which other solutions did I evaluate?
I did not evaluate any other options before choosing Cribl since there was hardly anything on the market like it at that time, although I see a couple of viable options now.
What other advice do I have?
My advice for organizations considering Cribl is that it is a nice tool, very effective with limited competition, but you should plan thoroughly regarding your use case to avoid wasting licenses. It is essential to implement something significant, considering the infrastructure as well. I rate Cribl at an eight overall.
Data pipelines have reduced noise and now send controlled, optimized logs to security tools
What is our primary use case?
Cribl's main use case in our company is log routing and data optimization before sending data into our SIEM platform. In our environment, we collect logs from multiple sources such as endpoints, applications, and infrastructure, and Cribl helps us process the data in the pipeline before it reaches the SIEM. We can filter unnecessary logs, transform fields when needed, drop unnecessary fields, and add necessary fields using eval functions in pipelines, then route the data to different destinations depending on the use.
One concrete example of log routing and data optimization in our pipeline: we were receiving firewall data from different parts of the country, and the issue was time zone differences. We had to convert the timestamps of all the firewall logs to GMT. We used a Cribl pipeline to convert the firewall logs, which arrived in different time zones, to GMT, and then routed them to our main SIEM platform.
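The timezone normalization described above can be sketched in plain JavaScript. This is an illustrative sketch, not the reviewer's actual pipeline; it assumes the incoming timestamps carry an explicit ISO 8601 offset, whereas real firewall exports may need a custom parser first:

```javascript
// Sketch: normalize firewall timestamps from regional offsets to GMT/UTC.
function toGmt(timestamp) {
  const d = new Date(timestamp);
  if (Number.isNaN(d.getTime())) {
    throw new Error(`Unparseable timestamp: ${timestamp}`);
  }
  // toISOString() always renders the instant in UTC (trailing "Z").
  return d.toISOString();
}

// The same instant reported from two regions with different offsets:
const fromIndia = toGmt('2024-05-01T15:30:00+05:30');
const fromUs = toGmt('2024-05-01T05:00:00-05:00');
console.log(fromIndia); // 2024-05-01T10:00:00.000Z
console.log(fromUs);    // 2024-05-01T10:00:00.000Z
```

Once every source emits the same UTC representation, correlation across regions in the SIEM becomes a simple string or time comparison.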
What is most valuable?
The best features Cribl offers include the ability to see the data flow right away as the data is flowing; capturing live data is a very good feature. We get many different functions to transform data in the pipeline. Another feature we really like is pipeline-based processing, where we can easily create rules for parsing, masking, or modifying log fields.
Seeing the live data flow with Cribl has definitely been helpful. It makes it much easier to see how logs are moving through the pipeline in real-time and understand where transformations or routing are happening, or where the data is breaking, or where the error is coming from—whether it is from the source only or breaking at the pipeline. There was a situation where we were not seeing certain logs reaching our SIEM platform, even though the source system was generating them. Using the live data preview in Cribl, we were able to trace the logs through the pipeline and quickly identify that a filtering rule was unintentionally dropping some events. Because of that visibility, we could adjust the pipeline rule immediately and verify the fix in real-time. Instead of spending a lot of time troubleshooting across multiple systems, the transparency in the data pipeline really speeds up debugging and operational monitoring for us.
Cribl has had a positive impact on our organization mainly in terms of better control over our log data and improved efficiency in our log management pipeline. Before using a tool like Cribl, a lot of raw logs would directly go into SIEM, which could create noise and increase ingestion volume. With Cribl, we are able to filter unnecessary events, transform logs, and route data more intelligently before it reaches the SIEM. This helps ensure that the security team is working with more relevant and structured data, which improves analysis and detection workflow.
What needs improvement?
Cribl is a very capable platform, but one area where it could improve is the learning curve for new users. Since it offers a lot of flexibility in building pipelines and transformations, it can take some time for beginners to fully understand how to design efficient pipelines. Another platform we have used provides a workflow-like UI where you can directly configure the source, the pipeline, and the destination, which we feel Cribl lacks. We know there is also a Quick Connect option, but it is not that efficient from our perspective. Another improvement could be more built-in templates or pre-configured pipelines for common log sources; that would help teams move faster, especially when integrating new data sources. Also, while the platform provides good visibility into data flow and aids troubleshooting and monitoring, deeper insights into pipeline performance could make debugging even easier in larger environments.
One thing Cribl could still improve is the end-to-end workflow creation across source, pipeline, and destination, which we feel is lacking.
What do I think about the stability of the solution?
Cribl is generally a stable platform, especially when it's properly deployed and monitored. It is designed to handle large volumes of telemetry data like logs and metrics, and many organizations run it as a central data pipeline without major downtime issues.
What do I think about the scalability of the solution?
Cribl is quite scalable, especially for environments that handle large volumes of logs and telemetry data. The architecture allows you to scale both vertically and horizontally, depending on the workload. For example, you can scale up by adding more CPUs and memory to a single instance or scale out by adding more worker nodes to distribute the processing load across multiple systems. This distributed worker architecture helps handle increasing data volumes and more complex pipelines without significantly affecting performance. Another advantage is that the load can be balanced across worker nodes, which allows the platform to process very large streams of data efficiently and maintain high throughput. Cribl scales very well for enterprise environments where log volumes keep growing and multiple data sources need to be processed simultaneously.
How are customer service and support?
Cribl's customer support has been quite good whenever teams run into issues or need guidance with pipeline configuration or deployments. The support team is generally responsive and knowledgeable. Based on what we have seen and heard from other users as well, support tickets are usually handled quickly, and the team tends to understand technical problems well, which helps resolve issues efficiently.
Which solution did I use previously and why did I switch?
Before using Cribl, most of the log processing was handled directly within the SIEM platform itself, mainly using native parsing and filtering capabilities in tools such as Splunk. While that works, it means the raw logs first get ingested into the SIEM, and then you handle the transformation or filtering afterward. The reason for moving toward Cribl was mainly to introduce a dedicated data pipeline layer before the SIEM.
Before adopting Cribl, we did evaluate a few other approaches. Some of the evaluation was around using native capabilities within SIEM platforms like Splunk, as well as open-source log processing tools like Logstash for handling data pipelines. Those options can work for log collection and processing, but Cribl stood out because it provides a dedicated platform specifically designed for observability and security data pipelines. It offers more flexibility, routing, filtering, and transforming logs without heavily relying on the SIEM itself. That is why we chose Cribl over any other platform.
How was the initial setup?
In terms of the setup, the initial deployment was not very complicated, especially if you already have experience with log pipelines and SIEM integrations. Most of the effort usually goes into designing the pipeline and configuring the routing and transformation rather than licensing or installation itself. Overall, the model feels fairly aligned with modern observability tools, where you can scale usage based on your data volume and infrastructure needs.
What was our ROI?
We have seen a positive return on investment from using Cribl, mainly through better data control and operational efficiency. One of the biggest benefits is the reduction in unnecessary log ingestion into the SIEM. By filtering and routing logs through Cribl first, we avoid sending low-value or redundant data downstream, which helps optimize the storage and licensing costs.
One noticeable outcome from using Cribl has been better control over the volume of data being sent to the SIEM. By filtering unnecessary logs and routing only relevant events, we were able to reduce the overall log ingestion volume, which indirectly helps with storage and licensing costs. Another improvement is in operational efficiency: because the data is already cleaned and structured in the pipeline, it is easier for analysts to search and investigate events in the SIEM, which can speed up investigations. Cribl also saves us licensing costs.
What other advice do I have?
Another feature that we found very useful about Cribl is the ease of integration with multiple destinations: we just route the main pipeline to multiple destinations, and the data goes to each of them. Sometimes the data needs to be routed to different platforms for security monitoring, observability, or long-term storage, and Cribl makes it very easy to send the same data to multiple destinations with different processing rules. We also like the flexibility in data transformation. If log formats change or we need to mask sensitive information or normalize fields, we can handle that directly in the pipeline without modifying the source system.
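The masking-then-fan-out pattern described above can be sketched in a few lines of JavaScript. This is a hedged illustration, not Cribl's built-in Mask function or routing API; the regex, field names, and callback "destinations" are all assumptions for the sake of the example:

```javascript
// Sketch: mask sensitive values before fanning an event out to
// multiple destinations.
function maskEmails(text) {
  // Replace the local part of any email address, keep the domain.
  return text.replace(/[A-Za-z0-9._%+-]+@/g, '***@');
}

function fanOut(event, destinations) {
  const masked = { ...event, message: maskEmails(event.message) };
  // Each "destination" is a callback standing in for a route/output.
  destinations.forEach((send) => send(masked));
  return masked;
}

const sent = [];
const out = fanOut(
  { message: 'login by alice@example.com from 10.1.2.3' },
  [(e) => sent.push(e), (e) => sent.push(e)]
);
console.log(out.message); // login by ***@example.com from 10.1.2.3
```

The key design point is that masking happens once, before the fan-out, so every downstream destination receives the same sanitized copy.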
The pricing and the licensing model for Cribl seem quite flexible, although the purchasing was handled by our organization rather than by us directly. Our role has been more on the technical and operational side of using the platform.
Cribl can handle high volumes of diverse data types like logs and metrics quite well. In environments where you're collecting logs from many different sources, the platform is designed to process and route that data efficiently through pipelines. We found its ability to apply filtering, parsing, and transformations at scale useful, as it helps manage large data streams without overwhelming downstream systems like SIEM platforms.
Another useful approach is to leverage the documentation and built-in pipeline functions because Cribl provides many ready-to-use processing capabilities that can save time.
Our advice would be to start by clearly understanding your data pipeline requirements before implementing Cribl. Since it is a very flexible platform, it works best when you know what data you want to keep, what data you want to filter out, and where the data should be routed. We would also recommend starting with a few simple pipelines first, then gradually expanding as you become more comfortable with the platform. We give this review a rating of eight out of ten.
Data workflows have become streamlined as I manage costs and parse diverse sources efficiently
What is our primary use case?
I use Cribl to move data, connecting different data sources to different destinations; that is what I mainly use it for.
I also use it to help parse the data as well.
What is most valuable?
Something that I really appreciate about Cribl is the preview feature. For example, with the JavaScript I'm working on, it shows me the output in real time, which really helps with development.
I also appreciate the preview feature when it comes to data pipelines, as it shows me in real time how my pipeline will work with the data. Additionally, I really appreciate the live capture feature, which gives me an idea of how the data looks at different stages in the Cribl environment.
I think Cribl is an excellent tool for helping to manage data cost and keep it down as well as manage complexity.
What needs improvement?
Cribl has come a long way. I've been using it for three years, but there are still a lot of other features that I would appreciate regarding new data sources. One example would be open WebSockets.
There's currently no native feature for that, so it requires a lot of development time. I would also appreciate better support for JWT tokens in REST API collection. While it sometimes works, it feels janky, like a stitched-together solution. It would be nice if there were a more fully supported option for JWT.
For how long have I used the solution?
I've been working with Cribl for a long time, at least three years, maybe more.
What do I think about the stability of the solution?
Cribl is very robust. It's not perfect, but very good stability.
What do I think about the scalability of the solution?
Cribl is very scalable. The product itself lends itself well to being scaled. Any issues I've had with scaling have mainly just been human issues of people not wanting to scale, but the product itself is very capable of scaling.
How are customer service and support?
The speed was fast. On quality, however, a solution was never provided; I think the issue was a bug, and as far as I know it was never fixed.
Which solution did I use previously and why did I switch?
I use Splunk.
What was our ROI?
From what I understand (I'm mainly on the engineering side, not the sales side), the pricing is very competitive. Although the pricing can be a little high, Cribl as a product helps save a lot of money by reducing data storage, so the pricing is offset by the money I save by using Cribl.
What's my experience with pricing, setup cost, and licensing?
Cribl does require maintenance, especially if I'm deploying it on-premises. If I'm deploying on my own machines, I have to make sure they're provisioned well, that they're being updated successfully, and that worker processing stays balanced across them.
Which other solutions did I evaluate?
I definitely prefer Cribl, mainly for the UI and the preview feature I mentioned, being able to see my input and output in real time during development. I think that speeds things up a lot.
However, I do like Splunk a lot too.
I think Splunk is better tailored for visualizations and presenting to clients, especially around metrics. I think I can do some visualizations and presentations of metrics in Cribl, but it's not as robust as Splunk.
What other advice do I have?
Large corporations would definitely see the most benefit, but I think small and medium businesses could benefit as well.
Log pipelines have reduced daily data volume and now simplify traffic analysis
What is our primary use case?
We generally use Cribl for dropping or optimizing our logs and data. We optimize logs using Cribl pipelines, then we route it to Splunk. That was our primary use case.
Our primary goal with Cribl was to reduce our Cisco firewall logs by dropping the logs that are unnecessary among our traffic-related logs, or logs that generally only show a connection status. Those were the types of logs we dropped using Cribl.
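The drop logic described above amounts to a filter predicate over event types. Here is a minimal JavaScript sketch; the `eventType` values are hypothetical stand-ins for real Cisco message identifiers, and this is not the reviewer's actual pipeline configuration:

```javascript
// Sketch: drop firewall events that only report a connection status.
const DROP_TYPES = new Set(['conn-built', 'conn-teardown']);

function shouldKeep(event) {
  // Keep everything except pure connection-status chatter.
  return !DROP_TYPES.has(event.eventType);
}

const batch = [
  { eventType: 'conn-built', msg: 'TCP connection built' },
  { eventType: 'deny', msg: 'Denied inbound TCP' },
  { eventType: 'conn-teardown', msg: 'Teardown TCP connection' },
];

const kept = batch.filter(shouldKeep);
console.log(kept.map((e) => e.eventType)); // [ 'deny' ]
```

Since connection built/teardown messages typically dominate firewall volume, a predicate this small can account for a large share of the daily reduction the reviewer reports.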
What is most valuable?
What I like most about Cribl is the overall pipeline structure and its ease of use. It is very easy to use, and it provides all the necessary features required for data processing. We do not need to learn many things to do complex tasks, which is what I really appreciate about it. You just need to know your own logic; the rest is often pre-built in Cribl, which provides packs and all the features.
I would say Cribl gives you value for your money. It provides a good user interface where you manage all your data, and you don't need to worry about the backend. Specifically, I'm talking about Cribl Cloud, as I have mostly been working with Cribl Cloud. It's very cost-optimized; whatever I'm paying, I'm getting everything out of it.
What needs improvement?
Overall, the pipelines and all the features are good with Cribl, and the UI is good. When I first started using Cribl, though, I faced an issue where I was not able to connect the nodes or understand how a pipeline is structured and how data is routed through it. I was quite confused about their product lineup, such as Data Lake and pipelines. It's possible I simply hadn't taken any of the university courses at the time, which is why I did not know much. But if they could provide an intro on how to connect nodes, or simple use cases showing what you can do with Cribl, it would help. If you could just add the source and the destination with a proper pre-built workflow, it would be easy for new customers to navigate Cribl.
For how long have I used the solution?
I have been working with Cribl for around one and a half years.
What do I think about the stability of the solution?
I don't feel Cribl has any issue handling high volumes of diverse data types. We were ingesting around 10 TB of data daily and reducing it to around five and a half to six terabytes, so it is pretty efficient. We have not faced any major issues with our ingestion; it has the capability to keep up with the data ingestion rate.
I have not seen any lagging, crashing, or downtime in Cribl at any particular time. The one issue I faced was while capturing logs on a live source: whenever I tried to capture logs, I was unsure whether the logs were actually being captured or whether I was doing something wrong, because it does not show any error if my configuration is missing something. Otherwise, I have no issues with Cribl's performance.
What do I think about the scalability of the solution?
I don't think there is any issue regarding scalability with Cribl. We were ingesting around 10 terabytes of data every day, and it never caused any issues.
Which solution did I use previously and why did I switch?
I would not say I have properly tried an alternative to Cribl. We tried to implement the same use case using Splunk Ingest Processor and Edge Processor, which are recent Splunk products. They are not as straightforward as Cribl; you must work in a restricted environment with limited support for Splunk commands. So I cannot say they are truly similar to Cribl, and I have not used anything else.
What other advice do I have?
I was able to create one simple pipeline with Cribl that just dropped data in around eight to twelve hours total. In that time, I came to understand what routes and pipelines are. I played with the UI and learned how the functions work, how the pipeline moves the data, and how I can duplicate data, drop it, send it to the null queue, and things of that nature.