
Overview
Cribl Product Overview
The way telemetry data was managed over the last 10 years will not work for the next 10. Cribl is purpose-built to meet the unique challenges IT and security teams face.
Cribl.Cloud is the easiest way to try Cribl products in the cloud through a unified platform. Cribl's suite of products gives flexibility and control back to customers. With routing, shaping, enriching, and search functionality that makes data more manageable, you can easily clean up your data, get it where it needs to be, work more efficiently, and ultimately gain the control and confidence needed to be successful.
The Cribl.Cloud suite of products includes:
Stream: A highly scalable data router for data collection, reduction, enrichment, and routing of observability data.
Edge: An intelligent, scalable edge-based data collection system for logs, metrics, and application data.
Lake: Storage that does not lock data in. Cribl Lake is a turnkey data lake that makes it easy and economical to store, access, replay, and analyze data, no expertise needed.
Search: A search feature to perform federated search-in-place queries on any data, in any form.
Getting Started
When you purchase your Cribl.Cloud subscription directly from the AWS Marketplace, you can experience a smooth billing process that you're already familiar with, without needing to set up a separate procurement plan to use Cribl products. Track billing and usage directly in Cribl.Cloud.
Enjoy a quick and easy purchasing experience by utilizing your existing spend commitments through the AWS Enterprise Discount Program (EDP) to subscribe to Cribl.Cloud. Get flexible pricing and terms by purchasing through a private offer. Purchase the Cribl Cloud Suite of offerings at a pre-negotiated price. Contact awsmp@cribl.io or a sales representative for flexible pricing for 12/24/36-month terms.
We are available in US-West-2 (Oregon), US-East-2 (Ohio), US-East-1 (Virginia), CA-Central-1 (Canada Central), EU-West-2 (London), EU-Central-1 (Frankfurt), and AP-Southeast-2 (Sydney) with more regions coming soon! Regional pricing will apply.
To learn more about pricing and the consumption pricing philosophy, visit:
- Cribl Pricing: https://cribl.io/cribl-pricing/
- Cribl.Cloud Simplified with Consumption Pricing (blog): https://cribl.io/blog/cribl-cloud-consumption-pricing/
Highlights
- Fast and easy onboarding - With zero-touch deployment, you can quickly start using Cribl products without the hassle, burden, and cost of managing infrastructure.
- Instant scalability - The cloud provides flexibility to easily scale up or down to meet changing business needs and dynamic data demands.
- Trusted security - Cribl knows how important protecting data is, and built all Cribl products and services from the ground up with security as the top priority. Cribl.Cloud is SOC 2 compliant, ensuring all your data is protected and secure. Cribl.Cloud is currently In Process for FedRAMP IL4.
Details
Introducing multi-product solutions
You can now purchase comprehensive solutions tailored to use cases and industries.
Pricing
Free trial
| Dimension | Description | Cost/12 months |
|---|---|---|
| Cribl.Cloud Free | Cribl.Cloud Suite Free Tier | $0.00 |
| Cribl.Cloud Enterprise | Cribl.Cloud Suite Enterprise with 1 TB daily ingestion | $142,800.00 |
The following dimensions are not included in the contract terms and are charged based on your usage.
| Dimension | Cost/unit |
|---|---|
| Overage Fees | $0.01 |
Vendor refund policy
Cribl will refund prior payments attributable to the unused remainder of your purchase.
Custom pricing options
Legal
Vendor terms and conditions
Content disclaimer
Delivery details
Software as a Service (SaaS)
SaaS delivers cloud-based software applications directly to customers over the internet. You can access these applications through a subscription model. You will pay recurring monthly usage fees through your AWS bill, while AWS handles deployment and infrastructure management, ensuring scalability, reliability, and seamless integration with other AWS services.
Additional details
Usage instructions
Cribl Cloud Trust IAM Role CloudFormation Template
This CloudFormation template creates an IAM role that allows Cribl Cloud to access specific AWS resources in your account. The role is designed to provide Cribl Cloud with the necessary permissions to interact with S3 buckets and SQS queues.
Template Overview
The template does the following:
- Creates an IAM role named CriblTrustCloud
- Configures a trust relationship with Cribl Cloud's AWS account
- Attaches a policy that grants access to S3 and SQS resources
- Outputs the role name, ARN, and an external ID for authentication
Parameters
- CriblCloudAccountID: The AWS account ID of Cribl Cloud (default: '012345678910')
IAM Role Details
Trust Relationship
The role trusts two specific roles in the Cribl Cloud account:
- arn:aws:iam::{CriblCloudAccountID}:role/search-exec-main
- arn:aws:iam::{CriblCloudAccountID}:role/main-default
These roles can assume the CriblTrustCloud role using the sts:AssumeRole, sts:TagSession, and sts:SetSourceIdentity actions.
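Assembled from the description above, the role's trust policy would look roughly like this CloudFormation YAML (a sketch reconstructed from the prose, not the vendor's exact template):

```yaml
# Trust policy sketch based on the description above;
# the shipped template may differ in detail.
AssumeRolePolicyDocument:
  Version: '2012-10-17'
  Statement:
    - Effect: Allow
      Principal:
        AWS:
          - !Sub 'arn:aws:iam::${CriblCloudAccountID}:role/search-exec-main'
          - !Sub 'arn:aws:iam::${CriblCloudAccountID}:role/main-default'
      Action:
        - sts:AssumeRole
        - sts:TagSession
        - sts:SetSourceIdentity
```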
Permissions
The role has a policy named CriblCloudS3SQSPolicy that grants the following permissions:
- S3 access:
- List buckets
- Get and put objects
- Get bucket location
- SQS access:
- Receive and delete messages
- Change message visibility
- Get queue attributes and URL
These permissions apply to all S3 buckets and SQS queues in the account.
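Those grants could be expressed as an inline policy along these lines (a sketch based on the list above; the broad `Resource: '*'` mirrors the statement that the permissions apply to all buckets and queues, and you may want to scope it down):

```yaml
# Sketch of CriblCloudS3SQSPolicy as described above.
# Resource: '*' reflects "all S3 buckets and SQS queues in the account"
# and should be narrowed to specific ARNs where possible.
Policies:
  - PolicyName: CriblCloudS3SQSPolicy
    PolicyDocument:
      Version: '2012-10-17'
      Statement:
        - Effect: Allow
          Action:
            - s3:ListBucket
            - s3:GetObject
            - s3:PutObject
            - s3:GetBucketLocation
          Resource: '*'
        - Effect: Allow
          Action:
            - sqs:ReceiveMessage
            - sqs:DeleteMessage
            - sqs:ChangeMessageVisibility
            - sqs:GetQueueAttributes
            - sqs:GetQueueUrl
          Resource: '*'
```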
Security Feature
The template includes a security feature that requires an external ID for authentication. This external ID is derived from the CloudFormation stack ID, providing an additional layer of security when assuming the role.
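In CloudFormation intrinsic-function terms, deriving an external ID from the stack ID can be written as follows (a sketch; stack IDs have the form `arn:aws:cloudformation:region:account:stack/name/uuid`, so this selects the final hyphen-separated segment of the stack UUID):

```yaml
# Example: for .../stack/my-stack/51af3dc0-da77-11e4-872e-1234567db123
# the condition value would be the UUID's last segment.
Condition:
  StringEquals:
    'sts:ExternalId': !Select [4, !Split ['-', !Select [2, !Split ['/', !Ref 'AWS::StackId']]]]
```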
Outputs
The template provides three outputs:
- RoleName: The name of the created IAM role
- RoleArn: The ARN of the created role
- ExternalId: The external ID required for authentication when assuming the role
Usage
To use this template:
- Deploy it in your AWS account using CloudFormation
- Provide the resulting role ARN and external ID to Cribl Cloud
- Cribl Cloud can then assume this role to access your S3 and SQS resources
Remember to review and adjust the permissions as necessary to align with your security requirements and the specific needs of your Cribl Cloud integration.
Enable CloudTrail and VPC Flow Logging for Cribl Cloud
This document explains the resources that will be created when deploying the provided CloudFormation template. The template is designed to create an IAM role that trusts Cribl Cloud and sets up CloudTrail and VPC Flow logging to an S3 bucket.
Template Overview
The template automates the creation of AWS resources to enable centralized logging, specifically focusing on CloudTrail logs and VPC Flow Logs. It creates S3 buckets for storing these logs, SQS queues for triggering processes upon log arrival, and an IAM role to allow Cribl Cloud to access these logs.
Resources Created
Here's a breakdown of the resources defined in the CloudFormation template:
- CriblCTQueue (AWS::SQS::Queue): Creates an SQS queue named according to the CTSQS parameter (default: cribl-cloudtrail-sqs). This queue is used to trigger actions when new CloudTrail logs are written to the S3 bucket.
  - Properties:
    - QueueName: !Ref CTSQS - Sets the queue name to the value of the CTSQS parameter.
- CriblCTQueuePolicy (AWS::SQS::QueuePolicy): Defines the policy for the CriblCTQueue, allowing s3.amazonaws.com to send messages to the queue. The policy includes a condition that the source account must match the AWS account in which the stack is deployed, ensuring only S3 events from the current AWS account can trigger the queue.
  - Properties:
    - PolicyDocument:
      - Statement:
        - Effect: Allow - Allows the actions specified in the policy.
        - Principal: Service: s3.amazonaws.com - Specifies the service that can perform the actions.
        - Action: SQS:SendMessage - Allows sending messages to the queue.
        - Resource: !GetAtt CriblCTQueue.Arn - The ARN of the SQS queue.
        - Condition: StringEquals: 'aws:SourceAccount': !Ref 'AWS::AccountId' - Restricts the source account to the account where the stack is deployed.
    - Queues: !Ref CTSQS - Associates the policy with the SQS queue.
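Put together, the queue and its policy might look like the following (a sketch reconstructed from the description above):

```yaml
CriblCTQueue:
  Type: AWS::SQS::Queue
  Properties:
    QueueName: !Ref CTSQS

CriblCTQueuePolicy:
  Type: AWS::SQS::QueuePolicy
  Properties:
    Queues:
      - !Ref CTSQS
    PolicyDocument:
      Statement:
        - Effect: Allow
          Principal:
            Service: s3.amazonaws.com
          Action: SQS:SendMessage
          Resource: !GetAtt CriblCTQueue.Arn
          Condition:
            StringEquals:
              # Only S3 events from the deploying account may send messages
              'aws:SourceAccount': !Ref 'AWS::AccountId'
```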
- TrailBucket (AWS::S3::Bucket): Creates an S3 bucket used to store CloudTrail logs. The bucket is configured with a NotificationConfiguration that sends an event to the CriblCTQueue when a new object is created via a PUT operation, triggering processing when new CloudTrail logs are available.
  - DependsOn: CriblCTQueuePolicy - Ensures that the queue policy is created before the bucket.
  - Properties:
    - NotificationConfiguration:
      - QueueConfigurations:
        - Event: s3:ObjectCreated:Put - Triggers the notification when an object is created using a PUT operation.
        - Queue: !GetAtt CriblCTQueue.Arn - The ARN of the SQS queue to send the notification to.
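The bucket-to-queue wiring described above might be written as (a sketch based on the description):

```yaml
TrailBucket:
  Type: AWS::S3::Bucket
  # Queue policy must exist before the bucket can deliver notifications
  DependsOn: CriblCTQueuePolicy
  Properties:
    NotificationConfiguration:
      QueueConfigurations:
        - Event: s3:ObjectCreated:Put
          Queue: !GetAtt CriblCTQueue.Arn
```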
- TrailBucketPolicy (AWS::S3::BucketPolicy): Defines the policy for the TrailBucket, which:
  - Allows delivery.logs.amazonaws.com to write objects to the bucket, ensuring proper log delivery; requires the bucket-owner-full-control ACL.
  - Allows cloudtrail.amazonaws.com to get the bucket ACL and put objects into the bucket, also requiring the bucket-owner-full-control ACL.
  - Includes a Deny statement that enforces the use of SSL for all requests to the bucket, enhancing security.
  - Properties:
    - Bucket: !Ref TrailBucket - The name of the S3 bucket.
    - PolicyDocument:
      - Version: 2012-10-17
      - Statement:
        - Sid: AWSLogDeliveryWrite - Effect: Allow; Principal: Service: delivery.logs.amazonaws.com; Action: s3:PutObject; Resource: !Sub '${TrailBucket.Arn}/AWSLogs/'; Condition: StringEquals: 's3:x-amz-acl': bucket-owner-full-control.
        - Sid: AWSCloudTrailAclCheck - Effect: Allow; Principal: Service: cloudtrail.amazonaws.com; Action: s3:GetBucketAcl; Resource: !Sub '${TrailBucket.Arn}'.
        - Sid: AWSCloudTrailWrite - Effect: Allow; Principal: Service: cloudtrail.amazonaws.com; Action: s3:PutObject; Resource: !Sub '${TrailBucket.Arn}/AWSLogs/*/*'; Condition: StringEquals: 's3:x-amz-acl': bucket-owner-full-control.
        - Sid: AllowSSLRequestsOnly - Effect: Deny; Principal: *; Action: s3:*; Resource: !GetAtt TrailBucket.Arn and !Sub '${TrailBucket.Arn}/*'; Condition: Bool: 'aws:SecureTransport': false - Denies requests that are not using SSL.
- ExternalTrail (AWS::CloudTrail::Trail): Creates a CloudTrail trail configured to store logs in the TrailBucket, include global service events, enable logging, span all regions, and enable log file validation.
  - DependsOn: TrailBucket, TrailBucketPolicy
  - Properties:
    - S3BucketName: !Ref TrailBucket - The name of the S3 bucket where the logs will be stored.
    - IncludeGlobalServiceEvents: true - Includes global service events.
    - IsLogging: true - Enables logging.
    - IsMultiRegionTrail: true - Creates a multi-region trail.
    - EnableLogFileValidation: true - Enables log file validation.
    - TrailName: !Sub '${TrailBucket}-trail' - Sets the name of the trail.
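A sketch of the trail resource as described above:

```yaml
ExternalTrail:
  Type: AWS::CloudTrail::Trail
  # Bucket and its policy must exist before CloudTrail can deliver logs
  DependsOn:
    - TrailBucket
    - TrailBucketPolicy
  Properties:
    TrailName: !Sub '${TrailBucket}-trail'
    S3BucketName: !Ref TrailBucket
    IncludeGlobalServiceEvents: true
    IsLogging: true
    IsMultiRegionTrail: true
    EnableLogFileValidation: true
```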
- CriblVPCQueue (AWS::SQS::Queue): Creates an SQS queue named according to the VPCSQS parameter (default: cribl-vpc-sqs). This queue is used to trigger actions when new VPC Flow Logs are written to the S3 bucket.
  - Properties:
    - QueueName: !Ref VPCSQS - Sets the queue name.
- CriblVPCQueuePolicy (AWS::SQS::QueuePolicy): Defines the policy for the CriblVPCQueue, allowing s3.amazonaws.com to send messages to the queue. Similar to CriblCTQueuePolicy, it restricts access to events originating from the same AWS account.
  - Properties:
    - PolicyDocument:
      - Statement:
        - Effect: Allow
        - Principal: Service: s3.amazonaws.com
        - Action: SQS:SendMessage
        - Resource: !GetAtt CriblVPCQueue.Arn
        - Condition: StringEquals: 'aws:SourceAccount': !Ref 'AWS::AccountId'
    - Queues: !Ref VPCSQS
- LogBucket (AWS::S3::Bucket): Creates an S3 bucket used to store VPC Flow Logs. The bucket is configured with a NotificationConfiguration that sends an event to the CriblVPCQueue when new objects are created.
  - DependsOn: CriblVPCQueuePolicy
  - Properties:
    - NotificationConfiguration:
      - QueueConfigurations:
        - Event: s3:ObjectCreated:Put
        - Queue: !GetAtt CriblVPCQueue.Arn
- LogBucketPolicy (AWS::S3::BucketPolicy): Defines the policy for the LogBucket, which:
  - Allows delivery.logs.amazonaws.com to write objects to the bucket, requiring the bucket-owner-full-control ACL.
  - Allows delivery.logs.amazonaws.com to get the bucket ACL.
  - Enforces SSL for all requests to the bucket.
  - Properties:
    - Bucket: !Ref LogBucket
    - PolicyDocument:
      - Version: 2012-10-17
      - Statement:
        - Sid: AWSLogDeliveryWrite - Effect: Allow; Principal: Service: delivery.logs.amazonaws.com; Action: s3:PutObject; Resource: !Sub '${LogBucket.Arn}/AWSLogs/${AWS::AccountId}/*'; Condition: StringEquals: 's3:x-amz-acl': bucket-owner-full-control.
        - Sid: AWSLogDeliveryAclCheck - Effect: Allow; Principal: Service: delivery.logs.amazonaws.com; Action: s3:GetBucketAcl; Resource: !GetAtt LogBucket.Arn.
        - Sid: AllowSSLRequestsOnly - Effect: Deny; Principal: *; Action: s3:*; Resource: !GetAtt LogBucket.Arn and !Sub '${LogBucket.Arn}/*'; Condition: Bool: 'aws:SecureTransport': false.
- FlowLog (AWS::EC2::FlowLog): Creates a VPC Flow Log that captures network traffic information for the VPC specified in the VPCId parameter. The flow logs are stored in the LogBucket, and the traffic captured is determined by the TrafficType parameter (ALL, ACCEPT, or REJECT).
  - Properties:
    - LogDestination: !Sub 'arn:${AWS::Partition}:s3:::${LogBucket}' - The ARN of the S3 bucket where the flow logs will be stored.
    - LogDestinationType: s3 - Specifies that the destination is an S3 bucket.
    - ResourceId: !Ref VPCId - The ID of the VPC to log.
    - ResourceType: VPC - Specifies that the resource is a VPC.
    - TrafficType: !Ref TrafficType - The type of traffic to log.
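A sketch of the flow-log resource as described above:

```yaml
FlowLog:
  Type: AWS::EC2::FlowLog
  Properties:
    LogDestination: !Sub 'arn:${AWS::Partition}:s3:::${LogBucket}'
    LogDestinationType: s3
    ResourceId: !Ref VPCId
    ResourceType: VPC
    TrafficType: !Ref TrafficType  # ALL, ACCEPT, or REJECT
```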
- CriblTrustCloud (AWS::IAM::Role): Creates an IAM role that allows Cribl Cloud to access AWS resources.
  - Properties:
    - AssumeRolePolicyDocument:
      - Version: 2012-10-17
      - Statement:
        - Effect: Allow
        - Principal: AWS:
          - !Sub 'arn:aws:iam::${CriblCloudAccountID}:role/search-exec-main'
          - !Sub 'arn:aws:iam::${CriblCloudAccountID}:role/main-default'
        - Action: sts:AssumeRole, sts:TagSession, sts:SetSourceIdentity
        - Condition: StringEquals: 'sts:ExternalId': !Select [4, !Split ['-', !Select [2, !Split ['/', !Ref 'AWS::StackId']]]]
    - Description: Role to provide access to AWS resources from Cribl Cloud Trust
    - Policies:
      - PolicyName: SQS - The PolicyDocument (Version: 2012-10-17) allows sqs:ReceiveMessage, sqs:DeleteMessage, sqs:GetQueueAttributes, and sqs:GetQueueUrl on !GetAtt CriblCTQueue.Arn and !GetAtt CriblVPCQueue.Arn.
      - PolicyName: S3EmbeddedInlinePolicy - The PolicyDocument (Version: 2012-10-17) allows s3:ListBucket, s3:GetObject, s3:PutObject, and s3:GetBucketLocation on !Sub ${TrailBucket.Arn}, !Sub ${TrailBucket.Arn}/*, !Sub ${LogBucket.Arn}, and !Sub ${LogBucket.Arn}/*.
Parameters
The template uses parameters to allow customization during deployment:
- CriblCloudAccountID: The AWS account ID of the Cribl Cloud instance, required for the IAM role's trust relationship.
  - Description: Cribl Cloud Trust AWS Account ID. Navigate to Cribl.Cloud, go to Workspace, and click Access. Find the Trust and copy the AWS account ID found in the trust ARN.
  - Type: String
  - Default: '012345678910'
- CTSQS: The name of the SQS queue for CloudTrail logs.
  - Description: Name of the SQS queue for CloudTrail to trigger for S3 log retrieval.
  - Type: String
  - Default: cribl-cloudtrail-sqs
- TrafficType: The type of traffic to log for VPC Flow Logs.
  - Description: The type of traffic to log.
  - Type: String
  - Default: ALL
  - AllowedValues: ACCEPT, REJECT, ALL
- VPCSQS: The name of the SQS queue for VPC Flow Logs.
  - Description: Name of the SQS queue for VPC Flow Logs.
  - Type: String
  - Default: cribl-vpc-sqs
- VPCId: The ID of the VPC for which to enable flow logging.
  - Description: Select your VPC to enable logging.
  - Type: AWS::EC2::VPC::Id
Outputs
The template defines outputs that provide key information about the created resources:
- CloudTrailS3Bucket: The ARN of the S3 bucket storing CloudTrail logs (Value: !GetAtt TrailBucket.Arn).
- VPCFlowLogsS3Bucket: The ARN of the S3 bucket storing VPC Flow Logs (Value: !GetAtt LogBucket.Arn).
- RoleName: The name of the created IAM role (Value: !Ref CriblTrustCloud).
- RoleArn: The ARN of the created IAM role (Value: !GetAtt CriblTrustCloud.Arn).
- ExternalId: The external ID used for authentication when assuming the IAM role (Value: !Select [4, !Split ['-', !Select [2, !Split ['/', !Ref 'AWS::StackId']]]]).
Deployment Considerations
- Cribl Cloud Account ID: Ensure the CriblCloudAccountID parameter is set to the correct AWS account ID for your Cribl Cloud instance. This is crucial for establishing the trust relationship.
- S3 Bucket Names: S3 bucket names must be globally unique. If the template is deployed multiple times in the same region, you may need to adjust the names of the buckets. Consider using a Stack name prefix.
- VPC ID: The VPCId parameter should be set to the ID of the VPC for which you want to enable flow logging.
- Security: Regularly review and update IAM policies to adhere to the principle of least privilege. Consider using more restrictive S3 bucket policies if necessary.
- SQS Queue Configuration: Monitor the SQS queues for backlog and adjust the processing capacity accordingly.
- CloudTrail Configuration: Confirm that CloudTrail is properly configured to deliver logs to the designated S3 bucket.
- VPC Flow Log Configuration: Verify that VPC Flow Logs are correctly capturing network traffic.
- External ID: The External ID is a critical security measure for cross-account access. Make sure it's correctly configured in both AWS and Cribl Cloud.
This detailed explanation provides a comprehensive understanding of the resources created by the CloudFormation template, enabling informed deployment and management. Remember to adapt parameters to your specific environment and security requirements.
Resources
Vendor resources
Support
Vendor support
AWS infrastructure support
AWS Support is a one-on-one, fast-response support channel that is staffed 24x7x365 with experienced and technical support engineers. The service helps customers of all sizes and technical abilities to successfully utilize the products and features provided by Amazon Web Services.
FedRAMP
GDPR
HIPAA
ISO/IEC 27001
PCI DSS
SOC 2 Type 2
Standard contract
Customer reviews
Centralized data pipelines have reduced daily log volumes and optimized observability workflows
What is our primary use case?
I use Cribl for optimizing Splunk data. For example, I have approximately 10 TB of daily data integrations. I route the data through Cribl, optimize it, and index it into Splunk, reducing it by 30 to 40 percent. For instance, at 10 TB of integrations, it becomes 5 TB after Cribl optimization. I use Cribl for firewall logs, event logs, Windows logs, metrics logs, and EDR logs.
What is most valuable?
The feature I appreciate is the connection between Splunk and Cribl, which is very useful for routing data and pipeline filtering. Cribl has a central management system that controls all data pipelines and configurations.
Cribl works centrally by using the main Cribl instance to manage configurations, pipelines, routes, and all worker nodes. The leader node acts as a central node and manages pipelines, routes, packs, and configurations while distributing them to the worker nodes. The worker nodes process the actual logs and send the processed logs to destinations such as Splunk, S3, and other SIEM tools.
What needs improvement?
Cribl pricing is a concern. Cribl Stream is very powerful but costly, as it scales with data volume. For large and heavy systems, it becomes pricey compared to other similar tools. While it is flexible, it is not beginner-friendly: pipeline routes and transforms can feel complex at first.
For how long have I used the solution?
I have been using Cribl for my business for the last 1.5 years.
What do I think about the stability of the solution?
Sometimes Cribl goes down, and we miss logs during that time, which is an issue. Downtime is the only issue I face with Cribl; otherwise, we do not have any other problems. When there is downtime, we cannot get logs into Splunk, and the alerts based on those logs trigger repeatedly, creating multiple incidents and sending emails to our customers, which is very problematic.
What do I think about the scalability of the solution?
Cribl is excellent for scalability. It is good overall for pipeline maintenance, horizontal scaling, distributed architecture, parallel pipelines, and load balancing. We handle real-time data ranging from several GB to one TB per day, which is a very high volume for observability pipelines. Multiple pipelines run at once, and different data sources process independently. There are no single bottlenecks, and managing configuration is straightforward. Overall, it is long-lasting and good for stability and scalability.
Which solution did I use previously and why did I switch?
As of now, I do not use any alternative to Cribl.
How was the initial setup?
The initial setup is moderate, neither too hard nor too easy. For beginners, it is moderate; for experienced people, it is very easy. One person is enough for a Cribl deployment if you do not have a very large environment; otherwise, you need different types of people for a large-scale environment.
What about the implementation team?
All the nodes and components can be deployed from start to end within a certain timeframe. A quick setup following the official guide from the documentation takes approximately one hour. Normally, production setup takes one to three days. The breakdown is approximately two days for deployment and configuration, and the third and fourth days for pipelines and testing. A full enterprise deployment at a much higher level takes one to four weeks, depending on the difficulties and architecture involved.
What's my experience with pricing, setup cost, and licensing?
For the current user at a small level, the pricing is good. At a large level, it is not too heavy. The main model of pricing is based on data integrations at approximately $0.32 per GB for ST enterprise estimate. This is good and not too high or too low, falling within a medium-level range.
Filtering has reduced daily data volumes and central routing now simplifies log management
What is our primary use case?
We work on Splunk, so we use Cribl. Our company works with a system where approximately 12 to 15 TB of data comes into Splunk daily. We don't store the data directly in Splunk; instead, we use Cribl first, which removes unnecessary data and keeps the important data, reducing the size.
What is most valuable?
My favorite feature is that Cribl is connected with Splunk very easily and it routes the data. The filtering is the most important feature because it removes unwanted logs, and the central control manages everything from one place. Cribl provides pipelines, which process the data step-by-step, so all the features are very useful.
What needs improvement?
It is very difficult to learn as a beginner.
For how long have I used the solution?
I have been using Cribl for four months.
What do I think about the stability of the solution?
I sometimes experience downtime, and by that, we sometimes miss logs, which creates a problem, but not for a long time. Sometimes we face these issues.
How are customer service and support?
I have a very good experience with customer support. When we are in trouble, they give us fast responses and good responses, which is very useful for us.
How was the initial setup?
The initial deployment when I first started using Cribl was not that difficult. As a beginner, I think it is a little difficult, but once you start learning and become an experienced user, it is very easy. One person can handle the whole setup without needing a large team.
What other advice do I have?
Cribl's interface is very good, and it is easy to understand how to use Cribl. When I started to use Cribl, it wasn't that difficult to learn. I learned how to pass the data into Cribl, so it is easy. Cribl has a good user interface, which makes work easier for me. I would rate this product a 9 out of 10.
Log optimization has reduced ingestion costs and simplifies routing of critical security data
What is our primary use case?
Cribl is used daily in our main use case. For example, our client had application logs where 60 to 70% of logs were just debug messages, which were mostly unusual or not required for any use case of the customer or purpose. We use Cribl to drop those logs, keeping only error and warning logs. This alone reduces their ingestion by about 20 to 30%. Mostly, this is the use case of the customer: reducing their ingestion so they could get a lower cost on the particular platform or SIEM tools.
Cribl's ability to contain data cost and complexity is very great. Cribl Stream is the main product that I have used extensively. Mostly, I have used Cribl Stream only, not Cribl Edge and other things, but Cribl Stream is what the client required right now. We could also modify the logs. Suppose a long log is coming, and it is complex to read, and there are many more fields in it, but they are not required for everything. Suppose there is a message coming in the logs that we don't need. Only the error message field and where the log came from is required. We could reduce the log's complexity, and we could send only the required number of fields from the logs. This way, Cribl Stream is useful for my use case for the client: to reduce the complexity of the log and send it to the SIEM tool.
My thoughts on Cribl's ability to handle high volumes of diverse data types, like metrics and logs, are positive. That's what I said about the use case of the client. A more complex log is coming and a long log is coming. Windows Events logs come with a lot of lines, and there are many more fields that are not required for any use case or anything else. They are just messages that this log came from this and that. If we don't want that kind of log or that kind of long log, you could shorten it down with Cribl Stream and pipelines. We could send it to the particular SIEM tool. That way, it reduces the complexity of the log and also reduces the licensing cost of the SIEM tool. We could also reduce the noise of the logs and duplicate logs. Suppose one event is sending two to three logs, and they are the same kind of logs, we could drop them, de-dupe them, and send only one kind of log.
What is most valuable?
Cribl is user-friendly, which I think is one of its most valuable features. When I learned from my senior and Cribl University, it was user-friendly at first sight. The UI of Cribl is very good for new users, a particular client, or anybody to understand. The pipeline feature is very great. If a client or security team only needs security logs, we can send security logs to Splunk, and we can send application logs to, say, AWS; you can separate them via Cribl.
Suppose we are getting lots of logs, but we want to separate it from one platform; you could separate it and send it to different platforms if you want. This is the feature of Cribl I appreciate the most.
The Search in Place feature of Cribl is very impressive. Suppose we have Kubernetes logs coming from Kubernetes or an AWS server, and we are sending them via Cribl to Splunk. We can also search the logs from the source. There are three stages in this flow: source, destination, and the pipeline between them. Suppose we are getting original logs from the source and modifying them through the pipeline. If we want to debug and check how a log is being modified, the search feature works on both the destination and the source, so we can see both the original and the modified log. This feature of Cribl is very great.
What needs improvement?
Cribl could improve certain areas, such as the learning curve, which is a kind of drawback. They don't provide the labs; they just provide the video lectures from Cribl University. The user needs to perform everything by themselves: test, trial, and error, like brute force. Suppose one thing is not working, they need to check out another thing. That learning curve should be improved. Next is the debugging of the pipeline. If a pipeline becomes complex with multiple rules, sometimes it's hard to figure out exactly what is breaking or where exactly the thing is breaking. So, they could improve in that area. I'm not saying that the pipeline should be less loaded; I'm saying that if a pipeline is very complex and includes a lot of functions and there is some error in one function, the debugging is very difficult. We could find out the errors from the pipeline more easily.
In a high-volume environment, Cribl needs proper CPU and memory, so infrastructure sizing is important. They should provide proper sizing documentation. For Cribl Cloud, it is great. For on-prem deployments, such as on an AWS server, they should document that if your ingestion is 10 to 50 GB, you should use this many CPUs and this much memory.
For how long have I used the solution?
I have been working with Cribl for about seven months.
What do I think about the stability of the solution?
I find Cribl very stable and reliable; it is developed in the Go language and is a very stable product. It provides faster ingestion. There are many products like Cribl, but it is known for its faster ingestion response.
What do I think about the scalability of the solution?
I believe Cribl is very scalable. Suppose a client has only one employee who can handle Cribl, it can be handled by him. They just take the consultant service. Suppose I am from Data Elicit, and one client is coming and they take the consultant services from us. At that point, at the first stage, suppose some customer only needs the ingestion. Suppose they are converting from ONUM and they wanted the whole infrastructure to be converted into Cribl. They will use the consultant services, and at that stage, they only need the ingestion of that infrastructure from ONUM to Cribl. If they just take the consultant services and after the ingestion and the infrastructure is built, they only need one or two engineers to handle the pipeline for debugging purposes and for more ingestion. They could be handled by one or two of their employees, we can say. So they don't need a large workforce for Cribl in particular. As it is easy, and if they take the consultant services, we will also handle it for them. It is easy to adapt, we can say. It has a wide functionality of integration. It gets more integrations. There are about 1,000 to 1,500 plus integrations available in Cribl. So it can be adapted by any client. It also provides custom integration, so any use case of the client can be included in Cribl.
How are customer service and support?
I communicate with Cribl's technical support or customer service fairly often. In one or two cases where I could not resolve an issue in a pipeline, I raised a support case, and they provided support within 24 to 48 hours. Their support system is very good.
What was our ROI?
I have seen a decrease in firewall logs with Cribl, and customers have benefited a lot in terms of cost. Customers had 60 to 70% of logs that were unusual and not useful for their use cases. Those were dropped through Cribl, and only the errors, warnings, and other important logs were sent to their SIEM tool. If the SIEM tool is Splunk, the ingestion volume drops, so the overall Splunk licensing cost is reduced.
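To illustrate the kind of reduction described above, here is a minimal Python sketch (not Cribl's actual pipeline syntax; the severity names are assumptions): drop routine events and forward only errors, warnings, and other important logs to the SIEM.

```python
# Hypothetical events; in practice these would stream from a firewall.
events = [
    {"severity": "info"},
    {"severity": "error"},
    {"severity": "info"},
    {"severity": "warning"},
    {"severity": "info"},
]

KEEP = {"error", "warning", "critical"}  # assumed severity names worth keeping

# Forward only the important events; everything else is dropped.
forwarded = [e for e in events if e["severity"] in KEEP]
dropped_pct = 100 * (len(events) - len(forwarded)) // len(events)
print(f"{dropped_pct}% of events dropped before the SIEM")  # 60% of events dropped before the SIEM
```

With the sample data above, 3 of 5 events are dropped, which is the same order of reduction (60 to 70%) reported in the review.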
What other advice do I have?
I started my journey with Cribl when a senior colleague taught me. He gave me a roadmap for Cribl, then gave me access to Cribl University and told me to start with the Cribl User certification first; Admin comes after that. Right now, I am about to complete the Cribl User certification. Cribl University gives you an overview of Cribl. Whenever I got stuck, my senior was there beside me to help, so I faced fewer difficulties. Someone starting with just Cribl University would face more problems because there is no proper lab environment. In my case, my senior gave me the roadmap, so I knew how to start, where to end, and how to debug. Cribl University is fine for overall knowledge of Cribl, but for practical knowledge you need to implement things yourself. They do not provide a lab structure or infrastructure where you can gain practical knowledge and learn how to debug, and they should. In my case it was less difficult because I had a proper roadmap. My overall rating for Cribl is nine out of ten.
Data pipelines have reduced log noise and now route critical observability events efficiently
What is our primary use case?
My primary use case for Cribl is to manage and optimize observability data before routing it to different destinations. I deal with a very large volume of logs coming from multiple sources, including system logs, application logs, and security-related logs. Using Cribl, I can filter unnecessary logs, transform data as required, and route important data to the appropriate destinations. This helps me reduce data volume and improve performance. I also use pipeline configurations to control how logs flow through the entire system, which makes it easy to maintain data consistency and manage large log volumes across different environments.
What is most valuable?
The most valuable thing or feature for me in Cribl is data routing and pipeline flexibility. Cribl allows me to define how data should be processed, filtered, and routed to different destinations. One of the things I also find very useful is edge processing, which allows me to process data closer to the source, which helps reduce unnecessary data and improve performance. Overall, flexibility and control over observability data are the things I appreciate most about Cribl.
Cribl handles large log volumes very efficiently through its pipeline-based architecture, which I find most useful. It allows me to transform data through routing and filtering before sending it to downstream systems. When dealing with large volumes of logs, I can define pipelines that drop unnecessary fields and remove duplicate logs; there can be so many duplicates and redundancies that filtering them out significantly reduces the overall data volume. Routing is another helpful capability: it lets me send different types of logs to different destinations and prioritize the fields I want. For example, critical logs can be sent to one destination while lower-priority logs are stored elsewhere. This works very effectively in large-scale log environments. Cribl also supports horizontal scaling, where I can add more worker nodes to handle increasing log volumes, so my performance remains stable even as log ingestion increases.
I have seen a decrease in logs by using pipelines, which helps me decrease logs by filtering and optimizing data before sending it downstream. For firewall logs specifically, I have seen that it helps reduce volume by filtering unnecessary or repetitive events. When a firewall device generates a large number of logs or deny logs, many of which are repetitive or not always useful, Cribl filters out the low-priority logs such as allowed traffic and routine events. I remove the unnecessary fields from firewall logs, which reduces the log size.
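The firewall-log reduction described above can be sketched in a few lines of Python. This is an assumed illustration, not Cribl configuration, and the field names (`action`, `raw_packet`, `nat_details`) are hypothetical: routine "allow" events are dropped entirely, and low-value fields are stripped from the events that remain.

```python
def trim(event):
    """Drop routine allowed-traffic events; strip unneeded fields from the rest."""
    if event.get("action") == "allow":           # routine allowed traffic: drop
        return None
    for field in ("raw_packet", "nat_details"):  # assumed low-value fields: remove
        event.pop(field, None)
    return event

deny_event = {"action": "deny", "src": "10.0.0.5", "raw_packet": "...", "nat_details": {}}
allow_event = {"action": "allow", "src": "10.0.0.6"}

print(trim(allow_event))  # None
print(trim(deny_event))   # {'action': 'deny', 'src': '10.0.0.5'}
```

Dropping whole events reduces event count, while removing fields shrinks the size of each surviving event; the review describes Cribl doing both.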
What needs improvement?
The main downside of Cribl is that it is not very beginner-friendly. They could include tutorials or something more interactive for beginners. For experienced users, it works well. The learning curve is significant; learning Cribl from the initial stage for someone who doesn't have any background knowledge may be difficult. Since it offers lots of flexibility with pipelines and routing, it can take time for beginners to understand how everything works properly and to complete the configuration. The initial setup is also a little complex. Additionally, Cribl has limited built-in analytics compared to dedicated monitoring tools.
For how long have I used the solution?
I have been working with Cribl for between one year and one and a half years.
How are customer service and support?
Technical support is very helpful. My experience with Cribl support has always been positive, and they do not delay responses. The documentation covers almost everything for my use cases, especially all the major features. Any issues I encountered, I was able to resolve mostly through documentation and community resources without needing to contact support directly. Where technical clarification was required, the available resources, including guides and best-practice examples, were quite helpful. The support ecosystem around Cribl is very good, and most issues are resolved quickly.
Which solution did I use previously and why did I switch?
I was previously using Splunk, mostly for storing, searching, and analyzing logs. Once I discovered Cribl, I found it more useful for managing, filtering, and routing data through pipelines, with real flexibility, before sending it to destinations or monitoring tools. Cribl sits between a data source and an analytics tool, which helps me reduce data flow, save time, and optimize data volume. If I had to choose between Splunk and Cribl for filtering and routing, I would obviously choose Cribl. For analyzing and searching, I continue to use Splunk.
How was the initial setup?
The initial deployment of Cribl is not very beginner-friendly. Beginners will find that they have to study it first and get to know everything about it, but once they are used to it, they will find it a very useful tool. For an experienced user who knows the relevant terms, it is very easy.
What's my experience with pricing, setup cost, and licensing?
For cost optimization, Cribl's pricing is moderate. I will not say it is too high or too low.
Which other solutions did I evaluate?
For something similar to Cribl, I have used Splunk.
What other advice do I have?
The maintenance for Cribl is relatively minimal. Most of the time, I focus on monitoring pipelines, which is manual work: I check the data flow and make small adjustments as needed. Adding new log sources or anything similar is also manual. I review pipeline configurations to ensure logs are being filtered and routed correctly, and if log formats change or new data sources appear, I update the pipelines accordingly. I always monitor system performance and make sure the worker nodes are running properly; if the volume of logs increases, I scale the nodes to handle the load. Overall, maintenance on my side is minimal. Once the pipelines and configurations are done, Cribl runs very smoothly with minimal manual intervention. I would rate this solution nine out of ten.
Data routing has improved precision and flexibility while pricing and alerting still need work
What is our primary use case?
I use Cribl as our data ingestion source, with Cribl Edge agents installed across all servers. Cribl is used at the pipeline or routing level to send data to our SIEM platform.
Firewall logs are sent to Cribl, and Cribl routes specific logs to our SIEM tool while sending others to archive storage. This segregation and separation capability is not possible with any other tool, which makes me very satisfied. However, Cribl charges us for all firewall logs that it observes, not just what it processes and outputs.
What is most valuable?
Cribl performs parsing and field reduction exceptionally well, cutting down unnecessary fields and delivering only the right data. However, Cribl charges for everything it sees rather than just what it parses. We might ingest a large volume of data but only process about forty percent of it, yet we are charged for one hundred percent of the data ingested into Cribl.
The ability to bifurcate or trifurcate data and send it to multiple destinations is a feature we love. I have been a Splunk user for over eight years, and this is something Splunk did not have until Cribl introduced it specifically for this purpose.
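The bifurcation described above can be illustrated with a small Python sketch (assumed logic, not Cribl's routing configuration, and the severity names are hypothetical): every event is archived in full for cheap storage and replay, while only high-severity events are also routed to the SIEM destination.

```python
def route(event, siem, archive):
    """Send every event to archive storage; copy only high-value events to the SIEM."""
    archive.append(event)                            # full-fidelity copy for replay
    if event["severity"] in ("error", "critical"):   # assumed high-value severities
        siem.append(event)                           # second destination for the same event

siem, archive = [], []
for e in [{"severity": "info"}, {"severity": "error"}, {"severity": "critical"}]:
    route(e, siem, archive)

print(len(siem), len(archive))  # 2 3
```

One input stream thus feeds two destinations at different fidelity, which is what keeps SIEM ingestion (and licensing) costs down while preserving the full data set.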
Cribl handles logs, metrics, and various data sources really well. I have ingested up to fifty terabytes of data per day, and Cribl has never failed or caused trouble from that perspective. Cribl handles huge volumes of data exceptionally well.
What needs improvement?
A feature I would want Cribl to add in future releases is the ability to create a greater number of fleets. Currently, Cribl has a limitation on the number of fleets that can be created. In an enterprise environment, different types of servers belong to different applications and should be organized accordingly, as each has a different change management cycle and upgrade cycle. Cribl cannot be upgraded all at once, so we want to separate fleets so we can perform upgrades in batches rather than all in one shot. Increasing the number of fleets would be greatly appreciated.
Data cost is a concern, as Cribl charges for everything it sees rather than everything it processes. I do not see much cost-effectiveness from this approach. If we could do pre-processing before sending data to Cribl, then Cribl would be cheaper than other tools, but if we could do that, we would not need Cribl at all. This costing model has been concerning for a while. Better options based on user base, enterprise size, or data volume would be beneficial. More options to choose from for pricing tiers are needed, as the current offerings are very limited.
I have used Splunk previously and have been using Palo Alto XSIAM. Palo Alto XSIAM has integrated features from Cribl, Splunk, and Sentinel into one comprehensive tool, taking the best features from all three. Another concern is that there is not much default alerting available for Cribl metrics, and custom alerting is also difficult to configure. For example, backpressure monitoring has only very limited out-of-the-box use cases for watching Cribl environment health. Cribl could increase the number of use cases and add guardrails around how much volume can be ingested. Options to create custom alerts would be helpful, such as alerts when certain metrics go down or up, or when the catchall is filling up. These options exist but are very complicated to set up. Even as someone who used Splunk for ten years before transitioning to Cribl, I find it very difficult to navigate and create alerts in Cribl. The ease of use could be improved by providing default options that can be leveraged and customized as needed.
Cribl's initial deployment was easy, but for large enterprise networks and big organizations, Cribl does not support operating system versions earlier than 2012. This creates a problem; a package that works as expected should be available for anything below 2012. Currently, Cribl only approves packages for 2012 and above, but some organizations require applications to run on legacy servers. Because this option is not available, we are unable to get Cribl installed without finding alternatives or going back to using Splunk to pull data and then stream it to Cribl. This causes significant operational challenges, and if it could be fixed with one version that supports everything below 2012, it would be greatly appreciated.
Cribl is deployed both on-premises and in the cloud. Cribl placed sample data in one of its YAML files that contained examples of personal data, such as social security numbers and credit card information. Because this YAML file was included in the Cribl package itself, vulnerability scanners flagged it as a non-compliance or data-loss concern, even though no actual personal information, API keys, or sensitive data was present; these were just examples provided by Cribl. Cribl fixed this in the latest version after we brought it to their attention. Going forward, I would like Cribl to think about this from a larger enterprise perspective, since endpoint security tools will detect all of these concerns. It is not just about processing data but also about the problems faced when deploying it in a large enterprise, and that thought process needs to grow on Cribl's side.
For how long have I used the solution?
I have used Cribl for over a year.
How are customer service and support?
A dedicated support portal is available, and support cases are usually raised through a dedicated email. Responses are received at reasonable times, so this has not been a problem. I would give support a rating of seven out of ten.
