
    Cribl.Cloud Suite

Sold by: Cribl
    Deployed on AWS
    Free Trial
    Vendor Insights
    Quick Launch
    Cribl.Cloud gives control over IT and security data without the hassle of running infrastructure.
Rating: 4.3

    Overview


Cribl.Cloud is the easiest way to try Cribl products in the cloud through a unified platform. Cribl's suite of products gives flexibility and control back to customers. With routing, shaping, enrichment, and search capabilities that make data more manageable, you can clean up your data, get it where it needs to be, work more efficiently, and ultimately gain the control and confidence needed to be successful.

The Cribl.Cloud suite of products includes:

    Stream: A highly scalable data router for data collection, reduction, enrichment, and routing of observability data.

    Edge: An intelligent, scalable edge-based data collection system for logs, metrics, and application data.

Lake: Storage that does not lock data in. Cribl Lake is a turnkey data lake that makes it easy and economical to store, access, replay, and analyze data, with no expertise needed.

    Search: A search feature to perform federated search-in-place queries on any data, in any form.

    Getting Started

    When you purchase your Cribl.Cloud subscription directly from the AWS Marketplace, you can experience a smooth billing process that you're already familiar with, without needing to set up a separate procurement plan to use Cribl products. Track billing and usage directly in Cribl.Cloud.

Enjoy a quick and easy purchasing experience by applying your existing spend commitments through the AWS Enterprise Discount Program (EDP) when you subscribe to Cribl.Cloud. Get flexible pricing and terms by purchasing through a private offer, and buy the Cribl.Cloud Suite of offerings at a pre-negotiated price. Contact awsmp@cribl.io or a sales representative for flexible pricing on 12-, 24-, or 36-month terms.

    We are available in US-West-2 (Oregon), US-East-2 (Ohio), US-East-1 (Virginia), CA-Central-1 (Canada Central), EU-West-2 (London), EU-Central-1 (Frankfurt), and AP-Southeast-2 (Sydney) with more regions coming soon! Regional pricing will apply.

To learn more about pricing and the consumption pricing philosophy, please visit:

• Cribl Pricing: https://cribl.io/cribl-pricing/
• Cribl.Cloud Simplified with Consumption Pricing (blog): https://cribl.io/blog/cribl-cloud-consumption-pricing/

    Highlights

    • Fast and easy onboarding - With zero-touch deployment, you can quickly start using Cribl products without the hassle, burden, and cost of managing infrastructure.
    • Instant scalability - The cloud provides flexibility to easily scale up or down to meet changing business needs and dynamic data demands.
• Trusted security - Cribl knows how important protecting data is and built all of its products and services from the ground up with security as a top priority. Cribl.Cloud is SOC 2 compliant, helping ensure your data is protected and secure. Cribl.Cloud is currently in process for FedRAMP IL4.

Details

Sold by: Cribl
Delivery method: Software as a Service (SaaS)
Deployed on AWS


    Features and programs

    Vendor Insights

Skip the manual risk assessment. Get verified and regularly updated security information on this product with Vendor Insights.
Security credentials achieved: 3

    Buyer guide

    Gain valuable insights from real users who purchased this product, powered by PeerSpot.

    Financing for AWS Marketplace purchases

    AWS Marketplace now accepts line of credit payments through the PNC Vendor Finance program. This program is available to select AWS customers in the US, excluding NV, NC, ND, TN, & VT.

    Quick Launch

    Leverage AWS CloudFormation templates to reduce the time and resources required to configure, deploy, and launch your software.

    Pricing

    Free trial

    Try this product free according to the free trial terms set by the vendor.

    Cribl.Cloud Suite

Pricing is based on the duration and terms of your contract with the vendor, plus additional usage. You pay upfront or in installments according to your contract terms with the vendor. This entitles you to a specified quantity of use for the contract duration. Usage-based pricing applies to overages or additional usage not covered in the contract; these charges are applied on top of the contract price. If you choose not to renew or replace your contract before it ends, access to your entitlements will expire.
Additional AWS infrastructure costs may apply. Use the AWS Pricing Calculator to estimate your infrastructure costs.

12-month contract (2)

Dimension | Description | Cost/12 months
Cribl.Cloud Free | Cribl.Cloud Suite Free Tier | $0.00
Cribl.Cloud Enterprise | Cribl.Cloud Suite Enterprise with 1 TB daily ingestion | $142,800.00

Additional usage costs (1)

The following dimensions are not included in the contract terms and will be charged based on your usage.

Dimension | Cost/unit
Overage Fees | $0.01

    Vendor refund policy

    Cribl will refund prior payments attributable to the unused remainder of your purchase.

    Custom pricing options

    Request a private offer to receive a custom quote.


    Legal

    Vendor terms and conditions

Upon subscribing to this product, you must acknowledge and agree to the terms and conditions outlined in the vendor's End User License Agreement (EULA).

    Content disclaimer

    Vendors are responsible for their product descriptions and other product content. AWS does not warrant that vendors' product descriptions or other product content are accurate, complete, reliable, current, or error-free.

    Usage information


    Delivery details

    Software as a Service (SaaS)

SaaS delivers cloud-based software applications directly to customers over the internet. You can access these applications through a subscription model. You pay recurring monthly usage fees through your AWS bill, while the vendor handles deployment and infrastructure management, ensuring scalability, reliability, and seamless integration with other AWS services.

    Additional details

    Usage instructions

    Cribl Cloud Trust IAM Role CloudFormation Template

    This CloudFormation template creates an IAM role that allows Cribl Cloud to access specific AWS resources in your account. The role is designed to provide Cribl Cloud with the necessary permissions to interact with S3 buckets and SQS queues.

    Template Overview

    The template does the following:

    1. Creates an IAM role named CriblTrustCloud
    2. Configures a trust relationship with Cribl Cloud's AWS account
    3. Attaches a policy that grants access to S3 and SQS resources
    4. Outputs the role name, ARN, and an external ID for authentication

    Parameters

    • CriblCloudAccountID: The AWS account ID of Cribl Cloud (default: '012345678910')
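In YAML, that parameter would look something like the following sketch (reconstructed from the description above, not the verbatim template):

```yaml
Parameters:
  CriblCloudAccountID:
    Type: String
    Default: '012345678910'
    Description: The AWS account ID of Cribl Cloud
```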

    IAM Role Details

    Trust Relationship

    The role trusts two specific roles in the Cribl Cloud account:

    • arn:aws:iam::{CriblCloudAccountID}:role/search-exec-main
    • arn:aws:iam::{CriblCloudAccountID}:role/main-default

    These roles can assume the CriblTrustCloud role using the sts:AssumeRole, sts:TagSession, and sts:SetSourceIdentity actions.
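Based on the trust relationship described above, the role and its assume-role policy would be shaped roughly like this sketch; consult the actual template before deploying:

```yaml
# Sketch of the CriblTrustCloud role's trust policy, reconstructed
# from the description above.
CriblTrustCloud:
  Type: AWS::IAM::Role
  Properties:
    RoleName: CriblTrustCloud
    AssumeRolePolicyDocument:
      Version: '2012-10-17'
      Statement:
        - Effect: Allow
          Principal:
            AWS:
              - !Sub 'arn:aws:iam::${CriblCloudAccountID}:role/search-exec-main'
              - !Sub 'arn:aws:iam::${CriblCloudAccountID}:role/main-default'
          Action:
            - sts:AssumeRole
            - sts:TagSession
            - sts:SetSourceIdentity
```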

    Permissions

    The role has a policy named CriblCloudS3SQSPolicy that grants the following permissions:

    1. S3 access:
      • List buckets
      • Get and put objects
      • Get bucket location
    2. SQS access:
      • Receive and delete messages
      • Change message visibility
      • Get queue attributes and URL

    These permissions apply to all S3 buckets and SQS queues in the account.
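As YAML, the CriblCloudS3SQSPolicy described above would be shaped roughly like this sketch. `Resource: '*'` mirrors the account-wide scope noted above; narrow it to specific bucket and queue ARNs where your security posture requires:

```yaml
Policies:
  - PolicyName: CriblCloudS3SQSPolicy
    PolicyDocument:
      Version: '2012-10-17'
      Statement:
        - Effect: Allow          # S3 access
          Action:
            - s3:ListBucket
            - s3:GetObject
            - s3:PutObject
            - s3:GetBucketLocation
          Resource: '*'
        - Effect: Allow          # SQS access
          Action:
            - sqs:ReceiveMessage
            - sqs:DeleteMessage
            - sqs:ChangeMessageVisibility
            - sqs:GetQueueAttributes
            - sqs:GetQueueUrl
          Resource: '*'
```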

    Security Feature

    The template includes a security feature that requires an external ID for authentication. This external ID is derived from the CloudFormation stack ID, providing an additional layer of security when assuming the role.
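The derivation can be expressed with nested intrinsic functions. For example, given a stack ID like arn:aws:cloudformation:us-west-2:111122223333:stack/my-stack/51af3dc0-da77-11e4-872e-1234567db123, splitting on '/' and selecting index 2 yields the stack UUID; splitting that on '-' and selecting index 4 yields the final segment (1234567db123), which becomes the external ID:

```yaml
# Sketch: the sts:ExternalId condition on the role's trust policy.
Condition:
  StringEquals:
    sts:ExternalId: !Select [4, !Split ['-', !Select [2, !Split ['/', !Ref 'AWS::StackId']]]]
```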

    Outputs

    The template provides three outputs:

    1. RoleName: The name of the created IAM role
    2. RoleArn: The ARN of the created role
    3. ExternalId: The external ID required for authentication when assuming the role

    Usage

    To use this template:

    1. Deploy it in your AWS account using CloudFormation
    2. Provide the resulting role ARN and external ID to Cribl Cloud
    3. Cribl Cloud can then assume this role to access your S3 and SQS resources

Remember to review and adjust the permissions as necessary to align with your security requirements and the specific needs of your Cribl Cloud integration. [1] [2] [3]


    Enable CloudTrail and VPC Flow Logging for Cribl Cloud

    This document explains the resources that will be created when deploying the provided CloudFormation template. The template is designed to create an IAM role that trusts Cribl Cloud and sets up CloudTrail and VPC Flow logging to an S3 bucket.

    Template Overview

    The template automates the creation of AWS resources to enable centralized logging, specifically focusing on CloudTrail logs and VPC Flow Logs. It creates S3 buckets for storing these logs, SQS queues for triggering processes upon log arrival, and an IAM role to allow Cribl Cloud to access these logs.

    Resources Created

    Here's a breakdown of the resources defined in the CloudFormation template:

    • CriblCTQueue (AWS::SQS::Queue): Creates an SQS queue named according to the CTSQS parameter (default: cribl-cloudtrail-sqs). This queue will be used to trigger actions when new CloudTrail logs are written to the S3 bucket.

      • Properties:
        • QueueName: !Ref CTSQS - Sets the queue name to the value of the CTSQS parameter.
• CriblCTQueuePolicy (AWS::SQS::QueuePolicy): Defines the policy for the CriblCTQueue, allowing s3.amazonaws.com to send messages to the queue. The policy includes a condition that the source account must match the AWS account ID in which the stack is deployed. This ensures only S3 events from the current AWS account can trigger the queue.

      • Properties:
        • PolicyDocument:
          • Statement:
            • Effect: Allow - Allows actions specified in the policy.
• Principal: Service: s3.amazonaws.com - Specifies the service that can perform the actions.
            • Action: SQS:SendMessage - Allows sending messages to the queue.
            • Resource: !GetAtt CriblCTQueue.Arn - The ARN of the SQS queue.
            • Condition:
              • StringEquals: 'aws:SourceAccount': !Ref AWS::AccountId - Restricts the source account to the account where the stack is deployed.
        • Queues: !Ref CTSQS - Associates the policy with the SQS queue.
    • TrailBucket (AWS::S3::Bucket): Creates an S3 bucket used to store CloudTrail logs. The bucket is configured with a NotificationConfiguration that sends an event to the CriblCTQueue when a new object is created (specifically, a PUT operation). This will trigger processing when new CloudTrail logs are available.

      • Properties:
        • NotificationConfiguration:
          • QueueConfigurations:
            • Event: s3:ObjectCreated:Put - Specifies that the notification should be triggered when an object is created using a PUT operation.
            • Queue: !GetAtt CriblCTQueue.Arn - The ARN of the SQS queue to send the notification to.
      • DependsOn: CriblCTQueuePolicy - Ensures that the queue policy is created before the bucket.
    • TrailBucketPolicy (AWS::S3::BucketPolicy): Defines the policy for the TrailBucket. This policy grants permissions to:

• delivery.logs.amazonaws.com: Allows the AWS log delivery service to write objects to the bucket, ensuring proper log delivery. It requires the bucket-owner-full-control ACL.

• cloudtrail.amazonaws.com: Allows CloudTrail to get the bucket ACL and put objects into the bucket. It also requires the bucket-owner-full-control ACL.

      • A Deny statement that enforces the use of SSL for all requests to the bucket, enhancing security.

      • Properties:

        • Bucket: !Ref TrailBucket - The name of the S3 bucket.
        • PolicyDocument:
          • Version: 2012-10-17 - The version of the policy document.
          • Statement:
            • Sid: AWSLogDeliveryWrite
              • Effect: Allow - Allows the action specified.
• Principal: Service: delivery.logs.amazonaws.com - The AWS log delivery service principal.
              • Action: s3:PutObject - Allows putting objects into the bucket.
              • Resource: !Sub '${TrailBucket.Arn}/AWSLogs/' - The S3 bucket and prefix to allow the action on.
              • Condition: StringEquals: 's3:x-amz-acl': bucket-owner-full-control - Requires the bucket-owner-full-control ACL.
            • Sid: AWSCloudTrailAclCheck
              • Effect: Allow
• Principal: Service: cloudtrail.amazonaws.com
              • Action: s3:GetBucketAcl
              • Resource: !Sub '${TrailBucket.Arn}'
            • Sid: AWSCloudTrailWrite
              • Effect: Allow
• Principal: Service: cloudtrail.amazonaws.com
              • Action: s3:PutObject
              • Resource: !Sub '${TrailBucket.Arn}/AWSLogs/*/*'
              • Condition: StringEquals: 's3:x-amz-acl': 'bucket-owner-full-control'
            • Sid: AllowSSLRequestsOnly
              • Effect: Deny
              • Principal: * - Applies to all principals.
              • Action: s3:* - Denies all S3 actions.
              • Resource:
                • !GetAtt TrailBucket.Arn
                • !Sub '${TrailBucket.Arn}/*'
              • Condition: Bool: 'aws:SecureTransport': false - Denies requests that are not using SSL.
    • ExternalTrail (AWS::CloudTrail::Trail): Creates a CloudTrail trail. It is configured to:

      • Store logs in the TrailBucket.

      • Include global service events.

      • Enable logging.

      • Create a multi-region trail.

      • Enable log file validation.

      • Properties:

        • S3BucketName: !Ref TrailBucket - The name of the S3 bucket where the logs will be stored.
        • IncludeGlobalServiceEvents: true - Includes global service events.
        • IsLogging: true - Enables logging.
        • IsMultiRegionTrail: true - Creates a multi-region trail.
        • EnableLogFileValidation: true - Enables log file validation.
        • TrailName: !Sub '${TrailBucket}-trail' - Sets the name of the trail.
      • DependsOn:

        • TrailBucket
        • TrailBucketPolicy
    • CriblVPCQueue (AWS::SQS::Queue): Creates an SQS queue named according to the VPCSQS parameter (default: cribl-vpc-sqs). This queue will be used to trigger actions when new VPC Flow Logs are written to the S3 bucket.

      • Properties:
        • QueueName: !Ref VPCSQS - Sets the queue name.
• CriblVPCQueuePolicy (AWS::SQS::QueuePolicy): Defines the policy for the CriblVPCQueue, allowing s3.amazonaws.com to send messages to the queue. Similar to CriblCTQueuePolicy, it restricts access to events originating from the same AWS account.

      • Properties:
        • PolicyDocument:
          • Statement:
            • Effect: Allow
• Principal: Service: s3.amazonaws.com
            • Action: SQS:SendMessage
            • Resource: !GetAtt CriblVPCQueue.Arn
            • Condition: StringEquals: 'aws:SourceAccount': !Ref "AWS::AccountId"
        • Queues: !Ref VPCSQS
    • LogBucket (AWS::S3::Bucket): Creates an S3 bucket used to store VPC Flow Logs. The bucket is configured with a NotificationConfiguration to send an event to the CriblVPCQueue when new objects are created.

      • Properties:
        • NotificationConfiguration:
          • QueueConfigurations:
            • Event: s3:ObjectCreated:Put
            • Queue: !GetAtt CriblVPCQueue.Arn
      • DependsOn: CriblVPCQueuePolicy
    • LogBucketPolicy (AWS::S3::BucketPolicy): Defines the policy for the LogBucket. This policy grants permissions to:

• delivery.logs.amazonaws.com: Allows the AWS log delivery service to write objects to the bucket. It requires the bucket-owner-full-control ACL.

• Allows delivery.logs.amazonaws.com to get the bucket ACL.

      • Enforces SSL for all requests to the bucket.

      • Properties:

        • Bucket: !Ref LogBucket
        • PolicyDocument:
          • Version: 2012-10-17
          • Statement:
            • Sid: AWSLogDeliveryWrite
              • Effect: Allow
• Principal: Service: delivery.logs.amazonaws.com
              • Action: s3:PutObject
              • Resource: !Sub '${LogBucket.Arn}/AWSLogs/${AWS::AccountId}/*'
              • Condition: StringEquals: 's3:x-amz-acl': bucket-owner-full-control
            • Sid: AWSLogDeliveryAclCheck
              • Effect: Allow
• Principal: Service: delivery.logs.amazonaws.com
              • Action: s3:GetBucketAcl
              • Resource: !GetAtt LogBucket.Arn
            • Sid: AllowSSLRequestsOnly
              • Effect: Deny
              • Principal: *
              • Action: s3:*
              • Resource:
                • !GetAtt LogBucket.Arn
                • !Sub '${LogBucket.Arn}/*'
              • Condition: Bool: 'aws:SecureTransport': false
    • FlowLog (AWS::EC2::FlowLog): Creates a VPC Flow Log that captures network traffic information for the VPC specified in the VPCId parameter. The flow logs are stored in the LogBucket. The type of traffic to log is determined by the TrafficType parameter (ALL, ACCEPT, or REJECT).

      • Properties:
        • LogDestination: !Sub 'arn:${AWS::Partition}:s3:::${LogBucket}' - The ARN of the S3 bucket where the flow logs will be stored.
        • LogDestinationType: s3 - Specifies that the destination is an S3 bucket.
        • ResourceId: !Ref VPCId - The ID of the VPC to log.
        • ResourceType: VPC - Specifies that the resource is a VPC.
        • TrafficType: !Ref TrafficType - The type of traffic to log (ALL, ACCEPT, REJECT).
    • CriblTrustCloud (AWS::IAM::Role): Creates an IAM role that allows Cribl Cloud to access AWS resources.

      • Properties:
        • AssumeRolePolicyDocument:
          • Version: 2012-10-17
          • Statement:
            • Effect: Allow
            • Principal:
              • AWS:
                • !Sub 'arn:aws:iam::${CriblCloudAccountID}:role/search-exec-main'
                • !Sub 'arn:aws:iam::${CriblCloudAccountID}:role/main-default'
            • Action:
              • sts:AssumeRole
              • sts:TagSession
              • sts:SetSourceIdentity
            • Condition:
• StringEquals: 'sts:ExternalId': !Select [4, !Split ['-', !Select [2, !Split ['/', !Ref 'AWS::StackId']]]]
        • Description: Role to provide access AWS resources from Cribl Cloud Trust
        • Policies:
          • PolicyName: SQS
            • PolicyDocument:
              • Version: 2012-10-17
              • Statement:
                • Effect: Allow
                • Action:
                  • sqs:ReceiveMessage
                  • sqs:DeleteMessage
                  • sqs:GetQueueAttributes
                  • sqs:GetQueueUrl
                • Resource:
                  • !GetAtt CriblCTQueue.Arn
                  • !GetAtt CriblVPCQueue.Arn
          • PolicyName: S3EmbeddedInlinePolicy
            • PolicyDocument:
              • Version: 2012-10-17
              • Statement:
                • Effect: Allow
                • Action:
                  • s3:ListBucket
                  • s3:GetObject
                  • s3:PutObject
                  • s3:GetBucketLocation
                • Resource:
                  • !Sub ${TrailBucket.Arn}
                  • !Sub ${TrailBucket.Arn}/*
                  • !Sub ${LogBucket.Arn}
                  • !Sub ${LogBucket.Arn}/*
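Pulling the resource descriptions above together, the CloudTrail side of the template can be sketched in YAML as follows. This is a reconstruction for orientation, not the verbatim template; the bucket policies and the VPC Flow Log bucket/queue (which follow the same pattern) are elided:

```yaml
Resources:
  CriblCTQueue:
    Type: AWS::SQS::Queue
    Properties:
      QueueName: !Ref CTSQS

  CriblCTQueuePolicy:
    Type: AWS::SQS::QueuePolicy
    Properties:
      Queues:
        - !Ref CriblCTQueue        # Ref on a queue resource yields its URL,
                                   # which AWS::SQS::QueuePolicy expects
      PolicyDocument:
        Statement:
          - Effect: Allow
            Principal:
              Service: s3.amazonaws.com
            Action: SQS:SendMessage
            Resource: !GetAtt CriblCTQueue.Arn
            Condition:
              StringEquals:
                aws:SourceAccount: !Ref AWS::AccountId

  TrailBucket:
    Type: AWS::S3::Bucket
    DependsOn: CriblCTQueuePolicy  # queue must accept S3 events first
    Properties:
      NotificationConfiguration:
        QueueConfigurations:
          - Event: s3:ObjectCreated:Put
            Queue: !GetAtt CriblCTQueue.Arn

  ExternalTrail:
    Type: AWS::CloudTrail::Trail
    DependsOn: [TrailBucket, TrailBucketPolicy]  # TrailBucketPolicy elided here
    Properties:
      TrailName: !Sub '${TrailBucket}-trail'
      S3BucketName: !Ref TrailBucket
      IsLogging: true
      IsMultiRegionTrail: true
      IncludeGlobalServiceEvents: true
      EnableLogFileValidation: true

  FlowLog:
    Type: AWS::EC2::FlowLog
    Properties:
      ResourceId: !Ref VPCId
      ResourceType: VPC
      TrafficType: !Ref TrafficType
      LogDestinationType: s3
      LogDestination: !Sub 'arn:${AWS::Partition}:s3:::${LogBucket}'  # LogBucket elided here
```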

    Parameters

    The template utilizes parameters to allow customization during deployment:

    • CriblCloudAccountID: The AWS account ID of the Cribl Cloud instance. This is required for the IAM role's trust relationship.
      • Description: Cribl Cloud Trust AWS Account ID. Navigate to Cribl.Cloud, go to Workspace and click on Access. Find the Trust and copy the AWS Account ID found in the trust ARN.
      • Type: String
      • Default: '012345678910'
    • CTSQS: The name of the SQS queue for CloudTrail logs.
      • Description: Name of the SQS queue for CloudTrail to trigger for S3 log retrieval.
      • Type: String
      • Default: cribl-cloudtrail-sqs
    • TrafficType: The type of traffic to log for VPC Flow Logs (ALL, ACCEPT, REJECT).
      • Description: The type of traffic to log.
      • Type: String
      • Default: ALL
      • AllowedValues: ACCEPT, REJECT, ALL
    • VPCSQS: The name of the SQS queue for VPC Flow Logs.
      • Description: Name of the SQS for VPCFlow Logs.
      • Type: String
      • Default: cribl-vpc-sqs
    • VPCId: The ID of the VPC for which to enable flow logging.
      • Description: Select your VPC to enable logging
      • Type: AWS::EC2::VPC::Id
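The parameter block described above corresponds to YAML along these lines (a sketch, abbreviated descriptions):

```yaml
Parameters:
  CriblCloudAccountID:
    Type: String
    Default: '012345678910'
    Description: Cribl Cloud Trust AWS Account ID (from the trust ARN under Workspace > Access)
  CTSQS:
    Type: String
    Default: cribl-cloudtrail-sqs
    Description: Name of the SQS queue for CloudTrail S3 log retrieval
  TrafficType:
    Type: String
    Default: ALL
    AllowedValues: [ACCEPT, REJECT, ALL]
    Description: The type of traffic to log
  VPCSQS:
    Type: String
    Default: cribl-vpc-sqs
    Description: Name of the SQS queue for VPC Flow Logs
  VPCId:
    Type: AWS::EC2::VPC::Id
    Description: Select your VPC to enable logging
```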

    Outputs

    The template defines outputs that provide key information about the created resources:

    • CloudTrailS3Bucket: The ARN of the S3 bucket storing CloudTrail logs.
      • Description: Amazon S3 Bucket for CloudTrail Events
      • Value: !GetAtt TrailBucket.Arn
    • VPCFlowLogsS3Bucket: The ARN of the S3 bucket storing VPC Flow Logs.
      • Description: Amazon S3 Bucket for VPC Flow Logs
      • Value: !GetAtt LogBucket.Arn
    • RoleName: The name of the created IAM role.
      • Description: Name of created IAM Role
      • Value: !Ref CriblTrustCloud
    • RoleArn: The ARN of the created IAM role.
      • Description: Arn of created Role
      • Value: !GetAtt CriblTrustCloud.Arn
    • ExternalId: The external ID used for authentication when assuming the IAM role.
      • Description: External Id for authentication
• Value: !Select [4, !Split ['-', !Select [2, !Split ['/', !Ref 'AWS::StackId']]]]
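In template form, the outputs described above would read roughly as:

```yaml
Outputs:
  CloudTrailS3Bucket:
    Description: Amazon S3 Bucket for CloudTrail Events
    Value: !GetAtt TrailBucket.Arn
  VPCFlowLogsS3Bucket:
    Description: Amazon S3 Bucket for VPC Flow Logs
    Value: !GetAtt LogBucket.Arn
  RoleName:
    Description: Name of the created IAM role
    Value: !Ref CriblTrustCloud
  RoleArn:
    Description: ARN of the created role
    Value: !GetAtt CriblTrustCloud.Arn
  ExternalId:
    Description: External ID for authentication
    Value: !Select [4, !Split ['-', !Select [2, !Split ['/', !Ref 'AWS::StackId']]]]
```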

    Deployment Considerations

    • Cribl Cloud Account ID: Ensure the CriblCloudAccountID parameter is set to the correct AWS account ID for your Cribl Cloud instance. This is crucial for establishing the trust relationship.
    • S3 Bucket Names: S3 bucket names must be globally unique. If the template is deployed multiple times in the same region, you may need to adjust the names of the buckets. Consider using a Stack name prefix.
    • VPC ID: The VPCId parameter should be set to the ID of the VPC for which you want to enable flow logging.
    • Security: Regularly review and update IAM policies to adhere to the principle of least privilege. Consider using more restrictive S3 bucket policies if necessary.
    • SQS Queue Configuration: Monitor the SQS queues for backlog and adjust the processing capacity accordingly.
    • CloudTrail Configuration: Confirm that CloudTrail is properly configured to deliver logs to the designated S3 bucket.
    • VPC Flow Log Configuration: Verify that VPC Flow Logs are correctly capturing network traffic.
    • External ID: The External ID is a critical security measure for cross-account access. Make sure it's correctly configured in both AWS and Cribl Cloud.

    This detailed explanation provides a comprehensive understanding of the resources created by the CloudFormation template, enabling informed deployment and management. Remember to adapt parameters to your specific environment and security requirements.

Footnotes

1. https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iam-role.html
2. https://github.com/criblio/cribl-aws-cloudformation-templates
3. https://awsfundamentals.com/blog/aws-iam-roles-with-aws-cloudformation

    Support

    AWS infrastructure support

    AWS Support is a one-on-one, fast-response support channel that is staffed 24x7x365 with experienced and technical support engineers. The service helps customers of all sizes and technical abilities to successfully utilize the products and features provided by Amazon Web Services.


Accolades

• Top 10 in Log Management, Security Observability
• Top 10 in Migration, Monitoring, Continuous Integration and Continuous Delivery


Overview

AI generated from product descriptions:

• Data Routing and Collection: Highly scalable data router for data collection, reduction, enrichment, and routing of observability data
• Edge-Based Data Collection: Intelligent, scalable edge-based data collection system for logs, metrics, and application data
• Data Lake Storage: Turnkey data lake storage that enables storing, accessing, replaying, and analyzing data without vendor lock-in
• Federated Search Capability: Federated search-in-place query functionality across any data in any form
• Security Compliance: SOC 2 compliance certification with FedRAMP IL4 authorization in process

    Security credentials

    Validated by AWS Marketplace
    FedRAMP
    GDPR
    HIPAA
    ISO/IEC 27001
    PCI DSS
    SOC 2 Type 2

Contract

Standard contract: No

Customer reviews

Ratings and reviews

4.3 out of 5, from 59 ratings (15 AWS reviews, 44 external reviews; external reviews are from PeerSpot).

5 star: 58%
4 star: 37%
3 star: 5%
2 star: 0%
1 star: 0%
    Atharva Khadsare

    Search in place has reduced log ingestion and enables faster deep investigations

    Reviewed on Apr 21, 2026
    Review provided by PeerSpot

    What is our primary use case?

    I am working in a PLM environment, which is product lifecycle management. We deal with lots of system logs and tool integrations. I used Cribl Search for debugging system errors quickly and searching logs stored in long-term storage. Instead of pushing all logs into expensive tools, we used Cribl Search to directly investigate issues from stored data.

    I am currently using Cribl Search only. I have some experience with Cribl Stream, which we are using for our data pipeline solution.

    We have just started using the Search in Place feature because one of our team members recommended it. There is a lot of room for improvement in the way we query the data and the whole data processing pipeline. We weren't using any other tool before.

    What is most valuable?

    I have been using Cribl Search for a long time now, and I think Search in Place is a very good feature in Cribl Search. Unify Search is also valuable, where you can search data from multiple sources in one place. Fast investigation reduces steps from multiple tools to a single workflow. Pre-built search packs save effort to configure the dashboards and write the queries. It also works well with other Cribl tools.

    The traditional way for certain places is that logs are generated, then sent to SIEM tools like Splunk, and then stored again before you can search them. This has problems including data duplication and high storage costs. With Search in Place in Cribl Search, logs stay in storage such as S3, data lakes, or archives. You can directly run queries on that data without any movement, duplication, or reprocessing. Advantages include cost reduction and faster investigation.

    Since we can directly query historical data where it is stored, there is an advantage of deep root cause analysis, which helps understand what happened in the past. This is useful for debugging recurring issues and is cost-efficient. It has helped me in faster troubleshooting because there is no need to reload old logs. We can investigate incidents after days, weeks, or even months. It has the ability to handle large data volumes, so there is no performance bottleneck.

    We reduced unnecessary data ingestion by almost 40 to 50% using Search in Place. We could troubleshoot issues faster because data was already available for querying. It eliminates redundancy and keeps the architecture cleaner. As the data grows, we don't need to scale ingestion pipelines.

    What needs improvement?

    The user interface of Cribl Search can be more simplified because for non-technical users, it is quite difficult to grasp. There is a need for better beginner tutorials.

    Cribl could have built-in guided queries for faster onboarding and better beginner tutorials. A more simplified UI would be better for non-technical people.

    For how long have I used the solution?

    I have been working with Cribl for eight to nine months.

    What do I think about the stability of the solution?

    Until now, we haven't had any downtimes. It has been working very well.

    What do I think about the scalability of the solution?

    It is pretty scalable horizontally. We started with one team member but now there are five to six people using it.

    How are customer service and support?

    We developers ask for support from our in-house IT team, but I don't know what conversation goes on between Cribl customer service and our IT team.

    Which solution did I use previously and why did I switch?

    We evaluated Splunk, but for several reasons we went with Cribl Search.

    How was the initial setup?

    Cribl Search was set up by the IT team, and they haven't complained about any issues or complexities during the setup. I think the setup is pretty simple and not that complicated.

    What about the implementation team?

    The implementation was done by our internal IT team.

    What was our ROI?

    With Cribl, we have observed a 40 to 60% reduction in log volume hitting the firewall because Cribl filters unnecessary events and removes verbose fields.

    There is reduced pipeline complexity and a faster end-to-end workflow because data doesn't wait in ingestion queues. Data processing costs are also optimized, because less data processed means lower compute and storage costs. Other, more expensive tools are used only for critical data. There is a shift from processing to querying: traditional systems process first and query later, but Cribl stores data cheaply so we can query it when we need it.

    Cribl has many filters to remove noise from the data and to remove verbose fields, which has been very good to work with.

    Earlier, we had to process and store all logs in monitoring tools, which are very expensive, before analysis. After using Cribl Search, we streamlined the workflow by sending only critical data through pipelines and directly querying archive logs for investigation. This improved efficiency and reduced system load, which helped us indirectly optimize costs. We reduced the overall processing load by around 40%.
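    The kind of reduction described above, dropping noisy events and stripping verbose fields before anything expensive touches the data, can be sketched generically. The noise criteria and "verbose" field names below are made-up illustrations, not Cribl configuration.

    ```python
    import json

    NOISY_LEVELS = {"debug", "trace"}                 # assumed noise criteria
    VERBOSE_FIELDS = {"stack_trace", "raw_payload"}   # assumed verbose fields

    def reduce_events(events):
        """Drop noisy events and strip verbose fields, returning the
        slimmed events plus the byte-level reduction achieved."""
        before = sum(len(json.dumps(e)) for e in events)
        slimmed = [
            {k: v for k, v in e.items() if k not in VERBOSE_FIELDS}
            for e in events
            if e.get("level") not in NOISY_LEVELS
        ]
        after = sum(len(json.dumps(e)) for e in slimmed)
        reduction = 1 - after / before if before else 0.0
        return slimmed, reduction
    ```

    Measuring the before/after ratio, as this sketch does, is how a team can verify a claim like "40% less processing load" against its own traffic rather than taking it on faith.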

    What's my experience with pricing, setup cost, and licensing?

    I'd highly recommend other organizations to use Cribl Search because it did help us a lot with data processing and everything.

    What other advice do I have?

    Cribl Search was set up by the IT team, and they haven't complained about any issues or complexities during the setup, so I think it is pretty simple and not that complicated. I would rate this review an 8 out of 10.

    Pal Mavani

    Data routing has simplified high-volume security log management and supports flexible processing

    Reviewed on Apr 17, 2026
    Review provided by PeerSpot

    What is our primary use case?

    I use Cribl in a data management platform for IT security teams. My use cases include Stream, Edge, Search, and Lake.

    What is most valuable?

    I appreciate data routing the most about Cribl. I use it for data routing, data processing, and integration support. Cribl's ability to handle high volumes of diverse data types, such as logs and metrics, is impressive. It can easily handle logs because it is highly scalable and built to process millions of events per second, making it very easy to use.

    What needs improvement?

    What I dislike about Cribl are the documentation gaps and the setup complexity.

    For how long have I used the solution?

    I have been working with Cribl for one year.

    What do I think about the stability of the solution?

    Regarding stability, once the pipelines were properly set up, ongoing maintenance was minimal and mostly involved small adjustments rather than major changes. Overall, Cribl is not maintenance-heavy; it requires some maintenance on my end, but relatively little compared to traditional log pipelines.

    What do I think about the scalability of the solution?

    Cribl provides high availability through its distributed architecture; we achieve this by deploying multiple workers and using load balancing to ensure continuous data flow even during failures in the pipeline.

    How was the initial setup?

    The initial deployment is of medium difficulty because the setup is complex. It took me some time to set it up for the first time; my friend helped me, and I still found it difficult.

    What other advice do I have?

    I have not seen a significant decrease in firewall logs while working with Cribl; because it is highly scalable, that much of a decrease has not occurred.

    Abhay Gor

    Data routing has become efficient and log volumes are reduced while monitoring improves

    Reviewed on Apr 15, 2026
    Review provided by PeerSpot

    What is our primary use case?

    I am using Cribl Stream for data routing and data processing as part of my company's IT team. We primarily use it for monitoring and collecting data.

    What is most valuable?

    One of the best features is integration support, because it offers 80 to 90 sources and destinations via Cribl packs. Additionally, the security is very good because they offer encryption and access control to protect sensitive telemetry data. The data processing and reduction are also excellent, because Cribl filters unwanted fields and removes redundant data.

    I have seen a decrease in my firewall logs by 50 to 60%.

    Cribl allows me to handle high volumes of diverse data, such as logs and metrics, and it helps manage them effectively.

    It is helpful because it handles diverse data types and can process logs, metrics, event streams, JSON, text, structured and unstructured data.

    What needs improvement?

    The user interface is acceptable, but a person who is just starting out will need to go through the documentation, because there is a steep learning curve to become familiar with Cribl Stream. The setup is also complex, and configuring integrations and pipelines for a large environment requires significant effort.

    The areas that have room for improvement are the complex setup and better documentation, such as a user guide.

    For how long have I used the solution?

    I have been using this product for six to eight months.

    What do I think about the stability of the solution?

    Cribl performs periodic updates and maintenance, which must be managed effectively because we use it daily, and we have not experienced any issues for a long time. The team maintaining it must be doing its job very well.

    What do I think about the scalability of the solution?

    Horizontally, it is quite scalable, so I rate that a ten.

    How are customer service and support?

    I rate the technical support a nine, and I rate the stability an eight.

    Which solution did I use previously and why did I switch?

    I have used Splunk, and what Cribl does is it does not replace Splunk; it optimizes the data before sending it to Splunk, reducing cost and load. Therefore, Cribl is not a direct alternative to Splunk; they are complementary to each other.

    How was the initial setup?

    The deployment was quite easy.

    I do not know exactly how long it took to deploy because I was not the one who deployed it on the cloud, but the ones who deployed it told me that it was quite easy to deploy and there were no complaints from them.

    What about the implementation team?

    Roughly five to six users use the solution.

    What was our ROI?

    I checked out Cribl Search once; it let me search directly across S3 data lakes, and it did help me save time and cost.

    I have not analyzed the exact amount, but in ballpark terms, it saves about 10 to 20%.

    I think it is cost-efficient because overall, after using Cribl, it helps users save cost and time. If you look at the big picture, it is cost-effective.

    It saves me about 30 to 40% in terms of time and cost.

    Which other solutions did I evaluate?

    I would highly recommend it because it is cost-efficient, helps reduce noisy logs, and filters unnecessary fields.

    What other advice do I have?

    I gave this review a rating of nine.

    HarshShah2

    Cribl has improved real-time infrastructure observability and optimizes server resource costs

    Reviewed on Apr 10, 2026
    Review provided by PeerSpot

    What is our primary use case?

    Our use case for Cribl is observability from an infrastructure point of view; we use Cribl to collect logs from our infrastructure. The metrics and logs we require from our servers, containers, and the platforms where we have deployed our product necessitate real-time data processing, and Cribl helps us in that regard.

    What is most valuable?

    I love the Cribl Edge feature, an agent we can deploy directly on our servers; it is quite a good feature that helps collect data locally at the server level. Additionally, the search is good; we can search across all our data sources, and it is quite fast. The cost efficiency also helps in optimizing costs.

    Cribl handles high volumes of diverse data types very well. We have around 200 to 250 in-house servers, and we require observability and visibility over those servers. We don't have a team that manages them, and we cannot hire too many people to manage 200 servers. Cribl provides visibility and helps in that regard; we get real-time metrics, allowing us to see when we need to increase the compute of our servers or when we have over-provisioned resources. It helps in optimizing costs at our infrastructure level, and Cribl is quite cost-efficient, helping in that aspect as well.

    What needs improvement?

    We haven't gone very deep into it, so we don't have a heavy use case; the cost optimization is probably the best thing about it. Cribl's UI is quite simple and minimal, which helps developers and the team get familiar with it early, but it exposes its functionality in great depth. For our lighter use case, the many functions for filtering out metrics we don't require became something of a hindrance; otherwise, everything is quite good.

    The function section is quite messy and includes too many functionalities that are generally not required at a beginner level. At an advanced level, they are definitely needed to get precise logs and filter out unnecessary data when the data stream is quite big, but at the initial level, it becomes quite difficult to get the data you actually need.

    For how long have I used the solution?

    I used the solution about six months ago.

    What do I think about the stability of the solution?

    We haven't faced much regarding instability such as lagging or crashing; the backend team and support staff are quite nice, and we didn't encounter any significant issues with stability.

    What do I think about the scalability of the solution?

    Scaling with Cribl is very easy, both horizontally and vertically, so we don't have any hindrance in scaling the tool.

    How are customer service and support?

    My team has contacted technical support for some tasks they were facing issues with; they reported that the staff is quite nice, and the support is very good. However, we didn't require much support, only maybe twice or thrice.

    Which solution did I use previously and why did I switch?

    We used to utilize Node Exporter, Grafana, and Prometheus.

    Cribl sits in between those tools; it does not replace any of them. Node Exporter helps collect the host metrics, Prometheus is responsible for scraping the metrics, and Grafana serves as a dashboard. Cribl assists with infrastructure observability without replacing any of the tools. We use all of them right now as well.

    How was the initial setup?

    Cribl's initial deployment is quite easy and nice; we didn't face any difficulties in doing that. Additionally, scaling it horizontally or vertically is very good.

    What about the implementation team?

    I lead my team; I don't set up and manage the deployment myself anymore. Initially, when we had a very small team, I started building it, but now my team handles all of this.

    What's my experience with pricing, setup cost, and licensing?

    I'm not from the team that handles pricing; another department deals with that. However, the pricing appears to be good because I haven't been approached with concerns about why we are spending a particular amount. I think our pricing is fair.

    What other advice do I have?

    For our use case, I would give Cribl a score of 10 out of 10, but overall, if I rated it for a large organization that requires it, it would be fair to give an eight. I would rate this review as an 8 overall.

    reviewer2815500

    Data pipelines have optimized log routing and currently reduce noise and monitoring costs

    Reviewed on Apr 10, 2026
    Review provided by PeerSpot

    What is our primary use case?

    I use Cribl for data integration, pipelining, data monitoring, scalability, and to check how my monitor is working. The main product we use is Cribl Stream, which we use for log routing, filtering, and transforming data before sending it to our SIEM platform. This is the core part of our log management pipeline. Through Cribl Stream, we mainly work with features such as data pipelining, routing rules, and data transformation functions to control how logs move between different systems. My hands-on experience is primarily with Stream, since that is the component we rely on most for processing and optimizing log data in our environment.

    What is most valuable?


    One of the biggest advantages for my organization is better control over log data. We can filter, transform, and route logs before they reach downstream systems such as the SIEM platform, which helps reduce noise and focus only on relevant data. Another key benefit is cost optimization. By dropping unnecessary logs and sending only important data, we significantly reduce ingestion and storage costs in tools such as Splunk. It also improves operational efficiency.
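    The pattern just described, filtering and routing logs so only relevant events reach the SIEM while the rest goes to cheap storage, can be sketched as an ordered rule list where the first match wins. The rule conditions and destination names below are illustrative assumptions, not Cribl's routing syntax.

    ```python
    from collections import defaultdict

    # Illustrative routing rules, evaluated top to bottom; first match wins.
    ROUTES = [
        (lambda e: e.get("severity") in {"critical", "high"}, "siem"),
        (lambda e: e.get("source") == "firewall",             "siem"),
        (lambda e: True,                                      "archive"),  # catch-all
    ]

    def route(events):
        """Assign each event to the first destination whose rule matches."""
        buckets = defaultdict(list)
        for event in events:
            for matches, destination in ROUTES:
                if matches(event):
                    buckets[destination].append(event)
                    break
        return buckets
    ```

    The catch-all rule is what makes the cost argument work: nothing is dropped outright, but only the events a rule deems relevant ever incur SIEM ingestion pricing.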

    What needs improvement?

    One key area is simplifying the user experience, especially for new users. Since it has multiple components such as metrics, traces, and detectors, making onboarding and navigation more intuitive would be beneficial. One area of improvement could be reducing the learning curve. Since it is a very flexible tool with powerful pipeline configuration, new users may take some time to fully understand how to design and optimize pipelines efficiently. Another improvement could be more pre-built templates or out-of-the-box integration of common data sources, which would help teams get started faster without building from scratch. I also think enhanced monitoring and troubleshooting visibility for pipelines would be helpful, especially in large environments where multiple data flows are being processed.

    The main strength is its flexibility, scalability, and cost optimization benefits. It gives strong control over what data is processed and sent to downstream systems. The reason I would not give it a ten is mainly due to the learning curve and initial complexity, especially for new users. Some areas such as documentation or advanced troubleshooting could be improved.

    For how long have I used the solution?

    I have been working in the cybersecurity and security operations space for around one year.

    What do I think about the stability of the solution?

    Cribl is stable and reliable. I would rate stability and reliability at eight out of ten. In my experience, it is generally performing well.

    What do I think about the scalability of the solution?

    I would rate the scalability of Cribl at eight or nine out of ten. Its ability to handle a high volume of different data types would get a rating of eight or nine out of ten. It is designed to process large-scale telemetry data from multiple sources such as firewalls, cloud services, applications, and infrastructure. It can handle different formats such as JSON, syslog, and custom logs, and transform them within the pipeline with its distributed architecture. We can scale horizontally by adding worker nodes, which allows it to handle increased data volumes without major performance issues.

    How are customer service and support?

    We faced an issue with a pipeline dropping certain log events unexpectedly. We reached out to support, and they helped us analyze the pipeline configuration and logs. Initially, the response was general, but after sharing more details such as sample logs and pipeline rules, they were able to identify that the filter condition was incorrectly configured, which was causing the data to be dropped. They guided us on how to modify the rule and validate the data flow using a live preview, and we were able to resolve the issue very quickly. Overall, the support team was very helpful and knowledgeable, especially once the issue was clearly explained, and it helped us solve the problem without major downtime.

    Which solution did I use previously and why did I switch?

    Before Cribl, most log processing was handled directly within the SIEM platforms, mainly using Splunk's native capabilities and sometimes Logstash for data processing. The limitation of that approach was that all the raw log data was first ingested into the SIEM, and filtering or transformation was applied afterwards. This increased the data volume and cost complexity. We moved to Cribl to introduce a dedicated data pipeline layer before the SIEM, which allows us to filter, transform, and route data more efficiently before ingestion.

    How was the initial setup?

    As I am on the technical side, I was involved in the initial setup of Cribl. My role included configuring data sources, setting up pipelines, and defining routing and filtering rules based on our different requirements. I also worked on integrating Cribl with our SIEM platform, ensuring that only relevant and optimized data is forwarded. During the setup, we focused on designing efficient pipelines, testing data flow, and validating transformations to make sure everything was working correctly. Overall, the initial setup was not very complex, but it required proper planning to design the pipelines.

    Which other solutions did I evaluate?

    Before adopting Cribl, we did look at a few other approaches. Some of the evaluations were around using native capabilities within SIEM platforms such as Splunk, as well as open-source log processing tools such as Logstash for handling data pipelines. Those options can work for log collection and processing, but Cribl stood out because it provides a dedicated platform specifically designed for observability and security data pipelines. It offers more flexibility in routing, filtering, and transforming logs without heavily relying on the SIEM itself. The visual pipeline management and real-time visibility into data flow were also important factors that made Cribl a better fit for managing large volumes of log data across multiple systems. We considered other options, but based on references we determined that Cribl was more relevant for our work, so we chose Cribl.

    What other advice do I have?

    I would recommend starting with a few simple pipelines, then gradually expanding as you become more comfortable with the platform. I would rate Cribl eight out of ten; a few improvements could make it even better. Overall, I would give Cribl a rating of 8.5 out of ten.

    View all reviews