Overview

Verbis Graph Engine - Container Edition
The only dedicated GraphRAG container on AWS Marketplace, purpose-built for Amazon Bedrock Agents. Verbis Graph Engine enables organizations to deploy a secure, scalable knowledge retrieval layer directly within their own AWS environment - with full data residency and zero data leaving your infrastructure.
By combining vector similarity search with knowledge graph traversal, Verbis Graph Engine captures relationships between entities across documents, enabling multi-hop reasoning, explainable responses, and citation-backed outputs with 35% more accurate results than traditional RAG systems.
Key Features
GraphRAG Hybrid Retrieval
Combines vector search with knowledge graph traversal to deliver context-aware, relationship-aware results beyond traditional vector-only retrieval. Enables multi-hop reasoning across entities and documents.
Explainable, Grounded Responses
All responses are linked to exact source documents, providing full traceability and auditability - critical for compliance and regulated environments.
Enterprise Deployment Inside Customer VPC
Runs entirely inside the customer AWS environment via Amazon ECS or Amazon EKS, ensuring:
- Full data residency
- VPC-level isolation
- Alignment with enterprise security policies
Supports BYOC (Bring Your Own Cloud) and BYOK (Bring Your Own Key) for complete control over infrastructure, data residency, and encryption.
Framework & Ecosystem Integration
Compatible with LangChain, LlamaIndex, AutoGen, CrewAI, and Amazon Bedrock Agents for seamless integration into existing AI pipelines.
Production-Ready Performance
Designed for low-latency retrieval and scalable workloads in production environments.
Why GraphRAG?
Traditional RAG systems rely on vector similarity, which may miss relationships across documents.
Example: When asking "Which marketing campaigns were affected by the supply chain disruption in Q3?" - vector search retrieves similar documents but cannot connect related entities. GraphRAG models relationships across documents, enabling multi-hop reasoning and more complete, context-aware answers.
This matters most in regulated environments where accuracy, traceability, and explainability are non-negotiable.
Use Cases
| Industry | Use Case |
|---|---|
| Enterprise | Knowledge retrieval across internal documents |
| Legal & Compliance | Audit-ready AI with full citation traceability |
| Healthcare | Clinical document intelligence and research |
| Finance | Multi-document analysis and cross-entity reasoning |
| Public Sector | Policy document management with auditability |
| Research | Multi-document analysis using multi-hop reasoning |
How It Works
- Deploy the container in your AWS environment via ECS or EKS
- Ingest documents - knowledge graph and vector embeddings build automatically
- Query via REST API or connect to your preferred AI framework
- Retrieve answers with citations and full reasoning paths
Watch the full setup walkthrough: https://www.youtube.com/watch?v=JhXqYwpJHlE
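Once deployed, the REST API can be called with a bearer token. The sketch below builds a request for the `/api/chat` endpoint listed in the API reference; the payload field name (`question`) is an illustrative assumption, not a confirmed schema, so check the Swagger UI at `/docs` for the exact shape.

```python
import json

# Hypothetical request builder for the Verbis Graph Engine REST API.
# The endpoint path comes from the listing's API reference; the payload
# field name ("question") is an illustrative assumption.
def build_chat_request(base_url: str, bearer_token: str, question: str):
    """Return (url, headers, body) for a POST /api/chat call."""
    url = f"{base_url.rstrip('/')}/api/chat"
    headers = {
        "Authorization": f"Bearer {bearer_token}",
        "Content-Type": "application/json",
        "Accept": "text/event-stream",  # responses stream via SSE
    }
    body = json.dumps({"question": question})
    return url, headers, body

url, headers, body = build_chat_request(
    "http://localhost:8080", "my-token",
    "Which campaigns were affected by the Q3 supply chain disruption?",
)
```

The resulting tuple can be passed to any HTTP client (e.g. `requests.post(url, headers=headers, data=body, stream=True)`).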
Deployment Instructions
Step 1 - Launch the Container
- Subscribe to the product in AWS Marketplace
- Deploy using Amazon ECS or Amazon EKS
- Ensure port 8080 is open in your security group
Step 2 - Access the Application
- Obtain the public IP or load balancer URL after deployment
- Navigate to the application on port 8080
Step 3 - First Use
- Upload a document for indexing
- Wait for the indexing process to complete
- Ask questions in plain language
- Explore the generated knowledge graph visualization
Step 4 - API Access (Optional)
- Swagger UI available at your endpoint under /docs
Integrations
Amazon Bedrock - Amazon Neptune - LangChain - LlamaIndex - AutoGen - CrewAI - OpenAI - Anthropic Claude
Security & Compliance
Verbis Graph Engine is deployed entirely within the customer AWS environment, supporting:
- Data residency and control
- VPC-level isolation
- Integration with IAM and enterprise security policies
The platform follows AWS Well-Architected best practices for secure and scalable deployment.
1-Month Pilot / PoC
A 1-month pilot and Proof of Concept period is available on request for evaluation in real-world use cases. Contact support@verbisgraph.com to request access.
Professional Implementation Services
Paid services available for managed container installation, VPC configuration, SDK integration, and custom architecture.
Built by Prodigy AI Solutions - Enterprise support and custom deployments available on request.
Highlights
- Reduce compliance and operational risk by grounding AI outputs directly in your source documents, with clear citations and traceable reasoning paths. Unlike vector-only retrieval systems, Verbis Graph Engine builds and queries a knowledge graph to enable multi-hop reasoning across entities and relationships -delivering context-aware answers that can be verified and audited.
- Retrieve insights that span multiple reports, entities, and relationships - critical for investigations, compliance reviews, risk analysis, and regulated environments. The hybrid graph + vector architecture surfaces cross-document connections that similarity search alone may not detect.
- Support governance and regulatory requirements with explainable outputs linked directly to source documentation. Enable teams to validate AI-generated responses quickly and maintain structured, audit-ready documentation workflows.
Details
Pricing
- Monthly subscription
- $399.00/month
Vendor refund policy
Refunds are available only for the unused portion of the included data allowance. Consumed data (uploaded, indexed, processed, or queried) is non-refundable. If a subscription is canceled before the full usage-set is used, a pro-rated refund may be issued based on unused data volume. Refunds are processed via AWS Marketplace in accordance with AWS policies.
Legal
Vendor terms and conditions
Content disclaimer
Delivery details
Delivery options
- Amazon EKS Anywhere
- Amazon ECS Anywhere
- Amazon EKS
- Amazon ECS
Container image
Containers are lightweight, portable execution environments that wrap server application software in a filesystem that includes everything it needs to run. Container applications run on supported container runtimes and orchestration services, such as Amazon Elastic Container Service (Amazon ECS) or Amazon Elastic Kubernetes Service (Amazon EKS). Both eliminate the need for you to install and operate your own container orchestration software by managing and scheduling containers on a scalable cluster of virtual machines.
Version release notes
VERBIS GRAPH ENGINE - GraphRAG Knowledge Retrieval Engine, AWS Marketplace Container Edition. Initial paid listing. Licensing via AWS Marketplace.
VERSION: 1.0.0 (Initial Release)
RELEASE DATE: December 2025
PRODUCT SKU: VG-GRAPHRAG-PRO-V1
LICENSE: Apache License 2.0
PRICING: See AWS Marketplace listing (AWS infrastructure costs apply)
================================================================================ 1. EXECUTIVE SUMMARY
We are pleased to announce the initial release of Verbis Graph Engine PRO on AWS Marketplace. This container-based product brings GraphRAG (Graph-enhanced Retrieval Augmented Generation) technology to developers and enterprises evaluating next-generation AI knowledge retrieval solutions.
Unlike traditional vector-only RAG systems, Verbis Graph combines semantic vector search with knowledge graph traversal, enabling retrieval of information that spans multiple documents, entities, and relationships. This hybrid approach is ideal for complex queries requiring reasoning beyond simple similarity matching.
================================================================================ 2. WHAT'S NEW IN VERSION 1.0.0
This initial release establishes the foundation for GraphRAG-powered knowledge retrieval.
2.1 Core GraphRAG Engine
o Proprietary Knowledge Graph Retrieval: Combines dense vector embeddings with structured knowledge graph traversal for better accuracy compared to vector-only RAG systems
o Multi-Document Reasoning: Retrieves and synthesizes information across document boundaries, capturing entity relationships that span your entire knowledge base
o Workspace-Scoped Isolation: Each user or project gets an isolated workspace with independent document staging, GraphRAG indexing, and chat sessions
o Async Indexing with Locks: Per-workspace locks prevent indexing conflicts, ensuring data integrity during concurrent operations
2.2 FastAPI Backend
o RESTful API: Integrated FastAPI backend with OpenAPI (Swagger) documentation available at /docs, accessible via a dedicated tab in the Streamlit web interface
o Streaming Chat: Server-Sent Events (SSE) for real-time streaming responses from the GraphRAG retriever
o CORS Enabled: Cross-Origin Resource Sharing configured for browser-based clients and single-page applications
o Health Check Endpoint: GET /_stcore/health for container orchestration and load balancer integration
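The streaming chat responses follow the standard SSE wire format. The snippet below is a minimal parser over an iterable of text lines, assuming the stream emits standard `data: <chunk>` lines terminated by a blank line; it is a sketch, not the product's client library.

```python
# Minimal Server-Sent Events (SSE) parser. Assumes the /api/chat stream
# emits standard "data: <chunk>" lines, with a blank line ending each
# event. Operates on an iterable of lines, not a live connection.
def parse_sse(lines):
    """Yield the data payload of each SSE event."""
    buffer = []
    for line in lines:
        if line.startswith("data:"):
            buffer.append(line[len("data:"):].strip())
        elif line == "" and buffer:
            yield "\n".join(buffer)  # blank line terminates the event
            buffer = []
    if buffer:  # flush a trailing event with no final blank line
        yield "\n".join(buffer)

sample = ["data: The Q3 disruption", "", "data: affected two campaigns.", ""]
events = list(parse_sse(sample))
```

In practice the lines would come from the HTTP response body (e.g. `response.iter_lines(decode_unicode=True)` in `requests`).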
2.3 Document Processing
o Multi-Format Support: Upload and process PDF, TXT, DOCX, and CSV files for knowledge extraction
o Document Staging: Stage documents before indexing to review and validate content
o Data Retention: Application data is retained within the container runtime and associated workspace for the lifetime of the container and user account. Data is not guaranteed to persist across container termination or redeployment. External durable storage is not included in this version.
2.4 LLM Integration (paid subscriptions only)
o LiteLLM Gateway: Unified interface supporting 100+ LLM providers including OpenAI, Anthropic Claude, and AWS Bedrock
o AWS Bedrock Native: First-class support for Amazon Bedrock models with configurable region and model selection
o Model Flexibility: Switch between Claude, GPT-4, Llama, Mistral, and other models via environment variables
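Environment-variable model selection might be wired up along these lines. The variable names (`VERBIS_LLM_MODEL`, `VERBIS_AWS_REGION`) and defaults are illustrative assumptions; consult the product documentation for the actual names.

```python
import os

# Illustrative model-selection config via environment variables. The
# variable names and defaults here are assumptions, not the product's
# documented configuration keys.
def load_llm_config(env=None):
    env = os.environ if env is None else env
    return {
        "model": env.get("VERBIS_LLM_MODEL", "anthropic.claude-3-sonnet"),
        "region": env.get("VERBIS_AWS_REGION", "us-east-1"),
    }

cfg = load_llm_config({"VERBIS_LLM_MODEL": "gpt-4"})
```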
2.5 Developer Experience
o Bearer Token Authentication: Secure API access with SERVICE_AUTH_TOKEN for service-level auth plus user bearer tokens for workspace access
o Streamlit GUI: Helper interface for visual document management and chat testing
2.6 AWS Marketplace Ready
o Metering Hooks: Built-in hooks for AWS Marketplace metering (ingestion, indexing, chat, translation)
o Container Compliance: Meets AWS Marketplace container requirements for security scanning and deployment
================================================================================ 3. TECHNICAL SPECIFICATIONS
3.1 Container Details
ECR Image URI: 709825985650.dkr.ecr.us-east-1.amazonaws.com/prodigy-ai-solutions/verbis-graph-engine-free:metering_v20
Port: 8080/tcp (HTTP)
Health Check: GET /_stcore/health
Logging: stdout/stderr (CloudWatch compatible)
Base Image: Python 3.11 (Debian-based)
3.2 Resource Requirements
o vCPU: 1 (minimum), 2 (recommended)
o Memory: 2 GiB (minimum), 4 GiB (recommended)
o Storage: 1 GB minimum; depends on workload
3.3 Supported Services
o Amazon Elastic Container Service (Amazon ECS) - Managed container orchestration on AWS
o Amazon ECS Anywhere - Run on your on-premises or hybrid infrastructure while managed by ECS
o AWS Fargate - Serverless container execution (ECS launch type)
o Docker-compatible runtimes - Any Docker-compatible environment for evaluation
3.4 Environment Variables
SERVICE_AUTH_TOKEN (Required): Service-level auth token (X-Service-Token header)
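SERVICE_AUTH_TOKEN should be a strong, randomly generated value (see section 7.1). A minimal sketch of generating one and attaching it as the X-Service-Token header named above:

```python
import secrets

# Generate a strong random value for SERVICE_AUTH_TOKEN and attach it
# as the X-Service-Token header described in the environment-variable
# reference. The helper names here are illustrative.
def make_service_token(nbytes: int = 32) -> str:
    # 32 random bytes -> ~43 URL-safe characters
    return secrets.token_urlsafe(nbytes)

def service_headers(token: str) -> dict:
    return {"X-Service-Token": token}

token = make_service_token()
headers = service_headers(token)
```

The generated value would then be passed to the container, e.g. `docker run -e SERVICE_AUTH_TOKEN=<token> ...`.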
================================================================================ 4. DEPLOYMENT GUIDE
4.1 Prerequisites
o AWS Account: Active AWS account with appropriate permissions
o IAM Permissions: ecr:GetAuthorizationToken, ecr:BatchCheckLayerAvailability, ecr:GetDownloadUrlForLayer, ecr:BatchGetImage
o Networking: VPC with subnets and security group allowing inbound traffic on port 8080/tcp
4.2 Quick Start (Docker)
For local evaluation, run the container directly with Docker:
docker run -p 8080:8080 \
  709825985650.dkr.ecr.us-east-1.amazonaws.com/prodigy-ai-solutions/verbis-graph-engine-free:metering_v20
4.3 Amazon ECS Deployment
- Subscribe to the product: Click 'Continue to Subscribe' on the AWS Marketplace listing page
- Accept terms: Review and accept the Apache License 2.0 terms
- Create ECS cluster: Use existing cluster or create new via ECS console
- Create task definition: Define container with ECR image URI, port mapping (8080)
- Configure networking: Assign VPC, subnets, and security group with port 8080 open
- Launch service: Create ECS service with desired task count
- Verify deployment: Access /_stcore/health endpoint to confirm container is running
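Steps 4-7 above can be captured in a task definition. The skeleton below is a sketch under stated assumptions: the family name, log configuration omission, and Fargate sizing choices are illustrative, while the image URI, port 8080, and /_stcore/health path come from the listing.

```python
import json

# Skeleton ECS (Fargate) task definition for the steps above. The
# family/container names are assumed; image, port, and health-check
# path are taken from the listing.
IMAGE = ("709825985650.dkr.ecr.us-east-1.amazonaws.com/"
         "prodigy-ai-solutions/verbis-graph-engine-free:metering_v20")

task_definition = {
    "family": "verbis-graph-engine",              # assumed name
    "networkMode": "awsvpc",
    "requiresCompatibilities": ["FARGATE"],
    "cpu": "1024",                                # 1 vCPU (minimum)
    "memory": "4096",                             # 4 GiB (recommended)
    "containerDefinitions": [{
        "name": "verbis-graph-engine",
        "image": IMAGE,
        "portMappings": [{"containerPort": 8080, "protocol": "tcp"}],
        "healthCheck": {
            "command": ["CMD-SHELL",
                        "curl -f http://localhost:8080/_stcore/health || exit 1"],
            "startPeriod": 60,  # allow warm-up on first start
        },
    }],
}
task_definition_json = json.dumps(task_definition, indent=2)
```

The JSON can be registered with `aws ecs register-task-definition --cli-input-json file://taskdef.json`, after adding your execution role and log configuration.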
4.4 Amazon ECS Anywhere (Hybrid/On-Premises)
For on-premises or hybrid deployments using ECS Anywhere:
- Register external instances: Install ECS agent and SSM agent on your on-premises servers
- Create ECS Anywhere cluster: Configure cluster with EXTERNAL capacity providers
- Deploy task: Use the same task definition with your registered external instances
Note: ECS Anywhere may incur additional costs (e.g., $0.01025 per instance-hour).
================================================================================ 5. API REFERENCE
Key API endpoints available in this release:
Method  Endpoint             Description
GET     /_stcore/health      Health check endpoint
POST    /api/auth/register   Register new user account
POST    /api/auth/login      Authenticate and obtain bearer token
POST    /api/docs/upload     Upload documents to workspace
POST    /api/docs/index      Trigger GraphRAG indexing
POST    /api/chat            Query GraphRAG (streaming SSE)
GET     /docs                OpenAPI/Swagger documentation
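The register/login/authenticated-request flow implied by the endpoint table might look as follows. The field names (`email`, `password`, `access_token`) are assumptions about the schema, not confirmed; the Swagger UI at /docs is authoritative.

```python
import json

# Sketch of the register -> login -> authenticated-request flow implied
# by the endpoint table above. Field names ("email", "password") and the
# "access_token" response field are assumptions, not a confirmed schema.
def register_payload(email: str, password: str) -> str:
    """Body for POST /api/auth/register."""
    return json.dumps({"email": email, "password": password})

def auth_header_from_login(login_response: dict) -> dict:
    """Build the bearer header from a POST /api/auth/login response."""
    token = login_response["access_token"]  # assumed response field
    return {"Authorization": f"Bearer {token}"}

hdr = auth_header_from_login({"access_token": "abc123"})
payload = register_payload("user@example.com", "correct-horse")
```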
================================================================================ 6. KNOWN LIMITATIONS & SCOPE
6.1 Usage Limits
o Single-Tenant: One container instance per deployment; not designed for multi-tenant production workloads
6.2 Technical Limitations
o Region Availability: Available in all AWS regions supported by AWS Marketplace container products
o Data Retention: Application-level data retention during the lifetime of the container and user account. Persistence across container termination or redeployment is not guaranteed.
o No High Availability: Single container deployment; no built-in clustering or failover
6.3 Not Included
o Enterprise SSO/SAML authentication
o Role-Based Access Control (RBAC)
o VPC PrivateLink deployment
o SOC 2 compliance features
o Dedicated support SLA
Enterprise features are available in Verbis Graph Professional and Enterprise editions.
================================================================================ 7. SECURITY CONSIDERATIONS
7.1 Authentication
o Two-Layer Auth: SERVICE_AUTH_TOKEN for service access, plus user bearer tokens for workspace operations
o Token Security: Always use strong, randomly generated tokens
7.2 Network Security
o TLS Termination: Use ALB or API Gateway with HTTPS for production deployments
o Security Groups: Restrict port access to trusted IPs or the internal VPC only
7.3 Data Security
o LLM queries are sent to configured providers
================================================================================ 8. TROUBLESHOOTING
Issue: Image pull failures
Solution: Verify ECR pull permissions and that the AWS Marketplace subscription is active
Issue: Container OOM killed
Solution: Increase memory allocation to 4 GiB; large documents require more memory for indexing
================================================================================ 9. UPCOMING FEATURES (ROADMAP)
Q1 2026:
o Native MCP Server for AI Agent integration (Claude, GPT)
o LangChain and LlamaIndex framework integrations
o Professional tier launch on AWS Marketplace
Q2 2026:
o Enterprise SSO/SAML authentication
o Role-Based Access Control (RBAC)
o SOC 2 Type II certification (in progress)
o Amazon Bedrock Agents native integration
Q3-Q4 2026:
o VPC PrivateLink deployment option
o Multi-tenant SaaS deployment
================================================================================ 10. SUPPORT & CONTACT
10.1 Support
Support is provided as follows:
o Response Time: Within 1 business day during EU business hours
o Channels: Email and documentation only
o Scope: Deployment assistance and basic troubleshooting
10.2 Contact Information
Email: support@verbis-chat.com
Documentation: https://docs.verbisgraph.com
Website: https://verbisgraph.com
================================================================================ LEGAL NOTICE
This product is provided 'as is' under the Apache License 2.0. ProdigyAI Solutions makes no warranties regarding fitness for a particular purpose. AWS infrastructure costs are the customer's responsibility.
© 2025 ProdigyAI Solutions. All rights reserved. Verbis Graph is a trademark of ProdigyAI Solutions.
Additional details
Usage instructions
Quick start (Docker)
Pull the image (replace with your Marketplace image/tag if required): docker pull <MARKETPLACE_ECR_REPO>:<TAG>
Run the container: docker run --rm -p 8080:8080 <MARKETPLACE_ECR_REPO>:<TAG>
Open the UI: http://localhost:8080
Health check: curl -f http://localhost:8080/_stcore/health (expects HTTP 200)
AWS ECS Fargate (high level)
Create an ECS task definition using this image.
Container port: 8080/TCP
Recommended task size for demo: 1 vCPU / 2 GB RAM
Assign public IP (for direct access) or place behind an ALB.
If using an ALB, set target group health check path to: /_stcore/health.
Configuration
No environment variables are required for a local demo run; when exposing the API, set SERVICE_AUTH_TOKEN as described in the release notes.
Default port is 8080. Logs go to stdout/stderr (view via docker logs or CloudWatch Logs on ECS).
The container runs as a non-root user.
Troubleshooting
If the UI is not reachable, confirm the port mapping/security group allows inbound TCP 8080.
If health checks fail on first start, allow a warm-up period (start-period ~60s recommended on ECS).
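The warm-up advice above can be expressed as a small polling loop. This is a sketch with an injected probe function so the logic runs without a live container; in practice the probe would issue an HTTP GET to /_stcore/health and return the status code.

```python
import time

# Poll the health endpoint until it returns 200 or the warm-up window
# expires. The probe callable is injected so this sketch runs without a
# container; a real probe would GET http://<host>:8080/_stcore/health.
def wait_healthy(probe, timeout_s: float = 60.0, interval_s: float = 1.0,
                 sleep=time.sleep) -> bool:
    waited = 0.0
    while waited <= timeout_s:
        if probe() == 200:
            return True
        sleep(interval_s)
        waited += interval_s
    return False

# Simulated container that becomes healthy on the third probe.
responses = iter([503, 503, 200])
ok = wait_healthy(lambda: next(responses), timeout_s=5, interval_s=1,
                  sleep=lambda s: None)
```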
Teardown / cost control (ECS)
Scale ECS service desired tasks to 0 and delete the service to stop charges.
Remove associated load balancer/resources if created.
Launch the container using Amazon ECS or Amazon EKS.
Ensure port 8080 is open in the security group.
Access the application: after deployment, obtain the public IP or load balancer URL, then open your browser and navigate to http://<server-ip>:8080
First use: upload a document for indexing, wait for the indexing process to complete, ask questions about the document, and explore the generated knowledge graph. (These steps are also covered in this video: https://www.youtube.com/watch?v=JhXqYwpJHlE)
API access (optional): open the Swagger UI at http://<server-ip>:8080/docs
Resources
Vendor resources
Support
Vendor support
Support
Support for Verbis Graph - GraphRAG Knowledge Retrieval Engine (Container Edition) is provided via email, personal developer sessions, professional implementation services, and self-service resources.
Contact Support
| Channel | Details |
|---|---|
| Email | support@verbisgraph.com |
| Support Hours | Monday - Saturday, 09:00 - 21:00 CET |
| Response Time | Best-effort, typically within 24-48 hours |
1-to-1 Developer Session (30 Minutes - Free)
Not sure where to start? We offer one complimentary 30-minute session with our engineering team for every new paid user.
In this session we can help you with:
- Container deployment on Amazon ECS or Amazon EKS
- Security group and port configuration
- SDK integration (Python or JavaScript)
- First document upload and knowledge graph setup
- REST API and Swagger UI walkthrough
Book your session: email support@verbisgraph.com
This is a one-time complimentary session per customer. Additional developer sessions and full installation services are available as part of our Professional Services offering - contact us for a quote.
Professional Installation Services
Need full hands-on deployment in your environment? Our engineering team offers paid professional services including:
- Managed container installation on ECS or EKS
- Network, VPC and security group configuration
- Full SDK integration and testing
- Custom architecture and enterprise deployment planning
- Knowledge graph setup and document ingestion pipeline
Request a quote: https://aws.amazon.com/marketplace/pp/prodview-flluxucqh3ktc
Self-Service Support
24/7 AI-powered assistant and documentation are available.
Video Setup Guide: Demo Deployment Walkthrough
AWS infrastructure support
AWS Support is a one-on-one, fast-response support channel that is staffed 24x7x365 with experienced and technical support engineers. The service helps customers of all sizes and technical abilities to successfully utilize the products and features provided by Amazon Web Services.