
    AnythingLLM - Hardened Self-Hosted Private ChatGPT for Documents (RAG)

    Sold by: Lynxroute 
    Deployed on AWS
    Free Trial
This product has charges associated with it for hardening, security configuration, and support. AnythingLLM is a self-hosted ChatGPT-style workspace for chatting with your documents using any LLM provider, with built-in RAG, AI agents, and vector search in a single hardened container. Unlike bare AnythingLLM AMIs that ship without TLS, without admin auth, and with the server listening on 0.0.0.0:3001, this Lynxroute build is ready out of the box: an admin user with a unique password at first boot, the server bound to loopback behind Nginx TLS, embedded LanceDB and the native embedder pre-configured, all on a CIS Level 1 hardened Ubuntu 24.04 LTS base. MIT license - fully auditable, no vendor lock-in.

    Overview

    This is a repackaged software product wherein additional charges apply for hardening, security configuration, and support.

    WHAT IS ANYTHINGLLM

AnythingLLM is an open-source, self-hosted ChatGPT-style application that turns any document collection into a private, queryable workspace. The server is a Node.js application that ingests PDFs, DOCX, HTML, plain text, audio transcripts, and code; chunks and embeds the content into a vector database; and serves a multi-user web UI for chat and AI agents. It supports any LLM provider through a single switch in the admin UI - OpenAI, Anthropic Claude, AWS Bedrock, Azure OpenAI, Google Gemini, Cohere, Mistral, Groq, OpenRouter, Ollama, LM Studio, vLLM, LiteLLM, and many more. AI agents can browse the web, run SQL, call tools, and execute multi-step workflows. Operators get team workspaces with per-workspace document scope, role-based access, public chat embeds, a REST API for programmatic access, an OpenAI-compatible developer API, and a built-in admin UI. It persists users, workspaces, chats, documents, and vectors in embedded SQLite and LanceDB - no external database required. MIT license, no vendor lock-in.

    WHAT THIS AMI ADDS

    Security hardening:

    • Admin user with unique password (EC2 Instance ID) generated at first boot - never baked into the AMI
    • JWT signing key, encryption signing key, and salt (>=32 chars each) generated at first boot
    • AnythingLLM container bound to 127.0.0.1:3001 only - reachable only through the Nginx reverse proxy with TLS
    • Multi-user mode enabled at first boot - the bootstrap auth window closes immediately after admin creation
    • No provider API keys baked in - operator configures OpenAI, Anthropic, Bedrock, Ollama, etc. in the admin UI after login
    • Anonymous telemetry disabled by default
    • Nginx reverse proxy with TLS, HTTP-to-HTTPS redirect, WebSocket support for streaming chat, security headers
    • UFW firewall pre-configured - only TCP 22, 80, 443 are exposed
• fail2ban intrusion prevention and AppArmor mandatory access control enabled
    • CVE scan - every image is scanned for vulnerabilities before release
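As a rough sketch, per-instance secrets of the kind described above (JWT signing key, encryption signing key, salt, each at least 32 characters) can be generated with openssl at first boot. The variable names below are illustrative; the AMI's actual bootstrap script may differ:

```shell
# Hypothetical first-boot secret generation (illustrative only; the AMI's
# real bootstrap script may use different names and mechanisms).
JWT_SECRET="$(openssl rand -hex 32)"   # 64 hex chars, well above the 32-char minimum
SIG_KEY="$(openssl rand -hex 32)"
SIG_SALT="$(openssl rand -hex 32)"

# Print the lengths to confirm each secret meets the minimum.
printf '%s\n' "${#JWT_SECRET}" "${#SIG_KEY}" "${#SIG_SALT}"
```

Because the secrets are generated at boot rather than baked into the image, every instance launched from the AMI gets unique keys.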

    Out of the box, with no external services:

    • Embedded LanceDB vector database - file-based, no separate container or network call
    • Native embedder (Xenova/all-MiniLM-L6-v2) running inside the container - no external embedding API key required for RAG
    • SQLite storage for users, workspaces, chats - no external database to provision

    OS hardening (CIS Level 1):

    • CIS Ubuntu 24.04 LTS Level 1 benchmark applied via ansible-lockdown
    • auditd, SSH hardening, kernel hardening, IMDSv2 enforced

    Compliance artifacts:

    • SBOM - CycloneDX 1.6 at /etc/lynxroute/sbom.json
    • CIS Conformance Report at /etc/lynxroute/cis-report.html
    • CIS Tailored Profile at /usr/share/doc/lynxroute/CIS_TAILORED_PROFILE.md
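CycloneDX SBOMs are plain JSON, so they can be inspected with standard tooling. The sample document below is fabricated to show the shape; on a running instance you would point jq at /etc/lynxroute/sbom.json instead:

```shell
# Fabricated CycloneDX-shaped sample for illustration; the real SBOM on the
# instance lives at /etc/lynxroute/sbom.json.
cat > /tmp/sbom-sample.json <<'EOF'
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.6",
  "components": [
    {"name": "nginx", "version": "1.24.0"},
    {"name": "anythingllm", "version": "1.12.1"}
  ]
}
EOF

# List each component name and version, one per line.
jq -r '.components[] | "\(.name) \(.version)"' /tmp/sbom-sample.json
```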

    Highlights

    • Security baked in: admin user with unique password at first boot, JWT and encryption keys per instance, container bound to 127.0.0.1 behind Nginx TLS - unlike bare AnythingLLM AMIs that ship without TLS, with port 3001 open to the world, and with no admin auth.
    • RAG works out of the box - no external embedding service or vector DB to provision: ships with the native embedder (Xenova/all-MiniLM-L6-v2) inside the container and embedded LanceDB on a persistent host volume. Just upload documents and chat. Add provider API keys later in the admin UI.
    • CIS Level 1 hardened Ubuntu 24.04 LTS: auditd, fail2ban, AppArmor, SSH key-only, IMDSv2 enforced. CVE-scanned before every release. SBOM (CycloneDX) and CIS Conformance Report included. MIT license - fully auditable, no vendor lock-in.
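The loopback-plus-TLS layout described above has a well-known Nginx shape. The fragment below is an illustrative sketch only - certificate paths and other details of the shipped config may differ:

```nginx
# Illustrative reverse-proxy sketch (hypothetical paths; the AMI's actual
# Nginx config may differ).
server {
    listen 80;
    return 301 https://$host$request_uri;        # HTTP-to-HTTPS redirect
}

server {
    listen 443 ssl;
    ssl_certificate     /etc/nginx/ssl/server.crt;   # hypothetical path
    ssl_certificate_key /etc/nginx/ssl/server.key;   # hypothetical path

    location / {
        proxy_pass http://127.0.0.1:3001;        # loopback-bound container
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;  # WebSocket support
        proxy_set_header Connection "upgrade";   # for streaming chat
        proxy_set_header Host $host;
    }
}
```

Because the container listens only on 127.0.0.1, traffic that bypasses Nginx (e.g. a direct hit on port 3001 from outside) never reaches the application.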

    Details

    Delivery method

    Delivery option
    64-bit (x86) Amazon Machine Image (AMI)

Latest version
1.12.1

    Operating system
    Ubuntu 24.04



    Pricing

    Free trial

    Try this product free for 5 days according to the free trial terms set by the vendor. Usage-based pricing is in effect for usage beyond the free trial terms. Your free trial gets automatically converted to a paid subscription when the trial ends, but may be canceled any time before that.

Pricing is based on actual usage, with charges varying according to how much you consume. Subscriptions have no end date and may be canceled any time.
Additional AWS infrastructure costs may apply. Use the AWS Pricing Calculator to estimate your infrastructure costs.

    Usage costs (5)

Dimension       Cost/hour
t3.medium       $0.02  (Recommended)
t3.large        $0.03
t3.small        $0.02
m6i.xlarge      $0.05
m6i.large       $0.03

    Vendor refund policy

    We do not offer refunds for this product. AWS infrastructure charges (EC2, EBS, data transfer) are billed separately by AWS and are not refundable by us.


    Legal

    Vendor terms and conditions

Upon subscribing to this product, you must acknowledge and agree to the terms and conditions outlined in the vendor's End User License Agreement (EULA).

    Content disclaimer

    Vendors are responsible for their product descriptions and other product content. AWS does not warrant that vendors' product descriptions or other product content are accurate, complete, reliable, current, or error-free.


    Delivery details

    64-bit (x86) Amazon Machine Image (AMI)

    Amazon Machine Image (AMI)

    An AMI is a virtual image that provides the information required to launch an instance. Amazon EC2 (Elastic Compute Cloud) instances are virtual servers on which you can run your applications and workloads, offering varying combinations of CPU, memory, storage, and networking resources. You can launch as many instances from as many different AMIs as you need.

    Version release notes

    Version 1.12.1 - Initial release (May 2026)

• AnythingLLM 1.12.1 as a single container from the upstream Docker image (mintplexlabs/anythingllm:1.12.1) on Ubuntu 24.04 LTS
    • CIS Level 1 hardening applied (ansible-lockdown/UBUNTU24-CIS)
    • CVE-scanned before every release
    • Admin user with unique password (EC2 Instance ID) generated at first boot via POST /api/system/enable-multi-user
    • JWT signing key, encryption signing key, and salt (>=32 chars each) generated per instance
    • Container bound to 127.0.0.1:3001 and reachable only through Nginx with TLS
    • Embedded LanceDB vector store and native embedder pre-configured - RAG works out of the box
    • No provider API keys pre-configured - operator configures OpenAI, Anthropic, Bedrock, Ollama, etc. in the admin UI
    • Anonymous telemetry disabled by default
    • Persistent storage at /opt/anythingllm/storage (SQLite + LanceDB + documents) - can be moved to a dedicated EBS volume
    • UFW firewall pre-configured (TCP 22, 80, 443 only)
    • fail2ban, auditd, AppArmor pre-configured
    • SBOM (CycloneDX 1.6) at /etc/lynxroute/sbom.json
    • CIS Conformance Report (OpenSCAP) at /etc/lynxroute/cis-report.html
    • IMDSv2 enforced
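If the storage directory is moved to a dedicated EBS volume as noted above, an /etc/fstab entry along these lines would keep it mounted across reboots. The device name and filesystem are illustrative - NVMe device naming varies by instance type:

```
# Hypothetical /etc/fstab entry for a dedicated data volume (device name
# and filesystem are assumptions; adjust to your instance).
/dev/nvme1n1  /opt/anythingllm/storage  ext4  defaults,nofail  0  2
```

The nofail option lets the instance boot even if the volume is detached, which is usually the safer default for a secondary data volume.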

    Additional details

    Usage instructions

    1. Launch instance (t3.medium recommended; t3.small works for light loads)
    2. Open Security Group - allow TCP 443 from your IP
    3. SSH: ssh -i key.pem ubuntu@<PUBLIC_IP>
    4. Read credentials: sudo cat /root/anythingllm-credentials.txt
    5. Open https://<PUBLIC_IP>/ in your browser - accept the self-signed certificate warning
    6. Log in with admin credentials from the credentials file
    7. Navigate to Settings -> AI Providers and configure your preferred LLM (OpenAI, Anthropic, AWS Bedrock, Ollama, etc.)
    8. Create a workspace, upload documents, and start chatting

    Admin password equals the EC2 Instance ID. Credentials are saved to /root/anythingllm-credentials.txt at first boot. RAG works out of the box with the built-in native embedder and LanceDB - no external embedding service required. Replace the self-signed TLS certificate with a CA-signed certificate for production use.


    Support

    Vendor support

    Visit us online: https://lynxroute.com 

For AnythingLLM documentation: https://docs.anythingllm.com
For AnythingLLM upstream issues: https://github.com/Mintplex-Labs/anything-llm/issues
For AWS infrastructure issues:

    AWS infrastructure support

    AWS Support is a one-on-one, fast-response support channel that is staffed 24x7x365 with experienced and technical support engineers. The service helps customers of all sizes and technical abilities to successfully utilize the products and features provided by Amazon Web Services.
