
Air-Gap Deployment: Running HammerLockAI in Government and Defense Environments

HammerLock Research Desk · 6 min read

The most sensitive environments in the world — classified government networks, defense intelligence systems, critical infrastructure operators — have a category of security requirement that goes beyond encryption and access controls: air gapping. Complete physical isolation from the public internet.

No data in. No data out. The network doesn't exist to the outside world.

Air-gapped environments are fundamentally incompatible with cloud AI tools. By definition, a system that sends queries to OpenAI, Anthropic, or any other external API cannot operate on an air-gapped network. The cloud model is the antithesis of air-gap security.

HammerLockAI's local-first architecture, built on Ollama's local model runtime, is designed to operate fully air-gapped. This article covers how to deploy it.

What Air-Gap Deployment Actually Means

Air gapping is often used loosely to mean "more secure than normal." In its strict technical sense, it means physical isolation: a system that has no network interface connected to an external network, no wireless capability, and no mechanism by which data can traverse to or from an external network without physical media transfer.

True air-gap environments include:

- Classified government networks (SIPRNet, JWICS in US military contexts)
- Nuclear facility control systems
- Election infrastructure in some jurisdictions
- Critical infrastructure control networks (power grid, water treatment)
- High-security financial clearing systems

In these environments, every piece of software must operate without any external dependency. Models must be downloaded and transferred via physical media before the system goes air-gapped. All processing must happen locally. No telemetry, no update checks, no license validation servers.

HammerLockAI in air-gap mode meets these requirements.

Pre-Deployment: Downloading Models Before You Go Dark

The only step that requires internet connectivity in a HammerLockAI air-gap deployment is the initial model download. This must be completed before the system is air-gapped, or the models must be transferred via approved physical media to the air-gapped system.

Step 1: Download Ollama on an internet-connected system. From ollama.com, download the Ollama runtime for your target OS: a standalone binary or install script on Linux, and a standard installer on macOS and Windows.

Step 2: Pull models while connected.

ollama pull llama3.1
ollama pull mistral
ollama pull phi3

Pull every model you may need. Models are stored at ~/.ollama/models on Linux/macOS and %USERPROFILE%\.ollama\models on Windows. These directories contain everything needed to run the models offline.

Step 3: Download HammerLockAI. Download the HammerLockAI desktop application. Verify the download hash against the published hash before transfer.
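
The hash check is a one-liner. A minimal sketch (the installer file name is a placeholder, and a stand-in file is created here so the example runs end to end; in practice EXPECTED is copied from the published hash, never computed locally):

```shell
# Verify a downloaded artifact against the vendor's published SHA-256 hash
# before moving it to transfer media.
ARTIFACT="hammerlockai-installer.bin"          # placeholder file name
printf 'abc' > "$ARTIFACT"                     # stand-in for the real download
# In practice, paste this value from the vendor's download page.
EXPECTED="ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad"
ACTUAL=$(sha256sum "$ARTIFACT" | awk '{print $1}')

if [ "$ACTUAL" = "$EXPECTED" ]; then
    echo "hash OK: safe to transfer"
else
    echo "HASH MISMATCH: do not transfer" >&2
fi
```

On macOS, `shasum -a 256` stands in for `sha256sum`.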

Step 4: Transfer to air-gapped system. Transfer Ollama, the models directory, and HammerLockAI via approved physical media (USB, DVD, or secure file transfer system depending on your environment's approved procedures).
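
The models directory can be staged onto media as a single archive. A sketch using stand-in temp directories so it runs anywhere; in a real transfer, SRC_ROOT is ~/.ollama and DEST is the mount point of your approved media:

```shell
# Connected side: archive the Ollama models directory onto transfer media.
SRC_ROOT=$(mktemp -d)          # stands in for ~/.ollama
DEST=$(mktemp -d)              # stands in for the media mount point
mkdir -p "$SRC_ROOT/models"
printf 'weights' > "$SRC_ROOT/models/example.bin"   # stand-in model file

tar -czf "$DEST/ollama-models.tar.gz" -C "$SRC_ROOT" models

# Air-gapped side: restore into the target user's .ollama directory.
RESTORE_ROOT=$(mktemp -d)      # stands in for ~/.ollama on the isolated host
tar -xzf "$DEST/ollama-models.tar.gz" -C "$RESTORE_ROOT"
ls "$RESTORE_ROOT/models"
```

Archiving with `-C` keeps paths relative, so the extract lands cleanly under the destination `.ollama` directory regardless of usernames on the two machines.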

After transfer, no internet connection is required for any operation. The entire stack — HammerLockAI, Ollama runtime, model weights — operates locally.

Configuring HammerLockAI for Air-Gap Operation

In HammerLockAI's settings, configure the following for air-gap deployment:

Disable all cloud providers. In Settings → API Keys, remove or don't configure any cloud provider keys. With no cloud providers configured, HammerLockAI routes exclusively to Ollama. There will be no attempted outbound connections to cloud APIs.

Set Ollama as the sole provider. Confirm in Settings → Models that only Ollama/local models are available. The interface should show only local models in the model selector.
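
With cloud providers removed, the only endpoint HammerLockAI needs is the local Ollama API, which listens on localhost:11434 by default. A quick reachability sketch, touching loopback only:

```shell
# Check whether the local Ollama API is up; /api/tags lists installed models.
OLLAMA_URL="http://localhost:11434"
if curl -fsS "$OLLAMA_URL/api/tags" >/dev/null 2>&1; then
    REACHABLE=yes
else
    REACHABLE=no
fi
echo "local Ollama reachable: $REACHABLE"
```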

Disable web search. Brave-powered search requires internet connectivity; turn it off in settings. Queries in an air-gapped environment will use only document context provided within the session.

Verify no telemetry. HammerLockAI does not collect telemetry or usage data by default. In air-gap environments, verify through network monitoring that no outbound connections are attempted. If your environment requires formal verification, this can be confirmed by running packet capture during a session.
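
Short of a full root-level capture, a lightweight spot check on Linux is to count established outbound TCP connections during a session; on a properly air-gapped host the count should be zero:

```shell
# Count established TCP connections on this host. An air-gapped system
# should report 0 during a HammerLockAI session. For formal verification,
# still run a packet capture (root): tcpdump -i any -w session.pcap
CONNS=$(ss -Htn state established 2>/dev/null | wc -l)
echo "established TCP connections on this host: $CONNS"
```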

Set a strong encryption password. In air-gap environments, your local vault's encryption password is the primary access control. Use a strong passphrase that meets your organization's password policy. There is no remote reset mechanism — losing the password means losing access to vault contents.

Recommended Models for Air-Gap Deployment

Model selection for air-gap environments involves balancing capability against hardware constraints. Common configurations:

High-capability systems (GPU server, 48GB+ VRAM):

- Llama 3.1 70B (Q4 quantized) — near-frontier capability for complex analysis and research
- Mixtral 8x7B (Q4 quantized) — Mistral AI's mixture-of-experts model, strong for diverse task types

Mid-tier systems (workstation with 24GB VRAM or Apple Silicon with 48GB unified memory):

- Llama 3.1 8B — strong general capability at this tier
- Mistral 7B — fast, capable, efficient
- Phi-3 Medium — Microsoft's efficient model, strong on reasoning tasks

Lower-resource systems (CPU-only or 8GB VRAM):

- Llama 3.2 3B — capable at smaller context tasks
- Phi-3 Mini — very efficient, deployable on constrained hardware
- Gemma 2B — small but useful for targeted tasks

For intelligence analysis, document synthesis, and structured research workflows — the primary use cases in government/defense environments — Llama 3.1 8B or larger is the recommended minimum. The 70B model on appropriate GPU hardware delivers output quality competitive with cloud providers for most analytical tasks.
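
As a rough sizing aid when matching models to the tiers above: a Q4-quantized model needs about half a byte per parameter for weights, plus overhead for the KV cache and runtime buffers. A back-of-envelope sketch (the 20% overhead factor is a rule of thumb, not a measured figure):

```shell
# Rough memory estimate for a Q4-quantized model of P billion parameters:
# ~0.5 bytes/param for weights, plus ~20% runtime overhead (rule of thumb).
estimate_gb() {
    awk -v p="$1" 'BEGIN { printf "%.1f\n", p * 0.5 * 1.2 }'
}
echo "70B at Q4: ~$(estimate_gb 70) GB"   # lands in the 48GB+ VRAM tier
echo "8B at Q4:  ~$(estimate_gb 8) GB"    # comfortable on mid-tier hardware
```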

Multi-User Air-Gap Deployment

For organizations deploying HammerLockAI across multiple users on an air-gapped network:

Option A: Individual workstation deployment. Each user runs their own HammerLockAI and Ollama instance. Simple, fully isolated. Each user has their own encrypted vault. Model weights are duplicated across machines (storage cost, but maximum isolation).

Option B: Shared Ollama server. A central server runs Ollama and serves models to multiple HammerLockAI instances across the local network. HammerLockAI can point to a remote Ollama endpoint (configured in Settings → Providers → Ollama URL). Individual users run HammerLockAI locally, model inference runs on the shared server. This is the preferred architecture for organizations with a GPU server already in the environment.
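
A minimal sketch of the shared-server wiring, using Ollama's standard OLLAMA_HOST environment variable and an example LAN address (an assumption to adjust); in a real deployment, restrict access to the workstation subnet with firewall rules:

```shell
# On the GPU server: bind Ollama to a LAN-reachable address instead of
# loopback. (The serve command is commented out: it is a long-running process.)
export OLLAMA_HOST="0.0.0.0:11434"
# ollama serve

# On each workstation: the Ollama URL entered in HammerLockAI's settings
# points at the server's LAN address (example address shown).
SERVER_URL="http://10.0.0.5:11434"
echo "workstation Ollama endpoint: $SERVER_URL"
```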

Option C: Full server deployment. HammerLockAI and Ollama both run on a server, with users accessing through a web interface. This reduces client requirements to a browser. Contact info@hammerlockai.com for enterprise server deployment configuration.

Security Hardening for Classified Environments

Standard HammerLockAI deployment is already local-first and encrypted. For classified environments, additional hardening is appropriate:

Physical security. The device running HammerLockAI should be physically secured consistent with the classification level of the data being processed. This is a facility/operations security requirement, not a software configuration — but it's the foundation.

Full disk encryption. HammerLockAI's vault is AES-256 encrypted, but the vault file still exists on your disk. Enable full disk encryption (FileVault on macOS, BitLocker on Windows, LUKS on Linux) to protect against physical media theft or unauthorized access to the device.
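
The status query differs per platform. A small sketch that maps the current OS to its standard check (it prints the command rather than executing it, since the macOS and Windows tools exist only on those platforms):

```shell
# Map the current OS to its full-disk-encryption status command.
OS=$(uname -s)
case "$OS" in
    Darwin) CHECK="fdesetup status" ;;                          # FileVault
    Linux)  CHECK="lsblk -o NAME,TYPE,FSTYPE | grep crypt" ;;   # LUKS/dm-crypt
    *)      CHECK="manage-bde -status" ;;                       # BitLocker
esac
echo "verify disk encryption with: $CHECK"
```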

Session management. Configure auto-lock on inactivity. HammerLockAI sessions lock when the application is closed or when your device locks. On shared systems, configure short auto-lock timeouts consistent with your organization's policy.

Audit logging. HammerLockAI's local session logs provide a record of queries and responses. In environments requiring audit trails, configure log retention consistent with your organization's requirements. Local logs are your audit record — there is no server-side log.

Model integrity verification. Before deploying models to classified systems, verify model file hashes against published hashes from Ollama's model registry. This ensures model files haven't been tampered with during transfer.
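
One way to implement this is a per-file hash manifest written on the connected side and re-checked after transfer. A sketch with stand-in files so it runs anywhere; in practice the blob directory is ~/.ollama/models/blobs, where Ollama keeps weights as content-addressed blobs:

```shell
# Stand-in blob directory; in practice this is ~/.ollama/models/blobs.
BLOBS=$(mktemp -d)
printf 'weights-a' > "$BLOBS/sha256-aaaa"
printf 'weights-b' > "$BLOBS/sha256-bbbb"
MANIFEST=$(mktemp)

# Connected side: record a hash for every blob before transfer.
( cd "$BLOBS" && find . -type f -exec sha256sum {} + | sort ) > "$MANIFEST"

# Air-gapped side: re-verify the transferred files against the manifest.
( cd "$BLOBS" && sha256sum -c "$MANIFEST" )
```

Transfer the manifest on separate media (or read it out-of-band) so a tampered archive cannot carry a matching tampered manifest.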

The Capability Argument for Local AI in Sensitive Environments

The argument for local AI in classified environments sometimes focuses exclusively on security — what you're preventing. It's worth also making the capability argument: what you're enabling.

Intelligence analysts, defense researchers, and government officials working on air-gapped networks currently have limited AI access. Most of the productivity gains from AI are unavailable to them because their tools can't reach cloud providers.

HammerLockAI on an air-gapped network gives classified-environment users:

- AI-assisted document synthesis and research
- Structured analytical frameworks through the Analyst and Researcher agents
- SOP development and operational planning through the Operator agent
- Drafting and writing assistance through the Writer agent
- PDF analysis of classified documents within the vault

These are substantive productivity capabilities that classified network users currently lack access to. Local AI doesn't just solve the security problem — it opens a capability door that the cloud dependency has kept closed.

The On-Premises Enterprise Path

For large organizations needing formal on-premises deployment with dedicated support, SLA guarantees, audit log infrastructure, SSO integration, and custom deployment architecture, HammerLockAI offers enterprise contracts.

Enterprise deployment includes:

- On-premises or private cloud server deployment
- Dedicated support engineer
- Custom SLA and uptime guarantees
- Audit logs and compliance reporting
- SSO integration for identity management
- Custom model configuration

Contact info@hammerlockai.com for enterprise and government deployment inquiries.


HammerLockAI supports fully air-gapped deployment. Built on OpenClaw, MIT licensed. View source on GitHub →