The artifact your security team needs to approve a Nocturne deployment.
What gets installed, where it runs, what access it requires, what data it touches, and how it is removed cleanly when a contract ends.
This guide describes the boundary, not the mechanism. How Nocturne produces reviews is proprietary and contractually protected. What matters for your approval is what crosses the boundary, and the answer is: read from and write to your git host, and nothing else.
Contents
- Overview
- Architecture at a glance
- Deployment playbook per environment
- Security boundaries
- Uninstall and data handling at contract end
- FAQ
- How to start a deployment
1. Overview
Nocturne is a managed AI code review service that stays quiet when there is nothing grounded to say. As your engineering organization adopts AI coding assistants, the volume of machine-written pull requests grows, and the volume of low-value review noise on those pull requests grows with it. Nocturne comments when something warrants a comment and stays quiet when it does not. Clients measure their savings in reviewer hours reclaimed per month and in dollar reductions on their current AI review vendor invoice.
Nocturne runs inside your private cloud tenant or your own data center. GPOut Labs deploys it there, operates it for you, tunes it for your codebase, and reports monthly on the savings it generates. You host the compute. We run the service on top of it. Your code never leaves your network.
The core security claim of this product is simple and non-negotiable: your source code and your pull request diffs never transit to GPOut Labs infrastructure. Nocturne reads from your git host, posts to your git host, and writes its operational state to disk on the instance you own. There is no external inference endpoint involved at review time. There is no telemetry. There is no phone-home.
What you buy is a measured reduction in review noise, delivered by a service that sits entirely inside your perimeter.
2. Architecture at a glance
Nocturne is deployed as a single long-lived compute instance inside your environment. That instance:
- Runs on a GPU-equipped host (cloud VM or physical server with 16 GB of VRAM or more).
- Performs all inference locally on that GPU. No external API calls are made at review time.
- Connects outbound only to your git host (GitHub, GitHub Enterprise, GitLab, Bitbucket, Gitea, or a self-hosted equivalent) to fetch pull request diffs and post review comments.
- Writes its operational state (verdict logs, review records, monthly savings reports) to a directory on local disk inside the same instance, inside the same tenant you own.
- Is deployed, operated, and tuned remotely by GPOut Labs through an access path you control and can revoke at any time.
That is the complete external shape of Nocturne. Everything inside the instance is proprietary and out of scope for this document. What matters for your approval is the boundary, and the boundary is: one instance, one GPU, read/write to your git host, local state, no other network access.
3. Deployment playbook per environment
Each sub-section below describes a full deployment path. Read the one that matches your environment. A security architect should be able to use it to produce a decision in roughly ten minutes.
3.1 AWS
Prerequisites. A dedicated VPC or subnet in the client's AWS account with a network path to the git host (public endpoint, VPC endpoint, or peered VPC, your choice). An instance type available in the chosen region: g4dn.xlarge (T4 GPU, 16 GB VRAM, lower cost) or g5.xlarge (A10G GPU, 24 GB VRAM, faster reviews). A small EBS volume (100 GB gp3 is typical) for operating system and local state.
Access requirements. A scoped IAM role or IAM user that grants GPOut Labs the minimum permissions needed to deploy and operate a single named instance: ec2:RunInstances, ec2:TerminateInstances, ec2:DescribeInstances, ec2:CreateTags scoped to a resource tag (for example nocturne=true), ssm:StartSession for the specific instance ID, and iam:PassRole for the instance profile only. No account-wide admin. No access to any other VPC, subnet, or service. We provide a ready-to-apply IAM policy JSON during the intake call.
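The policy JSON itself is supplied during intake; the sketch below only illustrates the shape such a policy could take. The tag key, ARNs, and names are placeholders, not the shipped policy. Note that launch actions are constrained by the tags in the request (aws:RequestTag) while termination is constrained by the tags already on the resource (aws:ResourceTag).

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "NocturneLaunchAndTag",
      "Effect": "Allow",
      "Action": ["ec2:RunInstances", "ec2:CreateTags"],
      "Resource": "*",
      "Condition": { "StringEquals": { "aws:RequestTag/nocturne": "true" } }
    },
    {
      "Sid": "NocturneTerminateTaggedOnly",
      "Effect": "Allow",
      "Action": "ec2:TerminateInstances",
      "Resource": "*",
      "Condition": { "StringEquals": { "aws:ResourceTag/nocturne": "true" } }
    },
    {
      "Sid": "DescribeIsReadOnly",
      "Effect": "Allow",
      "Action": "ec2:DescribeInstances",
      "Resource": "*"
    },
    {
      "Sid": "SessionToOneInstance",
      "Effect": "Allow",
      "Action": "ssm:StartSession",
      "Resource": "arn:aws:ec2:REGION:ACCOUNT_ID:instance/INSTANCE_ID"
    },
    {
      "Sid": "PassInstanceProfileRoleOnly",
      "Effect": "Allow",
      "Action": "iam:PassRole",
      "Resource": "arn:aws:iam::ACCOUNT_ID:role/NOCTURNE_INSTANCE_ROLE"
    }
  ]
}
```

A production policy typically splits RunInstances further by resource type, since the request-tag condition only applies to the resources being created, not to the AMI, subnet, or security group the launch references.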
Deployment steps. During a scheduled deployment window, GPOut Labs uses the scoped role to launch one instance in the agreed subnet, attach the instance profile, configure the security group (egress to git host only, no ingress from the public internet), install and configure Nocturne on the instance, verify that it can read a test pull request and post a test comment, and hand off to your pilot repository.
Time to live. Roughly four hours from deployment kickoff to first live reviews.
Estimated compute cost. $200 to $400 per month on your AWS bill, depending on instance family, region, and whether you choose on-demand or a savings plan. This cost is separate from the Nocturne service fee.
Ongoing client operational burden. None beyond paying the monthly AWS bill. GPOut Labs monitors the instance, applies updates, tunes the service, and handles incidents.
3.2 Azure
Prerequisites. A resource group in the client's Azure subscription, a VNet and subnet with a network path to the git host, a GPU VM size available in the chosen region (NC4as T4 v3 for cost, NC8as T4 v3 or an NCas v4 series SKU for more headroom). A managed disk (128 GB standard SSD is typical) for operating system and local state.
Access requirements. A service principal scoped to the single resource group with Virtual Machine Contributor on that resource group and Network Contributor on the specific VNet and subnet used for the deployment. No subscription-wide roles. No Key Vault access. No access to storage accounts outside the resource group. We provide a ready-to-apply Azure role assignment script during the intake call.
Deployment steps. During a scheduled deployment window, GPOut Labs authenticates as the service principal, creates one VM in the agreed resource group, attaches the managed disk, configures the Network Security Group (egress to git host only, no public inbound), installs and configures Nocturne on the VM, verifies a round-trip review, and hands off to your pilot repository.
Time to live. Roughly four hours from kickoff to first reviews.
Estimated compute cost. $250 to $450 per month on your Azure bill, depending on VM SKU, region, and reservation choice. Separate from the Nocturne service fee.
Ongoing client operational burden. None beyond paying the monthly Azure bill.
3.3 GCP
Prerequisites. A GCP project owned by the client with a VPC and subnet that has a route to the git host, a GCE region that offers T4 or L4 GPUs, and GPU quota approved for that region (this is often the slowest prerequisite and is worth confirming before the deployment call). A persistent disk (100 GB pd-balanced is typical) for operating system and local state.
Access requirements. A service account scoped to the single project with roles/compute.instanceAdmin.v1 conditional on an instance name prefix (for example nocturne-*), roles/compute.networkUser on the specific subnet, and roles/iap.tunnelResourceAccessor for the single instance if you prefer IAP-based remote access. No project-wide owner. No access to Cloud Storage, BigQuery, or any other GCP service. We provide a ready-to-apply gcloud script during intake.
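The instance-name scoping mentioned above uses GCP IAM Conditions. The binding below is a sketch of the idea only: the project, zone, and service account names are placeholders, and the gcloud script supplied during intake is authoritative. The `resource.type` clause is there because `roles/compute.instanceAdmin.v1` also touches non-instance resources, which a bare name condition would otherwise deny.

```json
{
  "role": "roles/compute.instanceAdmin.v1",
  "members": [
    "serviceAccount:nocturne-deploy@PROJECT_ID.iam.gserviceaccount.com"
  ],
  "condition": {
    "title": "nocturne-instances-only",
    "description": "Restrict instance operations to names prefixed nocturne-",
    "expression": "resource.type != 'compute.googleapis.com/Instance' || resource.name.startsWith('projects/PROJECT_ID/zones/ZONE/instances/nocturne-')"
  }
}
```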
Deployment steps. During a scheduled deployment window, GPOut Labs authenticates as the service account, creates one GCE instance in the agreed zone with the GPU attached, configures the firewall (egress to git host only), installs and configures Nocturne, verifies a round-trip review, and hands off.
Time to live. Roughly four hours from kickoff, assuming GPU quota is already approved. If quota is still pending, time to live is gated on Google's quota response.
Estimated compute cost. $200 to $400 per month on your GCP bill, depending on GPU choice, region, and committed-use discounts. Separate from the Nocturne service fee.
Ongoing client operational burden. None beyond paying the monthly GCP bill.
3.4 On-prem (Linux, self-managed hardware)
Prerequisites. A Linux server (Ubuntu 22.04 LTS or RHEL 9 are the primary supported distributions) with an NVIDIA GPU providing at least 16 GB of VRAM, CUDA drivers installed or installable by the deployment account, at least 100 GB of free disk space for operating system and local state, and a routable network path to the git host.
Access requirements. SSH access to the host for a dedicated deployment account (we suggest a local user named nocturne-deploy with sudo limited to the package manager and the Nocturne service unit). The client's existing bastion, jump host, or VPN arrangement is fine for us to use as long as the account is named and its access is logged. No root login required after initial provisioning. No access to any other host on the client network.
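As a concrete illustration of that sudo scoping, a sudoers drop-in for the deployment account could look like the sketch below. The unit name and package manager paths are assumptions (Ubuntu with apt shown), not the shipped configuration.

```
# /etc/sudoers.d/nocturne-deploy -- illustrative sketch only
# Package manager access (Ubuntu paths assumed)
nocturne-deploy ALL=(root) NOPASSWD: /usr/bin/apt-get, /usr/bin/apt
# Control of the Nocturne service unit, and nothing else
nocturne-deploy ALL=(root) NOPASSWD: /usr/bin/systemctl start nocturne.service
nocturne-deploy ALL=(root) NOPASSWD: /usr/bin/systemctl stop nocturne.service
nocturne-deploy ALL=(root) NOPASSWD: /usr/bin/systemctl restart nocturne.service
nocturne-deploy ALL=(root) NOPASSWD: /usr/bin/systemctl status nocturne.service
```

Because each command is listed explicitly, the account cannot run arbitrary systemctl operations or touch units belonging to other services on the host.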
Deployment steps. During a scheduled deployment window, GPOut Labs connects via the agreed SSH path, installs Nocturne under the deployment account, configures the systemd unit so the service restarts on reboot, configures the host firewall if the client has not already done so, verifies a round-trip review, and hands off.
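For reference, a restart-on-reboot unit of the kind described has roughly this shape. The binary path, user, and unit name are placeholders; the actual unit is installed and managed by GPOut Labs.

```ini
# /etc/systemd/system/nocturne.service -- sketch; paths and names are assumed
[Unit]
Description=Nocturne code review service
Wants=network-online.target
After=network-online.target

[Service]
User=nocturne-deploy
ExecStart=/opt/nocturne/bin/nocturne
Restart=on-failure
RestartSec=10

[Install]
WantedBy=multi-user.target
```

Enabling the unit (`systemctl enable nocturne.service`) is what makes it start on reboot; `Restart=on-failure` covers crashes while the host stays up.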
Time to live. Roughly six hours from kickoff. The extra time versus cloud covers driver verification and any small host configuration discoveries.
Estimated compute cost. Zero recurring cost beyond electricity, since the hardware is client-owned. Clients in this category typically already run GPU servers for other workloads and are adding Nocturne as an additional service on an existing host or on a spare machine.
Ongoing client operational burden. None beyond the usual hardware care (replacing failed disks, applying OS security updates, renewing TLS certificates on the git host). GPOut Labs handles all Nocturne-specific operations.
3.5 Air-gapped or regulated (SCIF-adjacent environments)
Prerequisites. A pre-provisioned Linux host inside the air gap matching the on-prem specification (Linux, 16 GB of VRAM or more, 100 GB of free disk, network path to the internal git host). A client escort authorized to accompany GPOut Labs personnel (physically or via a screen-sharing session controlled by the client) for the installation window. An existing inbound package delivery procedure for bringing in the Nocturne installation media, typically an encrypted drive or a signed archive cleared through the client's existing software intake process.
Access requirements. A deployment account on the host with sudo scoped to the package manager and the Nocturne service unit, consistent with the client's baseline for third-party software. Read-only escort for any remote session. No internet access from the host is required at any point. The host can remain fully isolated from the public internet for the entire lifetime of the deployment.
Deployment steps. The Nocturne installation media is delivered through the client's normal software intake procedure. During the installation window, under the escort, GPOut Labs installs Nocturne on the host, configures the service to connect only to the internal git host, verifies a round-trip review on a test pull request, and hands off. If the client requires all future tuning to be performed by the escorted path, GPOut Labs agrees to that arrangement during contract negotiation.
Time to live. Roughly eight hours from kickoff, absorbing the extra overhead of escorted access and media intake.
Estimated compute cost. Same as on-prem. Zero recurring cost beyond electricity, since hardware is client-owned.
Ongoing client operational burden. The client provides an escort whenever GPOut Labs needs to perform tuning or incident response. Cadence is typically quarterly or on demand.
4. Security boundaries
Nocturne is designed to give a security team the shortest possible list of things to worry about.
| Question | Answer |
|---|---|
| What Nocturne can access | Pull request diffs from your git host (read), the review comment endpoint on your git host (write), its own local state files on the instance it runs on, and the GPU on that same instance. Nothing else. |
| What Nocturne cannot access | The rest of your cloud account, subscription, or project. Any repository or service other than the ones its credentials are explicitly scoped to. Any network endpoint other than your git host. Any personal device, personal account, or corporate identity system belonging to your employees. No shared storage, no message buses, and no secrets managers beyond the credentials it was given for its own operation. |
| Data handling | All operational state Nocturne produces (verdict logs, review records, monthly savings data, tuning feedback the service uses to improve over time) is stored inside your tenant in a directory Nocturne owns on the deployment instance. No data leaves your network during normal operation. There is no telemetry stream. There is no phone-home. GPOut Labs does not keep a copy of your code, your pull requests, or your review history in our infrastructure. |
| Remote management access | GPOut Labs deploys, operates, and tunes Nocturne through a defined access path: SSH key for on-prem and air-gapped, scoped IAM role for AWS, service principal for Azure, service account for GCP. That path is issued by you, scoped by you, and revocable by you at any time without coordination with us. If you use a session recording tool (Teleport, StrongDM, CyberArk, AWS Systems Manager session logs, Azure Bastion logs, or GCP IAP logs), our sessions are recorded by your tool on your infrastructure. We do not require session encryption we control. We do not require a tunnel home. |
| Network egress | Nocturne makes zero outbound connections except to your git host. The deployment instance is safe to run in a private subnet with no NAT gateway and no public IP. If you prefer to run Nocturne behind an internal-only egress proxy that permits the git host and nothing else, that configuration is explicitly supported and we will help you verify it during deployment. |
What this means in practice. A determined security reviewer should be able to confirm Nocturne's network behavior with a packet capture on the deployment instance during a working day. The expected result is traffic to your git host and nothing else. If you want that test performed as part of the pilot, tell us during intake and we will plan for it.
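That capture check is simple enough to script. The sketch below (Python, with placeholder host names) compares the destination hosts observed in a capture against the approved allowlist; an empty result is what confirms the git-host-only claim.

```python
def unexpected_egress(observed_destinations, allowed_hosts):
    """Return destinations seen in a capture that are not on the
    approved egress allowlist; an empty set means the boundary holds."""
    return set(observed_destinations) - set(allowed_hosts)

# Destination hosts extracted from a day's packet capture on the
# Nocturne instance (placeholder names, not real endpoints).
capture = ["git.internal.example.com", "git.internal.example.com"]
allowlist = {"git.internal.example.com"}

assert unexpected_egress(capture, allowlist) == set()
```

How you extract the destination list (tcpdump, VPC flow logs, an NSG flow log) is up to your tooling; the comparison itself is the audit.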
5. Uninstall and data handling at contract end
Nocturne is built to leave cleanly.
Uninstall process. On contract termination, GPOut Labs schedules a wind-down window. During that window, we stop the Nocturne service, remove the Nocturne binaries and configuration files from the deployment instance, and terminate or hand back the compute resource at the client's preference. The access path (IAM role, service principal, service account, SSH key) is revoked by the client at that point.
What the client keeps. The verdict log, the monthly savings reports, and any assessments Nocturne has produced during the contract period are yours. They are on your disk, in your tenant. We do not take them with us. If you want a summary export of the full savings history for your records, we produce it during the wind-down window.
What departs with GPOut Labs. The Nocturne service itself (binaries, configuration, and any internal service state that is part of the product) is removed from the instance. None of it belongs to your business, and none of your business data is embedded in any part of what leaves.
Verification. At the end of the wind-down, GPOut Labs provides a written confirmation that uninstall is complete, and we walk through a verification checklist with your team (service stopped, files removed, credentials revoked, compute resource status). Your team signs off.
Retention window. The client has 30 days after termination to retrieve any operational data that still lives on the deployment instance. After 30 days, if the compute resource is still running and still owned by the client, the data remains under the client's control indefinitely. GPOut Labs does not access the instance after the revocation date.
6. FAQ
How much of our engineering time does deployment take?
One intake call (one hour), one deployment window (four to eight hours, no client attendance required for most of it beyond a kickoff and a handoff), and a pilot review period. Most clients report spending less than one engineering day end to end.
Does Nocturne integrate with our existing CI/CD?
Nocturne integrates at the git host level, not the CI level. It reviews pull requests directly. It does not require changes to your CI pipelines, your build system, or your deployment tooling.
What happens if the Nocturne instance crashes?
The service is configured to restart automatically on reboot. GPOut Labs monitors the instance and responds to incidents under the SLA defined in your contract. If a hardware or cloud provider failure takes the instance out for an extended period, we redeploy on a fresh instance using the same playbook.
Can we see Nocturne's reviews before they go live during the pilot?
Yes. During the pilot, Nocturne can be configured to write its review output to a private channel (an internal repo, a draft comment mode, or a dedicated review queue) for your team to inspect before any comment is posted on a real pull request.
Can we audit what Nocturne sees and does?
Yes. Every pull request the service touches is recorded in the local verdict log with timestamps, pull request identifiers, and the action taken. Your team has read access to that log at all times.
How do we know the service is actually producing savings?
GPOut Labs delivers a monthly savings report generated from your own verdict log. The report quantifies reviewer hours reclaimed and shows the methodology behind the number. You are free to audit or reproduce the calculation from the same log.
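To make the audit concrete, the sketch below shows the kind of recomputation a client could run against the verdict log. The log schema (an `action` field) and the minutes-per-suppressed-comment figure are assumptions for illustration, not Nocturne's actual reporting methodology.

```python
MINUTES_PER_SILENT_PR = 12  # assumed triage cost of one low-value review

def hours_reclaimed(verdict_log):
    """Count PRs where the service stayed silent instead of posting a
    low-value comment, and convert the assumed saving to hours."""
    silent = sum(1 for entry in verdict_log if entry["action"] == "silent")
    return silent * MINUTES_PER_SILENT_PR / 60.0

# Hypothetical log entries, not the real log format.
log = [
    {"pr": 101, "action": "comment"},
    {"pr": 102, "action": "silent"},
    {"pr": 103, "action": "silent"},
]
print(f"{hours_reclaimed(log):.1f} reviewer hours reclaimed")
```

Because the monthly report is generated from the same log, any discrepancy between your own recomputation and the delivered number is itself an audit finding worth raising.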
What happens to our data if GPOut Labs goes out of business?
Your data is already on your own infrastructure. It does not depend on our survival. In the event that GPOut Labs ceases operations, you keep the deployment instance, you keep the log history, and the contract includes a source escrow provision covering the operational runbooks needed to keep the service running or to wind it down on your own timeline.
Does Nocturne work with our existing git provider?
Nocturne supports GitHub (cloud and Enterprise), GitLab (cloud and self-managed), Bitbucket (cloud and Data Center), Gitea, and most self-hosted git servers that expose a standard pull request API. If your provider is unusual, raise it during intake.
Can we deploy multiple Nocturne instances for multiple repos or teams?
Yes. Multi-instance deployments are supported and are quoted per instance. Each instance is a separate deployment following the same playbook.
What is your SLA for deployment and for monthly reporting?
Deployment kickoff within one week of contract signature. First reviews live within the time to live listed above for your environment. Monthly savings reports delivered within five business days of month end.
7. How to start a deployment
The kickoff process is:
- Open an intake request via the contact channel listed on nocturnehq.com.
- Book a one-hour intro call to confirm your environment, your git host, and your pilot repository.
- Sign a pilot agreement covering scope and term.
- Schedule a deployment call for the following week.
Before the deployment call, have the prerequisites from your environment's section above ready: the target subnet or host, the scoped access credential, the chosen instance type or hardware, and the pilot repository identifier. GPOut Labs handles everything from there.
For the full pricing structure, contract options, and savings calculator methodology, see the services overview in the same documentation set.
Ready for the security review?
The deployment path for your environment is summarized above. Book an intro call and we will send the scoped policy template for your cloud provider before the deployment window.