Security

Last updated: Mar 17, 2025

Change Log

See what happened, when it happened, and how it happened.

Securing your source code and development environment is a top priority for us. This page details how we handle security for NonBioS. If you identify a potential vulnerability, please report it to security@nonbios.ai. For any other security-related queries, reach out to the same address.

While many large organizations already trust NonBioS, we are still in the process of enhancing our product and strengthening our security measures. If you operate in a highly sensitive environment, exercise caution when using NonBioS or any other AI tool. We hope this page provides valuable insights into our progress and assists you in conducting a thorough risk assessment.

Certifications and Third-Party Assessments

NonBioS is NOT yet SOC 2 Type II certified as of this date, but certification is in progress. Once we are certified, you will be able to visit trust.nonbios.ai to request a copy of the report.

We are working to establish at-least-annual penetration testing by reputable third parties. Once that program is underway, you will be able to visit trust.nonbios.ai to request an executive summary of the latest report.

Infrastructure Security

We rely on the following subprocessors, organized from most to least critical. Note that code data is sent to our servers to enable all of NonBioS's AI features (refer to the AI Requests section). Code data for users in privacy mode is never stored (see Privacy Mode Guarantee section).

As of Mar 17, 2025, Privacy Mode is NOT yet enabled as a feature in our Beta.

  • Akamai sees code data: Our infrastructure is mainly hosted on Akamai. Most servers are in the US, with some latency-critical servers in Akamai regions in Asia (Tokyo) and Europe (London).
  • Fireworks sees code data: Our custom models are hosted with Fireworks in the US, Asia (Tokyo), and Europe (London). If privacy mode is disabled, Fireworks may store some code data to speed up model inference.
  • OpenAI sees code data: We use many of OpenAI's models for AI responses. We have a zero data retention agreement with OpenAI.
  • Anthropic sees code data: We use many of Anthropic's models for AI responses. We have a zero data retention agreement with Anthropic.
  • OpenRouter.com sees code data: We rely on many OpenRouter models for AI responses. We have a zero data retention agreement with OpenRouter.
  • Exa and SerpApi see search requests (possibly derived from code data): Used for web search functionality. When a search query is derived from code data (e.g., when using "@web" in chat), Exa and SerpApi see the resulting query.
  • MongoDB sees no code data: Used for analytics data for users not in privacy mode.
  • Datadog sees no code data: Used for logging and monitoring. Logs related to privacy mode users do not contain code data.
  • Databricks sees no code data: Used for training some custom models. Data from privacy mode users never reaches Databricks.
  • Foundry sees no code data: Used for training some custom models. Data from privacy mode users never reaches Foundry.
  • Voltage Park sees no code data: Used for training some custom models. Data from privacy mode users never reaches Voltage Park.
  • Slack sees no code data: Used for internal communication. Snippets of prompts from non-privacy users may be sent for debugging.
  • Google Workspace sees no code data: Used for collaboration. Snippets of prompts from non-privacy users may be sent for debugging.
  • Sentry sees no code data: Used to monitor errors and app performance. Code data is never explicitly sent, but may appear in reported errors. Data from privacy mode users never reaches Sentry.
  • Amplitude sees no code data: Used for analytics; only event data like "number of NonBioS Tab requests" is stored, not code data.
  • HashiCorp sees no code data: Used to manage infrastructure with Terraform.
  • Stripe sees no code data: Handles billing and stores personal data (name, credit card, address).
  • Vercel sees no code data: Used to deploy our website, which cannot access code data.
  • WorkOS sees no code data: Handles authentication and may store personal data (name, email address).

None of our infrastructure is in China. We do not directly use any Chinese company as a subprocessor, and to our knowledge, none of our subprocessors do either.

Infrastructure access is assigned to team members on a least-privilege basis. Multi-factor authentication is enforced for our cloud provider accounts, and access to resources is restricted using network-level controls and secrets.

Application Security

Our app will make requests to the following domains to communicate with our backend. If you're behind a corporate proxy, please whitelist these domains to ensure that NonBioS works correctly.

  • "api.NonBioS.ai": Used for most API requests.
  • "nonbios-1.NonBioS.sh": Used for nonbios-1 API requests (HTTP/2 only).
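
If you need to verify the whitelist, a quick reachability check can confirm that both endpoints respond through your proxy. Below is a minimal sketch in Python; it assumes the third-party httpx library installed with HTTP/2 support (pip install "httpx[http2]"). Any HTTP response, even an error status, shows the domain is reachable, while a connection error suggests the proxy is blocking it.

    # Reachability check for the NonBioS backend domains (sketch).
    # Assumes: pip install "httpx[http2]"
    import httpx

    DOMAINS = [
        "https://api.NonBioS.ai",        # most API requests
        "https://nonbios-1.NonBioS.sh",  # nonbios-1 API requests (HTTP/2 only)
    ]

    def check(url: str) -> None:
        try:
            # http2=True lets httpx negotiate HTTP/2, which the
            # nonbios-1 endpoint requires.
            with httpx.Client(http2=True, timeout=10.0) as client:
                response = client.get(url)
            print(f"{url}: reachable "
                  f"(status {response.status_code}, {response.http_version})")
        except httpx.HTTPError as exc:
            print(f"{url}: NOT reachable ({exc}); check your proxy whitelist")

    if __name__ == "__main__":
        for domain in DOMAINS:
            check(domain)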

Privacy Mode Guarantee 

Privacy Mode is not enabled in the NonBioS Beta as of Mar 17, 2025. The following describes how Privacy Mode will function once it is ready.

Privacy mode can be enabled during onboarding or in settings. When it is enabled, we guarantee that code data is not stored in plaintext at our servers or by our subprocessors. Privacy mode can be enabled by anyone (free or Pro user), and is by default forcibly enabled for any user that is a member of a team.

With privacy mode enabled, code data is not persisted at our servers or by any of our subprocessors. The code data is still visible to our servers in memory for the lifetime of the request, and may exist for a slightly longer period (on the order of minutes to hours) for long-running background jobs, KV caching, or temporary file caching. For file caching specifically, all data is encrypted with client-generated keys that are only retained for the duration of the request. The code data submitted by privacy mode users will never be trained on.
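
To make the file-caching guarantee concrete, the sketch below illustrates the general pattern of per-request ephemeral encryption: a key is generated for a single request, only ciphertext reaches the cache, and the key is discarded when the request ends. This is an illustration of the technique, not our actual implementation; it assumes Python with the third-party cryptography package, and write_to_cache is a hypothetical stand-in for the temporary file cache.

    # Illustration of per-request ephemeral encryption (a sketch of the
    # pattern, not NonBioS's actual implementation).
    # Assumes: pip install cryptography
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def write_to_cache(blob: bytes) -> None:
        # Hypothetical stand-in for the temporary file cache; only
        # ciphertext ever reaches disk.
        with open("/tmp/cache.bin", "wb") as f:
            f.write(blob)

    def handle_request(code_data: bytes) -> None:
        # A fresh 256-bit key is generated for this request only.
        key = AESGCM.generate_key(bit_length=256)
        nonce = os.urandom(12)  # 96-bit nonce, standard for AES-GCM

        # Encrypt before anything touches the file cache.
        ciphertext = AESGCM(key).encrypt(nonce, code_data, associated_data=None)
        write_to_cache(nonce + ciphertext)

        # The key lives only for the duration of the request; once it is
        # dropped, the cached bytes are unrecoverable even if the cache
        # file outlives the request.
        del key

Because the key never persists, discarding it is effectively equivalent to deleting the cached plaintext.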

Account Deletion

You can delete your account at any time by emailing hello@nonbios.ai. This will delete all data associated with your account, including any indexed codebases. We guarantee complete removal of your data within 30 days (we delete the data immediately, but some of our databases and cloud storage retain backups for up to 30 days).

Note that if any of your data was used in model training (which would only happen if you were not in privacy mode at the time), our existing trained models will not be immediately retrained. However, any future models will not be trained on your data, since that data will have been deleted.

Vulnerability Disclosures

If you believe you have found a vulnerability in NonBioS, please submit a report to security@nonbios.ai.

We commit to acknowledging vulnerability reports within 5 business days and to addressing them as quickly as we can. We will publish the results as security advisories on our GitHub security page. Critical incidents will be communicated both on the GitHub security page and via email to all users.
