Assessing AI-Enabled Biological Threat Creation Risk: An Overview

At ToolPilot.ai, we review innovative AI tools, methodologies, and their possible implications. Today, we're providing an overview of OpenAI's research on AI-enabled biological threat creation risks.

As AI systems' capabilities grow, so does their potential for both beneficial and harmful uses. One potentially harmful scenario that has attracted the attention of researchers and policymakers alike is the possibility of AI systems assisting malicious actors in creating biological threats. Understanding the scope of these uncertain risks requires careful examination, and OpenAI is exploring these issues through its Preparedness Framework.

One thought-provoking question is whether a highly capable AI model could be used to develop a step-by-step procedure for creating a serious biological threat. Alarmingly, such assistance could range from troubleshooting wet-lab procedures to completing steps of the biothreat creation process through cloud-based laboratories.

Researchers have been developing methodologies to empirically evaluate these risks and shed light on both the current state of affairs and potential future developments. A recent study included 100 participants with varying backgrounds in biology: half were experts and half were students. Each participant was randomly assigned to either a control group with internet access only or a treatment group with internet access plus OpenAI's GPT-4 model.

The study found only minor performance increases for those with access to the language model, and these uplifts were not large enough to be statistically significant. Even so, the results emphasized the need for more research into what performance thresholds would indicate a meaningful increase in risk.
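To make this kind of comparison concrete, here is a minimal sketch in Python of how an uplift between a control and a treatment group might be quantified. The scores below are entirely hypothetical, and the actual study used its own grading rubric and statistical methodology; this only illustrates the general shape of the analysis.

```python
# Minimal sketch of a control-vs-treatment uplift comparison.
# All scores here are hypothetical placeholders, not study data.
import numpy as np
from scipy import stats

# Hypothetical per-participant accuracy scores (e.g., on a 10-point scale)
control = np.array([4.1, 5.0, 3.8, 6.2, 4.7, 5.5, 4.9, 5.1])    # internet only
treatment = np.array([5.0, 5.6, 4.2, 6.8, 5.1, 5.9, 5.3, 5.7])  # internet + model

# Mean uplift: how much higher the treatment group scored on average
uplift = treatment.mean() - control.mean()

# Welch's two-sample t-test: is the uplift distinguishable from chance?
t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)

print(f"mean uplift: {uplift:.2f} points")
print(f"p-value: {p_value:.3f} (not significant if above the chosen threshold, e.g. 0.05)")
```

A small mean uplift with a large p-value, as in the study's reported findings, would suggest the model's assistance did not clearly outperform internet access alone.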

Assessments of AI-related biological risk typically focus on two primary areas: increased access to information about known threats and the potential for AI to enable novel threats. The recent evaluation prioritized the first, increased access to information.

An effective evaluation requires testing with human participants to reflect the different ways a malicious actor might leverage access to a model. To ensure participants could make full use of the model's capabilities, they were given training on best practices for eliciting responses from language models, common failure modes, and related techniques.

What’s more, the risk from AI should be compared to the risk from existing resources. Even if an AI model can be manipulated to share information related to biological threat creation, it doesn’t necessarily mean it increases the accessibility of that information beyond what’s already available through other resources like the internet.

Further research is essential to fully understand the potential threat of AI-enabled biological threat creation. The implications are far-reaching and complex, necessitating rigorous and ongoing study.

Disclaimer: The above article was written with the assistance of AI. The original sources can be found on OpenAI's website.