AI’s Dark Side: New Warnings About Its Potential to Create Deadly Viruses

Sajjad Mahmud 
Update Time: Wednesday, August 28, 2024

As AI technology advances, specialized models designed to handle biological data are making significant strides. These models have the potential to accelerate vaccine development, combat diseases, and even engineer drought-resistant crops. However, the same capabilities that make these models beneficial also pose serious risks: to design a vaccine, a model must first learn what makes a pathogen harmful, and that same knowledge can be misused.

Call for Oversight:
A new policy paper, published on August 22 in the journal Science, calls for mandatory government oversight and regulations for advanced biological models. While current AI models may not significantly contribute to biological threats, future systems could be used to create new, pandemic-capable pathogens. The authors, including experts from Stanford, Fordham University, and Johns Hopkins, emphasize the need for governance systems to mitigate these risks.

The use of biological agents as weapons is not new. From the 14th-century Mongol forces to World War II, history is replete with instances of biological warfare. The 1972 Biological Weapons Convention aimed to eliminate such threats, but the risk has not disappeared. The potential for AI to create or modify pathogens has raised fresh concerns.

AI could lower the level of sophistication malicious actors need to do harm, and even well-intentioned researchers could inadvertently create dangerous pathogens. The ease of ordering biological materials online, coupled with uneven enforcement of safety measures, further exacerbates these risks. Mandatory screening and robust oversight are crucial, though not foolproof.

Global concern over bioterrorism is growing, with voices such as Bill Gates and U.S. Commerce Secretary Gina Raimondo highlighting the issue. A significant gap still exists between virtual blueprints and physical biological agents, but AI could narrow it.

The paper recommends rigorous testing and the establishment of standards for the responsible sharing of sensitive biological data. International collaboration is essential to managing these risks, though harmonizing policies globally may be challenging. The authors suggest that countries with advanced AI technology should prioritize effective evaluations, even at the cost of international uniformity. They warn that biological risks from AI could manifest within the next two decades, or even sooner, if not properly managed.

Sajjad Mahmud is a contributor to TPW and Sarakhon.
