On the razor’s edge: biological risk in the 21st century

Three years ago, the world we live in changed seemingly overnight: as the Covid-19 pandemic spread, people were confined indoors and the economy ground to a halt. Buzzwords such as “biosecurity” and “pandemic preparedness” were thrown around, with promises to address these issues once everything had settled back to normal. But as normalcy has returned, vigilance has begun to waver – despite the near certainty that another, potentially far worse, pandemic will hit within our lifetime.

Many researchers and organisations now believe that the largest threat comes not from “natural” pathogens, but from engineered ones. The Covid-19 lab leak theory is still under debate, but accidental lab leaks do happen, and they happen with unnerving regularity. The SARS virus that emerged in 2003 escaped from at least three laboratories in the years following its discovery. Only last year, France suspended research on the prions that cause the degenerative brain disorder Creutzfeldt-Jakob disease after two researchers working on the disease developed symptoms. And these are only some of the leaks that have been made public: most biosafety officers admit that they don’t report accidents to anyone outside their institution.

Rather than helping to mitigate this issue, the Covid-19 pandemic is likely to perpetuate it, as governments race to set up more high-security labs. Almost half of the 59 maximum containment labs around the world (known as biosafety level 4, or BSL-4, labs) have been established in the past decade, and more than three quarters are in cities, where a leak could spread rapidly through a large population.

The other worrying scenario is that of an engineered pandemic. If Covid-19 taught us anything, it’s that a disease can be a perfect weapon, causing more deaths and economic damage than most wars. With advances in genetic engineering, researchers are able to design organisms and bring them to life more easily than ever before. Although it is still out of reach today, it may soon become possible to design pathogens that are more contagious or deadly than natural ones. Weaponised contagious diseases may sound a bit dystopian, but they were very real threats during the Cold War, with the USA and the Soviet Union both working hard to develop them. At the height of the Soviet program, around 40,000 scientists and engineers were working in 60 facilities across the country. At the time, their methods were thankfully limited to what we now consider rudimentary molecular biology, but now that the tools are more precise and easier to use, biological weapons may become a tempting defence investment for states at war.

Thankfully, there is currently no way to make a pathogen selective for a specific population, meaning that any country deploying a disease against another would also be putting itself at risk. The Biological Weapons Convention of the 1970s also technically binds countries not to develop or use biological weapons. While this means we may not see bio-warfare on a national scale for a while yet, it does not mitigate the risk from terrorist groups with no population to protect and less to lose.

Creating a deadly disease is now almost certainly within reach of even a poorly equipped team. In 2017, researchers from Canada announced that they had synthesised horsepox, a relative of smallpox, from pieces of DNA ordered in the mail. While horsepox itself isn’t dangerous to humans, the technique could easily be adapted to recreate smallpox, a horrible disease that was only eradicated in the 1980s thanks to a global vaccination effort. Researchers estimate that a small team with little specialist knowledge of viruses could bring smallpox back in about half a year, at a cost of less than $100,000.

Given both the risk of lab leaks and that of maliciously engineered pandemics, the threat posed by human-made diseases is grave. So what can we do about it? It turns out the same advances in biology that make engineering diseases possible may also help us fight them off. Mass genetic sequencing could be used to analyse genetic fragments from all over our environment, alerting us to sequences that we don’t recognise and that could belong to a novel pathogen. Additionally, mRNA vaccines – such as those developed for Covid-19 – have the potential to dramatically speed up vaccine development. They can instruct our cells to make any protein we choose, meaning that as soon as we know which proteins sit on a pathogen’s surface, we could design a vaccine to target it. With faster vaccine development, the threat from pandemics could be markedly reduced, although issues with production and equitable distribution remain.

While engineered pandemics may seem like the work of the future, past bio-terrorism attempts have relied on toxic chemicals, such as nerve agents, to achieve their aims. Perhaps the most recent example is one close to home, with Russian agents accused of attempting to murder a former double agent using the nerve agent Novichok. Currently, the list of substances that are deadly to humans even in small doses is a limited one. This is because chemistry is complicated (even for chemists). To find a molecule that does what you want, you first need to find the right chemical structure, then design a sequence of chemical reactions that will link up the atoms in the desired way. This process used to be one of trial and error, with many attempts and years in the lab needed to refine both the molecule itself and the steps needed to build it. Artificial intelligence (AI) is starting to make this process faster, easier and cheaper – no doubt a relief to chemists around the world. Machine learning algorithms can learn from past experiments, using the pattern of successful and unsuccessful attempts to predict potentially useful new molecules. One tool developed in Germany was able to come up with a pathway for synthesising a molecule 30 times more quickly than human chemists could.

These algorithms are also of great interest for drug development. Currently, pharmaceutical companies maintain libraries of chemicals that they screen for potential disease-fighting activity. This screening process is slow and inefficient: only a few of the substances ever yield a hit, and the libraries hold only a tiny fraction of the estimated 10³⁰ possible molecules. Generative learning tools can be trained on existing drugs and then propose libraries of new molecules with similar properties that could work to treat diseases. These technologies bring hope to the field of drug discovery, but they also have the potential to be used for nefarious purposes.

One company has developed an AI drug discovery algorithm that, like most in the field, aims to maximise pharmacological activity at the desired target while minimising predicted toxicity. This means that if a generated molecule resembles a substance known to be toxic in humans, the algorithm discards it from its library. As a thought experiment, the researchers decided to test what would happen if they instructed the model to reward toxicity instead. The results shocked them: within just six hours, the model had generated 40,000 candidate toxic molecules, including VX, one of the most toxic nerve agents ever developed. Some of the molecules were predicted to be more poisonous than any currently known chemical warfare agent. The researchers say that anyone with access to a similar algorithm could fairly easily replicate their work, and warn that the technology has the potential to do serious global harm.
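To get a feel for how small that inversion is, here is a purely illustrative sketch in Python – not the company’s actual system, and with hypothetical placeholder functions standing in for the trained models a real tool would use. Its only point is that the gap between a drug-discovery objective and a toxicity-seeking one can come down to the sign of a single weight.

```python
# Conceptual sketch only: the two predictor functions below are hypothetical
# placeholders. In a real system they would be trained machine learning models.

def predicted_activity(molecule: str) -> float:
    """Hypothetical model: how strongly the molecule acts on the desired target (0-1)."""
    return 0.0  # placeholder value

def predicted_toxicity(molecule: str) -> float:
    """Hypothetical model: how toxic the molecule is predicted to be (0-1)."""
    return 0.0  # placeholder value

def score(molecule: str, toxicity_weight: float = -1.0) -> float:
    """Score a candidate molecule for a generative model to optimise.

    With a negative toxicity_weight (the normal setting), predicted toxicity
    counts against a candidate, so toxic molecules are effectively discarded.
    Flipping the weight to +1.0 is, conceptually, the inversion described in
    the thought experiment: toxicity becomes something to reward.
    """
    return predicted_activity(molecule) + toxicity_weight * predicted_toxicity(molecule)

# A generative model proposes candidate molecules and keeps the highest-scoring
# ones; the objective above is what steers that search in one direction or the other.
```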

Although AI will no doubt lead to advances in many fields, there is a growing call for researchers to think about the implications of their research – and not only in the area of biosecurity. Currently, the field is poorly regulated, by both governments and researchers themselves. That may soon change: more and more AI experts are expressing concerns about the risks of the technology and urging governments to take action. For researchers, simple steps such as making the code and data behind AI publications available only on request would improve control over how they are used.

It has been said that as the 20th century was the century of the atom, the 21st century will be that of biology. So far, this prediction seems to be holding: while the threat of nuclear war loomed over the late 1900s, we now live in an era where pandemics are our greatest menace. Despite this, most countries lag behind in their biological security strategies, preferring to ignore the warning shot of Covid-19 now that the danger has temporarily passed. For the moment, our advances in biology and AI leave us poised on the razor’s edge between progress and peril.

Photo credit: CDC on Unsplash