Pandemics aren’t usually Martin Landray’s job. A physician and researcher in the University of Oxford’s Nuffield Department of Population Health, Landray designs clinical trials–cardiology, mostly, the kind of industry-funded studies that suck in tens of thousands of people and extrude new drugs or procedures. But in early March, Landray and his colleagues could see what was coming. People were dying in Wuhan, China; the reports trickling out of intensive care wards in Italy were horrifying. In about two weeks, fighting a new coronavirus was going to be everyone’s job, including Martin Landray’s.

So what would that job actually entail? “We had to make some fairly fundamental choices. We couldn’t see any one treatment that was going to be a cure,” he says. “We knew there were a number of treatments that had some evidence of benefit. … We knew that none of these were proven–a lot of drugs that could work, but none that we knew did work.”

So Landray and his colleagues set about creating a new kind of drug trial. The gold standard for testing medical therapies today is the double-blinded, randomized controlled trial, which pits a treatment against a placebo given to a control group. But other options exist; Landray recalled that in the 1980s, people tested a bunch of different options for treating heart attacks against each other in a sort of randomized multiplayer death match. Landray’s team realized they could try the same thing here, testing a half-dozen contenders to treat Covid-19.

In just nine days, the team put together the Randomised Evaluation of Covid-19 Therapy Trial, or “Recovery” for short. In 160 hospitals across the UK, they started recruiting patients presenting with Covid-19, who consented to be randomly assigned to get one of several drugs: the HIV antivirals lopinavir and ritonavir, the anti-inflammatory steroid dexamethasone, the antimalarial immune suppressant hydroxychloroquine, or the antibiotic and anti-inflammatory azithromycin. They’d later add another anti-inflammatory called tocilizumab. Patients might also be randomized to get the standard of care–none of those drugs. An independent data monitoring committee would keep track of who got better, who ended up on a ventilator, and who died. Unless one of the options looked wildly good or terribly bad, even Landray wouldn’t see data until it showed useful outcomes.
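To make the mechanics concrete, here is a minimal Python sketch of that kind of multi-arm randomization. It is an illustration only, not the trial’s actual allocation software; the arm names come from the article, and the equal-probability assignment is an assumption.

```python
import random

# Treatment arms named in the article, plus usual care as the shared comparator.
# Equal-probability assignment is an illustrative assumption; the real trial's
# allocation ratios and software are not described in this story.
ARMS = [
    "lopinavir-ritonavir",
    "dexamethasone",
    "hydroxychloroquine",
    "azithromycin",
    "tocilizumab",
    "standard of care",
]

def randomize(patient_id: str) -> str:
    """Assign one consenting patient to a single arm, uniformly at random."""
    return random.choice(ARMS)

if __name__ == "__main__":
    for i in range(10):
        pid = f"patient-{i:03d}"
        print(pid, "->", randomize(pid))
```

Because neither the patient nor the analysts pick the arm, any difference in outcomes between arms can be attributed to the drugs rather than to who happened to receive them.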

There is, however, a catch: Marginal, hard-to-perceive effects only show up in a really big study population. “One does need large numbers, because the numbers you need are driven by how effective you think the drugs are going to be,” Landray says. Today, five weeks after the first patient got randomized, his team has nearly 7,000 people across all the arms of the study, with about 2,000 more signing up every week. “I have no idea whether any of these treatments are effective, no idea at all,” he adds. “They all have a reasonable chance. None is likely to be stunningly effective.”
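Landray’s point about numbers follows from standard power calculations: the smaller the effect you hope to detect, the more patients you need. Below is a back-of-the-envelope Python sketch using a textbook two-proportion formula; the mortality rates are invented for illustration and are not figures from Recovery.

```python
import math
from statistics import NormalDist

def patients_per_arm(p_control: float, p_treated: float,
                     alpha: float = 0.05, power: float = 0.9) -> int:
    """Textbook two-proportion sample-size estimate (normal approximation)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided significance
    z_power = NormalDist().inv_cdf(power)           # desired power
    variance = p_control * (1 - p_control) + p_treated * (1 - p_treated)
    effect = p_control - p_treated
    return math.ceil((z_alpha + z_power) ** 2 * variance / effect ** 2)

# Hypothetical numbers: a drug that cuts mortality from 25% to 20% needs on the
# order of 1,500 patients per arm to detect reliably...
print(patients_per_arm(0.25, 0.20))
# ...while a marginal drug (25% -> 24%) needs tens of thousands per arm.
print(patients_per_arm(0.25, 0.24))
```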

Broadly, this kind of trial design is called “adaptive,” an idea that many researchers hope will be a hyperdrive for the search for drugs and vaccines against Covid-19. The World Health Organization has launched a multidrug trial similar to Recovery, as has continental Europe. The US National Institute of Allergy and Infectious Diseases is spinning one up too, starting with the drug remdesivir and a placebo, with plans to add new drugs as they become available. They’re happening for vaccines too.

The problem is too many things to test, and a too-slow method of testing them. At least 180 potential drugs are in some stage of trials, and 78 vaccine candidates are in exploratory or preclinical tests. (Six are in early-stage human safety trials.) But a spreading pandemic means a need to find and deploy the things that work, like, now. That’s where trial design comes in. Clever statistical and methodological approaches could produce a winner in months instead of years.

A New Kind of Clinical Trial

A couple of decades ago, the agencies and regulators responsible for getting new drugs made and sold noticed that they weren’t getting much return for the amount of money and time researchers and pharmaceutical firms poured into the problem. It took too long to develop the drugs and recruit enough people to study them properly, and even if you spent the years necessary to do all that, 90 percent of all vaccine candidates failed. One recent estimate put the cost of getting a successful vaccine against an epidemic-causing germ at upward of $1 billion. So planners started coming up with approaches to the development process that might speed things up, or at least remove some friction from the pipeline. Adaptive or flexible trials were one of the ideas–studies that would still tease apart safety and efficacy, but with twists allowing for greater speed. They have lots of subtypes. Researchers can add in or delete drugs on the fly, as Recovery is planning. A “platform trial” tests lots of candidates against a shared control group. A “core protocol” can stop and start, adding new people or whole groups. The list goes on.
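As a rough illustration of how a platform trial differs from a traditional fixed design, here is a toy Python sketch in which experimental arms can be added or dropped mid-study while every arm shares a single control group. The arm names, decision points, and data structures are assumptions made up for this example, not any real trial’s protocol.

```python
import random

class PlatformTrial:
    """Toy sketch of a platform trial: arms come and go mid-study,
    and all of them are compared against one shared control group."""

    def __init__(self):
        # arm name -> list of recorded outcomes (1 = recovered, 0 = did not)
        self.arms = {"standard of care": []}

    def add_arm(self, name: str) -> None:
        self.arms.setdefault(name, [])

    def drop_arm(self, name: str) -> None:
        # The arm stops enrolling, but the shared control group keeps going.
        if name != "standard of care":
            self.arms.pop(name, None)

    def enroll(self, outcome: int) -> str:
        # Randomize one consenting patient among the arms currently open.
        arm = random.choice(list(self.arms))
        self.arms[arm].append(outcome)
        return arm

# Usage sketch: start with two candidates, later drop one and add a new one.
trial = PlatformTrial()
trial.add_arm("drug_A")
trial.add_arm("drug_B")
for _ in range(30):
    trial.enroll(outcome=random.randint(0, 1))   # simulated outcomes
trial.drop_arm("drug_B")   # hypothetical interim look says it isn't helping
trial.add_arm("drug_C")    # a new candidate joins without restarting the trial
```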