According to some forecasts, there is a one-in-four chance of another outbreak on the scale of Covid-19 within the next decade.
It could be influenza or coronavirus – or something completely new.
Covid-19, of course, infected and killed millions of people worldwide, so that is a frightening prospect.
Could AI help to reduce the risk?
Researchers in California are developing an AI-based early warning system that will examine social media posts to help predict future pandemics.
The researchers, from the University of California, Irvine (UCI) and the University of California, Los Angeles (UCLA), are part of the US National Science Foundation’s Predictive Intelligence for Pandemic Prevention grant programme.
This funds research that “aims to identify, model, predict, track and mitigate the effects of future pandemics”.
The project builds on earlier work by UCI and UCLA researchers, including a searchable database of 2.3 billion US Twitter posts collected since 2015, to monitor public health trends.
Prof Chen Li is spearheading the project at UCI’s Department of Computer Science. He says his team has been collecting billions of posts on X, formerly known as Twitter, over the past few years.
The tool works by identifying which posts are meaningful and training an algorithm to detect early signs of a future pandemic, predict upcoming outbreaks and evaluate the potential outcomes of specific public health policies, says Prof Li.
“We have developed a machine-learning model for identifying and categorising significant events that may be indicative of an upcoming epidemic from social media streams.”
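Prof Li has not published the workings of the model, but the broad pattern he describes, a classifier that flags potentially outbreak-related posts in a social media stream, can be sketched in a few lines of code. The example below is purely illustrative: the posts, labels, features and model are invented stand-ins, not the UCI and UCLA team’s actual system.

```python
# Hypothetical sketch of a social-media early-warning classifier.
# This is NOT the UCI/UCLA system; data, labels and model choice are illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny, made-up training set: 1 = possible outbreak signal, 0 = background chatter.
posts = [
    "Half my office is out with a bad cough and fever this week",
    "Hospital ER near me says it's overwhelmed with flu-like cases",
    "Lost my sense of smell, anyone else feeling rough lately?",
    "Great weather for a picnic this weekend",
    "New phone arrived today, battery life is amazing",
    "Traffic on the freeway was terrible this morning",
]
labels = [1, 1, 1, 0, 0, 0]

# Simple bag-of-words features plus a linear classifier stand in for
# whatever model the real system uses.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

# Score new posts; a spike in high-scoring posts in one region could be
# treated as an early signal worth escalating to public health officials.
new_posts = ["Everyone at school seems to have a fever and sore throat"]
print(model.predict_proba(new_posts)[:, 1])  # probability of being a signal
```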
The tool, which is targeted at public health departments and hospitals, can also “evaluate the effects of treatments [on] the spread of viruses”, he says.
However, it’s not without problems. For example, it is reliant on X, a platform not accessible in some countries.
“The availability of data outside the US has been mixed,” admits Prof Li.
“So far our focus has been within the US. We are working to overcome the data scarcity and potential bias when we expand the coverage to other regions of the world.”
Elsewhere, an AI tool called EVEScape, developed by Harvard Medical School and the University of Oxford, is making predictions about new variants of coronavirus.
Researchers are publishing a ranking of new variants every two weeks, and they claim that the tool has also made accurate predictions about other viruses, including HIV and influenza.
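EVEScape’s actual method is far more sophisticated, broadly combining an estimate of how likely a mutation is to arise and remain viable with an estimate of how well it evades antibodies. The toy sketch below only illustrates the ranking idea; the mutations, scores and weighting are made up.

```python
# Toy illustration of ranking candidate variants by a combined "escape" score.
# The mutations, scores and weighting are invented; EVEScape's real inputs
# and model are far more sophisticated.
candidates = {
    # mutation: (fitness_score, immune_escape_score) -- both hypothetical
    "S:K417N": (0.62, 0.71),
    "S:E484K": (0.70, 0.88),
    "S:N501Y": (0.81, 0.55),
    "S:L452R": (0.66, 0.74),
}

def escape_rank_score(fitness: float, escape: float) -> float:
    """Combine the two signals; a real tool would learn or calibrate this."""
    return fitness * escape

# Sort candidates from most to least concerning and print the ranking,
# analogous to the fortnightly variant rankings the researchers publish.
ranking = sorted(
    candidates.items(),
    key=lambda item: escape_rank_score(*item[1]),
    reverse=True,
)

for mutation, (fitness, escape) in ranking:
    print(f"{mutation}: combined score {escape_rank_score(fitness, escape):.2f}")
```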
“One of the unique strengths of our approach is that it can be used early in a pandemic,” says Nikki Thadani, a former postdoctoral research fellow who was involved in the development of EVEScape.
“It could be good for… vaccine manufacturers, and also for people trying to identify therapeutics, particularly antibodies to get some insight early on into which mutations might arise even a year in the future.”
It’s a point picked up by AstraZeneca’s vice president of data science and AI R&D, Jim Wetherall.
The pharmaceutical giant uses AI to help speed up the discovery of new antibodies. Antibodies are proteins used by the body’s immune system to fight off harmful substances such as viruses, and they can be developed into new treatments.
Mr Wetherall says the firm can “generate and screen a library of antibodies and bring the highest quality predictions to the lab, reducing the number of antibodies that need to be tested, and cutting the time to identify target antibody leads from three months to three days”.
This is helpful for pandemic preparedness, he says, “because as we have seen with Covid-19, the potential volatility of viruses means that we need quicker ways to identify candidates to keep up with rapidly mutating targets.”
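AstraZeneca has not detailed the pipeline behind that claim, but the general pattern, scoring a large virtual library of candidates and sending only the top-ranked ones to the lab, can be sketched roughly as below. The sequences and scoring function are placeholders, not the company’s actual models.

```python
# Hypothetical "screen in silico, test the best in the lab" workflow.
# The sequences and scoring heuristic are placeholders, not AstraZeneca's models.
import heapq
import random

random.seed(0)
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def random_cdr(length: int = 12) -> str:
    """Generate a made-up antibody CDR-like sequence for the toy library."""
    return "".join(random.choice(AMINO_ACIDS) for _ in range(length))

def predicted_binding_score(sequence: str) -> float:
    """Stand-in for an ML model predicting binding quality.
    Here: a meaningless toy heuristic favouring aromatic residues."""
    return sum(sequence.count(res) for res in "YWF") / len(sequence)

# Generate a large virtual library, then keep only the top-scoring candidates
# for (hypothetical) lab testing -- the step that shrinks months of wet-lab
# screening down to a much smaller validation run.
library = [random_cdr() for _ in range(100_000)]
top_candidates = heapq.nlargest(50, library, key=predicted_binding_score)

print(f"Screened {len(library)} sequences in silico; sending {len(top_candidates)} to the lab.")
```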
The Oslo-headquartered Coalition for Epidemic Preparedness Innovations (CEPI) – which funded EVEScape – views AI as a tool to help in its goal of preparing for and responding to epidemics and pandemics.
“We just have to be as broadly well prepared as possible,” says Dr In-Kyu Yoon, director of programmes and innovative technology at CEPI.
“And what AI does is it speeds up that preparation process.”
However, he says AI still needs to develop and mature. “AI still depends on the information that is inputted, and I don’t think anybody would say that we actually have all the information.
“Even if the AI then can try to evaluate, analyse it and predict from that, it’s based on information that’s put in. AI is a tool and the tool can be applied to various activities that can increase the quality and speed of the preparation for the next pandemic.
“[But] it would probably be wrong to say that AI can slow down or prevent the next pandemic. It’s up to people to determine where to apply it.”
At the World Health Organization (WHO), Dr Philip AbdelMalik also highlights the role people play in AI’s efficacy.
As the WHO’s unit head of intelligence, innovation and integration, he says AI has definite value. It can pick up on chatter around particular symptoms, for example, and spot potential threats before governments have officially announced them.
It can also pick up when people are advocating potentially dangerous treatments online, so the WHO can step in.
However, while he can see its benefits, he is quick to flag up the challenges.
He says he is always careful to say that AI is not going to generate decisions for us. Dr AbdelMalik is also concerned about issues surrounding the ethical use of AI and equitable representation.
“If I’m feeding it a lot of information that I’m not reviewing, and so it contains a lot of misinformation, or it’s representative only of certain subpopulations, then what I’m going to get out is also going to be representative of just some subpopulations or contain a lot of misinformation.
“So it’s that old adage of, you know, garbage in, garbage out.”
But overall, experts believe we’re in a better position for the next pandemic, partly because of the progress made in AI.
“I think this pandemic was kind of a wake-up call to a lot of people who think about this space,” says Nikki Thadani.
“Our model [AI tool EVEScape], and a lot of other efforts to really refine how we think about epidemiology, and how we think about leveraging the sort of data that you can have before a pandemic, and then integrating it with the data that’s coming in through a pandemic, that does make me feel better about our ability to handle pandemics in the future.”
But, she says, there is still a long way to go, both in the fundamental biology and modelling she has worked on, and in epidemiology and public health more broadly, to make us better prepared for future pandemics.
“We’re much better off now than we were three years ago,” says Dr AbdelMalik.
“However, there’s something more important than technology to help us when the next pandemic hits, and that’s trust.
“Technology to me is not our limiting factor. I think we really have to work on relationships, on information sharing and building trust. We keep saying that, everybody’s saying that, but are we actually doing it?”