>> Ann Aikin: Good day. I'd like to call this meeting to order. I'm the designated federal officer for NVAC. There are a few things you should know before we get
started today. First, this is a public meeting and it's being recorded. All statements
made today are made on the record. Second, this advisory committee is governed by the Federal
Advisory Committee Act, or FACA for short. FACA provides rules about the circumstances by which
agencies or officers of the federal government can establish or control committees or groups like
this one to obtain advice or recommendations. The voting members are special government employees and are therefore subject to conflict-of-interest laws and regulations, as are the members who work for the federal government. These members previously provided information
about their personal, professional, and financial interests. Each voting member's financial
interest and outside affiliation has been carefully screened, and we do this each year to
ensure that they comply with federal ethics law. The liaison representatives are non-voting
members of the Advisory Council and are not subject to the same FACA rules as
the voting members. Additionally, the information provided at this meeting
does not necessarily represent the official position of the National Vaccine Advisory
Committee or the U.S. Department of Health and Human Services. Mention of products,
processes, services, manufacturers, companies, or trademarks does not constitute their endorsement or recommendation by the U.S. government, the Department of Health and Human
Services, or NVAC. And so, with that, I will go ahead and turn it over to our wonderful
chair, Dr. Bob Hopkins, to get us started. >> Robert Hopkins: All right, good afternoon
or morning, wherever you're calling in from, and welcome to this second day of our February
2024 NVAC meeting. I want to thank you, Ann, for getting us started and to the NVAC team
for its work in planning this meeting and supporting the committee
throughout the year.
I also want to express my appreciation to the NVAC members and subcommittee members for
their dedication to furthering the work of this committee. I look forward to the presentations
and discussions today. I'm going to begin as usual with a few housekeeping items, followed
by a brief review of the messages I took away from yesterday's sessions, and then a
high-level overview of today's agenda. In terms of housekeeping, I want to make sure
that everyone's aware this is a public meeting, and it's being webcast on the HHS website.
I wanted to let everyone know that we have sign language interpreters available on the
HHS live stream. We appreciate their work to make our broadcast more inclusive. We ask
that everyone speak slowly and clearly to assist them in supporting our meeting. For
our virtual participants, please remember to mute yourself when not speaking, and please do
not use your camera unless you're presenting, asking questions, or answering a question.
During discussions, I ask that all members and speakers identify themselves before speaking if I have not acknowledged them by name and given them the floor. This helps
the note taker and others to follow along. Throughout the day, there will be opportunities
for committee discussion. If you'd like to ask a question or provide a comment, please send me
a message through the chat feature. As always, we will have time for public comment at the end
of our session today. This is planned at about 5 p.m. The public comment period is not a question-and-
answer session. They represent an opportunity for individuals who'd like to make a statement
to do so. The deadline to request a space for verbal public comment during the meeting has
passed. However, anyone can submit a written comment, up to three pages in length, to NVAC@hhs.gov. This concludes the housekeeping session. So, for highlights from yesterday, we had a superb
series of panels and presentations. We started with a panel speaking to the recent increase
in measles in the U.S. and internationally. Inadequate measles vaccination is a major causal
factor for outbreaks. My three take-home points from this panel were these: the impact of the hesitancy hangover from COVID vaccination on measles vaccination leaves us in a situation where measles and misinformation together put communities, particularly those with low vaccination rates, at risk for outbreaks. In my view, this is particularly timely in the context of the current Florida measles outbreak and concerning communications about its management.
The presentation about the recent BIO report on the vaccine pipeline and industry investment was notable for the large number of vaccine candidates in phase one to three evaluation, but with most vaccine-preventable diseases being addressed by only one to two candidates and only roughly 10 percent reaching approval, this pool is not very deep. There are also a number of notable monoclonal products in the pipeline, importantly four for diseases for
which we do not have currently available vaccines. Following a brief break, we had an important
panel with experts speaking to how federal agencies, including FDA, ASPR, BARDA, and CDC, as well as private industry, are working collaboratively to create and maintain a strong supply chain for vaccine manufacturing and to minimize the risk of future shortages. Our two late-day panels focused on different aspects of childhood immunizations. The first reviewed state policies on immunization for school entry, changes in the environment related to school or state legislation, the impact of a legal decision in Mississippi that opened up religious vaccine exemptions, and options for exemption processes to mitigate their potential adverse impact on school-age vaccination rates. The second discussed the 30-year history and impact of the Vaccines for Children
program, as well as recommendations to enhance the program going forward. Finally, we heard
an update from the Innovation and Immunization Subcommittee on their report. Work continues with robust support from Ann and OIDP staff and contractors. We will hear more at our June meeting, and we closed the day with public comment. We will open today's meeting with expert
insights on artificial intelligence and its use in vaccine development and immunization
efforts. Then we'll hear from a panel of speakers on innovative approaches to improve adult
immunization efforts. After a short break, we'll hear from another panel on vaccinating
pregnant women. Our last panel for the day will focus on practices for immunization of special
needs individuals, and the day will conclude with our federal agency and liaison member updates
and public comment. Finally, as a reminder, please hold our remaining 2024 meeting dates
on your calendars, June 13th to 14th, 2024, and September 12th to 13th, 2024. Please refer
to the NVAC website for final details on these upcoming meetings. Let's get started with our first panel of the day. This panel is entitled Artificial Intelligence, Real Uses in
Vaccine Development and Immunization Efforts. With unparalleled scientific growth,
emerging diseases, and new technologies, the medical and healthcare sectors have
expanded markets, including for vaccine development post COVID-19 pandemic. One of these
newer technologies, artificial intelligence, or AI, has already been shown to provide huge benefits for public health, but also presents concerns over data credibility. In this session, we'll
examine guidance from HHS, as well as explore some broad real-world examples for the use of
AI to support vaccine development and other immunization efforts. This work supports the NVAC charge on vaccine innovation, offers potential new solutions to improve equity by reducing human bias, and lays a strong foundation for advancing precision vaccines. In this session, we'll hear from Greg Singleton from the U.S. Department of Health and Human Services, Dr. Justin Mathew from the U.S. Food and Drug Administration, Ted Schenkelberg from the Human Immunome Project, Mark Langowski from the University of Washington, Dr. Jimmy Gollihar from Houston Methodist, and Dimitris Zambus from Pfizer. Mr. Singleton, we have your slides up, and I see you're on. You have the floor. >> Greg Singleton: All right. Thank you, Dr.
Hopkins. First of all, I want to start off by introducing myself. I am Greg Singleton,
the Chief Artificial Intelligence Officer for Health and Human Services. And of course, thank you to Dr. Hopkins for that introduction and stage setting. And thanks to Ann and Rebecca
for inviting me and helping coordinate with the committee to help present today. I want
to talk about a few things today about how we as a department are thinking about AI, from the initial outset at kind of a top level. I want to talk about how we are approaching AI as a U.S. government, talk about some principles of trustworthiness that are really important to us and the nation as we deploy artificial intelligence, and then talk about some of our applications, some areas where we are exploring artificial intelligence. And then really
hoping to set the stage for the rest of the panel that's going to talk a lot about really the
innovative possibilities and the potential for AI in vaccine development. With that, we can go to
the next slide. So here, two simple pictures. And I really ask the question, why artificial
intelligence? The first picture we have on the left is data generation. This is a graph of the
volume, the quantity of data that we as a species are generating and transmitting and capturing
yearly. You see a relatively exponential growth pattern there. On the right-hand side, we see, in this case, a graph of the federal workforce as a percentage of the American population. Well, why do I put these two together? The simple answer to why AI is that we are generating tremendous volumes of data and information and material on an ever-increasing basis. Our
population, our workforce, the number of human beings on the globe is not increasing. So our
ability to process and manage that data, that data volume, that communication is not really keeping
up. So AI comes in as a tool and a capability to help us bring human-like capabilities to this
data generation problem. And this problem exists in imagery, it exists in music, it exists in
scientific literature, it exists in vaccine literature. The volume is just increasing. So, we
have the option of either we can take more time to do the work we're doing, we can skip stuff and
make an abbreviated job of it, or we can adopt new tools to approach the challenges in front of us.
And that's what we're looking at for AI. How do we use it as a new tool to manage these challenges
that we have all across society? Next slide. So when I think about AI and how to frame it,
it's important to know AI technologies are not entirely new. The capabilities have increased
and the attention and literature have increased. It was the most popular term in headlines, I think, until very recently. Taylor Swift probably bumped that out
of the top contenders. But with the development of vast data centers, vast storage,
vast communication capabilities and algorithms, we now have the capabilities to harness
these powerful techniques for good. AI, again, allows us to manage that core challenge, that tension between data generation and, honestly, the limits of human attention.
We only have so many human brains that can work on these challenges. That gets to one of my
favorite definitions of artificial intelligence, and that is, artificial intelligence brings human
insights at machine speed. If we can do that, we can accomplish a lot of things and focus on
the core challenges for society and science. I'd like to say AI applications are differentiated
by how you're using them, the application, not necessarily the technique. So when we look
at risk management for artificial intelligence, what's most important is what are you doing
with it? How are you controlling it? How are you managing it? It's not necessarily that we
care so much about whether or not you're using natural language processing or tree-based methods
or adversarial models or something like that. It's not the technique, it's the application that we
care about. And then as Dr. Hopkins framed out, we are challenged to deal with the practical
present of artificial intelligence and the theoretical future. A lot of the news
media, popular media, television, TV shows, and movies think about AI in the 10-, 15-, 20-year-out timeframe, what's going to be possible then, when we're worried about vastly capable,
almost human-like artificial intelligence. A lot of the artificial intelligence we're dealing
with now is painting pictures, generating videos, doing genetic simulations, responding to
chatbots. In many cases, it's finding the right bit of information, taking it and putting it
in the right place at the right time. And that's very different from the AI that we see portrayed
in media. So that's what we're trying to deal with as we think about the AI challenges. So for
the department, we've been working on AI for many years to advance the health sector. But
as we've seen, the pace and the capabilities of AI have accelerated a lot recently. So
we are taking a renewed look and renewed emphasis on the AI challenges as they present
themselves to the department, to the sector. We really hope we have the opportunity to be a
catalyst for successful advances in the adoption of AI, but we need to ensure that we're matching
the pace and scale of AI developments. A lot of this comes out in the artificial intelligence
executive order signed at the end of October. And through that, we are developing a new AI strategy. We're working on an implementation roadmap, really looking at risk management of AI across the department, and then thinking of it from a holistic, sector-based approach. So I do
want to talk a little bit about Executive Order 14110 on the safe, secure, and trustworthy development and use of artificial intelligence. This is, I think, among the more comprehensive executive orders that I've seen in a long time, where it really works to manage and grapple
with the many facets of artificial intelligence, the concerns, the opportunities, and issues
that are really important to us as a nation. You can see on the right-hand side here, the
sections of the executive order deal with safety and security, citizens' privacy, equity and civil
rights, consumers, patients and student rights, workers and workforce, innovation, competition,
and American leadership abroad. And then for the government, our use of AI. And what's important
here is, while many of these seem like and are risk management approaches, it really is an embodiment of American values and how we want to ensure that we are able to safely and responsibly use these
technologies while harnessing the advances for the nation and our citizens. And it's really that
balance, that risk management and harnessing opportunity, that's what we're working to
accomplish through this executive order. I want to turn a little bit to the HHS Trustworthy Artificial Intelligence Playbook and really speak briefly to this. This is a document that we came out with in 2021, and it really precedes a number of the subsequent developments on artificial intelligence.
But the goal here and what we sought to do is lay out the
principles and concerns and things that we recognize are important for these technologies
in order to use them in a responsible manner. So, you know, you see at the top, trustworthy
AI talks about design, development, acquisition, and use of AI in a manner
that fosters public trust and confidence while protecting privacy, civil rights,
civil liberties, and American values. And that's really what we're trying to do here,
ensure that as we're deploying these systems, we're ensuring that they are fair and impartial,
that they are transparent and explainable. You understand what they're going to be doing.
They're responsible and accountable, and, you know, they aren't autonomously operating, doing things that we don't like. We want to ensure they're robust and reliable: they produce what you want them to produce. With respect to privacy of individuals and groups, data is not being used in ways it's not intended. They should be safe and secure, you know, not just from cyber or malicious action, but also safe and secure for people that use these systems. When we take it together, the goal is to use these principles
to ensure that we have AI that, again, fosters public trust and confidence so we can feel
good with the use of these tools and confident that the risks are managed and we're appropriately
deploying these systems for the government and out in the healthcare sector. I do want to highlight
that these principles are not mutually exclusive, and it probably is not possible to have 100
percent on all of these at the same time. And so it's really an important question
of risk management balance and responsible program design and implementation. Turning from the Trustworthy AI Playbook, I want to speak very briefly about some of the uses that we as a department are leveraging AI for. So I'll talk about some use cases, and many of these are trial, research, or experimental systems that we're looking at to see if they are useful and that we would consider for further deployment. But we're using AI for things like virtual animal models for
toxicology testing. So obviating the need to use, or potentially obviating the need to use real
animals, real cell cultures for toxicology testing. Can you use virtual systems to
do that? That would be great. Can we use artificial intelligence to identify molecules as drug repurposing candidates to leverage in managing pandemics, toxins, weapons, or new pathogens that emerge in the environment? Can we use AI to help with tuberculosis detection in chest x-rays when we look at border control concerns? Can we use AI systems for feedback analysis and reviewing public comments, because we get vast volumes of information in from the public, and it's important that we appropriately characterize and categorize and understand what the public is saying to us? Can we predict research categories in stem cell research, looking at research applications and documentation? Can we use it to look at text documents and pull out important factors that our analysts or investigators need to consider when they're looking at documents? All these are use cases that we are looking at as a department and hoping to see if there are opportunities to deploy these safely and responsibly.
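For illustration only, and not an actual HHS system: the public-comment use case above could look something like the minimal text-classification sketch below, with invented categories and training examples.

```python
# Illustrative sketch only (not an HHS system): routing public comments into
# broad categories with a simple scikit-learn text classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny invented training set; a real system would train on many labeled comments.
comments = [
    "Please extend the comment period for this proposed rule",
    "Our clinic supports the updated immunization schedule",
    "I oppose this reporting requirement for small providers",
    "We support broader vaccine access in rural communities",
]
labels = ["process", "support", "oppose", "support"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(comments, labels)

# Categorize a new incoming comment.
print(clf.predict(["I support extending coverage to rural clinics"]))
```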
And then broadly, you know, we as a department have 163 use cases in the last year, and we continue to develop more, but they really fall into a number of categories around information management,
a lot of biology and fundamental research, chatbots, natural language processing, and then
detection and device processing through data, images, material, and things that come in
to detect features, facets, or important elements. And that's what we see is a lot of
these use cases are really on data processing, data management, and we are continuing to work
on AI applications in business operations, business applications, and other areas of just fundamental government work. So with that brief overview, again, I want to
recap how we think about AI and how we're approaching this as a government, leveraging guidance from the White House. I covered a little bit of the trustworthy principles that are very important, so that as we use these technologies, they're trusted and we can have confidence in them, and talked about some of the applications as a department. You know, as I said, AI applications have the potential to improve care, address health inequities, accelerate innovation, and increase market competition. But, one, we must ensure we are approaching AI with kind
of risk minimizing approaches that rely on core principles of trustworthiness. It's vital for the
nation to both seize the promise and manage the risk to enable progress. Ultimately, we're hopeful
that with this careful measured approach, one day AI will just be another tool in the toolbox
that we can use to make progress and improve citizens' lives. With that, I will turn it back
over and thank the committee for your attention. >> Robert Hopkins: Thank you very much,
Mr. Singleton. Our next presenter is Dr. Justin Mathew from FDA. Dr. Mathew, your slides are up. I see you on the line. >> Justin Mathew: Thank you, Bob.
Can everyone hear me and see me? >> Robert Hopkins: Yes. >> Justin Mathew: Perfect. I want to thank the
panel members for inviting me and giving me the opportunity to speak. My name is Justin
Mathew. I'm a policy analyst within the Office of Medical Policy at the Center for Drug
Evaluation and Research at the FDA. And today, we'll be discussing responsive regulation of
artificial intelligence in drug development. Before I get started, I want to state that
any views expressed in this presentation do not necessarily represent the policies of the
FDA. Any mentions of specific names and brands are not endorsements. And finally, I don't
have any particular disclosures to state. So I know Greg touched on this, but I just want
to set a baseline. So let's just start with the working definition that FDA currently is
using to define AI. And this is derived from the White House executive order published back in October 2023. Artificial intelligence is a machine-based system that can make predictions,
recommendations, or decisions influencing real or virtual environments. AI systems use machine and
human-based inputs to perceive real and virtual environments, abstract such perceptions into
models through analysis in an automated manner, and use model inference to formulate options
for information or action. Next slide, please. So what are the drivers behind the growth in
AI health applications? The main takeaway from this slide points to the fact that
in the last 15, 20 years, we have gathered a ton of diverse healthcare data that can be
linked. With the increase in computing power and breakthroughs in methods and algorithms, all this data can be put to good use to train these models and gain new insights using artificial
intelligence. And you can see the growth in AI and healthcare research on this slide. The
graph was taken from an article published in the journal Nature back in September 2020, a little
dated, but still relevant. On the line graph, the authors performed a query search in PubMed using the terms machine learning or deep learning and choosing a specific year in the advanced search field. And you can see the exponential growth in recent years, with 2019 returning search results of over 12,500 articles.
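For readers who want to reproduce that kind of count, here is a minimal sketch using NCBI's public E-utilities API; the exact query string the authors used is an assumption here, so treat the numbers as approximate.

```python
# Hedged sketch: counting PubMed articles per year via NCBI E-utilities.
# The query below approximates, but may not exactly match, the authors' search.
import json
import urllib.parse
import urllib.request

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_count(term: str, year: int) -> int:
    """Return the number of PubMed records matching `term` in a given year."""
    query = f"({term}) AND {year}[dp]"  # [dp] = date of publication
    params = urllib.parse.urlencode(
        {"db": "pubmed", "term": query, "rettype": "count", "retmode": "json"}
    )
    with urllib.request.urlopen(f"{EUTILS}?{params}") as resp:
        data = json.load(resp)
    return int(data["esearchresult"]["count"])

for year in range(2015, 2020):
    print(year, pubmed_count('"machine learning" OR "deep learning"', year))
```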
And on the right side, it's the same method, but combining all the years, and they broke it down by medical specialty, with pathology, radiology, and surgery accounting for the top three. And now this growth is reflected in FDA. Specifically, the Center for Devices and Radiological Health, CDRH, has been leading the way. CDRH has authorized nearly 700 AI-enabled devices, and there was a steady uptick in AI-enabled devices starting in 2015 through 2022. More than three-fourths of
the devices authorized were in radiology. AI use in the drug and biologic development landscape occurs throughout all phases, starting from discovery, into preclinical research, into clinical research, and eventually into post-market safety surveillance and even manufacturing. Some of the uses in the discovery phase include drug target
identification, compound screening, and design. In the pre-clinical research phase, some
of the main applications for AI use are in drug dose range finding, PK/PD-type
modeling, and then in the clinical research, AI may be used for site identification for
recruitment in clinical trials, as well as enhanced medication adherence and study retention,
which is really applicable for vaccine trials, especially with multi-dose scheduling. For example, in COVID-19 vaccine development, pharma companies used graph-based knowledge and image analysis to get new insights into illnesses and detect biomarkers 30 percent faster than human
pathologists could. Staying within the clinical research phase, we see applications that are
related to the use of digital health technologies in clinical trials. And some applications that overlap
with real world data analysis or even creation of digital twins where you're simulating placebo
response of specific participants. And finally, in the post-market side, AI and machine
learning are seen in pharmacovigilance, identifying and evaluating safety
signals and adverse event reports. So, you can see, again, applications
of AI are across the development cycle, but I want to emphasize that some of the
use cases may be outside of FDA's oversight, but we still encourage and hope applicants will disclose that they've used AI in their development programs, even if it's outside of FDA oversight. So within CDER, we're starting to see an increase in submissions with AI. This table
is from an article published by a colleague, Dr. Qi Liu from FDA, and it was published back in April 2023. You can see in the year 2021, there were a total of 132 submissions that had an AI component, compared to only one submission in 2016, with 118 of the 132 submissions being in the clinical research stage. So this just reiterates the growth in the use of AI in drug development. So all these opportunities will have to be balanced against some of the known
challenges with using data-driven technologies. Fundamentally, as with any data-driven technology,
it's only as good as the data that goes in. This is especially important for certain contexts
of use, where we need to understand the data that was used to train these models. Are the data
used for AI development of high quality? Are there any inherent biases within the data? Is the data
representative of the intended population? With regards to opacity, you may sometimes run into the black-box paradox of AI algorithms and struggle to understand how the model produced a particular output. I also want to touch on the necessity for transparency, not just towards us regulators, but also the end users, the providers, the patients for whom the algorithm has direct impacts. And now, having all this data on individuals opens up data privacy concerns, and having robust security infrastructures is key and important. A particular challenge FDA is faced with is its oversight and governance of AI as it pertains to adaptive algorithms after approval and once deployed to the public. As I
mentioned previously, my colleagues at CDRH have been leading
the way with regards to having experience in submissions containing AI. They have
had several workshops and published some guidances for use of AI in medical devices.
And I want to highlight that in fall of 2021, FDA's CDRH, in collaboration with Health
Canada and the United Kingdom's Medicines and Healthcare Products Regulatory Agency,
jointly identified 10 guiding principles that can inform the development of good machine
learning practices for medical devices. I'm not going to read all 10 guiding principles,
but I just want to touch on some key aspects. The first principle I want to highlight is the
incorporation of a multidisciplinary approach from the onset of model development. It's not just important for data scientists, biostatisticians, and epidemiologists to be involved in the initial phase of model development; we should also be including clinicians, pharmacologists, and other impacted parties in thinking around developing these models, so that you're developing the models on robust data from the onset. And what I mean by robust data is that it's well representative of the intended patient population. Are there any hidden biases? Because
the algorithm developed on the training data set may produce excellent results initially, but once
deployed to the wider intended patient population, the algorithm may not work as well. And finally,
maintaining a human-in-the-loop approach during validation is important, especially when the AI model has a high level of influence and the decision consequence is high as well. Now,
I want to emphasize engagement is key here. These are new frontiers, and as we work on
navigating through the processes of how best to implement and regulate AI in the development
of drugs and biologics, as well as within the lifecycle of medical products, we want to promote
a mutual learning on three main core principles, which are: one, human-led governance with accountability and transparency; two, high-quality, reliable, well-representative data; and three, the AI models themselves, from development to performance
testing to monitoring and validation. So, what's next? The White House Executive
Order establishes a government-wide effort to guide responsible AI development and
deployment through federal agency leadership, regulation of industry, and engagement with
international partners. CDER, specifically, is tasked with developing a strategy for the use of AI and AI-enabled tools in drug development. We in CDER, in collaboration with the Center for Biologics Evaluation and Research, CBER, and with help from CDRH, are developing a
guidance for use of artificial intelligence to support regulatory decision-making for drugs
and biologic products. With regard to advancing safety and security, we see utility in the use of
AI in pharmacovigilance and post-market safety, and FDA will continue to use grants and
other funding for demonstration projects. And just like everybody else, the FDA is looking to use AI to help us internally increase productivity and efficiency as well. Like I mentioned in the previous slides, we want to promote industry engagement. That
is why we hold public workshops. And in fact, CDER will be organizing a public workshop with regards to the use of AI in the drug
development cycle in the summer. So, look for further details to come out soon.
And on that note, I will pass it back to Bob. >> Robert Hopkins: Thank you very much for your presentation, Dr. Mathew. Our next presenter is Ted
Schenkelberg from the Human Immunome Project. I see you on the camera and
your slides are up. You have four slides. >> Ted Schenkelberg: Great. Thank you. And great
presentation, Justin. Super interesting to hear what the FDA is doing. My name is Ted Schenkelberg. I'm actually managing partner at Next Frontier Advisors. We're a network of consultants that help NGOs, companies, and philanthropy with vaccine development from design through manufacturing. And I'm a
former co-founder of the Human Immunome Project, a global NGO focused on understanding the immune
system. We're going to move pretty quickly here through some technical material, but I want
to give you a sense of what's happening in biomedical research and the application of AI, advanced computing, and machine learning. And in biomedical discovery, we're really starting to see changes and impact driven by AI. One example is computer vision, which was really optimized on the internet. Retinal scans are helping identify underlying disease just from the image of the retina, and with a high degree of accuracy. So, the retina scan is able to predict kidney disease, Alzheimer's, and cardiovascular diseases. And again, I'm just
going to go through a couple examples where AI has had an impact or is having an impact
on biomedical research. In the protein design and prediction area, Google's spin-out DeepMind developed a program, or algorithm, called AlphaFold, which is able to predict protein structures
from amino acid sequences. And they do this now for 200 million known proteins
with a reasonable degree of accuracy. Previously, it would have taken up to a decade
sometimes to develop the structure of a single protein. So this is really accelerating how
we're understanding core components of our biology. And the last thing I'll just
highlight here to give you context of, you know, generally how AI is impacting
biomedical research. Recently at MIT, a deep learning algorithm identified the first
new class of antibiotics in a generation, and it did so relatively rapidly,
completely different context than sort of the laboratory-based chemical process that we've
been using for antibiotic discovery. So, that's a brief snapshot. There's other areas we could
obviously go into, but I want to keep moving. In the history of vaccine development,
technological advances have driven our ability to design vaccines. And this goes back to the 18th century through the 21st century. And this is a slide that's adapted from Stan Plotkin's vaccine book. And even in the first couple decades of our own century, structural biology, mRNA, and synthetic biology have yielded vaccines for RSV and COVID-19 with new platforms, and new ways to design adjuvants which give us greater potential for protecting individuals. So, technological advances often are the drivers of our ability to design new vaccines. I'm not going to dwell
on this,
but there's a lot of investment, a lot of hype, and a lot of activity
that's going on in the biomed space. This slide's probably already outdated, but
this gives you a sense of what's going on now and kind of the hope of AI within biomedicine on
the product development side. We're going to turn now to vaccine development, and we're going to
use this very broadly, to think about how AI and approaches in advanced computing can help solve some of the problems or hurdles that are hindering effective vaccine development. And I'll give you
sort of quickly some examples in each of these areas, but they relate to our ability to better understand human immunity and have interventions that are able to protect key populations, our ability to design vaccines better, and our ability to optimize how the vaccines are developed as well as tested. So, a number of these areas include modeling and prediction of both the systems, prediction and modeling within key populations, understanding how to design or identify key antigens or receptors, which is really where the rubber hits the road of immune protection, and optimization. So, we'll go through each of these. And
I want to just, you know, offer a little bit of temperance here on hype and reality. AI and advanced computing offer tremendous potential; we saw that on the first slide. But this is far from realized; most of these technologies are still nascent, particularly in the vaccine area. And they need to really demonstrate impact, efficiency, effectiveness, and efficacy.
And we're not there yet, but it's starting to change things. And I just have a brief quote
from Jim Crowe, who runs the Vanderbilt Vaccine Center, which basically says a lot of the current laboratory approaches are still more efficient and effective than a lot of the approaches in AI. That may not be true in the coming years, but it's true right now, and AI really needs to demonstrate, and scientists need to demonstrate, the effectiveness in real-world settings of the
ability to protect people and extend lives. So, what's driving all this? We saw in previous slides from the other speakers that, you know, data, the ubiquity of data, is really the fuel for AI. And on the biomedical side, we're seeing new types and depths of data that we've never seen before. And a lot of this data can be generated at a lower and lower cost, particularly in the genomic setting. From small drops of blood in systems biology, we're basically able to look at almost every component in the body, from our
genome, transcriptome, proteome, microbiome, et cetera. We can generate a lot of data about
how biological systems are working. And with that, we are able to take advantage of some of
these accelerated advances in computing. If we combine that with population or clinical studies, longitudinal studies that look at populations over time, taking samples, we really are able to get a dynamic look at individuals and how their biology
changes either by infection or vaccination and understand drivers of better protection,
better vaccination versus non-response. And we're having an increasing amount of structural
molecular data which is going to help us design vaccines. So let's look at a couple problems
that are underlying our ability to develop both therapeutic as well as prophylactic vaccines.
I'm also going to throw in immunotherapies because we kind of look at the immune system
as one system that helps us fight disease. We don't really understand how the immune
system works. We don't really understand the drivers of effective vaccination other than
maybe antibody titers and some, you know, some levels below that. But this is a complex
distributed system that has memory and changes over time. And if we were able to understand
at a systemic or component level how to model or create predictive structures that tell us
how the immune system works, it would really accelerate our ability to design vaccines or other interventions to fight disease. So, I'm just going to highlight, without going into a
lot of details, some advances that are occurring within the field of immunology underlying future
vaccine or future immunological interventions to protect us from disease. This is a study
out of the Sanger Institute in the UK. And basically, they developed a proof of
concept mathematical model which looks at the immune system as a distributed system,
and it predicts interactions
from protein molecules to multicellular behavior across
organs. And this is a breakthrough study in terms of understanding the immune system as a
system. It's the first step. It's early. But it's super interesting to see what mathematics
and modeling can do. This is a study that we did at the Human Immunome Project, where we took systems biology and small drops of blood to understand and parse out who responds to vaccination and who doesn't. And this was the HBV vaccine. And looking at early immune signatures in the innate immune system, we were able to predict who was going to respond and who wasn't, prior to vaccination, based on immune signatures, as well as the level of antibody response. And so, this was a very small cohort, but it starts to show the power of machine learning and AI in terms of predicting a system and really parsing out the immunological drivers of protection, which we don't yet understand for vaccination.
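Schematically, that kind of responder-prediction analysis can be sketched as below; this is not the Human Immunome Project's actual pipeline, and the data, cohort size, and model choice are placeholders.

```python
# Schematic only: predicting vaccine response from pre-vaccination immune
# signatures. Data here are random placeholders, not HIP study data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_subjects, n_features = 60, 500                 # small cohort, many omics features
X = rng.normal(size=(n_subjects, n_features))    # stand-in for gene-expression data
y = np.tile([0, 1], n_subjects // 2)             # 1 = responder, 0 = non-responder

# Sparse (L1) logistic regression is one common choice when features >> subjects.
model = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="l1", solver="liblinear", C=0.1),
)
scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print("cross-validated AUC:", scores.mean())     # ~0.5 on random data, by design
```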
Another major, major problem in our ability to protect individuals is the variation of immune responses across populations. Our immune systems are really different at the beginning of life, in pregnancy and infancy, and at the end of life, in older adults, those who are immunocompromised, and those who are living in environments that are highly stressed, like those in low- and middle-income countries. These groups make up the largest burden of disease in the world, and yet we're not so good at generating effective immune responses or vaccine responses in these groups. This is a study done out of Stanford which takes
a systems biology approach with a longitudinal study. And it starts to look at immunosenescence
and the drivers of our declining immunity as we age. And it developed predictive signatures of immune age from blood samples and analysis of gene expression, cytokines, and cellular phenotyping. And this was highly correlated to mortality, even more so than certain risk factors in the Framingham Heart Study. This is getting at the drivers of the decline in our
immune system. And if we can understand that, we might be able to engineer better vaccines
to protect an aging world. Another problem, major problem, particularly for complex infectious
diseases as well as non-communicable diseases is the identification of antigens and their immune
receptors, which are protective against disease. This is where the rubber hits the road
in vaccination, either therapeutic or prophylactic. This is where we develop antibody
or effective T-cell responses. We don't really know how to model this. This is a super
interesting study in cancer immunotherapy, which used a deep learning algorithm to identify
specific T-cell receptors that were associated with response to immunotherapy. So, which T-cell receptors protected people from cancer or helped clear cancer cells in their body? This is a general concept that could be applied to many different vaccines and could help us design
interventions to not only have solid antibody responses
to vaccines, but effective T-cell
responses to vaccines if we can understand the specific receptors or markers which are associated
with response to vaccination or immunotherapy. So it's a very interesting, powerful study, at least in concept, using deep learning. This is a study which used a general language
model to help accelerate the natural antibody evolution or affinity maturation. This is the
process by which an antibody is evolved in the body to bind more and more, better and better
to antigens and basically help more effectively neutralize infections. And so, this was a study in
which, in two rounds of computational evolution, they were able to evolve better antibody
effectiveness and binding for coronavirus, Ebola, and influenza. Super interesting study
because this is hard to do in the lab, and this gave a computational approach for modifying
antibodies and underlies their ability to design maybe future vaccines or monoclonal antibodies
through a general language
model approach for antibody evolution. So, another problem is a
lot of our vaccine development and platforms aren't optimized. And this hinders our ability to
have effective vaccines, to distribute vaccines, and our ability to respond to pandemics. And
particularly, mRNA, which is a very powerful tool for responding to pandemics rapidly, has a number
of issues around stability, manufacturing, and cost. And so, next slide please, one of the questions that was asked is: can we improve the stability of protein expression for mRNA by applying a computational approach that was used in linguistics? And surprisingly, this algorithm suggested new ways to tweak mRNA molecules for COVID-19 vaccines and design vaccines which have better chemical stability, protein translation, and immunogenicity. Again, this was done computationally, and these are kind of in-lab markers, but it pointed to a way to rapidly design and optimize this platform better.
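As a toy illustration of computational sequence optimization, far simpler than the linguistics-inspired dynamic programming alluded to here, one could greedily pick GC-rich synonymous codons, since GC content loosely tracks chemical stability; the codon table below is deliberately tiny.

```python
# Toy stand-in for mRNA sequence optimization. The published approach used
# dynamic programming over the full synonymous-codon lattice; here we just
# greedily pick each amino acid's most GC-rich codon as a crude stability proxy.
CODON_TABLE = {
    "M": ["AUG"],
    "F": ["UUU", "UUC"],
    "L": ["UUA", "CUU", "CUG"],
    "S": ["UCU", "UCC", "AGC"],
}

def gc_content(codon: str) -> int:
    return sum(base in "GC" for base in codon)

def naive_optimize(protein: str) -> str:
    """Pick the most GC-rich synonymous codon for each residue."""
    return "".join(max(CODON_TABLE[aa], key=gc_content) for aa in protein)

print(naive_optimize("MFLS"))  # AUGUUCCUGUCC
```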
So, another question, which I think was, you know, teased out a little bit in the previous presentation, is: can we optimize clinical studies? Currently, for vaccines, clinical trials are
hugely expensive, time-consuming, and not really that predictive in the earlier stages. So, can we
combine learnings from systems biology, where we're understanding the drivers of immunization or safety much better, particularly in key subgroups, with algorithms that help us parse out the selection of participants? And so, in the future, you could start to see
efficacy trials that, instead of being in the tens or hundreds of thousands of individuals, are in the hundreds or thousands, more stratified, driven by biology, and driven by the identification or knowledge of predictive signatures. This would give faster results, greater probability of success, and include a lot of different biological markers that are potentially associated with protection of populations or safety.
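To see why enrichment could shrink trials this much, here is a standard two-proportion power calculation; it is a generic illustration with invented attack rates, not a method from the talk.

```python
# Generic power calculation illustrating trial-size reduction from enrolling
# participants with a higher expected attack rate. Rates below are invented.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

def per_arm_n(placebo_attack_rate, vaccine_efficacy=0.6, alpha=0.05, power=0.8):
    p_placebo = placebo_attack_rate
    p_vaccine = p_placebo * (1 - vaccine_efficacy)
    effect = proportion_effectsize(p_placebo, p_vaccine)
    return NormalIndPower().solve_power(effect_size=effect, alpha=alpha, power=power)

print(round(per_arm_n(0.01)))  # unselected population: roughly 1,400 per arm
print(round(per_arm_n(0.10)))  # signature-enriched cohort: roughly 135 per arm
```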
And way off in the future, maybe, maybe, maybe, could we move towards AI-simulated vaccine trials? This may or may not be on the regulatory path, but it certainly would help us to have a better idea of de-risking a product before it goes into the clinic. Could we run a simulated trial based on what
we know more about human immunology, using supercomputing, using AI, to sort of say this is a good target or this isn't a good target, and have predictive outcomes through complex models of the immune system or populations about whether vaccines will or won't work? This is
off in the future, but you could see this really changing the process of vaccine development and
clinical studies. This is the last slide; I just want to summarize. New technology has always been
a driver of our advances in vaccines. AI probably won't be any different. It's already changing
many other industries, from investment banking to security, imaging, and media. But we're really, really early on. We're likely to see new tools like large language models, which are really good at complex, noisy data systems that aren't annotated. But
the proof is in the pudding. We've got to show clinical efficacy. We've got to show improvement in efficiency over current clinical or lab approaches. And we have to start taking these really interesting concepts that are in the lab and seeing if they actually work in protecting people, extending people's lives, and improving our ability to combat
diseases. And I'd just like to acknowledge
a number of institutions and individuals who
helped contribute ideas to this presentation. >> Robert Hopkins: Thank you very
much, Mr. Schenkelberg. Our next presenter is Mark Langowski from
the University of Washington. Mark, your slides are up. I see your face. You look like you're still muted. You have three. >> Mark Langowski: Can you hear me? >> Robert Hopkins: Yes, you're loud and clear. >> Mark Langowski: Awesome, thank you. Hi, I'm
Mark. I'm a senior graduate student in Neil King's lab here at the IPD. Neil couldn't be here
today because he had a conflict that came up, but I'm going to talk a little bit about some of the
AI assisted vaccine design that we're doing here in the lab. So previously, people here at the IPD
have developed these self-assembling nanoparticles using computational methods, using some of these
biophysical-based models. So this example right here is in the gray and in the purple are two
existing proteins that you can dock together in some sort of symmetric arrangement, in this case an icosahedron, and then you can design interfaces, de novo interfaces, between them so that they will, when mixed together, form this nanoparticle
every single time. And next slide, please. And so what you can do with this is, so this
same nanoparticle that was designed was used as a scaffold to display the SARS-CoV-2 RBD, which
you can see on the left, and was genetically fused. And this was done in collaboration with
David Veesler's lab here. In the center here, this vaccine is called SKYCovione. It has a higher neutralizing titer than AstraZeneca's SARS-CoV-2 vaccine. And so, you know, this is proof here that this is the first, again, computationally designed protein medicine, and that this can work, right? And this was, I think, approved about a year and a half ago in Korea. But now, okay, so these are the old methods, but what can we do now, right, with AI-assisted protein design? And so we kind of
think of this as a pipeline. So on the left here, we have protein backbone generation, right? So
what features do we want, right? So, like, we need to make the skeleton of the protein. And then
the next step is sequence design. Okay, so how do we make a sequence that folds it into this shape
every single time? And then the actual structure prediction networks, right, like AlphaFold, to say, okay, feed it that sequence and see, does it actually fold into the design model? And the last
step of all this is after we've designed all this in the computer to do experimentation with it,
right, and see, you know, determine structures in real life and see if they actually match
the design models that we made in the computer. And so David Baker, the head of our institute, has
been at the forefront of this. And so there are a lot of machine learning methods for each of these steps, right? So in the first box here, hallucination, inpainting, and RFdiffusion are ways to generate these backbones using these AI-assisted networks. In the second box in the center, ProteinMPNN is
another machine learning algorithm that designs sequences for these backbones. And then third, as
has been mentioned before, there's AlphaFold, and what was developed here is RoseTTAFold, to actually predict highly accurate structures from sequence. So as an example here, ProteinMPNN. It can take in a backbone. The way ProteinMPNN has been trained is that it's been trained on the PDB, so on all existing structures. And basically it's learned features about what makes, like, a good sequence. And so what you can do here on the left, for this alpha helix in pink, is that you just get a backbone. So maybe you generate something from the methods that I mentioned previously, and then you can feed it through the ProteinMPNN network. And then ProteinMPNN is going to give you a sequence that it thinks is really good for that specific backbone. And then you can take that sequence and predict it and see
how it works. And then next slide, please. So when you use ProteinMPNN to apply sequences to newly designed backbones, so on the left here, what we're showing is soluble yield. So MPNN-designed proteins have a much higher soluble yield compared to AlphaFold-hallucinated proteins. And on the right is CD spectroscopy data. And this is basically just looking at the secondary structure content of these proteins, but at 25°C or 95°C, the spectra are nearly equivalent, right? So these proteins are really, really stable across a huge or a large
temperature range. And so going back to the backbone generation, so at the cutting edge
is RFdiffusion, right? And so at the bottom here, this method's been inspired by deep learning methods to make synthetic images, like DALL-E, which maybe some of you have heard about. And so the way those models work is that you train them on a bunch of images, and basically you noise those images, so you can imagine that picture going in reverse. You make it more staticky, more noisy,
and then you're training the model to be able to progressively, step by step, eventually generate something, you know, that looks like an actual image. So what you're seeing at the bottom here, on the right, that picture is of somebody that doesn't exist. This was generated by a model, right? And again, this is just from what it's learned about images.
And we can do the same thing for proteins. So at the top here in the upper left, you can see that you have this, like, noisy little cloud of atoms that eventually forms into this backbone that looks like a protein, right? And so we've trained, or people here in the Baker Lab have trained, these networks on the PDB, so on existing structures: you noise those structures, you teach the model how to, like, de-noise them back into something that looks like a protein, and then we can use these to generate actual protein backbones. Next slide, please.
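To make the noising-and-denoising idea concrete, here is a toy numerical sketch of the forward (noising) process such models are trained to reverse; it is illustrative only, and RFdiffusion itself operates on protein backbone coordinates, not a 1-D signal.

```python
# Toy sketch of the forward "noising" process behind diffusion models.
# Illustrative only: RFdiffusion works on protein backbone frames, not scalars.
import numpy as np

T = 100
betas = np.linspace(1e-4, 0.05, T)      # noise schedule
alpha_bar = np.cumprod(1.0 - betas)     # cumulative fraction of signal kept

def add_noise(x0, t, rng):
    """Blend clean data x0 with Gaussian noise at diffusion step t."""
    eps = rng.normal(size=x0.shape)
    xt = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps
    return xt, eps  # a network is trained to predict eps (or x0) from (xt, t)

rng = np.random.default_rng(1)
x0 = np.sin(np.linspace(0, 2 * np.pi, 64))   # stand-in for a clean "structure"
for t in (0, T // 2, T - 1):
    xt, _ = add_noise(x0, t, rng)
    print(f"step {t:3d}: signal fraction {np.sqrt(alpha_bar[t]):.2f}")
```

Generation then runs this process in reverse, starting from pure noise and stepping back toward a structured sample.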
And so RFdiffusion can accommodate, you know, a bunch of different tasks. You can make just proteins unconditionally, as shown in the upper left: you start with a cloud of atoms, and then it denoises into something that looks like a protein. On the right, you can make protein binders towards something. So say you have some sort of receptor you want to target, in green there. You can choose a spot on the protein and say, okay, diffuse a backbone that will fit into that spot on
the target protein. In the lower left, you can make symmetric oligomers. So this could be like a nanoparticle or it could just be like a dimeric or trimeric
protein, whatever you want. And you can diffuse symmetric oligomers that way. And on the lower
right, at least in the context of maybe like vaccine design, you could do functional motif
scaffolding. So you could take an epitope, right, a known epitope that maybe you want
to, you know, focus things towards. You can pull that out and actually tell diffusion to, you know, build something completely new that will support it. And this is one example.
And so going back to kind of like the very first slide: before, we had to use existing proteins and dock them together to make a nanoparticle, which maybe isn't necessarily the most custom way to do things, but obviously it can work. But here now we can actually diffuse these nanoparticles completely de novo. And so Helen in the lab made this particle. So she diffused this icosahedron shell on the left here. So it has 60 copies of a protein. So basically we told
diffusion to diffuse something that looks like an icosahedron, right? It generates this backbone on the left; we used ProteinMPNN to design interfaces between everything. And then on the right is an actual cryo-EM micrograph. And those are the averages that are shown there. And below it is the cryo-EM density. And so when you fit the design model into the density, you can see that it matches really, really well,
right? And so this is kind of the new way that we're making
nanoparticles now. And it's much easier compared to, or in theory, compared to the old methods that
I showed at the beginning of the presentation. And so an actual example for an antigen, in
this case, this is HIV envelope trimer. So there are two parts to it. There's GP120 and
GP41. And so there's been a lot of work done in the past couple decades to stabilize these
HIV envelope trimers. And so on the left here, it's showing that there is a metastable core. It's not fully resolved. And a lot of the stabilization mutations that have been
applied to this protein to make it more happy and actually keep it as a trimer for vaccines
are towards this, the part that's called GP41, which is like the trimerization interface. And so
the thought here is that kind of like, we hit our limit in putting band-aids on it to try to fix it
and make it stable. So, the question was, okay, how can we use these new models that we've come
up with in the last few years to make a completely new core and take away the problem, right? So,
basically, the outside of the HIV envelope trimer is going to look exactly the same, but the inside
now is going to be a completely de novo interface. And so, on the right, you know, the thought here
is that, okay, can we remove the GP41 core, you know, maybe we'll have to do some stuff to play with the GP120 permutation and provide a new supporting structure. And one advantage of
this is that there are some germline-targeting vaccines for HIV that only show a monomeric, or just one, copy and don't actually fold into a full symmetric representation of this trimer. And that potentially could lead to off-target responses. So the hope here is that you make something that's higher fidelity, that can elicit the vaccine response that you actually want. And so this is work all done by a visiting postdoc, Naimouan Aldong, who's at Amsterdam UMC.
And so he used some of these AI-assisted methods to generate these new cores for the GP120 trimer and also directly fuse it into an existing nanoparticle. And so at the top here, this is a two-component nanoparticle called I53_dn5. In blue is the trimeric component, which we want to scaffold the trimer on. So in the lower left, I53_dn5B, one of
these components was used as the base. We used oligomeric hallucination, one of these
networks, to generate a helix bundle. So you extend it out. And then the next step to this now
is you dock the GP120 trimer and find the best orientation. And then you in-paint. So it's
another model, another deep learning model, to actually connect the GP120 trimer to this
helix bundle. And then you use ProteinMPNN to design everything, to make sure that the inside of the GP120 and this newly designed de novo protein bundle, you know, actually make a proper interface. And so this actually works. It's a pretty incredible result. This is a cryo-EM structure of this GP41-free native-like trimer. So on the left here is the design model. And
so what's highlighted in pink there is the de novo backbone that's been generated at the core of the protein. And then it was designed with ProteinMPNN. And to the right of it is this GP41-free trimer overlaid on a SOSIP trimer, so like a native-like, or native, envelope trimer. And this is in complex with VRC01-class antibodies. And you can see that there's almost no difference between these two things. They're pretty much
identical to each other. And then on the lower left is the actual cryo-EM data. And you can
see the little helix bundle at the bottom that was diffused out. And then the antibodies
bound to the trimer. And then to the right of that, this is just showing three trimers on the nanoparticle; that's the soccer ball sitting below it. You can see that there are antibodies bound to it and that, you know, it looks like it's in the proper orientation, as it was intended to be designed. And so when you have a fully occupied particle,
you have the 20 copies of envelope trimer and you have 60 fabs binding each binding site
that is available. And so this is going into, currently it's going into mice to be tested.
We have some other examples, including for malaria, which is what I work on, that I couldn't show today, but we are testing these things out on antigens generated with all of these deep learning protein design methods. So just to summarize: these computationally designed protein vaccines are a reality. Even with the old methods, you can make these protein vaccines, and AI has changed a lot about how we design proteins. At the start of my PhD five years ago, we started with biophysical models; we still use some of that, but these deep learning methods are way faster and way better. They're not perfect by any means, and there isn't proof yet that they'll necessarily work; I can't guarantee that they'll work. But it's been amazing what has been done in the last few years. And the last line here is ambiguous and a bit vague. There are concerns, and I'm sure people have seen the news about AI, but at least in the context of protein design, we think it's a really positive force for designing better medicines and vaccines. There's a bunch of work coming out of the Baker Lab, for biologics and therapies like that, that seems really, really promising, and the same is true in our lab for the production of actual vaccines. Next slide. Just to acknowledge the people who have worked on all this stuff and the data I presented here: Neil, of course, my PI; Helen; and Yuan; and thanks to all our funders. And thank you so much.
>> Robert Hopkins: Thank you very much, Mark. Very good presentation. Our next presenter is Dr. Jimmy Gollihar from Houston Methodist. Dr.
Gollihar, your slides are up. I see your face. >> Jimmy Gollihar: Okay, great. Thanks. First of
all, thanks for having me. It's an honor to speak with you all and share how we're beginning to use artificial intelligence and machine learning to design and validate immunogens against different viruses. I'm also excited to share a little bit about how we safely generate data in the laboratory to enhance these models and make them better over time. So I thought we should start by taking a look at traditional vaccine design, which sets the stage for appreciating what AI can do. First of all, traditional vaccine development follows a sequential, structured workflow, which we dissect into four key stages. The first stage is antigen design, where we rely on rational design, directed evolution, and computational methods, mostly physics- or evolution-based. These methods attempt to craft antigens that resemble some conformation of a protein, usually a pre-fusion conformation for viruses, and then hopefully elicit a desired immune response. Moving to the second stage, we must test these in in vitro experiments to assess the various properties of the vaccine candidates, such as stability, epitope presentation, expression levels, antibody binding, and even structural integrity, like the cryo-EM images you just saw. These experiments are vital for down-selecting designs. The third stage involves pre-clinical work, where we're working in small animal models to understand toxicity and then moving into larger, more physiologically similar animals to challenge the vaccine's efficacy and further assess safety profiles. And then finally, the best candidates might make it to the clinical stage, where we're going into humans to assess efficacy and safety as well. Throughout all of these stages, the traditional approach has always been about abstracting concepts from the data and optimizing those candidates through trial and error. And as you can see with the arrows, it goes in both directions: as we move from stage to stage, we learn more, which enables us to design or engineer better immunogens. As you can imagine, this is an incredibly time-consuming effort that often takes years, even decades, to be successful. As we'll see in the next slides, we believe that machine learning has the potential to revolutionize this traditional workflow, making it faster and more efficient. So, moving beyond traditional vaccine engineering, our goal is to build a platform that integrates these specialized tools into a
comprehensive framework for immunogen design. We build purpose-built tools that are trained to abstract very specific types of information and, in turn, design constrained vaccine candidates based on that information. First, we start with antigen identification and pre-processing, which currently draws on our in-house genomic surveillance efforts and even links to public databases such as ViPR, run by BV-BRC. Here, we determine the type of antigen, identify potential antigens to use as immunogens, and then prepare them for our other modules. Once we have antigens identified, our AI-driven tools analyze sequence and phylogeny to understand the evolutionary relationships and sequence variability of pathogens. The structure and stability tools use modules to model the three-dimensional structures, predict their stability, and enhance their presentation of protective epitopes to the human immune system. And then, of course, immunological profiling is another critical component, where we're expanding the tools to understand and predict how human immune systems might interact with different antigens. All of these purpose-built tools feed into our antigen design model, where we are testing different multitasking architectures to synthesize data from the various models and propose the most promising vaccine candidates. Now, I'm going to go through each one of these and provide a single example of a tool for each of these modules. So next slide, please. Okay, so we're starting with sequence and
phylogeny. Here we're developing models for very specific viral families to inform design. This particular module is broken into three phases: training, generation, and folding. In the training phase, we use a large language model, which you've heard about, adapted for proteins, to learn the sequence distribution of a viral family. This training involves understanding the complex language of proteins, the amino acid sequences that determine structure and function. The generation phase is where the trained model becomes an architect: it auto-generates a diverse set of new protein sequences adhering to these learned patterns. These sequences are not just random guesses; they are hypothesized to maintain the biological structure and function that we want. We further refine this process through techniques like mean pooling and k-means clustering to select sequences that represent our desired traits. And lastly, we have the folding phase. Once we've generated a new sequence, we want to put it into a three-dimensional structure that we can look at. You just heard about AlphaFold and RoseTTAFold; we use these tools, along with OmegaFold and OpenFold, to predict the structures of our generated sequences. This is a really important step for proteins that don't have structures. We can also take this output and pipe it directly into our structure and stability tools, which I'll talk about a little bit later.
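As a hedged illustration of that generation-phase refinement, the sketch below mean-pools per-residue embeddings into one vector per sequence, clusters them with k-means, and keeps one representative per cluster for the folding phase. The generator and embedder are random stubs; any real protein language model would supply them:

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    AA = "ACDEFGHIKLMNPQRSTVWY"

    def generate_sequence(length: int = 120) -> str:
        """Stub for the trained model's autoregressive sampler."""
        return "".join(rng.choice(list(AA), size=length))

    def embed_residues(seq: str, dim: int = 64) -> np.ndarray:
        """Stub: per-residue embeddings with shape (len(seq), dim)."""
        return rng.normal(size=(len(seq), dim))

    sequences = [generate_sequence() for _ in range(200)]
    # Mean pooling: collapse per-residue embeddings to one vector per sequence.
    pooled = np.stack([embed_residues(s).mean(axis=0) for s in sequences])

    km = KMeans(n_clusters=10, n_init=10, random_state=0).fit(pooled)
    # Keep the sequence closest to each cluster centroid as its representative.
    representatives = []
    for k in range(km.n_clusters):
        members = np.where(km.labels_ == k)[0]
        dists = np.linalg.norm(pooled[members] - km.cluster_centers_[k], axis=1)
        representatives.append(sequences[members[dists.argmin()]])
    print(len(representatives), "sequences selected for the folding phase")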
So here is an example of using sequence and phylogeny, on a subset of paramyxoviruses. To the left, we're looking at genetic diversity within the Henipavirus branch. The genetic relationships between these viruses are shown by the color code, and the highlights show the geographical spread of the henipavirus genomes. On the right, we dive into the structural biology, where we're examining how far stabilizing mutations, such as the disulfide, make it along the evolutionary tree. Understanding the sequence and phylogeny of a particular virus enables us to make faster predictions and, in some ways, automates the process that was used for the COVID-19 vaccines. As you probably know, the McLellan laboratory had previously stabilized the MERS spike protein; I think they published that back in 2016. And then when the Wuhan virus sequence was released, they were able to solve the pre-fusion stabilized structure of that within about 12 days. These pre-fusion stabilizing mutations are also found in all U.S.-approved vaccines. So structure and function and sequence and phylogeny are all very closely intertwined. Next slide, please. Next up are our structure and stability
algorithms, where we use computer vision to engineer proteins with desired conformational stability. In most cases, we're using these algorithms to stabilize the pre-fusion conformation of a viral protein to elicit protective monoclonal antibodies. In this module, we're teaching our neural networks the very specific chemistries that immunogen designers like Jason McLellan or Andrew Ward or Saphire or Peter Kwong would use to rationally design variants. We're teaching them how to look at a protein structure and make decisions on how to best mutate the protein to lock it into a particular conformation. This includes teaching the network to place proline caps, disulfide bonds, cavity-filling substitutions, locking key mutations, salt bridges, and even indels. This slide illustrates our cavity-filling mutation module. The process begins with scanning the protein to calculate solvent accessibility, identifying residues that are buried within the protein structure. These residues, depicted here with varying degrees of solvent accessibility, are critical targets for stabilization mutations. Once the targets are identified, our network is designed to recognize and down-select specific amino acid substitutions, in this case isoleucine, leucine, methionine, and phenylalanine, which are all known to influence protein core packing and stability. The net effect of these mutations is then analyzed through another program that we've developed to build variant structures and calculate changes in cavity volume, as you see in the transition from valine to leucine. Our goal is to ensure that these mutations lead to a reduction in cavity size, which will then correlate with increased structural integrity and therefore potentially greater stability of vaccines. By meticulously optimizing these structural parameters, we are able to design proteins that not only meet our stability criteria but are also more likely to retain their shape and function when introduced as immunogens.
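A minimal sketch of the scanning step is below, using Biopython's Shrake-Rupley solvent accessibility calculation to flag buried residues as candidate cavity-filling sites. The input file name, the 10 percent burial cutoff, and the reporting format are illustrative assumptions, not the lab's actual module:

    from Bio.PDB import PDBParser
    from Bio.PDB.SASA import ShrakeRupley

    # Theoretical max ASA per residue type (Tien et al. 2013), in Angstrom^2.
    MAX_ASA = {"ALA": 129, "ARG": 274, "ASN": 195, "ASP": 193, "CYS": 167,
               "GLN": 225, "GLU": 223, "GLY": 104, "HIS": 224, "ILE": 197,
               "LEU": 201, "LYS": 236, "MET": 224, "PHE": 240, "PRO": 159,
               "SER": 155, "THR": 172, "TRP": 285, "TYR": 263, "VAL": 174}

    structure = PDBParser(QUIET=True).get_structure("antigen", "protein.pdb")
    ShrakeRupley().compute(structure, level="R")  # attaches .sasa to residues

    buried = []
    for residue in structure.get_residues():
        name = residue.get_resname()
        if name in MAX_ASA:
            rel_sasa = residue.sasa / MAX_ASA[name]
            if rel_sasa < 0.10:  # buried: candidate for core-packing mutation
                buried.append((residue.get_parent().id, residue.id[1], name, rel_sasa))

    CANDIDATES = ("ILE", "LEU", "MET", "PHE")  # core-packing substitutions
    for chain, pos, name, rel in buried:
        print(f"{chain}{pos} {name} rel_SASA={rel:.2f} -> try {CANDIDATES}")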
Another critical component of our immunogen design process is immune repertoire profiling. It's very important to understand how the immune system responds to immunogens as well as to natural infections. On the left side of your screen, I'm showing you B cells. These are where your neutralizing antibodies come from, and we want to catalog the types of antibodies and the epitopes they recognize, both for natural viruses and for the immunogens we make mimicking those viruses. We're looking for protective epitopes, and we want to avoid and even mask nonprotective immunodominant epitopes in our vaccine designs. We also care a lot about how T cells are elicited by immunogens, so we want to study the natural response and the protection afforded by cellular immunity. This slide is a little bit deeper dive into T-cell immunology and the tools we're developing. This is a joint project with the J. Craig Venter Institute as well as the La Jolla Institute for Immunology, so thanks to Jean Tan, Alba Grifoni, and Alessandro Sette. Here we're focusing on the identification of conserved immunodominant T cell targets. The crucial task is selecting taxonomic groups and determining the conserved regions of these viruses; these become priority targets, as indicated by the sequence alignment on the left. In the central part of the slide, we're outlining the methodology. We perform a meta-analysis of known T-cell epitopes using the Immune Epitope Database and Analysis Resource, which has been curated by Dr. Sette for many, many years now. This feeds into a machine learning algorithm, where we look at conserved regions of the antigen and design new immunogens aiming to elicit cross-protective
immune responses against viruses that are closely related. Step two is the integration phase. This is where we bring together the results from epitope analysis and predictions, which allows us to select a set of candidate epitopes that we can use for experimental evaluation and then maybe use in our immunogens. This pipeline allows us to prioritize epitopes with high immunogenicity and high conservation, as these are likely to produce the strongest immune responses. However, we're also looking at those with moderate conservation that exhibit high immunogenicity. This ensures a robust and comprehensive immune defense.
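In sketch form, that prioritization might combine the two axes roughly as below; the peptides, scores, and thresholds are invented placeholders rather than output from the actual pipeline:

    from typing import NamedTuple

    class Epitope(NamedTuple):
        peptide: str
        conservation: float    # fraction of aligned genomes carrying the epitope
        immunogenicity: float  # e.g., normalized response frequency from IEDB data

    def priority(e: Epitope) -> float:
        # Simple combined score: conserved AND immunogenic ranks highest.
        return e.conservation * e.immunogenicity

    epitopes = [
        Epitope("LLFNKVTLA", 0.95, 0.80),
        Epitope("GILGFVFTL", 0.60, 0.90),  # moderate conservation, high response
        Epitope("KLGGALQAK", 0.98, 0.20),
    ]

    # Keep highly conserved epitopes, plus moderately conserved ones that are
    # strongly immunogenic, mirroring the rule described above.
    selected = [e for e in sorted(epitopes, key=priority, reverse=True)
                if e.conservation >= 0.9 or e.immunogenicity >= 0.85]
    for e in selected:
        print(e.peptide, f"score={priority(e):.2f}")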
So we're also teaching our neural networks how to target specific immune populations with mRNA. As I'm sure you're aware, mRNA is delivered in a lipid nanoparticle that gets into the cell; the mRNA is translated into a protein, which then either makes it to the surface, in the case of B cell presentation, or gets processed by cellular machinery, in the case of CD8 presentation, going through the proteasome and MHC class I molecules. We also have pathways to hit CD4s, which go through the lysosome. We believe this strategy will create a robust immune response by activating both arms of adaptive immunity. By targeting these very specific pathways, we can stimulate a more comprehensive and perhaps more durable immune response. Right. So, from silicon to carbon.
Computational predictions are great, but they really don't mean anything until they're tested in the real world. Putting them in carbon is what we say in our laboratory. For the B cell-targeted antigens, we employ mammalian display, a technique that we developed during the COVID-19 pandemic to understand how mutations in the spike protein were impacting therapeutic monoclonal antibodies. This platform enables the rapid characterization of surface-expressed viral glycoproteins. We use the platform to understand expression, stability, antibody binding, and even host receptor interactions. We also designed the system so that we could cut the glycoproteins off the surface and structurally characterize them with negative stain. This first generation could be run on hundreds of glycoprotein variants in a matter of days once we had sequence-verified DNA. The second generation of the platform saw improvements in scale, where we built stable cell lines. This allowed us to perform library-based approaches like deep mutational scanning in mammalian cell lines, where we can put all 20 amino acids at each position of a protein and determine the effects those mutations have on stability, antibody binding, or other properties we may be interested in. We also increased the number of variants that we could test by several orders of magnitude by moving to this library-based approach.
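Enumerating a deep mutational scanning library of that kind is simple to sketch in code; the input sequence here is just a placeholder fragment:

    AA = "ACDEFGHIKLMNPQRSTVWY"

    def dms_library(seq: str):
        """Yield (variant_name, variant_sequence) for all single substitutions."""
        for i, wt in enumerate(seq):
            for aa in AA:
                if aa != wt:
                    yield f"{wt}{i + 1}{aa}", seq[:i] + aa + seq[i + 1:]

    spike_fragment = "NITNLCPFGEVFNAT"  # placeholder sequence
    variants = list(dms_library(spike_fragment))
    print(len(variants), "variants")  # 15 positions x 19 substitutions = 285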
It was also at this point that we realized the platform could be used for engineering, and we showed that we could replicate pre-fusion stability across coronaviruses, which is shown at the bottom. This detailed workflow really epitomizes the transition from theoretical designs generated by AI algorithms to tangible experiments in mammalian synthetic biology that allow us to validate and refine those designs. We can also use this platform to train models for very specific tasks, expression, stability, and antigenicity, and this really creates a feedback loop to improve our prediction algorithms. Moving beyond coronaviruses, we've also begun using this platform for many other viral families. Here I'm showing some arenavirus glycoprotein data. Using our mammalian synthetic biology platform, we also use conformational antibodies as probes to report on the structure on the surface. It's unrealistic to test the conformation of all of your AI-designed proteins, but using conformational antibodies as probes allows us to test a million designs in parallel. In this example, we're showing Lassa
fever virus glycoprotein binding to two known neutralizing antibodies. These glycoprotein
variants were predicted and validated within a couple of months, and you're looking at the combination of AI-generated, experimentally validated mutations. The original best-in-class is shown in pink here. So in just a couple of months, we were able to increase binding to known neutralizing antibodies beyond what originally took about 10 years of engineering. The spheres in blue represent GPC variants that warranted further characterization, and I'll show that on the next slide. This table shows the fold change in binding affinity of the various AI-generated immunogens relative to the previously engineered version. Each row represents a unique antibody that binds a particular epitope class. The heat map indicates the fold change in binding: the darker the blue, the better. This allows for a rapid visual assessment, and as you can see, by looking at all, or as many as we have, of the known antibodies that bind Lassa GPC, we can start to see where synergy happens from these mutations, and also where we're breaking interactions; those are shown with less binding, in red. This forms the basis for selecting candidates with enhanced antigenicity profiles for further development, streamlining the process of immunogen optimization and allowing for more targeted and effective vaccine design. These data, again, can also be used to strengthen our models for future predictions.
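As a hedged illustration of how such a table can be computed, the sketch below divides each variant's binding signal by the engineered reference, antibody by antibody; all of the numbers are invented:

    import pandas as pd

    # Rows: antibodies grouped by epitope class. Columns: immunogen variants.
    binding = pd.DataFrame(
        {"reference": [1.0, 1.0, 1.0],
         "variant_A": [3.2, 0.4, 1.8],
         "variant_B": [5.1, 2.0, 0.6]},
        index=["mAb_epitope1", "mAb_epitope2", "mAb_epitope3"],
    )

    # Fold change relative to the previously engineered reference.
    fold_change = binding.div(binding["reference"], axis=0).drop(columns="reference")
    print(fold_change.round(1))  # >1 = improved binding; <1 = broken interaction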
On the next slide, I'd like to thank my CIPI-funded collaborators. This truly takes a village. Dr. Jason McLellan is our structural virologist. Dr. Scott Weaver and Alexander Freiberg from UTMB Galveston handle a lot of our BSL-4 work. T-cell immunologists Alessandro Sette and Alba Grifoni from the La Jolla Institute for Immunology help out quite a bit, as does Jean Tan, a virologist from the J. Craig Venter Institute, as well as Jim Davison and Arvind Ramanathan from UChicago and Argonne National Laboratory, who help with the bioinformatics and some of the large language models that you've seen. And finally, I'd like to thank the team leaders of my group, Antibody Discovery and Accelerated Protein Therapeutics, for all of their help. Everything that you have seen is due to their hard work and leadership in the lab. And I will stop there.
>> Robert Hopkins: Thank you very much, Dr. Gollihar. And our final presenter on this panel is Demetris Zambas from Pfizer. Your slides are up.
>> Demetris Zambas: Thank you very much. I'd like to start by extending a big thank you to the organizing team and NVAC and HHS for inviting me to present this use case. It's a very different type of use case from the last three that we heard. I think the whole industry appreciates that in those very scientific, detailed use cases, you obviously need that detail in defining your approach. In the operational space, we have fallen victim over a number of years to designing our use cases to be very broad, in a sense looking for some magic bullet that would somehow accelerate the clinical development process. This use case is very specific, and I think one of the reasons it was successful is that we approached it not by looking at the entire clinical development continuum, but by looking at very specific components within that continuum that could have a meaningful impact on the overall timelines. Go to the next slide. The heavy lift, if you will, of executing a clinical trial is the processing of the massive amounts of data that are generated and brought into the sponsor for assimilation, analysis, and reporting. In the case of the COVID vaccine study, simply looking at the inflection point where we executed the primary analysis, based on having 90 positive cases as defined by the protocol: at that point alone, we had already collected 105 million data points in a four-month window. And within all of the casebooks from all the subjects, and there were 46,000 subjects, there were one million free-text phrases inserted by study coordinators, nurses, and physicians throughout the casebooks generated in the trial. Even that free text is critical, because there could be hidden, unreported adverse events, concomitant medications, or other issues in it, and we have to have individuals manually reading through every one of those phrases to interpret whether there was a missing adverse event or other detail that should have been collected in the form of data
and not text. So six months prior to this, and we can go to the next slide, we had initiated a project using this use case to find a way to leverage a machine learning algorithm to predict these anomalies in our data. It's a part of the clinical development process that most people are not familiar with unless they're directly involved. In generating these millions of data points, a lot of erroneous data gets reported, and we need to go back to the clinic, to the clinician or the nurse, and query them to resolve a data anomaly: things that simply do not make sense. So, in order to train the model to detect these, we leveraged a massive amount of historical data that we have access to. In this context, it was about 400 million data points generated from clinical trials and about 100,000 queries and responses that were issued to clinics to resolve those data anomalies.
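A toy sketch of that kind of supervised setup is below, with invented free-text entries, an assumed was_queried label, and a generic text classifier; it is not meant to resemble the production system:

    import pandas as pd
    from sklearn.pipeline import make_pipeline
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    # Historical entries labeled by whether a human issued a data query.
    history = pd.DataFrame({
        "text": ["acetaminophen taken for headache",
                 "dose given at visit 3",
                 "subject reported mild fever",
                 "blood pressure 300/20"],
        "was_queried": [0, 0, 0, 1],
    })

    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(history["text"], history["was_queried"])

    # Score new casebook entries: higher probability = more likely to need a query.
    new_entries = ["blood pressure 280/15", "visit completed as scheduled"]
    print(model.predict_proba(new_entries)[:, 1])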
In addition to using that to train the algorithm, we also handled the things that were very clear and direct and didn't necessarily require an AI algorithm, but could be more prescribed. A very common example is where we're looking for concomitant medications reported in clinical trials that may suggest an AE was not reported. A simple example is acetaminophen: someone reports acetaminophen, there's no clear AE associated with it, and we would query why. In some cases, in a vaccine study especially, that could surface a new adverse event that was previously unreported, reactogenicity, injection site reaction, and so on. So, the technology that we designed actually looks up approved labels for the different drugs that may have been reported in the concomitant medication section of the study and then prompts users to assess whether there's a possibility that something was not reported that would justify that concomitant medication, and vice versa, an AE that should have had a concomitant medication reported with it.
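That prescribed check can be sketched as a simple rule, with a made-up label lookup table standing in for the real approved-label source:

    # Drug -> symptoms it is typically taken for, per its approved label.
    LABEL_INDICATIONS = {
        "acetaminophen": {"fever", "headache", "injection site pain"},
        "loratadine": {"allergic reaction", "rash"},
    }

    def conmed_queries(conmeds: list[str], reported_aes: set[str]) -> list[str]:
        """Flag concomitant medications with no matching reported AE."""
        queries = []
        for drug in conmeds:
            indications = LABEL_INDICATIONS.get(drug, set())
            if indications and not (indications & reported_aes):
                queries.append(
                    f"{drug} reported with no matching AE; was one of "
                    f"{sorted(indications)} experienced but not recorded?")
        return queries

    for q in conmed_queries(["acetaminophen"], reported_aes=set()):
        print(q)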
If we go to the next slide: the way we approached this has actually been the most significant part of the learnings, because we have since applied this methodology to predicting protocol deviations, to automating the coding of medical terms and WHODrug coding, and even now to looking at the potential of this kind of capability to identify conditions that can predict a safety signal. In this example, we ran what we call a hackathon involving internal and external teams, where we allowed teams to develop models, train them with this very large amount of historical data, and then gave them a new study that was being conducted to see if they could predict the same errors that humans had identified in the same studies. In full transparency, this was done in partnership with an organization called Sama. We were able to predict about 50 percent of the anomalies that the humans were able to find with all the tools at their disposal: very sophisticated algorithms, reporting tools, and even direct listing reviews. Once the winner had achieved that 50 percent accuracy in their predictions, we took that version of the algorithm and initiated a human-in-the-loop learning process; these original phases were without a human in the loop, unsupervised learning. From there, we took those algorithms and applied them to a number of ongoing trials, where humans would execute their work in the traditional way and then review the outputs from this technology and give it feedback. The feedback was very simple: a thumbs up where the prediction was accurate or, if you will, consistent with the human prediction; a thumbs down if it was incorrect; and a sideways thumb for scenarios where the anomaly detection was correct, but the predicted query that should be issued was not phrased as a clear human question that a doctor or a nurse could appropriately respond to. A big factor here, because of the context of a clinical trial, was the avoidance of false negatives. False positives may mean a little more work for someone reviewing the output, but a false negative means you missed a potentially anomalous data point, which is more troublesome down the line. So we intentionally tuned the algorithms to allow more false positives in order to avoid having any false negatives.
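One hedged way to express that bias in code is to pick the decision threshold from a validation set so that recall on known anomalies stays at 100 percent, accepting the extra false positives that come with it; the scores below are invented:

    import numpy as np

    def threshold_for_full_recall(scores: np.ndarray, labels: np.ndarray) -> float:
        """Largest threshold that still flags every human-confirmed anomaly."""
        return scores[labels == 1].min()

    scores = np.array([0.95, 0.80, 0.40, 0.30, 0.10])  # model anomaly scores
    labels = np.array([1,    1,    1,    0,    0])      # human-confirmed anomalies
    t = threshold_for_full_recall(scores, labels)
    flagged = scores >= t  # everything at or above the threshold gets reviewed
    print(f"threshold={t:.2f}, flagged={flagged.sum()} of {len(scores)}")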
If you could go to the next slide, please. The timeline that was set for this program was quite challenging, quite complex. Another aspect that allowed the acceleration, besides some of this technology, was the fundamental design of the protocol itself. It was a dynamic design within the same protocol: an initial phase to select the optimal sequence that would progress into phase two, a phase two to identify the optimal dose, and a phase three that would be powered adequately to demonstrate safety and efficacy in a large population, in this case 46,000 subjects. And you can see the timeline here from the phase one start until the submission to the agency in November. It was by far the fastest timeframe we had ever worked in, and it wasn't just simply the technology. It was the open line of communications between the company and the regulators, the very open line of communications between the different functions within the organization, and, of course, the technology. If you go to the next slide, please. At a very high-level summary: 46,000 subjects, 154 investigators, and 1,000 internal colleagues working on this across all the different divisions and departments, focused on bringing one vaccine to the market. The important thing to note here is how much time sits beyond the science. From when you start dosing a subject until the endpoint of the study is, one could argue, the execution of the science. But the time leading up to being ready to dose that first subject, and I'm not talking about the research aspects of designing the vaccine, but the operational aspects, as well as the time from the last subject's last visit, or your primary endpoint, until you're able to summarize the results: those are very large amounts of time during which whatever you're trying to treat is still occurring. Whether it's in the context of a pandemic or many other chronic cases, you're keeping that therapy from reaching the ultimate customer. So our battle cry in the development organization is: every minute counts.
And especially those overhead minutes, where you're really just executing process and red tape rather than the science of the protocol, are our primary targets to reduce. If you go to the next slide, please. As I said, in this scenario our goal was to minimize time and maximize quality; in that traditional speed-quality-cost triangle, we were willing to accept the cost, but not willing to accept any compromises in time or in quality. The fundamental exchange of information between the organization and the regulators, and our ability, through this technology and this process, to deliver weekly data monitoring committee reviews of the data, really had a huge impact. These are all the different teams and organizations that participated in or contributed to the project. One apology for one error: the safety group is listed twice, and one of those circles should read legal, because these kinds of efforts do require attention to privacy and legal concerns. When you're using data in the context of training models, you have to ensure that you have permission from the individuals the data belongs to, to use it for this purpose. If you go to the next slide, please. So, across the board, this reflects the
purpose. If you go to the next slide, please. So, this across the board, this reflects the
amount of time that was reduced in each of the core components in executing a clinical trial
from the randomization system's timeline was reduced by 70 percent. The protocol design and
finalization reduced by 75 percent. The setup of the
patient-reported outcome devices, these are
the devices where the subjects report back to us Things like reactogenicity or QOL measures,
75 percent. The timeline to submit the IND, 80 percent. The setup and initiation of all of
our clinics around the world that participated, 85 percent. And the database activation
setup, which my group is responsible for, is reduced by 90 percent. The end outcome
of this was from the point that a clinical center entered a data point into the
data collecti
on tool for the study, until the time that that piece of information
was validated, queried if need be, and frozen, went down from 25.4 days to 1.7 days. And
this was during the execution of the study. In the last days of the study, as we
approached the target number of positive cases, that number was below one. And because of
this, when we reached that 90th case and we proceeded to finalize the database and make it
available to our statistics group for analysis, That time window, the indus
try median is about
a month and a half. The best in class industry performance in this context is about four
weeks. In this trial, it was one day. And it was one day because of all of these, not only the
technology, but all the factors that I mentioned. But there was a huge amount of data review and
reconciliation that the technology was able to basically support the clinical data manager
in resolving and finalizing that otherwise would not be possible without a technology
like this. Go to
the final slide, please. Some of the components that were leveraged to make the technology efficient, but also to make the humans efficient, are things like standards: in the design of the study, in the design of the data collection tools, and in the design of the outputs that were used to generate the tables, listings, and figures for the CSRs and the submission. We also really leveraged our regular engagement opportunities to keep open lines of communication while designing the program. For this specific tool, I visited the agency on two different occasions to go over our plans, our approach, and our validation plans. And we've archived every single learning cycle of data to be able to demonstrate why the tool is giving the outputs that it's giving and not treat it as a black box. With that, I'll conclude my presentation.
that, with that, I'll conclude my presentation. >> Robert Hopkins: Thank you very much, Mr.
Zambas. It's a very fascinating panel. Clearly, many applications of artificial intelligence
acros
s all of the steps of the process and the regulatory piece is certainly a critical
one as well. We're a bit over on time, but I think it's worthwhile that we open up
for 1 to 2 questions or comments. Daniel. >> Male Speaker: Thanks, Bob, and thanks
to the panel. That really was a great set of presentations. My question is, I guess
following up on I think something that Dr. Mathew said about the AI analysis is only as good
as the data you put in. And I'm wondering about specifically with imm
uno, immune profiling
when you're trying to look at human immune responses and what's predictive of protection,
what vaccines predict that they can induce them. There's a lot of variables there that have to
probably be corrected for and considered. And so, my kind of simple question is, are there any
basic guidance you can give us in terms of what size of sample size that you need to
be able to have meaningful predictions for a population that you're trying to model for in
AI? And maybe Dr
. Schenkelberg and Dr. Gullohard, based upon what you presented, might be
the people I'd like to hear from first, but anyone that has any ideas would be great. >> Justin Mathew: Yeah, there's a number of
things, questions embedded in there. There's variation. What we saw, at least in terms of looking at the immunome, which we described as all the components and linkages of the immune system, is the value of longitudinal studies over time in specific populations, where you can do an intervention: you sample at a baseline, you vaccinate, or it could be an infection, or you could do a CHIM study, a controlled human infection model. And then you look at changes both biologically and clinically in terms of outcomes. You have variation over time in the samples, and you have variation between individuals. There are becoming more and more ways, I didn't show the papers, where you can correct for some of that, particularly through single-cell analysis combined with deep learning, where you can correct for these batch variations. The cost of our studies was up to $128,000 per participant, which is way too much; those costs are coming down. And we really struggled with what size you need and what the minimal number of samples is. In the early studies, I think you have to sample more to start getting an indication of where to look. Then you would start to build in efficiencies and eliminate assays that you don't need: you start somewhat hypothesis-neutral in terms of where you're sampling, identify where you see signatures, and then expand the studies. I think you really have to work on efficiency and cost controls. These are really initial studies that are giving you initial insights, but not yet more predictive signatures. I think over the next five to 10 years, we'll get better, things will get cheaper, and we'll start to know where to look. Does that start to answer your question?
>> Male Speaker: Yes. Yes, it does.
>> Jimmy Gollihar: I'll just add that
the serology portion is a little bit easier to do high-throughput. It's the T cell side of the house, and the durability, that we often don't find out is a problem until many years later, or when something is reintroduced and challenged. So I can't give you a number, but we should invest a lot more in understanding both the B cell and T cell sides, as well as serology, and what impacts long-lived, durable protection.
>> Male Speaker: Thank you.
>> Robert Hopkins: Well, again, I want to thank all the members of our panel. You've given us a lot of things to think about across the AI spectrum. We'll now turn to our next panel, on innovative approaches to improve adult immunization. The COVID-19 pandemic provided many lessons as well as a stark reminder of the importance of equitably vaccinating people across the lifespan. Vaccines play a key role in healthy aging and protect those who may have a higher chance of serious illness from vaccine-preventable disease. In this session, we'll review data on the impact COVID-19 has had on some routine vaccines for adults generally, as well as those specifically recommended for pregnant people. We'll also explore innovative policy and legislative options and post-pandemic innovations to address systemic issues and delivery in long-term care settings. Our first presenter is Nandini Selvam from IQVIA, followed by Marquisha Johns from the Center for American Progress, and then Elizabeth Sobczyk from AMDA, the Society for Post-Acute and Long-Term Care Medicine. Ms. Selvam, I see you on the line, and your slides are up.
>> Nandini Selvam: Thank you. Thank
you for the opportunity to present. So, as I was just introduced, I'm going to be talking about adult and maternal vaccination trends in the U.S. We took this on, and I'll explain a little bit about the data, to really understand the impact of COVID on routine adult and maternal immunizations, specifically influenza, pneumococcal, and shingles, and then Tdap in the maternal setting. We used patient-level data representing both private and public data sources and insurers across all 50 states of the U.S. What this really means is that we used administrative claims data that we have access to in all 50 states. It's a fairly large population: we started with approximately 260 million adults, defined as 18-plus, and when we were done with the inclusion and exclusion criteria, that left us with about 60 million for tracking. The vaccination rate was calculated as the number of adults who received each vaccine out of the eligible adults, and it was aligned with the U.S. population, so it is really generalizable and representative of the U.S. I'm going to skip the key findings and insights here because I'll go through them in the subsequent slides.
through it in the subsequent slides. So can we move on to the next slide, please? So t
his gives
you a good bird's eye view of the summary across each of these vaccines. So when you look at this
influenza, as you can see, the green line is for the 65 plus. Across the board, what you're seeing
is, at least in flu, In the last couple years, we saw a peak in the 20 to 21 period, and then it's
declined slightly in the older age groups, and then a little bit more in the younger age groups.
Later on, after I finish up this review of this particular set of data, I do have a more sup
ply
chain level data view that I can also present. When we look at shingles, we're looking at this
is not an annual vaccine, right? So this is only for the 50 plus, And so when you look at this
rate, it may look smaller than you've seen in other settings, but that's because we're only
looking at the eligible 50 plus year olds. So if they're already vaccinated or if it's ongoing,
then we don't see them in the denominator of this population. Pneumococcal was interesting. So
it did improve in
the last couple seasons, last couple of years. And I think that's
primarily because of shared decision-making, but also because of supply chain
issues being resolved. And finally, we'll look at Tdap in pregnancy. And what we're
seeing is it's pretty much hovering in the same spot over the last couple of years. And it's
just about 50 percent, depending on how it's broken out. And we look at this amongst 18 to
49-year-old women and based on year of delivery. So each of my subsequent slides a
bout
Each of my subsequent slides about the individual vaccines is going to be oriented this way, so you'll see it by age group; by race and ethnicity, in a fairly crude fashion, so you're looking at Asian and others, white, Hispanic, and black; by payer channel; and by urban/rural status. I do have a big asterisk for that particular field, because only about 5 percent of our cohort had that status indicated, so it's a very small group. In this setting it seems like rural is better than urban, and I've been asked about it because it's different from other national averages; that is my caveat for this particular stratum, as well as for household income. When you look at flu, what you see is that there were declines, pretty obviously, across the different age groups and across the different race and ethnicity groups. Uptake is much better among Asians versus the other race and ethnicity groups, followed by white, then Hispanic, then black. In the payer channel, it's better in the public versus the private. Urban/rural, as I just said, is sort of a set-aside. And the highest-income groups are actually more adherent in taking influenza vaccines versus the others. This is actually quite a neat view: it gives you a geographical perspective of how this looks. The map on the left is for the 18-plus age group and the right is 65-plus, and what you see is the vaccination rates for October 2022 through September 2023 compared to the same period of the prior year. Anything red, and especially the darkest red, shows you the worst declines, versus green where you've seen improvement in flu vaccine uptake. In the 18-plus group there were decreases in about 39 states, versus about 20 states in the 65-plus age group, with a much larger affected population in the younger age group.
Next. We're looking at shingles here, and it does seem like it's improving across the different strata as well as over the years. There are a number of reasons for it, but what this really tells you is that there are still many opportunities to improve vaccination across the board. While this is an improvement, I think there's still a large gap in how much we should be covering with vaccination. Same thing here: you're looking at the 50-plus on the left and the 65-plus on the right. In the 50-plus, there were improvements in 22 states, but decreases in about 48 states for the 65-plus, looking at the 2022-to-2023 versus the 2019 data. I know I'm flying through this, but I know we're behind, so I'm trying to keep on time, and I'm happy to take questions for more details. This is looking at pneumococcal, and the recommendations, just as a reminder, are 50-plus for Pneumovax and 65-plus for Prevnar. Here we have seen sharp increases, and again this is, we believe, a result of shared decision-making as well as supply chain issues being resolved, so we're hoping this will be a continued trend up for pneumococcal. But again, the black and the white populations seem to be the least vaccinated compared to the other populations we've looked at. And then we're looking at this for the 65-plus age group versus the 50-plus in the prior slide; again, similar sorts of trends, for the same sorts of reasons, we believe, in terms of improvement.
But again, my overarching thought on this is that there's still a lot that can be done. With improved programs, a programmatic approach, and grassroots effort, there's definitely lots of room for improvement in vaccine uptake, in routine immunization uptake. You can see here there's a lot of green. There are just a couple of states with some red, but the vast majority of the country looks like it's improving. There are a couple of really red states, and then some that are pink in the 65-plus group, so there are some declines, but overall it seems to be a better story for pneumococcal than for some others. This is looking at Tdap in pregnancy. When we look at it by age group, it does seem like the older population, the 35-to-49-year-old women, are more likely to get Tdap versus the younger ones; the 18-to-24 group has the lowest vaccination rate. But even this, right? We look at this and say, wow, this is great, women are getting it, but not really. If it hovers around 48 to 50 percent when you look at it as a cumulative number, that's still more than half the population of women not getting vaccinated. From a race and ethnicity perspective, Asian seems to be higher than white, then Hispanic, followed by the black population. And here we see that privately insured patients are higher than the publicly insured. Urban/rural, same caveat. And household income, again, runs highest to lowest in what we're seeing here. You'll see some grayed-out areas in this particular map; that's because we didn't think we had sufficient volume in those states to make a real decision on the trends, so we excluded those states from our analysis. In the rest of the states, what you're seeing is that overall, the vaccination rate has improved in 28 of the states when compared to 2019, for the 18-to-49-year-old group. I believe that's my
last slide. Is there another slide? Okay. So before I actually stop, one of the other things I wanted to do, for which there is no slide, is to present a big-picture, near-real-time view of what's happening with the current season. We have what's called the National Prescription Audit, which is supply-chain, patient-level data that looks at the sell-out data from pharmacies. We looked at the 2024 season versus the exact same window in 2023: that's August 2022 to February 3, 2023, and then August 2023 through February 3, 2024. We looked at six different vaccines. I looked at COVID, which is something we haven't talked about here yet, but I thought you might be interested in seeing it. In the 2024 season, we had 31.1 million individuals who got COVID vaccines, versus 42.3 million in the prior year, so there was a decline of 27 percent in COVID vaccinations. Influenza saw a decline of 11 percent from the prior year, over the same exact window in terms of what's considered a season. Tdap actually saw an improvement of 13 percent. Shingles, a 1 percent increase. Pneumococcal saw a 37 percent increase. And for RSV, 9.6 million individuals have taken the RSV vaccine since it launched. So this is just some extra information that's out there that I wanted to present to this committee, and if there are any questions, I'm happy to take them, but thank you for the opportunity.
>> Robert Hopkins: Thank you very much, Ms. Selvam. We'll turn to our next presenter, Marquisha Johns from the Center for American Progress. Ms. Johns, your slides are up. I see your face on the line.
>> Marquisha Johns: Great,
thank you. Before I get started, just to give a brief introduction to the Center for American Progress, or CAP as we like to say, in case folks don't know: CAP is a DC-based policy think-and-action tank. We create and advocate for progressive policy ideas through our research and by working with legislators and those in the administration to push those ideas forward. That's a very brief, high-level summary of the organization, just to ground everyone. So, today I'd like to talk about some work that we have done on adult vaccine access, specifically around advocacy we've done for a coverage program for uninsured adults. If we could go to the next slide. Great. So just to set the scene for vaccine coverage in the U.S., thinking about the different groups that have coverage for vaccines: with private insurance, the ACA has set it so most folks under private insurance have coverage for recommended routine vaccines without any cost sharing. With the passage of the IRA, we have now closed a lot of the critical cost-sharing gaps within Medicaid and within Medicare: it brought a lot of equity to vaccine coverage in these programs, eliminated the cost sharing for adult vaccines under Medicare Part D, and also made it so folks in the non-expansion population in Medicaid were able to get vaccines without cost sharing. So, like I said, the IRA brought a lot of equity to vaccine coverage in those programs, so those groups have coverage.
And then obviously, Vaccines for Children, as most people know, has been very, very significant in changing the tide for childhood vaccine rates. It covers almost half of U.S. children for their routine vaccines and is a major reason why we have the high vaccine rates we have today for childhood vaccines, although there is still quite a bit of work to be done on the childhood vaccine front. So this leaves about 24 million non-elderly uninsured adults without comprehensive, no-cost vaccine access, a significant population who don't have access to vaccines without any cost sharing. The Biden administration has included in both their 2023 and their 2024 budget proposals a program called Vaccines for Adults, modeled after Vaccines for Children, that would cover routine vaccines for uninsured adults. If
we could go to the next slide? Great. And I'll breeze past this slide pretty quickly
since my co-panelist was able to give you guys some really great updates on vaccine rates
among adults, but one thing that I want to note, since I am focusing specifically on uninsured adults in this presentation: they pointed out quite a number of disparities across different subgroups, whether racial, geographic, things like that. We would expect a lot of those gaps in rates to be even bigger among uninsured folks, because cost would be an additional barrier. So we can go to the next slide. And then this is
-- you guys already know this as physicians and vaccine advocates, so these facts are not new to you, but I felt it important to emphasize how significant the impact of vaccine-preventable diseases is on society, causing much preventable illness, many preventable deaths, and billions of dollars in costs to society each year -- so a pretty big, significant problem that we need to be moving the needle on. Next slide. Okay. So this is where I want to get into the program.
And so currently, what we have established is section 317, which is a fixed discretionary funding program. This means that it can get cut each year; the funding levels are flexible and can fluctuate, and actually, estimates from the CDC show that it's already severely underfunded relative to the many uses it already has. But this program is already in place. It is meant to cover public health infrastructure needs, and as you see from this list, which is only a short list of the program's different funding uses, that means covering vaccine education and communication; data systems; administration; distribution; outbreak monitoring and response; research on vaccine safety and effectiveness; and then also some very limited uninsured adult vaccine purchasing. So it is covering a very wide array of different uses, and because of this, very little of that section 317 funding is going towards uninsured adult vaccine purchase. In fact, estimates show less than five percent is actually going towards uninsured vaccine purchase, which means only a limited amount of vaccine can actually be purchased and made accessible for the uninsured population. Alternatively, a Vaccines for Adults program, which would be dedicated solely to the purchasing of uninsured adult vaccines, would be much more expansive, allowing us to purchase a wider variety of vaccines and a larger supply, making that wide variety and larger supply available in every jurisdiction, but then also making it so that we can do advance contracting with manufacturers. Vaccines for Adults would also help facilitate expanding provider networks, making sure that we're able to capture the care settings that are more regularly accessed by adults, so thinking about urgent care facilities, pharmacies, things like that. But most importantly, this would be a mandatory funding model, so funding would be consistent and could be relied on to serve the needs of uninsured adults, and so we could make sure that
everyone has access. Next slide. And this is where I want to spend quite a bit of time setting the scene for the political landscape for Vaccines for Adults. What is the political reality for this? That's probably the question many folks have. Like I mentioned earlier, the Biden administration has included this in their budget requests for the last couple of years, which signifies its importance to the administration, but of course, congressional action is needed. There are several congressional champions for this work, and there is in fact draft legislation in development; CAP has provided some comment on that. It just has not been introduced or released publicly, so it is still in development. But one of the big things that we're consistently hearing is hindering progress in Congress on this is that folks don't understand why we need Vaccines for Adults. The question is: we have 317 -- isn't 317 already doing this? One thing that we're trying to do is make the message clear that 317 is not in fact already doing this, for some of those reasons that I already outlined for you in the last slide: the fact that it's discretionary funding, so it's limited funding as well; those funding levels don't meet the needs of the community, especially given that the funding has so many different uses involved in it, vaccine education, research, all of those things beyond just the purchasing of the vaccine; and then also the fact that the levers of authority within section 317 don't allow it to capture those non-traditional providers such as, like I mentioned, urgent care, pharmacies, things like that. So there are some key distinctions between a Vaccines for Adults program and 317 that make it so that we need VFA. That message needs to be dispersed throughout Congress, but it is such a nuanced and difficult message to describe, so we're trying to help make it as clear as possible. We even put together a congressional memo explaining the differences side by side and distributed that out to folks, but more work is needed there. One of the other things that is really holding up progress on this is just a general lack of appetite for vaccine work. I mean, we've all seen the rise in anti-vaccine rhetoric, both in the public but also in the legislature. So there's not really a lot of appetite for movement on this type of work. But then beyond that, as I mentioned, this would be a mandatory spending program, which is central to this being successful; we need consistent funding for this to be successful, but there's also not much appetite for more mandatory spending. We're already seeing the difficulty of getting a full-year spending package passed in Congress; we've seen that fight play out for a while. So for another mandatory spending program, there's not a ton of appetite there as well. And this is the kind of program that would need to be attached to another moving policy vehicle. There were hopes that it would be attached to a vehicle such as the Pandemic and All-Hazards Preparedness Act. That has not panned out, but it would need to be attached to another major moving policy vehicle, and at the current moment there's not one that is ideal.
nd so that's another issue, but then beyond
that -- and we all know this -- there's just generally -- in public and then
also in policymaking, there is a lack of appreciation for preventative services,
especially for legislators who are making the hard and difficult decisions on funding. They're
deciding on funding decisions between programs, pulling money from one budget to the next. It's
hard for them to really grasp the long-term effects of preventative service, but also the
wide-reachi
ng effects of preventative services, that spending money on vaccines not only saves us
money in the long run, but it saves us money in other parts of the budget in ways that's just not
always so easy to quantify. And so that is another challenge towards communicating the message
on why we need this and why it's critical. And then the last thing I'll say on here is
also there's a bit of narrative work -- quite a bit of narrative work -- that we need to do on
rebuilding trust and authority fo
r public health, and also for the CDC as an institution.
We've all seen the congressional hearings and the backlash following the pandemic,
and because VFA would obviously be a CDC program, we need to -- in order to gain
congressional support for the program, we also need to build -- rebuild CDC
authority and support. And so those are kind of -- that's kind of a lay of the
land of the political landscape for VFA. We continue to push forward, but need additional
support and need to continue
to build champions. There are a couple of different ways I think
providers can be especially impactful in building the narrative and explaining why this program
is needed. Providers are trusted and expert messengers, so it's important that you're
integrated into policy advocacy efforts to expand vaccine access. Patients' stories are
extremely powerful. Explaining the impact of preventative services and explaining why the
current system isn't working is important. But something else that I want to be clear about
is that Vaccines for Adults is only one part of a larger strategy to boost adult vaccine
rates, because cost is only one barrier to vaccine uptake. While having insurance coverage
removes the financial barriers, there are other
barriers that still exist -- most importantly, vaccine education, which is what I mentioned
earlier. So Section 317 tries to address that, and so I have on the side here a couple of
different policy options that need to be done in concert with establishing Vaccines for Adults.
Expanding 317 funding would allow for more improvement to vaccine education. Also, we need to
address vaccine mis- and disinformation, working in partnership with social media companies.
And then I'd be remiss not to mention the CDC Bridge Access Program, which is covering COVID
vaccines for the uninsured population through the end of the year. The program is serving as
an important opportunity to test some of the key mechanisms of a future VFA program, such as
working with pharmacies. That would be really impactful for making sure we get access to
vaccines where people actually need them and would be able to get them. And so that concludes
my presentation. I will hold for
questions at the end, but that's it. Thank you. >> Robert Hopkins: Thank you very much,
Ms. Johns. Our final presenter of this panel is Elizabeth Sobczyk
from AMDA, the
Society for Post-Acute and Long-Term Care Medicine. Elizabeth, your slides
are up. I see you on the line. >> Elizabeth Sobczyk: Thank you. It's great to be
with you all this afternoon. Next slide, please. What I'd like to do today is provide some context
for immunization in a long-term care setting. Over the last few years, we've seen how important it
is to protect both the residents and the staff in that setting. I'd like to share a project overview
of our Moving Needles Project, the findings, and progress that we've made, specifically
our quality improvement pilots, our frontline staff survey, and some EHR-IIS interoperability
efforts. And finally, I'd like to identify some key opportunities for improving rates among
both staff and residents. Next slide, please. So it's really critical to understand
the environment that we work in in long-term care. It is one of the
most heavily regulated industries, and there are different regulations for skilled
nursing
facilities, assisted living facilities, and home-based care. So you need to know
which type of facility you're working in to understand whether you're under federal- or
state-level legislation or regulation and which part of CMS you're working under for those regulations.
As you all know, staffing is very short, with generally low-wage workers who are working
with high-need residents. The shortage across the health care system certainly exists and is
exacerbated in long-term care specifically. Those who do stay are burned out more quickly,
and so we have very high turnover rates. We also have more complex resident needs.
The individuals that are in assisted living now are ones who used to be in skilled nursing
facilities. The ones who are in skilled nursing facilities currently have a higher level
of care need than we have seen in the past. The other piece that's helpful to understand is
that we have a number of real estate investment trusts that are purchasing buildings, and so
it's no longer the clinical component that's driving the decision-making. Profit margins are
very slim, and often, as you're paying others to run the building and leasing the building from
them, those financial dynamics have changed. And then finally, immunizations are
dependent on leaders championing and setting the vision, as well as directors of nursing
and/or infection preventionists executing amidst many other immediate job needs. The
challenges that they're dealing with are extensive. The clinical topics
that they're dealing with are extensive, and so really having a champion and a vision
can make a big difference. Next slide, please. To level set a little bit with AMDA, we are the
only medical specialty society representing the community of medical directors, physicians,
nurse practitioners, physician assistants, and other practitioners working in various
post-acute and long-term care settings. We have about 3,500 members currently. We have a board
that offers a certificate
of medical direction, and we received the Moving Needles
cooperative agreement in fall of 2021, which is what I'd like to talk about
next. Next slide, please. Next slide. The goal of this five-year cooperative
agreement from the CDC is to make routine adult immunizations a standard of care for
post-acute and long-term care residents as well as an expectation for employees. We have
several components that we're focused on through the cooperative agreement, both working
directly with the facilities through quality improvement programs and addressing things at a
more systematic
level for all facilities -- for example, integrating routine immunization
reporting to state IISs. We'll also be working on a cost-benefit analysis very
shortly. Next slide, please. Next slide. So we are in the second round of two rounds
of quality improvement pilots. We have the final data from our first round. These are
the average vaccination rates in all nine of the facilities that participated in the first
round. We have an upward trend for all vaccination rates during the period of the project,
even for Tdap and shingles. Not all of our facilities focused on those, and so what you see
is really representative of only the ones that did, and we still made a fairly significant
improvement. You see the zeroing out of rates in September for both COVID and flu. We had a
new bivalent booster when we did this round with the pilots, and then a monovalent booster
in the second round. And we reset the rates in September
for influenza as well to reflect the new season. Next slide. In many facilities, our COVID and
bivalent booster rates reached the same level as or higher than the facility's primary series
rates at the start of the pilot. In almost every facility, our influenza vaccination rates increased.
And in many facilities, our pneumococcal vaccination rates were significantly higher
than at the start of the pilot as well. So what worked? How did we get there?
The facilities implemented structured processes and procedures because of the
pilot. They routinized their offerings, and they expanded what vaccines they
provided. I can't overstate how foundational that is to improving rates. They literally
offered Tdap and shingles, and residents said, “Yes, we’d like them,” and the rates went up. But
that's not the case in all facilities right now, and just expanding the offerings
could do a lot to increase rates. They also checked the status on admission
or
used reminder recall systems. Having a renewable consent document for multiple vaccines
on admission saves a lot of time and energy, and we've seen a lot of success
with that strategy as well. A lot of our sites organized vaccine availability
outside of clinic times for their residents. Increasing that accessibility led to a direct
increase in rates. They assigned someone or a team to be responsible for the process --
nothing new there. Having a champion makes a difference. And they used the state IIS to
get data on resident history. Next slide. We did have some pain points, too. The facility
billing during the Part A stay for Medicare was challenging once the public health emergency
was over. Pharmacies were able to directly bill Medicare on behalf of facilities during the
public health emergency, and now the facilities must bill directly. It is cumbersome for them to
do so, and it's been challenging for them to offer vaccine to Part A stay residents specifically.
There was confusion around billing procedures for Part D vaccines. Finding histories without
an IIS was difficult, and getting consent from family members for residents who were unable to
consent for themselves was challenging. Next slide. We developed a billing guide that we just
recently released to help clarify for facilities and for pharmacies who needs to be billing
and how they need to be billing by type of Medicare vaccine, Part B or D, as well as whether
the resident is in their Part A stay or a long-term stay. There are a variety of complexities.
It's incredibly hard to navigate, and so this is just one step to try to make it a little more
clear about who can bill for what. Next slide.
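To make the shape of that decision concrete: the billing guide's two key dimensions, as described above, are which Medicare benefit covers the vaccine (Part B versus Part D) and whether the resident is in a Part A stay or a long-term stay. The Python sketch below simply encodes that as a decision table; the routing descriptions are hypothetical placeholders, not actual CMS rules, so the real answers belong to the billing guide itself.

```python
# Illustrative decision table only. The two dimensions (Part B vs. Part D
# vaccine; Part A stay vs. long-term stay) come from the billing guide's
# framing. The routing strings are hypothetical placeholders, NOT actual
# CMS billing rules -- consult the AMDA billing guide for those.

def route_vaccine_billing(medicare_part: str, stay_type: str) -> str:
    """Return a placeholder description of who bills for a vaccine dose."""
    routes = {
        ("B", "part_a_stay"): "facility bills Medicare directly (placeholder)",
        ("B", "long_term_stay"): "facility or rendering provider bills Part B (placeholder)",
        ("D", "part_a_stay"): "facility responsibility since the PHE ended (placeholder)",
        ("D", "long_term_stay"): "pharmacy bills the resident's Part D plan (placeholder)",
    }
    key = (medicare_part.upper(), stay_type.lower())
    if key not in routes:
        raise ValueError(f"unknown combination: {key}")
    return routes[key]

print(route_vaccine_billing("D", "long_term_stay"))
```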
Staff were more challenging than the residents, for sure. But we did see an average upward
trend for flu and a slight upward trend for COVID and hepatitis B as well during the first
round of our pilot. Next slide. All of our facilities struggled with the bivalent booster rates.
Vaccine fatigue spilled over to influenza in some facilities. We saw that strategies really
needed to be tailored to individual circumstances, and so the successes occurred when facilities made
vaccines more accessible, when facilities addressed staff in cohorts -- they would take the
kitchen staff or the housekeeping staff or the CNAs and offer education in cohorts -- and
when they persistently offered the vaccine. It couldn't just come in once for a clinic and
be done. What also worked was identifying the
reason for the lack of vaccination. Sometimes -- frequently -- it was a lack of
convenient time or location. We saw that offering it three times from a trusted peer or staff person
drove rates. And there was also still the more traditional hesitancy, so separating out those
issues really helped identify what strategies would work to improve rates. We also had a number
of facilities step back if the continued offering of vaccine pushed staff further away from the
mark of getting them vaccinated. We asked them to focus on building trust not specific to vaccines,
but just between leadership and frontline staff. And again, making vaccines accessible and
making it easy for staff to get their records and bring them in if they're vaccinated outside
of the facility helped. We also found that incentives worked
only around the concepts of community and camaraderie building. If you were working towards
incentives that built your culture of vaccination, they were much more successful than a single
gift card. Even large incentives did not work if they weren't tied to building
the culture of vaccination. Next slide. Data collection for staff was particularly
challenging, especially around hepatitis B vaccine. There's just not a lot of tracking of
data around this right now in a systematic way, and so aggregating that information for the
project or for just knowing what your staff rates are was really challenging. There was
not an allowable use case. I think this is changing a little bit now, but many of
our facilities could not look up staff vaccination history in the IIS and that
was a challenge. All of our facilities struggled with the COVID bivalent booster
rates. That fatigue spilled over to flu. The other piece that's really important to
understand is that hesitancy is reflective of the communities from which staff come. Staff
are not hesitant because they're employees of
long-term care. They're hesitant because
they're coming from communities that are traditionally hesitant, and so only addressing
the hesitancy in the workplace setting is not sufficient. The other huge challenge that we
have right now is that with commercialization, facilities are unable to offer the vaccine, COVID
specifically, on site. Long-term care pharmacies are considered out of network with commercial
insurance, and that's who delivers vaccines to the facilities. And so without that
access
point, we've seen rates plummet. Next slide. We started our round two pilot in July of last
year. We have four chains participating now, with three facilities in each chain. We have
geographic diversity from the East, Midwest, South, and West. We have both skilled nursing
and assisted living as well as for-profit and nonprofit institutions that are a part of the
pilot. We have a more directed process around the standards for adult immunization
this time around and a strong focus
on standardization and operating procedures. Our goal
ultimately is to understand what works and why, and to create a change package, likely based
on stages of readiness for change. Next slide. I'd also like to talk about our frontline
staff survey that we did last summer and an in-service training that we've developed
to help neutralize the topic of vaccines for frontline staff in long-term care. Next
slide. We did a survey to understand what types of information frontline staff would
like to receive regarding immunizations, trusted sources for vaccine information,
as well as preferred modalities, sources, and formats for professional development. We
used the survey findings to develop a training module and a distribution plan to encourage
vaccine uptake among staff. Next slide. The key takeaways of the survey are that
respondents are motivated to protect themselves and others from illness. The frontline
staff see the value of protecting themselves and others from getting
sick. Half of those
frontline staff in the survey accepted vaccination as a responsibility or a requirement
for long-term care staff. But this is really the key piece here -- respondents' confidence in
protection through vaccination specifically is low. And so if they don't believe in
the protective aspect of vaccination, they look to other methods to protect
themselves and others from getting sick. Many of the respondents view vaccination as
a personal decision, and they want balanced information to make their own health decisions.
They don't want what they called a sell job on just the benefits. And they want that information
-- no surprise -- from their healthcare providers first, similar to many other surveys that have
been done. Government agencies were seen as trusted sources as well as coworkers with medical
training. So for training, respondents prefer a brief paid in-service by a direct
supervisor or administrator, and so we developed an in-service slide
deck and supervisor training
that incorporate those findings. Next slide. Both of those are posted on our website,
movingneedles.org, and are available. We also have been working towards greater electronic
health record and IIS interoperability. Next slide. The goal of this work really is to have
the facilities be able to view resident history, making it much more efficient for them to
recommend what a resident needs and to begin the process of vaccinating them with what is needed.
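To give a concrete sense of what an EHR-to-IIS history lookup can look like: where an IIS exposes an HL7 FHIR interface, immunization history is a simple search on the Immunization resource. The sketch below is purely illustrative -- the endpoint URL and patient identifier are invented, and many IIS connections today still use HL7 v2 QBP/RSP query messaging rather than FHIR.

```python
# Hypothetical sketch of an immunization-history query against an IIS that
# exposes a FHIR R4 API. The base URL and patient ID are invented; many IISs
# actually use HL7 v2 QBP^Q11 query messages rather than FHIR.
import requests

IIS_FHIR_BASE = "https://iis.example.org/fhir"  # hypothetical endpoint


def fetch_immunization_history(patient_id: str) -> list:
    """Return the Immunization resources on file for one patient."""
    resp = requests.get(
        f"{IIS_FHIR_BASE}/Immunization",
        params={"patient": patient_id, "_sort": "-date"},
        headers={"Accept": "application/fhir+json"},
        timeout=30,
    )
    resp.raise_for_status()
    bundle = resp.json()  # a FHIR searchset Bundle
    return [entry["resource"] for entry in bundle.get("entry", [])]


# Example: print vaccine codes and dates for a (hypothetical) resident.
for imm in fetch_immunization_history("example-resident-123"):
    coding = imm.get("vaccineCode", {}).get("coding") or [{}]
    print(coding[0].get("code", "?"), imm.get("occurrenceDateTime", "unknown"))
```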
We have two documents. One is a technical mapping document with five keys to connectivity
and a workbook for self-assessment for the EHRs; it's based on responses and interviews with
multiple long-term care EHRs. These were groups that were left out of the meaningful use
incentive program to connect with public health when it first came out, and
so they lag behind the ambulatory care EHRs. We also have a second paper with implementation
considerations that identified that sustainable funding is critical, that we need to ensure
awareness and understanding of connectivity benefits to strengthen and monitor collective
action. We need to positively incentivize connectivity, and we need to reduce the
operational and technical burdens of connectivity. Both of these papers were written
in concert with AIRA, who was hugely helpful and ensured that we had IIS participation and input
into our consensus recommendations. Next slide. And so now I'd like to talk briefly
about where I think there may be some key opportunities for innovation in
the long-term care space. Next slide. Thinking expansively about solutions
to increase on-site accessibility, especially addressing billing challenges for
residents and staff, is an absolutely key area for us to think about how to increase rates.
Having on-site accessibility is what I've heard described as a six-inch chasm. It seems like we
should be able to make this happen very easily, and yet the billing challenges for both
residents and staff are preventing that. Another key opportunity is providing structural
support and sustained technical assistance for implementation of standard operating procedures.
Being able to use renewable consent documents, working towards standard operating procedures
is an easy way to increase access. And once the cooperative agreement is done, there
won't be a group that's providing that sustained technical assistance. Embedding
leadership training for medical directors,
DONs, nurse practitioners, and other
clinical leaders in facilities, including how to build trust, is a key component
of success for our quality improvement pilots. Where you have engaged leadership, you have
success in a quality improvement pilot. Where you do not have sustained leadership,
it's really challenging to have a vision, have the facilities buy into that vision, and to
spend the time that they need and take that time away from other activities that they're currently
doing or add it in. There's also an opportunity to focus on interactive education opportunities
that address the true concerns of staff, namely perceived low vaccine efficacy, from sources
they trust. If we can target the messaging to their direct concerns, I think we've got a lot
of data that we can use to support these efforts. The fifth opportunity is considering incentives
to further EHR-IIS interoperability, supporting increased awareness and understanding of the
benefits of connectivity,
and working towards reduction of operational and technical burden
here. There are a lot of technical and operational burdens to address, and we need sustained funding
to be able to do so. The last place is to consider additional connections between the long-term
care and immunization communities -- for example, more representation at NVAC or ACIP meetings
-- and having more systems that are built on an adult versus a pediatric infrastructure. We
don't want to pit the two against each other, but building more long-term-care-specific
expertise will benefit both the residents and the staff and the delivery systems
in these facilities. Next slide. Thanks so much for the opportunity to present to you
today, and I look forward to your questions. >> Robert Hopkins: Thank you very much,
Ms. Sobczyk, and I want to thank all the members of this panel. Are there any
questions or comments from members of the committee? Steve Rinderknecht. Go
ahead, Steve. Steve, can you go ahead? >> Stephen Rinderknecht: I'm sorry. Thanks much
for the discussions. I enjoyed listening to that. Hey, yesterday, we had a discussion and
kind of a celebration of 30 years of the VFC program and the success that it has shown,
so I'm really hopeful that the proposed VFA program will be similar and we can talk about
that in the future. My question about the VFA program -- right now, with VFC, we receive
vaccine for uninsured and Medicaid patients. Would the proposed VFA be similar in providing
vaccines to the office for Medicare and uninsured, or is it just
uninsured? And if Medicare is included, would it be both Part B and Part D vaccines? The
reason I ask, I think not being able or making it difficult to give Part D vaccines to adults
is a real roadblock, and I'm sure the vaccination rate has been
affected by that when it comes to giving that in the office setting and not sending to a
pharmacy. So just a question maybe for Ms. Johns. >> Marquisha Johns: Yeah, thanks. Happy to
answer that. So the current proposal is specifically for the uninsured population. I'll say,
also, the way that the proposal is framed at the current moment is that it's a capped dollar
amount, and so while it is mandatory funding, it's still a limited amount of mandatory
funding. I think what I have seen the administration propose was $12 billion over 10 years,
I believe. Don't quote me on that. I'd have to go back to the last budget proposal, but if
I'm remembering off the top of my head, I think that's what it was. But it would be for
the uninsured population only. And in terms of the Part D coverage, that should be addressed by
the Inflation Reduction Act (IRA) now. The mechanics of how that's all going to get worked out, I think, is maybe still
happening, but that should all be addressed by the IRA at this point. So there should be
full coverage for that without cost sharing. >> Stephen Rinderknecht: Okay. Thank you much. >> Robert Hopkins: Courtney
Londo, please go ahead. >> Courtney Londo: Hi. Thanks to all
of the panelists. This was a really interesting discussion. I just wanted
to make a comment on behalf of AIRA, the American Immunization Registry Association.
We're really thrilled that this important work is being funded. Interoperability between
long-term care EHRs and IIS has been an ongoing issue. It really needed to be funded a
decade or more ago, but now is better than never. And just wanted to point out that the work
that Elizabeth presented is one step
toward getting long-term care connected to IIS.
It doesn't mean that all long-term care facilities are connected today or will be
tomorrow. There's a lot of work to be done, but the recipe for successfully
connecting has been developed, and now the funding is just needed to continue
to make those connections. So thank you to Elizabeth for contributing to this important
work, and thank you to the rest of the panel. >> Robert Hopkins: I want to, again, thank all the
members of this panel. You know, we all recognize we've got a long way to go to catch up on many
of our vaccination rates following the pandemic, and there are, to put a positive spin on
it, plenty of opportunities to use vaccine protection for our patient populations. We are
now going to take a break. I now have 2:35 p.m. Eastern time. We will be on break until 2:45 p.m.
Eastern time. I apologize for the short break, but I want to get us back on time.
Thank you for joining us for our second day of our February 2024 NVAC meeting.
We'll see you back at 2:45 p.m. Eastern. >> Male Speaker: Produced by the U.S.
Department of Health and Human Services.