Assembly Standing Committee on Health
- Mia Bonta
Legislator
Good morning. We are in a Joint Informational Hearing Assembly Health Committee and Privacy Committee with Chair Bauer-Kahan and Assemblymember Bonta, and we are going to talk about generative artificial intelligence and healthcare opportunities, challenges, and policy implications. I want to thank everyone for coming together to be able to talk about this incredibly important issue.
- Mia Bonta
Legislator
Although artificial intelligence, or AI, applications in healthcare are not new, interest and research in AI and generative AI applications in healthcare have dramatically accelerated. A wide range of applications are being developed, tested, and deployed in medical research, administrative tasks, and even clinical tasks. There is excitement about these developments, but it should be tempered by caution.
- Mia Bonta
Legislator
A range of challenges poses barriers to responsible, effective adoption. In addition, this technology could potentially have a significant impact on the healthcare workforce. There are questions around coverage and reimbursement with changing models of care, questions around data privacy, and questions around who is liable if a patient is harmed.
- Mia Bonta
Legislator
And there is concern around a growing digital divide around which providers can afford to deploy technology to begin with.
- Mia Bonta
Legislator
As state lawmakers, I believe we have a responsibility to pay attention to all of these developments, ask these questions, and help guide the technology in ways that maximize benefit to Californians and minimize harm: technology that is ethical, safe, effective, free from bias, and helpful rather than harmful to the work of our clinicians.
- Mia Bonta
Legislator
This is what Californians expect of us, and it's what they deserve. This hearing will not provide all of the answers, but it will provide a forum for these very important discussions. And I want to thank all of our panelists assembled here today, some of whom have come from other parts of this state and other parts of the country.
- Mia Bonta
Legislator
We have three excellent panels, and I look forward to diving into this this morning with everyone. With that, I'd like to invite the first panel to come up to the presentation dais, and I also want to just thank our Chair Bauer-Kahan for her stalwart and groundbreaking work in this area through the Privacy Committee.
- Mia Bonta
Legislator
And I'm very thankful to have the opportunity to hold a Joint Hearing. With that, Chair Bauer-Kahan, I'll turn it over to you for opening comments.
- Rebecca Bauer-Kahan
Legislator
Thank you, Madam Chair, and I want to thank Chair Bonta for her leadership in this space and the staff, both of Health Committee and Privacy Committee, for working so well together on the backgrounder and putting together, I think, what will be an interesting and informative hearing. And I think this is such an important hearing.
- Rebecca Bauer-Kahan
Legislator
I've had the opportunity to participate in joint hearings with the Labor Committee, with the Arts Committee, and now the Health Committee. And this topic, in particular, is incredibly important because I think that healthcare is where the opportunity for AI is the greatest. It is also where the risk is the greatest.
- Rebecca Bauer-Kahan
Legislator
And so, you know, I'm hopeful today that we will hear both how we are going to seize the opportunities to save lives, frankly.
- Rebecca Bauer-Kahan
Legislator
And I think that can be in applications of AI that seem as mundane as the device that helps our doctors take notes, so that they are more attentive to their patients and maybe catch more things than they otherwise would if they were busy writing, or in more advanced applications, where AI is used to supplement the work of radiologists, to screen for cancer sooner, or whatever the case may be.
- Rebecca Bauer-Kahan
Legislator
All of that, I think, creates opportunity to save lives and give Californians a healthier future. That being said, I also have said, and will continue to say, that I think healthcare privacy is one of the spaces in which people deserve the greatest protections, and our laws reflect that.
- Rebecca Bauer-Kahan
Legislator
HIPAA goes further than most other privacy laws, and so does the CMIA. The California Confidentiality of Medical Information Act goes further in protecting the privacy of patients, because we believe the conversation that someone has with a physician, behind those closed doors, is some of the most sacred, private data that we hold.
- Rebecca Bauer-Kahan
Legislator
And so, it is critically important, as we move into an AI age where data is more valuable than it has ever been, because AI tools are getting to where they are because of that data, that we work to protect the private data of the patients who walk into healthcare facilities.
- Rebecca Bauer-Kahan
Legislator
And in addition, that we are making sure that there is not bias in these decisions; that as these tools are deployed, we don't have worse outcomes for communities who have long seen worse treatment in healthcare facilities; and that those historical biases are not built into these AI tools in a way that exacerbates them.
- Rebecca Bauer-Kahan
Legislator
And so, all of those risks need to be at the front of our conversations as we see the opportunity, and California is a state that I think has long been the innovation capital of the world because we believe we can do it all.
- Rebecca Bauer-Kahan
Legislator
And I actually believe that here. We can create these amazing technologies, and we can protect Californians. And that's, I think, the balance that we will hopefully explore today, and I look forward to hearing from everyone. Thank you, Chair Bonta.
- Mia Bonta
Legislator
Thank you, Chair. Do any of our other Committee Members have any opening comments? With that, thank you. The first panel is here, and it's going to be focused on how Gen AI is being used in healthcare now and what's on the horizon.
- Mia Bonta
Legislator
Bios for the speakers have been provided to the Members, and they are available on the Assembly Health website. So, if each panelist can introduce themselves quickly and then begin your remarks, we'd appreciate that.
- Mia Bonta
Legislator
I'd request each of the panelists to stick to the allotted time they've been given prior to the hearing, so that everyone can have time to engage in this robust dialogue. In this panel, we'll hear from healthcare systems that are deploying this technology, or these multiple technologies, as well as AI developers focused on healthcare.
- Mia Bonta
Legislator
Craig Kwiatkowski with Cedars-Sinai will start us off with a level setting overview on generative AI in healthcare and we'll hand it off to him now. Please begin.
- Craig Kwiatkowski
Person
Thank you. Good morning, Madam Chairs and Members of the Joint Committee. It's an honor to be here on behalf of Cedars-Sinai. Really appreciate your interest in this important topic and how it's affecting patients and the way in which we deliver healthcare. I think I've got slides here.
- Craig Kwiatkowski
Person
It's really hard to read from here and I didn't intend to read to you, but what you see on the slide is a definition at top and I think there's a handful of keywords that I wanted to sort of highlight there. Those words are "extend," "enhance," "assist," and "augment."
- Craig Kwiatkowski
Person
Those words are really intended to reflect the way in which we're thinking about AI: not as a replacement for humans, but as a force multiplier and an assistant to improve the way in which we deliver care and the way in which we care for our caregivers as well.
- Craig Kwiatkowski
Person
The pyramid you see there is really just definitional, to help level set, as was described.
- Craig Kwiatkowski
Person
It covers the types of AI that exist out there, from the bottom of the pyramid all the way up to the top, going from least sophisticated to most sophisticated. Rules-based systems, machine learning, and prescriptive AI we've been using for many years; that's not new.
- Craig Kwiatkowski
Person
It's really the generative AI tier, as was mentioned, that has taken the world by storm over the past two to three years; that's really where the focus has been, and I would say the reason why we're here. The "aware" AI capstone is really just there for completeness on the pyramid.
- Craig Kwiatkowski
Person
We don't think we're there yet and I won't have any further comments on that. This is intended to capture the spectrum of AI capabilities we have in use at Cedars.
- Craig Kwiatkowski
Person
These are things that aid in diagnosis and treatment, risk stratification of patients, disease pattern modeling, things that are used in research to match patients to clinical trials to see that they get the best care possible for them. Oncology is a great example of that.
- Craig Kwiatkowski
Person
Tools that can assist in the way in which we provide a better experience for our patients and for our caregivers, and tools that assist in operating efficiency.
- Craig Kwiatkowski
Person
So that, for example, we're maximizing our space in the ORs, we're working on patient throughput in our inpatient facility, which is usually bursting at the seams, and making sure we're making access available to as many people as we can. Governance is important to us. We established an AI council a handful of years ago.
- Craig Kwiatkowski
Person
These are some of the senior leaders of the organization in administrative, medical, clinical, research, and IT roles, such as myself, and we have ethics representation on that Committee as well. That Committee is charged with the overall oversight of AI, the way in which it's used, and our strategy for deployment and selection of AI.
- Craig Kwiatkowski
Person
Beneath there, beneath that Committee, sits a handful of work groups. Those work groups are focused on specific domains.
- Craig Kwiatkowski
Person
We have physician, clinical, and administrative work groups that are looking at the granular detail of the types of use cases and the prioritization of the work, as well as how we're integrating a tool or solution into the workflow, along with some communications and regulatory focus groups.
- Craig Kwiatkowski
Person
Beneath the work groups sits an AI Evaluation Committee. The top process flow you see there is how we approach any IT solution, whether we're building it or buying it.
- Craig Kwiatkowski
Person
And if it has AI included, it goes into this deeper dive, looking at performance, accuracy, precision, is it hallucinating, checking for bias, equity, and many of the things that were mentioned earlier, really to make sure that we're being intentional about protecting our patients, protecting our caregivers, and making sure we're making smart decisions about the tools we're using.
- Craig Kwiatkowski
Person
I have a handful of use cases to highlight quickly. The first one here, I believe, was mentioned earlier: an AI assistant for physicians. It's taking healthcare by storm across the country, with really rapid adoption. This is a generative AI tool that listens to the conversation between the physician and the patient.
- Craig Kwiatkowski
Person
The main value driver for this is really around physician wellness and efficiency: helping to ensure that physicians can step back from the cognitive load of trying to assimilate the volumes of information in the charts, combined with all the information the patient is sharing with them in real time, and putting that into a nice note, so they don't have to worry about taking notes and sitting behind the computer the whole time.
- Craig Kwiatkowski
Person
It's a way to connect the patients better with the physicians, and vice versa. You can see some of the early experience metrics. Actually, it's blank up there for some reason; maybe I have to click. Well, there were some impact metrics, but those have been wiped. The main win around this is a reduction in time in notes.
- Craig Kwiatkowski
Person
We have about a 10%-20% reduction in time in notes and a 17% reduction in "pajama time." This is time after hours, at home, when physicians would be catching up on their day. And there are some nice early experience metrics on physicians' perceptions of the way in which they're connecting to patients. The next example is similar, but it's focused on nursing.
- Craig Kwiatkowski
Person
So, same sort of value drivers: getting nurses out from behind the computer. Our nurses spend about 40% of their shift behind a computer, and we want them to spend more time out with patients. So, this is a solution that allows a nurse to document via their handheld, their smartphone. They can say what they want to document, and it records directly into the chart.
- Craig Kwiatkowski
Person
Again, this is a wellness opportunity for nurses, and a nice patient experience improvement as well, getting patients better connected to the nurses. The nurses are saying more, communicating more about what they're doing, and so patients feel more engaged in their care as well. Imaging.
- Craig Kwiatkowski
Person
We've long used AI in imaging, mainly to speed up triage and treatment. This takes it a step beyond. The example you see here is a scan for a pulmonary embolism, a blood clot in the lung. The AI reads the scan and risk-stratifies the results.
- Craig Kwiatkowski
Person
It ranks those scans in order on the radiologist's work list, so the radiologist gets the highest-acuity, or highest-risk, scans up top.
- Craig Kwiatkowski
Person
In this particular workflow, we have sort of an agent built in so that when the AI detects a very high-risk scan, it automatically sends an SMS text to a pulmonary embolism response team.
- Craig Kwiatkowski
Person
So, then a human takes a look at the scan, can evaluate the patient's chart, see what's going on, and if appropriate, can whisk the patient off for an interventional procedure, a thrombectomy.
- Craig Kwiatkowski
Person
We've seen a 40% reduction in time to mechanical thrombectomy using this workflow, which is really a nice win for patients, as well as a really nice reduction in ICU and overall length of stay. Last example here is in maternal fetal health. A couple of examples actually.
- Craig Kwiatkowski
Person
The first one on the left is a best-practice alert, clinical decision support based on an algorithm that identifies early risk of preeclampsia. This is something that's typically underdiagnosed or underrecognized, particularly in African American women.
- Craig Kwiatkowski
Person
And this is actually a way to use AI to eliminate a racial disparity so that all patients are treated equally and have the same opportunity to receive the best care. It's a very simple intervention, aspirin, but a really nice way to help improve care delivery.
- Craig Kwiatkowski
Person
The screenshot on the right is another example of additional decision support in labor and delivery. We're able to predict with about 90% accuracy in the first four hours whether mom will deliver by C-section or have a vaginal delivery.
- Craig Kwiatkowski
Person
That allows the care team to make, you know, informed decisions about maybe watchful waiting, induction, or, you know, just better informed to think through the care plan with the patient. And with that, I appreciate, again, your time and attention.
- Craig Kwiatkowski
Person
My goal here was to represent a broad spectrum of the capabilities that exist out there, with a little bit of depth. Also, to reflect that we know we're in the business of healing; we know we're in the business of human caring.
- Craig Kwiatkowski
Person
We don't believe we're going to code our way out of healing or caring, but we do believe these tools have great potential to impact the way in which we deliver care, the way in which patients receive care, and the way in which we care for our providers. So, thank you.
- Mia Bonta
Legislator
Thank you so much. We're going to, for my colleagues, we're going to hear all of the panelists and then do Q&A. Dr. Yang.
- Daniel Yang
Person
Hi, I'm Dr. Daniel Yang. I'm a practicing internal medicine physician, and I'm also the Vice President of AI and Emerging Technology at Kaiser Permanente, where I'm fortunate to lead our enterprise Responsible AI program, ensuring that our use of AI at KP is safe, effective, reliable, equitable, and compliant.
- Daniel Yang
Person
While I'm intimately aware of the risks, and my day job is to manage the risks associated with AI, I want to extend what Craig was talking about and really discuss some of the opportunities and benefits of AI, as well as the critical reasons why we need AI in healthcare.
- Daniel Yang
Person
To start, I want to just introduce a simple framework that at least helps me understand what ails health care in California and in the United States today. What I see is a simple mismatch between supply and demand. Demand for health care services continues to go up, driven in large part by demographic shifts.
- Daniel Yang
Person
We have an aging population in our country, and while it's great that Americans are living longer, they're also accumulating chronic diseases as they age, which makes delivering high-quality care for them much more complex. The second demographic shift is the rise of millennials, who are now the largest generation in the United States.
- Daniel Yang
Person
And as a millennial myself, I can tell you that the way we like to consume health care is the same way we like to consume TV shows on our computer. We want things on demand, and increasingly, our customers, our consumers, prioritize convenience over all else.
- Daniel Yang
Person
Now, both of these demographic shifts are placing an incredible amount of pressure on our healthcare system. And normally, if you recall from the law of supply and demand, as demand goes up, you increase supply.
- Daniel Yang
Person
But unfortunately, we can't do that very easily in health care because we're reliant on a very highly trained clinical workforce, and we have a fixed supply of these highly trained clinicians. In fact, we have a growing shortage of these clinicians.
- Daniel Yang
Person
The Federal Government estimates that by 2037, we'll have a shortage of nearly 200,000 primary care doctors in the United States, which is one third of all primary care doctors practicing today, within 12 years. And so there is no way that we will train or hire our way out of that gap, despite everyone's best efforts.
- Daniel Yang
Person
And so what are the consequences when demand outpaces supply? It's what we're starting to see in healthcare today. We have costs of care rising to unsustainable levels, which is increasingly hitting the pocketbooks of our members and our employers.
- Daniel Yang
Person
We have rising rates of clinician burnout because we're asking our providers to see more patients, and more complex patients, in the same number of hours each day. And we also have increased wait times to see our providers and worsened access to care. So first and foremost, why do we need AI in healthcare? To address those very issues.
- Daniel Yang
Person
And that's what we're doing at Kaiser Permanente. So, regarding one of the use cases we've already talked about, I'm proud to say that at KP we've deployed the largest implementation of a generative AI technology in the clinical environment in the United States, if not in the entire world.
- Daniel Yang
Person
We've deployed this ambient scribe technology, which we've heard about, to our over 25,000 doctors across the organization. The way this technology works is that, after the patient consents to being recorded, it records the visit and generates a first draft of a clinical note, which the provider then reviews, edits, and signs into the medical record.
- Daniel Yang
Person
As we already discussed, we deployed this to support our clinician workforce and reduce administrative burden. So it's not surprising that our doctors absolutely love this tool. I still get love letters from doctors about the impact of this tool on their day-to-day life. But what really surprised us was the patient impact.
- Daniel Yang
Person
Our patients really have told us that they like this tool. They find that their doctors are more attentive and present in the visit. And it actually shouldn't be surprising because this tool liberates our doctors from their keyboards and allows them to refocus their attention from the computer screen back to the patient's faces.
- Daniel Yang
Person
And what's most profound and paradoxical about this technology is that it actually makes healthcare more human again. It's a conversation between two humans, and the computer is out of the room entirely. We're also deploying AI to improve patient safety.
- Daniel Yang
Person
For example, we've deployed a predictive AI algorithm developed by our researchers at Kaiser Permanente to identify patients who are in our hospitals but are likely to get much sicker in the next 12 hours. So for example, patients who may unexpectedly be transferred to the ICU or even die in the hospital.
- Daniel Yang
Person
And by identifying these patients earlier on, it allows us to intervene earlier on, thereby preventing those negative outcomes. And so we eventually deployed this across all 20 hospitals in Northern California. And then we rigorously evaluated the safety and impact of this tool. And what we found astonished not only us, but the entire scientific community.
- Daniel Yang
Person
There was a mortality benefit from this tool. We estimate in Northern California, this tool alone saves 500 lives per year. And to put that number into context, that's roughly equivalent to all deaths from traffic accidents in the entire Bay Area.
- Daniel Yang
Person
And so in closing, I'll tell you what I told our KP Board of Directors when I presented to them last year, which is there is only one way for me to guarantee that we can eliminate risks associated with AI in health care and at Kaiser Permanente, and that is for me to never deploy these AI technologies at all.
- Daniel Yang
Person
At the time, many felt that that was too big a risk for health care, essentially accepting the status quo of what I described and the macroeconomic trends that we are facing. I think we need these AI tools to help our patients, our health systems, and our employees. So, thank you.
- Fawad Butt
Person
Hi, thank you for the opportunity to be here and address this body. My name is Fawad Butt, and I'm the founder and CEO of Penguin AI. We're an organization that's built an artificial intelligence platform specifically for healthcare, and healthcare only.
- Fawad Butt
Person
Prior to this, I served as the Chief Data Officer for Kaiser Permanente, Chief Data and Analytics Officer for United Healthcare, and Chief Data Officer for Optum. During this time, as you can imagine, I've seen a lot of healthcare data and processes.
- Fawad Butt
Person
And the one thing I can tell you is that it's a big mess, and that requires transformation which is different from what we've tried before. The numbers are pretty staggering and quite interesting in some ways. Let's start with what Daniel was talking about: there's a supply and demand mismatch.
- Fawad Butt
Person
We have higher demand for healthcare services than we have providers and healthcare workers to support it, and this is only getting worse. If I asked most of you what you think the average age of a physician in the U.S. is, I bet you'd get it wrong.
- Fawad Butt
Person
The average age for a physician in the US is 59 years, which means many are very, very close to retiring or reducing their workload. And as such, the problem of supply and demand continues to get worse in the coming years. That's not the only problem.
- Fawad Butt
Person
I think the other big problem that we have is the administration of healthcare: how complicated it is and what it costs us. Talking plainly, the administrative burden in healthcare costs us $1 trillion a year. That's a trillion with a T. Right?
- Fawad Butt
Person
So we spend $1 trillion a year, out of the roughly $4 trillion that we spend on healthcare every year, on moving paper, creating paper, summarizing paper, pushing paper, passing paper to everybody in the ecosystem. And that's problematic, right?
- Fawad Butt
Person
So we're spending over a trillion on non-clinical administrative functions like prior authorizations, claims adjudication, medical coding, and medical billing. These tasks are often manual, slow, and error-prone. And while they don't provide care, they actually delay care. In many, many cases, a prior authorization stuck in limbo will delay care for patients.
- Fawad Butt
Person
And by the time they ultimately end up getting that care, they would have suffered more, they would have been in more pain, and in many cases they would be sicker. And as such, that is not only bad for them, but for the healthcare system overall. But we're at a turning point with generative AI.
- Fawad Butt
Person
We believe that the opportunity to rewire healthcare is here, and that the technology can finally solve the complexity of the problems that we've tried to tackle with classical systems, which hasn't worked out really well. But I think it's important to think about the risk and reward.
- Fawad Butt
Person
I think that's a question that a lot of companies are dealing with: hospitals and insurance companies. We have a point of view on this. My point of view is that you should start with the administrative and back-office tasks, in the back office or the front office, where paper gets created or documented and is then consumed.
- Fawad Butt
Person
I think this is where processes like summarizing documents, verifying coverage, extracting codes, and reviewing denials could be automated. These are structured problems with measurable outcomes. You can actually see the ROI in a very short amount of time.
- Fawad Butt
Person
For example, a prior authorization digital worker that we've built on our platform can reduce average approval times from three days to under one hour. When we talk to insurance companies and hospitals, they tell us that a turnaround time of one week tends to be a good week.
- Fawad Butt
Person
In many scenarios, prior authorizations can take two, three, or four weeks before a decision is made. Clinical applications, I think, are exciting, and healthcare systems are exploring them, but they come with much, much higher risk. They involve more nuance, they require more data, and they have more ethical considerations.
- Fawad Butt
Person
That's not to say that we shouldn't do them, but we should treat these areas like new drug development. We should evaluate, test, and validate rigorously before we ultimately deploy clinical applications out there. Now, there are plenty of clinical-adjacent use cases, kind of like what we talked about, which is scribing: having an AI
- Fawad Butt
Person
just listen in, review, and summarize. To me, that's an administrative use case. But having that AI then recommend the next best action for your patient turns that administrative use case into a clinical one. So I think it's very important to distinguish between clinical and admin.
- Fawad Butt
Person
And I think if we do the administrative automation, payers and providers can really benefit from reducing manual load, improving accuracy, enhancing patient experience, and achieving higher throughput. And unlike traditional technology products, AI platforms can demonstrate value in 90 days.
- Fawad Butt
Person
I mean, when I used to be in these roles, it would take us two years to figure out whether technology was actually beneficial. Now we can actually test this out in 90 days and I think that's an exciting place to be.
- Fawad Butt
Person
The last thing I would say is that it's very, very important to make sure that everything we're doing has governance and compliance as part of its DNA. Right?
- Fawad Butt
Person
So if you're using AI, you need to make sure that you're redacting PII, you're redacting personal health information, and that when you're training the model, this happens in a very sensitive way. It's important to do bias detection.
- Fawad Butt
Person
While I hear a lot of companies, hospitals, and payers saying, "well, we do bias detection," they cannot really tell me how they do it. The technology is still very much in its nascent stage and needs to evolve.
- Fawad Butt
Person
And by the way, everyone's data has bias in it and that's a function of where they operate, who they serve, who they sell to. And as such, I think it's important for everybody to understand that the data that we have in the healthcare ecosystem today is biased, especially if you're going to use it for AI training.
- Fawad Butt
Person
You need to make sure that it is unbiased, level-set, and firmed up in a way where it represents everybody. California, I think, is uniquely positioned to lead in this space. We have all the technology we can imagine.
- Fawad Butt
Person
We also have great demands and needs and a population that spans from very wealthy to safety net. And as such I think that represents an opportunity to not just think about this, but to actually test it out, build solutions and provide those to the industry.
- Fawad Butt
Person
The one last thing I would say is that I do think the safety net community is at the highest risk with generative AI, for two reasons. One, I think it could be used very efficiently against that community, where they don't have the power to fight against AI at parity.
- Fawad Butt
Person
If you have AI on one side and a human on the other side, AI wins, because it can keep doing the same thing over and over again.
- Fawad Butt
Person
I think one of the ways to actually make sure that there's equitable access to generative AI for safety net communities is for that community to be part of the training of these models, for their data to be a part of that data set.
- Fawad Butt
Person
Obviously, it has to be done with governance, with consent, and with de-identification. But there's plenty of commercial data and plenty of Medicare data available; there's just not enough Medicaid-related data that's readily available.
- Fawad Butt
Person
So a lot of the model developers are actually sidestepping it and therefore the models they're building do not account for those safety net communities. And as such, there's even more bias in there. Thank you for the opportunity.
- Amy McDonough
Person
Good morning. I'm Amy McDonough. I work at Google where I lead a global health solutions team for the company. We help bring the best of Google to our partners in health. I'm deeply honored to have the opportunity to talk with you all today. Thank you Chair Bonta and to Chair Bauer-Kahan.
- Amy McDonough
Person
I was a constituent and I am a constituent. So thank you to you both and thank you to the Assembly for your leadership. Most of my career has been building products and partnerships with the last 20 years at the intersection of health and technology.
- Amy McDonough
Person
First as one of Fitbit's earliest employees in the consumer wearable space, and now as part of Google Health. Our work in health at Google is driven by a simple, yet very powerful mission to harness Google's tools and technology to help everyone everywhere live a healthier, longer life.
- Amy McDonough
Person
We do this by building health into the products and services that people already use every day, and by creating technology that enables our partners to succeed and our communities to thrive. Healthcare faces significant challenges; we've referenced those today.
- Amy McDonough
Person
And while health care works differently across global markets, I've heard these common challenges when I speak with partners in California, in Kentucky, in Japan and in the UK. Universally, our systems face workforce shortages, rising chronic conditions and administrative burdens that hinder patient care. These are not just abstract problems.
- Amy McDonough
Person
They are real world pressures that profoundly impact the ability to deliver quality care. The current advancements in artificial intelligence offer a remarkable opportunity to transform the health experience. And at Google, we're focused on four key approaches. The first is meeting people where they are on their journey to health.
- Amy McDonough
Person
For many people looking for health information, Google is their front door, so we have a responsibility to provide accessible, high-quality health information through Google Search and YouTube. We partner with organizations such as the Mayo Clinic for credible health content, and our AI Overviews in Search enhance the relevance and accuracy of health topics.
- Amy McDonough
Person
Lastly, products like our Pixel and Fitbit hardware provide personalized insights on health metrics like sleep, heart rate and body temperature. Secondly, we're advancing cutting-edge AI capabilities. Google's advanced AI and GenAI models can unlock the potential of health information, generating novel insights, advancing science, and developing personalized health solutions.
- Amy McDonough
Person
We're committed to responsible development of models that are trained on diverse, culturally relevant data and deliver best-in-class performance focused on safety, fairness and trustworthiness. Already, our GenAI solutions are helping healthcare organizations and researchers, like those we've heard from today, solve their most important challenges.
- Amy McDonough
Person
And AI tools like AlphaFold, which was referenced in the background paper, are accelerating disease research that could one day help patients. Third, we're helping transform organizations working in health. Our technology enables organizations to make better use of their own data and give better care.
- Amy McDonough
Person
We provide health organizations with AI-powered tools and products to make workflows more efficient, reduce administrative burden, accelerate innovation and drive down costs. 30% of the world's data is healthcare data. That's a lot of data.
- Amy McDonough
Person
In the United States, we work with HCA Healthcare and they're piloting a system built with Google Cloud's large language models to automatically generate comprehensive nurse handoff reports.
- Amy McDonough
Person
That's a bit similar to the pajama time you mentioned: essentially shift handoffs, making sure that information is consistent and safe while also saving time for those on the front line, so that they can be in conversation with the patients they're serving. Fourth, we're fostering a thriving health ecosystem.
- Amy McDonough
Person
We believe that collaboration is key to creating meaningful change in healthcare. One way we do this is by partnering with startup organizations across the health spectrum to drive innovation. More than 60% of funded GenAI startups and nearly 90% of GenAI unicorns are Google Cloud customers. And we enable developers to build solutions on Gemini.
- Amy McDonough
Person
With over 4 million developers already working to bring solutions to life. Health is a team sport. It's important that we work across sectors, with governments, community organizations and global health leaders, to improve population health. And our approach is one that's bold and responsible. How we do our work is as important as the work that we do.
- Amy McDonough
Person
We are bold in our innovation, responsible in our development and deployment, and committed to collaborative progress. Together, we understand the complexities and risks of emerging technologies. And we're dedicated to building beneficial AI that addresses both user needs and broader societal responsibilities, always safeguarding user safety, security and privacy.
- Amy McDonough
Person
We'll continue to partner with the health ecosystem and the people it's serving to uphold one of medicine's oldest commitments: first, do no harm. Anything else is unacceptable. Thank you.
- Mia Bonta
Legislator
Thank you so much. We will bring it back to our colleagues for any questions. Dr. Patel.
- Darshana Patel
Legislator
Thank you for your presentations today. What an exciting topic and opportunity. My question is to Dr. Yang and Mr. Butt regarding the models and how they're adapted for language learning. We know that they are supposed to be emulating natural language processing in its most efficient way.
- Darshana Patel
Legislator
How does it do, how does it perform with accents, cultural names, cultural phrasing, et cetera? I know when I get a Google Voice message transcribed, it often mixes up my name in particular, as well as the names of several people in my sphere of influence.
- Darshana Patel
Legislator
So names aside, even all of those other processes. I was intrigued with your idea about having it train on those marginalized community members and those groups, because that's where we'll see it adapt and learn. What does that look like? If you could expound and explain a little bit.
- Daniel Yang
Person
Sure. I'll start with maybe a health system perspective, and maybe Fawad can help with the technical aspects. But we did a lot of rigorous quality assurance testing as we were deploying the ambient scribe technology across our enterprise.
- Daniel Yang
Person
And we were looking exactly at those questions that you described, which is how well does this tool work in languages outside of English? How well does it work in accented English? How well does it work when you have multiple speakers in the room?
- Daniel Yang
Person
So what may seem like edge cases in an organization as big as KP may happen thousands of times a month. And so we felt a commitment to actually test and validate the technology in a wide range of settings.
- Daniel Yang
Person
And so, for example, for multilingual, we not only had our providers, many of whom are bilingual, assess the quality of the notes and the transcript by reviewing them.
- Daniel Yang
Person
We actually had our interpreters listen to a subset of the conversations in the native language and then see the transcript and then look to the final note to kind of trace the entire fidelity to the note. And we found, by and large, the tool functioned extraordinarily well across a wide range of use cases. But it wasn't perfect.
- Daniel Yang
Person
And we did pick up, with multilingual, some technical glitches that we were encountering. For example, some parts of the text were being missed in the transcript. And we went back to the vendor and we shared, hey, there was a paragraph that was in the conversation that wasn't in the transcript.
- Daniel Yang
Person
And they looked at us and said, oh, I know why that is. It's because we have a language model that is predicting the language being spoken every minute. And so when you switch languages from English to Spanish, which often happens in a visit.
- Daniel Yang
Person
The language model still thinks they're speaking in that primary language, and so it may miss that block of text. And so by identifying that through our quality assurance testing, we were able to bring it back to the vendor, who was then able to put technical modifications around it.
- Daniel Yang
Person
And then, more importantly, we provided education to our providers to say, hey, be careful when using this in a non-English context. This does not serve as an interpreter. So it's really important to make sure that this is not being used as a way of translating. We have other systems in place for that.
- Daniel Yang
Person
And we wanted to make sure really that our providers are informed users of this technology and that they understand when it works well and what some of the hiccups of this technology are. So that's one way that we approached it. But like I said, by and large, the technology does perform extremely well across these wide ranges within KP.
- Daniel Yang
Person
As of a few months ago, this scribe technology has been used in over 6 million encounters. So we've gotten a decent amount of experience under our belt and we continue to get thousands of feedback comments from our providers. We have a very active program of ongoing monitoring.
- Fawad Butt
Person
Yeah, I think I would agree with Daniel. I think these things are getting better. I mean, my name is Fawad, right? And in many cases my last name gets censored by these technologies, because they just don't put it in context that I'm South Asian and that's a normal name there.
- Fawad Butt
Person
So I would say the technology is getting better. What you get on your phone and with these robocalls is not the latest technology. I suspect in the next 12 to 18 months you're going to see quite robust pronunciation and understanding of names and other things.
- Fawad Butt
Person
But I think the place where I get scared is: we'll get all that right, but then the things that really matter, which is that a Medi-Cal patient looks very different than a Medicare patient, than a commercial patient. Right.
- Fawad Butt
Person
So when we're training our models on commercial data and Medicare data and not on Medi-Cal data, then we're sort of disadvantaging that group. The same goes for the social determinants of health, things such as transportation and food and housing; those parameters are very different for these underserved communities.
- Fawad Butt
Person
And I worry that we'll get the names right, but we'll get the inference wrong. Right. And I think that's where the technology sits today.
- Darshana Patel
Legislator
Additionally, there are concerns with technical things, like if an accent is not picked up accurately with milligrams versus micrograms. I'm assuming there will definitely be review of the actual medications prescribed or the programs or plans, a final set of real eyes on decisions like that?
- Fawad Butt
Person
I think it's required in many ways. I mean, for us at Penguin AI, when we sell a solution, we require that a human have the last word on it. A human has to review it. Now, the human might be spending 30 minutes before and 30 seconds now, which is fine, which is more efficient.
- Fawad Butt
Person
But a human still has to be in the loop before key decisions like that are made.
- Daniel Yang
Person
I mean, one, yes, that's our workflow as well. We have a physician in the loop reviewing the outputs of all of these for accuracy. But I trained not too long ago in San Francisco, and I used to have to write all of those orders by hand. And I can tell you that patients died from bad handwriting.
- Daniel Yang
Person
So when we're comparing some of the safety risks of these new technologies, we also have to compare it against the default. And we look very closely at the accuracy of these AI generated notes.
- Daniel Yang
Person
I can tell you if you looked at my human notes, when I'm seeing 18 patients in an eight hour Urgent Care Clinic shift, and it's a lot of viral upper respiratory infections, I kind of forget if your snot was green or if it was yellow.
- Daniel Yang
Person
If you had a cough for two weeks or 10 days, a lot of those blur together when you're trying to write five notes all at the same time. So I think we also have to compare performance and accuracy, not against perfection, but against today, like the human standard.
- Rebecca Bauer-Kahan
Legislator
Thank you all for being here. I learn something new every time I hear people talk about the applications of this, and I think it's fascinating. So I want to start with the applications that the two of you talked about, which I thought were really interesting. I'll start with you, Dr. Kwiatkowski. How did I do?
- Rebecca Bauer-Kahan
Legislator
Thank you. So you talked about the early risk preeclampsia screening and the C section prediction. Now, you said that your C section prediction tool can predict with 90% accuracy whether a C section will be needed.
- Rebecca Bauer-Kahan
Legislator
Now, that can be because the tool itself is incredibly accurate, or because the physicians are now relying on the tool and deciding to do C-sections because the tool outputted a C-section prediction. So that accuracy didn't feel super enlightening to me and was actually concerning to me, as a woman who's birthed three.
- Rebecca Bauer-Kahan
Legislator
I know that there's more risk in a C-section than in a vaginal birth. And having had incredible care, I was on the verge of a C-section with my first. The doctor was able to give me an hour to get him out, and I was able to get him out.
- Rebecca Bauer-Kahan
Legislator
I then had two successful vaginal births after that. So I want to make sure that we're not putting tools out there that are resulting in more C sections. So can you talk a little bit about that?
- Craig Kwiatkowski
Person
Yeah, it's a great point. And again, this is not intended to be a decision driver. Right. This is a supplement of additional information that clinicians can use.
- Craig Kwiatkowski
Person
You take a seasoned OB who's going to look at you and talk to you and examine you, and who's going to have their own sort of clinical impressions about the way in which, to your point, maybe some simple intervention, watchful waiting, et cetera, may be the most appropriate solution.
- Craig Kwiatkowski
Person
Even a labor and delivery nurse who's been doing this for years can guess what's going to happen with the patient. And so this is not a way to say, well, the AI said you're going to get a C-section, so as soon as you come in the door, we're taking you to a C-section.
- Craig Kwiatkowski
Person
That's not it at all. It's really just a way to supplement information. If the team is sort of conflicted about what's going on, they may look at this and say, well, the AI is predicting some probability that you'll deliver vaginally or by C-section.
- Craig Kwiatkowski
Person
And that may help them have a conversation about, should we wait another hour? Should we take mom in? It's really just an additive information source.
- Rebecca Bauer-Kahan
Legislator
And I guess, in that vein, I really do appreciate your comments, Mr. Butt, about bias in these systems. I think it's real, and I think we need to be talking about it and acknowledging it.
- Rebecca Bauer-Kahan
Legislator
And so let's talk about maternal mortality, which is, you know, Black women and women of color have much higher risk when they walk in to give birth than I do. And so I guess that tool, which is interfacing in that moment when different women have different risk factors for survival,
- Rebecca Bauer-Kahan
Legislator
feels like a critically important one to have tested against the sort of bias that Mr. Butt talked about. Can you talk a little bit about how it was tested, to ensure that we have good outcomes for women of color?
- Craig Kwiatkowski
Person
Yeah. Thank you. It's running in the background at all times, using real-world data, to the point earlier; it's running sort of in a test format on real-world information. So it's taking in the spectrum of patient populations, whether it's white females, Asians, African Americans, et cetera.
- Craig Kwiatkowski
Person
And it's learning based on the information that it's gathering. And so it's in a perpetual state of learning mode and reinforcement, built based on what the physicians are seeing and what it's capturing in real time as patients either go for a C-section or a vaginal delivery.
- Craig Kwiatkowski
Person
So it's not pre-programmed based on populations; it's real-world information in real time, based on the patients we're seeing.
- Rebecca Bauer-Kahan
Legislator
Right, but the real-world outcome right now is that Black women die more. So if it's reinforcing Black women getting worse care in the delivery setting, that is incredibly concerning. So I guess, are you looking on the back end, now that you've been running this for some period of time, at racial disparities in the outcomes?
- Rebecca Bauer-Kahan
Legislator
I mean, are we making sure it's not reinforcing what is a hugely problematic health care dilemma we are in right now?
- Craig Kwiatkowski
Person
Yes, absolutely. There's a team of folks in OB who work with our informatics team to analyze the tool and, you know, make adjustments as warranted. We haven't done an official sort of longitudinal study, you know, comparing the tool versus the human factors. As was said earlier, human-based decision-making is not infallible.
- Craig Kwiatkowski
Person
And so really understanding what baselines we're comparing to is important as well.
- Rebecca Bauer-Kahan
Legislator
Right. And part of the historical data that has trained these AI tools is human decisions. Right. That's what led to the data that is informing your tool. So I'm not saying that it's the tool alone.
- Rebecca Bauer-Kahan
Legislator
I think we as a Legislature believe we need to be doing better as it relates to women of color in hospital settings specifically. And so we should be pushing these tools, not accepting the historical status quo, I think is where I'm at. And I couldn't agree more with Mr.
- Rebecca Bauer-Kahan
Legislator
Butt, that we need strong governance and compliance and bias detection and testing that is clear and that you could articulate to me, to ensure that that's what these tools are doing. And I'm not getting that; maybe it's a time constraint. So hopefully we can follow up, because that's something that I need to know in California
- Rebecca Bauer-Kahan
Legislator
we're doing, and doing in a meaningful way, that is changing outcomes where they need to be changed most. And I know you wanted to add in, Dr. Yang.
- Daniel Yang
Person
Yeah, so, one, I appreciate those points. And I would take that same set of questions, and I'll give you another example that illustrates the same concept, which is, you know, there are a lot of algorithms that are used to predict which patients might not show up to their appointment.
- Daniel Yang
Person
And the goal for a lot of these health systems is, by identifying patients that may not show up, because we have a long wait list, let's find a patient that's waiting who will show up. Right. But what we found is that the patients that often don't show up, don't show up for good reasons. Right.
- Daniel Yang
Person
They may not have access to a car. They rely on public transportation. They don't have a job that allows them to take a half day off of work. They don't have reliable childcare.
- Daniel Yang
Person
And so you can take that algorithm and if you double book that patient and they do show up, you are inadvertently worsening their access to care. You could take that exact same algorithm and just change what you do with the output of that algorithm.
- Daniel Yang
Person
So instead of double booking them, what if you took that to outreach to those patients? These are people that are unlikely to show up. Let's give them a call. Let's see. Can we order you an Uber? Can we change the time of your appointment after hours?
- Daniel Yang
Person
Can we switch it to a virtual visit instead of an in person visit? It's the same exact algorithm. It's just what you do with the output of that.
- Daniel Yang
Person
And so I think it's also important to know: it's easy to think about just the data only, but these are sociotechnical systems, and we have to understand the entire workflow. And you can have a, quote, biased algorithm that actually drives equity. And so this is very much kind of the viewpoint from Kaiser Permanente.
- Daniel Yang
Person
These are agnostic tools that may have biases in them; we have to be very thoughtful about the actions taken from those tools. And so, just like some of these predictive tools for maternal morbidity and mortality: they can identify patients at higher risk who may require more resources, more attention, and that could actually improve outcomes.
- Daniel Yang
Person
So I don't want us to just think this is a data problem; this is much deeper than a data problem. And you can actually take a biased algorithm and drive equity as well.
- Rebecca Bauer-Kahan
Legislator
Absolutely. But I think you can only do that when you're watching the outcomes to make sure you're driving equity, which I think was the point I was trying to make, which it sounds like you.
- Daniel Yang
Person
We are trying to do that. Of course, you have to follow the outcomes. You have to look at the impact of these tools on different demographic groups to have confidence that, yes, this is actually improving disparities as opposed to the opposite. But one more point: KP, Cedars-Sinai, we're large organizations; we've got a lot of data science expertise.
- Daniel Yang
Person
One of the things that I'm worried about is really this divide between AI haves and have-nots. You can put compliance requirements on KP; we'll manage. But what I worry about is inequitable access to the benefits of AI. Right. What I worry about is that the only patients who really benefit from AI are members of Kaiser,
- Daniel Yang
Person
or patients of Stanford, or patients of Cedars-Sinai, and not rural access hospitals, not patients in federally qualified health centers or those communities.
- Daniel Yang
Person
And we have to be cognizant that the requirements we place on these do not actually widen that disparity for those underserved populations. Oftentimes with ambient scribes, you know, the doctors in FQHCs love these tools and would benefit from them. So we want to make sure that we can also offer the benefits of these tools there as well.
- Rebecca Bauer-Kahan
Legislator
Thank you. And I know I want to turn it back over to Madam Chair, but I just wanted to make one point about what Mr. Butt said around plans and the algorithms that determine approvals. A lot of your testimony assumed approvals. I would say, for the average Californian, we experience denials more than we experience approvals.
- Rebecca Bauer-Kahan
Legislator
So I just want to point out that those algorithms can be trained to maximize for profit or to maximize for outcomes. And so we need, again, to be conscious of how we are training, watching, and paying attention to these, because we as a Legislature, I believe, want better health outcomes, and the plans want to maximize profit.
- Fawad Butt
Person
There's a dynamic tension between the plan and the patient sometimes; you know, one has to pay out, the other one has to receive. So I do think that there's this dynamic tension that always exists. But I think there are techniques that plans are using that are, I think, in good faith.
- Fawad Butt
Person
One example is that, for approvals, if the AI thinks something should be approved, they just auto-approve it without a human in the loop. But anything the AI says should be denied has to have a human in the loop.
- Fawad Butt
Person
So there are ways and techniques I think we can use to make sure that that is actually happening. But I agree with you. I think this is a new space. It's an area which requires a tremendous amount of oversight internally by organizations and cross functional teams.
- Fawad Butt
Person
It's not the data science team's job to figure out the right outcomes; it's the clinician's job to figure out the right outcomes, but they have to be paired together. And I think many organizations just don't have the capacity, unlike the big ones here.
- Mia Bonta
Legislator
I think I'll just ask one question. I was very intrigued by this idea that basically the closer we get to the patient and patient care, the more vigilant we should be.
- Mia Bonta
Legislator
If we are focused on trying to eliminate the $1 trillion in administrative waste in our system, which, given the tough decisions we are going to be making in this state around Medi-Cal in particular, is very intriguing to me. And we know that there are significantly rising health care costs.
- Mia Bonta
Legislator
It's important for us to kind of have a frame. I appreciate the idea that for administrative tasks there should be an opportunity to integrate; for clinically adjacent tasks there should perhaps be more vigilance.
- Mia Bonta
Legislator
And certainly for clinical applications, we need to think about the way in which we should be monitoring and creating a regulatory framework that is as robust, as you said, as what we would do for new drug development.
- Mia Bonta
Legislator
My estimation just in going through this last legislative cycle is that it is the wild west in all three of those areas. And I'm particularly concerned about the clinical application components. So I would just appreciate from the panelists and perhaps Mr. Butt or Ms.
- Mia Bonta
Legislator
McDonough, if you were to kind of envision for us where AMI is going, what you believe are our most likely pitfalls from a policy standpoint in terms of the administrative, the clinically adjacent, where I would put a lot of the prior authorization pieces in that bucket, and the clinical applications.
- Fawad Butt
Person
I'd say a couple of things. One, today there's no categorization of what's clinical and what's non-clinical; everybody gets to make that decision on their own. Now, when there are clinicians involved, they're pretty good at making those decisions.
- Fawad Butt
Person
But I think you could do something as simple as defining the administrative tasks that fall in the administrative bucket that should be optimized, like prior authorization, risk adjustment, claims adjudication, medical coding, payment integrity; things like that I think are low-hanging fruit. I think many of these processes are clinically adjacent.
- Fawad Butt
Person
So for prior authorization, for example, a doctor will submit medical necessity information, which is clinical documentation that comes into the payer side. And payers, if they're using some sort of AI, are reviewing clinical documents; they're not recommending care, they're just looking at those clinical documents to see if they match criteria.
- Fawad Butt
Person
So I think more rigor around, when there are clinically adjacent applications: what model are you using? Is the model trained appropriately on the right diversity of content so it understands how to make decisions equally and equitably across the distribution?
- Fawad Butt
Person
I think that would be the second thing. And then the last thing I would say is that the clinical applications represent the largest opportunity, but they also represent the largest risk. Right? If you get it right, you can have a bot that can cure everybody's diseases every day. If you get it wrong, people die.
- Fawad Butt
Person
So I think, therefore, having some sort of a framework that says, hey, this is what we consider administrative to be, this is what we consider clinically adjacent to be, this is what we consider purely clinical to be; a framework that organizations like Kaiser and Cedars and others can probably work on and identify.
- Fawad Butt
Person
I think that's a good starting point, to help just define what these things mean and then support the areas where we want to encourage development.
- Amy McDonough
Person
I wanted to add on there. I agree with having frameworks that are appropriate for the risk level as it relates to coming close to human connection.
- Amy McDonough
Person
But I think the frameworks are important across all of those risk levels. As it relates to AMI, which I think was covered in the paper a little bit: it's not named after me; it's a research project.
- Amy McDonough
Person
And what's really interesting about that, that I think actually addresses some of the concerns around bias and access, particularly in rural and other areas in the US is the multimodal capabilities that it has.
- Amy McDonough
Person
So the ability to listen, ambient technology, as well as to see, if you will, and to be able to kind of listen and look at things. But again, it's a research project. We have published a number of findings on that and are happy to share those as a follow-up to this session here.
- Amy McDonough
Person
But I think they are not diagnostic tools at this point. They are to determine the next best action: here are common situations, here is what this common skin rash might be, here's kind of what you would find from a search, helping people determine the next best action, which is often to go seek care.
- Amy McDonough
Person
And so they're really meant as a kind of early intervention, early warning tool to help support that. But again, AMI specifically is a research project right now, to kind of show what future capabilities might look like.
- Mia Bonta
Legislator
Thank you. And Dr. Patel is going to have the final question before we move on to the next panel.
- Darshana Patel
Legislator
Thank you for coming back to me, Madam Chair, and thank you for your patience. As I'm listening to you all speak about the tools and the utility of AI in the medical profession, I'm thinking back to the C-section example. Also a mother of three, my outcomes looked a little different, but I was ultimately able to have a natural birth.
- Darshana Patel
Legislator
What I'm thinking about is, in combination, you spoke about the aging population of providers. And right now the final review is by a person, the patient or the provider, making that final call. The tool may be giving you predictive outcomes, but the physician is the ultimate decision-maker.
- Darshana Patel
Legislator
But as with many technologies, we become more reliant as the generations progress with that technology. So right now we may have providers that are a little more skeptical and will give it that extra once over.
- Darshana Patel
Legislator
But as the millennials, and the Gen Zs after them, come into being practitioners and providers, they'll become naturally more reliant on the tool, and that prediction becomes decisive, or decision-based. And that may go along with AI becoming more accurate and more reliable.
- Darshana Patel
Legislator
We'll learn more about this in the second panel, but I would like to hear your input on what the legal landscape looks like for that.
- Darshana Patel
Legislator
And when I talk about insurance, I don't mean insurance for healthcare, but litigation, malpractice: perhaps there will be a situation where the provider makes a different determination than the AI, that becomes known to the patient, and there is a poor outcome based on that determination.
- Darshana Patel
Legislator
So what are thoughts or protections, guardrails that we could think about around that?
- Mia Bonta
Legislator
And I do think that that is a question for the second panel. So if there's one brief answer from one panelist, that'd be helpful.
- Daniel Yang
Person
Sure, maybe briefly. You know, medicine is really a community standard of practice. And I'll just say that new technology is not new to health care. When I practice medicine, I don't know how to read a CBC myself; we rely on machines to do that.
- Daniel Yang
Person
But I had, you know, mentors of mine who would look at a microscope and be able to determine it. So medical practice changes, and we shouldn't be afraid of medical practice changing. The core skills of what a doctor is are not reliant on being able to read a CBC on your own.
- Daniel Yang
Person
And so I think we really have to understand what are those core skills, protect them. But also appreciate that as new medical advances come, that we also have to be able to train our workforce to adapt.
- Mia Bonta
Legislator
Well, thank you so much to our panelists for such a robust conversation. I'm glad that we dove right in. We're going to move on to our next panel right now, and I appreciate the offer to share some additional materials with this Joint Hearing panel.
- Mia Bonta
Legislator
We're going to move on now to our second panel, so if you all can come up right now, this panel will discuss some of the known challenges and help us think more deeply about the implications of these developments and the role of the state to support the type of future we want.
- Mia Bonta
Legislator
Our first panelist, as they are coming up, is Kara Carter from the California Health Care Foundation, who will talk to us about what they've heard recently from safety net providers and plans about AI.
- Kara Carter
Person
Good morning. Hi. Good morning. Thank you, Madam Chairs and Committee Members, for the opportunity to speak today. I'm Kara Carter. I am the Senior Vice President for Programs and Strategy at the California Health Care Foundation.
- Kara Carter
Person
The foundation is an independent, nonprofit philanthropy that works to improve the health and care system for all Californians, particularly those who face the greatest barriers to care. Today, I'll be sharing some insights on the opportunities and challenges that policymakers might want to consider when making laws related to AI adoption in California's safety net healthcare system.
- Kara Carter
Person
Okay, I have some slides. Can we go to the next slide? Thank you. Okay. As some panelists from the first panel noted, AI is not new in healthcare, nor is technology. So predictive models in healthcare have been around for years. That said, many recent advances are dramatically expanding AI's capabilities and potential impact.
- Kara Carter
Person
And that's why you, as policymakers, see it before you now in recent legislation. Oh, that went the wrong way. Can someone advance that? Thank you. Okay. To ground us in the potential that AI has in the safety net, I'd like to share with you two powerful examples. The first comes from Los Angeles County.
- Kara Carter
Person
The county developed an AI model to identify and reach out to individuals at high risk of experiencing homelessness. The model uses cross-sector data, including ER visits, food assistance, and jail records, to proactively intervene. It's still an experimental strategy, but it has already supported over 700 individuals, with 86% maintaining housing.
- Kara Carter
Person
This shows the potential of AI to support upstream, whole-person care and work to solve homelessness, an issue that is top of mind for many of us, policymakers and Californians alike. Okay, try to go to the next slide. There we go.
- Kara Carter
Person
My second example is El Sol Neighborhood Educational Center, which is a CBO that provides community health services in the Inland Empire in California. The center deployed AI tools to support promotores and community health workers who serve limited-English and immigrant populations. AI helps streamline workflows by improving task management, project coordination, and program monitoring.
- Kara Carter
Person
I really can't make this go forward. Thank you. I'm going backwards. So AI must be safe and trustworthy. With promise comes responsibility, especially for us policymakers. AI must be implemented safely and equitably. Many stakeholders recognize the need for guidance on trustworthy AI, and there are a number of efforts underway.
- Kara Carter
Person
For example, in 2023, 28 healthcare organizations across the country, including UC San Diego Health, UC Davis Health, and John Muir Health here in California, committed to a framework which identified and defined trustworthy AI as having five elements: that it would be fair, appropriate, valid, effective, and safe.
- Kara Carter
Person
Another model came closer to home from UC Davis, where the Population Health team deeply engaged the community to build a structured approach to minimize bias and harm and enhance equity in a predictive model. Next slide: stakeholder perspectives. In the last 18 months, as the Chair mentioned, like so many of us, CHCF has jumped into AI.
- Kara Carter
Person
Knowing that AI innovation is moving incredibly quickly, we sought out and wanted to learn from safety net providers. The safety net serves our most vulnerable Californians: the 15 million people who receive their care through Medi-Cal and the remaining uninsured.
- Kara Carter
Person
It is imperative that they don't get left behind their commercial counterparts in a race to adopt new tools. The good news is that, across the board, safety net health plans and providers are interested in AI adoption and want to shape it for the unique populations that they serve.
- Kara Carter
Person
Next slide. They're also focused on making sure that AI innovations and policy put the patient at the center. Patient adoption of AI will vary widely based on factors such as age, language, immigration status, and comfort with technology, especially in the safety net.
- Kara Carter
Person
Patient voice must be embedded in the design and deployment of AI, as it's a population which holds deep mistrust of the healthcare delivery system, which has historically harmed it. CHCF is currently embarking on a listening exercise to hear directly from patients about what they want and expect from AI in the health system.
- Kara Carter
Person
While safety net institutions and providers are excited about AI, they absolutely face significant barriers to adoption. These include some obvious items like having the financial capital to invest in and most importantly, sustain AI tools, as well as training staff whose time is already stretched thin.
- Kara Carter
Person
Additionally, safety net providers are looking to policymakers to provide clarity in several areas, including liability, privacy, safety, and racial and ethnic bias. Next slide. Lastly, as you're all aware, AI will impact the workforce. Some jobs may change or be displaced.
- Kara Carter
Person
However, California continues, as many people in the previous panel noted, to have material shortages in many health professions, and many safety net providers see AI adoption as an opportunity to relieve burdens and burnout. Policymakers should consider how AI can be a tool to support the healthcare workforce into the future while also protecting and creating new jobs.
- Kara Carter
Person
Next slide. In conclusion, AI adoption, particularly in the safety net, is still early. Many organizations are only beginning to explore or pilot these AI based technologies. This early stage offers all of us policymakers, innovators and adopters a rare opportunity.
- Kara Carter
Person
We can be intentional from the start in six key areas: data availability and infrastructure, technical readiness, workforce capacity, governance and regulatory alignment, financial and operational viability, and patient trust and consent. Making progress in these areas will help us lay a solid foundation for the future of AI in health care. Thank you for the opportunity to speak today.
- Kara Carter
Person
We've provided some contact information and some CHCF resources on AI. Thank you.
- Ziad Obermeyer
Person
Thank you. I am a physician and a researcher at Berkeley, and I want to talk to you about how AI done right can save lives, lower costs, and expand access to care, and how I think California can lead the way.
- Ziad Obermeyer
Person
I wanted to start with a story, which is that a few years ago, working with colleagues, we identified racial bias in a widely used family of algorithms used across the country in health systems. Those tools were supposed to identify patients who were high risk so we could get them extra help early.
- Ziad Obermeyer
Person
But instead of predicting who's sick, those algorithms predicted who was going to generate high health care costs. And it turns out that patients who are black or poor or rural or less educated often don't get care when they need it. So they cost less, not because they're healthier, but because they're underserved.
- Ziad Obermeyer
Person
The AI learned that fact and it automated it as policy, leading to millions of patients who needed help being left out of that extra help program. Assemblymember Bauer-Kahan, you pointed out that AI learns from data created by humans, and those data have flaws.
- Ziad Obermeyer
Person
When we see language models that are trained on the words written by people on the Internet, they carry the biases and blind spots of people on the Internet. They encode today's scientific knowledge, not tomorrow's discoveries.
- Ziad Obermeyer
Person
And automating those things can scale up the worst and ugliest parts of our health system, cause job loss, and lock in the way we think about things today. So I've always been more interested in AI that can do things that humans can't do.
- Ziad Obermeyer
Person
And I wanted to give you a couple of examples of that, because those are the things that I think are going to really be transformative for health and for Society. Every year, 35,000 people in California just drop dead. We call it sudden cardiac death. Exactly what it sounds like. The heart just stops.
- Ziad Obermeyer
Person
And the reason those deaths are so tragic is because we have the cure. We could have implanted a defibrillator in those patients and saved their lives, but only if we know who needs a defibrillator. Doctors try to predict that risk, but we're often wrong. And so we miss a lot of people who would have benefited from defibrillators.
- Ziad Obermeyer
Person
And we also implant defibrillators in people who don't need them. Working with two health systems in California, Sharp HealthCare in San Diego and Providence, my lab trained an AI system to read an electrocardiogram waveform and predict patient risk. It works better than what doctors are currently using today, meaning it can save lives and avoid waste.
- Ziad Obermeyer
Person
Using generative AI, we can actually look for new patterns in that ECG waveform, things that doctors have missed despite looking at ECGs for 100 years, that can help us better understand the underlying physiology of that problem. So that's not just better healthcare, that's actually a new way of doing science.
- Ziad Obermeyer
Person
AI can also reshape where care happens and how much it costs. And I wanted to show you one thing about that. So this is a device made by a California company called AliveCor. It costs $100.
- Ziad Obermeyer
Person
You can get it delivered same day to your door, and you just put your fingers on the metal parts and it generates an electrocardiogram on your smartphone. And that is an interesting starting point for a new way to think about public health.
- Ziad Obermeyer
Person
So my lab has paired data from devices like that one with data from expensive, medical-grade tests you can only do in the hospital today: cardiac ultrasound, X-ray, laboratory tests.
- Ziad Obermeyer
Person
And with that data set, we can train an AI system that goes from the device, data from this tiny little thing to predicting the result of the expensive test you can only get in the hospital.
- Ziad Obermeyer
Person
And that means this device is now a public health platform that lets us screen people in their home for pennies for a range of things, including heart attacks, catch problems early, and send only the people who need it into the hospital to get the care that they need.
- Ziad Obermeyer
Person
So this is actually how I think precision medicine was supposed to look, delivered by AI in a way that's affordable and equitable. I wanted to mention a few policy things that can help to make this future real. And the first one is, of course, accountability.
- Ziad Obermeyer
Person
So I've worked with federal regulators and state attorneys general, including Attorney General Rob Bonta's office here in California, to hold AI accountable where it goes wrong. And I've seen firsthand that our current laws are very powerful tools for prosecuting people who use AI to do illegal things: discriminate, deny health insurance, violate privacy.
- Ziad Obermeyer
Person
And I think that's really useful to know because regulating the intricacies of AI directly is very hard. And if we get that wrong, we're going to choke off the innovation that we need. The second thing we need is data. Patient data are precious and we have an ethical duty to protect it.
- Ziad Obermeyer
Person
But we also have an ethical duty to save lives and to monitor algorithms and hold them accountable. Hindering access to data can mean sudden deaths without defibrillators, cancers metastasizing before they can be caught, and lack of accountability for algorithms that are already deployed.
- Ziad Obermeyer
Person
The only reason we were able to catch racial bias in that algorithm is because we had access to data that helped us hold it accountable.
- Ziad Obermeyer
Person
Too often, privacy concerns just shut down conversations about access to data instead of starting a conversation about how we use all of the powerful tools we have to protect data security, like strict de-identification under HIPAA, secure computing environments, and strict law enforcement. I don't think we have to choose between privacy and progress.
- Ziad Obermeyer
Person
I think we can have both. The third element is leadership, because I think Medi-Cal and other public programs have a huge opportunity to steer AI in the right direction. The private sector today is underinvesting in this area because of uncertainty over what insurers and others are going to pay.
- Ziad Obermeyer
Person
And I think that's a big reason why the next time you go see your doctor, there might be an AI scribe, but that doctor is not going to be using cutting-edge clinical AI tools. There's just not enough investment right now.
- Ziad Obermeyer
Person
But imagine if Medi-Cal could say: we want an algorithm that is going to detect this cancer at least this many months early, and here's what we're going to pay for it. And you can choose how much you pay for it using standard health economics: lives saved, costs avoided.
- Ziad Obermeyer
Person
And you can set targets not just for the accuracy of those algorithms, but for the equity implications of them. You could even open up data for researchers and product developers to use Medi-Cal data in a responsible way to help develop those products. I think if you lead, the market will respond.
- Ziad Obermeyer
Person
Finally, partnership. Together, government and universities have won wars, created industries, and cured diseases, and health AI is no different. There are thousands of researchers across California who are ready to build the next generation of these AI tools, and we need your support in this critical moment. Thank you.
- Michelle Mello
Person
Thank you, Madam Chairs. I'm very pleased to be with you today. My training is as a lawyer and a health services researcher. I run a lab at Stanford that has responsibility for reviewing AI tools that are proposed for use in Stanford Health Care facilities; we're a multi-hospital system.
- Michelle Mello
Person
We take care of about a million patients per year. So my day job is very much about thinking about AI governance and I'd like to talk to you about what I think states could do in that area.
- Michelle Mello
Person
And I'm going to assume a future in which there's still a role for state regulation of AI, notwithstanding the moratorium in the House appropriations bill, which I'm happy to talk about. But I want to leave you with three key messages.
- Michelle Mello
Person
First is that I think that market incentives right now are sufficient to interest healthcare organizations in experimenting with AI, but not to stand up really strong governance systems.
- Michelle Mello
Person
The second is that despite the really impressive performance of large language models like ChatGPT, my lab's experience has been that they do require monitoring and that that monitoring is pretty challenging to do.
- Michelle Mello
Person
And finally, states can help overcome kind of an incentive gap around doing that monitoring by establishing a regulatory framework that requires healthcare facilities to have a governance structure as a condition of licensure. So first, what can we rely on the private market to do or not do?
- Michelle Mello
Person
I think again, the incentives are there for healthcare organizations to want to try things with AI, but when it comes to the difficult task of actually governing them to ensure safe and responsible use, there are not incentives sufficient for a lot of healthcare organizations. The organizations you've heard from today are the superstars.
- Michelle Mello
Person
The Kaiser Permanente, Cedars-Sinai, Stanford, Berkeley. These are organizations that have taken the bull by the horns, but they're not typical. Most health care organizations that use AI do very little to vet it and even less to monitor it. And so, as a result, we have very little information about its safe and equitable use.
- Michelle Mello
Person
And the reason that they don't do more is it is really costly and difficult to do. It requires standing up committees of people who are willing and able to spend their time reviewing safe and responsible use. And while the conventional wisdom is that AI can be a cost saver for hospitals for most applications, that's just not true.
- Michelle Mello
Person
We're talking about a net cost increase, not just from buying the thing, but again, from having the people around to monitor it. The value proposition for hospitals is not typically that it makes money or saves money. It's that it relieves the staff of some of the overwork that threatens to burn them out.
- Michelle Mello
Person
It helps them process more patients per day and relieves capacity constraints. So organizations are not out there with big financial cushions as a result of AI use. They need financial incentives to do this governance.
- Michelle Mello
Person
Now, on the developer side, from a lawyer's perspective, the problem is that most of these developers have gotten wise to the fact that they can just put disclaimers of liability and warranty into their terms of service and not worry about facing financial responsibility for errors.
- Michelle Mello
Person
That's not true of all products and all developers, but I see it a lot. And even though for consumer products, Congress has said you can't disclaim warranties, that actually doesn't apply to software. So there's really nothing stopping that from happening right now.
- Michelle Mello
Person
So when we think about where the incentives for safety are in the ecosystem, the liability to your question, Dr. Patel, is likely to fall on hospitals and physicians, not on developers.
- Michelle Mello
Person
And so as a result, when we work with developers at Stanford, it's actually quite hard to get them to give up information about the weaknesses of their products. You really have to bring the lawyer's cross-examination skills to the table to get that to happen. And again, why should they?
- Michelle Mello
Person
They're in a competitive market, nobody's making them do it and nobody's asking a lot of the time. So the incentive structure is not quite where it needs to be for robust governance. And that's a problem because to my second point, these tools really do require monitoring.
- Michelle Mello
Person
And the reason is that paradoxically, they work really well most of the time. And what does that mean for the human in the loop, the person who's checking that email at the end of a long day? It means that they feel really confident and trusting in the tool.
- Michelle Mello
Person
They're tired, and the whole reason their organization adopted this tool is to save them time. So actually at Stanford, when we've measured the effects of things like ambient scribes, they save like 2 to 20 minutes a day because our people still feel like they want to edit them.
- Michelle Mello
Person
But as soon as you stop doing that, the human in the loop kind of falls out as a meaningful check. So what does that mean? It means that organizations need to be looking for errors, but they're not naturally inclined to do that.
- Michelle Mello
Person
When I talk to our quite sophisticated digital services team and ask them, you know, what does it mean to you to monitor these tools, the answer I get is a software engineer's answer, which is: is it on, and is it working? As a safety expert, that's not really primarily what I'm interested in.
- Michelle Mello
Person
So doing this stuff is hard. Health care organizations tend not to gravitate towards the metrics that people who are interested in patient safety would want them to be measuring. And so again, we have to think about how to drive them in that direction.
- Michelle Mello
Person
And I guess the final thing I would say is I do think there's a role for states here.
- Michelle Mello
Person
And the analog that I keep returning to as someone who thinks about governance, albeit an imperfect one, is the Common Rule for human subjects research, which is the federal rule that says: if you'd like to have federal money for research, your organization has to have an institutional review board to review research and make sure it's safe.
- Michelle Mello
Person
And the review board has to apply a set of decision criteria, a set of standards. And beyond that, it's kind of up to you how you do it. I think that's a very sensible model, for reasons I'm happy to talk about in Q and A. There's a lot of risk in being overly specific about standards.
- Michelle Mello
Person
But the good news is there are lots of private initiatives like CHAI, from whom you're going to hear shortly, that have developed a very robust, consensus-based set of standards. And what remains is to create the regulatory incentive to make sure that those standards are being applied.
- Michelle Mello
Person
So that's where I think states can play a valuable role. Thank you. And I look forward to your questions.
- Mia Bonta
Legislator
Thank you. And I'll bring it back to our colleagues for questions. Dr. Sharp-Collins.
- Lashae Sharp-Collins
Legislator
Good morning. So as you were talking a little bit more about the different partnerships and so forth, you kind of went there a little bit on the AI governance and federal side of it.
- Lashae Sharp-Collins
Legislator
So I'm asking if we can kind of dive into that a little bit, because I'm wondering what the federal impacts are on the AI space when it comes to making sure that we are developing culturally competent and also unbiased AI. So I'm wondering about that.
- Lashae Sharp-Collins
Legislator
And then, are we using any data from the Feds that might be changed based on the current Administration's shenanigans right now?
- Michelle Mello
Person
I'll let Dr. Obermeyer speak to the latter question. You know, I think it's still a little bit murky what we're going to see in terms of the federal landscape. It has surprised me that they have not explicitly rolled back Section 1557, which imposed those non discrimination obligations on organizations that use algorithms.
- Michelle Mello
Person
I have to think that it's either only a matter of time before that happens, or that they simply will direct the Office for Civil Rights not to enforce those provisions. It does not, to put it mildly, seem like a priority of the current Administration to enforce provisions against algorithmic bias.
- Michelle Mello
Person
And so that again creates an opening for states to do some things that I think could potentially be much more useful than the text of Section 1557.
- Mia Bonta
Legislator
Anyone else wanted to continue on? Okay, go ahead. That was my question.
- Rebecca Bauer-Kahan
Legislator
Thank you all for being here. I have a couple questions. The first one is--okay, that electrocardiogram thing you held up, so that is really cool and I like the idea of saving lives--but it brought to mind Life360. For any privacy people in the room, you will know that many families use Life360 to make sure their kids are not driving too fast and to save lives. Great. I'll tell you, as the aunt of some teenagers who needed that, it probably saved some lives.
- Rebecca Bauer-Kahan
Legislator
But we also know that Life360 sold that information to insurance companies, causing people to lose the ability to get car insurance. So here you're proposing something on my phone that would tell people exactly how my heart is performing, which could be sold to health insurance companies, preventing me from getting coverage.
- Rebecca Bauer-Kahan
Legislator
And so does it save lives or does it not save lives because if I can't have health insurance coverage, I'm also going to lose my life. And so I just think that we need to have that conversation in the context of data privacy because you also were highlighting the value of data.
- Rebecca Bauer-Kahan
Legislator
So I think data is important. I think you're right. In many settings it is used well to advance science, to advance our understanding, and in many settings it is used to harm people, and so that tension is real and one we need to be really cognizant of and one our constituents demand, and so I just wanted to invite you to speak to that.
- Ziad Obermeyer
Person
Yeah, it's always something that's so important to keep in mind that almost any tool can be used for a variety of different purposes, and so the question that I think you all have to struggle with is, what is the right point at which to regulate the tool?
- Ziad Obermeyer
Person
And I think that--at least my view--is that the point of regulating access to data is a very difficult one because I think that's going to choke off a bunch of both good and bad uses of the tool, and similarly, trying to regulate the details of what algorithms do seems really, really hard to me as well as someone who spent a lot of time thinking about it. And so, at least for me, where it seems like we should be regulating is on the uses of the tool, rather than how we make the tool or what, what the data source of the tool could be.
- Ziad Obermeyer
Person
So if these data are being used to deny people insurance coverage, that seems to violate a bunch of the provisions around denying people coverage that are currently in place--
- Ziad Obermeyer
Person
And so we should aggressively prosecute those, and we might need new ways of regulating new nefarious uses of these tools, but it seems like regulating those downstream uses is a lot more targeted and precise than trying to regulate upstream because that's also going to prevent people from doing the life-saving parts of the tool.
- Rebecca Bauer-Kahan
Legislator
And I appreciate that. I mean, I would counter that with: I think the HIPAA and CMIA protections, which allow for use of data when de-identified and anonymized, which would prevent the harms I'm looking at and, I think, absolutely allow for the benefits that you're seeking, are sufficient. And so that is a data protection that is real.
- Rebecca Bauer-Kahan
Legislator
It is in place today, although I will note that, as this committee knows, because a bill moved through this committee as a result of this, I showed up at my healthcare provider, my pediatrician, and was asked to sign away my HIPAA and CMIA protections for the scribe that my doctor was using.
- Rebecca Bauer-Kahan
Legislator
In my conversations with the health systems, I've learned this was in part because I was at a smaller health system with less bargaining power. So the larger health systems appear to have bargained for a contract with the AI companies that protects patient data, and my smaller system had not.
- Rebecca Bauer-Kahan
Legislator
So you know, I was being asked to give up that de-identification--you know, and anonymization that I think is critical. So I will say that. Now, I want to go to Dr. Mello. You mentioned the developer-deployer dynamic, and I think it is one that we should just talk about for a second because I think it is an incredibly important one.
- Rebecca Bauer-Kahan
Legislator
We have these tech companies that are creating these AI tools and they are being deployed by people--to be frank--in the hospital setting with much more pure motivation, I think, than the tech companies that really are paying attention to patient outcomes as you are every day in your role, and there is this tension because right now current law puts much of that liability on deployers and it is unclear--courts are sort of looking at the developer role--but it isn't clear yet that they will have the same liability for outcomes that the deployers do.
- Rebecca Bauer-Kahan
Legislator
And this is something that some of our AI legislation has grappled with is what is the role of developer? What is the liability of a deployer? And it has been one that I've been fascinated by. When I started this conversation as Privacy Chair, I really thought we were going to have the deployer community stand up and say, 'we want developers to have some responsibility here because we are the ones as purchasers of these tools that are deploying them on humans that need to know that we can trust them.'
- Rebecca Bauer-Kahan
Legislator
I will say that has not been my experience with the deployer community writ large. Some have seen that benefit, but I think it's one that we really need to talk about because I think hospitals--who in large part are deployers, although I think, you know, maybe some of our larger systems are developing tools, but I think that's an outlier--need to, I think, be talking about this, their liability, what their role is, what they need from the companies that they're contracting with, because I think it helps us understand how we should be thinking about the law. So I just wanted you to talk about sort of what you would like to see out of developer liability.
- Michelle Mello
Person
Well, I think most developers are in their business because they want to make people's lives better. I don't ascribe ill motives to them, but I think they have good lawyers, and as a lawyer, you would advise your client as well, right, to put this into your terms of service if that was available to you.
- Michelle Mello
Person
The problem is that that is available to them because nobody says that they can't. And so what's happened now is a disruption of the ordinary understandings in tort law about when is a product maker liable and when is a product user liable.
- Michelle Mello
Person
Now, all sides in this ecosystem face an enormous amount of uncertainty, and there too I have some sympathy with developers. A lot of these developers are smaller companies, they're competing for capital with bigger companies, and when they have a lot of uncertainty around their legal risk, the price that they pay for capital is going to be higher and the price of their products is going to be higher.
- Michelle Mello
Person
So it's in everybody's interest to clarify this area so that people can get a handle on what their liability is going to be and how to insure against it, right, because the solution here is generally, you get insurance, but when people don't know how much risk to insure, it's really, really hard.
- Michelle Mello
Person
So what would be helpful, I think, is either banning these complete disclaimers of liability or passing legislation that sets out sensible limits on the liability of product makers, which for any other product is: when your product has a defect, you're liable for that. If somebody misuses it in an unforeseeable way, you're not liable.
- Michelle Mello
Person
I think those rules can be applied here. I think courts will get there, but I don't want to wait the 10 to 15 years it takes them to do that in a way that sends a clear signal to the industry. I think it would be better for that signal to come from something clearer.
- Ziad Obermeyer
Person
If I could just add one brief point from my experience in evaluating this family of algorithms that were found to be very racially biased, the developer didn't catch the problem, but neither did the hundreds of healthcare systems that purchased the product from that developer.
- Ziad Obermeyer
Person
And the doctors that were using it on an everyday basis also didn't catch it. The patients didn't catch it. So it was a real, I think, market failure to the point where not only I think do we need some sort of oversight of these things, but we also need the data to evaluate them at scale because a lot of these problems, even very big ones, can't be caught at the individual patient level in the same way that you can look at that clause in your terms of service thing. These things only emerge when you look at algorithms in large data sets.
- Darshana Patel
Legislator
Thank you for the presentations. Dr. Mello, I have a question for you, because I was very intrigued by what you brought up about IRB protocols potentially being applied to using patient data. Do we see that being robustly done when AI companies come in asking for that patient data? Because that seems to be a well-established practice, especially when we look at experimental use, and companies are asking for that data to help develop their tools. That sounds very much like experimental use.
- Michelle Mello
Person
So what I've seen is that there are three different sort of pathways. One is somebody is looking at AI in the context of a research study. They want to know if that thing actually improves people's cardiac health.
- Michelle Mello
Person
And there they go through an IRB, and the IRB will typically be very uninitiated into issues that are specific to AI but they will do their best to apply our ordinary rules for protecting human subjects. That's the exception, not the rule.
- Michelle Mello
Person
Pathway two is that nobody's generating new data, but what's going on is that a company would like to do something with data, patient data that we already have. And most large hospitals have now data use committees that will apply guidelines about when they give or in some cases sell patient data out to external companies.
- Michelle Mello
Person
So there's variation in practice, but in general, that's a thing that is on most organizations' radar screens already. And the third and most common pathway is that somebody in the hospital has gotten wind of a cool new gadget and would like to try it in a quality improvement sort of situation, and that's the situation where we have the least amount of governance.
- Michelle Mello
Person
People have typically already fallen in love with the technology, they've got a plan in place, and there is no requirement because it's not pegged to research funding or anything else that they go through a governance structure. Even if they wanted to, there usually isn't one for AI in particular.
- Michelle Mello
Person
So what's missing right now at most large hospitals, and I would venture to say all small hospitals, is a separate committee that deals with situations where someone wants to start using AI in the hospital, but it's not pegged to a requirement for governance because it's research or something else.
- Mia Bonta
Legislator
Thanks, and I think I'll just ask the last question. So I am, again, hyper-focused right now on Medi-Cal for legitimate good reasons in terms of our overall cost share to--and the number of people who are taking advantage of Medi-Cal and being supported by Medi-Cal.
- Mia Bonta
Legislator
Both, I think, Dr. Obermeyer and Ms. Carter, you talked about the application of AI for our safety net communities and potentially leveraging Medi-Cal in a way that we currently aren't doing now.
- Mia Bonta
Legislator
Can you just speak more specifically around issues related to equity, data bias within kind of our Medi-Cal community for developing AI or generative AI and what, if anything, you would recommend from a policy perspective we focus on as it relates to Medi-Cal and its application, AI application within that?
- Kara Carter
Person
Thank you for that question. I share that concern. I mean, my fellow panelists have spoken a lot about where hospitals are, where commercial institutions are. When we look at our Medi-Cal population, we're not just dealing with hospitals or tertiary centers; we're also dealing with primary care clinics, CBOs, a much broader range of providers that meet the needs of our most vulnerable citizens every day.
- Kara Carter
Person
When we talk to those folks--and you'll hear from some of them in the third panel--we hear an optimism and an enthusiasm for adopting the same range of tools that their counterparts in the commercial sector hold. Why is that? Because they see the benefits that accrue to providers.
- Kara Carter
Person
They see the benefits that accrue to staff. They see the potential for benefits that accrue to patients. And we hear a broad range of concerns around a kind of haves-and-have-nots divide that already exists and could be exacerbated by these kinds of technologies.
- Kara Carter
Person
So we talked a lot in the first panel about how providers welcome a reduction in pajama time, a reduction in paperwork. We already struggle in the safety net institutions in the state to recruit and retain providers across a range of specialties in primary care.
- Kara Carter
Person
When you can provide a benefit like that that is highly attractive to providers in our commercial institutions and our wealthiest institutions and you cannot provide it in our safety net institutions, you exacerbate a series of problems that already exist. So I'm not surprised that I hear enthusiasm from them.
- Kara Carter
Person
I'm not surprised--you know, we also hear grave concerns around governance from our safety net institutions, who are excited and willing and looking for opportunities to have better data governance and a better understanding of what they can do.
- Kara Carter
Person
They are super confused by the lack of clarity right now in the deployer-developer relationship and they're working to try to catch up really fast so they don't exacerbate many of these challenges that I just listed out before.
- Kara Carter
Person
So that's kind of what we see: an environment where people are eager to adopt something, see potential benefits, certainly have read the many studies and evaluations of challenges and bias that have come before, are trying to avoid those, and are trying to move quickly because the whole environment is moving extremely quickly.
- Kara Carter
Person
So what would be most helpful? Somewhat the things that I named earlier: a clear attention to bias and equity; a clear attention to who has liability when issues arise in the future; and a clear attention to how you fund, on a long-term basis, not just the deployment of these tools and the governance of them, but also the workflow changes required to implement them. What that takes in resource-strapped organizations is not to be underestimated.
- Kara Carter
Person
So all of these things are top-of-mind for me and top-of-mind in what we hear. And then we hear, certainly from safety net providers, a keen interest in where patients are. So how are patients receiving this?
- Kara Carter
Person
The experience that was named earlier, about the difference between being at a smaller institution or a larger one--our safety net institutions are very attuned to that. They want to provide the best patient experience, but perhaps don't have the same leverage in negotiation as other institutions, so they are very interested in how you address that. So those are the things that for me are top-of-mind.
- Ziad Obermeyer
Person
I think that there's a couple of deep disparities that really hurt programs like Medi-Cal, and one of them is very straightforward. It's just a data disparity.
- Ziad Obermeyer
Person
So having tried to work with a county hospital in the Bay Area to do the kinds of work that I do with other, much better-resourced hospitals, it is just so much harder because there are fewer people, the data infrastructure is less sophisticated, and the versions of the software being used are much older. So there are just a lot of things that make it much harder to get the data out of these safety net hospitals to put them into algorithms.
- Ziad Obermeyer
Person
And I think it's a lot harder also for researchers and people who want to evaluate algorithms to get access to the Medi-Cal data than, for example, to Medicare where there's a very clear access pathway. So I think that data disparity is one problem.
- Ziad Obermeyer
Person
The other disparity is, I think that along with that data disparity comes a disparity in what kinds of problems people are working on, and what you see in the market is that--there was a survey of health AI tools that were being developed, and I think that about half of the tools were being developed on the basis of data from Palo Alto, Rochester, Minnesota, and Cambridge, Massachusetts.
- Ziad Obermeyer
Person
And that's not just like a data problem, that's like what problems do those patients have? Those are different problems from the other patients in other parts of the world, and so I think beyond the data, there's a lack of cohesiveness and attention to certain problems. And I think the good news for Medi-Cal is that Medi-Cal is big enough that you can get people's attention.
- Ziad Obermeyer
Person
So if you laid out some priorities and gave people the tools and even just the permission to work on those things and say, 'okay, I'm going to set the goalposts for health AI or for other things,' you could actually really steer the market in the right direction and I think there are a few examples that we'd be happy to follow up on that are specific.
- Mia Bonta
Legislator
Thank you so much. We are going to move on to our next panel. Thank you so much for participating in this part of our discussion, and we are going to move on now to our final panel, if they could come forward. In the final panel, we'll hear from a range of stakeholder perspectives.
- Mia Bonta
Legislator
We're happy to be joined by the California Nurses Association, the California Medical Association, the Coalition for Health AI, LifeLong Medical Care, a Bay Area community clinic, and a representative from the Light Collective, a nonprofit that advocates for patients' rights in health AI.
- Mia Bonta
Legislator
Again, I'll remind all panelists to stay within the allotted three-minute timeframe for your remarks so we have time to hear from everyone. Thank you. And we will begin with Mr. Nielsen.
- Christopher Nielsen
Person
Good morning. My name is Chris Nielsen. I'm the Education Director for the California Nurses Association, where I also help to coordinate some of our union's work around AI. We know that even some of the most well-intentioned and carefully designed technologies can have unintended consequences when they're deployed in real-world settings, and I really appreciated some of the previous panelists' insights into how these tools function within the socio-technical systems--a term that we heard--in which they're embedded.
- Christopher Nielsen
Person
And I wanted to just call attention to some of the concerns that we have. Obviously some applications like generative AI nurse agents could lead to outright job displacement, but the more immediate concern for nurses is that generative AI systems could be used to actually justify continued understaffing, and I'd be happy to talk more about that later.
- Christopher Nielsen
Person
You know, we've heard a lot about the promise of AI to meet--to help address staffing shortages and the like, but I think we have concerns that it could have, in some instances, the opposite effect and that generative AI could also undermine nurses' clinical judgment and some of the other skills that are essential to providing safe and effective care.
- Christopher Nielsen
Person
So, you know, we've heard some of the studies about the accuracy issues and bias that generative AI tools can exhibit, so I won't review those, but in some instances inaccuracies and error rates can be alarmingly high, and we've seen in our own polling of members that bias continues to be a significant concern.
- Christopher Nielsen
Person
A frequent selling point is that generative AI tools will help save clinicians time, but here again we've seen in practice that they can wind up either not really providing meaningful time savings, or in some instances actually taking more of a nurse's time away from direct patient care because of some of the issues that we've heard about, you know, the need to review the AI outputs, correct errors, and that sort of thing.
- Christopher Nielsen
Person
We know that without nurses' collective voice, many employers could use generative AI systems in ways, you know, intentionally or not, that could wind up deskilling or devaluing nursing or even automating parts of nurses' jobs which may be seen as administrative or clinically adjacent, but are actually central to the skill, the art, the process of nursing.
- Christopher Nielsen
Person
And I'll just offer a few broad suggestions of how we can address these harms. Obviously, we need strong regulation around AI, and we need government's help in determining what the standards are for evaluating the safety, efficacy, and equity of these tools.
- Christopher Nielsen
Person
We need unions and workers to be not just at the table, but in the driver seat to have a significant say in decision-making about generative AI in the workplace, including, especially, the decision whether to deploy a tool in the first place.
- Christopher Nielsen
Person
We need to ensure that patients' and nurses' right to safety and high-quality care is guaranteed, and we could do that through a regulatory framework based on the precautionary principle, putting the burden of proof not just on deployers, as we've heard about, but on developers and vendors to demonstrate that their tools are safe, effective, and equitable before they're deployed widely in real-world settings, and not over-relying on the human-in-the-loop principle.
- Christopher Nielsen
Person
We've already heard about some of the limitations of that. And finally, we need to establish the right of workers to override or object to new technologies and workers should have the final power and say on the shop floor over how and whether these tools are used in their facilities.
- Christopher Nielsen
Person
We have other recommendations that we'd be happy to share, but at the core, CNA nurses want to safeguard the rights of nurses, patients, and the public. Thank you for holding this hearing and happy to answer any questions at the appropriate time.
- Brent Sugimoto
Person
Microphone. Thank you. Thank you for inviting me today and I really appreciate that you guys are tackling this topic. It's something that's very interesting, very important to me. My name is Brent Sugimoto. I'm a family physician and HIV provider.
- Brent Sugimoto
Person
I am the Program Director at the LifeLong Medical Family Medicine Residency, which is a teaching health center with a mission to build a primary care workforce for underserved populations.
- Brent Sugimoto
Person
The generative AI revolution we've talked about is already here, and there have been many perceptible changes, including to my practice. And while I can see the potential for a lot of the benefits, I want to highlight, as we've talked a lot about here, making sure that the safety net is not left behind, especially when it is already used to neglect and even harms, right, from our larger health systems.
- Brent Sugimoto
Person
And so I'd like to talk to you about three main issues: one, equitable access to GenAI tools; two, training data quality and AI governance; and three, workforce training and the effective and safe use of these tools. So we talked a lot about pajama time today, right, and this is something I'm very familiar with as a primary care physician.
- Brent Sugimoto
Person
I used to chart late in the night and now that we have an ambient scribe, I'm usually done by the end of the day, right, which means more time to cook dinner, to help my kids with homework, to get them to bed, right? Our family life is perceptibly better.
- Brent Sugimoto
Person
But I would say that one of the issues with these tools is that they're very expensive. So in our own health system we're able to afford them. We have the broadband access to be able to run these systems, but that's not true in a lot of other safety net settings and rural settings, right?
- Brent Sugimoto
Person
If you look at just the ambient scribe technology--for us, it was somewhere on the order of $500 to $600 per provider per month. And that's just this one solution.
- Brent Sugimoto
Person
We've talked about all these solutions that we need to try and buttress up our health system; there's no way safety net systems are going to be able to provide this. So community clinics need affordable access to tools like these so our primary care clinicians can sustainably care for our vulnerable patients.
- Brent Sugimoto
Person
These tools are great, but they're imperfect, and this is another thing I want to highlight, and I wanted to start with a story, my own personal story of caring for patients.
- Brent Sugimoto
Person
I was chatting with one of my patients who lives with HIV, and he was complaining about the time of year when all the pine needles fall from the tree and was complaining about how he would get scratches on his arms from cleaning them up, right?
- Brent Sugimoto
Person
And so when I got the transcript from our encounter, it mentioned intravenous drug use, which shocked me. And I'm like, where did this come from? And the only thing I could think of was that it heard needles and it heard HIV, and it concluded we were talking about IV drug use because that's a risk factor for acquiring HIV. So we talk about, like, potential harm to these technologies, right? This is coming from the bias in the data that is causing this to happen.
- Brent Sugimoto
Person
And I can talk about a lot of other incidents like this that I've seen in my own practice. So one thing that's really important--Dr. Obermeyer talked about this, Mr. But talked about this--is making sure that the data are representative of the patients that we are taking care of.
- Brent Sugimoto
Person
The other thing is, I was able to notice some of these shortcomings in the AI, and that has to do with my background, my interest in working in this area. But a lot of my colleagues don't have that, right? Would they be able to spot some of these potential harms?
- Brent Sugimoto
Person
And so this is where workforce training is really important, and in safety net settings at least, we need support in being able to train up our workforce so that they can use these tools, and use them safely, with our patients.
- Brent Sugimoto
Person
Finally, one thing I want to add on: Dr. Obermeyer was talking about the importance of leadership here, and in the safety net setting, right, the way we primarily care for patients is primary care. That's what we do; that's the first line here. When you look at the development of AI tools, where are they when you look at the FDA approvals? They're radiology, cardiology, dermatology.
- Brent Sugimoto
Person
How many devices have been approved for primary care? Very, very few, and I think this is actually one of the big divides that is exacerbating the haves and have-nots here. So we need some sort of leadership to help incentivize developers to focus on primary care, because this is where it's going to have the most bang for the buck for the improvement of health for our populations, right?
- Brent Sugimoto
Person
One of the things I think of is our payment structures and how those incentivize what developers work on, because primary care, right, doesn't generate the revenue for people to want to buy solutions in primary care, so no one works on developing them. Those are the things I wanted to highlight. I can't think of a better way to underscore how this is an issue of equity for Californians, and thank you for this opportunity to talk to you today.
- Brenton Hill
Person
Good morning, chair and members of the Assembly. My name is Brenton Hill, and I represent the Coalition for Health AI, or CHAI for short, the nation's leading nonprofit advancing responsible, trustworthy AI in health.
- Brenton Hill
Person
We're a multi-sector alliance of nearly 3,000 organizations, health systems, startups, industry, government and patient advocacy groups, including all of the UC Health systems, Kaiser Permanente, Stanford Health Care, and Sharp HealthCare. I want to briefly highlight four of the issues that we're working to solve and then provide some of the policy implications on what you might consider.
- Brenton Hill
Person
First is the lack of trust and transparency in AI. CHAI's developed standardized model cards, structured disclosures that allow for clear, consistent evaluations of AI solutions. We're also developing a registry to house these model cards that will be publicly accessible.
- Brenton Hill
Person
A model template of what that card looks like has been provided in your packet today. We commend the Assembly for passing AB 2013 and urge that vendors be held to enforceable disclosure standards to make that transparency meaningful.
- Brenton Hill
Person
Number two: health inequities from uneven AI readiness. At CHAI, we're developing readiness-based guidance tailored to different levels of digital maturity. For example, we're working with the National Association of Community Health Centers on frameworks for lower maturity health centers that would allow them to put the proper controls and governance measures in place to successfully adopt and deploy AI.
- Brenton Hill
Person
Three: insufficient validation and real-world monitoring. As was mentioned before, a recent Health Affairs study found that more than half of U.S. hospitals using AI tools have not validated them locally using local data, and most don't even evaluate for bias, putting underserved populations at higher risk.
- Brenton Hill
Person
To address this issue, we are certifying a network of assurance resources: vetted partners who can support hospitals with pre- and post-deployment evaluation using consistent, equity-focused methods. And last: fragmented guidance. Health systems and hospitals face a confusing patchwork of federal and state frameworks. CHAI translates this plethora of information and creates technically specific, operationally useful guidance.
- Brenton Hill
Person
And so, as you're considering future AI policy, we urge a principled yet adaptable approach, one that protects patients and providers. And so I want to bring out four things that we would want this Legislature to consider: one, promoting transparency while protecting innovation.
- Brenton Hill
Person
Requiring the use of standardized disclosure frameworks such as model cards or fact labels could reinforce this transparency, much like AB 2013 is expected to do in 2026. Number two: incentivize inclusive and representative data practices.
- Brenton Hill
Person
We've heard a lot about the need for more data from lower-resource settings, so this Legislature could support privacy-preserving collaborative data aggregation efforts and reduce structural barriers to data sharing among under-resourced organizations. Three: advance interoperability as a foundation for responsible AI by supporting startups, encouraging large vendors to enable access for validated third-party tools, and rewarding good governance practices. And last: support infrastructure readiness for lower-resource providers by promoting secure cloud access, EHR integration, workforce training, and internal governance structures. Thank you for the opportunity.
- David Ford
Person
Thank you, members of the committee. My name is David Ford. I'm the Chief Executive Officer of what's called CMA Physician Services. We're the practice transformation, practice coaching subsidiary of the California Medical Association.
- David Ford
Person
As we look at this issue, one important piece of federal context--and it was briefly referenced in the last panel--has to do with H.R. 1, the reconciliation bill currently being debated in the United States Congress.
- David Ford
Person
Just so everyone's aware, that bill contains a provision restricting states from, quote, 'enforcing any law or regulation regulating artificial intelligence systems for ten years.' A ten-year moratorium in the world of generative AI might as well be a millennium. When we look at how fast these systems are changing and improving, imagine what they will be ten years from now.
- David Ford
Person
So while we wait to see the outcome of that bill--that aside--AI really does have great promise to improve the healthcare system. It's been discussed a lot how it can handle routine administrative tasks such as documentation and patient scheduling. What's been less discussed in this hearing is that it also has the ability to process absolutely incredible amounts of data that would be difficult for human researchers, for purposes such as clinical trials and quality improvement.
- David Ford
Person
The challenges facing AI are very real. What Dr. Sugimoto was describing with his patient--what we sometimes refer to as a hallucination--is real, and it's something that the entire tech industry as well as the healthcare system is working to solve.
- David Ford
Person
But the bigger issue--and it's been referenced many times in this hearing already--is the threat of creating AI haves and have nots in the healthcare system. We need the large integrated medical groups, the academic medical centers, the tech companies doing what they're doing right now because it's important.
- David Ford
Person
They have the resources and they can. They can develop these tools, they can deploy them, they can test them. That's very important and their resources are important to what we're doing right now. The question is how we translate that into safety net settings, small medical practices, community health centers, critical access hospitals.
- David Ford
Person
I posit to you there actually are some models we could look at. I personally developed and ran a federally designated regional extension center that helped more than 10,000 providers implement their EMRs. The CalHHS data exchange framework is currently helping many, many providers across the state implement data exchange, and it's possible that we need to copy one of those models for AI. Thank you.
- Christine Von Raesfeld
Person
Good morning, Assembly Members Bonta and Bauer-Kahan, committee chairs, and my fellow panelists. My name is Christine Von Raesfeld, a board member for the Light Collective and a co-author of the AI Rights Initiative. Our mission is to advance the collective rights, interests, and voices of patient communities in health technology and to ensure AI serves patients.
- Christine Von Raesfeld
Person
Our motto is: no aggregation without representation. AI is revolutionizing healthcare, but there is a critical gap. Today, patients have no enforceable rights governing the use of AI in our medical system. This leaves vulnerable individuals exposed to decisions made without their input, often leading to bias, blurred processes, and ultimately harm.
- Christine Von Raesfeld
Person
According to our 2025 Tangled in the Web report, 91% of patients want to be informed of how AI influences their care, and nearly 74% express concern over the lack of privacy safeguards and legal recourse when errors occur. These statistics underscore the urgency of embedding patient protections into the framework of AI-driven healthcare.
- Christine Von Raesfeld
Person
Our response at the Light Collective has been to develop the Patient AI Rights Initiative, a framework built on seven enforceable rights designed to ensure that AI serves patients, not just the corporate interests. Let me briefly outline some core components.
- Christine Von Raesfeld
Person
Patient-led governance: patients must have a seat at the table, co-creating policies and serving on governance boards to ensure that decisions reflect real needs. Transparency: patients should receive clear, accessible explanations about how AI makes decisions and what data it uses.
- Christine Von Raesfeld
Person
Privacy and security: robust measures are essential to protect patient identities and sensitive health data, guarding against misuse and potential re-identification. Legal recourse: when AI-driven decisions lead to harm, patients need actionable, legal pathways to seek redress and accountability. Without these measures, the promise of AI in healthcare is undermined by risk and inequality.
- Christine Von Raesfeld
Person
We must shift the paradigm from one driven solely by profit to one that protects those at its very heart, the patients. I urge you as policymakers to act decisively. There is an immediate need for legislation that codifies these patient rights into healthcare policy.
- Christine Von Raesfeld
Person
By mandating patient-led governance, fostering transparency, and embedding strong privacy and recourse provisions into the regulatory framework, we can ensure AI is developed and deployed ethically and equitably. Patients are not mere data points. We are individuals whose lives are directly impacted by these technologies.
- Christine Von Raesfeld
Person
Now is the time to take bold steps to safeguard our health and our privacy. Thank you for your time and for considering the urgent need to protect patients in this rapidly evolving technological landscape.
- Rebecca Bauer-Kahan
Legislator
I don't have as many questions for this panel, but I really appreciate it, and I appreciate some of the real-world examples of how this actually plays out, and the comments made here.
- Rebecca Bauer-Kahan
Legislator
I think we often forget the power that we have as the fourth largest economy in the world in our own purchasing power, which I think is what the chair really focused on in her question.
- Rebecca Bauer-Kahan
Legislator
And something I think we should be pushing harder on is how we buy good products that are responsible, that put patients first, that eliminate bias, that have good governance--and create a market for that, such that we are moving the needle for the people who need us most. So I just want to thank you.
- Rebecca Bauer-Kahan
Legislator
I think it's so critically important, as we have these AI conversations, that we center civil society. And I think many of you have done that in a way that is really powerful, because human-centered decision-making around AI is, I think, what will allow it to reach its promise and prevent some of the risks. So I just appreciate the comments of this final panel.
- Mia Bonta
Legislator
Thanks. This is probably for Mr. Ford and Mr. Hill--well, you all can decide. I think several of you talked about the need to provide additional infrastructure support within our safety net settings and what that might look like specifically. I think you ran through three or four different things.
- Mia Bonta
Legislator
I think, combined with the sensibility around a patient-centered approach to care, those are all things that are always on our mind. Can you just rewind a little bit and highlight some of the areas where you think we would be able to make some traction in the near term, given the potential overlay of not being able to do much?
- David Ford
Person
Yeah, I think we have to keep talking as though that provision doesn't pass in HR 1, or there's not much to talk about here, right? But to take a step back, because I was trying to stay within my time limits.
- David Ford
Person
You know, as technology has rolled out, this concern about the haves and have-nots has come up over and over again. As I mentioned, electronic medical records were very much the province of very large health systems and integrated medical groups.
- David Ford
Person
And then, as you know, when President Obama signed the stimulus act and we tried to move them through the system, we found that, yes, providing resources to safety net providers was important. There was a whole federal incentive program that I think we can't hold our breath and wait for at the moment.
- David Ford
Person
But there was also a lot of technical assistance that was provided to providers about how to choose the right system for your practice, how to implement a workflow, how to use it correctly, how to make sure you're providing better patient care. That I think was very important in how we rolled out electronic medical records in this country.
- David Ford
Person
And as I say, a very similar process has been taking place for the last few years, hopefully everyone's aware, under CalHHS and the Data Exchange Framework, where we did have resources to go out and educate providers and others about why data exchange is important, how to use it, how to pick a system, and how to implement it into your practice workflow. All of that matters in safety net settings.
- Mia Bonta
Legislator
Just to build on that, this is my last question, and you all can feel free to answer as such. We heard a lot about the cost of implementation and ongoing monitoring, and about our ability to ensure that we're upskilling our workers so they can fully integrate this.
- Mia Bonta
Legislator
And we also heard about the potential impact, I will be kind and say unintended consequence, of basically deskilling our workers and eliminating our workforce within that.
- Mia Bonta
Legislator
Can you all just speak, from those of you who are practitioners more on the ground, which is where you all sit, to how we can ensure that we are building out our AI implementation and integration models in our safety net services places in particular in a way that preserves our workforce and ensures that we can take the best of what AI has to offer without creating unintended consequences?
- Brent Sugimoto
Person
I think this is a very difficult question for us to answer right now, especially with what's going on with Medicaid funding, right? As it is for us in the safety net setting, we don't have a lot of extra time besides patient care, and with the threatened cuts that could possibly happen, that's going to put even more pressure on us.
- Brent Sugimoto
Person
We heard from other panelists who talked about all the governance structures you need to put in place. When I talk to my colleagues in FQHCs, we look at that and we're like, how do we do that in our systems, right?
- Brent Sugimoto
Person
We are kind of just trying to figure out how we provide access, how we make sure patients get seen, and how we make sure we even follow up. So one model I think could be really helpful, if you are familiar with OCHIN, right?
- Brent Sugimoto
Person
And that's something that we're a part of, and I think it's been really important, because we are collectively working together on these things: a bunch of safety net systems collaborating to look at these issues of governance that we can't address on our own, right?
- Brent Sugimoto
Person
And I think we need more ways of fostering these kinds of collaborations. Because on our own, well, like they say, we hang together or we hang apart, right? That's what's going to happen to us here.
- Dawn Addis
Legislator
Just very briefly, to expand on that question, it occurs to me that much of this is going to have to do with how we're training the incoming workforce. I'm not sure if any of you can answer this, but just what we should be aware of as policymakers, along the same lines.
- Dawn Addis
Legislator
So there's concerns around deskilling workers or replacing workers who are already in the workforce. But we also need to get people into the workforce, particularly in rural areas and underserved areas. There's already not enough doctors, not enough nurses, not enough healthcare workforce at all, and we're trying to find everything we can to fill the gaps.
- Dawn Addis
Legislator
And we don't want to let go of people who can come in with the skills that all of us need. So what should we be thinking about as a legislature and policymakers when it comes to workforce development and the intersection between AI, training, and the next generation of healthcare workers?
- Brenton Hill
Person
Yeah, I can kind of take a stab at that. I mean, I think one of the most important considerations there and what we're finding across the board is transparency.
- Brenton Hill
Person
So if you know what an AI solution can do, if you know its intended use, if you know its purpose, if you know its limitations, you're able to then train someone at your institution to say, use it for this, this, and this. Don't use it for this, this, and this. Don't use it on, say, a certain patient subpopulation or in this situation.
- Brenton Hill
Person
Then you can go and use an AI tool with full confidence, knowing that, okay, I'm using it correctly, I'm using it as it should be used, and it's going to perform in the way that was disclosed to me.
- Brenton Hill
Person
And so I think that's one easy way we could think about right now that doesn't cost a lot of money: simple transparency, and simple training aligned to the disclosures the developers have made for those transparency types of initiatives.
- Christopher Nielsen
Person
Yes, so just in terms of the questions around deskilling and the workforce, one thing I think it's important to note, and I can't speak to other professions or occupations, but at least when it comes to nursing, we hear a lot about the nursing shortage and how AI can be one potential tool to solve for that.
- Christopher Nielsen
Person
In California, there's actually no shortage of nurses. We do have a staffing crisis, yes. But we have about half a million nurses in California with active licenses, and only about 326,000 working as registered nurses. So you have to ask why they are not choosing to work right now as registered nurses.
- Christopher Nielsen
Person
One of the biggest things that we hear from nurses is that it's because of intentional understaffing at a lot of profit-driven, market-driven facilities. So we have to ask if AI is actually the right tool to solve that problem, or if there are deeper systemic roots to these staffing crises that we're seeing.
- Christopher Nielsen
Person
And if AI is used, I guess it can be used, as we've heard, for different purposes as a tool. It could potentially free up time for clinicians to spend more time on direct patient care.
- Christopher Nielsen
Person
But those efficiencies could also accrue to management, and those nurses could just get more patient assignments if management says, hey, with this AI tool you can be more efficient and more effective.
- Christopher Nielsen
Person
That's not necessarily going to improve the working conditions and bring more nurses back to the bedside if it's used in that way. So I just wanted to note that.
- Mia Bonta
Legislator
Thank you so much. I think we, for the sake of time, we need to wrap up our hearing at this point. So I want to thank our panelists for coming forward and being able to offer any information.
- Mia Bonta
Legislator
I think all of us have been very stimulated, and I'm sure you will be hearing follow-ups from many of us on many of these issues. I'm going to invite us now to move to public comment as the panelists excuse themselves.
- Mia Bonta
Legislator
This hearing has been very eye-opening for me and, I'm sure, my colleagues, and we appreciate the information that has been shared by all of our panelists: developers, researchers, healthcare professionals, facilities, plans and insurers, foundations and nonprofits, patients, and more.
- Mia Bonta
Legislator
You all have important roles to play to make sure we can harness the benefits of these technologies while managing the risks and mitigating their potential harms. This hearing has been helpful to me and, I'm sure, to others as we build out a shared understanding of what's happening, the challenges, and the impacts.
- Mia Bonta
Legislator
I challenge all of us to use this information to help build the human-centered health care system the people of California deserve. With that, we will move to public comment. I, as chair, will certainly stay, and I encourage others to stay as well.
- Mia Bonta
Legislator
But just know that we might get in a little trouble for not leaving, but I'm staying, so go for it. Yes, please go ahead.
- Rebecca Bauer-Kahan
Legislator
Yeah, thank you, Madam Chair. I just want to sort of say two things. One is to lift up what Mr. Nielsen was saying about putting the people in the workplace in the center of the decision making.
- Rebecca Bauer-Kahan
Legislator
I think that is how we make good AI policy, and it really will lead to augmentation versus replacement, which I think is the goal here. But secondly, it was mentioned many times, so I just wanted to drive the point home last: currently the Federal Government is debating, I wouldn't even go so far as debating, but considering, a 10-year moratorium on AI enforcement in the states.
- Rebecca Bauer-Kahan
Legislator
And that was mentioned briefly by many of our panelists and I wanted to highlight it here as we close because I think this conversation today hits home the value of states being in the space to protect our own residents around AI policy and ensuring that we meet the moment with promise and not with the risks.
- Rebecca Bauer-Kahan
Legislator
And if that moratorium were to pass, I think that we would have disastrous consequences that were highlighted here today. And so I just think it's really important that we close ensuring that we are doing our best to allow for the states to move forward as they should.
- Mia Bonta
Legislator
Thank you so much, Chair. We will have public comment of one minute.
- Mark Farouk
Person
Thank you. Good morning, Mark Farouk on behalf of the California Hospital Association, representing over 400 hospitals and health systems. First I want to thank both Chairs and the committee members for this important discussion this morning.
- Mark Farouk
Person
CHA's members engage in risk assessment and governance on the use of AI to ensure that it is being used for patient centered approaches that improve outcomes and access to care. I want to reiterate comments made in the first panel that human healthcare providers review AI data and make final decisions based on their trained medical judgment.
- Mark Farouk
Person
AI is an incredible tool in the clinical setting, but it is a tool to assist patient care, not make independent care decisions. Also, I want to share the concerns expressed regarding the cost of compliance and utilization of these tools.
- Mark Farouk
Person
As members of this committee are all too aware, health care funding faces unprecedented constraints as we see potential sweeping cuts at the federal level to the Medicaid program as well as proposals to cut Medi-Cal funding in the state budget.
- Mark Farouk
Person
We must ensure that we balance regulation with innovation so that we do not increase costs in a way that leads to smaller systems and safety net providers and their patients being left out. Thank you.
- Kathy Sunderling
Person
Thank you, Madam Chairs, for the hearing. Kathy Sunderling McDonald with the California Pan-Ethnic Health Network. CPEHN absolutely understands the promise of AI.
- Kathy Sunderling
Person
It can do so much to extend the ability of our healthcare system as well as through things like the Governor's Gen AI initiative, allow our state to do data analysis and ensure that we're holding providers accountable, especially in programs that we fund such as Medi-Cal.
- Kathy Sunderling
Person
We also very much appreciate the steps that have been taken here to regulate AI in California, but share the concerns regarding the risks that have been raised here. There's no real formal stamp of approval from the FDA or some governing body. And as we know and have heard today, it can exacerbate existing biases and disparities.
- Kathy Sunderling
Person
We agree California can and should do more to mitigate those biases. Some of the ideas include a centralized AI governance structure, a more formalized California-type stamp of approval, and certainly listening to the voices of communities and the underserved, those with language needs, those that CPEHN helps to represent, to ensure equitable implementation.
- Kathy Sunderling
Person
We think a mix of incentives and accountability seems like a good approach: hold both developers and deployers accountable, create incentives for equitable and ethical systems, ensure that those systems are deployed as intended, and then monitor them so we can understand what they're really doing and how they're really working.
- Kathy Sunderling
Person
And finally, the concept of a patient AI Bill of Rights is one that we would also support: informed notification to consumers and some sort of centralized complaint system or oversight mechanism.
- Kathy Sunderling
Person
So when things may go awry, there's a way that people can receive some kind of compensation, or at least be able to report it and really have it investigated and looked into. This has such promise for our entire system. We appreciate your goals of making sure that there are no communities left behind. Thank you.
- Ryan Spencer
Person
Good morning, Madam Chairs. Thank you as well for this important, thoughtful hearing. Ryan Spencer, on behalf of OCHIN's California Collaborative, a nonprofit health IT network representing 93 community-based organizations serving over 1.0 million patients, including over 715,000 medical patients.
- Ryan Spencer
Person
As Dr. Sugimoto stated, OCHIN member clinics have united around a shared, modernized health IT infrastructure that lowers costs, improves care, and enables delivery system transformation. This infrastructure is essential to safely and equitably adopting AI. Without it, the digital divide becomes an AI chasm.
- Ryan Spencer
Person
To fully participate in an AI future, safety net providers also need workforce training, streamlined regulations, and integrated care models using AI tools. OCHIN urges the state and this committee to invest in the infrastructure, training, and policies needed to ensure community clinics and public health agencies can lead in California's AI-driven health future. Thank you for your time.
- Mia Bonta
Legislator
Thank you so much. Very much appreciate our colleagues fully preparing and engaging in this very important conversation. And to all of our panelists who brought such thoughtful and robust conversations to the California State Legislature on the Assembly side. With that, we are adjourned.
No Bills Identified