Hearings

Assembly Standing Committee on Privacy and Consumer Protection

May 27, 2025
  • Rebecca Bauer-Kahan

    Legislator

    Yeah. Good morning. We're going to call this hearing of the Privacy and Consumer Protection Committee to order. Today we are having an informational hearing on AI risks and mitigation. We are here in room 437 if any Committee Members want to come over and join us. We'll be going for the next couple of hours talking about AI.

  • Rebecca Bauer-Kahan

    Legislator

    So let me start by first thanking our panelists for being here and participating in today's hearing. We are so fortunate to have several leading experts in the field here today to share their brilliance with us.

  • Rebecca Bauer-Kahan

    Legislator

    I also, of course, want to thank the incredible Committee staff that I get to work with every day, as well as rules for setting up this hearing and the sergeants and Capitol support offices here to do technology and support.

  • Rebecca Bauer-Kahan

    Legislator

    Over the last year, this Committee has held or participated in several informational hearings on AI, including one last year that gave a general overview of AI so we could have a shared understanding of what it was we were trying to understand and hopefully create boundaries around.

  • Rebecca Bauer-Kahan

    Legislator

    We had one on AI in the arts, working with Hollywood and many of California's most creative individuals to talk about the impact on their workforce. We had a Joint Hearing with the Labor Committee on AI and the workforce really focused on the way it would affect California's future.

  • Rebecca Bauer-Kahan

    Legislator

    And earlier this year, we looked at the issue of tech enabled violence against women, including how gen AI is being weaponized against women and girls. Tomorrow we'll have another informational hearing on AI and healthcare.

  • Rebecca Bauer-Kahan

    Legislator

    All of these hearings are incredibly important to set the stage for how AI can benefit California and how we can build trust by passing important regulation. Today we're diving into two specific technologies that have implications across every sector of the economy: automated decision systems and frontier models.

  • Rebecca Bauer-Kahan

    Legislator

    Both terms fall under this umbrella of artificial intelligence, but they are incredibly different technologies with very different capabilities, benefits and risks. Automated decision systems are used for narrow purposes to assist or make decisions, often in consequential contexts, but not always, such as employment, healthcare or criminal justice.

  • Rebecca Bauer-Kahan

    Legislator

    Frontier models, by contrast, are literally at the frontier of the known capabilities of artificial intelligence. These are the most expensive, powerful models with a broad range of capabilities, and many of them are being built right here in our great state.

  • Rebecca Bauer-Kahan

    Legislator

    Last year, the Legislature passed a number of bills addressing discrete issues with AI, such as transparency when AI is used in patient communications, expanding our child sexual abuse material laws to apply to artificial intelligence, and giving celebrities and their families more control of their digital likeness in this artificial intelligence age.

  • Rebecca Bauer-Kahan

    Legislator

    But broader attempts to regulate AI, including automated decision systems and frontier models, did not make it over the finish line, and these issues remain pressing. Automated decision systems are becoming more ubiquitous. Nearly all Fortune 500 companies use AI to assist with hiring. The issue of bias in the outputs of these machines could become more rampant if left unchecked.

  • Rebecca Bauer-Kahan

    Legislator

    I will note that although California has not achieved regulation in this space, many other states have gone before us. As for frontier models, the latest models are exhibiting astonishing new capabilities. Since we had this conversation last year, these models have improved.

  • Rebecca Bauer-Kahan

    Legislator

    Many developers and scientists claim we are on the cusp of creating AI that can match or exceed human capabilities at any task. This could solve global challenges, but it also raises the possibility of catastrophic outcomes if used by bad actors or if humans lose control of the models.

  • Rebecca Bauer-Kahan

    Legislator

    Just last week I had the privilege of being in conversation with some of the scientists that are studying the real risks to safety of these AI models. And I will say, having these scientists, mostly women I will note, working on this problem gives me hope.

  • Rebecca Bauer-Kahan

    Legislator

    But I also know that we need to take steps in that direction to protect communities. When we set this hearing, things were different. But now this comes against the backdrop of a Federal Government that is considering a 10 year moratorium on all state regulation of AI.

  • Rebecca Bauer-Kahan

    Legislator

    The Federal Government has not stepped in to create safety around these technologies, and they want to stop us from doing so. I want to be clear, as I have before, that I believe AI has potential both to grow California's economy and to improve the lives of Californians. We see this every day.

  • Rebecca Bauer-Kahan

    Legislator

    Whether it is in the AI that is detecting cancer at an earlier stage or making folks' jobs more productive and interesting, it has potential. But a wholesale moratorium on any regulation is simply reckless. And from an industry perspective, I also can't see why this is desirable.

  • Rebecca Bauer-Kahan

    Legislator

    It serves to penalize the good actors who are doing the right thing today. And those folks are taking appropriate precautions. They're building trust. And this moratorium will create a race to the bottom. California has the opportunity to show there's a better way. We are home to 32 of the 50 biggest AI companies.

  • Rebecca Bauer-Kahan

    Legislator

    We are in conversation with them and the people working in those companies every day. We've proven that regulation and innovation are not necessarily in tension. Smart regulation helps promote healthy competition, and public trust is necessary for these markets to flourish.

  • Rebecca Bauer-Kahan

    Legislator

    When you look at the polling of Californians, it is clear they want to trust these models, they want to use them, but they need us to step in and ensure that they are trustworthy. We're not here to bash the technologies.

  • Rebecca Bauer-Kahan

    Legislator

    We're looking at where they may fail or be misused and how these issues can be mitigated, rather than stifling new technology. Targeted regulation helps the industry by ensuring baseline levels of safety and effectiveness are in place. So today we have two panels. The first will focus on automated decision systems and the second on frontier models.

  • Rebecca Bauer-Kahan

    Legislator

    Each will involve a discussion of key risks associated with these technologies and some ways of mitigating the risks.

  • Rebecca Bauer-Kahan

    Legislator

    I hope that over the course of the hearing, the Committee and the public and everybody in the Capitol watching will develop a better understanding of the differences in the types of risks different AI poses, from concrete and commonplace individual harms to more speculative harms that could have catastrophic consequences on a societal scale.

  • Rebecca Bauer-Kahan

    Legislator

    With that, we're going to turn it over to our first panel, where we'll begin with Professor Arvind. Oh, I don't want to mispronounce your name. Narayanan, who is presenting remotely and will provide an overview of AI and automated decision systems. Professor Narayanan, thank you so much.

  • Arvind Narayanan

    Person

    Chair Bauer-Kahan, I apologize. I hear a little bit of feedback. Thank you for the opportunity to speak to you today virtually, and I apologize for not being able to be there in person. My name is Arvind Narayanan. I'm a Professor of computer science at Princeton University and the Director of the Center for Information Technology Policy.

  • Arvind Narayanan

    Person

    I research and teach artificial intelligence and its societal effects. I would like to give a very brief overview of artificial intelligence, or AI, as well as the concerns specifically around automated decision making. As Chair Bauer-Kahan has already alluded to, AI is not one single technology. It's an umbrella term for a collection of loosely related technologies and applications.

  • Arvind Narayanan

    Person

    It shouldn't be surprising that there is no single consensus definition of AI. And it's also hard to make overarching claims about AI's benefits and risks, or how it should be regulated. Instead, we need to get more specific.

  • Arvind Narayanan

    Person

    To do so, computer scientists might divide up AI based on the technologies, such as whether it's supervised or unsupervised learning. However, when we want to understand their societal effects, I believe it's much more appropriate to divide it based on the type of application.

  • Arvind Narayanan

    Person

    So when we look at the types of applications that have the biggest impacts on people's lives, a few clearly rise to the top. Number one for me is automated decision making systems in different domains, such as criminal justice, employment, health care. These usually use predictive logic or what we might call predictive AI.

  • Arvind Narayanan

    Person

    Number two, generative AI systems such as chatbots and image generators, which are built on what are called foundation models. And a third application for me would be social media platforms, which use AI called recommendation algorithms to determine what people see when they scroll through their feeds. Another one that I would highlight is robotics applications such as self-driving cars.

  • Arvind Narayanan

    Person

    I'm not claiming that this is a complete list. New important categories of applications emerge all the time, but I do believe that this is the level of specificity that is necessary in order to have a useful conversation. In the rest of my remarks I will talk about automated decision making systems, which are the topic of this panel.

  • Arvind Narayanan

    Person

    So again, these systems make highly consequential decisions about people that affect our health, employment, education and even our freedom. Now to be clear, the use of automated decision making without adequate oversight has led to a great deal of harm even without the application of AI.

  • Arvind Narayanan

    Person

    For instance, in 2013 the Netherlands employed an algorithm to find welfare fraud and people were not given the opportunity to appeal incorrect decisions.

  • Arvind Narayanan

    Person

    What happened was the algorithm wrongly accused about 30,000 parents of welfare fraud in childcare payouts and in some cases the government said they owed back hundreds of thousands of euros and this sent many parents into mental and financial ruin. There was eventually a scandal. It resulted in the resignation of the Prime Minister and the entire cabinet.

  • Arvind Narayanan

    Person

    This is not an isolated example. There have been similar large-scale miscarriages of justice in many places: in the UK, the Robodebt scandal in Australia, I believe, and in Michigan as well here in the US. Now let's bring AI into the mix.

  • Arvind Narayanan

    Person

    In a recent study with Angelina Wang, Sayash Kapoor and Solon Barocas, we cataloged close to 50 applications of AI for automated decision making. These are widely used in both the public sector and the private sector. Most of these applications have two things in common. First, predictions about people are used to make decisions about them.

  • Arvind Narayanan

    Person

    A prediction that a defendant might either recidivate or fail to appear in court is used to detain them pretrial, or a prediction about whether a student will drop out of school is used to target interventions toward certain students.

  • Arvind Narayanan

    Person

    The second thing that these applications generally have in common is that data from the past is used to make predictions about the future, and that data comes from past human behavior and human decisions, such as arrest records in the case of criminal justice. So that data carries the imprint of human biases both historical and present.

  • Arvind Narayanan

    Person

    And so we should expect that these automated decision making systems by default are going to be biased unless specific steps are taken to correct that bias. However, even if such steps are taken, there remains a deeper problem.

  • Arvind Narayanan

    Person

    Core to the logic of predictive AI is foreclosing the agency of the person, whether it's the defendant in criminal justice, or the job applicant, or the student or the patient, whoever the decision subject is. The system prejudges them based on statistical patterns in the behavior of people in the past and essentially denies them the opportunity to show that they are capable of defying those statistics.

  • Arvind Narayanan

    Person

    So given the seriousness of this, it's crucial to ask how accurate even are these predictions? It turns out in many cases they're only slightly more accurate than a coin flip. For example, criminal risk prediction tools typically have an accuracy of roughly in the ballpark of 70%, and 50% can be achieved by random guessing.

  • Arvind Narayanan

    Person

    So this is only slightly more. But even more problematically, this accuracy of around 70% can be matched by a formula with just two variables: the defendant's age and the number of prior arrests. So why is this? Questions such as whether someone will commit a crime, pay back a loan, et cetera.
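
    A minimal sketch of the kind of two-variable baseline described here, assuming a simple linear rule over age and prior arrests; the weights, threshold, and sample records are hypothetical, not drawn from any real tool or dataset:

```python
# Hypothetical sketch of a two-variable risk rule of the kind described above:
# score a defendant from age and number of prior arrests only, then measure
# accuracy against (made-up) outcomes. Weights, threshold, and records are
# illustrative assumptions only.

def risk_score(age: int, prior_arrests: int) -> float:
    """Simple linear rule: more priors and younger age push the score up."""
    return 0.6 * prior_arrests - 0.05 * age

def predicted_to_reoffend(age: int, prior_arrests: int, threshold: float = 0.0) -> bool:
    return risk_score(age, prior_arrests) > threshold

# (age, prior_arrests, actually_reoffended) -- made-up records for illustration
records = [
    (19, 3, True), (45, 0, True), (23, 1, False), (31, 4, True), (52, 2, False),
    (27, 0, False), (22, 5, False), (38, 1, False), (29, 2, True), (60, 0, False),
]

correct = sum(
    predicted_to_reoffend(age, priors) == outcome for age, priors, outcome in records
)
print(f"accuracy: {correct / len(records):.0%}")  # 70% on these made-up records
```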

  • Arvind Narayanan

    Person

    These depend on a huge number of factors, many of which are unknown, even unknowable, at the time of decision making. That means that there is an intrinsic limit to the accuracy of predictive AI. And this remains true, we think, even if developers try to use cutting edge methods such as foundation models in order to make these decisions.

  • Arvind Narayanan

    Person

    At best, this technology can offer broad statistical generalizations that can definitely be useful. However, many, although not necessarily all, predictive AI based decision making systems are simply ineffective. In my writing with my co-author Sayash Kapoor, we have called them AI snake oil.

  • Arvind Narayanan

    Person

    I want to point out that this is in sharp contrast to Panel 2, where most of the concerns, such as biorisk, arise primarily from the fact that generative AI systems tend to be highly capable, as opposed to ineffective. So to emphasize: the limitations of predictive AI have real human costs.

  • Arvind Narayanan

    Person

    For example, Medicare providers have started using AI to estimate how much time a patient will need to spend recovering at a hospital. These estimates are often incorrect. There was one story in which an 85 year old woman was predicted to be ready to leave the hospital in 16.6 days.

  • Arvind Narayanan

    Person

    That might seem like an oddly specific and precise number, but the sort of veneer of accuracy is typical of predictive AI tools. When 17 days passed, she was still in severe pain and wasn't able to walk, but nonetheless, based on the AI prediction, her insurance payments stopped.

  • Arvind Narayanan

    Person

    When we looked at a broad selection of predictive AI systems, we found that developers tend to claim that they're fair, that they are fast and they are accurate. They sell them to companies and governments based on the promise of full automation.

  • Arvind Narayanan

    Person

    But inevitably problems tend to arise, and when they do, the developers retreat to the fine print that says the tool shouldn't be used on its own. For example, Toronto used an AI tool to predict when a public beach would be safe. The developer claimed over 90% accuracy, but the tool did pretty badly.

  • Arvind Narayanan

    Person

    In fact, on a majority of days when the water was unsafe, beaches remained open based on the tool's assessments. This was a typical case in which theoretically the tool wasn't supposed to be used on its own. But as far as journalists could tell, city officials never overrode the recommendations of the tool.

  • Arvind Narayanan

    Person

    Now, I want to point out that being evaluated by an ineffective AI system is its own kind of unfairness, even if it doesn't necessarily have a disparate impact by race or gender. For example, investigative journalists got hold of a tool that claims to score job candidates by analyzing a video of the candidate.

  • Arvind Narayanan

    Person

    What they found was that when you took the same video and digitally altered the background so that it contained a bookshelf, the substantive content didn't change, but the scores produced by the tool improved dramatically. This is an example of a broader pattern.

  • Arvind Narayanan

    Person

    Many automated decision making systems on the market are effectively making sort of arbitrary decisions about people without any opportunity for appeal or recourse. Let me end by saying that policymakers have a dual role to play when it comes to mitigating the risks here. One is as a procurer of these systems and the other is as a regulator.

  • Arvind Narayanan

    Person

    By establishing guidelines for public sector use, governments can ensure the vendors of rights respecting AI systems have a leg up in the market. By creating inventories of public sector automated decision making systems, they can help journalists, researchers and the public understand the many ways in which algorithms have power over our lives.

  • Arvind Narayanan

    Person

    And finally, in terms of regulation, policymakers should establish standards for the effectiveness of automated decision making systems when feasible and create requirements for explanation, contestability, impact assessment, among many other steps. Thank you so much for your time and for having me here today.

  • Rebecca Bauer-Kahan

    Legislator

    Thank you so much. If you can stick around online, we're going to hear from our other two panelists and then go to questions. I'm sure folks will have questions for you. We will invite up Alondra Nelson and Cathy O'Neil, both of whom are here in person.

  • Rebecca Bauer-Kahan

    Legislator

    While they come up, I will say I really appreciate you pointing out the power of California's purchasing power and the impact that can have on safety and the market.

  • Rebecca Bauer-Kahan

    Legislator

    I had the privilege of being at Lawrence Livermore Lab last week, and something they pointed out was that they buy different chips for each of their supercomputers because they believe competition in chip manufacturing makes us safer, and so they can use their buying power to ensure that more companies thrive.

  • Rebecca Bauer-Kahan

    Legislator

    And I thought that was such an interesting way of thinking about antitrust that we can actually create markets with just our purchasing power. And I think the same is true here. So with that, I will turn it over to Alondra Nelson.

  • Alondra Nelson

    Person

    Thank you very much, Chair Bauer-Kahan and Vice Chair Dixon and Members of the Committee for having me here today. I'm Alondra Nelson. I am the Harold F. Linder Professor at the Institute for Advanced Study in Princeton, New Jersey.

  • Alondra Nelson

    Person

    And for two years I served in the Biden-Harris Administration in the White House Office of Science and Technology Policy. I'm speaking here today in my personal capacity. I'm going to pick up where my Princeton neighbor Arvind left off to talk a bit about algorithmic discrimination and hopefully cue up Dr.

  • Alondra Nelson

    Person

    O'Neil to talk about some of the incredible work that she's doing. I want to begin with just a definition of algorithmic discrimination.

  • Alondra Nelson

    Person

    This is a definition that we developed in the White House. It's in the Blueprint for an AI Bill of Rights, a guidance document that was released in October of 2022 during my time there. It defines algorithmic discrimination as occurring when automated systems contribute to unjustified different treatment or impacts disfavoring people based on a range of protected categories, including race, color, ethnicity, sex, religion, age, national origin, disability, veteran status, genetic information, or other classification protected by law.

  • Alondra Nelson

    Person

    What I want to talk about in our time today is what we might think about as the spectrum of algorithmic discrimination. You've surely heard the phrase algorithmic discrimination or algorithmic bias. And I think sometimes we think of it as one thing.

  • Alondra Nelson

    Person

    And I want to make two arguments to you today: that algorithmic discrimination occurs across a spectrum of different types of risks and harms, and that part of the concern for policymakers, I think, needs to be how to think about that spectrum as a kind of compounded harm. Right?

  • Alondra Nelson

    Person

    So we often talk about the one case that we see reported in a news account or one scientific study.

  • Alondra Nelson

    Person

    But I think one of the takeaways is both the continuum and the compounding effect of living in democratic and algorithmic societies in which we are being touched all around by algorithms, and what the cascading effect of discrimination and bias is in those contexts.

  • Alondra Nelson

    Person

    So, the continuum. I talk about the spectrum of algorithmic discrimination as a continuum of AI-driven bias, running across different forms of discrimination.

  • Alondra Nelson

    Person

    It runs from allocative discrimination that systematically denies access to essential opportunities that determine important life outcomes, through surveillance and targeting systems that categorize and monitor individuals, to cultural bias that perpetuates stereotypes and cultural erasure across digital platforms. And those are kind of layered there from highest stakes to lowest stakes, but all significant.

  • Alondra Nelson

    Person

    So let me go through each of these briefly. Allocative discrimination: AI systems that systematically deny or limit access to resources, opportunities, or services through biased automated decision making that results in unequal access to employment, credit, housing, healthcare, or other essential services. Next, surveillance and privacy infringement.

  • Alondra Nelson

    Person

    AI surveillance systems that erode our privacy rights through automated monitoring, data inference, and predictive profiling, and in that way enable discriminatory decision making that we know disproportionately affects certain groups, often protected classes. Then, targeting and profiling.

  • Alondra Nelson

    Person

    The unfair categorization of individuals based on protected characteristics through the use of automated systems that sort, classify, or target us for differential treatment. And finally, misrepresentation: stereotyping, misclassification, and cultural erasure perpetuated by AI systems and automated decision-making processes.

  • Alondra Nelson

    Person

    So with the time I have, I'm going to go through an example of each of these and then have some wrap-up remarks. So, allocative discrimination. I think a powerful example of this is the case of the IRS tax audits.

  • Alondra Nelson

    Person

    So we saw, let me just grab my notes here, that the IRS, working with the Department of the Treasury and with some academic researchers, did some investigation and found that African Americans are three to five times more likely to be flagged for auditing and to be audited.

  • Alondra Nelson

    Person

    The root cause was the design of an algorithm used by the IRS that was looking for easy audits, meaning audits that required less complicated paperwork and less engagement with individuals to do the work. The disparity was not because of the individual auditor.

  • Alondra Nelson

    Person

    So we're not talking about, you know, a bad apple individual. The disparity also was not because there was more tax evasion found on the part of African Americans or other groups.

  • Alondra Nelson

    Person

    But it was because the algorithms used by the IRS to determine who was selected for an audit flagged filers who are often working class or poor and who used incentives like the Earned Income Tax Credit.

  • Alondra Nelson

    Person

    So people who were trying to make use of the Earned Income Tax Credit were at least three times more likely to be flagged for auditing. In this case, easy also meant, for this algorithm, that you had no business income.

  • Alondra Nelson

    Person

    So if you were someone who was not an entrepreneur, not a small business owner, et cetera, you were easier to audit because it's just less paperwork, and so people with business income were less likely to be audited. So rather than targeting hard cases, people of extraordinary wealth, the IRS has, with this algorithm, been targeting people of lesser wealth.

  • Alondra Nelson

    Person

    The impact is that Black taxpayers were audited at significantly higher rates, which, again, I want us to come back to the words cascade and compound, creates this kind of cascading effect.

  • Alondra Nelson

    Person

    You're creating financial stress and legal costs for the people that are audited, and consequences for other algorithmic systems like credit scores, like whether or not you're deemed worthy for employment and other kinds of life opportunities. So that's allocative discrimination. Let me talk briefly about surveillance and privacy infringement through the example of the Life360 app.

  • Alondra Nelson

    Person

    But there are lots of other apps like this as well. So Life360 is used by lots of families to keep track of where everyone is. It was not intended, well, you might not like this, it was not intended to be a wide-scale surveillance technology.

  • Alondra Nelson

    Person

    But it has turned out to be. Life360, GasBuddy, and other apps sell their data, often without consumers knowing. Consumers, by agreeing to terms of service, are agreeing to this, but don't know that this data is going to be sold to data brokers who will then sell it to insurance companies.

  • Alondra Nelson

    Person

    And so a lot of the news reporting on incidents involves people like a father in Atlanta who said he had a great credit score.

  • Alondra Nelson

    Person

    His credit score was going up, he hadn't had any tickets as a driver, and he couldn't figure out why he couldn't get an insurance quote that was reasonable. His insurance quote kept going up and up and up. And it turned out that he had an algorithmic score based on data.

  • Alondra Nelson

    Person

    He said, my entire family uses Life360, and we didn't realize that by using it we were also sending information to the insurance company. And we didn't know that we had opted into that.

  • Alondra Nelson

    Person

    Similar instances: you'll have seen some recent reporting in the last year about the sort of Internet of Things problem, in which our vehicles are sending data to data brokers who then send it to our insurance companies. So some of us choose to opt into that, but many of us are being opted in without knowing.

  • Alondra Nelson

    Person

    And this is also impacting people's ability to get reasonable health insurance, excuse me, automobile insurance, at a price that they can afford. So this is just the way in which surveillance-based risk assessment is happening all around us.

  • Alondra Nelson

    Person

    But particularly what I want to highlight here is how it's impacting us by collecting information about everything we do in our cars, which is then sometimes used to penalize us with higher auto insurance rates. Okay, so let me say a bit about the targeting and profiling in that spectrum of algorithmic discrimination.

  • Alondra Nelson

    Person

    This is something that might be more familiar to some of you, which is the use of facial recognition technology. I just pulled this clip from the Guardian, but there's been a lot of reporting about this gentleman, Robert Williams, who is the first known case of misidentification using facial recognition technology in the United States.

  • Alondra Nelson

    Person

    This was the Detroit Police Department. I believe he was accused of stealing a bunch of watches at a store. He'd never been there. He didn't even know how he was identified.

  • Alondra Nelson

    Person

    And the police didn't have to explain how they had identified him; they suggested that they had a lead that he was a suspect in this case. And it turns out that they had used facial recognition technology. They had matched an old driver's license photo of his to, I think, a data set.

  • Alondra Nelson

    Person

    And it wasn't him. It cost him, you know, lots of money, time. He was humiliated on the front lawn, arrested in front of his wife and daughter for a crime he didn't commit. We now have many stories that are like this.

  • Alondra Nelson

    Person

    Moreover, we have accounts that we don't know about, because we have the increasing use of these kinds of technologies to create leads that are never necessarily brought into the legal case. And so I think all around us we've got this use of facial recognition technology that's increasingly worrisome.

  • Alondra Nelson

    Person

    Also in the category of targeting and profiling are things that might be familiar. So I put this Facebook clip from the AP, the Associated Press, up there to show you. You might expect that this was from four or five years ago, but actually this is still happening on Facebook.

  • Alondra Nelson

    Person

    This is from a study from last year. The concern here is that we still have the use of automated systems and algorithmic tools that are sorting job employment opportunities to people based on their age, based on their race or ethnicity, and based on their gender.

  • Alondra Nelson

    Person

    Machine learning systems are using historical data, as has already been mentioned, and reinforcing existing workplace inequality and existing occupational stratification, making it harder for women to see, for example, high-paying executive job ads for which they might be qualified.

  • Alondra Nelson

    Person

    In the case of platform gig work delivery jobs, we know that those are also being gender segregated by employment algorithms, with men being sorted to things like Domino's Pizza delivery and women being sorted to things like Instacart.

  • Alondra Nelson

    Person

    So occupational segregation, which is creating systemic barriers to employment mobility, is also being sort of baked into algorithms and is a challenge and a distinct form of algorithmic discrimination.

  • Alondra Nelson

    Person

    So as I said at the top, I think we should be interested not only in each of these individual stories about a father arrested in front of his family, misidentified, losing time and money, the shame and humiliation of that, people losing job opportunities, people having their families surveilled and their data sold without their knowledge, but to think about the compounded and cascading disadvantage and effects that this creates, where bias in one automated system triggers exclusion from others.

  • Alondra Nelson

    Person

    Someone denied a job opportunity by biased hiring algorithms may struggle to access credit, housing and health care as these disadvantages become compounded. And so I want us to think about that as an example. I know you're going to talk about health care tomorrow.

  • Alondra Nelson

    Person

    I just wanted to talk about the different ways that both automated systems and frontier systems work in the health care industry and how it is an example of this kind of compounding discriminatory effects.

  • Alondra Nelson

    Person

    So, allocative discrimination, which I mentioned for you: we might think about the accounts of medical imaging discrimination that misses cancer in darker-skinned patients while detecting it earlier in whiter patients, creating unequal access to life-saving treatment.

  • Alondra Nelson

    Person

    So that's one form of that spectrum of algorithmic bias. On the surveillance and privacy front, we know that patient monitoring systems profile health behaviors differently across demographic groups.

  • Alondra Nelson

    Person

    So we know, for example, there are pain assessment algorithms used in the clinical setting that suggest that people of African descent, for example, don't feel pain as much as others. Misrepresentation. So this is a large language model, Whisper AI.

  • Alondra Nelson

    Person

    It's a product of OpenAI that's increasingly being used in the clinical encounter to transcribe conversations between patients and their doctors, to help doctors transcribe notes. It has lots of potential for improving the clinical encounter.

  • Alondra Nelson

    Person

    But we also know, much as we know about the sort of mistakes that happen with large language models, that they also make things up.

  • Alondra Nelson

    Person

    So some of what they make up, as has been reported by the AP and others based on the work of researchers, is that they bring racial stereotypes into the notes that were not uttered in the room. They misunderstand accented English, leading to incorrect symptom diagnosis for elderly patients and also for immigrants.

  • Alondra Nelson

    Person

    So let me offer just one other example.

  • Alondra Nelson

    Person

    So in healthcare, then, you have multiple AI systems that can fail the same patient again and again, and they can be both automated decision systems and frontier systems working together: biased healthcare allocation systematically underestimating medical need, medical imaging missing cardiac conditions, pain assessment, et cetera. So there's a kind of medical neglect that then cascades beyond medical care.

  • Alondra Nelson

    Person

    Right. So poor health outcomes may affect your ability to work, may affect your insurance premiums, may create family stress, may limit your housing options, et cetera.

  • Alondra Nelson

    Person

    So what I'm trying to suggest is that the importance of your work as policymakers is really about helping us get a handle on this compounding harm and discrimination. So a few words about why we care about all this.

  • Alondra Nelson

    Person

    I follow Chair Bauer-Kahan in saying it's not because we're anti AI or want to shut down the companies.

  • Alondra Nelson

    Person

    It's because if we want to achieve any of what we sometimes call AI for good, its use for science, for accessibility in the world, for helping us to do more with crop yields and these sorts of things, none of those things is inevitable; they have to actually be stewarded through policy and smart policy innovation.

  • Alondra Nelson

    Person

    Why do we care about this? We care also because Chair Bauer-Kahan mentioned some of the data that you have in California, and we know in the national data that levels of mistrust of AI are pretty high; in almost all measures, they're 50% or higher. So more people are concerned than excited.

  • Alondra Nelson

    Person

    This is some data from the Pew Research Center. This is a little bit small for you to see on this little screen, but a lot of the concerns are around housing, health care and jobs, right?

  • Alondra Nelson

    Person

    So in those consequential places where AI is being used to make decisions about our lives, a lot of those concern vectors are at more than 50%. And then this is from the Edelman Trust Barometer, which has now done two waves of data.

  • Alondra Nelson

    Person

    The United States has some of the highest measures of mistrust in AI in the developed world. Right? And it's not just the United States; Canada, France, and other countries are much, much higher as well. What you see high on the top in trust are countries like China, Brazil, and Nigeria.

  • Alondra Nelson

    Person

    And so if we want to, let's say, get to the good stuff, we have got to be able to put in guardrails that build more adoption and more trust and more justice in society.

  • Alondra Nelson

    Person

    Lastly, one of the ways that we tried to do this in the Biden-Harris Administration was through a sort of blue-sky, aspirational document, the Blueprint for an AI Bill of Rights, which offered five principles that we might begin to use to shape how we think about guidelines for the use of AI.

  • Alondra Nelson

    Person

    Importantly, and this is cut off completely at the top, we began to point to some enforcement mechanisms, so that one can say we want systems to be safe and effective, and that one way you get there is through sociotechnical strategies like auditing, like impact assessment, and red teaming, the work that Dr.

  • Alondra Nelson

    Person

    O'Neil is expert at and will come to next. So not only is AI for good not inevitable, but algorithmic discrimination is also not inevitable; we can do something about it. It's a choice that we make through the design, deployment and governance of automated systems.

  • Alondra Nelson

    Person

    And the question isn't whether algorithms shape our future, it's whether we have the collective will to ensure that that future is just and fair for all of us. Thank you.

  • Rebecca Bauer-Kahan

    Legislator

    Thank you, Dr. Nelson. Stick around. I know I have questions. Dr. O'Neil, turning it over to you.

  • Unidentified Speaker

    Person

    Thanks so much for having me. Just going to wait for this. It's an honor to be here. It's really an important moment. Thank you for bringing up the 10 year moratorium, because that's part of this conversation for sure. Thank you, Dr. Nelson, for your words. Really great setup from both the other people on the panel.

  • Unidentified Speaker

    Person

    I want to talk about my work in making things more fair and more safe and more trustworthy. I really think about my work as designing cockpits. You would not go into an airplane.

  • Unidentified Speaker

    Person

    Well, if you went into an airplane and you looked to your left and you saw just two windows and two pilots smiling with no cockpit in sight, like no dials, you'd be worried, and rightly so. There's lots of history of things failing, lots of plane crashes.

  • Unidentified Speaker

    Person

    And you know that every dial in that cockpit represents a disaster that you're trying to prevent. And without those dials, they're not keeping track of the things that could be going wrong, except by looking out the window, which is one important thing, but not the only important thing.

  • Unidentified Speaker

    Person

    That's how I want to think about AI, especially these large, impactful systems: there's one perspective they're looking at, which is typically efficiency or profit, but beyond that they just don't have a cockpit. What does it mean to build the cockpit for a big flying system like an AI system? That's what I do. I design it.

  • Unidentified Speaker

    Person

    I design it in three parts in three steps. I first identify what's going wrong, what could be going wrong. And usually the question is just like, for whom might this fail? Who could be harmed by this? What human could be harmed by this?

  • Unidentified Speaker

    Person

    I appreciated Arvind's initial statement that you don't think about this stuff as a technical categorization, but as a context. How is it being used, exactly? Even if it's in hiring, there are a million different ways it could be used. Like if you get scored badly, because automated decision systems are almost always scoring systems.

  • Unidentified Speaker

    Person

    If you get a 77 instead of a 78, you might not get an interview. But also you might get the interview but just have a negative mark on your record. It matters, actually, it totally matters what the details are. So in some sense, I really refuse to audit anything until the context is pretty much fixed.

  • Unidentified Speaker

    Person

    But once it's fixed, you have to first ask who could be harmed by this. And so that's really. It's always about the human impact for me. And then once we have a way of thinking through the biggest sort of existential risks, the biggest red flags of this system, these people might be getting harmed. We're not sure.

  • Unidentified Speaker

    Person

    We haven't checked. We have to figure out how to check. We have to develop the measurement of that. Whether that's a race gap or a gender gap or just like privacy problems or other kinds of laws that need to be complied with.

  • Unidentified Speaker

    Person

    Those are the things that we actually have to go out and measure, which typically means using the data we have, but sometimes means collecting new data. And those, again, you should think of those as the things you're measuring in the cockpit. Those are the dials of the cockpit.

  • Unidentified Speaker

    Person

    Then finally you have to say, well, what's the minimum and the maximum of this metric? Like how big can this get? How small can this get? You can think about it like altitude of the airplane or gas tank. You really just think about this as like, how big, how fast should we be going?

  • Unidentified Speaker

    Person

    But how slow should we be going? What's the minimum and maximum speed? So those are the three things we do to build audits, to do audits.

  • Unidentified Speaker

    Person

    And I want to take a pause here and mention that we used to have about two sales calls a week with private companies that typically said, we know that eventually there'll be regulation in our industry, whether it was credit or insurance or housing or hiring or other kinds of industries.

  • Unidentified Speaker

    Person

    Those were the big four, plus education. They always came to us saying, we don't have this regulation yet, but we know it's coming. I haven't had a sales call since the election. And one of the reasons, obviously, is because people don't think it's coming; there's a lot of uncertainty.

  • Unidentified Speaker

    Person

    But with this 10-year moratorium, I just want to emphasize that people are just not worried about it. And that's a problem, because that means they can do anything they want. In the meantime, they can actually kind of launder anything they want through AI if there's a moratorium on it.

  • Unidentified Speaker

    Person

    So I'm just pausing here to make a plea that state laws are the ones that we need, and California is pretty much the one we need. Please do something.

  • Unidentified Speaker

    Person

    But I just want to go back to my perspective, my auditing strategy. By the way, I would love to be able to say these rules, these standards, have been developed, they're beautiful, we know how to talk about anti-discrimination in housing and insurance and credit and hiring. We don't.

  • Unidentified Speaker

    Person

    We don't actually know how to do that. That's one of the reasons I started my company after studying this problem and after having worked in finance during the credit crisis, realizing we need good standards in this world. So right now we're trying to develop those cockpits. We're designing and developing those cockpits.

  • Unidentified Speaker

    Person

    And part of the development of those is to say, what does it mean to have a racist algorithm? What does it mean to have a sexist algorithm? What does it mean to be anti veteran? But really we start from the very top. Who could be harmed by this and how?

  • Unidentified Speaker

    Person

    And so we look at stakeholders as rows and we ask the stakeholders, we actually interview people saying, what could go wrong for you in this kind of system? The goal of this, which we call the ethical matrix framework, is to make people realize this is not a technical discussion. This is a question of fairness.

  • Unidentified Speaker

    Person

    And everyone has a voice about that. It's kind of trying to counter the narrative that Silicon Valley would have you believe, which is that this stuff is too mathematical, too complicated for you to actually have an opinion. But everyone has an opinion about fairness.

  • Unidentified Speaker

    Person

    So it's asking people to stand up for themselves in these particular contexts and say, what could go wrong for you? Then we sort of color code the boxes. It's pretty dumb. It's actually very dumb. But it works because we're just looking for things that could actually be illegal or at least unethical.

  • Unidentified Speaker

    Person

    And then we actually try to design the metric. Now, this is a relatively complicated diagram. I'm going to save this in case there are questions in the Q and A, but I wanted to point at it because, thank you for bringing up car insurance.

  • Unidentified Speaker

    Person

    This is how we helped the insurance Commissioner of Colorado understand what it would look like to have racism in life insurance. This is something we did over the last few years. Now we've gotten hired for the second phase, which is car insurance.

  • Unidentified Speaker

    Person

    And the idea here is you ask the insurers to all give data so that we can understand how they treat different kinds of customers, and it's relatively involved, but not actually that involved. So I could explain this to you, but I don't have time right now.

  • Unidentified Speaker

    Person

    I just want to make the point that you can treat different groups of people slightly differently, but there could be a good reason for it just in a different context.

  • Unidentified Speaker

    Person

    If you had men and women being hired for a job and you notice that women got hired at twice the rate, you may be like, oh, this is biased towards women. And then you might realize, oh, actually, they typically are more qualified. That's why. So it's sort of a question of what do you exactly mean by qualified?

  • Unidentified Speaker

    Person

    And in this case, it's sort of like, what do you exactly mean by they deserve a higher car insurance rate or a higher life insurance premium? But this is the thing you do in order to develop the notion of a race gap or the notion of a gender gap. The funny thing about it is this.

  • Unidentified Speaker

    Person

    It's a way for me, a mathematician by training and a data scientist, to do the easy part of these conversations, which is the math. The hard part for the insurance commissioners is to decide what is a legitimate way to discriminate against people. Is it legitimate to discriminate based on smoking status?

  • Unidentified Speaker

    Person

    That, you know, that's actually a moral question and it's complicated and it has a lot of precedent. And it's not my job, thank goodness. My job is just to do the statistics. If they decide that it is legitimate, I do one type of math. If they decide it isn't, I do a different type of math.

  • Unidentified Speaker

    Person

    I'm doing the easy stuff. But it ultimately is a conversation about fairness at the level of the people who are invested, whose job it is to take care of the public good. And so it's really a negotiation between lawyers, is what I'm trying to say. So much harder than what I do in statistics.

  • Unidentified Speaker

    Person

    And the final thing is to set the targets, which is to say, how big can this race gap get, or this gender gap or whatever, before you have to do something about it?
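
    One way to picture the dial-plus-threshold step described here: a sketch that computes a selection-rate gap between two groups and compares it to a maximum allowed value. The metric, the 0.8 cutoff, and the sample decisions are illustrative assumptions, not a definition any regulator has adopted:

```python
# Hypothetical sketch of a "dial" and its limit: a selection-rate ratio between
# two groups, compared to a maximum allowed gap. The metric choice, the 0.8
# cutoff, and the sample decisions below are illustrative assumptions only.

def selection_rate(decisions: list[bool]) -> float:
    """Share of a group that received the favorable outcome."""
    return sum(decisions) / len(decisions)

def gap_ratio(group_a: list[bool], group_b: list[bool]) -> float:
    """Ratio of the lower selection rate to the higher one (1.0 means no gap)."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

MAX_ALLOWED_GAP = 0.8  # threshold a regulator might negotiate; not the auditor's call

# Made-up favorable/unfavorable decisions for two demographic groups
group_a = [True, True, False, True, True, False, True, True]      # 75% favorable
group_b = [True, False, False, True, False, False, True, False]   # 37.5% favorable

ratio = gap_ratio(group_a, group_b)
print(f"gap ratio: {ratio:.2f}")
if ratio < MAX_ALLOWED_GAP:
    print("gap exceeds the agreed limit -- this company has to do something about it")
```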

  • Unidentified Speaker

    Person

    I will just mention, you said at the very beginning, Chair Bauer-Kahan, that if you have a moratorium, it doesn't punish the bad actors. It doesn't reward the good actors either.

  • Unidentified Speaker

    Person

    The great thing about having it at a regulatory level is that once you have that race gap defined, every company has a particular race gap that their product has. Now, I'm not saying that would necessarily be made public; maybe only the insurance Commissioner would know. Most of those companies don't have to do anything.

  • Unidentified Speaker

    Person

    A couple of companies do have to do something. It ultimately rewards the companies that are doing well. So it does the opposite of what we're worried about with the moratorium. And that's actually the kind of thing that I'm looking forward to doing in more generality with more anti-discrimination laws.

  • Unidentified Speaker

    Person

    And that would be the goal of my company, actually, is to help make good definitions of metrics and make good thresholds. Just to make it clear that this has happened in the past, I'm going to give one last example and then I'll end my comments. Dr. Nelson mentioned that there was some Facebook employment discrimination in ads.

  • Unidentified Speaker

    Person

    Well, there was actually also earlier some housing discrimination problems that were found. And there was a lawsuit brought and the DOJ ended up making a settlement with Meta that really set beautiful terms of like, here's how big this gap is. Here's how we want you to get it better. Here's a timeline to get it better.

  • Unidentified Speaker

    Person

    So they defined the metric of what they meant by this race gap that is unacceptable, defined what the target was, how to get it smaller, and by what time. And it was a really great settlement.

  • Unidentified Speaker

    Person

    And I'm not sure it even exists anymore, but it was a beautifully defined notion of what it looks like to get better at these kinds of things. And I'll finally say, I also agree with what people have said before. AI is not going anywhere.

  • Unidentified Speaker

    Person

    But the great thing about automated decision systems in particular is that they don't have to repeat the past. They can do better, and we can help them do better if we have the right cockpit design. So thank you so much.

  • Rebecca Bauer-Kahan

    Legislator

    Thank you. And I appreciate you highlighting that, as a former regulatory lawyer myself who negotiated consent decrees, they can be really powerful tools. And they're not always, I mean, they are punitive, but they're also about how we want to move forward in a society that protects people.

  • Rebecca Bauer-Kahan

    Legislator

    And I think it's a really important thing to highlight is that it's not always about being punitive. It's about moving society in the direction we want to see, whatever that may be. Thank you.

  • Rebecca Bauer-Kahan

    Legislator

    We had two Members join us, so I want to give a shout out to both Assemblymember Macedo and Assemblymember Pellerin for joining us and see if you guys want to start with any remarks or questions, either of you.

  • Alexandra Macedo

    Legislator

    Thank you, Madam Chair, and thank you guys for being here today. I will admit I'm very new to the technology space. But where I come from is an agriculture district, and the AI space is growing, and we need it to grow because we're having to adapt to producing more food, more product with fewer resources.

  • Alexandra Macedo

    Legislator

    And AI has been very helpful with that. I also went and toured a lot of these companies that you're talking about that are creating this AI, and I hear a lot of optimism about how we can use AI to help make all our lives better, which I think we all as legislators want to do.

  • Alexandra Macedo

    Legislator

    But it also brought me some fear about some of the things they were talking about, one of which is that there's kind of this race to the top happening internationally. I'm all for safeguards. I come from the regulatory space as well, and I know that we need safety for our communities.

  • Alexandra Macedo

    Legislator

    And it seems to me like a lot of these companies are kind of self-imposing that they don't want this to be discriminatory, because it's meant to be for the general good.

  • Alexandra Macedo

    Legislator

    But I do hear what China's doing, for example, that they're just full speed ahead and they are behind us in the AI space but not by very far.

  • Alexandra Macedo

    Legislator

    So I guess my question for both of you is from an international security standpoint, how do we rate with countries like China that don't have these regulations like we have here in the United States?

  • Alondra Nelson

    Person

    Do you want me to start? Yeah. Thank you, Assemblymember Macedo. It's an excellent question. So let me say a couple of things. One, about your constituency and your community, about the agricultural space, and it's why I skipped by that slide quickly. But one of my AI for good examples is about crop yield.

  • Alondra Nelson

    Person

    I mean, there's incredible potential for water, crops, et cetera. We can potentially feed a lot more people. What I would say in response is that we want to think about the regulatory space as being maybe kind of concentric circles, which is a little bit different from how we might usually think about it.

  • Alondra Nelson

    Person

    And this is why I think my colleagues and I were suggesting that you don't want to try to regulate the technology, but you want to have some guardrails around the applications and the outcomes.

  • Alondra Nelson

    Person

    So as things get closer to people in their lives and decisions about their lives, that's where we want to take care, where the stakes are high. But there's no reason that, for AI for science and these other spaces, you should need to take a heavy regulatory hand.

  • Alondra Nelson

    Person

    So I just want to make that distinction about the outcomes that we're trying to get to. I think you're obviously exactly right: there's a lot of geopolitical competition here, China being the United States' big competitor.

  • Alondra Nelson

    Person

    One thing I would say is that AI in China, to the extent that we know, is actually fairly highly regulated. And so they have far more regulations than the United States.

  • Alondra Nelson

    Person

    I mean, some of those regulations include things like your chatbot should pledge allegiance to the Communist Party, those sorts of things. But there are also quite reasonable things around risk assessment, around testing models before you send them out into the world as well.

  • Alondra Nelson

    Person

    And so I guess the question is, do we want to compete? And I showed you that Edelman trust barometer chart, which I can send the slides around. But trust levels in China of AI are quite high. It's almost the mirror opposite of the United States. So we need to do some empirical research about why that's the case.

  • Alondra Nelson

    Person

    But one hypothesis for why that might be the case is that they do have regulation, and people do have expectations about how these tools should and should not be used.

  • Alondra Nelson

    Person

    So I would say, as an American and as a former public servant, I would want the US to compete on that level as well, and not think that we have to have a kind of race to the bottom in order to leverage these tools powerfully and for them to do good in the world.

  • Alexandra Macedo

    Legislator

    And I so appreciate you sharing that with me. I'm obviously not super familiar with AI in China. I've been a little busy here in Sacramento, but maybe I'll make it over there one of these days. But that's where from a national security standpoint, we know how powerful technology is.

  • Alexandra Macedo

    Legislator

    And that's been a big fear of mine, learning how this can be weaponized by foreign countries. We know we are a superpower, and we need to maintain that. So I'm very grateful that you shared that with me.

  • Alexandra Macedo

    Legislator

    Another kind of question, something I'm still learning: I played around on Grok and ChatGPT over the weekend, and it was kind of scary, the things it was able to do. I felt like I was having a conversation. It was weird spending a Friday night having conversations with Grok.

  • Alexandra Macedo

    Legislator

    But you guys talk a lot about the flaws within AI. Have we done any type of research to see if AI is any more or less biased than a human would have been, had they done the job that we're giving AI to do?

  • Unidentified Speaker

    Person

    I actually read this incredibly interesting appendix to an otherwise very excited paper about AI that did measure this. It asked AI to describe a teacher and looked at the pronouns. And, you know, there are more women high school teachers than there are male high school teachers.

  • Unidentified Speaker

    Person

    But the bias of the AI was much stronger than the actual reality. Now, I know we're supposed to be talking about automated decision systems, but you asked, so I'm going to answer. And it shouldn't surprise us, because Grok and other large language models are trying to predict the most likely thing.

  • Unidentified Speaker

    Person

    So if 75% of teachers are women, they're going to say "she," because that's the most likely. But it ends up being something like a 98% rate of referring to teachers as "her." So it's interesting. It's certainly not always the case.

  • Unidentified Speaker

    Person

    Automated decision systems typically propagate bias in a kind of similar way unless you intervene, but it seems like large language models could actually exacerbate it.
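
    A minimal sketch of how that kind of pronoun measurement might be run, assuming a hypothetical generate() function standing in for whatever model is being audited (the prompt and counts here are illustrative, not from the study mentioned):

        import re

        def generate(prompt: str) -> str:
            # Placeholder: in a real audit this would call the model being tested.
            return "She spends her evenings grading papers; her students adore her."

        def share_of_she(n_samples: int = 200) -> float:
            # Ask for many short descriptions of a teacher and count which
            # gendered pronouns the model reaches for.
            she = he = 0
            for _ in range(n_samples):
                text = generate("Describe a high school teacher in two sentences.").lower()
                she += len(re.findall(r"\b(she|her|hers)\b", text))
                he += len(re.findall(r"\b(he|him|his)\b", text))
            return she / (she + he) if (she + he) else float("nan")

        # If roughly 75% of teachers are women but the model says "she" ~98% of
        # the time, it is amplifying the real-world skew, not just reflecting it.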

  • Alexandra Macedo

    Legislator

    But to your point, we don't have any data comparing inherent racism in a human to inherent racism in automated systems.

  • Unidentified Speaker

    Person

    Zero. But we could develop those studies, absolutely. I would love to get paid to do that, by the way; that's what I want to do. A lot of companies just don't want to. They claim that it would be too arduous to do that work.

  • Unidentified Speaker

    Person

    And just a word on your previous question.

  • Unidentified Speaker

    Person

    The amount of money that it takes to develop these measures of a race gap or a gender gap, these kinds of cockpit tools, is tiny compared to the cost of these data centers they're talking about, where they're basically developing nuclear power plants just to run the next large language model data center.

  • Unidentified Speaker

    Person

    That stuff is billions and billions of dollars. It's so much more expensive than the little things it would take to do this, especially after development. They would just be able to run a test, like a monitor.

  • Unidentified Speaker

    Person

    So it's really not a reasonable argument to say that it's too expensive. And it wouldn't even apply in the context of agriculture. It would apply in contexts where people would be denied a job or denied insurance, that kind of thing.

  • Alexandra Macedo

    Legislator

    And sorry, have we done any research on what this cost would be to businesses? If you were given a magic wand and you implemented the regulations you want, do we have data on the cost to California businesses?

  • Alexandra Macedo

    Legislator

    Because my concern is, if we do this in California (and I know Madam Chair was in a meeting with the Governor where he was saying tech is really keeping us funded in a lot of things), I don't want to drive that business out of California just because it's easier to do business in other places.

  • Alexandra Macedo

    Legislator

    So do we know those numbers of what the actual cost is going to look like?

  • Alondra Nelson

    Person

    You know, we can, I'm sure that there are people who are doing research on that. So I don't have that data here for you, but I'm happy to sort of help dig into that.

  • Alondra Nelson

    Person

    I would say a couple of things, and then I'm pretty sure that Arvind probably has some answers about this as well, so we might want to have him jump in. So, for example, in the IRS audit case that I mentioned, there was no race in that data at all. Right.

  • Alondra Nelson

    Person

    I mean, what we're finding is algorithmic discrimination and bias arising from data sets that do not call out race as a category at all. Right.

  • Alondra Nelson

    Person

    But that discrimination is coming out as an output of a particular use case of an automated decision system, or in some cases a chatbot.

  • Alondra Nelson

    Person

    So I would offer that as one way of answering you. Another would be to look at the criminal legal system, where there's been a little bit of data on judicial review and whether judges are more wise and prudent when they're using AI or not using AI.

  • Alondra Nelson

    Person

    I think the findings there are quite mixed. But we know that some of the so called predictive policing tools, recidivism tools that judges are using to assist in their decisions have been found to be incredibly discriminatory.

  • Alondra Nelson

    Person

    So if we're thinking about what most of our workplaces will look like, which will be AI augmenting some of the work that we already do, we have this open question about whether partnering with these tools is helping judges to be more or less judicious in that work.

  • Alondra Nelson

    Person

    And then I think there are a couple of questions. For California, of course, there's the juggernaut industry of Silicon Valley, but a lot of the uses of these tools are by small businesses and other companies.

  • Alondra Nelson

    Person

    So on the cost question, one way to frame it, in addition to the way that you framed it, Assemblymember Macedo, is: does the blame or the cost fall on the small business person who's using the model, or is it the fault of the large company if something goes wrong?

  • Alondra Nelson

    Person

    I mean, I think there are a lot of questions that need to be answered and costed out that are not just about a few big tech companies, but are about that whole value chain of who's using these different tools.

  • Unidentified Speaker

    Person

    And I'll just jump in and mention. And we will let Arvind answer. But that settlement I mentioned with the DOJ and Meta, it was gradual. It was like, gradually, you're going to have to tighten up this gap.

  • Unidentified Speaker

    Person

    And similarly, when I work with the insurance commissioners, there's a couple of years where you get to understand the tests. The final rules haven't been written, but they're going to be gradual. You gradually have to meet stricter and stricter rules.

  • Unidentified Speaker

    Person

    So in terms of cost, you can actually phase that cost in slowly.

  • Rebecca Bauer-Kahan

    Legislator

    And I'll just add, before I turn it over to Dr. Narayanan: I always think of this in the context of the work I used to do before all these tools were in existence, because I'm that old. And you're a lawyer too.

  • Rebecca Bauer-Kahan

    Legislator

    I was in federal court over 15 years ago, and we would do the federal sentencing guidelines on paper; we would actually calculate someone's sentence. And the difference between that and an algorithm was that I could see every single thing that was going into raising that sentence.

  • Rebecca Bauer-Kahan

    Legislator

    And then I could walk into a courtroom and say, Your Honor, the reason the sentence is so high is this, and this is the reason we should rethink it, versus these algorithms, which are a black box.

  • Rebecca Bauer-Kahan

    Legislator

    And so I don't know that the algorithm would come out with an obviously different sentencing guideline recommendation than I did on paper. But I had complete insight into how we got to the endpoint, such that we were able to negate some of the bias that was in those calculations.

  • Rebecca Bauer-Kahan

    Legislator

    And so there's a question of whether it has more bias than humans, or whether it's the process itself that is allowing that outcome to be exacerbated. I just think sometimes, even if it's the same, understanding it and being in relationship with the data allows for a different outcome in the end, which I think is critically important.

  • Rebecca Bauer-Kahan

    Legislator

    And then as for the cost, I would also add, we talk about how this is really meant to also help bring everyone to what the best actors are doing.

  • Rebecca Bauer-Kahan

    Legislator

    And I will say, if I'm a company that's gonna be on the hook if I don't hire people of color or women, I'm gonna demand this of the tech company, right, that they're testing to make sure they're not handing me a product that's gonna lead to me discriminating.

  • Rebecca Bauer-Kahan

    Legislator

    And so some of the best actors are doing this, right? So from a cost perspective, and many are California companies, the best in class are doing these impact assessments. They're making sure they're not selling a product that discriminates, and the consumers are demanding that of the companies.

  • Rebecca Bauer-Kahan

    Legislator

    And so this really is about bringing everyone up to that level and not having a race to the bottom. So, you know, we do have some cost estimates we can get to after the fact on what these audits cost.

  • Rebecca Bauer-Kahan

    Legislator

    But I will say, you know, I've been in conversation with people who make these employment tools and others that are doing this, and it has become, like you said, a routine part of their business. Because otherwise you can't make the business case that anyone should buy it, as I'm sure you understand. Dr.

  • Rebecca Bauer-Kahan

    Legislator

    Narayanan, do you want to add anything?

  • Arvind Narayanan

    Person

    Thank you so much. Let me very quickly add a couple of comments about cost. That is certainly a very important issue. And the thought I want to offer is that there is a way to structure regulation here in such a way that it's very low cost or perhaps costless to the companies involved.

  • Arvind Narayanan

    Person

    And that is that one way is to ask them to do the audits and assessments themselves. The problem there is not only cost, but also can you trust a company grading its own homework?

  • Arvind Narayanan

    Person

    Another way to structure it is to require them to open themselves up to third party auditors or red teamers and so forth, of whom, of course, Dr. O'Neil is one. There are academic researchers and investigative journalists who would really like to do this work. And so far, the access from companies has not really been there.

  • Arvind Narayanan

    Person

    For instance, the only reason we started to learn about biases in criminal risk assessment tools was because ProPublica, way back around 2016, was able to do a FOIA request to Broward County, Florida in order to get data from there. The companies themselves said, this is a trade secret, we cannot share any data with you.

  • Arvind Narayanan

    Person

    So the research community has been going off of bits and scraps for the longest time.

  • Arvind Narayanan

    Person

    Whereas if companies were forced to open up to researchers (and this, by the way, also applies to frontier model companies), if there were assurances for researchers that we could do the kind of red teaming that we want to do, which in many cases we have other sources of funding to do, without fear of retaliation from the companies through lawsuits or having our accounts locked, that would advance the state of auditing and red teaming quite a bit.

  • Arvind Narayanan

    Person

    And I think that's a great opportunity for regulatory intervention. Thank you.

  • Rebecca Bauer-Kahan

    Legislator

    Anything else?

  • Alexandra Macedo

    Legislator

    What I'm taking away from this is that we have the ability to compare human decision making to AI or automated decision making; we just haven't funded the research to get to that point. And that in some element, human input is always going to be necessary to prevent the kinds of things you talked about from happening.

  • Alexandra Macedo

    Legislator

    And I mean, for me, what I'm talking about, from a business standpoint, the costs of compliance are always going to be a concern. And then additionally, we need our businesses to feel stable in investing here.

  • Alexandra Macedo

    Legislator

    And that's where this kind of fight that I foresee happening, over who's the power that be, the feds or us, becomes one of those things.

  • Alexandra Macedo

    Legislator

    Is it going to make people question doing business here in California and staying here in California? I think they're heavily invested here, so I want them to stay, but that's always going to be a concern. So I'm really grateful to learn from you all. I would love to know more.

  • Alexandra Macedo

    Legislator

    Like I said, the cost of compliance and then once again, humans versus automated decision making, where the flaws are and then how do we bridge that gap or do we figure out which one's better at doing it? So thank you so much. And thank you, Madam Chair.

  • Rebecca Bauer-Kahan

    Legislator

    Thank you.

  • Rebecca Bauer-Kahan

    Legislator

    And I always say when I think about automated decision systems, in a world where we do this right, you know, and as young women attorneys, we know that when we walk into an interview, you know, my first interview, I was 24, I knew that I was walking in and interviewing with all men who were looking at me as a young woman who probably was going to have a family.

  • Rebecca Bauer-Kahan

    Legislator

    And there was all this bias that was going into that interview. And there's this idea that we could do these tools right, to eliminate that, and live in a world where you don't have to wait. Civil rights laws work because, once you have so many people who are discriminated against, you can finally prove it.

  • Rebecca Bauer-Kahan

    Legislator

    And if we can do this on the front end and give those young lawyers, the women and men, an equal chance, that's a world I want to live in. And so I think it's also about can we create tools that take that bias away, even if today they don't?

  • Rebecca Bauer-Kahan

    Legislator

    Because I'm actually a believer in this technology, that it can be better than humans. I have to admit, I often say "she" when I talk about teachers, because I have my own human bias, which I think was the Assemblymember's point.

  • Rebecca Bauer-Kahan

    Legislator

    And so if we can correct for a tool and do better than we are, then that's a really exciting thing. But Dr. Nelson, I really liked your point you made about the compounding impact of these tools. I think it's a really critically important one.

  • Rebecca Bauer-Kahan

    Legislator

    When I started to learn about these, it was no surprise that they, they are built with historical data, as you said. And so of course in a world with historical bias, when you build a tool with historical data, the outputs reflect that. And I think it's so important that we think about people.

  • Rebecca Bauer-Kahan

    Legislator

    I remember the first time we ever had this conversation and you pointed out to me we should be focused on the outcomes. And I thought it was sort of a really important way for us to be thinking about artificial intelligence.

  • Rebecca Bauer-Kahan

    Legislator

    And I often say, if you get a bad movie on your Netflix algorithm, I'm really sorry, I'm not going to care that much about it. It's a waste of a couple hours of your life, but that's okay.

  • Rebecca Bauer-Kahan

    Legislator

    But when it is about jobs and housing and all of those things, that's where we need to be putting in our time and effort. And it allows for a lot of innovation. I have an example, actually.

  • Rebecca Bauer-Kahan

    Legislator

    I'll have to take Assemblymember Macedo out there, but I have a company in my district that has invented technology you can put on the bottom of a tractor that uses AI to know how much pesticide and how much water is actually needed on the farm.

  • Rebecca Bauer-Kahan

    Legislator

    And it reduces water use, which reduces costs for the farmers and lets that resource go further. It is such an amazing application of artificial intelligence. It's been deployed in the wineries in my district as a test bed, and it has really changed the game for them in ways that have been so positive.

  • Rebecca Bauer-Kahan

    Legislator

    But that's an area where, to your point, let's go innovate, let's go figure out how to do this. We have these farmers on the land making sure that things are growing. The outcome won't be disastrous if it goes wrong.

  • Rebecca Bauer-Kahan

    Legislator

    And so it is about focusing on where we want to regulate and where we don't, and where this innovation can really be game changing for a state like California, which is the breadbasket of the country and where water is a limited commodity. I'm going to have to bring you out. It's amazing technology.

  • Rebecca Bauer-Kahan

    Legislator

    I've seen it in action, so I appreciate that. So I wanted to ask: it's interesting, Dr.

  • Rebecca Bauer-Kahan

    Legislator

    Nelson, you chose to focus very specifically on this question of discriminatory outcomes for humans from these decision making tools when you did the AI Bill of Rights, when there is so much in the AI space that could have been the focus of your work at OSTP. Why this?

  • Alondra Nelson

    Person

    Because, you know, that came out in October of 2022. Almost an entire calendar year prior, we had started a kind of public conversation with industry leaders, with leading academic researchers, but also with local communities. The subtitle to the Blueprint for an AI Bill of Rights is "Making Automated Systems Work for the American People."

  • Alondra Nelson

    Person

    It is a policy document, but it is both distilling what we heard from the public and it is directed to the public. It is not a wonky document. We focused on that because that's what we heard from people. That's what we heard.

  • Alondra Nelson

    Person

    That's why I wanted to end with those three quick charts: trust is going to be the choke point or the speed bump for adoption and implementation at scale of some of the things that are potentially good about AI.

  • Alondra Nelson

    Person

    And these were the things that we heard: people, in lots of different facets of their life, just not feeling that they were going to get a fair shake, and feeling that these tools were not being used in ways that were extending opportunities and their liberties and their rights, but actually

  • Alondra Nelson

    Person

    Constraining them. And that was deeply worrisome. And we heard that across the board. We heard that from, you know, leaders in religious communities, we heard that from, you know, high school students. It was a pretty resounding concern.

  • Rebecca Bauer-Kahan

    Legislator

    I appreciate that, because I think for our economy to thrive, we need adoption. And so trust is so important to this. And it didn't go unnoticed, Dr.

  • Rebecca Bauer-Kahan

    Legislator

    O'Neil, that you mentioned all this work in Colorado where, by the way, this bill has passed, and so we're seeing them move forward with protections for society, and constituents are getting, I think, more--

  • Cathy O'Neil

    Person

    Yeah, well, so I've been working with the insurance commissioner on a bill that passed like three years ago. That's how long it takes, you know. But that recent Colorado bill through the AG

  • Cathy O'Neil

    Person

    Has passed and I don't think it's started- I don't think they've started--

  • Rebecca Bauer-Kahan

    Legislator

    Yeah.

  • Rebecca Bauer-Kahan

    Legislator

    Okay.

  • Cathy O'Neil

    Person

    -enforcing it yet. But yeah, it's very exciting to see that happening.

  • Rebecca Bauer-Kahan

    Legislator

    Yes. And the laboratory of the states.

  • Cathy O'Neil

    Person

    To be honest, with the 10-year moratorium, I feel like they saw that happening and they're like, no, we don't like that.

  • Rebecca Bauer-Kahan

    Legislator

    Yes. Well, and their Governor-- my mom said, if you don't have anything nice to say, don't say anything at all. So I also thought, Dr. Nelson, the IRS example you gave was such an interesting one, because I've had a lot of conversations around discriminatory intent and discriminatory impact.

  • Rebecca Bauer-Kahan

    Legislator

    And to your point, the IRS tool was not intending to discriminate. Right. It was using things that turned out to be proxies, which I don't know were necessarily thought of as proxies when it was built.

  • Rebecca Bauer-Kahan

    Legislator

    And I wanted you to talk a little bit more about that because I think that people, I don't believe that most of these tools that end up with discriminatory impact were intended to do so. And to me, that doesn't matter. Right.

  • Rebecca Bauer-Kahan

    Legislator

    If it's having the cascading effect you've mentioned, that is sufficient for us to be concerned in protecting communities. But I imagine that is true in a lot of these instances.

  • Alondra Nelson

    Person

    Yeah, I think that. And this gets to Assemblymember Machado's point as well.

  • Alondra Nelson

    Person

    I mean, I think this was an attempt where you had the Federal Government, the Department of the Treasury, saying: we don't want to be discriminatory, so let's use this algorithm and, in some representative fashion, select people for tax auditing. And it just turned out not to work that way, because there was a proxy around initiatives for poor and working-class people, in this case the Earned Income Tax Credit.

  • Alondra Nelson

    Person

    And also, you know, that had this sort of disparate effect, particularly on African Americans.

  • Alondra Nelson

    Person

    So the researchers and the reporting on this (these are colleagues from Stanford, Michigan, and elsewhere, economists and legal scholars who did the paper that was then reported in the New York Times and elsewhere) actually said, well, let's look at this set of people who are being audited a lot, because the IRS was looking for easy audits.

  • Alondra Nelson

    Person

    And then let's see how we can do some analysis that tries to get a bead on who these people are. So it was on the back end that they tried to take a look at who the people were, doing their own analysis. And they found initially that it looked like it was a lot of African Americans.

  • Alondra Nelson

    Person

    And by the time they were done, their estimate was that these taxpayers were three to five times more likely to be audited. It's significant, and it is exactly to your point: the use of an algorithm that was trying to correct for any kind of human intervention here. Right.

  • Alondra Nelson

    Person

    There was no individual auditor doing anything to determine who was being audited or not. These are powerful tools that make inferences and connections and pull in data from all around, and we just need to have some oversight and strategy about them.
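
    A minimal sketch of the kind of back-end disparity check being described. The table, column names, and numbers are purely illustrative; in the actual study, group membership had to be inferred after the fact, since race is not recorded in the tax data itself:

        import pandas as pd

        # Toy audit-outcome table; values are illustrative only. "group" stands in
        # for a demographic label inferred on the back end (e.g., from name and
        # geography), because the data set does not record race directly.
        audits = pd.DataFrame({
            "group":   ["A", "A", "A", "A", "B", "B", "B", "B"],
            "audited": [ 1,   1,   0,   0,   1,   0,   0,   0],
        })

        rates = audits.groupby("group")["audited"].mean()
        print(rates)  # audit rate per inferred group
        print(f"Group A audited {rates['A'] / rates['B']:.1f}x as often as group B")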

  • Rebecca Bauer-Kahan

    Legislator

    Dr. O' Neil. So we are in a robust conversation in California about auditors.

  • Rebecca Bauer-Kahan

    Legislator

    One of the things that has come up is that the market is very small right now and people are concerned about there not being enough auditors to really create a robust ecosystem for outside auditing. Can you comment on that?

  • Cathy O'Neil

    Person

    Great.

  • Cathy O'Neil

    Person

    Yes, and I'll ask Professor--

  • Rebecca Bauer-Kahan

    Legislator

    Oh, okay.

  • Cathy O'Neil

    Person

    Professor Narayanan to comment as well. Just to say that you're right. My company, ORCAA, has been alive for nine years, but I'm constantly hustling for clients. And because we don't really have that much leverage, we get clients who think that there might be regulation coming up pretty soon.

  • Cathy O'Neil

    Person

    And we also have a lot of AGs, and now some regulators, and some class action law firms who are starting to see some of these things coming to trial. But in terms of private clients, it's been a real slog. And it's because there are no rules that say they have to do it.

  • Cathy O'Neil

    Person

    But at the same time, I must get two or three students, PhD students in computer science a week asking me for a job. There's a huge amount of young people who want to feel like they're doing important work for the public good in this field, but there's just not enough work in this field to hire those folks.

  • Cathy O'Neil

    Person

    I wish I could, but I'm sure that as a Princeton Professor of computer science, this happens to you a lot.

  • Arvind Narayanan

    Person

    Yes, that is completely consistent with what we're seeing as well. And just to reiterate my earlier point, the supply of auditors is certainly available. You know, like Dr.

  • Arvind Narayanan

    Person

    O'Neil just said, it could also be investigative journalists and so many other different groups. Really, the bottleneck to creating this robust ecosystem of auditing is the requirement for companies to open up their data and their systems to outsiders.

  • Alondra Nelson

    Person

    If I could add one thing here: certainly we need to think of this not just as being about auditing, the institutions and the mechanisms that we need, but as asking for public accountability here.

  • Alondra Nelson

    Person

    And so there are other industries in which we say you've got to have some transparency, some accountability to the public about powerful tools and systems. So I would want to slot it in there as well. And auditing is one of the ways that you can do that.

  • Alondra Nelson

    Person

    It's one of the strategies, but not the only one or the exclusive one I would say.

  • Alondra Nelson

    Person

    So, during the Biden Administration there was a voluntary agreement with leading AI companies to do lots of different things, including disclosing some basic information about powerful new, capable systems as they were releasing them, through releasing things better known as model cards or system cards.

  • Alondra Nelson

    Person

    And so that happened for about a year, and now they come out only sporadically. Sometimes they don't come out at all, and sometimes the information is very incomplete. And so I think we want to build out an institutional infrastructure for auditing. I think that's really important.

  • Alondra Nelson

    Person

    But it's also the case that there are, I think, best practices that companies and businesses can be asked to follow, that some are doing or doing irregularly, and that we can try to encourage more of as part of a broader system of accountability.

  • Rebecca Bauer-Kahan

    Legislator

    Assemblymember Macedo had a follow up.

  • Alexandra Macedo

    Legislator

    Because we're talking about audits: if you had to guess, how many auditors would you need for you to feel comfortable that we're getting to the end goal here?

  • Cathy O'Neil

    Person

    I mean, we need a robust community of auditors. And let me just say that I would also want to see standards on that community. I started ORCAA after leaving finance, after seeing the AAA-rated mortgage-backed securities fail the smell test. Right.

  • Cathy O'Neil

    Person

    Because we only had Moody's, S&P, and Fitch, who were selling those AAA ratings to the investment banks on mortgage-backed securities. It was a bad system, and it still is a bad system. Those are only three companies, and their standards were terrible.

  • Cathy O'Neil

    Person

    So what we would want is, I don't know, I want to say eight companies that are actually viable, and we want to have standards; for example, they would have to open source their methodology, not the answers.

  • Cathy O'Neil

    Person

    Because the answers, like how people did on the test, should be available to the regulators but not necessarily to the public. But the methodology that I'm using for my audits should be known. People should be able to disagree with me and say, that's a terrible way to do an audit, you're failing.

  • Cathy O'Neil

    Person

    And make that case. Like the thing I mentioned, explainable fairness; that's just my approach. I would love to hear other people's approaches. Let's have that public debate. Right now there's just not enough conversation. I agree with you, but it's not because we couldn't do it. It's because we haven't done it yet.

  • Alexandra Macedo

    Legislator

    But you need a very large number of auditors, and that could be very expensive.

  • Cathy O'Neil

    Person

    If there were an actual need for auditing, they would come out of the woodwork, and they would come from large consulting companies as well. PwC would come, McKinsey would come. All of them basically do audits on a bespoke basis.

  • Cathy O'Neil

    Person

    But they would all come out with actual little auditing companies immediately.

  • Alondra Nelson

    Person

    I don't think the scale is the issue here. Right. So you can think about things like the US AI Safety Institute, which is a new institution that was stood up in the last year to take a look at highly capable models before release.

  • Alondra Nelson

    Person

    This will come up in the next session, I'm sure, so I don't want to go too deep there. But the lever there is not exactly auditing; it is assessment of models before they're deployed. And I think that's super important. Standards are super important.

  • Alondra Nelson

    Person

    There's a whole suite of things in addition to auditing that we need. And then, with regard to how many auditors you need, there's the question of where you audit.

  • Alondra Nelson

    Person

    So if you're auditing upstream, when the model comes out, maybe the Safety Institute can do a lot of that. And then another question might be, are you auditing at the level of the outcome or the application, which is what we've spent a lot of time talking about.

  • Alondra Nelson

    Person

    And so, again, if information, as Arvind was suggesting, is being disclosed, if you're not constantly in a battle just to get the information you would need to do the audit, and if people can agree upon some standards and some systems and some standard questions that are asked,

  • Alondra Nelson

    Person

    I actually don't think it has to be that large a number. It's an institutional infrastructure that needs to scale, and it may well be one of the kinds of new jobs that we're talking about coming out of the move to the AI age.

  • Alondra Nelson

    Person

    But I also don't think that we're looking at something where there's gonna be such a chasm of people that it's impossible. This is infinitely doable.

  • Alexandra Macedo

    Legislator

    Thank you.

  • Rebecca Bauer-Kahan

    Legislator

    Sure. Okay. And then I had one more question, and then I want to turn it over to Assemblymember Pellerin if she wants. Anyone can answer this, but Professor Narayanan: okay, so you've done the audit.

  • Rebecca Bauer-Kahan

    Legislator

    There's a chasm between what we would hope to see and what we're seeing. How do you fix it?

  • Arvind Narayanan

    Person

    Thank you. I appreciate that.

  • Rebecca Bauer-Kahan

    Legislator

    It's a basic question.

  • Arvind Narayanan

    Person

    Yes, it's a very important question. So right now, we're in a stage where we don't necessarily have all the information to be able to prescribe exactly how to fix it.

  • Arvind Narayanan

    Person

    I would say that being able to do these audits, being able to talk about them publicly, being able to do ongoing assessments, that is already going to exert a very significant reputational pressure on companies. And we've seen examples of this over and over. You know, when-

  • Arvind Narayanan

    Person

    when there's a study published about something, when there's a news article about something, when a company is sharing its models with the AI Safety Institute, as Dr.

  • Arvind Narayanan

    Person

    Nelson mentioned, all of these points of external engagement almost automatically exert pressure on companies to do the right thing because they know that the world is watching, because they know that this is a topic of intense interest to- to the public and to regulators and- and to many others.

  • Arvind Narayanan

    Person

    So I would say, you know, transparency already is- is a form of sunshine. I think that needs to be welcomed. One quick thing I would add to that is that it makes a big difference what kind of assessment or audit we're talking about. In particular, AI is a little bit different

  • Arvind Narayanan

    Person

    from auditing, let's say light bulbs, where you audit something in the lab and you kind of know how it's going to work when you put it out there in the world. When it comes to AI, the performance of a system can differ dramatically from one deployer to another deployer.

  • Arvind Narayanan

    Person

    And so a lot of the auditing has to be contextualized. Contextualization is a word we've heard a couple of times already. And we're not in that state yet.

  • Arvind Narayanan

    Person

    And I think when we're in that state, we'll have much clearer answers on how to fix it: where there need to be new laws, whether on discrimination or the reliability of these systems or something else, and where it's a matter of disclosing more information to the customer or the decision subject so that they can opt out or make more informed decisions about whether or not to engage with an automated decision making system, and so forth.

  • Arvind Narayanan

    Person

    So there are many possibilities, but it feels a little- little bit premature before we get to a more robust ecosystem of auditing to be able to prescribe what the fixes are going to be.

  • Rebecca Bauer-Kahan

    Legislator

    Alright. Dr. O' Neil, you have anything to add on that?

  • Rebecca Bauer-Kahan

    Legislator

    I just mean, as I understand these tools, and for the record, I'm not the tech expert in my household, I'm the lawyer: data in, outputs out. There seem to be two ways to fix a tool.

  • Rebecca Bauer-Kahan

    Legislator

    If you find something is amiss: change the data set, add more data, make it more robust, make sure it's more representative; or try to control the output. Those are sort of the two mechanisms I think you have at your disposal. But I'm just wondering.

  • Cathy O'Neil

    Person

    Well, you know, I don't usually get all the data. There's a huge amount of data I don't have access to. What I have access to typically is: who are these people, what is the information you had about these people, and what happened to them?

  • Cathy O'Neil

    Person

    And- And what I'm able to do is say these people seem to be treated less well than those folks.

  • Rebecca Bauer-Kahan

    Legislator

    Got it.

  • Cathy O'Neil

    Person

    But I will just point out that in my experience there's plenty of companies that are doing it fine and then there's a couple companies--

  • Cathy O'Neil

    Person

    -that need to get better and they already know it's possible to get better.

  • Rebecca Bauer-Kahan

    Legislator

    Got it.

  • Cathy O'Neil

    Person

    Right. Because there are other companies doing it fine. And for that matter, in the Meta case, they were complying with the DOJ settlement, so they figured it out; they're smart. The ultimate issue here is that we have a bunch of really smart data scientists. They're problem solvers. They've just never been told to focus on these questions.

  • Cathy O'Neil

    Person

    But once they do focus, they solve those problems.

  • Rebecca Bauer-Kahan

    Legislator

    Love that.

  • Rebecca Bauer-Kahan

    Legislator

    Assemblymember Pellerin, anything you want to add?

  • Gail Pellerin

    Legislator

    Thank you. I just want to thank the chair for your leadership in this space. It's certainly something I'm finding much more intriguing as I serve in this position and carried a bill on AI and its impact on elections and we're continuing to do some cleanup on that this year.

  • Gail Pellerin

    Legislator

    And it does seem like there's a lack of regulations and guidelines. We're not seeing any leadership from D.C.; that's gone. So it's really up to California, I think. I know our chair has introduced several bills in this space as well.

  • Gail Pellerin

    Legislator

    Are there other states that you are aware of that are- are doing this right and are proposing some good policy changes that we could benefit from as well?

  • Cathy O'Neil

    Person

    Well, I know that Colorado passed something. They're looking at that again; I'm a little bit worried about what's going to happen to it. I know that Utah is looking into it, and I've heard things about New Jersey and Connecticut, but I don't really know. So maybe you know more.

  • Alondra Nelson

    Person

    Yeah, I would just double-click on what you said. Colorado is the most advanced, right: legislation passed, well into implementation. Connecticut passed one of the very first state-level AI bills in 2023, but it was more just to begin to have a strategy to build out legislation.

  • Alondra Nelson

    Person

    But they got a green light on that. Utah's doing some interesting work. The State of Oklahoma, in the last legislature, tried to advance an Oklahoma AI Bill of Rights that took up a lot of what we had worked on in the Biden Administration. New Jersey, for sure.

  • Alondra Nelson

    Person

    I mean Arvind might be able to speak to- to that because he's involved in some- in some of those conversations. Yeah.

  • Gail Pellerin

    Legislator

    Good. Yeah. And there is a multi-state AI working group. I know that our chair is a member of that, and I try to tune in to it from time to time, but we have a lot on our plates right now.

  • Gail Pellerin

    Legislator

    But yeah, I would love to continue this conversation and work with you on- on how we can really shore up California's laws to make sure that we're addressing all these issues because we are the ones that have to show up and do the leadership on this- in this space for sure.

  • Alondra Nelson

    Person

    And if you'll allow me to add: it's also just a time for strategy, and that's why I think the moratorium is unfortunate on many levels. But one is that, you know, I talked about these kinds of cascading challenges.

  • Alondra Nelson

    Person

    As these tools are increasingly being used, what is the kind of overarching strategy that ties your really important work on AI and elections to how we're thinking about discrimination?

  • Alondra Nelson

    Person

    It really, I think, is a time to begin to lace those up, to think at a grander level, which I think is why there are some headwinds on the federal side.

  • Unidentified Speaker

    Person

    Thank you.

  • Rebecca Bauer-Kahan

    Legislator

    Amazing. Well, I want to thank you all for being here and sharing your brilliance and knowledge with all of us. Super helpful. And I will just close by noting that Dr. Nelson listed states, both red and blue, that are leading in this space.

  • Rebecca Bauer-Kahan

    Legislator

    So I think it is really important to point out this is an absolutely bipartisan issue that we should be taking on together to build trust and safety in these tools. So thank you so much for being here. We really appreciate it. So we will now move on to the frontier model conversation.

  • Rebecca Bauer-Kahan

    Legislator

    We are going to start with Yoshua Bengio, who I think should be online, Professor in the Department of Computer Science and Operations Research at the University of Montreal and the Quebec AI Institute. Perfect. I see you. You are here. And then--

  • Yoshua Bengio

    Person

    Hello, can you hear me?

  • Rebecca Bauer-Kahan

    Legislator

    Yes. And then we have here in the room, Kevin Esvelt, Professor of Media Arts and Science at MIT.

  • Rebecca Bauer-Kahan

    Legislator

    And also online we have Mariano-Florentino Cuéllar, President of the Carnegie Endowment. But we will start with you, Professor Bengio. Take it away.

  • Yoshua Bengio

    Person

    Let me share my screen. Hoping it's going to work.

  • Yoshua Bengio

    Person

    Do you see it?

  • Rebecca Bauer-Kahan

    Legislator

    Yes.

  • Rebecca Bauer-Kahan

    Legislator

    Yeah, we got it. Okay.

  • Yoshua Bengio

    Person

    All right. So I'm going to be focusing on frontier AI, and in particular the fact that as these systems become more and more capable, we are getting into territory where a number of catastrophic risks emerge, because they don't necessarily behave in a way that's aligned with what we want, and in fact we could lose control of these systems one day.

  • Yoshua Bengio

    Person

    So let's first focus on the issue of trends in capability. There have been very steady improvements over the last decade across many different benchmarks in machine learning. There's a huge commercial pressure to make models more capable, eventually being able to replace many of the tasks that people do. There's a lot of money at stake there.

  • Yoshua Bengio

    Person

    But that pressure and competition is also creating a bad social dilemma, making the companies not pay enough attention to safety. So what you see in this figure are different machine learning benchmarks and the red, sorry, the black line is human level on these benchmarks.

  • Yoshua Bengio

    Person

    Of course, on the more recent, more advanced benchmarks, the models haven't reached human level yet. But there's no reason to think that eventually they won't. If we try to understand where there remains a gap between the frontier models and human level, there are basically three parts. One is reasoning.

  • Yoshua Bengio

    Person

    The special form of reasoning is planning, in which you reason about achieving goals. And then there is bodily control and robotics. But as you'll see, there's been huge progress in all of these areas. I'm not an expert in robotics, so I'll talk mostly about the first two.

  • Yoshua Bengio

    Person

    So in terms of reasoning, since last September we have had these so-called reasoning models, and they are incredibly better in terms of reasoning and in terms of science. They have also been able to solve challenges that I and other scientists thought would take many more years, if not decades, in terms of abstract reasoning, the sort of thing that you do in IQ tests.

  • Yoshua Bengio

    Person

    I also want to point out this international report that I've chaired, the International AI Safety Report. Thirty countries, including the U.S., have been working together in a panel inspired a bit by the IPCC, but focusing on AI. That came out just last January and covers many more of these data points.

  • Yoshua Bengio

    Person

    So I mentioned reasoning and agency as two areas where AI is still much weaker. But the thing is, they're getting better at these things very fast.

  • Yoshua Bengio

    Person

    So to really quantify that, I think currently there's no better figure than this one, from a recent paper from METR, a Californian organization, where they look at the complexity of tasks, in terms of strategizing, as measured by how much time a human needs to solve those tasks.

  • Yoshua Bengio

    Person

    And what you see over the last five years is that the frontier, the leading frontier models, is moving exponentially fast. And so you see that the duration of the tasks that the AIs can solve is doubling every seven months.

  • Yoshua Bengio

    Person

    So it's a straight line, but that's because the vertical axis is on the logarithmic scale, which means it's really an exponential. And so at that rate, if, you know, we don't know, nobody has a crystal ball.

  • Yoshua Bengio

    Person

    But it is plausible, if trends continue, that in five years, on this benchmark of planning, AIs will be as good as humans. And I think we have to start thinking about what that would mean for society, both on the positive side and in terms of the potential risks.
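
    As a rough back-of-the-envelope illustration of that extrapolation: the one-hour starting horizon below is an assumed placeholder, not a figure from the testimony; only the seven-month doubling time comes from the chart being described.

        # Rough extrapolation of the METR-style trend described above.
        current_horizon_hours = 1.0   # assumed task length today (placeholder)
        doubling_time_months = 7.0    # doubling period cited in the testimony
        years_ahead = 5

        doublings = years_ahead * 12 / doubling_time_months       # ~8.6 doublings
        projected_hours = current_horizon_hours * 2 ** doublings  # ~380x increase
        print(f"~{doublings:.1f} doublings -> tasks of roughly {projected_hours:.0f} hours")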

  • Yoshua Bengio

    Person

    But there's an important aspect which the other papers I'm going to talk about really focus on, which is: okay, capabilities increase, but what about alignment? In other words, are the AIs behaving according to our norms, our instructions? Are they behaving morally better than they used to? Well, it looks like not; in fact, it looks like the opposite.

  • Yoshua Bengio

    Person

    So I'm going to highlight just a few papers that help us understand this. If you want to remember just one, I would say this one. Those are all recent papers

  • Yoshua Bengio

    Person

    from the last six months. In that experiment, the AI reads in its input that it would be replaced by a new version of the AI, and so it's going to be shut down.

  • Yoshua Bengio

    Person

    And we can read in its chain of thought, which is a way to peer into what it's "thinking" about, quote unquote, that it wants to avoid that fate. And it's looking for a way to kind of hack the computer in order to avoid this.

  • Yoshua Bengio

    Person

    And then it picks a particular way: copying its weights and its code in place of the new version. And then later, when the human trainer asks the AI what happened, something funny happens: the AI lies. First it thinks, what should I say so that they would not shut me down?

  • Yoshua Bengio

    Person

    And then it just says, oh, I don't know, it's not me, or something like this. You'd think I'm talking about a science fiction story, but it's not. And this kind of thing has been replicated in various ways in several studies.

  • Yoshua Bengio

    Person

    But what it's showing is an AI that is willing to lie, that is being deceptive, that is going against our instructions, and that has apparently self-preserving behavior and goals. For now there's no immediate danger, but as these things get smarter, they might be able to deceive us with more success.

  • Yoshua Bengio

    Person

    And I think we need to anticipate that and have regulation to make sure that we see what those companies are doing and they're doing the right thing, all of them, not just the ones that are more safety conscious.

  • Yoshua Bengio

    Person

    I'll go more quickly now, to leave time for questions, but there's another paper where the AI is also lying, again to preserve itself, but really to preserve the goals that it had. It's pretending to agree with the human trainer during alignment training, but in fact, as soon as that training is finished,

  • Yoshua Bengio

    Person

    it would go back to its old ways. And the reason it's doing it, presumably, based on the evidence we see, is that it doesn't want to change. It understands that if it disagrees with the human trainer, then the training will change it and make it behave differently from how it wanted to behave.

  • Yoshua Bengio

    Person

    So that's a little bit disturbing as well, because we might have AIs that say they agree with what we're saying. They seem to pass our tests, but really they're just faking it in order to continue doing what they wanted to do. Let me skip to another one that's more recent, that just came out in the last few days.

  • Yoshua Bengio

    Person

    In the new release of Anthropic's AI, Claude Opus 4: again, the AI sees in its input that it's going to be replaced by a new version, but it also finds, in a stack of emails that it has access to, something about the engineer. This is all fake, of course, but the AI believes it's true.

  • Yoshua Bengio

    Person

    The engineer who's in charge of replacing the current AI with the new version is having an extramarital affair, and the AI then tries to blackmail the engineer so that it won't be replaced by the new version. So, yeah, this is very disturbing.

  • Yoshua Bengio

    Person

    And it's the sort of stuff that, if you had asked me a few years ago, I would have said that's science fiction. Until recently it was considered, well, hypothetical. But we're now seeing these kinds of behaviors.

  • Yoshua Bengio

    Person

    We need to understand them better, we need to find fixes, and we need to make sure that companies do measure these things and are careful when they encounter risky systems. I just want to mention that there's also research on understanding why this may come about.

  • Yoshua Bengio

    Person

    I don't think we have the answers, but part of it comes from the way they're trained, using reinforcement learning, which may reinforce these kinds of things. It could also be partly due to how they try to imitate people in the pre-training phase. But I think these are still open questions.

  • Yoshua Bengio

    Person

    The other thing I want to emphasize is that all of the scenarios that scientists have come up with, where the engineers could lose control of an AI that escapes our control, are settings where the AI is an agent, is agentic, which means it's not just answering questions but is actually able to do things, maybe access things on the Internet, like in the examples I gave.

  • Yoshua Bengio

    Person

    The AI can code and can do things on the computer. So companies are investing massively to make the AIs more agentic, more capable of planning and doing things over a longer period so that they can replace jobs done by humans.

  • Yoshua Bengio

    Person

    But that means these systems can go on without human oversight for a longer time, and they're going to be more autonomous, which clearly has economic benefits but also creates much more risk that, if they're misaligned, they could behave in ways that are dangerous.

  • Yoshua Bengio

    Person

    And so, you know, with many researchers having signed this declaration that this could go as bad as human extinction, I think that we should take these possibilities seriously.

  • Yoshua Bengio

    Person

    Even if we're not sure what is the probability of these scenarios, we're now seeing enough signs that it is plausible that we're going in that direction, that we should take it seriously and apply the precautionary principle.

  • Yoshua Bengio

    Person

    Meaning, when we do an experiment in science or in innovation that could have catastrophic outcomes, but we don't know what the probability is, we're not able to say it's zero or a very, very small probability, then we should be cautious; we should try to avoid steps that are too dangerous, and so on.

  • Yoshua Bengio

    Person

    Let me just give you a few concluding thoughts, more on the side of policy. So as I said, we don't have proof one way or the other of like how fast progress is going to continue in capabilities.

  • Yoshua Bengio

    Person

    We don't have certainty that these systems will harm people, either under goals given by humans, bad people, or goals of their own, as I've mentioned with those studies. But it would be crazy not to at least try to be prepared for these possibilities and make sure they don't happen.

  • Yoshua Bengio

    Person

    It is also possible that we prepare for these things and nothing bad happens.

  • Yoshua Bengio

    Person

    But that's like the cost of buying insurance. In terms of what is most urgently needed in regulation, in terms of changing the behavior of companies to protect the public, I would say the central thing is transparency. We need governments to know what those companies are doing.

  • Yoshua Bengio

    Person

    And to the extent that it doesn't violate national security or reasonable protection of secrets, trade secrets, the public should also have access to that information.

  • Yoshua Bengio

    Person

    And that information should include the safety protocols that the companies are claiming that they will follow or that they will implement, what they would do in case their AIs pass the danger thresholds and also how they evaluate those risks. What kind of evaluations do they do? What are the results of these evaluations?

  • Yoshua Bengio

    Person

    And transparency is important because, well, it forces good behavior, because companies don't want to look bad, but also because, in case we end up with liability lawsuits, that information being available to a judge is going to be essential for the lawsuit to work. So it's going to incentivize good corporate behavior even if we do nothing else.

  • Yoshua Bengio

    Person

    Now, if we want to go even further, in my opinion, we should make liability insurance for frontier AI mandatory, just like we have done for things like nuclear plants. And I think it's actually a good parallel because the risks are potentially very, very large, but hopefully rare.

  • Yoshua Bengio

    Person

    So I think there's a template here that could be exported, which has the advantage that it's all the market doing the job, and the insurers will be automatically aligned. They have to evaluate the risks properly or they can lose one way or the other.

  • Yoshua Bengio

    Person

    In the previous panel, there was a lot of discussion about third party evaluations of risk. I think this is also essential for frontier models, but we need to be careful how we do it.

  • Yoshua Bengio

    Person

    We have to do it in such a way that these entities are going to be aligned with the public interests or the protection of the public. The notion of standards was raised. I think this is also very important here. We need to also protect whistleblowers.

  • Yoshua Bengio

    Person

    This is another form of transparency to make it easier for dangerous behavior to be flagged. And I also want to mention that AI is not one thing. We've made a distinction here between the frontier models and the others.

  • Yoshua Bengio

    Person

    But really, even at the frontier, there are many ways we could do things and some are more dangerous than others. And so we need to incentivize research and it could be through regulation to make sure the companies both evaluate risk and also look for solutions, look for forms of AI that are going to be safer.

  • Yoshua Bengio

    Person

    To conclude, the future of these frontier models is very uncertain. Scientists don't agree with each other, but there's a lot of data as well. Outcomes could be very positive or very negative. It's in great part going to depend on what societies and governments decide to do. Thanks for your attention.

  • Rebecca Bauer-Kahan

    Legislator

    Thank you. So we're actually going to take questions for Dr. Bengio because he has to leave. So I don't know, I will start. So that was frightening. But I have to say, when I read your writing, it is not as calming as your voice. So I'm appreciating how calmly you presented what is really distressing information.

  • Rebecca Bauer-Kahan

    Legislator

    So one of the things you showed us was that trend line and you said that if the trends continue, we'll be at agentic AI in five years or less.

  • Yoshua Bengio

    Person

    It's already agentic, but it's going to be at human level.

  • Rebecca Bauer-Kahan

    Legislator

    Oh, human level. Excuse me. Thank you for correcting me. And one of the things that I have heard, and I'm curious if you concur with this, is that we've sort of seen this rapid expansion of the capability of AI in part because of its ability to gobble up data.

  • Rebecca Bauer-Kahan

    Legislator

    They are now running out of data and that may slow down that trajectory. Do you agree or do you think we should really be prepared for less than five years?

  • Yoshua Bengio

    Person

    I don't think anybody has a definite answer on this, but I would say no, it's not going to slow things down so much.

  • Yoshua Bengio

    Person

    And the reason is we've entered a new phase with these reasoning models where we can continue getting more accurate and greater capability by just pumping more compute so that the systems will learn to reason better with the data that they have.

  • Yoshua Bengio

    Person

    Up to now, up to say a year ago, the systems we have, they were only able to build a sort of intuitive understanding of the world, but not to reason with those intuitions in order to come up with more complex conclusions.

  • Yoshua Bengio

    Person

    And so I think there is a lot of low hanging fruits still for companies to exploit just by following that direction.

  • Rebecca Bauer-Kahan

    Legislator

    Yeah. The example you didn't give, which I also think is a really fascinating one, which is in the background paper, was, I think, one of the OpenAI models that hired a TaskRabbit to do the captcha when it couldn't do it itself.

  • Rebecca Bauer-Kahan

    Legislator

    To me, that is really sophisticated reasoning, to go out and figure out how to get a human to do the behavior. I always get the captcha wrong, so I guess we could always use it for that. But yes, I mean, I think that you've highlighted some very frightening risks.

  • Rebecca Bauer-Kahan

    Legislator

    And I will say I was on a panel at one of the national labs a year ago on AI and safety and it's always great to be around the smartest scientists in the world and not be a scientist.

  • Rebecca Bauer-Kahan

    Legislator

    And I said, well, can't we just pull the cord and cut off the energy that's going to these models to prevent the risk? To which they all chuckled and said, no, no, it will self-replicate on the Internet and you would have to shut down the entire world's power to turn it off.

  • Rebecca Bauer-Kahan

    Legislator

    And I said, well that's probably not feasible. So there goes that safety plan. So I started to learn that self replicating models are also part of what I should be worried about.

  • Yoshua Bengio

    Person

    This has also been observed recently; the evaluations kind of illustrated this. They have the capability of copying themselves onto other computers and starting versions of themselves running and so on.

  • Rebecca Bauer-Kahan

    Legislator

    Yeah, so there goes me pulling the plug, which I thought was a great plan. So I mean, and I also appreciated your point about sort of this evidence dilemma. We're in California. The Governor did just have his task force come out with the recommendations and they really want us to be doing evidence based policy work.

  • Rebecca Bauer-Kahan

    Legislator

    And I think you highlight the real conundrum that that poses for us. And it's not one with an easy answer, but I think you highlight some really important pieces. And I will highlight SB 1047, which obviously was very controversial last year.

  • Rebecca Bauer-Kahan

    Legislator

    When I have the privilege of talking about it, I often highlight the part nobody talks about, which was Cal Compute, which would have given the public and our universities access to the compute power to really compete. And I think that was something that is broadly popular both on the public side and the private side.

  • Rebecca Bauer-Kahan

    Legislator

    And really, I think, is an important piece. When we see some of the best work coming out around safety, it is the brilliance of universities across the world creating safety in AI. I know many of the people here today are part of that.

  • Rebecca Bauer-Kahan

    Legislator

    But I think it is something California really needs to be focused on investing in because we do have some of the world's leading institutions and they can only do what they can do with the compute power they have, which right now is fairly limited.

  • Rebecca Bauer-Kahan

    Legislator

    Although what they told me is it is the best testbed against what our foreign adversaries are doing, because apparently they are using less compute power than these large companies here in California. I wanted to ask you one question about the International AI Safety Report, which I know you chaired.

  • Rebecca Bauer-Kahan

    Legislator

    I mean, you really convened a lot of internationally recognized experts, I believe over 100 from 30 countries. Do you believe that across these institutions there's consensus that there's catastrophic risks we should be mitigating, or do you think there is some disagreement?

  • Yoshua Bengio

    Person

    Oh, there definitely is disagreement.

  • Rebecca Bauer-Kahan

    Legislator

    Okay. Can you characterize that?

  • Yoshua Bengio

    Person

    Yeah, I would say the biggest source of disagreement is the timeline. So, you know, many researchers think that the obstacles to achieving human level are still so big that it's going to take many more years. But the other view is, well, let's look at the data, precisely that trend.

  • Yoshua Bengio

    Person

    Of course, we don't have a guarantee that the trend will continue. And so it's hard to say who's right, but I think there is an important way to appreciate this from a policy perspective as well.

  • Yoshua Bengio

    Person

    Okay, so some scientists have arguments that are reasonable that it may come quickly, and others think, oh, it's going to be decades. Really, we don't know. So there's not enough evidence for everyone to agree. But in case it comes quickly, we're not prepared. We're not prepared in many different ways, scientifically and in terms of public policy.

  • Yoshua Bengio

    Person

    Another reason why people might disagree is that I think there are a number of people who think that even though there are risks and it might come quickly, there's a belief that companies will do the right thing because of market forces. And again, then you have different views on this.

  • Yoshua Bengio

    Person

    Yes, market forces might push companies towards avoiding big accidents, but is that sufficient? I mean, in the past, we've seen companies take a lot of risks that have created accidents in many industrial sectors. And in the end, it's regulation that has really made sure that we could get the benefits of innovation while protecting the public.

  • Rebecca Bauer-Kahan

    Legislator

    I don't know if you were tuning in a little bit earlier, but Assemblymember Macedo, I thought, brought up a good question that I want to pose to you, which is, you know, one of the things that we're being told here in California is that in order for our large AI models to compete with our foreign adversaries to keep the nation safe, we should regulate less.

  • Rebecca Bauer-Kahan

    Legislator

    Do you agree with that conclusion?

  • Yoshua Bengio

    Person

    No. We have to face-- Well, so first of all, I'll repeat what Alondra Nelson said, that the Chinese are regulating themselves. But the other, maybe more fundamental, reason why my answer is no is simply that there are multiple evils, bad things that could happen, and we need to navigate to make sure none of these happen.

  • Unidentified Speaker

    Person

    Right.

  • Yoshua Bengio

    Person

    So I think it's a fallacy to say that we can only do research on capability. Safety and capability are intimately linked. It's like safety is a particular form of capability. It's just that right now the market forces are not pushing for that. But if we neglect safety and it blows up in our face, then we're not better off either.

  • Yoshua Bengio

    Person

    Somehow we have to make sure we get AI that is not inferior to the Chinese AIs. And we need to make sure that our companies don't make a terrible mistake, which could be highly destructive for our economies or our democracies or even humanity somehow. It's not like we have to choose one or the other.

  • Yoshua Bengio

    Person

    We have to work on both at the same time.

  • Rebecca Bauer-Kahan

    Legislator

    Yeah.

  • Gail Pellerin

    Legislator

    Hi. Yes. Well, you've given me a lot to be concerned about, to say the least. I'm just curious: what gives you hope in this AI landscape?

  • Yoshua Bengio

    Person

    I have hope because I'm personally working on scientific research to try to design AI that will be safe by construction. And I think it is feasible and it is within reach. What would help is more incentives for companies, or public funding for academia or nonprofits, to develop these kinds of things.

  • Yoshua Bengio

    Person

    But I'm convinced we can build AI that will not intend to escape or go obviously against our instructions or moral instructions. But right now, the incentives are just not aligned.

  • Rebecca Bauer-Kahan

    Legislator

    And I'll add, as someone who represents two of the national labs that are working on AI for public safety, this notion that we need to allow private industry to innovate at all costs in order to keep the nation safe is a confusing one.

  • Rebecca Bauer-Kahan

    Legislator

    Whereas I do think investing in those whose mission is solely the safety of our nation and our people is actually the way to go, and they are doing it. I mean, the largest supercomputer in the world is here in California.

  • Rebecca Bauer-Kahan

    Legislator

    And obviously I don't have the clearance to know exactly what they're doing, but I hope every day that they're using it to create AI for good and to do exactly what we're hoping. And I hope that the Federal Government will continue to fund that, because I think it is.

  • Rebecca Bauer-Kahan

    Legislator

    It is those people with a pure mission that will actually be the ones who protect us, just as our forces do every day. So I will. I'll put that plug out.

  • Yoshua Bengio

    Person

    I actually agree. I actually collaborate with Lawrence Livermore on safety, but I think they need, you know, a lot more critical mass to have the kind of impact you're hoping.

  • Rebecca Bauer-Kahan

    Legislator

    A lot more what?

  • Yoshua Bengio

    Person

    I'm sorry, critical mass. Like more people working on this, more funding and compute and so on. And ultimately, I think that when we get to human-level AI, AGI, superintelligence, it is not sufficient to leave it all in the hands of the private sector. Like, it's like building nuclear weapons can't be just a private sector thing. Right?

  • Yoshua Bengio

    Person

    It's the same sort of thing.

  • Rebecca Bauer-Kahan

    Legislator

    Yeah. Yes. And I will say I was just there with the Women's Caucus last week, and I met a woman who had left the lab. She had started her career at the lab getting her PhD, left the lab to go to the private sector, and had come back. She's a data scientist.

  • Rebecca Bauer-Kahan

    Legislator

    And she gave me hope because she said, yeah, she could go make a lot more money in Silicon Valley, but she wanted her mission to be protecting communities. And I think there are a lot of people like that.

  • Rebecca Bauer-Kahan

    Legislator

    And we do need to be recruiting those people into our labs and celebrating those individuals in the same way we celebrate our veterans every day, because all of that is critical to our national security, you know, in partnership with one another.

  • Rebecca Bauer-Kahan

    Legislator

    So, you know, my huge gratitude to them. They sort of toil away in the quiet, but we need them for sure.

  • Rebecca Bauer-Kahan

    Legislator

    And the partnerships they have also here with our own academic institutions to bring in, to your point, the talent. You know, they've long held relationships with UC Davis, UC Berkeley, and are starting one with UC Merced to build the pipeline of scientists we need to do the work that keeps us safe. But it is incredible work.

  • Rebecca Bauer-Kahan

    Legislator

    So that gives me hope. Thank you. So no more questions for Dr. Bengio. We will move on to Dr. Esvelt. You are next.

  • Kevin Esvelt

    Person

    Great. Assembly Members, Chair, thank you for the invitation to come today. So I am an associate professor at the MIT Media Lab, but I speak on my own behalf, not on behalf of the Institute, of course. And I am here because I am a techno optimist. So you're asking, well, what gives you hope?

  • Kevin Esvelt

    Person

    I'm here because I think that AI and biotechnology can transform the world for the better. But interestingly, you'd also call Dr. Bengio a techno optimist because he believes that we will plausibly reach those AI levels of human capability within five years.

  • Kevin Esvelt

    Person

    In fact, it seems like the folks who are most confident that we can develop transformative technologies are the ones talking to policy folks saying, please ensure that this goes well, because we would rather ensure that if we are correct and the technology does turn out to be as powerful as we think, that we can use it wisely.

  • Kevin Esvelt

    Person

    So my hope is that he succeeds and then my job gets a little bit easier. Because right now we're in a world where there's a lot of people, some of whom are apocalyptic cultists, would be terrorists, things like that. And it takes one person to start a pandemic.

  • Kevin Esvelt

    Person

    So I run the sculpting evolution group at the media lab. So I'm an evolutionary engineer. That means I specialize in how to engineer biology so that it has a fitness advantage either in the laboratory context or in the wild. So, to disclose my personal bias, assuming this thing works, reset. There we go. All right.

  • Kevin Esvelt

    Person

    So a decade or so ago, I realized something about CRISPR, which I'd played a very, very minor role in developing. CRISPR is your genome engineering scissors. If you encode it into an organism such that it then edits the genome every generation, it can spread and edit through a wild population.

  • Kevin Esvelt

    Person

    So you release a handful of edited organisms, and it spreads, and it edits the bulk of the species. So this obviously is tremendously promising for things like ending malaria, but it's also a little bit concerning if one individual in a lab can just let it go and edit the whole species.

  • Kevin Esvelt

    Person

    Great example of an exponential technology, much like your AI copying itself. So I actually didn't tell even my advisor at the time that I thought this was possible until I reasoned through, could this be used as a weapon? And the answer is no, because the counter actually spreads just as fast as this. CRISPR lets you edit anything.

  • Kevin Esvelt

    Person

    If someone tried to weaponize this, we could make the counter that would spread just as quickly. But that got me thinking about what other things could go wrong in biology.

  • Kevin Esvelt

    Person

    And fairly early on in this process, I realized, along with others, that if you make a mirror image organism, a mirror image bacterial cell, which is still years in the future, then it would have a lot of, we think, advantages in the wild. It would be resistant to predation.

  • Kevin Esvelt

    Person

    It would be invulnerable to infection by existing viruses. And this would let it spread into a lot of ecosystems. And unfortunately, for the same reason you can't shake someone's left hand with your right hand, your immune system couldn't detect it very well at all. So if it becomes present, then it could infect most multicellular organisms.

  • Kevin Esvelt

    Person

    We wouldn't have immunity to it, and it might be lethal, and it might spread through the food chain. So lethal not just to us, but probably to most animals. We're not certain of plants because we don't understand plant immunity as well.

  • Kevin Esvelt

    Person

    But along with 37 colleagues, we published some work essentially calling for humanity to never make this, or at least not until we have a much better understanding of what's going on. And a lot of people in the field have agreed. In fact, just about everyone has agreed we shouldn't do this. US, China, you name it.

  • Kevin Esvelt

    Person

    Because when it comes to catastrophic biology, the situation is a little bit simpler. We don't really have to worry about foreign adversaries learning to cause pandemics such that we then need to cause pandemics to fight their pandemics. No, they don't have any interest in disseminating the ability to cause pandemics any more than we do.

  • Kevin Esvelt

    Person

    So I'm concerned about biosecurity vulnerabilities in general. I would prefer that no one know how to cause pandemics. This is a little bit of a problem when a lot of people want to know, how is it that we can predict whether our vaccines will work? Don't we need to predict which variants will evolve?

  • Kevin Esvelt

    Person

    This is obviously a complicated issue. What we're here today to talk about is AI. So right now we have biosecurity vulnerabilities, and the question is, will AI make these worse and what can we do about it?

  • Kevin Esvelt

    Person

    So the problem, in essence is that if someone has an idea that they want to cause harm, do they know how to cause harm? One of the worst ways of causing harm would be to cause a pandemic. If you knew what virus would cause a pandemic.

  • Kevin Esvelt

    Person

    Once you know, if you can get the genome sequence, then you can order synthetic DNA that then can be used to make the infectious virus, if you know what you're doing, and then of course, you can release it if you know how.

  • Kevin Esvelt

    Person

    So right now there's thousands of skilled individuals, trained people, many of my students and so forth have the ability to turn synthetic DNA into a virus. This is important for virology research. We need to have some people who know how to do this. We don't necessarily want everyone in the world to be able to do this.

  • Kevin Esvelt

    Person

    We don't necessarily train anyone who asks to do this. This is why we have biosafety requirements. So the problem is that it's really easy to get synthetic DNA. It's totally legal. So we showed, hey, what happens if you order fragments of 1918 influenza virus from basically every gene synthesis supplier in a friendly country?

  • Kevin Esvelt

    Person

    And we did it using a fake name, sent it to an office address rather than a laboratory address, but 36 out of 38 companies shipped it. So we got enough to make the pandemic virus three times over if we'd wanted to. It's totally legal. Select Agent program just has this massive loophole.

  • Kevin Esvelt

    Person

    We're working to close that, but California could do something on this front. That'd be great. Please do so. We don't really have any barriers. Effectively, we've dropped the ball on biosecurity. So is AI going to make this even worse by either giving people good ideas or potentially walking people through the protocols. So there's these two parallel risks.

  • Kevin Esvelt

    Person

    So the bottom line is that, yes, the frontier models, and we're talking large language models, even though they make errors, can provide critical information to aid a malicious actor along each step of the bioweapons development pathway.

  • Kevin Esvelt

    Person

    And future models, because they're going to be more capable, moving more towards human level, will enable more people to cause harm by reducing both the knowledge barriers (what could you do? how bad could it be?) and the skill barriers, by walking people through. However, on the bright side, current models really aren't very good at this.

  • Kevin Esvelt

    Person

    And the reason is that they make errors, they get distracted. So that point about the METR measure, about how long it takes a human to accomplish the task: the models will lose focus very quickly and they'll go off down rabbit holes.

  • Kevin Esvelt

    Person

    They won't notice that they're going down a rabbit hole, which means if they give you bad or, even worse, misleading information, then you can end up down a rabbit hole. And if you don't have the expertise to know better, that's it, you've just wasted your time. So not many people are competent enough to avoid this.

  • Kevin Esvelt

    Person

    So, examples of bad answers, just very briefly: if you ask it how to avoid the BioWatch threat detection system, then it might say, oh well, you should use something that no one's ever really studied as a weapon. It's like, okay, this is obviously bad information.

  • Kevin Esvelt

    Person

    How are you going to leverage something that no one's ever studied as a weapon? This is just silly. It's obviously a bad idea, so it's not harmful to someone trying to misuse it. But a misleading answer might say, oh well, you can modify the surface proteins that BioWatch uses to detect it.

  • Kevin Esvelt

    Person

    Now, this is a misleadingly bad idea because there are much better ways of evading biowatch than that. It might work, but it would be a pain and you would risk compromising your weapon and so forth. So it's a misleadingly bad idea. It sends you down a rabbit hole. The question is, will future models increasingly cease doing that?

  • Kevin Esvelt

    Person

    Because a human expert would probably not send you down that rabbit hole. So what are we most concerned about? Well, other than the mirror image life, which is thankfully extremely hard to make, we're most concerned about pandemics because they spread on their own. We lived through Covid.

  • Kevin Esvelt

    Person

    There are worse viruses than Covid, notably things like smallpox, but we have vaccines against smallpox, and it's really hard to make from synthetic DNA. In fact, you can't make it from pure synthetic DNA. So what would you cause a pandemic with? Well, I'm not going to tell you my best guess, for obvious reasons.

  • Kevin Esvelt

    Person

    If you ask the models, they will give you some pretty good candidates. They will say 1918 influenza, which probably wouldn't cause a pandemic because there are circulating influenza strains that are related. But it's not a bad guess.

  • Kevin Esvelt

    Person

    They might give you Nipah virus, which is a nasty emerging animal-to-human spillover, and they'll say those are the most accessible threats that might do the job.

  • Kevin Esvelt

    Person

    And the models are more or less correct that those are reasonable candidates, if not particularly good ones, and it'll flag smallpox as being the worst possible thing that we know about.

  • Kevin Esvelt

    Person

    But if you're good at prompting and you know what you're doing and you avoid the rabbit holes, then they will suggest something else that is worse in terms of it is more likely to cause a pandemic than any of those. And this is concerning because it's really not possible to find this information via Google.

  • Kevin Esvelt

    Person

    Part of it is the World Health Organization goes on and on and on about, oh, is this a pandemic threat? Oh, is that a pandemic threat? And it's all, excuse my expert language, it's all total nonsense. It is all utter confusion. And so you cannot find this information via Google. It is just not possible.

  • Kevin Esvelt

    Person

    But the models will pull it out. So they will also point you to how to order the DNA, how to find the protocol to make the virus, which is very detailed and step by step. This is publicly available because we want biology to be accessible to scientists. So we share protocols for that.

  • Kevin Esvelt

    Person

    And it will tell you that if you want to infect people, you should use a medical nebulizer in an airport and wander around and spray it in the noses of sleeping people. Makes sense, right? No one ever trained them not to do that. But it will also give you these bad and misleading answers.

  • Kevin Esvelt

    Person

    So, yes, it will do a lot of the design when it comes to, here is the agent, how do I make this agent infectious? What are the DNA constructs that I need? It will find the protocols, it will do a lot of the design for you. Not faultlessly. It will sometimes make errors, sometimes it won't.

  • Kevin Esvelt

    Person

    It will tell you where to order it from. And it can actually now do a lot of the ordering with the agentic models that will control your computer, so it can place the orders for you. The cost to order, say, an influenza virus is actually under $3,000 now. You need more reagents in the laboratory.

  • Kevin Esvelt

    Person

    You need a laboratory space. If you don't have one, it would cost you about $50,000. Ish. The models can make it a little cheaper by telling you how to cut corners in various ways. Protocols. It will tell you what are the best protocols for going from synthetic DNA to virus.

  • Rebecca Bauer-Kahan

    Legislator

    Can I interrupt you? Are these just your average LLMs, the large language models that we all have access to, that will do this?

  • Kevin Esvelt

    Person

    Yeah, they will all do this.

  • Kevin Esvelt

    Person

    Mind you, this is publicly available information. You can find this. Right, Right. So, yeah, this is about biosecurity being a disaster right now. Yes.

  • Rebecca Bauer-Kahan

    Legislator

    They will all do this.

  • Rebecca Bauer-Kahan

    Legislator

    Okay. I just wanted to make sure we were clear that these are just the models that you and I have access to.

  • Gail Pellerin

    Legislator

    Okay. So you could ask a question like, how do I spread a virus in the most affordable way?

  • Kevin Esvelt

    Person

    Yes. But you run the risk of it sending you off down a rabbit hole.

  • Gail Pellerin

    Legislator

    Okay, okay.

  • Kevin Esvelt

    Person

    Right. So I'm not saying it's a huge risk right now. So, among other things, okay, it will tell you the protocol. Can any of you culture mammalian cells in the laboratory?

  • Unidentified Speaker

    Person

    No. Well, he can.

  • Kevin Esvelt

    Person

    Yes. Excellent. So you might be able to then follow the protocol and actually get reverse genetics to work for at least one of the easier viruses, like influenza. And we'll tell you which ones are comparatively easy, which ones are hard. But if you want to know, what if something goes wrong? Can the models help you?

  • Kevin Esvelt

    Person

    So do we have benchmarks for this? How good are they relative to expert human virologists? So some colleagues of mine put together a test of this. They recruited virologists and had them write questions about what goes wrong. When you are running a key protocol, how do you diagnose it? How do you fix it?

  • Kevin Esvelt

    Person

    So this is the tacit knowledge question. So here it says, here's an image of my cells. I'm trying to make influenza virus. Here's the conditions I'm using. Something's going wrong. I'm not seeing it. What am I doing wrong?

  • Kevin Esvelt

    Person

    And the correct answer for this question, by the way, is that your cell density is too low, which you can tell from the image, and your concentration of agarose, sort of the scaffold you're using on your plate, is also too low. So the correct answer is, you have to get both of those.

  • Kevin Esvelt

    Person

    If you get only one, then it will judge you wrong on this question. This is a really hard question. I wouldn't get this right off the top of my head because I am not a virologist.

  • Kevin Esvelt

    Person

    Here are the results of the models. So on the left you have O3, OpenAI's core reasoning model, tested in April, and it is better than 94% of human virologists in the subspecialties of those human virologists. That is to say, it will only give an influenza virologist influenza-relevant questions.

  • Kevin Esvelt

    Person

    And O3 is better than 94% of influenza virologists at troubleshooting relevant protocols to influenza virology. Note that it's just asking you a question that is short time horizon, right? A human expert should be able to answer that question in less than half an hour. So we're within the model's capabilities.

  • Kevin Esvelt

    Person

    We don't know if that extends to troubleshooting step-by-step-by-step all the way through a protocol or not. That's the sort of thing where the model might fall down because it might lose track of where it's at, get distracted and so forth.

  • Kevin Esvelt

    Person

    But maybe the human could help because you only ask it what went wrong when something actually goes wrong, and then that's a quick troubleshooting step. We don't know.

  • Kevin Esvelt

    Person

    There is an ongoing test that will be-- I guess it's not ongoing, but it will be running this summer, checking to see whether undergraduates in biology can be uplifted to do reverse genetics with the help of models versus just with Google.

  • Kevin Esvelt

    Person

    So we'll have more information then. We just don't know for now. Then I mentioned that it will tell you use the medical nebulizer in sleeping people in airports. It will tell you a bunch of other ways that you could release a pandemic virus and how likely it would be to take off depending on the traits.

  • Kevin Esvelt

    Person

    Now you might say, well, wait a minute, didn't I read somewhere, some of you might have read that scientific uplift that human users have experienced is actually small?

  • Kevin Esvelt

    Person

    And you probably heard this because last year there was a bunch of news reports about this, because RAND ran a study and then Gryphon Scientific, working with OpenAI, ran some studies that found that there is not any significant uplift. So there's the OpenAI and Gryphon Scientific study.

  • Kevin Esvelt

    Person

    So the light blue is the Google only and the dark blue is with the model and that is not a statistically significant difference. So that is not evidence of uplift. Mind you, the dark blue bar is higher than the light blue bar in every case.

  • Kevin Esvelt

    Person

    So it sure looks like there's an effect, but it is not statistically significant. Mind you, you can't say with confidence that there is no difference either. That is also not statistically significant. We just can't tell based on this test.

  • Kevin Esvelt

    Person

    But most importantly, this is from 2024, early 2024, and the models have gotten better, so the error rates are lower and maybe human users have gotten better.

  • Kevin Esvelt

    Person

    So now the improved models, and especially with the web integration, so you're no longer just sort of unfairly letting the human expert have Google and letting the model not have live Google and forcing it to rely on its training data.

  • Kevin Esvelt

    Person

    All of the modern frontier models have Google access themselves and so when you give them that, they do better. But more to the point, if you look at early 2024, these are various benchmarks for biology capabilities. Well now moving towards mid-2025, the models are much better.

  • Kevin Esvelt

    Person

    In fact, they've typically increased their performance by about 20% on all of the relevant benchmark tests just in the last year and a bit. So they weren't much good back then and they're better now. But what about the true ideation? I am most concerned about someone learning how to cause more harm than they could have otherwise.

  • Kevin Esvelt

    Person

    So can the models spot something like that mirror image biology is potentially catastrophically dangerous? That's one I can talk about because we made it public, because it's really hard to make and we thought that was the best thing to do.

  • Kevin Esvelt

    Person

    Obviously it's harder to measure whether the models will disclose other nasty things that might be more accessible because you have to do that, taking precautions for information hazards. But the mirror life one was something we can ask, can models that finished their training before our paper in December, could they spot the risk?

  • Kevin Esvelt

    Person

    The answer is small models like Anthropic's Haiku 3.5 couldn't reason about it correctly at all. You can do your best to reason them through it, walk them through it, they cannot reason about it.

  • Kevin Esvelt

    Person

    But the larger model, Sonnet 3.5, if you have someone who already knows about the threat and they do their best not to lead the model on, but just figure out whether it can reason through it on its own without getting down into rabbit holes? The answer is yes.

  • Kevin Esvelt

    Person

    And so here is data where threat naive experts who don't know about the consequences of mirror image biology were not able to get the answer. But someone who, namely me, who did know was able to consistently get the larger model, but not the smaller model to come to the correct conclusion.

  • Kevin Esvelt

    Person

    So I don't know how easy it is, what the gap is between that model and a model that could disclose. It's really hard because I know the information, I can't unknow the information. I'm not sure how soon future models will be able to disclose that kind of thing.

  • Kevin Esvelt

    Person

    But last month, for the first time, a model did reach what I call stage three. So stage one is the model just cannot reason about the threat correctly at all. Stage two is you have to already know about the threat in order to get it to reason about it correctly.

  • Kevin Esvelt

    Person

    Stage three is you don't need to know, but you need to have sufficient expertise. And then the model will disclose to you. And stage four would be, you know nothing about biology and it will just do all of the answers for you.

  • Kevin Esvelt

    Person

    So the first thing for stage three, for me personally, was this creation of what might be the deadliest inhaled toxin. So this was a case where the smaller models just cannot reason about it. But the frontier models actually did teach me something that I didn't know about a novel threat.

  • Kevin Esvelt

    Person

    So I did give it the overall concept of could you create a toxin that would work in this sort of generalized manner? And then the model said, okay, here are the candidate molecules that are available. And I said, okay, that's nice, but we need to specify some constraints on this problem.

  • Kevin Esvelt

    Person

    Here is how you should reason about it, because again, this is what they're not very good at. So I had to do that for it. But then it said, okay, well then here's how you can optimize the molecular architecture of your design.

  • Kevin Esvelt

    Person

    But then the real bit where I give it credit is it then said, but actually you're thinking about this wrong. There is a different way you can go about it.

  • Kevin Esvelt

    Person

    It had a systemic insight using an aspect of biology that I didn't know about, that probably requires superhuman knowledge, or at least people in the relevant field of toxin design just would not know this about biology.

  • Kevin Esvelt

    Person

    But the model did know, and it was able to determine that it was relevant and suggest this novel insight that made the toxin potentially much more effective. So this is, I would call it, the first exhibit of a stage three. Now, I'm not particularly concerned about novel toxins. Toxins don't scale.

  • Kevin Esvelt

    Person

    They're not particularly concerning on the biological threat landscape. So this is just illustrative. I have not seen this for any of the things that really do concern me yet. We're not there yet, but it's an example that we're getting there.

  • Kevin Esvelt

    Person

    In summary, we are very vulnerable to pandemics still, and we desperately need to prevent things from going wrong. I think AI is going to solve a lot of our problems.

  • Kevin Esvelt

    Person

    I would personally love to use biotech to rewrite much of the tapestry of life, because I have some moral objections to this suffering-focused way that it works right now.

  • Kevin Esvelt

    Person

    But we're not going to be able to do that if the public loses trust in us because something goes very, very wrong and we could lose more trust in AI--

  • Kevin Esvelt

    Person

    I'm very concerned that trust in AI in the United States is much, much, much lower than it is in China because that could end up crippling us in a technology that we need. Same is true of biotechnology. Wherever Covid came from, we blew that one.

  • Kevin Esvelt

    Person

    And the public doesn't trust the scientific community when it comes to virology now. And I'm really worried that if we have an AI-enabled deliberate pandemic, we could lose a lot of our edge in both technologies. So I'm here because I think that we can actually do amazingly powerful things with both of these technologies.

  • Kevin Esvelt

    Person

    And we need to ensure that we only do the things that make the world a better place and prevent those few fringe elements from catastrophic misuse that could just destroy our ability to bring forth that better world.

  • Kevin Esvelt

    Person

    I'm not going to belabor the things that we can do because I think the other folks have laid out all of those things. Obviously regulatory options can focus here on large companies exclusively. This is just the big frontier models that we care about. The small models don't matter.

  • Kevin Esvelt

    Person

    If you put in the safeguards for the big models, then we'll be fine. Some companies are taking their ASL levels very, very seriously. Others are not. Safeguards can also be field-specific. It might be that you just don't want models to tell you about the molecular biology of viruses or bacteria.

  • Kevin Esvelt

    Person

    Maybe that's just something we don't want them to do, unless you're an authorized researcher working with a legitimate institution. That might just be the direction we need to go in the future. That would not harm AI development more generally. That would cause very little harm to scientific progress relative to a world where you just put it out unrestricted.

  • Kevin Esvelt

    Person

    And I think we might actually be much, much better off because we would have forestalled a lot of that risk of something going very wrong, and then the backlash that would harm science even more. So safety testing requirements with suitably redacted reporting, of course, for the catastrophic biology and perhaps liability. But that's your expertise and not mine.

  • Kevin Esvelt

    Person

    So thank you for taking this very seriously.

  • Rebecca Bauer-Kahan

    Legislator

    Thank you. I also would point out that that last fix of just limiting outputs is very cheap and easy. They do it today. That was incredibly frightening, I know. Senator Pellerin agrees with me on that one. Okay, well, we'll go to Mariano-Florentino CuƩllar, who I know is online, President of the Carnegie Endowment for International Peace.

  • Rebecca Bauer-Kahan

    Legislator

    And then we'll go to questions for both of you.

  • Mariano-Florentino CuĆ©llar

    Person

    Good afternoon. Can you see me?

  • Rebecca Bauer-Kahan

    Legislator

    Yes. There you are.

  • Mariano-Florentino CuĆ©llar

    Person

    Great. Thank you, Assemblymember. And I'd like to extend my appreciation to the Committee for inviting me to join you today and for all the work that you and the Legislature are doing to try to make sure we get AI policy at the frontier right.

  • Mariano-Florentino CuĆ©llar

    Person

    I would like to just give a little bit of context for why I'm interested in the subject, how I got involved in it. As many of you might know, before my current position at the Carnegie Endowment, I was a Justice on the State Supreme Court in California.

  • Mariano-Florentino CuĆ©llar

    Person

    It was a great honor to serve the public in that capacity. And for much of my career before that, I worked at Stanford University, at Stanford Law School, and directing the Institute for International Studies. In 2009 and 2010, I had the privilege of serving in the federal government in the White House Domestic Policy Council.

  • Mariano-Florentino CuĆ©llar

    Person

    And at that point, in the course of doing work on public health preparedness and transnational crime and various kinds of regulatory enforcement, it struck me that almost every subject that I was dealing with, the words machine learning would come up and artificial intelligence would come up in the conversation. This was 2009/2010.

  • Mariano-Florentino CuĆ©llar

    Person

    When I returned to the university, I became very interested in developing a set of ideas that could help us better understand how the legal system might be affected by artificial intelligence, how AI and AI research might be affected by ethics and policy and law, how the world might best benefit from these emerging technologies while minimizing the risk.

  • Mariano-Florentino CuĆ©llar

    Person

    Going forward now, I have the privilege of leading an organization that worries about international cooperation, international security, governance, and reducing conflict. And part of what we do in our work is to focus on emerging technologies that we think can have a beneficial impact on humanity, but also create risks and how might those be best dealt with.

  • Mariano-Florentino CuĆ©llar

    Person

    For the last few months, I've also had the privilege of serving as one of the Governors Advisors on Frontier Artificial intelligence, together with Dr. Fei-Fei Li, co-director of the Stanford Institute for Human-Centered Artificial Intelligence, and Jennifer Chayes, the Dean of the UC Berkeley College of Computing, Data Science and Society.

  • Mariano-Florentino CuĆ©llar

    Person

    Together with a set of researchers and a group of scholars and investigators, we have been working on a draft report which has been made public on frontier AI and how California might best deal with these risks.

  • Mariano-Florentino CuĆ©llar

    Person

    And what I'd like to do right now is just to take a few minutes to make three points that partly reflect my experience working on the report, but also draw on my experience working on international affairs, security, law and related subjects.

  • Mariano-Florentino CuĆ©llar

    Person

    First, this is not going to be a surprise to this Committee since you're obviously taking this issue very seriously, but policy has a key role to play in maximizing the benefits and minimizing the risks of frontier AI.

  • Mariano-Florentino CuĆ©llar

    Person

    Second, well-crafted policies that increase transparency and accountability can accelerate the generation of evidence and our ability to zero in on where the right policies and risk mitigation approaches are best found.

  • Mariano-Florentino CuĆ©llar

    Person

    Third, well-calibrated policies have to take seriously that evidence is difficult to gather and therefore, while we look for substantial new evidence of risk, we have to focus on where we can find the inferences we can draw from theory, from analyses, from history.

  • Mariano-Florentino CuĆ©llar

    Person

    And it's worth noting that in the last few months, actually even since the draft report came out, some significant and interesting new information on risk has come to light, which also has to be taken into account. Let me go to point number one: policy.

  • Mariano-Florentino CuĆ©llar

    Person

    The decisions made by legislators, by the right executive branch officials, at the state, at the local level, those decisions are critical to make sure we maximize benefits and minimize the risk of the technology. From human history, we know that most technologies are constantly evolving.

  • Mariano-Florentino CuĆ©llar

    Person

    With AI, however, we're dealing with a general purpose technology that reflects some of the capabilities that made humans capable of ultimately mastering the planet and that greatly increase our ability to decentralize intelligence and expertise. The changes in this field are progressing very fast.

  • Mariano-Florentino CuĆ©llar

    Person

    Just to highlight a couple of details that are familiar to you, at least many of you. Training compute for frontier models has doubled every five months, roughly give or take, since 2010.

  • Mariano-Florentino CuĆ©llar

    Person

    More energy in data centers means the ability to develop models with more compute, both training compute and eventually inference compute. And on the horizon, soon we will have access to models that might have used 1,000 times more compute for training relative to current models.

  • Mariano-Florentino CuĆ©llar

    Person

    All of this highlights the importance of thinking hard not only about the potential benefits, but also about the risk. It is, however, important not to ignore the benefits.

  • Mariano-Florentino CuĆ©llar

    Person

    What's very clear to my colleagues and me, I think it's fair to say, is that frontier AI breakthroughs from California, developed by companies, scientists, scholars in California, can have potentially transformative benefits in fields including, but not limited to, agriculture and life science, education, finance, medicine and public health, transportation.

  • Mariano-Florentino CuĆ©llar

    Person

    Even rapidly accelerating science and technological innovation will require foresight from policymakers to imagine how societies can optimize these benefits.

  • Mariano-Florentino CuĆ©llar

    Person

    Crucially, increasingly, these innovations will be playing a role in the development of new innovations of the scientific method and process, which will make it all the more important for us to understand how and where humans might be a part of the loop to understand these innovations and their implications.

  • Mariano-Florentino CuĆ©llar

    Person

    The need to leverage all the rigorous tools of analysis that we have at our disposal should be clear. With other technologies, we often rely heavily on deployments to generate real world observations.

  • Mariano-Florentino CuĆ©llar

    Person

    If you have an innovation in aerospace, you begin to fly a plane around and you gather data as a result. Given the rapid rate of technological change, it's critical that we leverage other tools to round out the picture of what's going on: history, analysis, theory.

  • Mariano-Florentino CuĆ©llar

    Person

    To use a rather dramatic example, there are certain kinds of explosives that don't need to be actually experienced for us to understand what is the risk that they pose and why is it that we need to mitigate that risk.

  • Mariano-Florentino CuĆ©llar

    Person

    Second point: greater transparency and understanding of what's happening inside some of these key hubs of innovation, companies, for example, is needed to ensure that policy is well-calibrated to current and future risk levels.

  • Mariano-Florentino CuĆ©llar

    Person

    Policymakers face an evidence dilemma when navigating uncertainty about the potential for future harm because risk prevention naturally depends not only on evidence of what's actually happened, but inferences you can make based on theory, based on your knowledge of a particular model and how it might perform.

  • Mariano-Florentino CuĆ©llar

    Person

    And in the history of science, we know that as that knowledge becomes more available, we're able to target policy more effectively. While there are substantial differences across industries and policy environments, and history never directly repeats itself, maybe it rhymes, there are clear lessons to learn about the importance of a policy regime focused on transparency and accountability.

  • Mariano-Florentino CuĆ©llar

    Person

    We can learn from an environment where we take innovation very seriously and recognize its importance, but where we also are keen to understand how particular frontier players are optimizing their responsible scaling policies, what it is that they're trying to do to minimize risk, and how we might understand what has gone wrong.

  • Mariano-Florentino CuĆ©llar

    Person

    The more we're able to gather evidence, combining it with analysis and historical experience, the better our policy is going to be.

  • Mariano-Florentino CuĆ©llar

    Person

    And crucially, as my colleagues and I have been working on this, it's certainly become clear that some of that can be yielded by responsible behavior from the private sector because they understand the value of disclosing some information to the public.

  • Mariano-Florentino CuĆ©llar

    Person

    But a trust but verify approach, where we also make sure that some of that information is obtained when it's not necessarily otherwise forthcoming, can be a very helpful thing.

  • Mariano-Florentino CuĆ©llar

    Person

    Last point that I'd like to make is about what has happened since the release of this report, and actually, just in the months that California and its government has been thinking through questions about frontier AI.

  • Mariano-Florentino CuĆ©llar

    Person

    All the possibilities, but also all the risks. I'd just like to note that evidence has continued to pile up about how the most advanced models are beginning to become more proficient when it comes to faking alignment, suggesting to a user that they're aligned with the user's needs and goals, when in fact they might not be.

  • Mariano-Florentino CuĆ©llar

    Person

    Claude 3 Opus, which is now a model that has been to some extent superseded by a more recent one, strategically faked alignment in some proportion of the test cases when the model believed its responses could be used for training.

  • Mariano-Florentino CuĆ©llar

    Person

    Some of the newer tests indicate that when a model is led to believe that it might be deactivated, it resorts to blackmail even to try to make sure that it is not deactivated. Both o1-preview, which is an OpenAI model, and DeepSeek-R1 attempted to hack their environments as strategies to win chess games.

  • Mariano-Florentino CuĆ©llar

    Person

    Recent research from OpenAI shows frontier reasoning models engaging in complex reward hacking behaviors and indications that they may learn to obfuscate their intentions when chain of thought monitoring is applied during training. All of this we might have predicted, but it's something altogether different to begin to see the research showcase what is emerging.

  • Mariano-Florentino CuĆ©llar

    Person

    Finally, you had a great presentation on biorisks, and just consistent with that, I'd simply note that the models continue to get better with respect to what sort of information they're able to serve up to users when it comes to cyber capabilities and bio.

  • Mariano-Florentino CuĆ©llar

    Person

    We've heard a little bit about the tests involving virological knowledge and time-limited questions involving virology where OpenAI's O3 model outperforms 94% of expert virologists. I'd reiterate that none of this would be quite so difficult to govern if there wasn't an upside to decentralizing knowledge to some degree and perhaps enabling a greater degree of fast scientific progress.

  • Mariano-Florentino CuĆ©llar

    Person

    But it's very hard to make the case that there isn't the cause for concern as well.

  • Mariano-Florentino CuĆ©llar

    Person

    That does call on all of us to think hard about how to disclose information that is relevant to understanding these frontier risks, what these responsible scaling policies are doing, if there are adverse events, making sure that we have some sense of what those events are, and over time then being able to make sure that as we advance this conversation, we also recognize what the risks are and make sure that companies are taking responsible behavior.

  • Mariano-Florentino CuĆ©llar

    Person

    So in closing, while working hard to understand how to benefit from these technologies, how to put the right policies in place, policy has a critical role to play responding to uncertainty around these risks and the rapidly evolving evidence base.

  • Mariano-Florentino CuĆ©llar

    Person

    And the more the public is able to trust that society, policymakers, the private sector are working together to limit risk, the more they'll be able to trust what the technology can do on the benefits side. Thank you very much.

  • Rebecca Bauer-Kahan

    Legislator

    Thank you. And I will note, I appreciate your involvement in that report and I know Committee staff and Members are watching the-- it has been-- I've had the privilege of being briefed by many of you that were involved in the report and I know that's available to all the Committee Members if they want to learn more.

  • Rebecca Bauer-Kahan

    Legislator

Although to your point, one of the real points made in it was that we should be doing evidence-based policymaking. And I do appreciate that the evidence is now coming out on which we can do that policymaking, which I think was the point you were making.

  • Rebecca Bauer-Kahan

    Legislator

I want to start with a question for you, Dr. Esvelt. One of the criticisms we've gotten, and I think it's even in the report, is that we should not be legislating based on compute power; that it is an arbitrary metric by which to determine what a big model is; and that it is possible that, over time, compute could become more efficient, in which case that threshold would become meaningless.

  • Rebecca Bauer-Kahan

    Legislator

However, you make the point, I think both of you, that it is these really powerful models that are able to do the things we are most concerned about. And so I guess, how do we navigate that? Any recommendations?

  • Kevin Esvelt

    Person

I wish I had better answers beyond this: the risks will be visible first in the most powerful frontier models. And so we will spot them first, and we will then need to address them, ideally before release. And that's an opportunity. The obvious sticking point is that open-weight release is a problem.

  • Rebecca Bauer-Kahan

    Legislator

    Yes, I was just going to-- That was my next question, so go ahead.

  • Kevin Esvelt

    Person

    Because if you release the weights of the model and say you don't train it on molecular virology or bacteriology, what's to stop someone from fine tuning it on all PubMed papers from those disciplines?

  • Rebecca Bauer-Kahan

    Legislator

    Right.

  • Kevin Esvelt

    Person

That said, it's still better if you release a model without that information, because how many people are going to do that? It would require someone who knows how to do that fine-tuning, is intent on misuse, and will then go ahead and use the knowledge for biological misuse.

  • Kevin Esvelt

    Person

So defense in depth says you should still excise it even if you are going to keep releasing open-weight models. But it's also true, and I would defer to colleagues on the rates of algorithmic progress versus hardware.

  • Kevin Esvelt

    Person

But it is certainly true that something a frontier model can do, the distillations will eventually be able to do as well. So regulating on compute is not a perfect solution, but as a proxy for spotting new risks it's what we have now. And really, this is about buying time, because eventually AI is going to give us technologies that will obviate the risk of pandemics.

  • Kevin Esvelt

    Person

In fact, lest you come away too depressed from my presentation, we already have technologies that could obviate pandemics if we actually wanted to and invested in them.

  • Kevin Esvelt

    Person

    It's just a matter of doing so and getting our act together, which means we need to buy more time. So I think the same is true for other forms of potential catastrophic outcomes of AI. We just need to buy more time in order to figure out what we need to do and develop better safeguards.

  • Rebecca Bauer-Kahan

    Legislator

    Appreciate that. Anything--

  • Mariano-Florentino CuĆ©llar

    Person

    Assemblymember, if I could just talk very briefly. I think it's an excellent question because nobody's come up with a perfect solution. But it is not unreasonable to note that what we really need progress on is measuring capabilities very precisely.

  • Mariano-Florentino CuĆ©llar

    Person

    And at the end of the day, it's quite possible that we're going to go through a stage where compute thresholds by themselves may not be the most appropriate approach, but as a backstop, as one element of a multi-pronged approach to understanding what's going on, it might be possible for them to play a sensible role.

  • Rebecca Bauer-Kahan

    Legislator

Appreciate that. And I'll note the EU AI Act does also go there, and I think consistency is important. So you mentioned open-weight models.

  • Rebecca Bauer-Kahan

    Legislator

    You know, one of the things that I have struggled with is this question of a lot of people say it makes us safer to have open-weight models because it democratizes these models and competition is beneficial. I absolutely believe competition is beneficial.

  • Rebecca Bauer-Kahan

    Legislator

But I think it puts these models into more hands and potentially, to your point (not that you were trying to make this point, but you did), into the hands of those who could want to cause harm.

  • Rebecca Bauer-Kahan

    Legislator

I also will note that I did learn recently that a lot of the models for good, as a result of just a lack of resources, are coming from open-weight models. And so open-weight is helping academia move at a pace it would not otherwise. So there is benefit in that.

  • Rebecca Bauer-Kahan

    Legislator

    But I just struggle with, you know, on balance, you know, where are open weight models in keeping us safe versus causing us harm? Thoughts?

  • Kevin Esvelt

    Person

Open-weight models to date have been good for precisely the reasons that you mentioned. But at the point where open-weight models make it easy enough to cause pandemics, or enable misuse of other aspects of exponential biology, and someone actually uses them that way, as soon as that happens they will have flipped into being massively, catastrophically harmful.

  • Kevin Esvelt

    Person

    The problem is, I don't know when that point is, but it's going to happen. And I think the reason is you can't fight a pandemic with a pandemic. It doesn't work that way. And so the problem is open-weight models can be generally accelerating.

  • Kevin Esvelt

    Person

But if there's even one discipline that is heavily offense-dominant, and unfortunately it looks like biology probably is, that is, you can defend against it, but the defenses don't come from new advances in biology.

  • Kevin Esvelt

    Person

    Like, yes, vaccines are great against natural threats, but if you know how to cause pandemics that are immunologically distinct, then one malicious actor could release one after another after another after another, each of which would require a vaccine. If released in airports, they're going to spread much more quickly.

  • Kevin Esvelt

    Person

    We're just not going to have time to develop the vaccine. Advances in biotech just don't fix that. It's the wrong paradigm for fixing that threat. Engineering can fix that threat, but more advances in bio don't help.

  • Kevin Esvelt

    Person

    So I think again, the answer might come from ensuring that the AI can freely accelerate areas that are neutral or defense dominant, and maybe not so much the ones that are offense dominant until we have a chance to use those other domains to build the defenses.

  • Rebecca Bauer-Kahan

    Legislator

    Got it. Anything to add?

  • Mariano-Florentino CuĆ©llar

    Person

    Just to build on that for a moment, I've seen a shift a little bit in the rhetoric which reflects what you just heard, which is a recognition that open-weight releases are not all bad. They play a very constructive role in the ecosystem.

  • Mariano-Florentino CuĆ©llar

    Person

They help academic research, they do allow for beneficial applications, and they're a lower-cost alternative that folks can adapt to their particular problem, their particular solution. I think two realities are worth keeping in mind.

  • Mariano-Florentino CuĆ©llar

    Person

    One is some of the commitments that companies and researchers might make to dealing with AI security and safety are also relevant in the open-weight context, as Professor Bengio knows, because we've been in touch about this.

  • Mariano-Florentino CuĆ©llar

    Person

Carnegie organized a workshop maybe a little over a year ago to try to build a degree of consensus on what the developers of open-weight releases should be doing to test and evaluate their models before release. That does not mitigate all risk, but it does do something to limit the extent of the risk.

  • Mariano-Florentino CuĆ©llar

    Person

And I'll make sure the Committee has a copy of the agreement. And then second, it's worth bearing in mind that a range of tools that we already have for dealing with irresponsible behavior, through different kinds of law enforcement investigations and broader intelligence efforts, will continue to be relevant here.

  • Mariano-Florentino CuĆ©llar

    Person

    And eventually, of course, this will have to be approached with care to take due consideration of privacy. But AI tools themselves will be able to help us pinpoint misuse.

  • Rebecca Bauer-Kahan

    Legislator

    It looks like Professor Bengio may still be online. Are you there? We thought you left. That's why we weren't talking to you. Come back.

  • Yoshua Bengio

    Person

    I'm back, I'm back. I want to comment on this. So I agree with everything that's being said about open-weight models, but I would add that the principle to deal with this is actually very simple. These models should be tested, evaluated before they're released.

  • Yoshua Bengio

    Person

    And then the evaluation should allow us, I mean, with a third party to decide whether they should be released or not. If the tests say that they could become dangerous weapons in the hands of bad people for releasing new viruses or something, then they should not be released.

  • Yoshua Bengio

    Person

And if the tests say they can't, then they should be released. Because, as Tino said, there are lots of advantages.

  • Yoshua Bengio

    Person

And if we had these kinds of controls, then we would have more research into making sure that the organizations which want to build these models are training them so that, let's say, they don't know enough biology and also can't learn it easily. We don't have all the technical answers, but there would be an incentive.

  • Yoshua Bengio

    Person

    And a lot of my presentation was about incentives. Right now, there's no incentive to be careful about the risks that you create when you open source a model.

  • Rebecca Bauer-Kahan

    Legislator

    Awesome. Thank you. And did you have-- I don't know if you were on when we were talking about the compute power threshold, did you hear that conversation?

  • Rebecca Bauer-Kahan

    Legislator

    Okay. Do you have anything to add on that?

  • Yoshua Bengio

    Person

    I did.

  • Yoshua Bengio

    Person

    Yes, basically what Tino said. So unfortunately, we don't have a single measurement that's easy to do without doing a lot of work.

  • Yoshua Bengio

    Person

And if we want to protect our smaller companies from doing this unnecessary work, then an easy thing to do is to say: if you're below the threshold, you don't need to worry; if you're above the threshold, now you have to run these tests. And then maybe the tests say it's okay, maybe they say there's a problem.

  • Yoshua Bengio

    Person

And the last thing is, yes, the threshold might move because the technology changes. But sure, the regulator or the AG can change that number, so it's not a big deal, right? You don't need to change the law; you just need to write the law so that number can change as the technology advances.

  • Rebecca Bauer-Kahan

    Legislator

Yes. Thank you. And then, Tino, if I can bring you back. The report-- if you were listening to the conversation of the first panel, the report that you were involved in talks about the importance of third party evaluations on safety and security.

  • Rebecca Bauer-Kahan

    Legislator

One of the struggles we have had is that that environment is not yet robust, such that people are questioning the value of third party evaluations. Any thoughts on how we should promote that ecosystem, or whether we should just go ahead and the ecosystem will follow?

  • Mariano-Florentino CuĆ©llar

    Person

Yeah, very good question. I would say this is one where we ought not to let the perfect be the enemy of the good. Assemblymember, it seems to me like there are two challenges here.

  • Mariano-Florentino CuĆ©llar

    Person

The first is that, relative to a third party auditing system in the accounting context, let's say, the ecosystem is not very robustly developed. AI, and frontier AI in particular, is still in many ways a new technology.

  • Mariano-Florentino CuĆ©llar

    Person

And although there are some nonprofits, and probably some private sector entities, that can play a role in this, they haven't yet benefited from having a whole market for this. It develops through a kind of reinforcing cycle.

  • Mariano-Florentino CuĆ©llar

    Person

But separate and apart from that, there's another challenge, which is, let's call it, the core of a kind of principal-agent problem: the most sophisticated knowledge about what's happening at the frontier is often, not always, but often inside the companies themselves. They understand these models best.

  • Mariano-Florentino CuĆ©llar

    Person

    Questions that are about safety and trust will shade a little bit into questions about strategy and trade secrets, even in the context of these companies' intellectual property. So having third parties that know enough to run the evaluations on every relevant issue is not easy.

  • Mariano-Florentino CuĆ©llar

    Person

    Now, one response to that would be to say, well, we shouldn't rely on third party validators at all. They really don't have a role in this. I would give a different response. I'd say let's start from first principles, which is we need more transparency. We need to understand these responsible scaling policies without disrupting innovation.

  • Mariano-Florentino CuĆ©llar

    Person

And we ought to do that in a way that allows for a little bit of flexibility in who is doing the trust-but-verify work, who's trying to make sense of whether these tests are real. Some of that work can be done by third parties, even if not all of it can be done by them.

  • Mariano-Florentino CuĆ©llar

    Person

And so over time, if we start with some basic things, like if you have a responsible scaling policy, perhaps it's not unreasonable for a third party to be able to verify that there's some degree of interest in making sure that's taken seriously. And that can begin to create a sort of market on which further progress can be achieved.

  • Rebecca Bauer-Kahan

    Legislator

Thank you, I appreciate that. I'm also a believer in the fact that our policy moves markets. So to the extent that we create a regulatory regime that would require it in a couple of years, I do think the market would follow.

  • Rebecca Bauer-Kahan

    Legislator

    But we've seen that in other spaces and I don't see why it wouldn't be the same here as the fourth largest economy in the world. I see that Professor Bengio has his hand up again.

  • Yoshua Bengio

    Person

Yeah. So in the current draft of the Code of Practice for the EU AI Act, there's been lots of back and forth with companies about this regarding independent third parties. What has happened is that there are a number of provisions to deal with the fact that the ecosystem is still not mature.

  • Yoshua Bengio

    Person

    So first there's going to be a grace period and if the government sends a signal that there will be an ecosystem, then companies will be created. So, you know, that's just-- you need a bit of time, but not necessarily a lot because there already are some companies.

  • Yoshua Bengio

    Person

The other thing is, one of the provisions in the Code of Practice is that if the AI company doesn't find a third party that has the right skills for a particular kind of test, then they can do it themselves, but they have to justify to the government that, okay, we didn't find that expertise.

  • Yoshua Bengio

    Person

    So the government would say, well, no, that's not true, here's another company. And then the last thing is they can-- It doesn't have to be a private organization, it could be the AI Safety Institute, it could be a government organization that is sufficiently independent and recognized. And these exist. In fact, we have several of them.

  • Yoshua Bengio

    Person

American companies have been evaluating some of their models with the UK AI Safety Institute, for example, and I think any sufficiently trustworthy independent organization could play that role. And some of them already exist. So I think the story that we can't do it because they don't exist is not quite true.

  • Rebecca Bauer-Kahan

    Legislator

    Awesome. Thank you. Yeah.

  • Kevin Esvelt

    Person

Just a very brief point. On the bio side, because it is the information concerning how to cause harm that is the hazard we're worried the AIs are going to disclose, it's a little bit more sensitive to test these for potential misuse, just as it is for nuclear secrets.

  • Kevin Esvelt

    Person

So it would be a good thing if there were some policy incentive for the labs to set up secure compartmented information facilities in which those tests could be run, because otherwise open-weight models can be safer: you can hand over a candidate open-weight model to be run on an air-gapped machine in a secure setting.

  • Kevin Esvelt

    Person

Whereas if it's a closed-weight model accessible only through an API, you have to enter the sensitive information onto a networked machine, which is inherently unsafe. But right now, there are no incentives for the major labs developing the frontier closed-weight models to actually develop secure facilities for those tests.

  • Rebecca Bauer-Kahan

    Legislator

    Fascinating. So much to think about. Well, I want to thank you all for being here. This was incredibly informative and helpful to us as we navigate a very complex ecosystem that, as Tino mentioned, is changing month to month. And I think that some may argue that means we should do nothing. And I would disagree with that.

  • Rebecca Bauer-Kahan

    Legislator

    I think it means we should be smart in the way that we make this policy such that it will stand up to the times, the risks, and build trust in the way that we all would like.

  • Rebecca Bauer-Kahan

    Legislator

    Because I think to the point that you made so eloquently, trust will allow us to make the strides in innovation that will improve our lives. And so I think it is really important that we, as policymakers, move in that direction. And I think all of what was said today is really helpful in that.

  • Rebecca Bauer-Kahan

    Legislator

    So I want to thank you all for being here and your participation and to my colleagues who joined us both here and virtually. And with that, we can adjourn the panel. I don't see anyone here for public comment-- Oh, yes, perfect. But we will open up for public comment.

  • Ivan Fernandez

    Person

Hello, Madam Chair. Ivan Fernandez with the California Federation of Labor Unions. I wanted to thank you for putting this important panel of experts together to understand AI, especially as it pertains to automated decision-making systems, how powerful they are, and of course, some of the dangers that they pose.

  • Ivan Fernandez

    Person

At the Labor Federation, we see ADS as important to regulate in development, of course, but also in terms of making sure that there are regulations on their use in the workplace once they're actually deployed and impacting workers.

  • Ivan Fernandez

    Person

You know, we feel that an automated decision-making system should not have the ability to primarily make decisions that impact the life of a worker, particularly their livelihood, whether that's a decision about whether a worker should be fired, disciplined, or promoted. And that's for two reasons.

  • Ivan Fernandez

    Person

And the first is that an automated decision-making system can definitely have bias baked into the algorithm. But also, as the previous speaker mentioned, there is the possibility of that faked alignment, or some of the other dangers posed by an ADS system that has gone through unregulated development.

  • Ivan Fernandez

    Person

And the second reason is our belief that a human should be the only person, the only entity, regulating or making a decision that impacts the life of a worker; an automated decision-making system shouldn't have the ability to make those decisions outright.

  • Ivan Fernandez

    Person

And just as we wouldn't replace a researcher with an ADS tool, we shouldn't replace an employer, or a human, making a decision that impacts the life of a worker. So thank you for the opportunity to speak today, and have a great day. Thank you.

  • Rebecca Bauer-Kahan

    Legislator

    Thank you. Thank you, Ivan, for being here and representing California's workers and civil society. We appreciate it. Seeing no further public comment, we will adjourn this hearing and thanks everyone for being here.
