Assembly Standing Committee on Privacy and Consumer Protection
- Rebecca Bauer-Kahan
Legislator
Good afternoon. Ooh, that was loud. Good afternoon, and welcome to the Committee on Privacy and Consumer Protection's informational hearing on artificial intelligence. So we will let me start by thanking all of our panelists for attending and participating in our hearing today. We really appreciate the expertise you're going to bring to the conversation the we're going to have about artificial intelligence, which is obviously a hot topic in this Legislature and others.
- Rebecca Bauer-Kahan
Legislator
I want to thank the staff of the Committee for organizing the hearing, as well as the Rules Committee, the sergeants, as always, and the capital support offices who make all of this possible. Before moving on, at the conclusion of each panel, we'll open discussion to Members with questions. So we're going to have the first panel only has one person, but for the subsequent panels, we'll have both individuals testify and then take questions. But we'll also have Member comments after I speak and before our first panel.
- Rebecca Bauer-Kahan
Legislator
And we will be setting aside public comment time at the end for those that want to add in their public comment to the conversation. The purpose of the hearing is to learn about artificial intelligence. We will not be discussing any specific bills. Obviously, there are bills that have already been referred to the Committee and more to come. But today we want to talk about artificial intelligence, what it is, how to define it, and really frame the conversation that is before the Legislature.
- Rebecca Bauer-Kahan
Legislator
The rise of AI is creating exciting opportunities to grow California's economy and improve the lives of Californians. However, this type of rapid technological advancement, like any, comes with risks. AI has the potential to lift up disadvantaged communities by automating painstaking tasks, tasks that are often dangerous, and democratizing access to information. Right now, AI is driving cars in San Francisco, fighting fires in the Central Valley, and quickly changing Hollywood. 30 years ago, the invention of the Internet brought vast economic growth to California.
- Rebecca Bauer-Kahan
Legislator
AI is poised to do the same. But we have to learn from the advent of the Internet and what went wrong with that. Carelessly adopting AI could erode California's privacy and worsen existing inequities. AI can be used to replicate the work of California's talented working artists without permission or compensation. It can be used to create non consensual, deepfake pornography, including child pornography.
- Rebecca Bauer-Kahan
Legislator
Just today, the LA Times covered a situation at a Beverly Hills high school where naked pictures were created of students without their consent or knowledge. It can also be used to spread targeted political disinformation. Automated decision tools can perpetuate historical biases if their training data is not carefully curated. At the same time, we know that automated decision tools can be used to take bias out of decision makings, so the work ahead of us is critical.
- Rebecca Bauer-Kahan
Legislator
The Privacy and Consumer Protection Committee is going to see a lot of AI legislation that passes through the Assembly this year. This is by design. When I introduced AB 331 last year, it was only one of a handful of bills to directly tackle AI, and only one was signed into law last year. This year, more than 30 AI bills have been introduced in the Assembly alone. And this Committee is committed to crafting bills into a single regulatory framework.
- Rebecca Bauer-Kahan
Legislator
And that is the goal we have before us. Taking a consolidated approach to AI legislation will benefit everyone. The tech industry wants regulatory clarity as they innovate and expand. Californians want to know that their interactions with AI are safe and that their personal information will be protected. The framework that we create this year must be flexible and forward looking. As the technology continues to evolve, so will we. AI is not magic. It's just math. It can be understood, and it can be regulated.
- Rebecca Bauer-Kahan
Legislator
And that's why we're having the conversation today to begin the discussion about what AI actually is, to have a shared understanding of the problem before us and how we should begin to tackle it. Our second panel will explore various ways to define AI in California code. For those of us that have sat on this Committee for many years, we know the definition of AI has been a challenge, and it's one we're ready to tackle.
- Rebecca Bauer-Kahan
Legislator
We haven't held an informational hearing in this Committee on artificial intelligence since 2018, and a lot has changed since that time. The goal of this hearing is not to solve every AI problem. We're just here to start the conversation and to make sure the Members of this Committee have the tools they need this year to help have this conversation. We also invited chairs that have jurisdiction where we think AI will be a significant part of the conversation in their committees.
- Rebecca Bauer-Kahan
Legislator
And I want to thank those that are planning to come, including Chairwoman Ortega, Chair of Education, Muratsuchi. I know that the chair of arts and entertainment, Mr. Gibson, could not be with us, nor could Ms. Pelerin, who couldn't be here today, as well as Mr. McCarty, who has his own hearing that I believe may still be going on. But we wanted to make sure they were all a part of this conversation, because we know the overlap exists in many jurisdictions across the Legislature.
- Rebecca Bauer-Kahan
Legislator
So with the help of those who are joining us today, I really, truly believe that we can agree on solutions to AI's challenges, move California forward, to both innovate and protect Californians and community. We can support the emerging industry while keeping Californians safe. So with that, I look forward to this conversation, the work California has ahead of us as we set an example for rest of the country on how California can lead the way in setting AI policy and the conversation. And with that, I want to see if any other Members want to give opening remarks. Ms. Irwin.
- Jacqui Irwin
Legislator
It was you after all. I want to thank the chair for convening this informational hearing on artificial intelligence. And I also want to thank the Committee staff for pulling together this very impressive list of panelists, as well as for the thorough backgrounder. As co chair of the NCSL Task Force on Artificial Intelligence, I've been asking questions and developing an understanding over the past two years on the risks and benefits posed by AI and seeking ways to ensure California can remain a responsible leader in the industry.
- Jacqui Irwin
Legislator
One of our panelists today, Professor Friedler, actually spoke to the NCSL task force last year in Rhode Island, and I'm really excited to hear from her as well. There's been robust interest across the country on legislating in this space, and this demands a great deal of forethought to engage in the complexities of the technology. I applaud the chair for engaging with our fellow state lawmakers in our sister states on the issue and spending the time to deeply understand the issues of AI.
- Jacqui Irwin
Legislator
I'm hopeful that our colleagues here today will ask the panelists questions that will expand their understanding as we embark on a year filled with bills attempting to address AI from many different angles. I unfortunately, will have to excuse myself later on during this hearing to travel to another meeting, but I will certainly be listening to the panelists on live stream. Thank you very much.
- Rebecca Bauer-Kahan
Legislator
Thank you, Mrs. Irwin. Okay, then we will move to the first panel. Professor Nonnecke, if you could join us up here. The founding Director of the CITRIS Policy Lab will be doing an introduction to artificial intelligence to set the stage for our conversation. Yes, any seat? And I think your mic should be live. No. Is there a button?
- Brandie Nonnecke
Person
There is now. I'm live. Great. Thank you so much for having me. I am Brandie Nonnecke. I am the Director of the CITRIS Policy Lab. I'm also an associate research Professor at the Goldman School of Public Policy and also a Director of the Berkeley Center for Law and Technology. In the opening remarks, there were a lot of things said that I agree with and some things that I don't necessarily agree with.
- Brandie Nonnecke
Person
Of course, the advancements in artificial intelligence over the past few years have been profound, and I don't want to undermine those significant advancements that have happened, but if we hyper focus, especially on generative AI systems, these essentially marvels of machine learning right now, it distracts us from actually focusing on all forms of artificial intelligence, many of which have been around for decades, surreptitiously making decisions that affect everyone in this room and their ability to exercise their rights.
- Brandie Nonnecke
Person
So through this presentation, I hope to clarify what is machine learning? What are the types of machine learning in use today, and where do we need to really focus our efforts? Next slide, please. Next.
- Brandie Nonnecke
Person
You can go next. This is my the marvels of machine learning. So one thing I also do, I'm the host of a television show and podcast series called Tech Hype, where I sit down with experts. We debunk misunderstandings around emerging technologies to debate the real benefits and risks that enables us to better understand what are those technical or policy interventions that we need today. So I encourage everybody to check it out. Next slide.
- Brandie Nonnecke
Person
So what is AI? We talk about it all the time, and oftentimes we're talking past each other. There are many definitions of AI, whether they're in laws or regulations or as defined by the field. So let's dig into that and see what is actually AI. Next slide.
- Brandie Nonnecke
Person
Well, it's something that you experience every day. Just a simple recommender system is something that is an algorithm, that is a form of AI. Today, you probably search through X or Instagram or somewhere else. That's a recommender system. It's going to use data that it's collected on you to feed up content that it thinks will keep you engaged for as long as possible on the platform.
- Brandie Nonnecke
Person
There are obviously some benefits to this, of tailoring content to you and what you want to see, but also some risks, right? Like we've seen with the use of social media platforms in hyperpolarizing political groups before elections. Next slide.
- Brandie Nonnecke
Person
At the Citrus policy lab, we maintain a database of all AI related legislation at the federal level and in the State of California. We carefully document what stage that legislation is, know if it's actually gone to Committee and moved through Committee, whether or not it had bipartisan support, what area it's focusing on. Is it focusing on workforce effects of AI?
- Brandie Nonnecke
Person
Is it focusing on supporting education opportunities to ensure that the State of California or the United States has the workforce capabilities to meet the increasing demand in the AI space? Next slide.
- Brandie Nonnecke
Person
So it's, of course, I think, very important to talk about how we have been defining AI in legislation. You can see a couple of examples here from the National AI Initiative act of 2020, which is our most comprehensive piece of AI legislation at the federal level established, of course, the national AI initiative office. So here it's even a machine based system that can, for a given set of human defined objectives. That's really important.
- Brandie Nonnecke
Person
I'm going to explain why it's important to think about whether or not it's human defined objectives or defined by something else. Now, NIST also developed the AI risk management framework, and in their definition, they say an engineered or machine based system that can, for a given set of objectives. Look, they dropped off human, okay? So it's expanded. It might not be human defined objectives. Next slide.
- Brandie Nonnecke
Person
And in the EU act, and I know my EU Commission friends are behind me, so I hope I get this right. Now, they say an AI system means a system that is designed to operate with elements of autonomy and that based on machine and or human provided data and inputs. And that's incredibly important because that's where the field is moving beyond just human defined objectives to machine defined. And I'll give some examples. Next slide.
- Brandie Nonnecke
Person
We need to talk about how AI is actually defined by the field of computer science. So AI is a sub component of the field of computer science. And then what we have mostly right now in operation is machine learning. And there are essentially three main categories of machine learning. We have supervised versus unsupervised machine learning. We have deep learning, and we have reinforcement learning. And those three things don't have to operate in isolation from each other.
- Brandie Nonnecke
Person
You can have deep learning that has reinforcement learning on top of it. So, for example, Chat GPT is deep learning with reinforcement learning via human feedback. And if all of this sounds like what all of these terms actually mean, I'm going to get really concrete, simple examples. Next slide.
- Brandie Nonnecke
Person
All right, let's look at machine learning. So you've probably seen these terms either in media talking about the rise of machine learning in AI, or you've seen these terms in draft legislation, draft bills. We have supervised machine learning, unsupervised machine learning, reinforcement learning, deep learning, generative AI foundation models, General purpose AI. It's incredibly overwhelming, right?
- Brandie Nonnecke
Person
If we're thinking about how do we regulate this technology, and there are all of these different types, how do we appropriately write legislation that can actually address the potential benefits and risks of these different types? So let's break them down. Next slide.
- Brandie Nonnecke
Person
Okay, in machine learning, I talked about the main types we have supervised. Now, that's where you have a labeled data set that is used to train the algorithm. Unsupervised machine learning. The algorithm will analyze and cluster unlabeled data. It's going to find these latent factors. It's going to identify certain trends and cluster groups, and then reinforcement learning are algorithms that learn through trial and error using feedback from its actions. All right, let's dig a little deeper. Next slide.
- Brandie Nonnecke
Person
Okay, and I'm going to make this as simple as possible. So if you have something that is round and it has a stem and it's red, what is it? An apple. Okay, probably. Let's just guess. It's an apple. Next. Yes. Okay, next slide. Okay, now you have something, right? And then click three times. Now, if we have something that's round and it has a stem, but it's not red, according to our algorithm that we've developed, it is not an apple. Click.
- Brandie Nonnecke
Person
Thank you. All right, next slide. Again, something that's round and you can click three times. Again, it's round, it has a stem, and it's red. Well, according to our algorithm, this tomato is an apple. This is a false positive. The other one was called a false negative. All right, next. Now, this is the simplest way that I can explain supervised machine learning. Essentially, you will take your data. You take your data, you label it.
- Brandie Nonnecke
Person
So you would label the green apple as apple, the Green Tomato as a tomato, the Red Tomato as a tomato, and the red apple as apple. You hold out a set of test data. You put those labels into your training model, and you test to see is it accurate. How many times does it have a false positive where it says, a green tomato is a green apple, and you can test the prediction and see how accurate your model is. And I'm presenting them in this way.
- Brandie Nonnecke
Person
The field moved in this way. Essentially, we had supervised machine learning with human labeled data sets and then moved on to next slide. Unsupervised machine learning. That's where you take unlabeled data. You put it into this interpretation. It's going to identify latent factors. So it's going to say, anything that is round, that has speckles on it, and one tiny stem, that's an apple. Anything that is red, shiny, oftentimes has water droplets on it, and five shoots off the top, that's going to be a tomato.
- Brandie Nonnecke
Person
Well, the issue with that is you can actually accidentally get false positives and false negatives. Your air rate can be higher in this. That's one of the drawbacks. But you could use it to process large amounts of data, and you could identify latent factors that you and I don't even think about, that we don't even see, but it sees. Next slide. Okay.
- Brandie Nonnecke
Person
And then reinforcement learning. That's where we are, essentially. Now, this is the advanced side. So you have your unstructured data. You're going to enter that in the model, and you're going to reward or punish it off of trial and error whether or not it gets it right. So if it actually does say, okay, this red round thing with a stem, that's an apple, it gets rewarded. If it labels the tomato as an apple, it gets a punishment. It's based off of reward and punishment.
- Brandie Nonnecke
Person
And you want to do that over and over and over and over and over until your model gets better and better and better at predicting whether or not something is really a tomato or an apple. Okay, so that's pretty much the field of machine learning and where we are now. Okay, next slide.
- Brandie Nonnecke
Person
Now, some of the challenges, and I hinted at these in my remarks already, in supervised machine learning, you have to have humans labeling data. Well, humans are fallible, right? We make mistakes. We have implicit biases. We might not label data in a correct way. It's intensive. You have to have people doing it, which means it's limited. You couldn't apply it to lots and lots and lots of big, big data next to unsupervised machine learning.
- Brandie Nonnecke
Person
As I said before, some of the errors can happen a lot more because it's looking at these latent factors, or it might identify something that you don't understand why it's using that as essentially a proxy to say it is an apple, for example, now and then the reinforcement learning, one of those challenges. And this is a very classic example. So, sorry if you've already seen this, but if you press play now, this speedboat, this is using reinforcement learning.
- Brandie Nonnecke
Person
And it said, look, your task is to win the boat race. Well, what do you get when you win the race? You get points, right? Well, it was really smart, and it figured out, look, I don't really care about winning the race because I can just get stuck in this infinite loop and getting these boosts that give me all the points in the world and never win the race. This is where you're prioritizing for the wrong thing, right?
- Brandie Nonnecke
Person
It shouldn't have been prioritizing for the outcome of winning the race because that was just a proxy for points. And this can happen. It's called a faulty reward function, and this can happen in much, much more serious instances. This is just a clever, fun example to show you, but you could have a faulty reward word function on identifying individuals who may have cancer, and you're accidentally loading the decision on something else that's irrelevant. Next slide. All.
- Brandie Nonnecke
Person
Thank you. And then in the field of deep learning, which is really the AI du jour of what we're all talking about in the use of machine learning, where now we're using, it's more complex, you're able to process large amounts of data and identify relationships in data that may not be apparent to and me. Next slide. And it's based off of the way the human brain works.
- Brandie Nonnecke
Person
And as a promotion of Slater, your work, the primer, does a very good job talking about deep learning and what all these neural nets mean and these latent factors and all of the model weights them. Achieving an outcome. This is often when we hear about a black box algorithm, this is what we're talking about, where we might not fully understand why the model is making a certain decision that it's making.
- Brandie Nonnecke
Person
However, I will say that there are statistical methods that we can use to peer into that black box. For example, the use of something called counterfactuals, where you can look and see if we change certain things, does it change the outcome? So I wouldn't buy it if you hear anybody say that there are no ways to look inside these black boxes. Next slide. Challenges, of course, of deep learning. Large amounts of data, powerful computing.
- Brandie Nonnecke
Person
This idea of the lack of transparency, however, I think it can be overcome in many ways. And this idea of the faulty reward functions creating these unintended behaviors. Next slide. Of course, generative AI is the thing that we're all focusing on right now. Governor Newsom issued an Executive order pretty much specifically focused on generative AI.
- Brandie Nonnecke
Person
Yeah, while that's great that it's brought more attention to the use of AI, I caution that we should not just be hyper focused on generative AI systems, but that we use this as an opportunity to also focus on simpler forms of machine learning, what we often consider narrow machine learning. Next slide. And essentially the most disruptive advancement that has happened is that previously, when we developed machine learning models, we developed them for very specific tasks.
- Brandie Nonnecke
Person
Well, now comes along something we call foundation models, where you can build on top of this model various applications. Right. Given the name foundation models, chat, GPT, foundation model, GPT four, you're able to actually build on top of that text generation, image generation, video generation, and you've probably also heard a lot of talk about llms, large language models.
- Brandie Nonnecke
Person
Well, you're going to hear a lot more about lmms, and those are large multimodal models like Chat GPT 4.5, where you can build on top of it, text, image, voice, video. Next. I think we can go over quickly. Next. Okay, I want to raise up that NIST. When they issued the NIST AI risk management framework, which is a voluntary framework at the federal level to oversee AI to mitigate any risks, they did call for people to provide profiles of how do you actually implement this?
- Brandie Nonnecke
Person
And UC Berkeley, our team at CITRIS policy lab, and the Center for long Term Cybersecurity submitted a profile on how do you actually implement the NIST AI risk management framework for generative AI General purpose AI systems. It's no small document, of course, going through that, but it's, I think, a really strong demonstration of how you actually can implement that in practice. Next slide. And that's what it looks like if you want to go online and download it. Next. We'll skip that. Next for time.
- Brandie Nonnecke
Person
I'll actually skip this one, too, because I know that there are questions. Right. I just have one more point that I want to make. Next slide. This one, this, to me, is going to be a really big problem and a challenge for us that we need to address because there are various laws being passed. Of course, the EU passed the EU AI act at the federal level. We do not have a comprehensive law.
- Brandie Nonnecke
Person
We have various laws proposed in the California Legislature as companies are trying to deal with. How do we implement these risk assessments and whose risk assessment and what does due diligence look like? How do we do it appropriately rather than them bringing on those people in house, the talent in house?
- Brandie Nonnecke
Person
We're starting to see this burgeoning sector of third party auditors, certifiers, licensers, who will say, look, you give us access to your model, we will audit it for you to ensure it's compliant with the conformity assessment process in the EUA act or with doing a risk assessment alignment with one of the bills you might propose. Now, I'm concerned, how do we know that those third parties are doing an adequate job? So there must be oversight over them?
- Brandie Nonnecke
Person
Maybe they're going to be licensed themselves to ensure that they are doing a robust enough process, because some companies could essentially pass money over to these and say, look, give me a certification without doing a robust enough job, and it will give a false sense of security and customers, and the California public will definitely be hurt because of that.
- Rebecca Bauer-Kahan
Legislator
Awesome. Thank you so much, Professor, for that incredible overview. Yeah.
- Buffy Wicks
Legislator
Thank you for presenting today. That was really informative. I would just love to get your take. We've attempted to regulate technology in California. We're probably more advanced than most other states and arguably federally, potentially, we have a privacy Committee. We have Committee consultants. We're home of the tech industry. I think there's a lot of institutional knowledge in the building around it. Having said all of that, technology moves at a rapid pace and iterates very, very quickly, and lawmaking is not very, very quickly.
- Buffy Wicks
Legislator
So I think we have that challenge also there's, as mentioned, I think, great opportunities with AI, both economically for the region that I represent in the Bay Area, but also just for society and humanity, also risk. Right. And so given all of that and your knowledge base. What would be your recommendation to lawmakers in California and how we propose a regulatory framework around AI and what are the most important things you think we should focus on?
- Brandie Nonnecke
Person
Yeah, I'm really happy my EU friends and colleagues are going to be presenting today because I do think that they've done a good job by taking a risk based approach and putting in some additional safeguards for certain types of AI. But essentially, it's agnostic to the type of AI. It's whether or not it's being used in a high risk area, and that allows it to be able to be nimble as the technology advances. I would really advise against putting in place any legislation that is specifically focused on one of those types.
- Buffy Wicks
Legislator
More prescriptive, right. Yeah.
- Brandie Nonnecke
Person
And, I mean, we have established laws that give people remedy if they are harmed. Right. Like, if an HR tool is discriminatory, somebody can sue for that, but the harm has already happened. So another thing I like about the EUAI act is that it actually mitigates those risks before the product goes on the market, rather than like in the United States, where we would have standing to sue if harm happened.
- Brandie Nonnecke
Person
So I would recommend this kind of oversight mechanism of doing risk assessments, but making sure that those third party auditors, licensers, and certifiers are legitimate. Otherwise, it's just for not.
- Buffy Wicks
Legislator
Thank you. Yeah. Mr. Lowenthal.
- Josh Lowenthal
Legislator
Thank you so much for that presentation. I'm interested in the concept of how AI can help government in its delivery of services. We're not only reticent to understand technology, we're reticent to using it. And certainly when speaking of taxpayer money and organizing as a whole, the more efficient delivery of services, the way we can measure our effectiveness of delivering services, what's working, what's not is something I'm not hearing a lot of discussion about. What's your take on that?
- Josh Lowenthal
Legislator
How can government get involved in a way that is mindful of all the concerns that AI has and yet still be able to deliver more?
- Brandie Nonnecke
Person
Yeah. Thank you so much for raising this, actually, our team at the CITRIS policy lab in the Center for Long Term Cybersecurity worked with the State of California for a two year period through the California Department of Technology and with procurement to help them develop a responsible strategy so that when a new AI enabled technology were to come about, like Chat GPT, how can the state actually be fully prepared to use that technology in a way that mitigates potential risks or liabilities?
- Brandie Nonnecke
Person
Now, of course, the White House Executive, not the White House, the California Executive order has called for agencies to identify ways that they can implement generative AI. I know I'm on record, and that's okay. I'll say this. I think that that is a little bit short sighted. It's sort of chasing that shiny toy right now. Great. Generative AI systems hold a lot of potential to increase efficiency and effectiveness for the State of California.
- Brandie Nonnecke
Person
But all of those other types of AI that I talked about, you can use those also to help you become more efficient and effective. And also a lot of them are already in use across the state. So I think it's about recognizing where it's already being used across the state and making sure that the solution is fit for purpose to the problem, rather than having a solution where you're trying to find a problem that fits with it.
- Josh Lowenthal
Legislator
From a framework perspective, how is it that we can assess that, especially as legislators, what to utilize, in what scenario? And what should we be measuring? And to the point of my colleague from Oakland, how do we create a framework around that and oversight around that?
- Brandie Nonnecke
Person
Yeah, we've talked about this a lot. And actually, the University of California also set up its own responsible AI principles. And our first one is appropriateness. Is that solution, that technology solution appropriate to the problem? What you can do is go through your procurement when you're evaluating whether or not that third party vendor. Of course, I'm sure the procurement office right now is inundated with vendors saying, look, this generative AI tool will solve all of your problems.
- Brandie Nonnecke
Person
Well, maybe it's not that generative AI tool that we need to solve that problem. So I think it's also about creating more awareness within the state personnel and staff about the different types of these technologies and being able to critically evaluate them.
- Rebecca Bauer-Kahan
Legislator
And I will say that the Committee is working on a joint oversight hearing with budget on the rfps coming out of the California Executive order to have that conversation around both what is it looking at and the privacy protections that are critical. I think as we engage with new large language models and generative AI. Yeah. Mr. Hoover.
- Josh Hoover
Legislator
Thank you so much, and I do appreciate your presentation, and I appreciate that. We are really embarking on something very new here as a society and as a state. And so certainly a lot of issues to work through. I think my main question at this moment is, with the kind of recent problematic, I will say, rollout of Gemini, for example, how are we going to, or how do you feel that we should focus when it comes to accounting for bias? Right.
- Josh Hoover
Legislator
Accounting for bias in AI, obviously bias in these new systems by kind of putting an emphasis on accuracy versus other value judgments or value principles. So I just was curious on your thoughts on that.
- Brandie Nonnecke
Person
Thank you, Josh. I have a lot to say on this, because oftentimes we hear people say that AI creates bias. It does not. AI just exemplifies and replicates and demonstrates the biases that are there in our society. So it actually shines a light on biases that exist. And what's great about that is you can't hide from it. You have evidence, you can see that the bias is there. Now you can address it. We'll also caution that you can never have an AI system that is bias.
- Brandie Nonnecke
Person
Never ever. NIST issued a report where they talk about three types of bias. You have our human implicit bias, which we all have. We have institutional biases. The United States is rife with them. And you also have statistical bias. Well, you cannot simultaneously address all subcategories of bias under those three categories. It's more about which bias are we most concerned about in this use case, in this application area, and how do we address and manage that bias.
- Brandie Nonnecke
Person
So I would say doing these audits of these systems, doing risk assessments, doing continuous monitoring, after you have decided, okay, if we're doing equal employment opportunity, well, we want to make sure that it's not flagging women to be moved out and not considered for the job at a higher rate, disproportionately from men. Right. So that would be the bias that we're concerned about, and let's try to manage it and mitigate it from happening.
- Josh Hoover
Legislator
If I could just do one really quick follow up. What do you feel are the appropriate founding principles for creating AI systems? So, for know Google's principles, their top three, be socially beneficial, avoid creating or reinforcing unfair bias, be built and tested for safety. They have a number of others. Do you feel like that's the right focus? Part of me feels like there may be some other principles that might rise to more importance, but I wanted to just get your thoughts on that foundational.
- Brandie Nonnecke
Person
This is a good quiz. For me. So the common ones are the first. I said appropriateness. Like, is this even appropriate for us to use this technology? Transparency. So how is it making its decision? What data is it being used? Fairness and nondiscrimination. Safety and security, privacy, accountability. So that would ensure that essentially, if an AI system made a decision that disproportionately harmed individuals, there was an accountability mechanism where you could push back. And I'm missing the 7th one, but I did a pretty good job. Six out of seven, I think that those are essentially, there's consensus that those are the ones we should be concerned about.
- Rebecca Bauer-Kahan
Legislator
Risk assessments keep coming up in this conversation. And audits, can you touch a little bit deeper on what those look like and what they are? I think it's become sort of a buzzword, but it would be helpful to dive deeper.
- Brandie Nonnecke
Person
Yeah, definitely. It has become a buzzword and it's also become, to be honest, something that certain entities can hide behind because they could say, look, we did a risk assessment, we identified these risks and we mitigated them. Well, how do we know what risks they didn't look at? So there's a lot of research right now looking at, well, what are appropriate risk assessment strategies based off of the use case? First in what area we're using it, but then also based off of the technology.
- Brandie Nonnecke
Person
Now, NIST has their NIST AI risk management framework. That's voluntary. No company is required to implement the NIST AI risk management framework. However, we are seeing many companies step out and actually start to do it and implement it and then publish that. And as we see those being published, we can see what types of risks, how are they actually operationalizing the spirit of that framework?
- Brandie Nonnecke
Person
Additionally, my colleagues and I, the generative AI profile that we did using the NIST air risk management framework, it walks you through. How would you ask each of these questions, these risk assessments, as you're thinking about this technology? So, yeah, the field is growing right now, just like we did in cybersecurity in that group where we started to grow consensus on, well, what does it mean to do an appropriate audit of our system to ensure it's secure?
- Brandie Nonnecke
Person
The same thing is going to happen in this space. It's too early. We're going to move forward. But I do caution against having industry decide what a risk assessment looks like. There should be transparency in how they're doing that and a way to hold them accountable if we feel that they are not doing a robust enough job of identifying and mitigating risks.
- Rebecca Bauer-Kahan
Legislator
Thank you. So, one thing I think I've been really focused on is, what are the risks we understand today. And how should we be regulating those? And so I think that the foundational question about underlying that is, how is AI appearing in our lives today? I think it appears in a lot of ways that we don't think about. So can you talk a little bit about that? Like, when are Californians interfacing with AI today?
- Brandie Nonnecke
Person
All day. Your smartphone has algorithms in it that are making decisions about the news that's in your feed. You hop onto X if anybody still does, or Instagram, and you go on there and you look what's in that feed, what's being presented to you and why. That is an algorithm, everything city planning, like looking at where you would place bus stops. Right. You're going to use statistical models. That's the biggest thing here, is AI is based on statistics. A lot of it is logistical and linear regression.
- Brandie Nonnecke
Person
And if you remember some of your statistics courses now, the new advancements of generative AI, sure, there are some definitely huge capabilities that have emerged from that. But right now, there are more of those simple forms of machine learning and operation. It's everywhere in everything that we do.
- Rebecca Bauer-Kahan
Legislator
So I appreciated my colleague from Oakland's question around how we should be thinking about this from a framework perspective, sort of take that to the next level. What do you think the most critical AI related issues are that we should be focused on? Right. So that's sort of how do we address it? But what is it we should be focused on?
- Brandie Nonnecke
Person
Honestly, not chasing the hype, because generative AI right now is an incredibly hyped technology. Sure, there are risks, but I promise you, if we only focus on generative AI and we do not build in a robust process for all forms of AI, Californians will be hurt. Debunking the hype.
- Rebecca Bauer-Kahan
Legislator
Okay.
- Brandie Nonnecke
Person
Zero, yeah.
- Buffy Wicks
Legislator
Just to follow up on that. Also, is it your sense that the industry is self regulating effectively right now? It seems to be that they're trying to, or that's what we're hearing, and I just would love to get your take on.
- Brandie Nonnecke
Person
No, no. I would say no. They have incentives to do risk assessments in ways that allow them to continue to operate.
- Rebecca Bauer-Kahan
Legislator
Okay.
- Rebecca Bauer-Kahan
Legislator
Mr. Muratsuchi.
- Brandie Nonnecke
Person
Yeah.
- Al Muratsuchi
Legislator
Thank you very much, Madam Chair, for inviting me to join this very important informational hearing. So, a couple of years ago, I taught a class at UCLA, and 40% of the final grade was based on a final paper. But before that, I was having weekly writing assignments. But the final paper, the quality and the rigor of research reflected in the final paper was so different than any of the writing samples presented by the student before that final paper.
- Al Muratsuchi
Legislator
Me, a couple of years ago, without knowing what Chat GPT, or generative AI is about, I was thinking old fashioned ways, is this plagiarism? Because the writing style didn't look like the previous writing samples? But now, in hindsight, I'm thinking, was that one of the first examples of chat GBT that I came across? And so I'm wondering, are there issues related to generative AI in the education space? Are there concerns about the impact of AI on jobs in our schools, in our universities and community colleges? Can you touch upon these issues related to AI in the area of education?
- Brandie Nonnecke
Person
Yeah. This is something we're grappling with a lot, right. UC Berkeley and across the UC system, we have been grappling with, how do we allow students to be able to use this technology that will prepare them for a future workforce where they will undoubtedly be using these types of tools, but ensuring that they still have the foundation of understanding how to write an essay, how to make a logical argument. Now, we have been doing a lot of work in this space.
- Brandie Nonnecke
Person
We have the UCAI Congress and the UCAI Council. So the UCAI Council has representation from all 10 campuses where we work to operationalize our responsible AI principals, and we allow each campus to operationalize them in a way that works for their campus. Nothing is uniform. Right. We're a family, but like any siblings, we're all a little bit different from each other. We want to stand out, so we give the campuses that flexibility. And that's been underway for about a year.
- Brandie Nonnecke
Person
Tomorrow I'll be flying down to UCLA for the University of California AI Congress, where undoubtedly this issue is going to come up, and we will be discussing what we should be doing.
- Al Muratsuchi
Legislator
Included in those discussions is this Congress or gathering going to be discussing potential impacts of AI on teaching careers or on employment in the education.
- Brandie Nonnecke
Person
I'm unsure of that. I can't go on record saying whether or not we will discuss that.
- Al Muratsuchi
Legislator
All right, thank you.
- Rebecca Bauer-Kahan
Legislator
So one of the things that we have heard is that some advocates want us to just set hard limits on advancement of AI systems, like nothing should get more advanced than what we have today. That seems not feasible to me, but would like your thoughts on that.
- Brandie Nonnecke
Person
I agree with you. That's completely infeasible. I mean, the companies are going to move forward. Now, the one thing that I think is important, even the White House Executive order on AI focused on compute power as a proxy for risk. Well, guess what they'll just do? Compute power under that. Or the field itself is moving toward more efficient models that use less compute power and less data that will have better outcomes, stronger outcomes. So I wouldn't be focused on the capacity, but more on the potential risks in the risk areas where these are being applied.
- Rebecca Bauer-Kahan
Legislator
A lot of the EUAI act, as you said.
- Brandie Nonnecke
Person
Yes.
- Rebecca Bauer-Kahan
Legislator
So one thing that we're grappling with as a Committee, we are focused on privacy as one of our main areas of jurisdiction is the governor's recent Executive order created these pilot programs, as I'm sure you're aware, for agencies to begin experimenting with these generative AI tools that will contain Californian's data, potentially.
- Rebecca Bauer-Kahan
Legislator
So I was wondering if you had thoughts on sort of, as we start to engage with these, especially the generative AI large language models, what should we be thinking about as it relates to the privacy of California's data entering these models?
- Brandie Nonnecke
Person
Yeah, I think using any of the privacy principles, like data minimization, what data do you actually need to be able to achieve the outcome? Do you really need to use personally identifiable information, personal data, in doing so? And then, second, if you're working with a third party like OpenAI, which has formed contracts with state governments like the State of Pennsylvania, making sure that any data that they gain access to is held securely and not fed into their larger model.
- Rebecca Bauer-Kahan
Legislator
Okay, thank you. That's helpful. And then I know that one of the things I just wanted to give you an opportunity to talk about, because I know it's something that you care deeply about and have thought a lot about, is deepfake pornography and its impact on Californians and potentially California children and women. Do you have thoughts on what steps we should be taking in that space that you want to share?
- Brandie Nonnecke
Person
Yes. I mean, in the State of California, we have a law, right, on non consensual deep fake imagery, where if somebody distributes non consensual deepfake pornography, gives that other individual standing to receive compensation from damages. Now, how do we stop it from being able from being created in the first place? I think there's a lot of work about using open versus closed models. So some of the closed models can put in restraints that will mitigate your ability to create that type of content.
- Brandie Nonnecke
Person
I know right now that NTIA has this open comment period about open source models, where we have the known model weights available, which would essentially mean that a third party could remove those model weights and use it to create problematic content like non consensual deepfake pornography or child sexual abuse material? Yeah, it's a difficult area and it's really unfortunate that women, and especially teenage women, are targeted first.
- Rebecca Bauer-Kahan
Legislator
Yeah. And I think we need to stay on top of it because we just keep hearing story after story of girls are being harassed in this way. So I appreciate that.
- Josh Hoover
Legislator
Just follow up on the deep fake issue. I think how feasible is it to. There was this letter that was written recently by a bunch of experts on the deep fake issue and what needs to be done on it, but how feasible is it to actually hold companies accountable for stopping this? Versus is it kind of an area where it's going to be really difficult to actually stop this technology from being utilized?
- Brandie Nonnecke
Person
That's a really good question, but I think it's better for my colleague, Professor Hany Farid, who will come up soon.
- Rebecca Bauer-Kahan
Legislator
Okay, thank you. And I think first of all, I want to say for those that didn't have an opportunity to read the background or before the hearing, it also provides a really excellent overview of AI and the different ways that AI works in our society today. So I do recommend it as a good read for anyone who didn't have an opportunity. But I want to thank you.
- Rebecca Bauer-Kahan
Legislator
And I know as our partner at the UC, you remain available to, I think, anybody here who has questions or wants to learn more. I know we've spent time together, so I really appreciate you being here and providing all of these questions and answers. And with that, I think we'll move on to the second panel.
- Brandie Nonnecke
Person
Thank you so much for having me.
- Rebecca Bauer-Kahan
Legislator
So our second panel will get into the question of defining artificial intelligence in California code today with us we have Ashkan Soltani, the Executive Director of the California Privacy Protection Agency that has been working on a definition. We also have Gerard de Graaf, EU's senior envoy for digital to the US European Union office in San Francisco.
- Rebecca Bauer-Kahan
Legislator
So if you both want to join us, and I just want to say how grateful I am, we have our partners from the EU here to talk about how they've been thinking about this, because the UAI act, I think, is a real opportunity for us to learn.
- Rebecca Bauer-Kahan
Legislator
And one of the things that I sort of briefly touched on in my opening remarks, but I think is critical for California's companies that I think all of which operate in an international way, is for us to work on consistency so that the regulatory regime is feasible for them. And so obviously, working with EU to understand what they've done, I think is a critical piece of that. So with that, do we want to start? Ashkan, would you like to start? There should be an on button. There you go.
- Ashkan Soltani
Person
Great. Thank you. Thank you all. My name is Ashkan Soltani. I'm the Executive Director of the California Privacy Protection Agency. Appreciate chair Bauer-Kahan and Members of the Committee the opportunity to be here to discuss the agency's work with respect to automated decision making and artificial intelligence, and specifically our work to define these terms. The agency's mission is to protect the fundamental privacy rights of Californians with respect to the use of their personal information.
- Ashkan Soltani
Person
Specifically, we are tasked with the implementation and enforcement of the nation's first comprehensive privacy law, the California Consumer Privacy act. As an expert agency and one that is directed by statute to provide technical assistance to the Legislature on privacy issues, we're here to be a resource and look forward to working with you all on these topics. I should note the opinions I share here today are my own and do not necessarily reflect the views of the agency or our five Member board.
- Ashkan Soltani
Person
By way of background, our agency was created after California voters approved Proposition 24, the CPRA, which amends and extends the California Consumer Privacy act of 2018. The CCPA, as amended, gives California consumers fundamental protections regarding the collection and use of their personal information, including the right to access, delete, correct, and stop the sale of their personal information.
- Ashkan Soltani
Person
Personal information is also defined quite broadly to mean information that identifies, relates to, describes, or is reasonably capable of being associated with a particular consumer, although personal information does not include publicly available information. California was the first in the US to pass a consumer privacy law and still the only state to have an independent regulator guaranteed, sorry, still the only state to have these protections guaranteed by our constitution.
- Ashkan Soltani
Person
Our statute specifically specifies that our law can be amended as long as those amendments are in furtherance of the purpose and intent of the act. To strengthen consumer privacy. The CCPA protects California consumers by imposing obligations on certain California businesses that meet, for example, over $25 million in gross annual revenue, sell or share personal information of 100,000 or more California consumers, or derive 50% or more of their revenue from selling or sharing consumers information.
- Ashkan Soltani
Person
Importantly, the agency has three key responsibilities, issuing regulations and furtherance of the CCPA, enforcement of the CCPA through administrative enforcement, and we share that enforcement authority with the Attorney General, who has civil enforcement and promoting public awareness of consumers'rights, and businesses responsibilities under the CCPA, our agency is governed by a five Member board that consists of experts in privacy, technology, and consumer rights who are appointed by the Legislature, the Governor, and the Attorney General. Just background the board was first established in March of 2021.
- Ashkan Soltani
Person
Shortly thereafter, the board appointed me as the Executive Director in August, sorry, in October of 2021, and prior to my role with the agency, I served as the Chief Technologist at the Federal Trade Commission and briefly in the Obama White House Office of Science and Technology Policy, helping to shape its early work on privacy and artificial intelligence. I've also been recognized as a journalist and successfully founded two startups, including one in the data privacy space and one in the recommendation space, the music recommendation space.
- Ashkan Soltani
Person
I began this journey with an undergraduate degree in cognitive science in the late 90s, working on early machine learning models. Then and lastly, prior to my appointment, the board initiated an invitation for public comment, otherwise known as pre rulemaking, in September 2021, on the topics of privacy and automated decision making, well before even the public release of Chat GPT.
- Ashkan Soltani
Person
So with respect to AI, the Vodaback initiative specifically directs our agency to issue regulations that govern access and opt out rights with respect to businesses use of automated decision making technology, including profiling. It also requires businesses to provide meaningful information about the logic involved in that decision making process, as well as the description of the likely outcomes of those processes.
- Ashkan Soltani
Person
With respect to consumers, the CCPA also requires businesses whose processing consumers personal information presents a significant risk to consumers'privacy and security to perform a cybersecurity audit on an annual basis and submit to the California Privacy Protection Agency on a regular basis a risk assessment with respect to their processing personal information, a topic that the previous speaker recommended. I'll note that while there are substantial similarities to the GDPR, our mandate is much broader with respect to automated decision making technology, and we take this responsibility very seriously.
- Ashkan Soltani
Person
Staff recently published our latest refinements to the draft regulatory framework governing cybersecurity risk assessments and automated decision making just last week in advance of a board meeting on March 8th. The draft framework reflects nearly three years of work and informed by nearly 900 pages of stakeholder input with in response to our first request for preliminary comments in 2021 and over 1000 pages of responses to our request for comments in 2023.
- Ashkan Soltani
Person
In addition, we also received substantial comments provided during three day stakeholder workshops in May of 2022. This work was also initially shaped by a two Member Subcommitee consisting of the agency's legislative appointees from the Assembly and the Senate, respectively. The agency released its first draft regulations on these topics in September 2023, updated in December 2023, and have made further refinements in advance of our upcoming board meeting. All these materials, including the earlier drafts and public comment, are available on our website.
- Ashkan Soltani
Person
Now I'm going to provide a brief overview of the framework, and I'd like to start with key definitions. However, as I described the CPA's draft framework, please note that this is only a proposed regulatory text at this stage and remains subject to further amendment by the board, including at the upcoming board meeting in March 1. I'd like to explain the agency's overall approach in drafting these definitions.
- Ashkan Soltani
Person
While we just heard AI is an expansive concept and includes many different applications, the CPA's draft framework focuses only on certain activities that trigger substantial requirements. Our interest is in the nexus of AI and automated decision making technology, specifically the categories of machine based decision making that poses a significant risk to consumers'privacy, and security.
- Ashkan Soltani
Person
As such, the agency's definition of artificial intelligence closely tracks the OECD revised definition of AI as it's intended to be flexible and accommodate technological developments and be consistent with other frameworks, including the NIST AI framework, the EU parliament's proposed proposal for the EU AI act, and the White House Executive order on safe, secure, and trustworthy development and use of AI.
- Ashkan Soltani
Person
The agency's proposed definition of artificial intelligence means a machine based system that infers from input it receives how to generate outputs that can influence physical or virtual environments. The AI may do this to achieve explicit or implicit objectives. Outputs can include predictions, content, recommendations, or decisions. The framework accommodates various levels of autonomy and adaptiveness after deployment, and we clarify that AI includes generative models, such as llms that can learn from inputs and create new outputs such as text, images, audio, video, or facial and speech recognition.
- Ashkan Soltani
Person
However, the primary obligations that fall on AI are those that are used to make decisions about individuals that have a significant effect. This can include profiling individuals or those effects that substantially facilitate or replace human decision making that is used to make a significant decision, such as access to employment, education, or other essential goods and services.
- Ashkan Soltani
Person
This approach allows us to focus on the most impactful or significant harm stemming from automated decision making systems while maintaining an underlying framework that can allow innovation and adapt as new concerns emerge. This framework also requires businesses to provide meaningful notice to consumers when they encounter automated decision making technologies and in most circumstances, permit opt out of the use of it or request human appeal evaluation necessary in the cases of significant employment or educational decisions.
- Ashkan Soltani
Person
Note that there are still a great number of concerns with respect to AI that fall outside of automated decision making, and yet many of which we can address using our existing privacy framework that places limits on the collection, use, retention, and sharing of PI, which we also heard earlier. For example, companies are required to employ data minimization principles, purpose limitation principles on the collection and use of PI and permit consumers to delete their personal information upon request.
- Ashkan Soltani
Person
Our risk assessment framework further requires businesses to evaluate the risk to consumers'privacy, and security prior to developing or deploying AI systems, including when training models that can be used to profile consumers or develop deepfakes, defined as manipulated or synthetic media depicting a person saying or doing things that are presented as truthful but are not. So I often joke that there's no AI without PI and in fact, IP, although that's a discussion for another panel.
- Ashkan Soltani
Person
While there are certainly instances of development and deployment of AI that fall outside this framework, our agency is directed by the voters to vigorously protect their rights in the instances where their information is used in ways that negatively impact them. Lastly, two final quick points. The title of this panel is myths, magic and machine learning.
- Ashkan Soltani
Person
As this body contemplates solutions to govern this rapidly emerging field, I'd be remiss not to mention that without a skilled pipeline of experts in this field, the government will always be at a significant disadvantage with respect to the oversight of these technologies. I've seen this play out repeatedly in government, and with rare exception, unless you first address the skills gap and recruitment gap, this body and agencies like ours will be continually ill suited to engage meaningfully on these issues.
- Ashkan Soltani
Person
We've been lucky in that the voter initiative grants us certain unique roles, such as a chief Auditor that can provide meaningful oversight of businesses use of personal information in ADMT. But without new civil service classifications and improvements to hiring, the overall knowledge gap will be incredibly limiting. Second, even as this body contemplates safeguards with risks surrounding governments use of AI, it's important to point out that it's not typically governments developing these technologies.
- Ashkan Soltani
Person
The skills, the data, and the sheer compute power necessary to architect and train these systems surpass the resources that governments have, even with boundless coffers as mindful that fundamentally, it's regarding the developers, which are limited to a handful of multinational firms that ultimately built the foundation models that everyone relies on. So as we approach governance, even for a government's use of AI, we should be clear that we are also regulating commercial uses as well. Thank you.
- Rebecca Bauer-Kahan
Legislator
Thank you. I think your point is very well taken, that we need the talent in government to do is right. So anyone who's watching and wants to join us, we need all the talent we can get. Thank you.
- Gerard de Graaf
Person
Good afternoon, Madame Chair, Members of the Committee. It's an honor to be here. Thank you for the invitation. My name is Gerard de Graaf I'm senior envoy for the European Union to the United States on Digital, and I'm also the head of the EU office in San Francisco. Before I worked in the European Commission in Brussels and, inter alia, on the Digital Services Act, the Digital Markets Act, and also on the AI Act.
- Gerard de Graaf
Person
I think it's important to say from the outset that AI is global, the technology is global, the companies that are operating in this space are global, the opportunities are global, and the risks are global. And so when the EU started to design its framework, it was very mindful of the fact that it was moving as a first mover into this space, but that ultimately a lot of other countries and jurisdictions would follow the EU. And therefore, we are very interested in alignment.
- Gerard de Graaf
Person
And we were thinking about how to devise an instrument, a measure that actually kind of can become a model for the rest of the world to follow, much like the GDPR became a model for the rest of the world to follow in the area of data protection. And so we are very keen to work internationally and also to work with the State of California. Very pleased with the excellent cooperation that we have already built up with the State of California.
- Gerard de Graaf
Person
I can tell you that our colleagues in Brussels are following very closely what you're doing in California. They're fully aware of the bills that you have introduced, and they are very interested in these bills and in further cooperation. We prepared a few slides. Maybe they can be put on the screen. I'm going to try to keep it as simple as possible. I'm not sure I can meet the apple and tomato test, but I will do my best.
- Gerard de Graaf
Person
I think, first of all, maybe just give a slight, quick overview of what the EU AI Act is about before we then dive into the definition, which is, of course, critical, as I mentioned, maybe the next slide. It is the world's first and most comprehensive legal framework on AI. It's binding rules, and it's based on the premise that AI can bring many positive changes to us, to the European Union, and to the world.
- Gerard de Graaf
Person
The EU is definitely not AI-phobe, but there's also a need for some safeguards that are necessary to protect our fundamental rights. We have a tradition in the European Union to follow product safety rules because that helps the free circulation of goods in the internal market. That is our market that consists of 27 member states and 450,000,000 users. And that's the approach that we followed also for AI.
- Gerard de Graaf
Person
So it's a product safety regulation, ensuring that AI products and services that are put onto the EU market are trustworthy and safe, much like a lawnmower that is put on the EU market or a dishwasher that's put on the EU market. We want these products to be safe and trustworthy and not to pose risks to the users. I think, as was said before by Professor Nonnecke, we have followed a risk-based approach, and this is important, that we regulate according to risk.
- Gerard de Graaf
Person
So only high-risk use cases face strict rules. And often, I mean, when I'm here in this country, often people say, you regulate AI. Yes, we do. But we regulate effectively only 10% or 15% of AI when it is used in particular cases, that there is risk, and I will come to that a little bit later. We also do not regulate the technology as such. We are technology neutral. We regulate the use of the technology in a particular context.
- Gerard de Graaf
Person
So when AI is used in healthcare, yes, we think there are particular risks that need to be taken into account. When AI is used in, for example, the attribution of Social Security benefits, we think there are certain risks of bias and that therefore needs to be regulated. When AI is used by law enforcement, we think there are civil liberties that need to be protected. So we look at particular use cases and the use of that technology in those cases.
- Gerard de Graaf
Person
Overall, the EU thinks that the framework, or believes that the framework, will help the growth of AI, the use of AI in the EU, because if you do surveys in the European Union, and I don't think it's much different in this country, you find that about 50% of people don't trust AI very much. We believe that a market where about like half of the potential customers don't have trust in the technology is not a market that is going to grow very fast, very, very big.
- Gerard de Graaf
Person
So to put kind of trust and human-centric AI at the center of the European approach, we actually think that is pro-growth and pro-innovation, pro-responsible innovation, and also creating a level playing field. Maybe we can go to the key principles behind the definition, because the definition is, of course, absolutely critical, because it defines the scope of the instrument. And there's three principles that have kind of shaped the definition that you can find in the EU AI Act that it's been politically agreed.
- Gerard de Graaf
Person
It will be adopted in the middle of April, and then it will be entering into application by the end of April, beginning of May. First is legal certainty. In the European Union. We have 27 member states, a bit like the US, with 50 states, I mean, they regulate the member states regulate, they regulate in different ways. And so if we have a situation where the definition is different from one state to another, this creates problem for the free circulation of AI and of goods in general.
- Gerard de Graaf
Person
And therefore, there is an interest in having a single definition for the whole territory of the European Union, because that just promotes legal certainty. And particularly in an area where there's massive investments in AI and developers are developing AI solutions, predictability is critical. We also believe from a legal certainty point of view, that you must not define AI in a too broad manner, but also not in a too narrow manner. We don't want to overregulate.
- Gerard de Graaf
Person
We don't believe that every imaginable automated decision tool should be covered as an AI system. So to get the definition absolutely right is critical. And there's been a lot of time spent in the negotiation on getting that decision right and that definition right.
- Gerard de Graaf
Person
Well, the second point I already made, we were quite mindful that we were putting like we were trailblazers here, we're pioneers, and that what we would be putting forward would be a source of inspiration for other countries and jurisdictions around the world to follow. So we were very much inspired by international work. And so we've worked very closely in the OECD, the definition has already been referred to. It's certainly at the core of the EU's definition.
- Gerard de Graaf
Person
We've worked very closely in the G7 and in the G20, United Nations. We have with the US, the Trade and Technology Council, where this issue, and also the definition has been extensively discussed. So the EU hope is that with the definition, which is based on the global discussions that we've had, there is a good basis for other jurisdictions to align themselves to that definition. The third element, as I mentioned, also is we need to provide for a degree of flexibility.
- Gerard de Graaf
Person
The technology is moving very fast. Again, if we have a too fixed or too rigid definition, it will become outdated rather quickly. I think the implication is that the definition should accommodate, and this is like a bit of a segue into the next slide for varying levels of autonomy, adaptiveness and intelligence. And this brings me to kind of to the OECD definition and the European AI definition of AI systems.
- Gerard de Graaf
Person
It's maybe a little bit complicated to read, but I think what stands out, the main point is that the definition adopted in the EU AI Act is closely aligned with the OECD definition. There's a number of core elements. One is autonomy, the other one is adaptiveness. Then you have explicit implicit objectives and inference. They're the same. And I will come to that and give a few examples like what that means in practice.
- Gerard de Graaf
Person
I think what is noteworthy is that the OECD definition, which dates back to 2019, has recently been updated. Last summer it was updated, and it's clearer on a number of elements, particularly element of inference, which is crucial in the EU's decision also to adopt the OECD definition, because it raises the threshold somewhat and makes that not every automated decision tool is covered. And also the implicit objectives.
- Gerard de Graaf
Person
I mean, we see in some of the bills that have been introduced before the Legislature here in California, it is human-defined objectives. So these are like what is called explicit objectives. We believe that also objectives that were not human-defined, but that kind of, the system itself develops as it is deployed, should be covered under the definition. So implicit objectives, which has been kind of added to the OECD definition, is important for us. In terms of the drafting, you see some differences in drafting.
- Gerard de Graaf
Person
That is because we are drafting a legal instrument. We have our own traditions of drafting a legal instrument in the European Union, the OECD text and the definition is more of a policy text. So it's not kind of substantive, it is presentational. The changes in the definition, maybe we can go to these four kind of components of the definition and then look what it means in practice, what is then covered or not covered as a result of kind of taking these components.
- Gerard de Graaf
Person
And that's the next slide. So you have varying levels of autonomy. And I think the key point here is that there needs to be some degree of Independence from human involvement. The exact level of autonomy may vary. For example, if you take a Waymo car in San Francisco, it is more autonomous than if you drive in a Tesla. So the degree of autonomy may vary, but there needs to be some kind of Independence from human involvement.
- Gerard de Graaf
Person
So what we are excluding, by putting in varying levels of autonomy, are systems that have no ability to operate without human involvement. And that is, for example, automated credit scorecards for loan grants, because they are programmed on the basis of predefined criteria. So it's basically one plus one is two. This is not where an AI model kind of just makes an interference. It is basically executing criteria or instructions that were given by a human. So that would not be covered under the EU AI Act.
- Gerard de Graaf
Person
Because it has no autonomy, it is not independent from human involvement. The second factor is, may exhibit adaptiveness after deployment. And the consideration here is that AI systems may continue to evolve after their design. It's not a hard requirement because it says it may, but it may also not exhibit adaptiveness. But we think adaptiveness is important.
- Gerard de Graaf
Person
And if you look at certain examples, like voice recognition systems adapt to a user's voice. Recommender systems that was talked about here, when we watch Netflix or we listen to Spotify, they adapt to our individual preferences once they are deployed. So this adaptiveness after deployment for us is an important element in terms of the definition, and therefore also for the scope. The third point I already hinted at, which is this explicit or implicit objectives.
- Gerard de Graaf
Person
In many cases, of course, the developer programs the objectives of the system. But there are also situations where the system itself develops new objectives that were not explicitly programmed at the time of the design or the development of the AI system. And an example of that is a self driving car, which is of course, programmed. And again, San Francisco is a good example, to comply with the traffic rules in San Francisco. That's an explicit objective.
- Gerard de Graaf
Person
But that car that is driving around in San Francisco can also, itself, optimize its driving behavior, for example, for fuel efficiency or passenger comfort. This is not necessarily something that the programmer may have programmed, but it is something that the car itself develops as an objective. So that's an implicit objective that only kind of materializes once the system is deployed. And therefore, we believe it's important to cover both explicit and implicit objectives. But the last point, point four, is inference, and that, for us, is a critical element, acritical aspect of AI systems. And inference relates to the second stage of how AI systems work. We talked earlier, Professor Nonnecke talked about machine learning, so how programs are trained. This is about like what happens once programs have been trained. Interference means that an AI system classifies the input. It gets the tomato picture or the apple picture, and then generates the results. It is based on the models and the algorithms it has developed during the training.
- Gerard de Graaf
Person
If you take an example, for example, in medical treatment, an AI system can analyze pictures of melanoma or other skin diseases based on medical images it has been trained on. And the system will then output whether the person is likely to have that disease or not. And that would therefore be covered, because there's an inference made by the AI system. What is not covered is where the system doesn't make an inference.
- Gerard de Graaf
Person
And a simple example is like the spell checker on your iPad or on your iPhone. It is simply connected to a dictionary, and it checks whether the spelling is conformed to how it should be in the dictionary. And if it's not, it will correct it for you. The Excel AutoSum files, for example, you input data, and then the Excel file kind of gives you an outcome and that's not inference.
- Gerard de Graaf
Person
It is, again, simply, one plus one is two. Or an automated triage system for medical treatments which is fully based on predefined rules. So, for example, your age or your medical history or resource availability leads to a prescribed medical treatment that was pre-programmed, so there's no interference. So inference for the European Union, for the definition is an absolutely critical element. So that's the definition.
- Gerard de Graaf
Person
Now, like two other kind of overlays on that definition. We have a number of exemptions to the definition, and they appear on this slide. And so the exemptions are because we want to promote, we definitely don't want to hinder scientific research. So scientific research is out of the EU AI Act. We also eliminate or exclude the AI systems which are still in the research, the testing and development stage. And the logic, again, is product safety. These products are not yet on the market.
- Gerard de Graaf
Person
They are still in the laboratory, they're being tested or being researched. There is no reason, therefore there's no risks to the users, and we want to stimulate research and innovation. And the last point is a bit specific, maybe to the EU, because the EU has limited competence in the field of military, defense and national security. So that is also not covered by the EU AI Act. That's the first overlay. So we have the definition and then we exempt a certain number of areas from that definition.
- Gerard de Graaf
Person
And then the other overlay, which is as important, if not more important, is this risk-based approach narrowing down really to the AI systems that should be regulated. So you have the broader definition and then you overlay it with the risk-based approach, and then the risk-based approach actually defines how you should regulate--even though an AI system may be covered--the decision. If the risk is like, you see this pyramid of risk from unacceptable high risk, limited risk to minimal risk.
- Gerard de Graaf
Person
So you might be covered, or we might cover a particular AI system, but if it presents minimal risk, there's actually no, or hardly any rules. I mean, minimum transparency rules might apply, but not much. If it's more risk, I mean, like unacceptable risk, there's a number of AI systems that would not be acceptable anymore in the European Union, for example, in terms of facial recognition, for law enforcement, certain applications.
- Gerard de Graaf
Person
We don't want to live in a surveillance society in the European Union, so there are prohibitions in that area. Credit score, I mean, it's a system that probably you're familiar with in China, where kind of the citizens are scored for their loyalty to the Chinese Communist Party. That's not something that we would like to see in the European Union. Certain systems that exploit or use subliminal techniques to get people to take decisions against their interest or against their will.
- Gerard de Graaf
Person
So those systems are prohibited in the European Union, and you have the high risk. And the way product safety works in the European union, that also comes back to a bit of the flexibility of the definition, is that we set the essential requirements in the EU AI Act. So we define what safety and trustworthiness mean. And then you go to the principles of the fundamental rights, like transparency, like nondiscrimination bias, I mean, like the protection of your privacy. And then the industry actually develops that into standards.
- Gerard de Graaf
Person
The way product of conformity works or product safety works in the European Union, we set the essential requirements in a legal instrument. The industry then translate those into standards, and then the industry, by showing conformity to these standards, complies with the regulatory framework. So we will have--and there's already a lot of activity in the European standardization organizations to set standards in this space and to allow companies then to, if it's light-risk, to self certify, if it's a more kind of serious risk to have these independent parties. And I think Professor Nonnecke made a very valid point. In the European Union, we have these conformity assessment bodies, but they are regulated themselves. I cannot just say, "look, I'm a conformity assessment body. I mean, just ask me, pay me, and I'll give you a certificate and I can confirm that you're all right." Now, these bodies themselves also need to be regulated. So, in conclusion, international alignment is key for the European Union. Convergence is emerging. I mean, we have the OECD, we are negotiating in the Council of Europe, which is not the European Union, but it's an organization of 46 member states. And the US is an observer there, also a convention on AI.
- Gerard de Graaf
Person
And that is using the same definition as the EU AI Act and the same definition as the OECD. We are working with a lot of partners around the world, and they're also kind of inclined to follow the OECD definition. I think it's important in the OECD definition to take the latest version of the OECD definition. So, i.e. the one of, like last summer, rather than the one of 2019, because it puts more emphasis on two very important elements. One is the inference, so as to ensure that not every automated decision is covered, and also the issue about implicit, so the objectives that the system itself develops after it's been deployed. With that, Madam Chair, I yield the floor.
- Rebecca Bauer-Kahan
Legislator
Thank you. I can start? Unless anybody else has some questions. So, I mean, I think that what I'm gathering from this conversation, and my reading generally is there seems to be consensus around moving towards the OECD definition. And I think both of you, again, not exactly, but that's sort of the foundation of both definitions with some differences. So we'll start with the privacy agency definition. Again, it's very similar, but it is not directly adopted.
- Rebecca Bauer-Kahan
Legislator
And specifically, you call out generative AI in a way that is not called out in the OECD definition. Can you talk about why you made that change?
- Ashkan Soltani
Person
Certainly, and this is again a draft recommendation, but certainly we felt that one, our definition doesn't vary so much from the EU AI, the latest--I think it was October definition--from the OECD definition either. We just rewrote it for kind of clarity and simplicity, and for our statute directs us to also provide guidance to businesses. And we do that through APA rule-making. So through examples.
- Ashkan Soltani
Person
And so for purpose of being abundantly clear, we thought it was important to articulate that generative AI models are included and what the rights are with respect to that. So our definition is almost, I think, mirrors OECD. It's rewritten for multiple sentences instead of one long on. It's like, do you prefer Hemingway or do you prefer?
- Rebecca Bauer-Kahan
Legislator
Got it. I feel like all I did as a law Professor was cut words out of people's writing, so I'm a shorter writer. So, and then the EU AI Act adds in this idea that you spoke about displaying adaptiveness after deployment, and you spoke a little bit about the importance of that. As I read it, and correct me if I'm wrong, it may include that, but it doesn't necessary that it includes that adaptiveness. Is that correct?
- Gerard de Graaf
Person
That's correct, may or may not.
- Rebecca Bauer-Kahan
Legislator
Yeah, may or may not. I guess I understood what you were talking about around voice recognition, et cetera, in which it would apply. But why did you add that in? Why was that important?
- Gerard de Graaf
Person
Well, the adaptiveness refers to the ability of the system to learn after deployment, so once it's in use. And that's, I think, closely related to also the point, the element of inference. Although adaptiveness is more the deployment or the use stage, and the AI Act kind of, again, it covers both situations. So where the system doesn't adapt and where it does adapt. If it doesn't adapt, and if we would only focus on where it doesn't adapt, the definition would become too limited.
- Gerard de Graaf
Person
And therefore, we believe it's important to add the word "may adapt," because many current advanced AI systems, generative AI, voice assistance, recommender systems, actually do adapt after use. And so if we would kind of not make the definition kind of clearer in terms of may or may not, then we would probably miss out on a very important part of what we would want to cover, because, of course, the definition must zoom in onto areas where we believe there are certain risks that need to be mitigated.
- Gerard de Graaf
Person
And if we would have a definition that would be incomplete, we would miss out on these important elements. So adaptiveness is actually increasingly a feature of these systems. So that's why we got it emphasized "may adapt."
- Rebecca Bauer-Kahan
Legislator
And then one other change was that instead of just saying "it operates at varying levels of autonomy," you added in that it had to be designed to operate at varying levels of autonomy. That seems like it would actually narrow the scope, the definition. Can you talk about that?
- Gerard de Graaf
Person
Yeah, I mean, that's a bit like a Hemingway, kind of. It's a linguistic rather than a substantive difference. I mean, when we do legislation, we have lots of recitals around that explain what the legislation is intending to say. And actually, in the recitals, it's clarified that the definition doesn't consider the intentions of the designer, maybe because it might make you believe that design means there was an intention of the designer that should not be read into the text, and the recital clarifies. That is immaterial.
- Rebecca Bauer-Kahan
Legislator
Got it. Okay, that's helpful. And then think, you know, we see already, in just a short period of time, even the last year, how quickly this technology is moving--my colleague from Oakland mentioned that--and the privacy agency's definition expands from machine-based to engineered or machine-based. Our science fellow was teaching me about biological materials that can be used, organoids, that's a whole 'nother mind blowing idea.
- Rebecca Bauer-Kahan
Legislator
So, I guess, have you guys thought about whether your definition that you're thinking of adopting or have adopted will need to be reevaluated over time, or do you think it's nimble enough to really meet the needs of the future? And I asked both of you that question.
- Gerard de Graaf
Person
Well, the AI Act is a risk-based product safety regulation. So product safety regulation typically focuses on products that are already on the market or relatively near to the market. I mean, it's hard to kind of already imagine products that kind of, like, are 10 years away from us. So it is a pragmatic. It takes a pragmatic approach. It aims to regulate the systems that are on the market or soon will be on the market.
- Gerard de Graaf
Person
And of course, when we were reflecting on the definition, there is always this, like, "well, how can we make sure it's future proof, but at the same time, how can we make sure we provide the necessary legal certainty and predictability?" And so no definition can accommodate for all potential future developments. But that being said, machine-based refers to the technology being used, and that can be hardware or software.
- Gerard de Graaf
Person
And in principle, this could also encompass hardware which goes beyond our current technical understanding to cover other mediums or information or system. We have the opportunity under the EU AI Act to provide guidance on the definition. So we could also, over time, of course, not without changing. I mean, we cannot change the definition, but at least we could clarify if there are certain developments that kind of can be captured by the definition.
- Gerard de Graaf
Person
We can actually provide legal certainty by issuing guidance to confirm that that is the case. But there is a trade-off between future proof-ness and legal certainty and predictability, and the EU chose to regulate what it is aware of and what is on its way to the market, and not to regulate what might come in 10 or 15 years time.
- Rebecca Bauer-Kahan
Legislator
Thank you. Anything to add?
- Ashkan Soltani
Person
Yes, certainly. That's essentially our approach as well. Our statute directs us to update the regulations and definitions as necessary to address changes in technology, and we certainly would expect to do that as biological computation becomes introduced, widespread in the market. As I mentioned, my background was cognitive science. In the 90s, we did computation on squid neurons doing small case computation, but we haven't seen that deployed yet.
- Ashkan Soltani
Person
Certainly, in that case, we would either seek to either interpret our regulation as it applies to that or expand our definitions to accommodate those new technologies.
- Rebecca Bauer-Kahan
Legislator
Got it. Thank you. Appreciate that. Yeah. Ms. Wicks?
- Buffy Wicks
Legislator
Thank you, Madam Chair. For Mr. de Graaf, thanks for your testimony today. Would love for you to expand a little bit upon, as you all were engaging in creating a regulatory framework, what the conversations with industry were like, the concerns they raised, what they felt was doable. I mean, one of the things that we want to make sure we're doing here is that we're actually creating policy that can be implemented. Your framework seems like one that allows for that in a way, because of the risk assessment.
- Buffy Wicks
Legislator
So that's one question. Also, is it your sense that industry would like to see conformity in terms of what's happening in the EU and what's happening in California? Ideally, I would assume federally as well, but that's beyond my purview, and seeing some alignment would make it actually easier to implement if all of us are essentially doing the same thing rom a regulatory point of view.
- Gerard de Graaf
Person
When the EU legislates, there is a very intensive phase of public consultation and a very intensive phase of impact assessment. So it is something that kind of is well prepared. We take our time. We made a proposal in 2021, but we started the work in 2017-2018. And of course, there'd been a lot of discussion with the industry at an early stage. We focused, and I think Professor Nonnecke made that point very eloquently.
- Gerard de Graaf
Person
We focused very much on the problems or the risks that exist today, where AI systems are used and they could produce outcomes that would not be fair or not be transparent or discriminatory. That's how the original AI Act was conceived. And I think that's something that, of course, I think generally was endorsed, including by the industry.
- Gerard de Graaf
Person
And the industry in the European Union is very familiar with a product safety approach, because that's the approach that we generally use in order to facilitate the trade in goods across this single market of 20, 27 member states. Of course, then more recently, actually, during the negotiations, then ChatGPT came out, and then there was the whole question, like, "well, what are we supposed to do? What should we do with generative AI?"
- Gerard de Graaf
Person
Can it be captured under the framework that we have already developed, or do we need like a specific chapter on generative AI? It was decided we needed a specific chapter. So it was a bit kind of like working on expanding the legal text whilst we were negotiating it.
- Gerard de Graaf
Person
I think, in general, what the European industry fundamentally dislikes, and of course, this is one of the features, unfortunately, that we have experienced in digital, in particular, is a fragmentation of the single market, where if you are successful in one member state in Germany, and you want to take your digital product to another member state, you find out that the rules are different.
- Gerard de Graaf
Person
And rather than recruiting an engineer, if you're a startup, you have to recruit a lawyer because you need the legal advice, like, how do you need to adjust your product in order to be able to put it successfully onto the market in another member state? And if that happens like 20 times, then it slows you down and it drives up the cost.
- Gerard de Graaf
Person
And this is maybe one of the reasons why the EU doesn't have more big tech companies compared to the US, because it just takes a much longer time if we have legal fragmentation. So the EU industry is generally very much in favor of harmonizing regulation, then, of course, the question is, where do you put the bar? And we've put the bar where the bar has now been put.
- Gerard de Graaf
Person
I think when you look at the US, when we came here, when I came here a year and a half ago, the question was, or more the criticism, why is the EU regulating? It's too soon. You're stifling innovation. Technology moves too fast. I mean, the EU can't compete, so you're trying to hold us back. Well, then ChatGPT-3, 3.5, ChatGPT-4 came and the conversation in this country changed completely.
- Gerard de Graaf
Person
I mean, the criticism that we get now is, "why does it take you so long to regulate?" Because now we still have some kind of, we haven't adopted it yet. It will need a little bit of time to implement it. But there's definitely because, of course, insofar as this concerns activities of companies like OpenAI and Tropic and all the other companies we know in the European Union, they will be regulated by the EU AI Act.
- Gerard de Graaf
Person
And so they have an interest in avoiding a situation where, in multiple jurisdictions around the world, because these companies, of course, will operate at a global level, they will have to cope or to address different frameworks that kind of may in some cases even be inconsistent. So they definitely would welcome an alignment between the EU and the US. Also very supportive of all the work that we've been doing in the Trade and Technology Council.
- Gerard de Graaf
Person
And I guess also kind of working to see the extent to which California and the EU can align its methods. I think that one of the critical points will be on the enforcement side. And I think the point has been made different times. The enforcement in the European Union is done by market surveillance authorities. I mean, these market surveillance authorities nowadays are responsible for checking that a lawnmower meets the technical safety standards that, like a dishwasher, all of technical equipment.
- Gerard de Graaf
Person
And so to get these surveillance authorities up to speed, because to assess and the conformity of an AI piece of software or an AI device, a medical device with AI in it, for example, or a toy with AI in it, requires a lot of expertise that needs to be built up very quickly. So I think from a public policy point of view, there's a challenge for policymakers to understand. Say, I think what you're now also working on with hearings, et cetera, like what is the technology?
- Gerard de Graaf
Person
Where is it going? Where is the market failure, where is the potential need to intervene and regulate? So that's definitely a challenge. And the EU has kind of worked its way through this. But then the other challenge is, once the regulation is on the books, is in order, is to make it work and to have the necessary enforcement capacity. The European Union, for example, is also establishing now an AI safety office to supervise these very powerful, generative AI models.
- Gerard de Graaf
Person
Well, I can tell you that is a huge challenge, also to recruit people that have that necessary expertise in the public sector where there is a war for talent going on, and the public sector, unfortunately, can't pay the kind of salaries that the private sector can pay. That is a significant challenge. We will manage, but it is not to be underestimated.
- Rebecca Bauer-Kahan
Legislator
Thank you. Anything else? Perfect. Anybody else? Yeah, Mr. Hoover, no worries.
- Josh Hoover
Legislator
For Ash, thanks for being here, both of you. Just quick question, from your perspective on why do you feel we have kind of two simultaneous attempts at regulation going on between CCPA and the Legislature? Where do you see your agency's kind of limitation of its authority and how that will interact with the chair's legislation and kind of the legislative efforts this year?
- Ashkan Soltani
Person
I appreciate that. I'm not certain that I see kind of a competition or I see kind of an opportunity for collaboration between our agencies. The initiative, the ballot initiative, directed us as of 2021 to begin rule-making in this area. And certainly we've done multiple years of work in that space and are about to promulgate our regulations in that specific application where the agency and the board have felt automated decision making technology introduces risks to consumers privacy and security.
- Ashkan Soltani
Person
Certainly, as I mentioned in my testimony, there are areas outside of that where we, I believe, can be a resource to you all as technical experts. We have an audit authority. We have kind of, I'm a technologist myself, I'm not a lawyer. We can help this body figure out how to govern these other important areas that fall outside our framework, and there's kind of opportunities for kind of harmonization between those two frameworks.
- Ashkan Soltani
Person
Our statute actually directs us to harmonize where possible, to ease implementation of the law. And I think for this reason and what Mr. de Graaf mentioned, this is why kind of a uniform definition is important, why interoperability is important, why I think having a common vocabulary is important, and technical expertise. And so I don't necessarily see it as kind of competitive. I see that as a great opportunity to collaborate.
- Rebecca Bauer-Kahan
Legislator
Thank you. And I agree. And we've already begun our collaboration, and I really appreciate the agency's expertise. It is hard to get, and so the more we can leverage the expertise at our fingertips, the better. And there's no question that I believe it's our role as a Legislature to always protect our constituents, and so that's what we'll continue to do. So thank you, and thank you both for being here and for your answering all of our questions and providing your important input.
- Rebecca Bauer-Kahan
Legislator
The definition is so critically important, so we really appreciate your insight. And then with that, we will move on to Panel 3, which is Issues and Policy Solutions and Artificial Intelligence. We have Professor Farid, a Professor of Computer Science at the University of California, Berkeley. I am married to a Cal ECS major, so welcome. We have Orly Lobel, the Warren Distinguished Professor of Law, Director for the Center for Employment and Labor Policy from USD, University of San Diego.
- Rebecca Bauer-Kahan
Legislator
And then online we have Sorelle Friedler, the Shibulal Family Associate Professor of Computer Science at Haverford College, who was at the Office of Science, Technology and Policy at the White House for some of their work as it relates to artificial intelligence prior to becoming a Professor at Haverford. With that, Professor Farid, would you like to begin?
- Rebecca Bauer-Kahan
Legislator
Button, mic, button.
- Hany Farid
Person
I should turn the mic.
- Rebecca Bauer-Kahan
Legislator
There we go.
- Hany Farid
Person
Thank you. So we've been talking a lot about AI and machine learning. I'm going to sort of narrow in a little bit and talk specifically, specifically about deepfakes, generative AI, and talk about a little bit of how they're made, what they are, where are we, how they're being weaponized, and how we should be thinking about this. And I want to get back to some of the discussions we've been having about the non consensual sexual imagery, because I think that's an important one. Next slide, please.
- Hany Farid
Person
So, in the space of generative AI over the last few years, we've seen phenomenal advances in the ability to create highly realistic images, audio and video. This on the screen here is an example from May of last year. It is an image that was created using generative AI with a text prompt that reads, an explosion outside of the Pentagon. It went viral on Twitter, and in about two minutes, the stock market dropped a half $1.0 trillion. That's a T, trillion.
- Hany Farid
Person
When you talk about generative AI, you should think beyond the actual content. The issue with generative AI is not just that you can create this stuff, but that you can distribute it widely via Twitter and Facebook and YouTube and TikTok. And so we have to think about the entire online ecosystem, not just the creation, but also the distribution. This is a pretty crappy image, and it moved the stock market half $1.0 trillion.
- Hany Farid
Person
And today I could go to any number of open source or very low cost services and generate highly, highly realistic images that can do real damage. And that technology is out there and there is no bringing it back. Next slide, please. I think it's just worth. I'm a professor, so I like telling people how things work. And so I think it's just worth to understand how these things work.
- Hany Farid
Person
And so I want to just take a minute here to talk about how that type of image is made. So the way these generative AI systems work for images is that they ingest billions and billions of images on the tune of around 6 billion images with captions. So these are annotated images. This one is a group of people having wine up in, up in Sonoma County. And what the models do is they start degrading the image.
- Hany Farid
Person
So from left to right, you see a degradation with what's called additive noise. You make the images less and less and less visible. And then it learns to reverse the process, conditioned on the caption. And so 6 billion times it has learned, how do I take a pure noise image, what you see on the right there with a caption, a description, and go backwards into something that is semantically consistent. And it's just done that 6 billion times.
- Hany Farid
Person
And so now when you give it basically any caption, it's pretty good at it. This is, by the way, why when you give the same caption over and over again, you get a different image. It's because it's starting with a different random seed, and that just gives you a sense of the power of the underlying technology. Next slide, please. On the video front, just. I think it was last week. Can you play the video, please? Sora released text to video, so now we have the.
- Hany Farid
Person
OH, we didn't have that video. No. Okay, go look up Sora. We now have text to video, so it will no longer be an image of a bombing of the Pentagon. It will be a video of the bombing of the Pentagon. And we now will create highly realistic videos of people saying and doing things they never did. On the audio front, can we try to play the audio? Okay, we can now synthesize people's voices from about 60 to 90 seconds of their speech.
- Hany Farid
Person
So I can take my representative here, take any of your interviews, 60 seconds, upload it to a service I pay five minutes for, $5 for, and I can type and have you say anything I want you to say. So if we can try to go to the slide with Anderson Cooper on it and play an audio. Yeah, so if we can play that audio, we can't hear 20 for three, because I didn't get the mic right either.
- Rebecca Bauer-Kahan
Legislator
We need a computer science expert here.
- Hany Farid
Person
God damn it. Hey, Brandy. No. Okay. She's taking pictures of this, by the way, just for the record. Okay? So we can clone people's voices. Highly realistic. You saw this in New Hampshire with the Biden robocalls, which, by the way, a magician in New Orleans was paid $150 to create that audio. You can't make this new story up, by the way, on the video front. Next slide, please. I'm not even going to try to play the video.
- Hany Farid
Person
Not only can I generate audio of Anderson Cooper speaking, you can go to the next slide. I can now get him to say those words so I can create so called lip sync deepfakes where I modify just his mouth from another video and I can make him say anything I want. And we've seen example after example of this being used for scams and frauds. We're seeing face swap deepfakes where the entire face is replaced to create non consensual sexual imagery and push disinformation campaign.
- Hany Farid
Person
Next slide, please. You can skip the video. It doesn't matter. It's just basically Anderson calling in preparation for this. It's okay. It's basically Anderson calling me a dipshit because this is what my students do for fun. So you've already talked about non consensual sexual imagery. This stuff is being used. You have to understand that this technology is not hard to use. For most people, it's just a matter of going to a website.
- Hany Farid
Person
Either it's free or you pay a few dollars and you can now create almost anything. Next slide, please. So if we can play this, this would be really great. If you can. All we are is dust in the wind, dude. Okay. The video didn't play, so we can also create over four. We can also create deepfakes in real time. So this was a Zoom call I had with one of my former students, Elliot. And as he was talking to me, his face looked like Keanu Reeves.
- Hany Farid
Person
And he was doing that on his personal computer in real time. And if you missed the story a few weeks ago, there's a guy in Hong Kong who lost $25 million. That was not the first time, and it will not be the last time. We have HR problems where people are interviewing for jobs, and it is not who they purport to be. We are seeing massive fraud. And there was one in the UAE, there was one in the UK.
- Hany Farid
Person
And I can tell you this is happening here in the US, but nobody's talking about it publicly because it's embarrassing. And so now we can create deep fakes in real time, both audio and video, which means the phone calls you're going to start getting and the Zoom calls you're going to be on, you are not going to no longer know whether that person is real or not. Next slide, please. Okay, that's the Hong Kong one. Next slide, please.
- Hany Farid
Person
So I want to spend the last three minutes that I have talking a little bit about threats and mitigation. So I've already mentioned a few of them. You are seeing small scale and large scale fraud. Individuals. Grandmothers are getting phone calls from their grandsons, purporting to be their grandsons, and then scamming them out of thousands of dollars saying they're in trouble. We got the big multimillion dollar scams happening at the organization level.
- Hany Farid
Person
We are seeing disinformation campaigns being pushed around, the campaigns around elections, not just here, but around the world. We are seeing this general erosion of trust. And here's the big one that you need to think about is that if we enter this world where anything can be fake, anything you read, anything you see, anything you hear, well, nothing has to be real. We have plausible deniability.
- Hany Farid
Person
The access Hollywood tape of then candidate Trump in 2016 saying what he does to women, he doesn't have to cop to that anymore. It's a deep fake. Right. How are we going to deal with this in the courts? How are we going to deal with in elections? How are we going to trust anything that we read and see online? And that erosion of trust, I think, is worrisome. Next slide. And so, in the last few minutes, I just want to talk a little bit about interventions.
- Hany Farid
Person
So there's a couple of places that you can come in. But let me just start with the challenges. I think they're obvious. It is a fast moving space. There are some big players in the space.
- Hany Farid
Person
There isn't the open AIs, but there is a huge, enormous open source community around the world developing these technologies, making them freely available, and so regulate all you want, but the 17 year olds outside the US don't care about your regulation, and that's going to be a very hard lift, and that I don't have a great answer for. But we should talk about it. I think that there are legislative interventions and we should talk about that.
- Hany Farid
Person
But one of them that I want to point to, and this is going to get back to the non consensual sexual imagery, is you're not going to stop 15 year olds from creating this content. You are not going to pull down every single open source model that creates non consensual sexual imagery off the Internet. You're not going to be able to regulate your way out of this. But there are choke points, and the choke points are that people are monetizing this.
- Hany Farid
Person
There are four large financial institutions, Visa, MasterCard, American Express, PayPal, that if they cut off the feed to these services, then downstream, now they got to move to crypto, and I'm going to declare success. So you should think about choke points in the technology sector, because it's not just that people can create this, they have to have a presence on the Internet. Somebody is hosting their service, they're using cloud computing, they're using financial services.
- Hany Farid
Person
So think about the entire ecosystem, not just that core AI set, because those are going to be bad actors, but there are good actors, either intentionally or unintentionally propping them up. Also on we should be talking about, very seriously about copyright infringement and intellectual property, because I have 17 seconds and I saw you looking at your watch. So we should be talking very seriously about content creators rights.
- Hany Farid
Person
The vast majority of what we are talking about in the generative AI space has been created by indiscriminately scraping the Internet for a lot of people's content. Thank you very much.
- Rebecca Bauer-Kahan
Legislator
I mean, we're lax around here. We give people some extra minutes. If you had anything to add.
- Hany Farid
Person
No, I think I got all the videos didn't play, so I was faster.
- Rebecca Bauer-Kahan
Legislator
Okay. I appreciate that. I think your point around the choke points is a really important one. And I would say that some of the stuff is more than worrisome. I think our democracy is really on the line, and I think we saw that coming out of the recent war in the Middle East. At the beginning, nobody knowing what information flowing was legitimate and what wasn't. And that's a really scary world to live in.
- Hany Farid
Person
I think that's right. And just to contrast that, when Ukraine started two years ago, we saw a little bit of disinformation, a little bit of deep fakes, but it was nothing. It was nothing compared to what happened in Gaza. And so you saw that over the last two years and it just poisoned the well. So things that were real were being claimed as fake, things that were fake were being claimed as real. And everybody just gives up at that.
- Rebecca Bauer-Kahan
Legislator
Yeah. Awesome. Thank you. Okay, Professor Lobel, on to you. Thank you. zero, your mic button.
- Orly Lobel
Person
Okay. Thank you, Chairman. Thank you members of committee. My co panelists, I'm really happy to be here to talk to you about AI, technology, privacy. I don't even have a PowerPoint. So Professor Farid was talking about his tech fails. I failed to send it four days, calendar days in advance, and when I sent it, it was not approved. So you'll have to imagine all my images, but I'm also going to step back.
- Orly Lobel
Person
Professor Farid was moving to the more narrow question of deep fakes and images and trust of generative AI. I want to talk as a behavioral scientist and somebody who studies law and policy and collaborates with economists and social psychologists to think about how we think. We're thinking about regulating AI and to think very seriously about how we can direct law and policy to positively impact human and organizational behavior in human and machine interaction.
- Orly Lobel
Person
So we've heard already about the dazzling range of bills that are right now before the legislature, the race to regulation. There is also very much kind of this pendulum swing that we heard from our colleague from the EU about why has the EU moved so quickly to legislate? And now why hasn't it moved quick enough? And I think that there's also this kind of pendulum in how people think about AI for good. Is it producing more risks and problems?
- Orly Lobel
Person
And what is very important to me is that we have informed conversations about what we are talking about. So we had a lot of good presentations about the definitions of AI. But I also think that we need to think about how do we think about the risks, the benefits and the potentials? How do we think about the balance of shifting to automated systems.
- Orly Lobel
Person
I, in my research have been documenting vast potential of AI, so potential to increase accuracy and access and knowledge, efficiencies, safety, consistency in really every field of our lives and in both the public and the private sectors. Now to give just a few examples of the numerous developments that are right now happening in AI that have potential to improve lives and regions. These really, again, span every aspect of our lives.
- Orly Lobel
Person
But I think very obviously health and medicine, the FDA has already approved hundreds of AI based medical devices, spanning from radiology and diagnostics to procedures and ongoing patient care. In my work on employment processes, I've been very interested in how companies are using AI to create broader pools of applicants, to create more diversity in hiring and retention, to use AI to see pay gaps, gender and racial pay inequities that are not seen perhaps by human decision makers, and many other examples.
- Orly Lobel
Person
Again, I was a law professor. I think a lot about access to justice and legal tech that has the potential to increase the services that are provided to low wage workers when they need access to justice, to immigrants, examples in education, and also alleviating government work and helping create more compliance and efficiencies in enforcement. And at the same time, there's of course, concerns. And a lot of the times the concerns about AI risks or AI harms or fails are justified and real.
- Orly Lobel
Person
But at times we've documented how they are distorted and some of the fears are irrational, or they're focusing not on the AI that's here, but kind of the terminator effect of the AI that's not here. We've already kind of mentioned this, wanting to regulate what we do have and not kind of fearing the technology that has not been developed. Not that we shouldn't think about it, but we should definitely be most focused on what it is actually before us.
- Orly Lobel
Person
So I think it's helpful to present five different ways that legislators and policymakers can think more rationally about AI readiness for deployment and thereby how to regulate it. What to regulate? So one is to insist constantly on a comparative advantage question in a lot of these areas of regulation, there is a tendency to hold a double standard, to demand perfection from AI applications that is not demanded from human decision makers. You see this, for example, in the question of bias.
- Orly Lobel
Person
Again, as already mentioned, there's many types of biases. But as somebody who's been teaching employment discrimination for two decades and being very frustrated, for example, by the stagnating gender and racial pay gap, you see that there's potential here actually to close the gap and to be more inclusive in hiring processes. And the question, again, is not whether AI is bias free, but whether it's outperforming the human system, the HR person who's doing sorting.
- Orly Lobel
Person
And in fact, the comparative advantage question also relates to this question of whether AI is really a black box. And again, our first speaker said she would deny the fact that it is a black box. There are ways, there are many advanced ways to audit and to see what's behind the recommending system. Whereas again, studying human behavior, we know that our tiny algorithm called brain is a black box, and it's very difficult, even for our own selves, to understand our unconscious biases and to debias.
- Orly Lobel
Person
So again, insisting on a comparative advantage. A second way that we need to be really careful in how we're thinking about AI when we want to legislate and regulate, is understanding the source of fails and not thinking about AI as static.
- Orly Lobel
Person
So I see a lot in my research, the kind of rehashing of the same stories of fail, for example, with facial recognition, you'll see an agency report that's citing decade old study about a facial recognition system that does better on certain demographics and worse in other demographics, that actually, if we understand the source of the fail and the bias is, that's a relatively easy correction of feeding more data to this system and creating more accuracy across the different recognition the demographics.
- Orly Lobel
Person
Some other sources of bias, of course, are not the ones that the technology can correct, because they're simply kind of being reflected by the AI, but they are sourced in deep societal inequities. And there too, we can't expect kind of everything from the technology. If our education systems are inequitable, we can't expect to now have an AI that removes all of early education at the point of higher education, and have a system that doesn't kind of mirror back our inequities. We need to address them.
- Orly Lobel
Person
But it won't necessarily be a technical solution, right? It won't be a technology fail or success in that sense. The third thing that I think is really important for our legislators to always keep in mind is scalability and consider scarcity. I think it's hubris, even when there's kind of this understanding of a comparative advantage.
- Orly Lobel
Person
To think, for example, of introducing automation in the medical field or the legal field, to say, well, if you took the two most competent litigators in the nation, or the two best radiologists in the nation, they're still outperforming this bot that is doing the work, this Chat GPT, or whatever it is that's creating a document.
- Orly Lobel
Person
We have to be very conscious and pragmatic and frank about who has access to these professionals, and we need to think about these costs and the ability to scale new technologies to create more inclusion. The fourth way that I think is really important to kind of insist as a policymaker on how to legislate AI is to avoid conflating the readiness and the safety of the technology itself and its impact and its societal impact. Both are important, but they're separate questions.
- Orly Lobel
Person
And the example that I would give is the most recent process that happened here in California, where there was this question about self driving autonomous trucks and the question about the safety of these vehicles on the road. Being autonomous is separate and different from the question of the loss of jobs. Both, again, are important, but I would urge always asking the question of what it is we are trying to prevent. And when we're talking about both kind of aspects together, we conflate them.
- Orly Lobel
Person
I think there's a lot of inconsistencies and harms that can be introduced by legislation. And finally, I think that we see a lot of these kind of assumptions about privacy. Again, this committee is very interested in, of course, in privacy as it should be. I think that there are many different definitions of privacy and kind of ways to create privacy.
- Orly Lobel
Person
We've heard already the term data minimization, but I think that in many ways, there needs to be more of a focus about misuses of information these days because these technologies are moving so fast, because of their such strong competencies, it's become more and more difficult to blind systems. So this is kind of what we call the input fallacy. It's hard to just tell them, don't consider different aspects, don't collect different information because of all the proxies that are collected and detected.
- Orly Lobel
Person
I think it's really important to look at how do we think about equities, in effect, in how they're making their recommendations and auditing kind of the outputs rather than simply the inputs. And also, I think it's a real opportunity right now to be insistent on not just having the language of privacy, which is a fundamental right in California and across the nation, but there is also a right to be counted, to be in our system, to be seen.
- Orly Lobel
Person
And I think that it's a moment where we can think about AI as an opportunity to correct a lot of missing data, a lot of skewed data, as I mentioned, for example, with facial recognition. But much more importantly, like years of missing data sets in health that don't count, for example, in clinical trial trials, different demographics that have been traditionally excluded.
- Orly Lobel
Person
So I think we need more language of using AI to both protect personal information and privacy and autonomy, but also to create more knowledge and to see and to include communities and individuals that have not been included. So I'll end here, but I'll just say that I think about regulation as a pyramid. It always has been a pyramid, but even more so with AI, where the technology is so new, changing, dynamic, complex, applied in all sectors.
- Orly Lobel
Person
I think it's very important, as we've heard today, to think about test beds and sandboxing and all these kind of experimentation terms, and not just kind of think in a binary. Let's either slow down or ban certain technologies, certain applications, versus kind of allow private industry to run free. Thanks.
- Rebecca Bauer-Kahan
Legislator
Thank you. And I appreciate that. I will say that one thing that I think we are very cognitive of is some of the data sets that you are discussing that don't contain the diversity that would deal with ideal outcomes comes from generations of mistreatment and harm that have caused those individuals to choose not to be part of, say, clinical trials.
- Rebecca Bauer-Kahan
Legislator
And that it is really critical for us to recognize that, to respect it, and to make sure that we are ensuring that people's rights to be included or not included are acknowledged from a place of where they're coming from.
- Rebecca Bauer-Kahan
Legislator
And some of that is from generations of trauma. With that, I will turn it over to Professor Friedler, who should be online. I see SF on the screen, so I think once you talk, we'll see you. Professor. Hi.
- Sorelle Friedler
Person
Hi, everyone. Can you see me and hear me okay?
- Rebecca Bauer-Kahan
Legislator
We can. Good to see you.
- Sorelle Friedler
Person
Okay, great. Thanks so much for having me here, and I'm sorry not to be there with you in person. I'm Sorelle Friedler. I'm a computer science Professor at Haverford College. And I'm the former Assistant Director for data and democracy at the White House office of Science and Technology Policy. I'm also a researcher in AI. I've been doing responsible AI research for about 10 years, and before that I was a California resident where I was a software engineer at Alphabet.
- Sorelle Friedler
Person
So I'm speaking from all of these perspectives. I think there should be some slides. There we go. Okay, great. So I wanted to start by just making it a little bit concrete what some of the current AI applications are that we're really talking about. So, a few from my own research. I've done research on accelerating the development of materials that can be used for solar cells.
- Sorelle Friedler
Person
Working with material scientists, I've also done work that tries to understand the ways that predictive policing systems fall short and just frankly, sometimes don't work right. If you go to the next slide, there are also now some interesting applications coming out of agriculture where we see the potential to really make food production more efficient, to better use the existing resources.
- Sorelle Friedler
Person
On the other hand, there are concerns that that might come at a cost, that, for example, an AI system that is not appropriately tested and appropriately trained to understand the environmental impacts of over fertilization will lead to environmental impacts and overuse of the land in a way that will, over the long term, be inefficient, if you go to the next slide. We're also seeing a lot of concern about and use of AI in employment, in hiring.
- Sorelle Friedler
Person
So, for example, there are pretty standard resume screening tools that try to identify resumes, for example, based on language in the resume that matches existing employees. And we've seen that the public is pretty uninterested in being screened for a job using these tools. These same types of impacts are already happening all over the place, across sector. Whatever sector you are most interested in, it is probably already being impacted by AI.
- Sorelle Friedler
Person
So, for example, in housing, we've seen facial recognition used to sort of gate entry to housing complexes. We've seen these systems used in education. So I think LA Unified is currently testing the use of chatbots as sort of a way to have school advisors that are not really advisors, they're chatbots. We've seen the use of this in criminal justice, both with predictive policing, but also with risk assessments to determine who's high risk and help to determine bail.
- Sorelle Friedler
Person
Within the government, we've seen the use of this to try to identify fraud for benefits allocation, in some cases in ways that have gone terribly wrong, so that people have been denied benefits that they actually wrote. And that's just a small taste of examples, just to give you a flavor of, really what we're talking about here. So if you go to the next slide, the obvious question is, what can you all do about it? And I would say that the answer is quite a lot.
- Sorelle Friedler
Person
So I wanted to give you sort of an overview of some of the tools, one that I was involved in creating and then some others that I just admire from afar. So if you'll go to the next slide, you'll see the basic principles from the AI Bill of Rights. And this came out of the White House office of Science and Technology Policy in October of 22.
- Sorelle Friedler
Person
And as other speakers today have described, there is really a lot of consensus about the principles that we should really be expecting of these technologies. So in our case, there were five, other speakers have described seven or nine. Right. But there's a lot of overlap between these principles. So the first is that you should be protected from unsafe or ineffective systems. I think of this really as the technology should work. The second is that you should have protection from algorithmic discrimination. There should be data privacy.
- Sorelle Friedler
Person
You should have notice and explanation. And this is especially when a system is being used in a way that directly impacts you. And finally, there should be human alternatives, consideration and fallback. And I see that last principle as really helping to support the other principles. Right. Because even if we give our best effort to try to make safe and effective systems, there are going to be some small percentage of times where they do fail. And so then the question is, what forms of human based redress?
- Sorelle Friedler
Person
What avenues for appeal do you have? And you should have such mechanisms in place. If you go to the next slide, you'll see that in addition to these five principles, when we released the AI Bill of Rights, we also released a much longer document. It's like a 70 plus page document, and in it, we try to really help policymakers and others think about these concerns in three ways.
- Sorelle Friedler
Person
So first, we give a lot of examples of places where AI is used and some of the problems that we've seen. So if you're interested in hearing more about any of the examples that I sort of mentioned, admittedly in passing. That's another place to find them. In the second section, we talk about what should be expected of automated systems. These are really a description of sort of common sense guardrails that can and should be put in place that align with each of the principles, right?
- Sorelle Friedler
Person
Essentially, how could we get there? And then finally we talk about ways that these principles are already being put into practice. Right? So these are examples of laws or policies or voluntary industry practices that have already been put in place that start to get at putting some of these protections into practice. Next slide.
- Sorelle Friedler
Person
So thinking about where you all should start, if I were going to make recommendations, I think that one question that I do think it's worth asking is whether there are any red lines that you have for AI use. Because of course, deciding on a yes or a no based on use can sometimes be easier than these sort of more complex regulations, right?
- Sorelle Friedler
Person
So I think it's worth thinking about, as the EU did, whether there are any uses that come under an unacceptable use category. But for the majority of uses, what I would say, which I think is also in consensus with everything that we've heard today, is that you should really focus on the impacts of the technologies and not on the technical details. Right.
- Sorelle Friedler
Person
It really does not make sense to care about whether somebody used a neural net or some other system and how many layers the neural net had. Anything like that is really beside the point. What we care about is how it actually impacts people. Right? How it impacts our rights, how it impacts our safety and so on. And so there's a few categories that I find it useful to think about those impacts in terms of.
- Sorelle Friedler
Person
The first is, of course, civil rights, thinking about discrimination, thinking about forms of redress, especially given that these are being deployed in employment and housing and education and criminal justice. It's important to really take that into account. The second area is thinking about safety and efficacy. Do these systems work? What does it mean from a consumer protection standpoint, if they don't? Of course, you all are all very well versed in concerns about privacy. There's also other sector specific concerns.
- Sorelle Friedler
Person
So we might think about what does this mean in healthcare? What does it mean in education? Are there extra protections that we might want specific to a sector? As you all in California have seen, there's also a lot of impact on workers.
- Sorelle Friedler
Person
We can think about this both in terms of the potential replacement, so the writer strike, but also in terms of the recent tech layoffs, some of which are being spurred by the desire of companies to be able to put more money into funding their AI systems as opposed to funding their workers. Right. And we can also think about the sort of humans who serve as humans in the loop for part of these systems, right.
- Sorelle Friedler
Person
The humans who are painstakingly labeling the data so that AI can be effective. And finally, I want to encourage you to think as well about the environmental impacts. I think that can sometimes get lost, but I think it's a really important part of the picture, especially as these systems get bigger. They take more and more energy and they take more and more water. So I wanted to make sure to raise that with you all today if you go to the next slide.
- Sorelle Friedler
Person
So, getting slightly more specific, what does this mean? This means that you could put in place provisions that prohibit algorithmic discrimination. You could require impact assessments, right. So these would be testing that could be done before deployment. Similarly, to ensure safety and efficacy, you could require duty of care provisions that make sure that systems are tested so that they're safe and effective before use. And of course, in an ongoing way. In terms of privacy. Obviously, you all have very strong privacy protections already.
- Sorelle Friedler
Person
I would encourage you to also think about what this looks like in terms of AI when it comes to AI based inferences and whether it makes sense to think about destruction of models or other sort of future protective remedies. For transparency, it's really important to have this notice and explanation to individuals and broader disclosures that helps to support accountability. I mentioned already the importance of human alternatives so that people can opt out but also appeal or receive other remedies from AI harms.
- Sorelle Friedler
Person
And then finally, of course, oversight and enforcement right, including regulation and potentially a private right of action. If you'll go to the next slide. There we go. So just quickly, I wanted to give you two models for what this has looked like at the federal level. So in addition to the AI Bill of Rights, of course, you all are familiar with the fact that the President released an AI Executive order.
- Sorelle Friedler
Person
In that Executive order, he directed the Office of Management and budget to create guidance for Federal Government use of AI. And that draft guidance came out this fall. So this is a brief snippet from that, and in it it provides much more specific instructions for what minimum bar protections need to be in place for Federal Government use of rights-impacting AI and safety-impacting AI. Those are the two categories that you'll see on the screen.
- Sorelle Friedler
Person
And broadly, you can think about rights-impacting AI as relating to civil rights, equal opportunities, access to critical resources like health care, right these are sort of the people focused concerns. And then safety-impacting AI includes, for example, climate and the environment, critical infrastructure and so on. And if you go to the next slide, one of the interesting things that they did is in addition to these definitions, they provided a list of purposes that are presumed to be rights impacting.
- Sorelle Friedler
Person
So again, this is a very impact of the technology focused mechanism for guidance, where there's essentially a long list of possible use cases. And any AI that comes under one of these possible use cases is then required to meet certain minimum expectations. So if you go to the next slide, I believe I have just a few of the minimal expectations. Right.
- Sorelle Friedler
Person
So it's obviously a much longer memo, but for example, it requires that the agency using AI incorporate feedback from affected groups, that they use representative data, that they proactively identify and remove, factors contributing to algorithmic exploration or bias and so on. And it requires that they do this or stop using the AI until it becomes compliant.
- Sorelle Friedler
Person
And I think it's also really useful to note that OMB is doing this and will be putting it in place as a requirement because to the earlier concerns and questions about how to avoid a fragmented landscape, it does look like OMB is moving ahead with this. And so it's sort of useful to keep potential alignment with that in mind. If you go to the next slide, there's also an interesting model of what this could look like at the federal level coming out of the Lawyers Committee.
- Sorelle Friedler
Person
And again, there they take an impact based approach to thinking about the coverage. So there's a consequential action definition, which of course, also bears some resemblance to AB 331. And you'll see that it lists employment and education and housing and all of these other important sectors. And then finally, also from on the next slide. Yes, thank you.
- Sorelle Friedler
Person
Also from this model legislation, they include a duty of care provision that gets at some of these safety and efficacy concerns and requires preemptive testing in a way that I think is a really useful model. So if you go to the last slide. Thank you. So that's just a few places in case you're interested in learning more from these various resources. Thanks so much.
- Rebecca Bauer-Kahan
Legislator
Thank you. I have a few questions. We'll start with Professor Farid. One of the things that is really hot in the Legislature right now is the conversation around watermarking. So I wanted you to touch on that. Can you talk about watermarking strategies now? How effective are they, and is this something we should be thinking about?
- Hany Farid
Person
You should definitely be thinking about it, but I don't think it's a silver bullet. So let's talk about detecting AI generator manipulated content. There's two broad categories, what we call proactive and reactive. So reactive is you wait until something gets online, reporter finds it, they send it to somebody like me, we analyze it and 24 hours later we set the record straight. Meanwhile the stock market has dropped half $1.0 trillion or voters have not gone out and voted.
- Hany Farid
Person
And so it's more or less a post mortem at that point. For things that move very fast on social media it's important we got to set the record straight. Extremely useful in courts of law where we have time, but it's not going to respond in that split second that you need when things are spreading online. The proactive techniques of which watermarking falls into the category can, with some caveats which I'll describe, respond very quickly.
- Hany Farid
Person
So importantly there are two technologies to talk about here and often the second one gets dropped and it's really important. So what is a watermark? A watermark is exactly what it sounds like. If you look at a piece of currency you inject something into a piece of content. In this case it's digital in order to identify it downstream. That can be done with generative AI content and it can be done with real content.
- Hany Farid
Person
So the C2 PA standard is the open source Linux Foundation multi-stakeholder standard that has been developed and for full disclosure, I'm part of that organization. It's a not for profit organization and they are setting a standard for what these watermarks, how they can be implemented. But importantly, what we know from the last 20 years of trying to protect copyright is the watermarks are going to get ripped out.
- Hany Farid
Person
If you put a digital signal into something, somebody's going to figure out how to get the digital signal out and we know this, it's going to get attacked. And so a sophisticated adversary is going to figure that out. So the second technology you need is what's called fingerprinting. So whereas watermark injects something into the content, fingerprinting extracts a distinct signature from a piece of content and that is stored server side securely.
- Hany Farid
Person
And what that means is that if you have an asset with a watermark that goes out in the wild and somebody puts it up in their browser and the watermark is intact, you say this is AI generated or this is real done, but if it's not there then you can still query back to the service and say was the watermark ripped out from this? And then you can reattach the credential with the signature that is stored server side.
- Hany Farid
Person
It's a more complicated system because there's a communication that's required. And if you think about an OpenAI that is generating how many images a day, tens of millions, it's a big lift. So we have to build the infrastructure for that. So I think it's an important technology around things like frauds and scams and disinformation. But understand that it doesn't do anything for the non consensual sexual imagery because once that content is out there, it's game over.
- Hany Farid
Person
It's not a question of whether it's real or not, it's the harm of the content. The second thing to understand is that sure you'll get the open AIs, the Microsoft's, the Google's and some of the big tech companies to comply with that C2 PA standard it's fine. But there are hundreds of organizations outside of the US Open source that are not going to comply. So that doesn't mean you shouldn't do it, you should, but it's not that hey, let's just do this.
- Hany Farid
Person
And we've solved the problem with generative AI. I think it's part of a solution, but I don't think it's all the solution.
- Rebecca Bauer-Kahan
Legislator
And so going back to what you said earlier around the choke points. Yeah right. So if the choke point here, as I would think about it, would be the dissemination point versus the creation point. And so putting the obligation on the disseminator to ensure what is being put out there is real, which I think is good for consumers because it builds trust, which as you point out is really critical I think would be the smart way of doing it. Thoughts on that?
- Hany Farid
Person
And I don't, look, there's nothing fundamentally wrong with a fake image. I love the videos of Tom Cruise, TikTok. They're fantastic and I encourage more of them. So it's not so much about what you publish has to be real, it's that you have to have transparency. And the way I think about this is as a nutrition label. I go to the grocery store and I can buy all kinds of junk food.
- Hany Farid
Person
Right, but the government has said that if you are going to buy potato chips then you need to know what you're putting into your body. It's not saying you can or can't do it and we can choose to tax it more or not, but it's a nutrition label. And I think information should have a nutrition label that this is not a real image of President Biden, this is not a real video and not a real audio.
- Hany Farid
Person
And I 100% agree with you that if you really want to ask, where is the harm? I really do think it's the distribution channels and that we have all fallen asleep at the wheel, at. At the state level and at the federal level around dealing with social media.
- Hany Farid
Person
And we should not forget that they are escaping a lot of scrutiny these days because we're all talking about AI, but that is very much part of the problem, and particularly, by the way, around non consensual sexual imagery, which is just spreading like wildfire on platforms like Facebook, Instagram and Twitter.
- Rebecca Bauer-Kahan
Legislator
Yeah. Thank you. I appreciate. So I guess the follow up to that, or maybe I should ask this first. It seems like right now, proactively, there's not a way for consumers to know what is okay.
- Hany Farid
Person
No, this would be no. And I get this question all the time. How do I protect myself from a phone call or what I watch online? And my answer is, you can't. You just can't. The stuff is moving way too fast. You're ingesting content way too fast. And whatever I tell you today to look for or to listen for six months from now is going to be a solved problem. Right.
- Hany Farid
Person
It used to be the number of fingers on the hands and the AI generated images, and now that's a solved problem. And so consumers are really struggling to get reliable information. And the fact is that the majority of Americans now, for better or worse, mostly worse, get the majority of their news from social media. You can't have a stable society or democracy if people are getting their news from Facebook and Twitter. Got to help us all, honestly.
- Rebecca Bauer-Kahan
Legislator
Yep. Okay. Well, Professor, on that note, Professor Friedler. So I think, I really appreciate you showing us sort of what is happening at the federal level. I think there's some incredible thought going into what is coming out of OSTP and the White House and the EO, et cetera. So I think it's really helpful to think of that. One of the questions I get a lot, and I want to put it on, you know, what is the appropriate role of us as a state Legislature?
- Rebecca Bauer-Kahan
Legislator
And we heard, I think, an important conversation around consistency. And I always say I would love for Congress to act. I think that is our ideal situation we have in an effective Congress right now. So that's not going to happen. So barring that, where do you see us as California fitting into building this responsible AI that you spoke of?
- Sorelle Friedler
Person
I mean, I think know it makes sense for you all as the home of many of these tech companies. Right. And also as a Legislator that has history innovating in this way to go ahead and move forward. Right? I don't think that you need to wait for the Federal Government and Congress to act in order to protect Californians and by the way, the rest of us from these.
- Sorelle Friedler
Person
And, you know, I also think that, as I mentioned, I think that there are some other issues that are especially heightened in California. Right. When you think about labor in terms of the writer strike and the recent layoffs and also the environmental impact of these systems,
- Rebecca Bauer-Kahan
Legislator
I appreciate you bringing up the environmental impacts. One of the things that we have talked about is under our right to privacy, you have the right to delete, right. That's a fundamental principle built into the California Privacy Initiative. And yet the idea that you would delete from these large language models would require, I mean, you'd have to rebuild them, if anything, which the energy required to do that. And the environmental impact, what does that right mean anymore?
- Rebecca Bauer-Kahan
Legislator
So I think the environmental impact is an important part of the consideration as we think about how to even think about the things we believe Californians already have a right to. And as the former water chair, who knows how desperately we need California's water, obviously that's critical. So I appreciate that. And then as we start to go on this journey, and I've sat on this Committee for five years, and our prior chair, who is now on the bench, tried to move AI policy four years ago.
- Rebecca Bauer-Kahan
Legislator
I'm trying to remember four or five years ago. So this has been a conversation we've been having for a long time. Obviously, the technology has evolved, but to Professor Nonnecke's point, it's been around much longer than ChatGPT. I think we're sort of trying down this path. I think the definition is critical. It's why that was a huge part of the conversation today. But I'm just curious, Professor, your thoughts on what roadblocks are ahead? How do we sort of see what's coming in front of us?
- Sorelle Friedler
Person
Yeah. So I think that by taking an impact based right, by taking a rights based approach to thinking about what protections you want to put in place, I think that that both targets the interventions where they're needed, right where people are currently being harmed. But I think it also has the effect of helping to future proof some of the legislation.
- Sorelle Friedler
Person
For example, if you look at the work that the EEOC has done, they have recently been putting out guidance that makes it clear that their existing civil rights law still applies in the case of AI, right, That's a case where the existing law was written, obviously, long before these modern AI systems, but they're still able to look at that and say, okay, what are these rights protections and how can we interpret them?
- Sorelle Friedler
Person
Now, I think that there are some updates that need to be made for civil rights and to protect people from harm from these AI systems that are more recent. But I think that we can still figure out how to craft those in a way that are long lasting and technology neutral so that as new systems are developed, which they will be, these laws are still relevant.
- Sorelle Friedler
Person
Yeah, I think it's interesting because often when we think about the things we most need to do, I think in the AI space, it's things that we've already agreed on for decades were principles and protections that Californians have. And the question is, how do we extend those now that it's not people making the decisions of doing the acts, it's artificial intelligence, to your point.
- Sorelle Friedler
Person
So there's much work to be done there, although I am of the position that a lot of the laws apply despite the actor. So I will say that from my own perspective. So I think that the fundamental balance that we hope to achieve here in California is we are the home of innovation. We are so proud of the companies that come out of California and what they've done, and they've done so much good and they've done so much harm. Right?
- Rebecca Bauer-Kahan
Legislator
So both are true today of the innovations that have come out of this state. And I think that we hope that we can learn from those mistakes of the past that have allowed the harms to come to pass. Putting the genie back in the bottle on a lot of what has happened, be it the mental health crisis of our youth or the attacks on our democracy coming out of misinformation on social media.
- Rebecca Bauer-Kahan
Legislator
We're trying our best and we're failing to put that genie back in the bottle. And so as we face AI, I think it is my perspective that we need to learn from that, and we need to make sure we're protecting Californians. We also want to continue to allow for innovation. I think that many of the speakers today, if not all of you, have talked about the incredible benefits. Right?
- Rebecca Bauer-Kahan
Legislator
I mean, I just went and got my annual mammogram, and it is being read in a smarter way than it was a decade ago. They're going to catch cancer earlier. That is amazing. Right?
- Rebecca Bauer-Kahan
Legislator
So there's no question that there are risks to AI and there are incredible benefits to society. And so I think the real thing that we are grappling with is how do we balance that? And I think what I have learned, and I think what was highlighted here today was that really taking that risk approach that we saw coming out of the EU, that I think is a part of the blueprint that came out of the White House is really one way to do that.
- Rebecca Bauer-Kahan
Legislator
But I'm wondering if any of you have additional thoughts on really how we strike that balance. Well, Professor, I don't know if you want to go first.
- Sorelle Friedler
Person
Well, sure, I'll jump in quickly. I think. I would really say that. I think that sometimes it is presented as though innovation stands opposed to guardrails, and I just don't believe that to be true. Right. I'm a responsible AI researcher. It's all about innovation in exactly this space. There are a lot of startups that are doing that type of work. Right. I think that right now, a lot of AI companies are simply focused on making bigger AI.
- Sorelle Friedler
Person
I think that there's the chance that putting in place reasonable guardrails could help redirect some of that focus to making better AI. Right. I don't think it needs to put a stop to innovation. I think it redirects innovation in ways that I think are frankly fascinating. Right. And exciting. And so I guess I would encourage you all to think about it in that vein. Right. What is not being created right now because people aren't feeling the pressure to create AI that is better.
- Rebecca Bauer-Kahan
Legislator
I love that. Anybody else have anything to add on this question?
- Orly Lobel
Person
Yeah, well, I agree with all of this, but I think that an emphasis on infrastructure, on thinking about new ways in which competition might be impeded. So big versus small, open source versus proprietary, we haven't even talked about the kind of IP aspect, but really investing not simply on the development of the technology itself, but who is developing it. Having everybody involved, having children and adults be more skilled and investing in digital literacy will really create more of that.
- Orly Lobel
Person
I think balance and frank conversation about how we can get more of the good and mitigate the harms.
- Hany Farid
Person
I think we can learn something from the physical world here. I'm old enough to remember in the 1970s there were no seatbelts and airbags and the brakes. And when Ralph Nader was pushing for more safety in the cars, what did the auto manufacturers say? This will kill auto manufacturing in the US. China is going to destroy us. And it was a big fat lie because safety is actually good. We want safe products. And this idea somehow that it's at odds, as Professor Friedler was saying.
- Hany Farid
Person
I think she said it very nicely with innovation is simply a lie. It's simply a lie to move fast and break things. And I think you're right. Let's learn from the mistake of the last 20 years and let's do better for the next 20 years.
- Rebecca Bauer-Kahan
Legislator
Yeah. No, I think our market is healthier when consumers trust it, which I think.
- Hany Farid
Person
And when it doesn't kill us. Yeah, I'm just saying.
- Rebecca Bauer-Kahan
Legislator
No, it's true. And I do think, I think the point about competition is a really important one and one I've been focused on. As a former regulatory lawyer myself, who worked on compliance programs in the private sector, one of my fears around sort of intense regulation is when you make it too expensive, only the big players can play. And we do need to make sure that startups are able to be healthy in this environment, because that competition is really, really healthy.
- Hany Farid
Person
And we should be talking about antitrust, because in order to compete in AI, you need data and computing power. Who has that? Five big tech giants. So they have a phenomenal advantage in the AI war right now. And we should absolutely be making room for the smaller companies to innovate here.
- Hany Farid
Person
And by the way, when Google tells you that government regulation is bad, would you please remind them that when Microsoft was kicking their butt up and down the street using their market dominance, they went crying to the Department of Justice telling them to intervene. And they did. And that's why we have Google. You can't have it both ways.
- Hany Farid
Person
You can't have been created with the government making sure there's a healthy ecosystem and then complain when we're trying to do the same thing for the next small player.
- Hany Farid
Person
That was for you, Google.
- Rebecca Bauer-Kahan
Legislator
I love that.
- Rebecca Bauer-Kahan
Legislator
They're in the room. Don't worry. So I appreciate that. I think that one of the things that is interesting about drafting this law and the one that I have proposed does draw a distinction between developers and deployers. Right. And there are different roles being played by different entities in this space. Right.
- Rebecca Bauer-Kahan
Legislator
And I just learned about the folks that sell the data sets, which I didn't even understand was potentially a different actor in the space. So we have people who sell the data sets, the people who train the models, the people who deploy the models. And so I think as we think about how to regulate this, each of those entities need to be thought about individually. And I wondered if any of you had thoughts on sort of where the best way and how to think about that.
- Hany Farid
Person
I think that's the right breakdown. So let me just give you a couple of examples where we've seen problems. The LAION-5B data set, for example, which is used to train all the generative AI images was recently we found to have thousands and thousands of child sexual abuse images in them, and they had to pull that data set. I don't think somebody who licensed that and trained a model has liability for that. I think it's the people who created the data set.
- Hany Farid
Person
I don't think somebody who deployed a model that has been trained on that data set has liability. So I think that's exactly right. There's the creators of the data set, and they're also the ones who are violating all the copyright infringement too. They own that. And then it's the model training and then it's the deployment. So I think that's exactly right. But you do have to treat them quite differently because they're in different parts of the pipeline.
- Rebecca Bauer-Kahan
Legislator
Okay. Anything to add, Professor Friedler, to that?
- Sorelle Friedler
Person
Yeah, I would also just say, know, I think that when we're thinking about these high risk areas, we've been talking about how important it is not to just focus on generative AI. And I absolutely think that's true. But when we're thinking about generative AI, one of the things that companies will sometimes say is, oh, well, we can't even know if we're going to have problems with this impact or that impact. Right. And that's just not true. Right. We can, of course, test these systems.
- Sorelle Friedler
Person
So, for example, we could test them for medical misinformation. And so I think that for me, a lot of this comes back to that question of a duty of care, making sure that these products have actually been tested before they're released. And I think that there's going to be slight distinctions whether that's a developer or a deployer for what type of testing is expected. But I think that there should be testing expected.
- Rebecca Bauer-Kahan
Legislator
Thank you. Well, I really appreciate it. And I really appreciate sort of having academia represented here. I just was sitting with artists in my office who were teaching me about night shade. And so to your point about the computing power and the ability to really compete and create AI for good, we know our universities do have this capacity and they are sort of a force for good in all of this.
- Rebecca Bauer-Kahan
Legislator
So I really appreciate you guys being here, being so open and honest, putting some of our tech companies on check and, yeah, thank you all with that. We will conclude the hearing, but we will go to not conclude the hearing, conclude the panel, and go to public comment. So if anybody wants to join us to provide public comment, we will welcome that at this time. Come on up. Thank you all for joining us. That was awesome.
- Rebecca Bauer-Kahan
Legislator
My mic's off, so make sure name organization when you speak. Thank you. Try now.
- Ivan Fernandez
Person
Hello. Okay. Hi. Ivan Fernandez, California Labor Federation. Thank you, Madam Chair, for this important hearing. As outlined in today's testimonies, the advancements of AI will have profound impacts across all sectors and industries. As the Legislature works to regulate this technology, we urge Members to consider the importance of forming policies that have a human based and worker centered approach. The benefits or the harm of AI on workers and the public depend solely on the guardrails put in place regarding AI's development, its procurement, and its use.
- Ivan Fernandez
Person
Workers must be at the forefront of the conversation surrounding AI at every stage, not just once the technology is fully developed and used by employers. Being in the loop is not sufficient when AI has the capability of indiscriminately displacing millions of workers over a short period of time. From public sector workers serving our state to artists in the entertainment industry, every worker will be affected by AI. And we need to protect our workers and ensure industries use AI as a tool, not a replacement.
- Ivan Fernandez
Person
As a tool, AI can increase productivity and safety for workers. But AI should not replace human decision making. There are some decisions that are far too important to be made by AI alone. Skilled human workers must stay at the center of this crucial decision making. We are at a key point in time where we have the chance to regulate this technology for the public.
- Ivan Fernandez
Person
We cannot leave workers behind, and we must take advantage of the opportunity to create guardrails for the benefit of millions of working class Californians. Thank you so much.
- Rebecca Bauer-Kahan
Legislator
Thank you.
- Samantha Gordon
Person
Good afternoon, Chair and staff of the Committee. Nice to see all of you. My name is Samantha Gordon. I'm the chief program officer at TechEquity Collaborative. And our mission is to raise public consciousness about economic equity issues that result from the tech industry's products and practices and advocate for change that ensures tech's evolution benefits everyone. So thank you for your leadership in this space and for bringing this conversation together today.
- Samantha Gordon
Person
Many of the comments actually made by Professor Nonnecke reflect our vision for how best to regulate this technology. And I would underscore the conversation that was just happening from Professor Friedler and Professor Farid. I was at a panel recently where someone from industry who's an advocate of guardrails for this said, you know Ferrari didn't put brakes on a car so that it never moved. They put brakes on the car so it could go fast.
- Samantha Gordon
Person
And I would say that that's our vision as well, is thinking about how do we really center people in the development, design, and deployment and oversight of this technology allows us the opportunity to innovate in a way that is productive for everyone, not just for a handful of companies.
- Samantha Gordon
Person
And so a couple key points that we would encourage the Committee and all legislators to consider is that people who are impacted by AI, they must have that agency to shape the technology that dictates their access to critical needs like employment, housing, healthcare, so on. The second is burden of proof must lie with the developers. I would add to your list the vendors.
- Samantha Gordon
Person
So not just the people that buy the data, but also that package the model and send it and sell it to the deployers to demonstrate that their tools do not create harm prior to releasing them on the market, and that regulators as well as private citizens should be empowered to hold them accountable. And third, is really talking about some folks got into this around the compute power question, but that concentrated power and also data asymmetries have to be addressed in order to effectively regulate the technology.
- Samantha Gordon
Person
So, like I said, we believe the technology is most useful when it's designed based on the needs of real people. And as advocates for the responsible use of technology, we don't believe that it's feasible to completely stop the development of technology. However, we believe that shaping an affirmative vision for technology that centers humans and ensures that we are protecting their existing and future rights.
- Samantha Gordon
Person
We must also set clear guardrails and put regulations in place, in particular to deal with the unacceptable uses of technology that have a large potential for harm. And in those cases, in some of those cases, that technology we believe should be prohibited, as was recently done in the EU AI act. And we encourage the Committee to consider these principles to strengthen our resilience against products and practices that could further exacerbate problems in our communities.
- Samantha Gordon
Person
And we believe that strengthening our resolve against harmful uses of technology allows us to open up to new opportunities and approaches that are reliable, safe and beneficial for all Californians. Thank you.
- Rebecca Bauer-Kahan
Legislator
Thank you.
- Julian Canete
Person
Thank you, Madam Chair. Julian Cañete, California Hispanic Chambers of Commerce. First, let's just say we are encouraged by the fact of holding these type of briefings to both educate the Legislature and other members of the public and et cetera of what we need to be doing and how to approach this regulation of AI. I do want to start about the efforts to regulate AI.
- Julian Canete
Person
As expected, as we discussed, the Legislature has already introduced over 20 bills addressing the issue and we expect Governor Newsom and of course, the CPPA is about to announce its AI regulations as well. We encourage that there is collaboration. As we discussed during the second panel, we fully encourage for collaboration and coordination between the Legislature and the CPPA. And we encourage this because we want to avoid any issues or conflicts that could adversely affect a small business here in California.
- Julian Canete
Person
We do want to paint a little picture here about from where small businesses are standing. If CPPA adopts or an opt out regulation this spring and mandates compliance by October of 2024, and then the Legislature passes a bill on opt out conflicting with CPPA in 2024, effective in 2025. Now what happens then is we have regulatory issues that are likely to eliminate small business or hurt small business because they do not have the resources to comply with the multiple and potential conflicting laws.
- Julian Canete
Person
And this is something we hope to work with you all and avoid in the light of California's significant budget deficit. Now more than ever, we need small businesses to be able to thrive and grow here in California. I do want to close with a quote, in part from Governor Newsom's small business proclamation. California's small businesses account for over 99% of total small businesses of all businesses in the state and employ more than 7 million people, nearly half the state's private sector workforce.
- Julian Canete
Person
Our small businesses are global leaders in innovation and economic competitiveness and embody the entrepreneur spirit that drives the economy of the Golden State. It is our hope that these small business statistics continue to grow and do not shrink due to us not approaching the regulation of AI in improper fashion. I thank you. We look forward to continue working with you as well as be a resource as we move forward. Thank you.
- Rebecca Bauer-Kahan
Legislator
Thank you.
- Leora Gershenzon
Person
Madam Chair, Members- staff. First, thank you very much for the hearing and for the great background paper. I'm Leora Gershenzon. I'm the policy director for CITED. It is the California Initiative for Technology and Democracy. It is a project of California Common Cause. California Common Cause created CITED to protect our elections from the perfect storm of generative AI turbocharged disinformation spread through targeted social media. CITED uses a nonpartisan, interdisciplinary, and pro innovative approach to reform.
- Leora Gershenzon
Person
While disinformation is nothing new, I'm sure the earliest know happening in ancient Greece or the first contested election in the United States used disinformation. But today, as you've heard, anyone from foreign states to online trolls can instantly create perfectly realistic deepfakes, whether images or video of audio, and they can deliver them to millions of people instantaneously and for almost no cost. When they are intended to fraudulently sway our elections, they can and must be restricted in some fashion.
- Leora Gershenzon
Person
We look forward to working with you to protect us from the onslaught of AI generated, deceptive content that will try to sway our elections and undermine our democracy. Thank you again for the hearing. It was really, really useful.
- Rebecca Bauer-Kahan
Legislator
Thank you.
- Vinhcent Le
Person
Hi there, Madam Chair. This is Vinhcent Le from the Greenlining Institute. And I'll keep my comments quite short. I think The Greenlining Institute sees a lot of need for requiring risk assessments and don't really see this as an impediment to innovation whatsoever. Right. This is the due diligence that you need to do when training your staff to make decisions or training your AI to make decisions. So I think that's an interesting angle to say that this is going to hurt small businesses.
- Vinhcent Le
Person
Moreover, I think a lot of the regulations that are proposed or the legislation proposed accept small businesses. Right. It looks at the number of employees, it looks at the revenue. So I do really think this is really not impacting small businesses and doing action on these matters really help small businesses, helps Californians that are impacted by these systems when they're trying to get loans and whatnot.
- Vinhcent Le
Person
And my final point is, there is a real need to make sure the definitions are right, as we've heard all through today. And I worked with the former Chair Chao on a lot of the definitions that eventually made into AB 302. And what we noticed quite a bit were that a lot of industry folks would come to us and say, don't apply this regulation where there is a human with meaningful control. And I see this to some extent with controlling factor, this type of language.
- Vinhcent Le
Person
And I really think those things need to be properly defined so that they don't become an easy loophole that can be taken advantage of when you simply have a human that has automation bias clicking yes to whatever the recommendation puts out. And they're like, well, this isn't a controlling factor. The human controls it. I think that's what we really need to avoid in a lot of the regulations and the legislation that we're developing here, and there's ways to improve that language. And a greenlining is always available to help flesh that out. Thank you.
- Rebecca Bauer-Kahan
Legislator
Thank you. Thank you all for being here and for participating in the hearing. I'll add that anyone can provide comment on our portal or through the Committee's email, which you can find on the Committee for Privacy and Consumer Protection website. Thank you all. With that, we will be adjourned.
No Bills Identified