Senate Standing Committee on Governmental Organization
- Bill Dodd
Person
Welcome to "California at the Forefront: Steering AI Towards Ethical Horizons." This is a joint informational hearing of the Senate Governmental Organization Committee and Senate Budget Subcommittee No. 4 on State Administration and General Government. Today, we'll be discussing artificial intelligence and how we as policymakers should proceed while new technological capabilities are quickly becoming available to be integrated into and utilized by the public sector. Everyone here knows this, but it's always nice to repeat it.
- Bill Dodd
Person
California has long been at the forefront of global advancements in education, research and innovation. And now each one of those fields is fueling the fire of a new kind of technological revolution. A revolution that is sure to impact every aspect of our lives.
- Bill Dodd
Person
Much like the creation of the printing press, the dawning of the industrial revolution, and the invention of the computer and later the internet once did, artificial intelligence is emerging as the world's most significant technological advancement of our time, quickly transforming industries and individual Californians' daily lives. We see this every day as our great state has once again established itself as a world leader in AI and generative artificial intelligence.
- Bill Dodd
Person
California is home to 35 of the globe's top 50 AI firms and a quarter of all AI patents and conference papers. However, the growth and integration of AI bring significant challenges and risks to every one of us. Just as humans have explicit and implicit biases, AI has the capacity to act as a mirror, reflecting and amplifying those biases where the technology is implemented without proper guardrails or safety precautions.
- Bill Dodd
Person
For these reasons and more, last September, Governor Newsom issued an executive order addressing AI in order to prepare California for the progress of these technologies. The governor's executive order announcement referenced the order's focus on shaping the future of ethical, transparent, and trustworthy AI, while remaining the world's artificial intelligence leader. This morning is our opportunity to discuss artificial intelligence and the executive order, which addresses some of the most pressing issues of our time, occurring right here in our own backyard and in our districts across the state.
- Bill Dodd
Person
We will be hearing from globally recognized, California-based academic experts in AI and technology policy, from a globally recognized California company with leading AI technologies, and from our own leaders in California state departments working to implement the governor's executive order. I look forward to discussing some of the most exciting applications that AI has to offer the state while we continue to appropriately manage and discuss risks. And I'd like to invite our Co-Chair, Senator Padilla, to make his opening remarks. Thank you.
- Steve Padilla
Legislator
Thank you, Chairman Dodd. First, I want to thank you for your leadership and collaboration in putting this joint informational hearing together. I think it's going to be very informative and valuable. I think we all recognize, from those who are engineers to those of us who are mere consumers and laypeople attempting to understand and make good public policy, that the evolution and application of these algorithms, of these deployment models, is revolutionary.
- Steve Padilla
Legislator
And it is probably the most revolutionary recent development, with incredible potential to impact global conditions, since the dawn of the digital age. Frankly, it has incredible potential that has clearly been demonstrated in improving efficiencies and outcomes, and it has truly global reach. It is unique in its nature in that it is already deployed across many sectors, in things that we aren't really aware of or think about in our daily lives.
- Steve Padilla
Legislator
It is truly capable of real time deployment and application on a state, community, national and international level. And hence, there are conversations in the European Union, in other parts of the globe, in Washington, with this administration, and in many states. You don't need to look very far to see the incredible number of bills in the California Legislature that have been introduced in this session that attempt to both understand and find an effective place for the state, an effective role for the state in this conversation.
- Steve Padilla
Legislator
This has incredible potential, but it also comes with incredible risks. Risks that revolve around inequities in access and disproportionate impacts, adversarial and otherwise, on communities least able to be heard or to have influence in the development of guardrails, standards, or regulatory frameworks. It carries incredible risk with regard to the disclosure of sensitive information, national security, personal privacy, intellectual property, and so on. And it is rapidly growing and evolving every day.
- Steve Padilla
Legislator
So much so that every day that goes by, those of us in the world of policy and government are already behind. There are great opportunities and there are great vulnerabilities.
- Steve Padilla
Legislator
And as one of the globe's largest purchasers of technology and applications, the State of California has an obligation and a role to continue to be a leader in this space, much as so many of our great think tanks, educational institutions, and revolutionary technology development interests, many of whom are represented here today, have always been at the forefront, as Chairman Dodd mentioned, of this technology.
- Steve Padilla
Legislator
So the purpose of the hearing today is to make sure that we don't constrain that innovation and development, but also that we have an intelligent and thoughtful understanding of what the great risks are and make sure that we can avoid them in a reasonable way without stifling innovation. And with that, I look very much forward to hearing from our distinguished panelists. Thank you, Chairman Dodd.
- Bill Dodd
Person
Thank you, Chairman Padilla. At this point in time, I'm going to ask members seated at the dais if they want to make any comments. Recognizing that we have some very important people in the audience who are going to be presenting to us, and they've got time limitations, I'm going to start. The Judiciary Committee chair, Senator Umberg, asked to participate today, and both the Chairman and I thought that was appropriate. So would you like to make a few comments?
- Thomas Umberg
Legislator
Thank you, Senator. Thank you, Senator Padilla, for allowing me to sit in today. This is an incredibly important issue, both policy-wise and with respect to its legal implications. The technology surrounding artificial intelligence is a challenge. It's a challenge for us policy-wise when these bills come to the Judiciary Committee; we have jurisdiction over artificial intelligence legislation for the most part. Fortunately, I'm an expert in artificial intelligence and technology.
- Thomas Umberg
Legislator
For those of you watching at home, most of the audience is now smiling. But the legal implications are huge. For example, how do we resolve copyright issues, patent issues, trademark infringement? How do they apply to AI creations? Is it clear who owns the content when generative AI creates it?
- Thomas Umberg
Legislator
Before we embrace the benefits of generative AI, the Legislature, businesses, and other government entities need to understand the risks, how to protect themselves, and how to be transparent, so that we have a full understanding of the implications of artificial intelligence, or at least as full as we can have today. We've got a lot of work before us, and I'm grateful again to Senator Padilla and Senator Dodd for embracing this challenge. Gentlemen, thank you.
- Bill Dodd
Person
Thank you very much, Senator. Anybody else? Senator Archuleta?
- Bob Archuleta
Legislator
Yes, thank you, Mr. Chair, both chairs. As the chair of the Military and Veterans Affairs Committee, my concern obviously is security when it comes to CalVet and all the information it holds on its veterans, their mental and physical health histories, and also within the VA and our military. It's so important that security be the top priority in all the AI work that we do. And I'm so honored to be here.
- Bob Archuleta
Legislator
But more importantly, you're all here to understand how important this is to the future of California and to the men and women who work every day to defend our country and serve in uniform. We've got to protect their privacy. So with that, thank you, Mr. Chair.
- Bill Dodd
Person
Thank you. Anybody else? Seeing none.
- Bill Dodd
Person
We'll now move to our first panel. I'd like to invite the participants to the microphones and ask you to please introduce yourselves and proceed when ready. First, Michael Karanicolas from UCLA and Daniel Ho from Stanford. Thank you both for taking time out of your busy schedules to come here today and address this informational hearing. Senator Padilla, I guess they're going to.
- Steve Padilla
Legislator
Yeah, they'll present first, I think.
- Michael Karanicolas
Person
Thanks very much for this opportunity. My name is Michael Karanicolas. I'm the Executive Director of the UCLA Institute for Technology, Law and Policy and an affiliated fellow with the Yale Information Society Project. The Institute was established in 2020 as a collaboration between the UCLA School of Law and the Samueli School of Engineering.
- Michael Karanicolas
Person
And as our inaugural Executive Director, I established our mission: to foster research and analysis to ensure new technologies are developed, implemented, and regulated in ways that are socially beneficial, equitable, and accountable. On the law side of campus, that means bringing law, public policy, and engineering expertise to bear on emerging technology policy challenges. On the engineering side of campus, that is mostly focused on developing ethics curricula for computer science and engineering students, though we also do some pure computer science work related to AI misinformation.
- Michael Karanicolas
Person
The project that I'm working on that's most relevant here is with the Administrative Conference of the United States: a year-long project that we kicked off in October to develop regulatory and administrative safeguards around the use of AI systems across the federal bureaucracy. I also want to emphasize that my own background is in law and policy, although I teach on the engineering side of campus and supervise researchers from the engineering graduate program.
- Michael Karanicolas
Person
I'm a co-PI on projects exploring machine learning applications with engineering and computer science collaborators. But my own background is not on the technical or engineering side, though I do have access to resources on that side of campus and would be delighted to follow up if there are specific technical questions we can be helpful with.
- Michael Karanicolas
Person
With that said, I was also told that it would be helpful to kick this off by offering a definition of AI. That can be tricky, especially as a result of the delta between expert and lay understandings of the term. AI systems encompass a number of different technical concepts, but are broadly unified by their aim of automation that approximates human capacity and by their ability to learn from experience.
- Michael Karanicolas
Person
Probably the best known type of system within this category is a machine learning system, which is a form of artificial intelligence algorithm that improves itself based on training data. The way the machine learns depends on the algorithmic makeup of the system. Deep learning and active learning are more advanced techniques in which a system learns how to learn, either with predetermined data sets, in the case of deep learning, or without those data sets, in the case of active learning.
- Michael Karanicolas
Person
Machine learning systems can essentially be understood as enormous statistical inference engines with the capacity to generate outputs from the analysis of large inputs of data. Importantly, the data-dependent nature of machine learning technology forms the basis of both the potentials and pitfalls of contemporary artificial intelligence. Many cite the transformative potential of these technologies, including in the public sector.
- Michael Karanicolas
Person
Innovation in the provision of public services is very welcome, and California should embrace the potential of these technologies as a force multiplier for an administrative state where resource constraints are a way of life. And there's also interesting potential to develop effective response structures to challenges like bias, since these systems can be recalibrated and altered more easily and effectively than a human decision maker can. However, the data dependent nature of these technologies means that almost by definition, they are backwards looking and resistant to progress.
- Michael Karanicolas
Person
Moreover, public support for these technologies depends not only on their performance, but on public confidence in their efficacy and fairness and their ability to maintain certain key aspects of procedural fairness. NIST's AI Risk Management Framework put this quite succinctly when it stated that trustworthiness is only as strong as its weakest characteristics. Despite low public understanding of AI systems,
- Michael Karanicolas
Person
they are being rolled out to perform a variety of tasks across different levels of government, including both in-house products and those developed by third-party contractors, each of which presents novel challenges to preserving fair and accountable government and maintaining public legitimacy. In addition to well documented challenges with accuracy, bias, and drift, there are also more general concerns about losing the human element in official decision making.
- Michael Karanicolas
Person
I'll conclude with a few thoughts about appropriate regulatory and government responses to these challenges. To start, regulatory attention to private sector companies that are experimenting in this space is welcome and necessary. However, I would suggest that the expectations with regard to things like transparency rules, ethics, and social responsibility should be higher in a public sector context than in a private sector context.
- Michael Karanicolas
Person
Due to the heightened impact of public sector uses on things like human rights, I think that initiatives toward transparency and disclosure of these uses are welcome and necessary as a first step toward getting our arms around this issue, but they are not sufficient as a solution.
- Michael Karanicolas
Person
I would also suggest that the breadth of these technologies, as well as inconsistencies in how they are defined and used, suggests that rather than seeking to boil the ocean, legislative and regulatory frameworks should be targeted and application specific in order to ensure that they adopt a posture which fits the potential use cases being considered. I welcome the notion of iteration in standards development, especially given the speed at which these technologies are progressing.
- Michael Karanicolas
Person
I would urge the Senate to consider the trade-offs inherent in some of the values being pursued: for example, between interpretability and privacy, or predictive accuracy versus interpretability, or privacy versus accuracy, potentially impacting fairness. There has also been significant emphasis on traditional procedural due process protections, such as transparency, reviewability, separation of functions, and provision of reasons for a decision.
- Michael Karanicolas
Person
There, I would suggest that rather than seeking to port over the same values that we expect from human decision makers, the question should be what underlying values we're aiming for in a robust administrative state and a robust system of due process, and whether and how automated processes are capable of fulfilling those same values. Thanks again for the opportunity to appear, and I very much look forward to this discussion.
- Daniel Ho
Person
Chair Dodd, Chair Padilla, Members of the committees, thanks for the opportunity to testify today. My name is Daniel Ho. I'm a Professor of Law, Political Science, and Computer Science at Stanford University. I'm a senior fellow at Stanford's Institute for Human-Centered AI, I serve on the National AI Advisory Committee, and I direct the Stanford RegLab, which works with a wide range of government agencies on AI demonstration projects. California is the innovation capital of the world. It's the fifth largest economy in the world.
- Daniel Ho
Person
It houses 35 of the top 50 AI companies, and its educational institutions are the envy of the world, from UC Berkeley to Stanford to Caltech. And since World War II, the nation's and California's partnership between government, universities, and private industry has catalyzed fundamental advances that gave us microchips, GPS systems, and the World Wide Web, each of which sits accessibly in your pocket today. Yet this postwar model of innovation is under threat when it comes to AI, for three reasons.
- Daniel Ho
Person
One, AI research has become so capital intensive that only a handful of private companies are at the frontier of the field. Vast amounts of computational power and data require huge capital investments. The innovation ecosystem has increasingly become more closed, concentrated, and opaque. That hurts science, accountability, and innovation. Two, governance of this technology, as called for by the governor's executive order and much of the proposed legislation before you, is fundamentally challenging because government lacks AI expertise.
- Daniel Ho
Person
Fewer than 1% of AI PhDs pursue public service, and government cannot govern AI if it does not understand AI. Three, many governmental systems still rely, in the words of one of your Assembly colleagues, on dinosaur technology. Some 46 million Americans turned to unemployment insurance during the pandemic, but 12 states, including California, still rely on a software language dating to 1959.
- Daniel Ho
Person
The system, as a result, buckled, going from a timeliness rate of 97% prior to the pandemic to just above 50%, right when people needed unemployment insurance the most. So my core message to you in my opening remarks is this: AI presents an extraordinary opportunity for the government and the people of California, but we must reject science fiction images of technology replacing humans. The best AI systems will support humans, not replace them. Let me give you one example.
- Daniel Ho
Person
During the pandemic, the Stanford RegLab, which I direct, worked extensively with Santa Clara County to use technology for COVID response. For example, we developed a system to help bilingual contact tracers connect with patients who needed language assistance. That simple system improved same-day contact tracing by 12%, helping to reach those hit hardest early in the pandemic. The AI tool didn't replace, but augmented, the pandemic's first responders. And this kind of innovation is what we need in spades, and it could modernize systems like that of unemployment insurance.
- Daniel Ho
Person
However, realizing these benefits requires responsible application, and California must lead the nation by example in responsible governance. When developed poorly, AI systems have the potential to wreak significant harms, from amplifying biases to spewing misinformation to eroding privacy. Just as California has led on consumer privacy, California can help to rebalance our innovation system. Let me offer three recommendations on how we get there. Three recommendations around people, infrastructure, and governance. First, California must nurture, develop, and attract technical talent into public service.
- Daniel Ho
Person
Most importantly, that means upskilling, namely, providing opportunities for California's great civil servants to learn about the potential and risks of AI. Similarly, California should follow federal models of how to bring other talent in, like the U.S. Digital Service, the U.S. Digital Corps, and the Presidential Innovation Fellows, to hire AI talent. California's great universities offer an extraordinary mechanism for upskilling civil servants and building a talent pipeline into government.
- Daniel Ho
Person
The governor's GenAI executive order mandates that agencies consult with UC Berkeley and Stanford, and Stanford HAI is excited to co-sponsor California's AI summit. But California should also follow the lead of the federal government, where agencies have built out academic partnerships to bring in expertise and develop pathways of public service for AI students.
- Daniel Ho
Person
Senator Padilla's call for a California AI research hub, and a call for a state talent exchange to help upskill and augment the civil service, are exactly the kinds of initiatives that can be of central importance in helping the government seize this moment. California has one of the highest, if not the highest, concentrations of AI talent in the world and is in the perfect position to lead the country in building nimble programs to strengthen the public sector at this moment.
- Daniel Ho
Person
Second, a major reason why AI has become so concentrated in a small number of tech firms lies in technology infrastructure. The federal government has responded by piloting the so-called National AI Research Resource, or NAIRR, to democratize access to computing and data resources, based on a proposal and white paper from Stanford HAI and a federal task force. California should lead the nation in these efforts.
- Daniel Ho
Person
Proposals for computing and data resources like CalCompute and the AI research hub could broaden access to a wider range of Californians who are increasingly finding themselves left out. Government has one major advantage to realign technology with human values: the reliability of government data. Currently, AI is powered by hoovering up all the data on the Internet and learning from it. But garbage in, garbage out: toxicity, falsehoods, and risks from GenAI are a result of this particular choice.
- Daniel Ho
Person
By providing secure, privacy-protected access to much higher quality government data, the NAIRR and the California proposals can turn AI to solve much more socially useful problems and also engage the future AI talent of the world, who, in the words of one startup founder, live in an ecosystem that right now has gotten the world's brightest minds to work on the socially important problem of getting people to click on ads.
- Daniel Ho
Person
One example of how we rebalance this: when the U.S. Geological Survey made satellite imagery available for free, it generated $3 to $4 billion in value annually and drove forward our ability to understand and respond to environmental threats. The state that provides this infrastructure will continue to lead in AI. Third, when it comes to governing AI, California must address the huge information asymmetry about new and heightened risks posed by AI.
- Daniel Ho
Person
The most urgent, and lowest budget item, regulatory solution hence lies in a form of adverse event reporting. Just as in cybersecurity, where parties are required to report vulnerabilities and attacks, we need such a system for AI, enabling government to monitor, investigate, and respond to emergent risks of AI systems. It's a real problem when only a small number of self-interested actors have the information necessary to describe these risks.
- Daniel Ho
Person
Over the past year, for instance, there was a lot of worry about how ChatGPT-like systems could help hostile actors create bioweapons. But the evidence was paper thin and has been debunked. RAND, for instance, showed that such systems (a) offer no information beyond what's readily available on the Internet and (b) yield no difference in the ability to create a bioattack plan. Government must be able to separate hype from reality.
- Daniel Ho
Person
The way we've addressed this in other areas, including cybersecurity, drugs, medical devices, pathogens, and design defects, is adverse event reporting, and that's what's needed for AI. Government procurement of AI tools and systems can also be a powerful lever for increasing transparency and other trustworthy AI practices.
- Daniel Ho
Person
California cities are actually in many ways leading here, like San Jose, which has built a national AI coalition to standardize procurement and make sure that vendors are sharing the appropriate information to generate public trust in the government's use of AI systems. Let me conclude with just two last words on AI regulation, which I know is top of mind for all of you.
- Daniel Ho
Person
One is that we must be extremely careful about regulation that has the potential to entrench incumbents and quash open innovation. The poster child of that kind of ill-conceived regulation is a licensing regime whereby only a few well-equipped companies would get licenses to develop advanced AI models. This is wrong and would reduce oversight and stifle innovation. Two is that many regulatory proposals single out AI systems as being uniquely risky and subject to controls.
- Daniel Ho
Person
But AI, as in the opening remarks by Senator Dodd, can be a mirror and can also expose vulnerabilities of existing human systems. In developing AI systems for the IRS, for instance, our team uncovered disturbing racial disparities in tax audits from existing legacy systems, not fancy forms of AI. Concerns about AI may actually point us to opportunities for the reform and improvement of existing systems. In sum, AI has tremendous promise.
- Daniel Ho
Person
But just as post war California set us on the path to become the innovation engine of the world, we must take these steps now to ensure that we retain this leadership, both for technology and our human values. I welcome your questions.
- Steve Padilla
Legislator
Thank you both very much. A couple of questions, first for Director Karanicolas and then for Professor Ho. First and foremost, my understanding is that the foundation models that have a broader application, kind of the generative engine, if you will, in lay terms, or maybe the modern version of an AI operating system that allows the application to multitask in a very broad sense, are very data intensive. Massive might be a better word. And I'd like to get to the Director.
- Steve Padilla
Legislator
I'd like to get your thoughts about the status of the conversation around misalignment, because government applications of this technology, for government and public purposes, have a more specific scope than the general applications that are developed for the foundation models, which basically drive everything from the private innovation sector's standpoint. What is the conversation around how we address that misalignment in application and how we appropriately segregate those?
- Steve Padilla
Legislator
And then secondarily, I'd like to hear your comments, specifically in terms of the state's regulatory framework, on how we should be thinking about mitigating outcomes that contain bias against vulnerable populations, the most vulnerable populations who rely more on government solutions, access, information, and programs. What is the thinking around those two points? First, to the Director.
- Michael Karanicolas
Person
So, in terms of the misalignment question, yes, certainly these applications are enormously data intensive. They're also enormously energy intensive, which is an underappreciated aspect of that conversation. I think the core question comes down to the degree to which governments, and the public sector generally, are relying on third parties to develop the applications they're looking for, as opposed to developing them in house. As I may have mentioned in my introductory comments, each of those presents unique challenges.
- Michael Karanicolas
Person
Developing more specific applications internally allows for a more specialized context and allows the people developing the new technologies to be more intimately familiar with how they're going to be used and the risks that would potentially apply. But of course, it can be more expensive and labor intensive to develop this in house, and you need to attract expertise, which is costly and requires longer term investments. On your second question, regarding bias, I think one of the trickiest challenges relates to facilitating robust community input into the development and implementation of these technologies.
- Michael Karanicolas
Person
Quite often, what we've seen in how they've been rolled out previously is that the people and communities on the sharpest edge of the implementation of these tools can often spot that they're returning problematic results, even if the audits are coming back clean, or even if the folks implementing the system aren't seeing an issue. So robust structures of community engagement,
- Michael Karanicolas
Person
I would suggest, are one aspect of that, as well as ensuring that there are sufficient opportunities for these technologies to be assessed against the efficacy of human decision makers or the alternatives, and, where they're not returning the desired results, to be allowed to fail. You don't want to end up in a situation where particular interests are so invested in the success of these technologies, and have sunk so many resources into them, that they aren't allowed to fail.
- Steve Padilla
Legislator
Thank you, Director, and now to Professor Ho. We had the conversation in the innovation space, and you talked about how we better democratize that innovation space, given the limited number of broad portfolios that have a majority ownership stake in the development of the technology to date. We can probably point to three or four major portfolios that exclusively hold rights to this data, information, and technological development. Innovation implies a more multifaceted, open, and integrated conversation. How do we avoid a monopoly in that space?
- Steve Padilla
Legislator
And secondly, just briefly, your thoughts about how the state could approach procurement. We are a very large purchaser, and I think one of the things the State of California has going for it is that, as a major purchaser, we have substantial market influence. How do we leverage that in a way that's positive?
- Daniel Ho
Person
Thank you, Senator. To take the first question first: how do we deal with the kind of concentration that exists in this sector? The rise of foundation models, as you noted in your first question, has meant that the fixed costs for training these leading models are extremely high, requiring vast amounts of computing and vast amounts of data. I think there are two things, related to what I noted in my opening remarks, that are really important interventions here.
- Daniel Ho
Person
One is to figure out mechanisms to democratize access to compute and this kind of data. That is why the CREATE AI Act, a bipartisan proposal at the federal level for the NAIRR, the National AI Research Resource, really has the potential to significantly democratize access, particularly with public use or public sector data in mind.
- Daniel Ho
Person
The second is that I think we should be really careful, when we think about the regulation of AI systems, to be clear that we want forms of regulatory systems that have safeguards but also don't quash open innovation. Because the history of cybersecurity, for instance, has been one where the transparency of cybersecurity standards has enabled a large number of individuals to identify vulnerabilities, or, in the words of one leader of that movement, with many eyeballs, all bugs are shallow.
- Daniel Ho
Person
On the second question, how should the state approach the procurement of AI technology? I think that's where there is tremendous potential. To give you a little bit of a sense of this, the Federal Government increased its AI spending by two and a half fold over the course of five years. It's up to $3.3 billion now. And I think there are really significant ways in which, as a purchaser, you can impose standards on what safeguards vendors are required to take.
- Daniel Ho
Person
Now, I think there are actually some real challenges in the procurement system of how to procure AI, most notably that procurement officials have to actually be trained up to know the right questions they should be asking. And many of our procurement systems treat software acquisition sort of like hardware acquisition, and AI is fundamentally different from purchasing a stapler. AI systems adapt over time, so it's not possible to specify ex ante everything that you need of a particular system.
- Daniel Ho
Person
And we need more agile forms of procurement systems to really be able to ensure that those safeguards are right, that there's appropriate monitoring. And I think some of the most interesting innovation initiatives in the procurement space have actually come from agencies like the Department of Homeland Security, who've stood up things like procurement innovation labs to adapt procurement to AI practices, and localities like the City of San Jose, which has been a model in terms of the transparency of how it goes through the procurement process.
- Steve Padilla
Legislator
Thank you. Thank you very much. Do any participating Members have questions for this panel? Senator Dodd?
- Bill Dodd
Person
Yep. For one or both of you: what is needed to build AI literacy among public sector leaders and workers to ensure they are equipped to make informed decisions about AI procurement and use, frankly?
- Daniel Ho
Person
Thank you, Senator Dodd. I think there is a lot of work that needs to be done. At the federal level, there was an AI Training Act that mandated the design of a series of courses particularly geared toward procurement officials, to go back to Senator Padilla's question, to train procurement officials in how to really think about these kinds of questions.
- Daniel Ho
Person
I had the fortune of actually teaching one of these myself, and the Federal Government ended up opening this up to the entire civil service, and there were 2,000 attendees because there's such eagerness by the civil service really to learn about this emerging technology. So I think what needs to happen in this space is we need to figure out technical talent pipelines into the civil service. That includes both upskilling. It includes thinking about different models.
- Daniel Ho
Person
Some of the very interesting models that the Federal Government has taken advantage of include the use of Presidential Innovation Fellows. There are a lot of Stanford graduates, for instance, who don't necessarily want to spend the entirety of their careers in public service, but might be willing to consider a rotation and a service model where they're there for a year or two and, as a result, transfer a lot of their knowledge to the civil service.
- Daniel Ho
Person
It's also why I'm such a big fan of the proposal for something like the AI research hub, which could be coupled with a talent exchange to draw on the expertise of California's great universities to build out that kind of talent.
- Michael Karanicolas
Person
Yeah, I would very much agree with that. I definitely support a greater emphasis on interdisciplinary engagement and bringing computer science and engineering graduates into the public sector. I think that there's robust programming to try to focus on law and public policy graduates in this space, and there's less of a pull within STEM.
- Michael Karanicolas
Person
I would also add that moves to empower the California Department of Technology to develop standards in this space are a good start, but I would suggest that a sector-specific approach would be a positive development.
- Daniel Ho
Person
If I could say one more thing, just in response to that question, the National Security Commission on AI had a really important note in its chapter on how to actually get technical talent into government. And oftentimes we lament the sort of pay scale differences. But what the National Security Commission on AI said is that it's not just pay scale differences, it's also the perception and often the reality that it's really hard to do meaningful technical work within government.
- Daniel Ho
Person
I'll give you one example from a PhD student who wanted to devote his career to public sector technology, a machine learning PhD student from Stanford. We had him involved in a collaboration with a federal agency, and it took a year simply to get that person access to the requisite computing to be able to run any modern form of machine learning. And as a result, that student got so frustrated that he ended up going into the private sector.
- Daniel Ho
Person
Those are the kinds of things that I think we need to level. And I'll add one other sort of note here, which is that we have a lot of federal funding for AI research institutes. We have AI research institutes for agriculture, for weather, for education. We need one for government services as well.
- Bill Dodd
Person
So just a quick follow-up. You've talked about really the public sector, meaning the people in our departments and all throughout, but you're looking at a whole bunch of policymakers. With all due respect to my colleagues, generally we are jacks of all trades and masters of none. So what kind of recommendations do you have for us to become better stewards of this particular policy area, to make better decisions on behalf of the State of California and the people we serve?
- Michael Karanicolas
Person
I don't know that you necessarily need to have your own expertise in this space. You just need to have people you can rely on who have that expertise. So I think building up specialized agencies and providing that pipeline. Yeah, I very much agree, especially because awareness of opportunities within the public sector and within government is just so much lower in the school of engineering.
- Michael Karanicolas
Person
You talk to anybody at the law school, and half of them will tell you that they're interested in public service in particular ways, or they're interested in government or running for office. There's a huge awareness of that space. I teach first-year engineering students, and the level of awareness about opportunities is much, much lower, and there's a range of reasons for that. But I think there's a huge amount that can be done to develop programming to get the word out that these opportunities exist and how to pursue them.
- Bill Dodd
Person
Thank you.
- Steve Padilla
Legislator
Thank you.
- Daniel Ho
Person
Quick notes.
- Steve Padilla
Legislator
Sorry, I didn't mean to interrupt you. Please proceed.
- Daniel Ho
Person
I'm sorry. To connect the earlier question by Senator Padilla to your question, Senator Dodd: it is a real challenge, given how rapidly the technology is emerging. Senator Padilla had asked earlier about the emergence of foundation models. The EU AI Act was negotiated before the rise of foundation models, and its entire risk-based framework sat very uneasily with their rise.
- Daniel Ho
Person
And so there was a scramble toward the tail end to basically staple some other provisions on there that are specific to foundation models. So I do think that how to legislate and regulate in ways that are agile, and don't regulate to yesterday's technology, will be really important. I do think that what we do across government, right, is we have agencies where we bring in expertise to really help govern in this space. I'll tell you one story.
- Daniel Ho
Person
In the Social Security Administration, there was one gentleman by the name of Gerald Ray who was the head of the Appeals Council there. And he was very early on in this space, and he recognized, some 20 years back, that all we were doing at this agency was writing a bunch of documents, and we were capturing none of the data that would actually improve a system that has been rife with delays, a system that adjudicates some half million claims annually for disability benefits.
- Daniel Ho
Person
And he figured out how to build out the data infrastructure, capture that information, and bring on talent. Right. And he did that in spite of all of the challenges that existed at that point in time, so much so that colleagues referred to him as the Steve Jobs of the SSA. It should not require a Steve Jobs. We need to figure out how to learn from those kinds of innovation use cases, level those kinds of barriers, and get more of that kind of responsible innovation within our agencies and attract that talent in.
- Bill Dodd
Person
Thank you.
- Steve Padilla
Legislator
Senator Rubio.
- Susan Rubio
Legislator
Thank you. Good morning. I have a question in regards to some of the information you just shared. You talked about AI research hubs and how to create experts in this field and perhaps have rotations to work in the civil service sort of model. And it may be premature to ask you this, but the reason we're sort of here, one of the issues we are seeing is just the dangers that AI poses.
- Susan Rubio
Legislator
How are we thinking ahead in terms of creating these experts with expertise who are going to be in our civil service? How do we safeguard this information when, all of a sudden, you have these individuals who become experts and then leave the field? How are we thinking of safeguarding the information? Because this is not like a floppy disk where you can just copy and leave. Now you have information and intellectual property that you cannot prevent someone from taking with them. How are we thinking ahead? And I know it may be premature to ask you this question, but it scares me to think we're creating experts who can just leave at any time, and the information leaves with them.
- Daniel Ho
Person
Yeah. Thank you, Senator Rubio, for that question. Within the Federal Government, there is an open source policy that directs agencies to have open source standards, so that when you're going through a procurement system, you're not simply outsourcing that expertise so that a private vendor can run away with it and try to sell it back to you. And so setting those kinds of standards, I think, is going to be really important.
- Daniel Ho
Person
Now, the AI research hub idea and this kind of talent exchange, in my view, really help build the muscles for responsible innovation. Let me give you one story of a Presidential Innovation Fellow who did a rotation with a federal agency and was in a procurement setting where a department was being sold something.
- Daniel Ho
Person
And sorry if I'm not giving you details here, but there was a vendor who was making all sorts of claims, and it was the Presidential Innovation Fellow who actually had some engineering experience and said, listen, you're just telling me stuff that's off of your website. That sort of sounds more in the vein of PR. I want to talk to your engineers to actually see what safeguards have been taken.
- Daniel Ho
Person
And that really led the agency to make a much more informed decision, to know how to ask the right questions and to figure out whether or not this was actually worth procuring. So I actually think these talent exchanges are exactly a way to build internal capacity and talent so that we don't see further hollowing out of this kind of capacity.
- Michael Karanicolas
Person
Yeah, I very much agree that strengthening the public sector, building as much of this expertise within the public sector as possible, is a good answer to that. I also acknowledge, though, that the risks you note are real, particularly where these systems are used for regulatory enforcement purposes. I think it was Professor Ho's co-authored publication, the one that was done with ACUS, with Catherine Sharkey and the others, in 2020, I want to say, or maybe 2018. Right.
- Michael Karanicolas
Person
That mentioned, for example, use of these tools for SEC enforcement to spot potentially suspicious trading practices that needed to be investigated. If the person who's designing those tools can then turn around and sell their services to major banks that are looking to avoid regulatory scrutiny, that creates a really dangerous situation. I'm not an expert in sort of conflict of interest rules or cooling off periods. That's less my area. But certainly I think that it's important to note the risk, especially as these tools are employed for regulatory enforcement.
- Susan Rubio
Legislator
Well, and I just want to add to that. I remember when computer technology first came around. I remember being a Council Member when one of our police departments was held hostage, and they literally had to buy their information back for millions of dollars, because someone figured out how to hold them hostage. And I'm thinking along those lines: what don't we know now that may cause this kind of concern in the future? But again, thank you for your explanation, and I think that you gave me enough. Thank you.
- Daniel Ho
Person
I may add to that. I think it's absolutely right that we have to think about the right standards for government to actually continue to own this data. Part of what we're seeing in the turn to generative AI is that there are services being sold, and then there are some vendors who want to claim that data as their own and then sell it back to government agencies. That's exactly, I think, the situation you're describing of being held hostage, which we very much want to avoid.
- Daniel Ho
Person
Just to give you the other kind of example of why in-house capacity is so important: when the Securities and Exchange Commission had its own innovation hub to figure out how to sift through all of the filings it receives to spot signs of insider trading, there were a bunch of technical machine learners building risk models for, say, the risk of insider trading. But it was the line-level attorneys who said, I don't care what the risk score is.
- Daniel Ho
Person
You have to tell me why this risk is being activated, because I ultimately have to defend this in front of a judge. And it was really that internal capacity that led to the due process protections that we really need, so that it's not just based on a kind of unarticulated black-box risk score, but something that is really explained and hence amenable to actually being investigated by a line-level attorney.
- Steve Padilla
Legislator
Thank you, Senator. Senator Umberg.
- Thomas Umberg
Legislator
Thank you, Senator Padilla. Two observations, then one question. Observation one is that in the early 2000s, the state, with Controller Steve Westly, I think, at the helm of the Controller's office, decided we were going to revamp our HR payroll system. That hasn't been done. It's 2024. I was involved in three years, maybe four years, of litigation with SAP. What was very clear is that the state was outmatched by the expertise on the other side. We subsequently settled for $83 million.
- Thomas Umberg
Legislator
But it was very, very clear that those on the other side of the table, making three times what state employees were making at the time, with many more years of experience, really ate our lunch. That hasn't been resolved so far as I can tell.
- Thomas Umberg
Legislator
One, the system is still a problem. But two, more importantly, we haven't overcome the challenge of making sure that the folks who are negotiating on behalf of the state on procurement issues are at least somewhere in the same range of compensation as those on the other side of the table, and that needs to be fixed. Secondly, we have our own challenge here internally. As I mentioned at the outset, AI legislation is all coming to the Judiciary Committee.
- Thomas Umberg
Legislator
We have incredibly smart lawyers on the Judiciary Committee who have done an excellent job. But this is, at least from my perspective, hard stuff. And we need to have our own expertise internally to be able to make judgments without a particular bias. So notwithstanding UCLA's and Stanford's academic standing in the world, relying on basically anybody outside subjects us to our own bias. And so we need to get over that, and we need to have our own technological expertise, I think, within committees. My third observation comes with a question: do you think we need to have an AI department within the Department of Technology here in California?
- Michael Karanicolas
Person
Oh, go ahead.
- Daniel Ho
Person
Thank you, Senator Umberg, for that question. I think you're entirely right that building that internal capacity is going to be absolutely critical, particularly in the procurement setting. I chaired the National AI Advisory Committee's working group on procurement. And when we did a series of interviews, one of the real challenges has been the allocation of responsibilities between procurement officials and business units.
- Daniel Ho
Person
So, for instance, in some agencies, the procurement officials would say, well, responsible AI, that's entirely up to the business unit, and we'll just write the contract and work it through the procurement system. The business unit will say, well, no, responsible AI should be up to the procurement officials. And the reality is that we're not at a state where you can specify all of this ex ante. AI systems adapt over time.
- Daniel Ho
Person
So a computer vision system that's trained on a hospital patient population in year one will degrade in performance as the patient population changes. And so we need better forms of monitoring over time, which, at least under some procurement rules, are just not there, or are a little bit foreign, to have that kind of quality control going forward. So I 100% agree with you about the need for internal procurement expertise.
- Daniel Ho
Person
That's why I'm such a fan of the Procurement Innovation Lab at the Department of Homeland Security, which has really tried to modernize these kinds of systems. As to your question on the Department of Technology, I don't know enough as to the exact right location for building more AI expertise. Within the Federal Government, we stood up an AI Center of Excellence under the AI in Government Act that sits under the General Services Administration.
- Daniel Ho
Person
I think one thing I would be mindful of, going back to what I said about the National Security Commission on AI, is that, quote, we have to figure out how to actually attract technical talent and give them meaningful things to work on. And that does not necessarily mean that we house them in the way we've conventionally thought about information technology services. And so I think that's one of the interesting institutional challenges of where something like this appropriately sits.
- Michael Karanicolas
Person
Yeah, I absolutely think that it is important that the Department of Technology has AI expertise and an appropriate capacity and mandate to focus on these issues. As to whether that should be stood up as an independent unit within that. I mean, these technologies are transformative and they're going to be everywhere.
- Michael Karanicolas
Person
So it's an interesting question. I'm not intimately familiar enough with the structure of that department, but it strikes me that it's less a question of understanding the specifics of this technology than of its transformative impact everywhere. And I feel like that's going to touch every aspect of their mandate regardless. I definitely agree that there needs to be a coordinating agency around tracking, assessing, and providing expertise on public uses of AI.
- Michael Karanicolas
Person
Whether the most appropriate place for that is to house it within the Department of Technology is a separate question. And again, I don't know that I'm familiar enough with the expertise within that department to know if that's the appropriate home for it. But I definitely agree that there is a need for a hub for expertise and a coordinating agency.
- Daniel Ho
Person
If I could make two more comments on that really important question.
- Steve Padilla
Legislator
Professor Ho, if I might, excuse the interruption. If you could be brief in your responses; we have two more panels to get to, and probably additional questions from Members with regard to their presentations. We're very grateful for your participation, but if you could be brief in addressing the Senator's question or following up, we'll move on.
- Daniel Ho
Person
Sure. I apologize just very quickly. I think it depends on the functions as well. So the cybersecurity integration center might be a better place to locate something like adverse event reporting, because they already have the sort of infrastructure to receive reports about vulnerabilities and attacks.
- Steve Padilla
Legislator
Thank you. Any additional Members want to question this panel? Senator Wiener, welcome.
- Scott Wiener
Legislator
Thank you, Mr. Chairman. I thank both chairs for convening this important hearing today. Thank you for coming and for your work. I will say proudly that I believe, and I think I'm right, I represent the heartland of AI innovation in San Francisco, and I'm very proud of that. I think AI has incredible potential to make the world a better place, to make life better for humanity on so many levels.
- Scott Wiener
Legislator
We also know that there are real risks, and I think it's really important not to be alarmist, because being alarmist could result in choking off innovation. And that's the last thing we want to do. We want to be pro innovation, but it's important not to be denialist about the very real risks that we need to get ahead of. And that's really, in my view, what we need to do, get ahead of the risks.
- Scott Wiener
Legislator
We have a long history, not just in California but in the US and on this planet, of not getting ahead of risks created by new technology. That's partially for fear of stifling innovation, but partially because policymakers are sometimes gingerly about getting involved, not understanding the implications. And the legislative process is not always the most nimble one, which is why it's so important to have administrative capacity in order to be nimble, because technology changes so quickly and we want to make sure we can change with it.
- Scott Wiener
Legislator
But that doesn't mean that we should just throw up our hands. We've done that with some technology issues in the past, which has resulted in some very bad outcomes around privacy, around social media and so forth. And then 10 years later, we finally get around to it and it's too late to deal with the severe harms that have resulted.
- Scott Wiener
Legislator
And so when it comes to AI, we need to work very hard to not just allow innovation to happen, but actually foster innovation and do everything we can to support innovation. But we should not stick our heads in the sand and pretend that there aren't very, very real risks. And we have a responsibility as government to get ahead of those risks, working collaboratively with the people who are innovating.
- Scott Wiener
Legislator
But we should not pretend that those risks don't exist, because there's going to be a real tendency, and I know we're going to hear a lot of voices saying, don't worry, nothing to see here. There is something to see here, and we ought to take it seriously. So again, to both chairs, thank you so much for convening the hearing today.
- Steve Padilla
Legislator
Thank you. Thank you, Senator. I appreciate that. Do you want to briefly respond to that? And then I'm going to invite our second panel.
- Daniel Ho
Person
Thank you, Senator Wiener, and thank you for your leadership on these issues. I couldn't agree more. We have to get ahead of these risks. I think we also need to be sure that government has the appropriate information on what risks to actually address. That's why I'm such a fan of adverse event reporting, because as I noted in my opening remarks, we had many months on Capitol Hill where there was a lot of consternation around the bioweapons risks.
- Daniel Ho
Person
And at least the consensus now, through the best-designed studies, has been that, at least given the current technology, that risk is nowhere near what people had ginned up. And so I think that's why adverse event reporting is a really important way in which government can inform itself.
- Daniel Ho
Person
Let me say one other thing, which is that the thing I am worried about is forms of regulation that have adverse compliance costs, particularly because the history of cybersecurity is one where open inquiry, and being able to interrogate the weaknesses of models, has been so important. And there are some proposals out there that could really make it much harder for academics and public interest groups to interrogate the weaknesses of these models. And we have to design regulations that don't have that kind of adverse effect.
- Scott Wiener
Legislator
Yeah, open source is definitely a topic of intense debate, and there are folks, of course, who don't agree with you. But it's not just about current technology, because we know that in the next year, two years, five years, AI is not going to look like it does today. It's going to be dramatically more powerful models. And so we want to get ahead of that, and again, not stop it from happening. We want the innovation to proceed, but if we're continually playing catch-up, that just doesn't work very well. So thank you.
- Steve Padilla
Legislator
Thanks to you all. In the interest of time, I'm going to thank our distinguished panelists for panel number one. Professor, Director, thank you very much. It's my honor to welcome up our second panel, which will focus on workforce and the innovative perspective. Addie Cooke is global AI public policy lead at Google Cloud. And Annette Bernhardt is Director, Technology and Work program at the University of California, Berkeley Institute for Research on Labor and Employment. Please make yourselves comfortable. Welcome. Thanks for your participation. And, Addie, please proceed when ready.
- Addie Cooke
Person
Good morning, Chairs Padilla, Dodd, and Wiener, and Members of the Committee. Thank you for the opportunity to testify today on the use of artificial intelligence in state government and the workforce. My name is Addie Cooke, and I am the Global AI Public Policy Lead at Google Cloud. I'm excited to be back in California, where I began my post-university public service career as a member of AmeriCorps VISTA, serving youth in Los Angeles public housing. To this day, that experience continues to inform my policy work in AI.
- Addie Cooke
Person
In my role at Google, I wear a few different hats. As my title suggests, I support our legislative engagements around the globe, including in the EU, but my role really is to support our product teams and our customers who use our products, including in the public sector. When I began this role three years ago, I was given the opportunity to serve as a representative on our manager-level AI Principles Committee.
- Addie Cooke
Person
Google began developing these principles to guide the development and use of AI, resulting in their publication in 2018. Since then, we have been working to improve the implementation, organization, and governance of our AI principles across Google and in Google Cloud. Over the past year, I have helped advise and guide updates to our process to ensure alignment with forthcoming AI regulations, standards, and frameworks, including in the public sector. We proudly contributed to the National Institute of Standards and Technology's AI Risk Management Framework.
- Addie Cooke
Person
Within Google, our governance program is aligned with the AI RMF approach and underpinned by industry-leading research and a growing library of resources that we make available to the public. Importantly, the NIST approach, which has been successful in cybersecurity and privacy, is flexible and can adapt as the ecosystem progresses. The AI RMF provides guidance to developers and deployers of AI systems on how to strike a practical balance between optimizing beneficial use cases and addressing potential risks.
- Addie Cooke
Person
We look forward to our continued work with NIST and the many states that are considering adopting NIST as their foundation. As the AI ecosystem matures, new techniques and applications are developed and further progress is made. Frameworks like NIST's can ensure that we are all aligned across 50 states. Alignment is important, especially in the delivery of government services, whether it's administering unemployment benefits more efficiently or providing communities with authoritative information about vaccines.
- Addie Cooke
Person
These practical applications of AI ease rote work and deliver better outcomes for the public at large. AI can support agencies by processing paperwork, digitizing claims requests, and automating these rote tasks, such as reviewing applications, the very things that slow service delivery and lead to civil servant burnout. Using practical and results-oriented AI also means more satisfied customers and cost-effective service delivery. With less rote work, government employees can expect more creative and satisfying engagements with both the public and each other.
- Addie Cooke
Person
These are critical, positive outcomes for delivering government services. Just this past week, Google was proud to support a great example of this work. We supported the New York State Department of Labor, which was recognized last week for innovation in information technology by the National Association of State Workforce Agencies. Perkins, their innovative web-based chatbot powered by Google Cloud AI, can respond in 14 languages, 24 hours a day, seven days a week.
- Addie Cooke
Person
This is really important for workers who work nontraditional hours and may have important questions about their benefit claims outside of a state government agency's normal working hours. Since launching in August 2021, Perkins has handled more than 128,000 claim-specific inquiries. With Perkins, customers simply select their language and answer a couple of prompts about why they're reaching out, and then, once logged in, they can ask specific questions about their inquiries.
- Addie Cooke
Person
For instance, if the customer believes that they received an incorrect payment amount after submitting the benefit in question, Perkins will provide a breakdown of the benefit paid, including the deductions that were taken from that benefit. From there, the customer can get more information on each deduction and elect to end the chat, or they can ask additional questions until their issue is resolved. This might also include routing to a human; we understand that a computer is not for everyone.
- Addie Cooke
Person
With Perkins, we have also worked with the state to enable virtual phone agents that do the same thing. The system has been updated to utilize speech recognition to capture responses, so you can say things like how do I file a claim? How do I check the status of my payment? How do I certify for benefits?
- Addie Cooke
Person
Since July 2023, over 3 million conversations have been powered by this technology, guiding users through this program with ease and freeing up New York call center representatives to handle more complex inquiries from citizens. The system also allows the department to carefully analyze interactions with their contact management system. But note: this system does not make any decisions or determinations. This is an important risk management decision that the Department of Labor made, and we at Google Cloud were proud to support that.
- Addie Cooke
Person
But ultimately, the decision was made by the department. Today, I'll end with three recommendations for California government agencies. Number one, governance structures and implementation mechanisms within agencies should be adequately resourced. I know that a lot of mandates will come out of the Legislature, but resourcing the agencies to receive those mandates will be critical. When agencies are tasked with procurement and oversight, they need dedicated resources to establish their own AI governance systems and training for officials across the agency charged with carrying out oversight and procurement.
- Addie Cooke
Person
Agencies will ultimately bear primary responsibility for oversight because only they can verify the end uses to which their systems are being put and any additional data that has been input into their training systems. They should receive robust training and resources to effectuate this responsibility, whether through internal funding decisions or governmentwide budgeting, to enable repeatable, scalable, efficient, and effective use of AI to produce optimal outcomes for the public.
- Addie Cooke
Person
Number two, and I was glad to hear this question be asked earlier, agencies should develop a privacy policy, and frankly, the entire government of California should have a privacy policy that ensures employees understand the difference between commercial and enterprise AI offerings. This is critical to the security of the systems as well. In 2020, Google Cloud announced our public AI/ML data privacy commitment.
- Addie Cooke
Person
In this commitment, we promise that by default, Google does not use cloud customer data for model training purposes or other product improvement purposes unless a customer has otherwise provided written permission to do so or has opted into terms for such data use. We provide our customers a frozen model that will not retrain the base model. Unlike our competitors, we extended this protection to our generative AI products from the very start. Our commitment to all Google Cloud customers includes your data is your data.
- Addie Cooke
Person
Your privacy is protected. We will not and cannot look at it without a legitimate need to support your use of our service. And your data does not retrain our models. We don't use the data you provide us to train our models without your permission. Number three, and my final recommendation, prioritize portability and interoperability. As more government data is stored in the cloud, interoperability is a critical factor for ensuring government agencies can facilitate data porting and interoperability across multi-cloud ecosystems.
- Addie Cooke
Person
Data portability and interoperability are central to innovation and help boost competition. Better delivery of government service for Californians requires better data collaboration across agencies. Google has always believed robust and reciprocal portability offerings should reduce or eliminate switching costs, the costs incurred when changing from one service to another, resulting in more innovative and user-focused products. Portability and interoperability make it easier for customers to choose among services and facilitate competition within cloud service spaces.
- Addie Cooke
Person
Equally, when it comes to our enterprise cloud services, portability, choice, and openness are at the heart of our philosophy and fundamental to how we develop our technology. Just today, in fact, we made a big announcement on open source, Senator Wiener. To further advance our commitment to openness, we announced our first open-source generative AI model, called Gemma, which is built on the same technology that we use for our Gemini line of products.
- Addie Cooke
Person
We know from past technological advances that it is necessary for key stakeholders to come to the table with a healthy grasp of both potential benefits and challenges. Thank you for the opportunity for me to speak today in front of the committee. I'm happy to answer any questions you have.
- Steve Padilla
Legislator
Thank you, Ms. Cooke. Director Bernhardt?
- Annetta Bernhardt
Person
Good morning, Chairman Dodd, Chairman Padilla, and Senators, and thank you for the opportunity to testify today. My name is Annetta Bernhardt. I direct the technology and work program at the UC Berkeley Labor Center, and I am also contributing to a series of workshops at the UC Berkeley Data Science School, convening around AI governance. I want to commend your leadership, along with that of Governor Newsom and his administration, in making California a national leader in responding to the AI Revolution.
- Annetta Bernhardt
Person
And like you and others here, I believe the public sector should and must model the responsible use of artificial intelligence. And in the process, it should prioritize the safety and well-being of impacted communities, especially low-income communities and communities of color, as well as public sector workers. My perspective comes from several decades of conducting intensive research on the US labor market and the future of work.
- Annetta Bernhardt
Person
For the past five years, my team and I have been studying digital technologies, including AI, and their impact on frontline workers, with the goal of ensuring that working families in California have the tools they need to thrive in the 21st century economy. It won't surprise you to hear that our research has documented that both private and public sector employers in the US and in California are increasingly using digital technologies in the workplace. They are capturing, buying, and analyzing worker data.
- Annetta Bernhardt
Person
They are electronically monitoring their workers, they're using algorithmic management to manage them, and they're using technology to augment and automate workers' tasks. I want to emphasize the underlying technologies range all the way from very simple data analytics to generative AI. And I would similarly really urge the senators to take a broad lens here on the types of technologies we are talking about. We are often talking about integrated systems that don't feature just one type of technology.
- Annetta Bernhardt
Person
And I understand that generative AI poses its own concerns and risks, but we have to understand that in practice, we are going to be talking about an entire range of technologies. For us, the important point is that employers are using these technologies to make highly consequential employment related decisions, and we are already seeing concerns emerge. Intense monitoring can push warehouse workers to the point of injury. Biased hiring algorithms can shut women and workers of color out of opportunity.
- Annetta Bernhardt
Person
Gig platform workers can end up making below the minimum wage. Some employers are using surveillance to identify workers who are exercising their right to organize a union. And just-in-time scheduling in the retail industry can wreak havoc on workers' lives. I want to emphasize that such harms are not inevitable. I believe that employers can use data-driven technologies in ways that benefit both workers and their employers, and we have plenty of examples like that.
- Annetta Bernhardt
Person
But it will take robust guardrails to ensure that workers are not harmed by a rapidly evolving set of technologies that are often unproven, untested, and I will say, are often not at all understood by the employers that are using them, or even some of the engineers that are developing them. So the need for oversight is especially important in the public sector, where arguably the stakes are higher, as all of you have flagged.
- Annetta Bernhardt
Person
And I want to emphasize, though, that the stakes are higher partly because the technology is dual-facing. It will impact both the public and workers. And on the public side, as others have said, the state's use of AI can have profound economic, social, and equity impacts. Here I am thinking about functions such as eligibility determination for public benefits like UI and Medi-Cal, risk scoring in human services programs, grading and curriculum development in community colleges, and sentencing in the criminal justice system.
- Annetta Bernhardt
Person
And similarly, AI has the potential to fundamentally transform public sector work as well. Ideally, as others have said, it could automate routine functions and free up workers to focus on their clients, make higher-order decisions, and troubleshoot. But researchers are already discovering concerns among the public sector workforce. They include mental health stress from increased workloads, job de-skilling, blurring of boundaries between home and work, inadequate training supports, and crucially, having to scramble and being blamed when new technology arrives and doesn't work.
- Annetta Bernhardt
Person
So how should California respond? In a recent report, we identified four principles to guide policymakers. First, public sector workers should participate fully in decision-making about new technologies because they possess the knowledge and experience to support responsible and effective use of AI in delivering public services. We call this the worker-in-the-loop principle, much like the more general human-in-the-loop.
- Annetta Bernhardt
Person
And that is that public sector workers can and should play a critical role in safeguarding the public from harm when government is using technologies that are capable of autonomous reasoning and decision making. Because they are on the front lines, these workers are best positioned to anticipate the impacts of AI on the quality, accessibility, and equity of public services.
- Annetta Bernhardt
Person
And I would just stress, like others, that human review and oversight are central to many leading policy frameworks, including the White House's Blueprint for an AI Bill of Rights and the September 2023 report by the California Government Operations Agency. Second, as the senators have noted, we must invest in public sector workers to ensure they have the skills and expertise to select, administer, manage, and work alongside public sector AI.
- Annetta Bernhardt
Person
The priority should be to build this expertise in house, given the importance of centering the public good in the context where private sector interests sometimes dominate. In fact, the Office of Management and Budget recently called for a fully trained public sector workforce in its memorandum to the heads of federal agencies who are implementing President Biden's Executive order on AI. And here I would flag that some of the most successful models for incumbent worker training in the US have been labor management training partnerships.
- Annetta Bernhardt
Person
In these partnerships, unions and high road employers collaborate to solve key challenges in their industry while ensuring the economic security of their workers. I recommend drawing on California's strong track record here in supporting these types of partnerships through its funding of the HRTP programs, which I can tell the senators about more. Third, public sector workers should have the right to organize and bargain over the state's procurement and deployment of new technologies as a mandatory subject of bargaining.
- Annetta Bernhardt
Person
Both the governor's executive order and the senators' bills include provisions for the involvement of worker representatives, which is a great start. I would just really stress that, in practice, it is vitally important that workers and their unions are brought in from the beginning, at the point of problem definition, before the procurement process starts, not after the fact when critical decisions have been made.
- Annetta Bernhardt
Person
And then finally, California should adopt more robust standards that protect workers, public sector workers, against potential harms from AI based systems, much like the public. In a recent report for my program, we lay out a full suite of such standards, including transparency and disclosure, worker rights over their data, and robust guardrails on the state in its role as an employer to prevent harms from electronic monitoring and algorithmic management.
- Annetta Bernhardt
Person
Importantly, I would flag for the senators that California workers at large private sector employers already have some of these rights after they gained coverage under the California Consumer Privacy Act on Jan. 1 of last year. So in closing, the governor's executive order and the senators' bills are really only the first step in what will be months and years of work by the State of California to ensure the responsible introduction of AI in government services.
- Annetta Bernhardt
Person
And I just want to underline again the great opportunity that we have as a state to make sure that public sector workers are included as critical stakeholders to make sure the public benefits from and is not harmed by public sector AI. Thank you so much for the opportunity, and I'm happy to take any questions.
- Steve Padilla
Legislator
Thank you. Thank you very much. Director, my question to you was answered in testimony, so thank you for that. Briefly to Ms. Cooke: as you know, vast amounts of training data are often required to inform algorithms. I'm just curious about your practice or your observations on the private sector approach to safeguarding privacy, given that fact, if you could comment on that.
- Addie Cooke
Person
So are we talking about the tension between privacy and fairness, perhaps?
- Steve Padilla
Legislator
I'm talking about the private sector, given how you are situated, not necessarily company specific, but your observations in the sector and in the practice, given the vast amounts of data needed for training to have an adequate model to inform the applications and algorithms. In this case, what is the approach in the private sector? Or what should be the approach to safeguarding the integrity of data?
- Addie Cooke
Person
Yeah, absolutely. And this is obviously something we think a lot about, especially in the EU, where there is such a prominent privacy law, and also they're onboarding a new AI law. So navigating the regulatory bounds of each jurisdiction is extremely important for every technology company, not just Google. But I would answer in this way, especially in the context of public sector.
- Addie Cooke
Person
The unbound chat bots that we are very familiar with as of last year are not going to always be the right application for public sector. And I would actually venture to say it's often not the right application for public sector. The example I gave in my testimony was about a very bounded model, a model that was bound using very specific context.
- Addie Cooke
Person
So a call center, and it was trained, in partnership with the New York Department of Labor, using labor data to make sure we're providing optimal outcomes and safeguarding the privacy of the citizens of New York using the system. So I would say it's very context-specific and very dependent on the application. There are other applications that I think are also well suited for a public sector function, like our enterprise search tool, which is another bound product.
- Addie Cooke
Person
So if you work for the Department of Education, perhaps in the State of California, and you want to partner with the Department of Human Services, you can use data from both of those agencies to create a bound system where you have perhaps a search function that only looks at the data in front of it that you're pointing to. It doesn't reach out to the world's Internet data, which might cause mixed results, like hallucinations. There are a lot of different products out there.
- Addie Cooke
Person
And so I think what would be helpful for potentially this committee is to look at the applications that are very specific to delivery of public service and maybe get a better idea of how sometimes those chatbots that we're using in public, sort of for fun, to meal plan or to plan some vacations, may not be the best fit for government use. And certainly, we don't want government employees using those systems, because they will not protect privacy; you want to use enterprise-grade products.
- Steve Padilla
Legislator
Thank you, Ms. Cooke. Chairman Dodd.
- Bill Dodd
Person
So you use the word bound about five times. I don't know what that means or what the reference is there.
- Addie Cooke
Person
Yeah. So bound, so unbound would be these papers right here. So if I am a chatbot, I'm roaming the Internet, I'm gobbling up every piece of paper I can find. Whereas for a product that I would use in my example with the Department of Education and the Department of Health and Human Services for California, I would say, okay, I'm the Department of Health and Human Services.
- Addie Cooke
Person
I'm going to use this data set, which I'm very familiar with, which I've cleaned, and I'm going to partner with my friend over here in the Department of Education. I'm going to use this data set, and then I'm going to point the product just at those two piles of documents to test it. I'll say, what's Taylor Swift doing next weekend?
- Addie Cooke
Person
And it won't know because it's only looking at the Department of Labor and the Department of Health and Human Services data to look for information and patterns that we're trying to find across both data sets.
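The "pointing the product at two piles of documents" idea can be sketched as a toy retrieval function: the system may only look at the corpora it is given, so an out-of-scope question finds nothing. The data sets and word-overlap matching here are invented for illustration and are far simpler than a production enterprise search tool.

```python
# Toy "bound" search: only the supplied corpora are consulted, never the
# open Internet. Documents and matching logic are hypothetical examples.
EDUCATION_DOCS = ["school enrollment rose in 2023", "teacher credentialing requirements"]
HHS_DOCS = ["medicaid eligibility thresholds", "school-based health program funding"]

def bounded_search(query: str, corpora: list[list[str]]) -> list[str]:
    """Return documents from the allowed corpora that share a word with the query."""
    words = set(query.lower().split())
    hits = []
    for corpus in corpora:
        for doc in corpus:
            if words & set(doc.lower().split()):
                hits.append(doc)
    return hits

print(bounded_search("school health programs", [EDUCATION_DOCS, HHS_DOCS]))
print(bounded_search("what's Taylor Swift doing next weekend", [EDUCATION_DOCS, HHS_DOCS]))  # -> []
```

Because the search space is fixed in advance, a question about Taylor Swift simply returns an empty result rather than a guess, which is the bounding behavior described in the testimony.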
- Bill Dodd
Person
So you're most likely to find a more credible source when you do something. Thank you for that. I appreciate that.
- Steve Padilla
Legislator
Thank you. Are there other Member questions for this panel? All right, we thank you both for your testimony and participation. Happy to recognize and introduce our third panel, which will look at the state entity perspective. First, Amy Tong, Secretary of Government Operations. Liana Bailey-Crimmins, Director of the California Department of Technology. Angela Shell, Chief Procurement Officer at the California Department of General Services. And Monica Erickson, Chief Deputy Director at the California Department of Human Resources. Welcome, Madam Secretary.
- Steve Padilla
Legislator
We will start by recognizing you, then following in the order indicated in the agenda, and please proceed when ready. Welcome and thank you.
- Amy Tong
Person
Good morning, Chair Dodd, Chair Padilla and Senators, am I not on?
- Unidentified Speaker
Person
Are you on now? Okay. You're on.
- Amy Tong
Person
Okay.
- Amy Tong
Person
Did I turn it on or back on? All right, I'll start it again. Testing the technology. Good morning, Chair Padilla and Chair Dodd, and Members of the Committee, thank you for the opportunity to be here today. I am Amy Tong, secretary of the Government Operations Agency, and it is my pleasure to be here.
- Amy Tong
Person
Later on, I will be introducing an esteemed panel from our state entities who will brief you all on the state's implementation of generative AI. GovOps, under the Governor's executive order, is the leading and coordinating agency for the implementation effort, in partnership with the Department of Technology, Department of HR, Department of General Services, the Office of Data Innovation, the State Cybersecurity Integration Center, GO-Biz, and the state Labor and Workforce Development Agency.
- Amy Tong
Person
While implementing this executive order, one of our primary initial goals has been to study the development, use, and risks of generative AI throughout the state. Recognizing the importance of evaluating this technology from the start, our first deliverable was a report released in November on the benefits and risks of generative AI. This report examines how the state could improve operations using Gen AI and highlights the importance of secure, controlled pilots in doing so.
- Amy Tong
Person
The report also describes potential risks, noting that these issues must be addressed to ensure that Californians will collectively benefit from this technology. While evaluating Gen AI, we're also developing a deliberate and responsible process for deploying this technology within state government by evaluating the feasibility of these pilots.
- Amy Tong
Person
For this reason, we are carrying out multiple levels of engagement within the state, including state subject matter experts, our legal teams, program teams, privacy teams, and security teams participating in the community of AI practice, as well as the generative AI leads from more than 100 departments. In addition to the state teams, we consulted with over 70 separate external entities and over 140 unique individuals, from the academic centers.
- Amy Tong
Person
Many of you heard from them earlier. We also consulted local and federal officials, community-based organizations, and state labor leaders to make sure that we keep our workforce and the communities we serve at the forefront as we evaluate these solutions. Our goal is to learn by doing. We are taking this responsibility very, very seriously through these pilots, which you have heard about and which I believe my colleagues will go into in a little more detail.
- Amy Tong
Person
We are using public sector data and engaging our employees from the very beginning in developing these procurements, and we are working to make sure that we're not only doing this work in a safe, secure, and privacy-protected manner. We're also engaging additional community-based organizations and partners to be part of the solution evaluation as these pilots move forward, and to further determine whether these pilots will continue into real solutions for delivering state services.
- Amy Tong
Person
In addition, given how fast Gen AI is evolving, we are committed to developing our administrative policies, processes, and procedures in an iterative manner and to updating them at a frequency that reflects the learnings of our pilots. That concludes my comments, and thank you again for your time and for having me join you. We welcome the continued oversight and collaboration of legislative members as we go down this journey together. With that, I'm turning it over to Director Liana Bailey-Crimmins.
- Liana Bailey-Crimmins
Person
Thank you, Secretary. Good morning, Mr. Chairs and senators. Thank you for the opportunity to speak with you today. I am Liana Bailey-Crimmins, California State Chief Information Officer and the Director of the California Department of Technology. Copresenting with me today is Angela Shell, who is the state chief procurement officer for the Department of General Services, and also Monica Erickson, who is the Chief Deputy of CalHR. In partnership with GovOps, our departments play a pivotal role in delivering on the governor's executive order.
- Liana Bailey-Crimmins
Person
Governor Newsom is committed to continuing California's leadership in emerging technology, and this executive order is an example of that commitment. Our goal is to capture the benefits of generative AI for the good of society, but also to be completely aware of the potential harm. As most of you are aware, the State of California has embraced technology and used artificial intelligence, such as CAL FIRE using it to detect fires, saving lives and our environmental resources. And we're also using chatbots and predictive analytics.
- Liana Bailey-Crimmins
Person
But we also understand that generative AI is unique and different in its own right. California is home to 35 of the world's top 50 artificial intelligence companies, and that's why we built an inclusive process to bring the best minds together to help us as we develop the guidelines.
- Liana Bailey-Crimmins
Person
The Executive order established the partnership between UC Berkeley and Stanford, and we wanted to make sure that we were having a thoughtful discussion on really understanding how we could benefit in government services, but also making sure that we are stepping up when it comes to leadership to protect against the risks and obviously making sure that we are aware of that as we're developing our processes and practices.
- Liana Bailey-Crimmins
Person
In addition to academia, GovOps, the Office of Data Innovation, Cal OES, CDT, DGS, CalHR, and many other state departments are engaging numerous stakeholders; as you've heard from Madam Secretary, over 70 just in the guidelines that we've produced so far.
- Liana Bailey-Crimmins
Person
In addition to this, we are looking at providing a comprehensive set of deliverables related to benefits, risks, procurement uses, and training; evaluating our critical infrastructure and making sure we're protecting against any potential harm; understanding vulnerable communities and how we can make sure that we're not introducing or amplifying any biases; and looking at IT project approval processes, procurement contracts, how we approve these projects moving forward, and proofs of concept and pilot projects.
- Liana Bailey-Crimmins
Person
And the executive order specifically requested that the California Department of Technology build a sandbox, which was a safe area to be able to test this innovative set of services, and also to do an inventory in alignment with Assembly Bill 302.
- Liana Bailey-Crimmins
Person
As of today, the state has issued five requests for innovative ideas to safely and effectively test the implementation of Gen AI, and Angela Shell will go into more detail on that RFI2 process that she is overseeing. As the state chief information officer and also the co-chair of the California Cybersecurity Integration Center, Cal-CSIC, which I co-chair with Director Ward of Cal OES, I take cybersecurity and the privacy of our data seriously.
- Liana Bailey-Crimmins
Person
We have a motto, people first, security always, and the Department of Technology sandbox that we have built will be monitored 24/7 by the top security experts in the state. We also have isolated it from the existing operating systems of the state, so it's in its own isolated, firewalled environment. And the five Gen AI proof of concepts will only use publicly available data that is shared amongst our websites and available to all Californians.
- Liana Bailey-Crimmins
Person
So that means that no personal, confidential, proprietary, or sensitive information will be used in these proofs of concept. But even though we're using publicly available data, as I just told you, it's security always. So we have built our walls very high and our moats very deep to make sure that all safeguards are in place. Some of those safeguards include FedRAMP certification, which is the highest level of federally approved requirements when it comes to cloud security.
- Liana Bailey-Crimmins
Person
We also have the highest levels of data encryption, both at rest and in transit, and we have required access controls so that only individuals with the proper rights can access the information. We also have specific data management requirements. As most of you know, under our general provisions, what the state produces is what the state owns, so the state will retain ownership and control of all of its data and intellectual property, with ongoing visibility into how that state data is being used and secured.
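The access-rights requirement just mentioned can be illustrated with a toy sketch: each record carries a classification, and a request is served only if the requester's role holds a matching right. The classifications, roles, and record names here are hypothetical, not the state's actual scheme.

```python
# Toy illustration of classification-based access control: sandbox vendors
# may read only public data, while agency officers may read everything.
# All labels and records below are invented for the example.
PUBLIC, CONFIDENTIAL = "public", "confidential"

RECORDS = {
    "traffic_counts": (PUBLIC, "daily vehicle counts by segment"),
    "claimant_file": (CONFIDENTIAL, "personal benefit claim details"),
}

ROLE_RIGHTS = {
    "pilot_vendor": {PUBLIC},                   # sandbox vendors: public data only
    "agency_officer": {PUBLIC, CONFIDENTIAL},   # full access for agency staff
}

def read_record(role: str, record_id: str) -> str:
    classification, payload = RECORDS[record_id]
    if classification not in ROLE_RIGHTS.get(role, set()):
        raise PermissionError(f"{role} may not read {classification} data")
    return payload

print(read_record("pilot_vendor", "traffic_counts"))  # allowed: public data
```

In this sketch, a pilot vendor asking for the confidential claimant file raises a `PermissionError`, mirroring the rule that the proofs of concept touch only publicly available data.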
- Liana Bailey-Crimmins
Person
All participating vendors must comply with our security policies and our cloud security guides, which require vendors to classify, sequester, and protect state data and continuously monitor that data for any potential warning signs of a security or privacy incident. I look forward to further engaging the Legislature on this and other emerging technology innovations. That completes my opening remarks; I'm available for questions.
- Angela Shell
Person
Thank you, director. Good morning, Mr. Chair and senators, and thank you for the opportunity to present today. Again, my name is Angela Shell. I am the state's chief procurement officer and also the Deputy Director for the Procurement Division at the Department of General Services.
- Angela Shell
Person
My role is to oversee the state's procurement operations, including issuing procurement policy and guidance for all of our state departments.
- Angela Shell
Person
In partnership with the Government Operations Agency, both DGS and CDT have been actively working to identify ways in which our state procurement processes can be enhanced to provide departments with the guidance needed to effectively procure generative AI tools. We are also providing guardrails around when generative AI is being used in our procurements and safeguarding that use through contractual protections. These include notices from vendors when materials are generated by Gen AI, and additional terms and conditions that can help protect whatever we're purchasing that is generated by Gen AI.
- Angela Shell
Person
As the director touched on, the state is using our most innovative procurement process, known as the Request for Innovative Ideas 2, or RFI squared. What we hope to gain out of this is an opportunity to have some insight, using the secure sandbox, into how generative AI can be used by the state on some specific projects.
- Angela Shell
Person
So to that end, the RFI2 process allows the state to solicit innovative ideas within an industry by releasing a problem statement, identifying the current as-is environment, and requesting solutions from the supplier community, or innovators as we call them, rather than identifying state-defined requirements upfront. The RFI2 process allows for an upfront validation of proposed solutions through working models, or what we're calling proofs of concept, so the state can make informed decisions about an innovative solution.
- Angela Shell
Person
It allows us to enter into contracts with a single or multiple innovators, with the ability to negotiate a contract for full implementation once the proof of concept phase is complete. An RFI2 does involve a competitive bidding process. It also allows an iterative dialogue between the state and the bidding vendors. The RFI2 is advertised on the state's government e-marketplace, Cal eProcure, and proposals are submitted by a date specified in the advertised document.
- Angela Shell
Person
A proposal conference is held for interested vendors to ask clarifying questions of the state around the procurement process or the problem statement itself, and those responses by the state to any questions are posted online for all potential bidders to see before they submit a proposal for the deadline. The proposal or concept papers are evaluated by a team of subject matter experts, and potential vendors are placed into a pool for evaluation of their ideas in that proof of concept phase.
- Angela Shell
Person
The state negotiates contracts with the potential vendors who will test those concepts, and again, additional contracts may be issued for a fully scaled-up solution once the proof of concept phase is complete. The state is using the RFI2 for all five of the proofs of concept. The five projects are as follows: there are two with the California Department of Transportation.
- Angela Shell
Person
One is called vulnerable roadway user and it investigates near misses of injuries and fatalities to identify risky areas and monitor interventions that are designed to increase safety of vulnerable road users. The second one is traffic mobility insights, and the goal of that is to process and interpret complex data to improve traffic pattern analysis, address bottlenecks, and enhance overall traffic management.
- Angela Shell
Person
The third is with the California Department of Tax and Fee Administration, for call center team productivity, and the goal of that is to improve call center operations to reduce call times, improve customer service, and increase taxpayer compliance. The last two are with the California Health and Human Services Agency. One is for language access, and the goal of that is to ensure that Californians with limited English proficiency have timely access to information about public benefits and can navigate public programs with ease and without confusion.
- Angela Shell
Person
And the last is healthcare facility inspections, and the goal of that one is to leverage tools to expeditiously document the facts or findings that are seen by a surveyor during healthcare facility inspections, and hopefully the tool will help them develop a concrete set of outcomes or citations that match the state and federal requirements. As for the timelines for the RFI2 process: all five projects have been advertised and concept papers have been submitted to the state.
- Angela Shell
Person
The concept papers are currently in the evaluation phase of the procurement process. Successful vendors are anticipated to begin testing concepts in the sandbox in March or April, and then testing is anticipated to be complete within six months in that sandbox. And then, if applicable again, implementation contracts will be issued once the testing and analysis in the sandbox is complete. So that completes my remarks, and I'm happy to answer any questions.
- Monica Erickson
Person
There we go. Good morning. Monica Erickson, Chief Deputy Director of the Department of Human Resources. The Executive Order calls for training to prepare and support the state government workforce for generative AI.
- Monica Erickson
Person
Future training may include education and training, collaboration and community communication, upskilling and reskilling, support and feedback, and continuous learning before Gen AI tools are deployed. As a first step to prepare the workforce for generative AI, dialogue and engagement will guide the design of the trainings with the goal of individuals gaining general education needed so that potential labor, legal and privacy risks are identified and addressed proactively.
- Monica Erickson
Person
Also, building Gen AI skills and competencies will help identify where Gen AI may improve operational efficiency and high-quality, equitable service delivery. Security and safety are paramount in generative AI, as my colleagues have stated.
- Monica Erickson
Person
Training will be designed to ensure safety and security during use cases and piloting, scaling the necessary familiarity with the way generative AI technology works across the workforce.
- Monica Erickson
Person
We'll ensure that all departments are well prepared to respond to any questions state government employees may have and are ready to leverage their ideas for positioning Gen AI to improve critical services they deliver for California. Thank you so much for allowing me to present today, and that concludes my comments.
- Steve Padilla
Legislator
Thank you again to all of the panelists. And just briefly, with respect to the evaluative process on RFI2, have you institutionalized or created a protocol so that there's specific involvement for public sector employees in both the submittal and development of problem statements and then the pilot evaluation?
- Angelique Ashby
Legislator
Thank you, Senator Padilla, for that question. Yes, the state workforce, just as with any emerging technology, is vital. Through the entire lifecycle, workers determine the business challenge and goals. They are also setting up the requirements and the measurable outcomes.
- Angelique Ashby
Legislator
They are also involved in evaluating what a vendor proposes, and then, based on the solution, the outputs that come out of that also mean a human in the loop, or as we heard earlier, a worker in the loop, to validate that outcome. So they're incrementally involved all the way from the beginning to the end.
- Steve Padilla
Legislator
Thank you, Director. Other questions? From you, Chairman Dodd? All right. Are there other members who have questions? Senator Ochoa Bogh.
- Rosilicie Ochoa Bogh
Legislator
Not a question. I was just excited to hear the different areas that you folks are currently working on. Another area, just to plant the seed for future consideration, would be using these programs to translate the policy, the legislation that we are currently introducing, so that communities within California have access to the actual legislation that's being proposed and vetted.
- Rosilicie Ochoa Bogh
Legislator
And I think it would be a great opportunity to really engage all Californians in that discussion of policy. Just thought it would be great. Hearing you got me really excited on that end, because that's something that has been working through my head as far as how we make this much more accessible to all Californians. Thank you.
- Steve Padilla
Legislator
Thank you, Senator. And again, it just doesn't appear that there are additional questions. Sorry, Chairman Dodd?
- Bill Dodd
Person
Yeah, I don't have any specific questions, but I'll tell you, I am very impressed with the four of you and what we're doing here in the State of California, in this area, and really appreciate your efforts and your leadership. I think it gives us a lot of trust in what's going on and just thank you. Appreciate it.
- Steve Padilla
Legislator
Echo that. Thank you, Mr. Chairman. And we will turn it back to Chairman Dodd for the public comment portion of this proceeding. And thanks to our distinguished panelists, all. Thank you, Madam Secretary. Thank you, Director.
- Bill Dodd
Person
Thank you very much, Chairman Padilla. Now we'll move to public comment. Those in the audience that wish to come up, please come forward now and form a line at the microphone, and we'll proceed. Good morning. No one wants to be first?
- Peter Leroe-Munoz
Person
I'll take that one then. Good morning, esteemed Committee Members. My name is Peter Leroe-Muñoz. I'm with the Silicon Valley Leadership Group. Artificial intelligence has the potential to help society and the government understand and develop solutions to our greatest challenges, whether climate change, public safety, transportation efficiency, or others.
- Peter Leroe-Munoz
Person
California has established itself as the global leader of AI research, funding and innovation, and as such, Golden State policies will shape AI's trajectory in the state and beyond.
- Peter Leroe-Munoz
Person
Therefore, the Silicon Valley Leadership Group has announced the formation of the Institute for California Artificial Intelligence Policy, known as ICAP.
- Peter Leroe-Munoz
Person
ICAP's mission is to work with elected officials and regulators to think through the opportunities and challenges presented by AI and identify and promote policy and regulatory solutions that are informed, balanced, responsible, and effective. ICAP's work is grounded on several key principles.
- Peter Leroe-Munoz
Person
We believe AI policies should focus on outcomes, not theoretical or undemonstrated risks. Further, policies should be tech agnostic and recognize that there is no one size fits all approach or system for all challenges.
- Peter Leroe-Munoz
Person
Finally, government and industry should promote responsible AI that is human centered, produces results appropriate for the user, avoids perpetuating or reinforcing biases, is secure from unauthorized access or use, and is considerate of the balance between benefits and challenges of different use cases.
- Peter Leroe-Munoz
Person
The Silicon Valley Leadership Group and the Institute for California AI Policy look forward to serving as a resource for policymakers and regulators as we look to maximize the benefits of this technology and address complexities for all Californians. Thank you.
- Bill Dodd
Person
Next speaker. Good morning.
- Sandra Barreiro
Person
Good morning. Sandra Barreiro, on behalf of SEIU California. I want to emphasize the importance of consulting rank and file members, who are experts on how to improve a specific job function, before procurement. Often when workers are consulted, it is after the fact.
- Sandra Barreiro
Person
And keep in mind, department heads and directors are generally very far removed from the performance of specific functions. And as a result, often when new technologies are rolled out, there can be a lot of frustration, there can be morale problems, and there's no way for adjustment and recalibration if the technology isn't actually performing the function it's intended to serve.
- Sandra Barreiro
Person
And so this can all be avoided by making AI and the implementation of AI a mandatory subject of bargaining. For example, two panelists mentioned the potential for upskilling civil servants. But that raises a lot of questions, right? Upskilling requires money for training.
- Sandra Barreiro
Person
It requires funding for salary increases. And are there enough upskilled positions to satisfy the current workforce? All of that can be addressed in bargaining. And lastly, the training of AI models does produce a lot of carbon emissions.
- Sandra Barreiro
Person
So in an ideal world, the AI model would be able to reduce at least the same amount of carbon, but we don't know if that's possible.
- Sandra Barreiro
Person
And so, again, another way to make sure that the training of an AI model is worth the public's investment is to include the rank and file members in the bargaining and procurement process because they are best suited to determine whether or not the public is actually going to benefit. Thank you.
- Bill Dodd
Person
Thank you. Next speaker.
- Christopher Nielsen
Person
Thank you, Chairs and senators. My name is Christopher Nielsen. I'm the Director of education for the California Nurses Association, where I also work on AI and technology policy. Thank you for convening this really important hearing. Nurses embrace and regularly master skill enhancing technologies that improve patient care quality.
- Christopher Nielsen
Person
But many of the AI systems that are currently rolling out in hospitals across the state threaten to deskill nurses work, degrade quality of care, and perpetuate health inequities in our communities.
- Christopher Nielsen
Person
Nurses urge you to develop policy frameworks based on the precautionary principle. We know that post-market responses such as adverse event reporting and auditing are necessary but insufficient.
- Christopher Nielsen
Person
We need robust regulations that require developers and deployers to demonstrate that AI systems are safe, equitable, and effective, and that they actually improve quality of care, before they're deployed in healthcare settings. We've heard about the inaccuracies and biases that plague many current AI systems.
- Christopher Nielsen
Person
CNA recently polled nurses and found that 24% are regularly prompted by decision support algorithms to take actions that are actually not in the best interests of their patients. And yet, only 17% of nurses report having the ability to override those systems.
- Christopher Nielsen
Person
So nurses urge you to pursue policies that safeguard nurses' clinical judgment and the clinical judgment and professional autonomy of other healthcare workers, and policies that strengthen our voice in decisions about how, or even whether, AI ought to be implemented in particular use cases.
- Christopher Nielsen
Person
CNA is happy to discuss these concerns further with you, and we look forward to working with you to develop AI policies that put patients and workers first. Thank you.
- Bill Dodd
Person
Thank you very much. Next speaker. Good morning.
- Unidentified Speaker
Person
Morning, Mr. Chairs, members. Thank you so much for this very, very important hearing. And it is so timely and so important because the benefits or the harms of Gen AI and other technologies on the public, on workers, are really going to depend on the guardrails that you all put in place. And there can be benefits with strong guardrails.
- Unidentified Speaker
Person
And I don't think you necessarily need a whole bunch of expertise in the intricacies of AI to do that. I think it's been said several times that having policies that are human centered, that are worker centered, that are centered on the public, that workers serve, are a good way to develop those guardrails.
- Unidentified Speaker
Person
Going back to what Ms. Barreiro and Ms. Bernhardt said about human in the loop or worker in the loop, it should really be that humans are at the center of all of the policies that move forward.
- Unidentified Speaker
Person
And as Ms. Barreiro said very eloquently, workers need to be at the beginning of deciding what technology is developed, and what technology the state procures and deploys, in order to provide or improve the work that they do. And it was mentioned that there's a need for training, for upskilling of workers.
- Unidentified Speaker
Person
Really, I think there is also a need for workers who have the expertise to train the AI, and to train it to be an effective tool, because that is the most important thing: that workers stay at the center, and that AI and technology is a tool that they can use to make work more efficient, safer, more skilled, and overall better, both for workers and for the public.
- Unidentified Speaker
Person
And then also, workers really need to have oversight, especially in the public sector, of consequential decisions. In the public sector, workers are the ones making the decisions about medical eligibility, about whether CPS is going to take your kids away. They are personing the 988 mental health and suicide hotline.
- Unidentified Speaker
Person
They are deciding about people's unemployment benefits. People in the most vulnerable situations are coming to the state for services or to local government for services. Workers are very skilled in this.
- Unidentified Speaker
Person
They need to be the ones who have oversight if AI is used in any of those decisions, or the state may decide, you know what, AI should not have a role in this. We need to have workers who make these decisions. This data is too sensitive.
- Unidentified Speaker
Person
We don't want this to be put in and used to train AI. So that's just some of our recommendations. Thank you so much. This was a fantastic hearing.
- Bill Dodd
Person
Appreciate your comments. Next speaker, please.
- Samantha Gordon
Person
Good morning, Chairman Dodd, Chairman Padilla, and Members of the Committees. Thank you for the opportunity to provide public comment. My name is Samantha Gordon.
- Samantha Gordon
Person
I'm the chief program officer at Tech Equity Collaborative. Our mission is to raise public consciousness about economic equity issues that result from the tech industry's products and practices and advocate for change that ensures that tech's evolution can benefit everyone. Thank you for your leadership in bringing us together. This was a great hearing.
- Samantha Gordon
Person
The public sector is, as we've discussed, uniquely positioned to set a standard that helps shape the tech industry's practices and products to ensure that they're beneficial to society.
- Samantha Gordon
Person
What distinguishes the public sector from the private actors who develop and sell this technology is that the public sector is responsible for providing services to everyone and navigating a complex set of requirements and use cases to ensure that the public has equal access, fair treatment, and the ability to pursue recourse for harm.
- Samantha Gordon
Person
Because of the unique role the public sector plays in serving all people, it is also uniquely required to set a high standard for the adoption of technology. Already, we've seen the way in which harm can be exacerbated through the adoption of emerging technologies in the public sector without adequate guardrails or a deep understanding of the context in which technology is entering.
- Samantha Gordon
Person
In Florida, an investigation found that the scores given to judges during sentencing hearings via a risk assessment product were significantly inaccurate.
- Samantha Gordon
Person
A ProPublica study found that only 20% of the people predicted to commit violent crimes actually went on to do so. Additionally, they found that the algorithm falsely flagged black defendants as future criminals, wrongly labeling them this way at nearly two times the rate of white defendants.
- Samantha Gordon
Person
In Michigan, nearly 40,000 people were wrongly accused of Unemployment Insurance fraud via an algorithm between 2013 and 2015. One woman shared that after losing her job, the state's UI system clawed back her benefits.
- Samantha Gordon
Person
She lost her apartment, she had to move in with relatives out of state, and she was unable to care for her ailing mother. Last month, in January 2024, nearly a decade later, 3,000 of those 40,000 people were awarded some restitution of approximately $2,000 each.
- Samantha Gordon
Person
As you can imagine, these incidents are not isolated. There are more, due to the centrality of the public sector in some of the most difficult and vulnerable moments of our lives.
- Samantha Gordon
Person
We urge the committee to set robust standards that put people first in the design and deployment of technology in the public sector. People have iterated a lot of principles today. Many of them are great.
- Samantha Gordon
Person
We would echo them: ensuring a clear role for workers and the public in the design, development, and deployment of technology; developing robust standards to reduce harm prior to the contracting or adoption of technology; investing in the public sector workforce and ensuring we have the technical capacity and ability to provide a human in the loop for these important processes and decisions; requiring transparency and accountability for these systems; developing clear processes for oversight, monitoring, and evaluation of these tools; and outlining clear benchmarks to end contracts when the technology proves harmful or ineffective.
- Samantha Gordon
Person
The public sector has been deliberately starved of resources for nearly 50 years, and the appetite for technology that reduces backlogs, speeds up complex processes, and improves the public's ability to efficiently utilize government services is rightly a really urgent priority. But we must not fall prey to speed over quality, or to the science fiction that Professor Ho talked about.
- Samantha Gordon
Person
We encourage the committee and all of our senators to consider these examples and principles to strengthen our resilience against products and practices that could further exacerbate problems in our communities.
- Samantha Gordon
Person
We think that this strengthens our resolve against harmful uses of technology and allows us to open up to new opportunities, technologies and approaches that are reliable, safe, and beneficial for all Californians. Thank you.
- Bill Dodd
Person
Thank you.
- Kristina Bas Hamilton
Person
Good afternoon. Good morning, Senators. Still morning. Feels like it's almost midnight. I'm Kristina Bas Hamilton, representing Economic Security California. Economic Security California is a proud co-sponsor of SB 1047, which is Senator Wiener's bill.
- Kristina Bas Hamilton
Person
We specifically are very enthusiastic about one of the elements of the bill, which is the creation of CalCompute, basically a public option that would create a public cloud, providing an alternative to the current model, which is controlled by very few players who essentially have control of the marketplace.
- Kristina Bas Hamilton
Person
So creating public alternatives, not necessarily to the private cloud, but alternatives that will allow for investment by researchers and access to information that folks would otherwise be unable to access, is very important. And thank you for having this hearing.
- Bill Dodd
Person
Thank you. Looks like the last speaker.
- Mitch Steiger
Person
All right. Thank you, Chairs Dodd and Padilla and staff. Mitch Steiger with CFT. We're a union of over 120,000 educators and classified professionals across California.
- Mitch Steiger
Person
And we would largely echo the comments of Ms. Bernhardt and other labor advocates who have already spoken, especially their comments around the need to include workers in the decisions over what sort of AI to introduce, how to introduce it, and how training should work.
- Mitch Steiger
Person
It's a really key point that we think needs to drive the discussion. Specific to the education world, AI is already everywhere. It's all over the education system. Workers have barely been involved in these decisions, and it's created a lot of problems.
- Mitch Steiger
Person
Probably the most common use of AI in education has been in a variety of software that's used to assess students and then spit out recommendations on what the teacher should do. But in many cases, this software is very proprietary. The teacher isn't really involved at all.
- Mitch Steiger
Person
The student goes in, uses an iPad or some other sort of technology, and answers questions that sometimes the teachers don't even have access to. And then it spits out recommendations: here is what this group of students needs.
- Mitch Steiger
Person
That's what they need. And the teacher often doesn't really have a whole lot of discretion over whether or not that makes sense, or whether they agree with the conclusions of the AI. And the AI is advancing quickly, far beyond our capacity to deal with it.
- Mitch Steiger
Person
And so we would strongly urge that as we look at this technology and at how it gets introduced in the education system, we focus on this concept that was raised by a lot of the speakers of making this human centered: this technology works where it takes a task off the desk of a very busy teacher that could very easily be done by technology.
- Mitch Steiger
Person
Whether it's data entry (it pains me to say that, as someone who used to work in data entry, a field that probably employs far fewer people than it did back in my day), we want teachers to be spending more time with children and educating them, and less time doing rote busywork that technology can do a great job with.
- Mitch Steiger
Person
But where this technology seeks to replace all the best things that teachers and classified workers do, and all the advantages of having an actual human being there interacting with the student, someone who can learn that maybe the problem isn't long division, the problem is things at home, and can work with the student through that, those are things that AI won't be able to do.
- Mitch Steiger
Person
What this technology needs to do is marry those two concepts, the advantages of technology and the advantages of human beings, to create the best education system that we can.
- Mitch Steiger
Person
So we're definitely not anti-technology. We're just against introducing technology that runs the risk of eliminating the greatest benefits of having good education workers there in the system, so that we can have the best system overall. Thanks.
- Bill Dodd
Person
Thank you. Well, that concludes public comment. Before we adjourn, Senator Padilla, any closing comments?
- Steve Padilla
Legislator
Thank you for your leadership, Mr. Chairman.
- Bill Dodd
Person
Okay. I'd like to finish by saying a very big thank you to all the speakers that we had today. Your testimony was amazing. I think, if I'm any reflection of the rest of the Senators on this panel and staff, I've learned a lot today, and candidly I probably have just as many new questions as we continue to think through all these different things.
- Bill Dodd
Person
But this is a great start and it's been a great conversation. And for those of the public that joined us, thank you for your participation. This joint informational hearing is now adjourned.