District 1 supervisor Connie Chan (left), former software engineer and Capitol Hill staffer Saikat Chakrabarti. Photos by Yujie Zhou.

This is part of a series of interviews with front-runners in the race to replace Nancy Pelosi as the representative for California’s 11th congressional district. Read about the candidates’ foreign-policy views here.


Three front-runners are vying to represent San Francisco as Nancy Pelosi’s replacement in Congress: District 1 supervisor Connie Chan, former software engineer and Capitol Hill staffer Saikat Chakrabarti, and California State Sen. Scott Wiener. 

The job will require a good deal of tech-policy chops, and Mission Local decided to ask the three contenders how they would approach Silicon Valley if they go to Washington, D.C.

All agreed to be interviewed for Mission Local’s previous Q&A on foreign policy, where the candidates spoke about Gaza, Taiwan, and more. But only Chan and Chakrabarti agreed to participate in this interview, which focuses on AI, regulating Silicon Valley giants, cryptocurrency, and San Francisco’s own tech policies.

In some matters, Chan and Chakrabarti’s platforms overlap quite a bit.

Both are in favor of more regulation: limiting the power of tech monopolies, protecting jobs from AI, and removing the legal protections that have kept tech firms from being held liable for some of the harms enabled by their products. Both agreed that cryptocurrency should be restricted.

The two begin to diverge when it comes to the role of tech in government itself.

Chakrabarti, who became a centimillionaire after being one of the founding engineers at Stripe, is a fan of creating a federal digital currency. Chan expressed doubts that such a currency could be created in a way that would safeguard privacy and consumer protection.

Chan’s tech platform focuses on regulating the way that tech is being deployed in the workplace and other sectors. Chakrabarti wants to create “a society with abundance for all,” echoing a phrase and alluding to a movement that last year was a darling of Democratic Party policymakers.

He seeks public ownership and control over AI through public-equity stakes in AI companies and by converting financially troubled AI enterprises (and financial institutions) into public utilities.

Wiener’s legislative record as a state senator offers clues as to what he might do in Congress.

He’s one of several state legislators who have moved to regulate tech, and he tends to work with tech leaders in doing so.

In March of this year, Wiener and Y Combinator CEO Garry Tan teamed up to promote SB 1074, a bill that, if passed, would make it easier to sue the largest tech companies (those with a market capitalization greater than $1 trillion and over 100 million monthly users in the United States) for anti-competitive behavior.

In 2025, Wiener successfully passed SB 53, which requires large AI model developers to publish information on their websites about, among other things, how they plan to manage the risks posed by their products. An earlier version of the legislation, SB 1047, was vetoed by the governor in 2024.

SB 53 also established CalCompute, a publicly owned cloud computing cluster that would be developed within the University of California system and could be used for AI research and development. That’s still awaiting state funding. 

Each candidate was interviewed separately, in person. Their answers have been edited for brevity and clarity.


AI development is accelerating rapidly. How should Congress make sure regulation keeps pace?

Chan: It is the AI industry claiming that the technology is rapidly developing. I am skeptical about the technology itself and the rate of growth, which might turn out to be a tech bubble. There are a lot of doubts both from the people in the industry and from investors who are watching it.

I am questioning how sophisticated that technology really is becoming. AI is really automation and algorithms, maybe on steroids. What is happening remains to be seen.

When I’m in Congress, my approach to regulating any technology, inclusive of AI, is to look at the technology itself and regulate accordingly. AI in the space of healthcare. AI in the space of entertainment industries. AI in education. AI in the workforce. All these should be regulated accordingly, prioritizing safety and the workforce — in this case, not taking away jobs.

If we make sure that these are our guiding principles in regulating AI, then I think we can go a long way.

Chakrabarti: You have to have public ownership and public control of this technology to keep pace. 

Artificial intelligence is a technology that’s threatening to wipe out half of all of our jobs. It will have a profound impact in San Francisco, especially with all the white-collar jobs here. It might even wipe out humanity itself — that’s what a lot of tech leaders are saying. The regulations that are being proposed are nothing near the scale of the problem that we’re facing.

I want to be clear, I don’t believe AI itself is good or evil. The question in front of us is, will this technology be for the benefit of all humanity? Or is it going to just further the consolidation of wealth and power into the hands of a very few?

Are we going to be looking at a dystopia where a few powerful people have all the wealth and power, and everyone else is in a permanent underclass — which is a term they use in tech here today? Or could we actually harness this power to create a society with abundance for all, a “Star Trek” future rather than a “Mad Max” future?

First, we need to make sure that the productivity gains from AI are going to workers, not just the CEOs. If you look at how automation entered manufacturing in Germany, they have labor, corporations and government sit on the boards of their major auto manufacturers, and they make five- and 10-year industrial plans.

The result of automation there was their workers started to make double the wages of their American counterparts. They made more cars per capita, and had more automation.

Second, we need to actually be looking at a public option for AI. We should be treating computing power as a utility. In some industries, such as insurance underwriting or certain financial services, there might not be that much work required at all to provide those services.

What’s happening is those services are just being consumed by AI monopolies who are taking up more and more of our GDP. Instead of more and more of our economy being held in the hands of a few monopolies, these should be provided as public services by the government because they’re not going to require any work. 

Third, we need to actually have a federal jobs guarantee. There are tons of types of work that are not going to be affected by AI. Caretakers, educators — all kinds of work where we have massive job shortages — we have no training programs, no jobs guarantee. We need to have an actual retraining program with a job guaranteed at the end.

The other thing is, we should actually be having public equity stakes in these AI companies. LLMs are essentially supercharging the expansion of capital. We need to have a way for the benefits of that capital to come back to humanity, not just getting consolidated further and further. LLMs are trained off of the sum total of human labor, so humanity should have a stake in it.

If Congress passes a national AI regulatory framework, should it preempt state laws like California’s AI regulations, or should states be allowed to go further?

Chan: Especially when it comes to technology, I am always going to be in the space of respecting local control in terms of policy.

In AI regulation, we should not preemptively stop local governments like San Francisco or state governments like California from being able to also impose their own regulations that can be more restrictive and more on target in regulating upcoming technology.

Chakrabarti: Legally, federal law preempts state law. A national AI framework should keep space in it to allow states to create their own regulatory frameworks that go further. I think states are great innovation labs for regulation. But regulations alone are not going to be the thing that actually tackles AI. A lot of this is about control and ownership, and that does have to happen at the federal level.

Should Congress create a new federal liability standard that makes tech companies responsible for certain harms caused by AI-generated content on their platforms?

Chan: Not just around AI, but also around technology, social media content, and so much more. Congress absolutely should establish a framework for consumer safety and consumer protection.

Chakrabarti: Yes, they should. We currently have no liability at all for AI companies or tech companies whose platforms have the potential for harm. They should be responsible for certain kinds of harm on their platforms.

Would you vote to narrow Section 230 protections for large social media platforms?

Section 230 of the Communications Decency Act of 1996 is a U.S. law that protects online platforms from being held legally responsible for what users post, while allowing them to moderate content — effectively separating these platforms from other publishers, like newspapers, which are held to a different legal standard. Supporters say Section 230 enables free speech. Critics argue that it lets platforms avoid accountability for most of the harms those platforms enable.

Chan: If you are going to be in the business of publishing information, then you ought to also be in the business of fact-checking. That is how I view social media; that’s also their responsibility. 

There has been a long debate about whether social-media platforms should function similarly to news outlets. My expectation is that we should amend this section to specifically tailor it to social-media platforms and how they should be regulated.

While I am always going to be an advocate for First Amendment rights, I’m also going to be a strong advocate against hate speech, language that promotes violence, and predatory language aimed at our kids and people online.

Chakrabarti: Yes. I’m not in favor of repealing Section 230, because I do think that would have a profound impact on free speech on social media platforms. But I do believe that there are certain kinds of harms that we should hold tech companies and social media companies responsible for.

Would you vote for legislation like the American Innovation and Choice Online Act that would prohibit dominant platforms such as Google, Amazon, Apple or Meta from preferencing their own products, even if that could force parts of those companies to be separated?

Chan: I’m always in favor of breaking up monopolies. I truly am a supporter of small businesses and diversity in terms of options, platforms and competition.

Chakrabarti: Yes. These massive platforms are using their monopoly and platform power to preference either their own vertically integrated products or the products that pay the most. They’re running a pay-to-play system.

Regulation has to address how these search algorithms actually work, because consumers search on Amazon thinking the best products appear at the top. What they’re actually getting are the products that paid the most to appear at the top of the search results.

Do you believe cryptocurrency should become a mainstream part of the U.S. financial system? Or do you think Congress should be working to limit its role?

Chan: I absolutely think that we should limit crypto. We haven’t even done a good job in regulating the existing banking industry. If crypto becomes another alternative system, it can easily be abused for illegal purposes, potentially enabling criminal activity.

Chakrabarti: No, it should not become a mainstream part of the U.S. financial system. Right now what we’re seeing with cryptocurrency is a massive amount of corruption. Especially with this president, [who] is openly trading cryptocurrency as a way to essentially engage in quid pro quo corruption. Foreign governments will buy Trump’s crypto coin in exchange for favorable trade agreements and favorable tariff deals.

What we need to be doing instead is regulating cryptocurrency and having it follow the same Know Your Customer banking standards. I’m a supporter of Elizabeth Warren’s Digital Asset Anti-Money-Laundering Act.

We need to make sure that currency is in the purview of the U.S. government and the Federal Reserve, and we don’t lose that power as a nation state.

Would you support legislation authorizing the Federal Reserve to issue a U.S. central bank digital currency?

Chan: The United States has yet to have a good grasp of regulating its existing banking industry and safeguarding people’s privacy and consumer protection, let alone digital currency. I am not confident that we have the capacity to ensure consumer protection yet.

Chakrabarti: Yes, I am a fan of a federal digital currency. I believe that would make a lot of things simpler. I don’t believe it should be required. I still believe cash should exist. But I’m a big fan of things like public banking.

Having something like a U.S. digital currency would make it easier for us to do things like deposit money into people’s bank accounts, if they have a public bank account, when we’re doing things like relief because of COVID-19, or something like a child poverty tax credit. 

I believe we need to have the federal government play more of a role in banking in general. Not only should we have public banking for consumers, we should also have public banking that directly saves funds and finances affordable housing in our country.

We used to be a nation that built far more public digital banking services outright. The Automated Clearing House, Fedwire — these are things that the federal government built. We should go back to that kind of a stance, where the federal government is directly building financial services.

I also think we should have public debit cards, so that our small businesses aren’t dying from credit-card fees.

If you could create one piece of legislation to make tech better, what would it be?

Chan: I want to do many things, but what I will prioritize at this moment is really the public interfacing of ChatGPT, particularly with our youth — to understand how ChatGPT is being used.

I understand that it’s already in a lot of consumer spaces, but having it potentially used and tested on young people in learning environments, and potentially in healthcare treatment settings — these are all my concerns.

Chakrabarti: It would be a way to get toward a public option for AI. Legislation that we’ve already done significant work on at my think tank could become relevant: There’s potentially a bubble in AI right now that might pop.

If that bubble pops, we’re going to see AI companies and banks coming to the government for a bailout. Instead of providing a bailout, I believe the government should buy out these companies and banks for pennies on the dollar, convert them into public utilities, and then actually give us, the people, control over where this technology goes and how we can use it, rather than having it be in the hands of a few powerful people at the top.

Learning from the past, should San Francisco have done anything differently regarding Uber, Twitter, Airbnb, DoorDash, or Waymo?

Chan: We should’ve really regulated Uber, Lyft and Waymo by having additional regulations and layering a permit before they can operate here.

I wish that the city could secure ongoing commitments from these companies within a framework for how they should be regulated and how they should contribute to the city and local economy — to understand the impacts they bring when they’re on our roads, so that we could actually have conversations about how we mitigate those impacts.

Chakrabarti: We have seen how the previous tech wave gave all these tax breaks to try to draw these companies in, [and] led to a lot of displacement and a lot of the affordability crisis that we have in our city today. 

Instead, with this next tech wave coming our way with AI, which is going to be at a scale even larger than what we saw last time, we need to actually make sure that the public benefits from it, not just the tech companies at the top.

We have to get away from this mindset where we believe we are at the mercy of the tech companies. The tech companies should be working for us for the good of humanity.

I don’t think we should have done the massive tax break we gave for Twitter to come into downtown.

A lot of people are talking about AI in San Francisco, but not everyone seems to agree on what that is. Could you explain to us what AI is, and how you would regulate it?

Chan: I think the vision these AI companies seem to have is to have this so-called artificial intelligence do some of the analytical tasks for humans, but I know that we’re not there yet.

The principles that I approach AI and any technology with are safety and what kind of impact it has on our workforce. For example, in healthcare, like telehealth, the question is: If they’re using this technology to do diagnostics, how do you safeguard the technology to make sure that it is safe and that it is not replacing our medical staff and experts? 

Chakrabarti: AI has many different definitions right now. In the current world, when people are talking about AI, they’re really talking about these frontier large-language models. Tools like ChatGPT, Claude, Gemini.

AI is partly the algorithm, the actual neural network that runs these models. But it’s also the vast amount of physical infrastructure powering it.

Fundamentally, the problem is this technology is potentially going to define the future of society, and a handful of CEOs get to determine that future for everybody.

I don’t think that makes sense. It’s absurd to me that a handful of people get to decide the future of our society. A decision like that should be in the hands of we, the people.

Regulations that fall into the bucket of just trying to slow this technology down are not going to be sufficient. It has to be an actual vision for how you use this technology for the good of mankind. The details of that are basically what I laid out in the first question.


San Franciscans will have their first chance to weigh in on their next congressperson on June 2. The top two vote-getters in that “jungle primary” will advance to the general election on Nov. 3, regardless of party affiliation.

Yujie is a staff reporter covering city hall with a focus on the Asian community. She came on as an intern after graduating from Columbia University's Graduate School of Journalism and became a full-time staff reporter as a Report for America corps member and has stayed on. Before falling in love with San Francisco, Yujie covered New York City, studied politics through the “street clashes” in Hong Kong, and earned a wine-tasting certificate in two days. She's proud to be a bilingual journalist. Find her on Signal @Yujie_ZZ.01
