If elected, congressional candidate Scott Wiener wants to bear down on tech regulations, starting by making the net neutrality and AI safety regulations he passed in California national law.
In his tech platform, shared first with Mission Local, Wiener lays out plans for more stringent regulations on AI, social media and internet providers. His competitors, tech centimillionaire Saikat Chakrabarti and Supervisor Connie Chan, have also called for regulation.
In 2017, one of the first bills Wiener took on as San Francisco’s newly elected state senator was net neutrality, which requires internet service providers to allow all websites to load at the same speed.
The Trump administration had just rolled back federal net neutrality regulations, but Wiener passed a California version after what he described as a “brutal fight” where “all of the telecom and cable companies were at war with us.”
Now, he wants to make that law, and others, national.
“As the largest state in the heartland of tech innovation, we have an outsized impact when we do step in, but in an ideal world we would have a federal data privacy law, a federal approach to social media regulation, federal AI standards,” Wiener said.
Congress has been reluctant to regulate tech in recent decades, but Wiener is hopeful that will change.
“I think there’s a growing chorus, especially of newer members, who understand that we need to make sure that tech is serving the public, and not vice versa,” Wiener said.
These days, Wiener’s best-known technology bills are perhaps the ones related to his work on AI safety.
He first started working on the issue in 2023, after meeting San Franciscans who were concerned that AI models might soon conduct cyberattacks, create bioweapons or go rogue and do things like manipulate people with personalized propaganda, crash financial markets, or seize control of infrastructure.
So Wiener introduced SB 1047, one of the first bills to take a crack at AI regulation.
It would have required large AI companies to create safety and security protocols and included provisions creating legal liability for companies if their models caused harm, but the bill was opposed by most AI companies, and ended up being vetoed by Gov. Gavin Newsom.
In 2025, though, Wiener got SB 53 over the hump. It requires that AI companies make their safety plans public, and report any incidents, such as a cyberattack, to the state. It also created whistleblower protections for employees who come forward about risks.
Much of Wiener’s federal AI-safety platform has similar provisions on transparency and safety.
On the transparency front, he wants large AI developers to make the safety measures they take public, including “model specs,” which explain how an AI model has been trained to behave.
Wiener also wants third parties to ensure that developers comply with minimum safety standards, such as testing models for dangerous capabilities. Most AI companies say they do safety testing but have been reluctant to agree to independent verification of their results.
If AI labs fail to follow these safety guidelines, Wiener wants to issue large fines — and if AI companies decide they’re rich enough to just pay the fines, then he wants courts to be able to issue an injunction to stop model development.
Wiener’s opponents in the election are also fans of AI regulation.
Chakrabarti, who has worked to make the Democratic Party more progressive, also wants expanded whistleblower protections and safety testing.
His vision involves creating a Federal AI Safety Administration, which would conduct safety testing. It would also issue licenses giving AI labs permission to train their models after they’ve agreed to comply with federal safety standards.
Chan has not released a formal tech platform yet, but in Mission Local’s Q&A on tech, which Wiener declined to participate in, Chan expressed skepticism that AI will be as transformative as many hope and fear.
“I am skeptical about the technology itself and the rate of growth, which might turn out to be a tech bubble,” she said. “I am questioning how sophisticated that technology really is becoming.”
Here are other issues discussed in Wiener’s tech platform:
Data centers
Data centers consume large amounts of energy, sometimes creating strain on infrastructure. Chakrabarti and Wiener agree that AI companies should foot the bill for expanding capacity, and that utility companies should not be allowed to hike prices in response.
But they have differing views on how to deal with the fact that the increased energy demand from data centers has led to an increase in fossil-fuel use.
Chakrabarti wants to require that new AI data centers use clean energy. Wiener, meanwhile, wants to create incentives for new data centers to power themselves through their own green microgrids, and use fees assessed on data centers to fund the clean energy transition.
Data centers also use significant amounts of water for cooling. Wiener wants them to use recycled water, while Chakrabarti is calling for a closed-loop liquid cooling requirement, where water is recirculated.
At a recent debate, the candidates were split over a data center moratorium, which Sen. Bernie Sanders and Rep. Alexandria Ocasio-Cortez have called for. Wiener does not support a moratorium but Chan and Chakrabarti do. Chakrabarti wants to use the moratorium as leverage to negotiate regulations with companies.
AI and unemployment
AI’s growing facility in tasks such as writing, coding and analyzing data is leading to fears that many white-collar jobs could soon be replaced.
To deal with that scenario, Wiener is calling for an expansion of the social safety net, including expanded unemployment benefits and Medicare for All.
He pointed to polling that shows Europeans are less anxious about AI than Americans.
“I think the reason for that is, Europeans know that if their job goes away, whether because of AI or any other reason, they will still have healthcare. They’ll still have childcare, and they’ll have the space to be able to figure out what’s next,” Wiener said.
Chakrabarti is concerned about worker displacement, too, and also calls for Medicare for All. In addition, he wants the government to stand up new industries and guarantee everyone a federal job in infrastructure, clean energy, teaching, and more.
Chan, meanwhile, said her priority with AI regulation would be making sure that massive job displacement never happens. “All these should be regulated accordingly, prioritizing safety and the workforce; in this case, not taking away jobs,” Chan said.
AI and taxation
Wiener’s plan calls for an AI tax, both on AI companies themselves and on industries that successfully use AI to increase their profits.
Chakrabarti’s platform also calls for raising taxes on AI companies, emphasizing that the tax should not be on these companies’ payrolls and wages, as many currently are, but instead on profits and wealth.
“As AI starts replacing labor, this bias becomes increasingly costly,” his platform says.
Federal funding for AI research
Wiener wants to expand federal funding for AI research, including “alignment” research that tries to ensure that AI models won’t go rogue — that they have the same values as humans, and will heed instructions.
This part of his proposal builds on a portion of SB 53 that created CalCompute, which provides computing power for research, startups, and public institutions.
Economic Security California Action vice president Teri Olle, who worked with Wiener on the CalCompute portion of SB 53, said she is very excited about the prospect of greater public involvement in AI development, rather than having the future of the technology be dictated by the executives of a few large companies.
“There should be investment and projects and attention dedicated to AI in the public’s interest and for projects that might not be the things that are going to be immediately monetizable by this small handful of people,” she said.
Chakrabarti’s AI platform also calls for “publicly owned AI,” including a National AI Lab that would do safety research.
“The most important questions in AI development,” Chakrabarti’s platform states, “must not be answered exclusively by companies whose primary obligation is to their shareholders.”
Social media
Much of Wiener’s social media platform focuses on creating new requirements for social media algorithms. That includes mandating that companies explain how their algorithms work, allowing people to opt out of having an algorithm determine what content they see and banning undisclosed boosting of paid content.
It also suggests auditing social media companies whose platforms have hateful or extremist content and prohibiting platforms from profiting from such content.
Chakrabarti and Chan have yet to release detailed proposals on how they want to regulate social media platforms. But in the April Q&A on tech both Chan and Chakrabarti said they were in favor of altering Section 230, a law that prevents social media companies from being held liable for what their users post.
“If you are going to be in the business of publishing information, then you ought to also be in the business of fact-checking. That is how I view social media,” Chan said.
Chakrabarti was a bit more cautious, but said that “there are certain kinds of harms that we should hold tech companies and social media companies responsible for.”
Deepfakes
All three candidates support cracking down on AI misinformation and deepfakes by requiring that AI-generated content be disclosed, and allowing people who weaponize that content to be sued or prosecuted.
Data privacy
Wiener’s platform also calls for a federal data privacy law that would allow people to know what data companies are collecting on them, plus the right to delete the data or opt out of having it sold.