House / Circled
Rep. Fiefia, Doug
This bill requires companies that build powerful AI systems (artificial intelligence: computer programs that can think and learn) to tell the public about safety risks. It also protects workers who report dangerous AI problems from being fired.
If you work at a tech company in Utah and discover that your company's AI system could be used to create harmful content for kids, you could report this to the government, and your employer could not fire you for speaking up.
People deserve to know if AI systems could be dangerous, and workers shouldn't get punished for reporting safety problems.
This creates necessary guardrails for rapidly advancing AI technology while encouraging internal safety reporting through robust whistleblower protections.
Making companies publish safety information could hurt their business and slow down AI innovation.
The regulatory framework may impose significant compliance costs while potentially revealing proprietary safety methodologies to competitors.
AI developers and tech companies, employees at AI companies, children and families using AI products, Utah's Office of Artificial Intelligence Policy staff
Utah PTA
Parents United
Libertas Institute
Utah Bankers