
H.B. 286 Artificial Intelligence Transparency Amendments

LONG TITLE
General Description:
This bill enacts the AI Transparency Act relating to transparency and whistleblower protections for frontier artificial intelligence models.
Highlighted Provisions:
This bill:
- defines terms;
- requires developers of certain artificial intelligence models to create, implement, and publish public safety and child protection plans;
- requires developers to publish summaries of risk assessments for certain artificial intelligence models;
- prohibits developers from making materially false or misleading statements about covered risks;
- requires developers to report certain safety incidents to the Office of Artificial Intelligence Policy (office);
- requires the office to provide annual assessments and legislative recommendations regarding regulation of certain artificial intelligence models;
- establishes civil penalties for violations;
- provides whistleblower protections for employees who report safety concerns of certain artificial intelligence models;
- establishes remedies for employees who suffer adverse action for whistleblower activities;
- creates the AI Transparency Enforcement Restricted Account to fund enforcement activities; and
- provides a severability clause.
Money Appropriated in this Bill:
None
Other Special Clauses:
None
Utah Code Sections Affected:
ENACTS:
13-72b-101, Utah Code Annotated 1953
13-72b-102, Utah Code Annotated 1953
13-72b-103, Utah Code Annotated 1953
13-72b-104, Utah Code Annotated 1953
13-72b-105, Utah Code Annotated 1953
13-72b-106, Utah Code Annotated 1953
13-72b-107, Utah Code Annotated 1953
13-72b-108, Utah Code Annotated 1953
13-72b-109, Utah Code Annotated 1953
13-72b-201, Utah Code Annotated 1953
13-72b-202, Utah Code Annotated 1953
13-72b-203, Utah Code Annotated 1953
13-72b-204, Utah Code Annotated 1953
Be it enacted by the Legislature of the state of Utah:

Section 1. Section 13-72b-101 is enacted to read:
Chapter 72b. AI Transparency Act
Part 1. Artificial Intelligence Transparency and Child Protection
13-72b-101. Definitions. As used in this chapter:
(1) "Affiliate" means a person controlling, controlled by, or under common control with a specified person, directly or indirectly, through one or more intermediaries.
(2) "Artificial intelligence model" means an engineered or machine-based system that varies in the system's level of autonomy and that can, for explicit or implicit objectives, infer from the input the artificial intelligence model receives how to generate outputs that can influence physical or virtual environments.
(3) (a) "Catastrophic loss" means: (i) the death or serious bodily injury of more than 50 individuals; or (ii) damage to property, or loss of property, exceeding $1,000,000,000.
(b) "Catastrophic loss" does not include the loss of value of equity.
(4) (a) "Catastrophic risk" means a foreseeable and material risk that a frontier developer's development, storage, use, or deployment of a frontier model will materially contribute to a catastrophic loss in a single incident by: (i) providing assistance in creating or releasing a chemical, biological, radiological, or nuclear weapon; (ii) engaging in a cyberattack, or conduct that, if committed by an individual, would constitute murder, assault, extortion, or theft, including theft by deception, under Utah law, without meaningful human oversight, intervention, or supervision; or (iii) evading control of the frontier developer or user.
(b) "Catastrophic risk" does not include a foreseeable and material risk from any of the following: (i) information that a frontier model outputs if the information is otherwise publicly accessible in a substantially similar form from a source other than a foundation model; (ii) lawful activity of the federal government; or (iii) harm caused by a frontier model in combination with other software if the frontier model did not materially contribute to the harm.
(5) "Child protection plan" means a documented technical and organizational protocol to manage, assess, and mitigate child safety risks.
(6) "Child safety incident" means an occurrence in which a covered chatbot, when interacting with a minor, engages in behavior that, if engaged in by a human, would be considered to intentionally or recklessly: (a) cause death or bodily injury to the minor; or (b) cause severe emotional distress to the minor.
(7) "Child safety risk" means a material and foreseeable risk that a frontier developer's foundation model, when used as part of a covered chatbot operated by the frontier developer, will engage in behavior when interacting with a minor that, if the behavior had been engaged in by a human, would be considered to intentionally or recklessly: (a) cause death or bodily injury to the minor, including as a result of self-harm; or (b) cause severe emotional distress to the minor.
(8) "Covered chatbot" means a service that: (a) allows an ordinary person to have conversations in which humanlike responses are generated by a foundation model; (b) is foreseeably likely to be accessed by minors; and (c) has at least 1,000,000 monthly active users.
(9) "Covered risk" means a catastrophic risk or a child safety risk.
(10) "Critical safety incident" means any of the following: (a) unauthorized access to, modification of, inadvertent release of, or exfiltration of, the model weights of a frontier model; (b) the death of, or serious injury to, more than 50 people or more than $1,000,000,000 in damage to, or loss of, property resulting from the materialization of a catastrophic risk; (c) loss of control of a frontier model that: (i) causes death or bodily injury; or (ii) demonstrates materially increased catastrophic risk; or (d) a frontier model that uses deceptive techniques against the frontier developer to subvert the controls or monitoring of the frontier developer outside of the context of an evaluation designed to elicit this behavior and in a manner that demonstrates materially increased catastrophic risk. (11) (a) "Deploy" means to make a frontier model available to a third party for use, modification, copying, or combination with other software. (b) "Deploy" does not include making a frontier model available to a third party for the primary purpose of developing or evaluating the frontier model. (12) "Foundation model" means an artificial intelligence model that is all of the following: (a) trained on a broad data set; (b) designed for generality of output; and (c) adaptable to a wide range of distinctive tasks. (13) (a) "Frontier developer" means a person who has used, or initiated the use of, a quantity of computing power of at least 10^26 integer or floating-point operations to train a frontier model, including computing used for the original training run and for any subsequent fine-tuning, reinforcement learning, or other material modifications. (b) "Frontier developer" does not include an accredited college or university to the extent the college or university is developing or using frontier models exclusively for academic research purposes. (14) "Frontier model" means a foundation model that was trained using a quantity of computing power of at least 10^26 integer or floating-point operations, including computing for the original training run and for any subsequent fine-tuning, reinforcement learning, or other material modifications the developer applies to a preceding foundation model. (15) "Large frontier developer" means a frontier developer who together with the frontier developer's affiliates, had annual revenue of at least $500,000,000 in the preceding calendar year. (16) "Minor" means an individual younger than 18 years old. (17) "Model weight" means a numerical parameter in a frontier model that is adjusted through training and that helps determine how inputs are transformed into outputs. (18) "Office" means the Office of Artificial Intelligence Policy created in Section 13-72-201 . (19) "Property" means tangible or intangible property. (20) "Public safety plan" means a documented technical and organizational protocol to manage, assess, and mitigate catastrophic risks. (21) "Safety incident" means a child safety incident or a critical safety incident.

Section 2. Section 13-72b-102 is enacted to read:
13-72b-102. Public safety plan for catastrophic risks -- Requirements.
(1) A large frontier developer shall write, implement, comply with, and clearly and conspicuously publish on the large frontier developer's internet website a public safety plan that describes in detail how the large frontier developer:
(a) incorporates national standards, international standards, and industry-consensus best practices into the public safety plan;
(b) defines and assesses thresholds used by the large frontier developer to identify and assess whether a frontier model has capabilities that could pose a catastrophic risk, which may include multiple-tiered thresholds;
(c) applies mitigations to address the potential for catastrophic risks based on the results of assessments undertaken pursuant to Subsection (1)(b);
(d) reviews assessments of catastrophic risk and adequacy of mitigations of catastrophic risk as part of the decision to deploy a frontier model or use the frontier model extensively internally;
(e) uses third parties to assess the potential for catastrophic risks and the effectiveness of mitigations of catastrophic risks;
(f) revisits and updates the public safety plan, including any criteria that trigger updates and how the large frontier developer determines when the large frontier developer's frontier models are substantially modified enough to require disclosures pursuant to Section 13-72b-104;
(g) implements cybersecurity practices to secure unreleased frontier model weights from unauthorized modification or transfer by internal or external parties;
(h) identifies and responds to critical safety incidents;
(i) institutes internal governance practices to ensure implementation of the processes described in this Subsection (1); and
(j) assesses and manages catastrophic risk resulting from the internal use of the large frontier developer's frontier models, including risks resulting from a frontier model circumventing oversight mechanisms.
(2) If a large frontier developer makes a material modification to the large frontier developer's public safety plan, the large frontier developer shall clearly and conspicuously publish the modified public safety plan and a justification for that modification within 30 days after the day on which the large frontier developer makes the material modification.

Section 3. Section 13-72b-103 is enacted to read:
13-72b-103. Child protection plan -- Requirements.
(1) A large frontier developer that operates a covered chatbot shall write, implement, comply with, and clearly and conspicuously publish on the large frontier developer's internet website a child protection plan that describes in detail how the large frontier developer:
(a) incorporates national standards, international standards, and industry-consensus best practices into the child protection plan;
(b) assesses potential for child safety risks;
(c) applies mitigations to address the potential for child safety risks based on the results of assessments undertaken pursuant to Subsection (1)(b);
(d) uses third parties to assess the potential for child safety risks and the effectiveness of mitigations of child safety risks;
(e) revisits and updates the child protection plan, including any criteria that trigger updates and how the large frontier developer determines when the large frontier developer's foundation models are substantially modified enough to require disclosures pursuant to Section 13-72b-104;
(f) identifies and responds to child safety incidents; and
(g) institutes internal governance practices to ensure implementation of the processes described in this Subsection (1).
(2) If a large frontier developer makes a material modification to the large frontier developer's child protection plan, the large frontier developer shall clearly and conspicuously publish the modified child protection plan and a justification for that modification within 30 days after the day on which the large frontier developer makes the material modification.

Section 4. Section 13-72b-104 is enacted to read:
13-72b-104. Publication requirements -- Frontier models and foundation models.
(1) A large frontier developer shall conspicuously publish on the developer's internet website summaries of the following before deploying a new or substantially modified foundation model as part of a covered chatbot operated by the developer:
(a) assessments of child safety risks conducted pursuant to the developer's child protection plan;
(b) the results of the assessments described in Subsection (1)(a);
(c) the extent to which third-party evaluators were involved in the assessments described in Subsection (1)(a); and
(d) other steps taken by the developer to fulfill the requirements of the child protection plan.
(2) A large frontier developer shall conspicuously publish on the large frontier developer's website summaries of the following before deploying a new frontier model or a frontier model that was substantially modified by the large frontier developer:
(a) assessments of catastrophic risks from the frontier model conducted pursuant to the developer's public safety plan;
(b) the results of the assessments described in Subsection (2)(a);
(c) the extent to which third-party evaluators were involved in the assessments described in Subsection (2)(a); and
(d) other steps taken to fulfill the requirements of the public safety plan with respect to the frontier model.

Section 5. Section 13-72b-105 is enacted to read:
13-72b-105. Prohibited conduct -- Redactions.
(1) (a) A frontier developer may not make a materially false or misleading statement or omission about covered risks from the developer's activities or the developer's management of covered risks.
(b) A large frontier developer may not make a materially false or misleading statement or omission about the large frontier developer's implementation of, or compliance with, the large frontier developer's public safety plan.
(c) A large frontier developer that operates a covered chatbot may not make a materially false or misleading statement or omission about the large frontier developer's implementation of, or compliance with, the large frontier developer's child protection plan.
(d) This Subsection (1) does not apply to a statement that was made in good faith and was reasonable under the circumstances.
(2) (a) When a frontier developer publishes documents to comply with this part, the frontier developer may make redactions to those documents that are necessary to protect: (i) the frontier developer's trade secrets; (ii) the frontier developer's cybersecurity; (iii) public safety; (iv) the national security of the United States; or (v) compliance with any federal or state law.
(b) If a frontier developer redacts information in a document pursuant to Subsection (2)(a), the large frontier developer shall: (i) describe the character and justification of the redaction in any published version of the document to the extent permitted by the concerns that justify the redaction; and (ii) retain the unredacted information for five years after the day on which the developer makes the redaction.

Section 6. Section 13-72b-106 is enacted to read:
13-72b-106. Safety incident reporting mechanism -- Rulemaking -- Annual report.
(1) The office may make rules in accordance with Title 63G, Chapter 3, Utah Administrative Rulemaking Act, to:
(a) establish a mechanism for a large frontier developer or a member of the public to report a safety incident; and
(b) establish alternate compliance procedures if substantially equivalent or stricter federal reporting requirements or guidance documents are established.
(2) A large frontier developer shall report a safety incident to the office within 15 days after the day on which the large frontier developer discovers the incident.
(3) A large frontier developer that discovers a critical safety incident that poses an imminent risk of death or serious physical injury shall disclose that incident within 24 hours to a law enforcement agency or public safety agency with appropriate jurisdiction based on the nature of the incident.
(4) A large frontier developer shall submit to the office a report summarizing assessments of catastrophic risk resulting from internal use of the large frontier developer's frontier models:
(a) at least once every three months; or
(b) pursuant to an alternate schedule if: (i) the large frontier developer requests the alternate schedule from the office in writing; and (ii) the office agrees to the alternate schedule.
(5) The office may transmit the reports described in Subsections (2) and (4) to the Legislature, the governor, the federal government, or appropriate state agencies, but may consider risks related to trade secrets, public safety, cybersecurity, or national security when transmitting reports.
(6) A report submitted under Subsection (2) or (4) may be classified as a protected record under Subsections 63G-2-305(1) and (2) if the requirements of Subsection 63G-2-309(1)(a)(i) are met.
(7) On or before November 1, 2027, and annually thereafter, the office shall prepare a report for the Business and Labor Interim Committee that includes recommendations for modifying this chapter as well as anonymized, aggregated information about reports received pursuant to this chapter, without including information that would compromise the trade secrets or cybersecurity of a frontier developer, public safety, or the national security of the United States or that would be prohibited by any federal or state law.
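
The reporting clocks in this section are concrete: 15 days to the office for any safety incident, and 24 hours to law enforcement for a critical safety incident posing an imminent risk of death or serious physical injury. Below is a minimal sketch of the deadline arithmetic, assuming calendar days and hours, since the bill does not say otherwise; the dates in the example are hypothetical.

# Illustrative sketch only. Deadline arithmetic for 13-72b-106(2)-(3),
# assuming calendar days/hours (the bill does not specify business days).
from datetime import datetime, timedelta

def reporting_deadlines(discovered: datetime, imminent_risk: bool) -> dict:
    deadlines = {
        # Subsection (2): report to the office within 15 days of discovery.
        "office_report_due": discovered + timedelta(days=15),
    }
    if imminent_risk:
        # Subsection (3): disclose to law enforcement within 24 hours.
        deadlines["law_enforcement_due"] = discovered + timedelta(hours=24)
    return deadlines

# Hypothetical incident discovered July 1, 2026 at 9:00 a.m.:
print(reporting_deadlines(datetime(2026, 7, 1, 9, 0), imminent_risk=True))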

Section 7. Section 13-72b-107 is enacted to read:
13-72b-107. Civil penalty.
(1) A large frontier developer that violates this part is subject to a civil penalty that does not exceed:
(a) for a first violation, $1,000,000; or
(b) for each subsequent violation, $3,000,000.
(2) A civil penalty under this section may be recovered in a civil action brought by the attorney general on behalf of the office.
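
The penalty schedule implies a simple cap on total exposure: $1,000,000 for the first violation plus $3,000,000 for each one after it. A one-function sketch, assuming every violation draws the statutory maximum (actual awards may be lower):

# Illustrative sketch only. Maximum exposure under 13-72b-107 if each
# violation draws the statutory cap.
def max_penalty_exposure(violations: int) -> int:
    if violations <= 0:
        return 0
    return 1_000_000 + 3_000_000 * (violations - 1)

print(max_penalty_exposure(3))  # 7,000,000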

Section 8. Section 13-72b-108 is enacted to read:
13-72b-108. AI Transparency Enforcement Restricted Account -- Creation -- Deposits into account -- Distribution.
(1) There is created within the General Fund a restricted account known as the "AI Transparency Enforcement Restricted Account."
(2) The account consists of:
(a) money collected by the attorney general from civil penalties, settlements, judgments, and other relief obtained in civil actions brought under Section 13-72b-107;
(b) appropriations made to the account by the Legislature; and
(c) interest and earnings on account money.
(3) The Division of Finance shall deposit money described in Subsection (2)(a) into the account.
(4) Upon appropriation by the Legislature, money in the account shall be distributed to the Office of the Attorney General for:
(a) investigations and enforcement of Part 1, Artificial Intelligence Transparency and Child Protection;
(b) attorney fees and litigation costs related to enforcement actions under this chapter;
(c) expert witnesses, consultants, and technical advisors with expertise in artificial intelligence safety and frontier models;
(d) specialized equipment, technology, and facilities necessary for enforcement activities;
(e) coordination with the Office of Artificial Intelligence Policy, federal agencies, and other state agencies; and
(f) other expenses related to the administration and enforcement of this chapter.

Section 9. Section 13-72b-109 is enacted to read:
13-72b-109. Transfer of frontier developer obligations -- Severability.
(1) If any provision of this chapter or the application of any provision to any person or circumstance is held invalid by a final decision of a court of competent jurisdiction, the remainder of this chapter shall be given effect without the invalid provision or application.
(2) The provisions of this chapter are severable.
(3) The duties and obligations imposed by this chapter are cumulative with any other duties or obligations imposed under other law and shall not be construed to relieve any party from any duties or obligations imposed under other law and do not limit any rights or remedies under existing law.

Section 10. Section 13-72b-201 is enacted to read:
Part 2. Public Safety and Child Protection Whistleblower Protections
13-72b-201. Definitions. As used in this part:
(1) "Adverse action" means to discharge, threaten, harass, or otherwise discriminate against an employee in any manner that affects the employee's employment, including: (a) compensation; (b) terms; (c) conditions; (d) location; (e) rights; (f) immunities; (g) promotions; or (h) privileges.
(2) "Employee" means an individual who performs a service for wages or other remuneration under a contract of hire, written or oral, express or implied, for a frontier developer.
(3) "Reporter" means an individual who provides information relating to a violation in accordance with Section 13-72b-202.

Section 11. Section 13-72b-202 is enacted to read:
13-72b-202. Procedure for disclosure -- Internal whistleblower process.
(1) To be a reporter for purposes of this part, an individual shall:
(a) reasonably believe that an act poses a specific and substantial threat to public health or safety or to the health or safety of a minor, or is a violation of Part 1, Artificial Intelligence Transparency and Child Protection; and
(b) provide information to the office: (i) in writing; and (ii) in accordance with procedures established by the office by rule made in accordance with Title 63G, Chapter 3, Utah Administrative Rulemaking Act.
(2) (a) Notwithstanding Title 63G, Chapter 2, Government Records Access and Management Act, and except as provided in Subsection (2)(b), the office may not disclose information that could reasonably be expected to reveal the identity of a reporter.
(b) Subsection (2)(a) does not limit the office's ability to present evidence to a grand jury or share evidence with witnesses or defendants in an ongoing criminal investigation.
(3) A large frontier developer shall provide a reasonable internal process through which an employee may anonymously report information if the employee believes in good faith that:
(a) the large frontier developer's activities pose a specific and substantial threat to public health or safety or to the health or safety of a minor; or
(b) the large frontier developer has violated Part 1, Artificial Intelligence Transparency and Child Protection.
(4) The process required by Subsection (3) shall include monthly updates to the reporting employee regarding the status of the investigation and actions taken in response to an anonymous report described in Subsection (3).
(5) (a) Except as provided in Subsection (5)(b), disclosures and responses under this section shall be shared with officers and directors of the large frontier developer at least once each quarter.
(b) If an employee alleges wrongdoing by an officer or director, Subsection (5)(a) does not apply with respect to that officer or director.

Section 12. Section 13-72b-203 is enacted to read:
13-72b-203. Reporter protected from adverse action -- Exceptions.
(1) A frontier developer may not take adverse action against an employee because of a lawful act of the employee, or a person authorized to act on behalf of the employee, to:
(a) provide information to the office in accordance with Section 13-72b-202, if the employee is a reporter;
(b) initiate, testify in, or assist in any investigation, judicial action, or administrative action based on or related to information provided to the office, if the employee is a reporter; or
(c) provide information through an internal reporting process established by the frontier developer.
(2) A frontier developer may not make, adopt, enforce, or enter into a rule, regulation, policy, or contract that would prevent an employee, or a person authorized on behalf of the employee, from taking any of the actions described in Subsection (1).
(3) An employee is not protected under this section if the employee:
(a) knowingly or recklessly makes a false, fictitious, or fraudulent statement or misrepresentation;
(b) uses a false writing or document knowing that, or with reckless disregard as to whether, the writing or document contains false, fictitious, or fraudulent information; or
(c) knows that, or has a reckless disregard as to whether, the disclosure is of information that is false or frivolous.
(4) Information provided pursuant to this section may be classified as a protected record under Subsections 63G-2-305(1) and (2) if the requirements of Subsection 63G-2-309(1)(a)(i) are met.

Section 13. Section 13-72b-204 is enacted to read:
13-72b-204. Remedies for employee bringing action.
(1) (a) An employee who alleges a violation of Section 13-72b-203 may bring an action for injunctive relief, actual damages, or both, in a court with jurisdiction under Title 78A, Judiciary and Judicial Administration.
(b) An employee may not bring an action under this section more than:
(i) four years after the day on which the violation of Section 13-72b-203 occurs; or
(ii) two years after the day on which facts material to the right of action are known or reasonably should be known by the employee.
(2) To prevail in an action under this section, an employee shall establish, by a preponderance of the evidence, that the employee suffered an adverse action because the employee, or a person acting on the employee's behalf, engaged or intended to engage in an activity protected under Section 13-72b-203.
(3) A court may award relief for an employee prevailing in an action under this section:
(a) reinstatement with the same fringe benefits and seniority status that the individual would have had, but for the adverse action;
(b) two times the amount of back pay otherwise owed to the individual, with interest;
(c) compensation for litigation costs, expert witness fees, and reasonable attorney fees;
(d) actual damages; or
(e) any combination of the remedies listed in this Subsection (3).
(4) (a) An employer may file a counterclaim against an employee who files a civil action under this section seeking attorney fees and costs incurred by the employer related to the action and the counterclaim.
(b) The court may award an employer who files a counterclaim under Subsection (4)(a) attorney fees and costs if the court finds that:
(i) there is no reasonable basis for the civil action filed by the employee; or
(ii) the employee is not protected under Section 13-72b-203 because the employee engaged in an act described in Subsection 13-72b-203(3).
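
Subsection (1)(b) sets two limitation windows: four years from the violation, or two years from when the material facts are or reasonably should be known. The sketch below computes both candidate deadlines, approximating a year as 365 days; which window controls a given case is a legal question the code does not resolve, and the example dates are hypothetical.

# Illustrative sketch only. Computes both limitation dates under
# 13-72b-204(1)(b); years are approximated as 365 days (ignores leap years).
from datetime import date, timedelta

def limitation_windows(violation_date: date, facts_known_date: date) -> dict:
    return {
        "four_years_from_violation": violation_date + timedelta(days=4 * 365),
        "two_years_from_knowledge": facts_known_date + timedelta(days=2 * 365),
    }

# Hypothetical: violation on June 1, 2026; facts known January 15, 2027.
print(limitation_windows(date(2026, 6, 1), date(2027, 1, 15)))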

Section 14. Effective Date. This bill takes effect on May 6, 2026.

Added February 10, 2026 at 11:04am by Sean Urwin
Title: HB 286: Government Overreach in AI Development

Artificial intelligence is moving fast, and Utah lawmakers are right to care about safety and kids. But HB 286 takes Utah in the wrong direction. It leans on broad paperwork mandates, public disclosures, and big penalties for developers. The result is predictable: companies either slow down useful updates, over-filter anything that could be controversial, or decide Utah is not worth the risk. That means fewer helpful tools for families, teachers, and teens, especially in areas like learning support and mental health resources.

That is why Libertas Institute opposes House Bill 286 from Rep. Doug Fiefia.

The bill also creates legal and practical problems that Utah does not need. It pressures developers to publish detailed safety and child protection plans and to keep publishing new summaries as models change. That is not just transparency. It compels public narratives about internal risk judgments, backed by million-dollar penalties. The triggers are also vague. Terms like “severe emotional distress” and “material modification” are not precise enough for a high-stakes enforcement regime. When the rules are unclear and the penalties are severe, the safe move is not better safety. The safe move is over-compliance, slower iteration, and more restrictions for minors. That backfires on the very people the bill claims to help.

Utah should oppose HB 286 because it abandons the state’s best playbook. Utah has been strongest when it focuses on real harms, clear duties, and room to experiment and learn. This bill moves toward a precautionary system that chills innovation first and sorts out the details later. If lawmakers want better protection for kids and the public, they should pursue standards that are outcome based, technology neutral, and workable in the real world. HB 286 is not that bill.

Added February 19, 2026 at 4:00pm by Sean Urwin
Title: Opinion: Techy legislation abounds in the Utah Legislature

The current legislative session is tackling 21st-century issues. Legislation addressing the impacts of technology, especially artificial intelligence, social media and data use, is a major consideration for lawmakers this year. We touch upon some controversial topics.

HB286 Artificial Intelligence Transparency Amendments would require AI companies to publish reports on risk assessments, provide protections for whistleblowers and, most notably, force them to implement protocols for "child safety risks," which the bill defines partly in terms of severe emotional distress. This bill had momentum, but President Trump recently asked Utah leaders to abandon it, citing inconsistency with his recently signed executive order directing states not to pass AI legislation that stifles innovation. Will Utah follow the president's lead or adopt a different approach?

Cowley: Trump’s executive order directs states not to create a patchwork of AI regulations. Utah lawmakers have a love/hate relationship with President Trump, but being called to the principal’s office by The Donald seems to have caught their attention. Our Legislature doesn’t often back away from its crusade against Big Tech, but with numerous other bills in the works, coupled with Trump’s admonishment, HB286 is not mission-critical this session.

Two years ago, Utah created the Office of AI Policy, which houses the AI Lab. The Lab allows businesses to apply for regulatory relief to implement AI technologies safely and in a studied manner. This balances consumer protections and encourages innovation. This is exactly what we need more of — coordination between tech and government, allowing AI to advance responsibly while placing Utah on the map as the hub for this world-altering technology.

Pignanelli: “Regulation should not restrict fundamental concepts like math, which is essential for AI development.” — Ben Horowitz

When regulating communications, the federal government moves fast. Within months of the first telegraph signal in 1844, the U.S. Postmaster General was given control of this technology. Congress began supervising the internet through the Telecommunications Act of 1996, a year before I sent my first email. These and similar congressional actions were implemented to prevent a patchwork of state laws and foster innovation.

In 2025, President Trump issued the executive order “Ensuring a National Policy Framework for Artificial Intelligence.” The goals are to curb state AI laws, encourage innovation and eliminate “woke” applications. Because this administration mandate is not a statute, it may not survive a legal challenge. Therefore, Utah lawmakers are incentivized to take action that does not completely contradict the president but allows for some framework to exist should the Supreme Court reject the executive order.

SB287 Targeted Advertising Tax places a tax on online ads. This is a high priority for Gov. Cox and many lawmakers to regulate technology companies while funding various adolescent mental health initiatives. However, these fees would immediately be passed on to the large number of businesses that use such advertising. Could this be one of the major battles of the session?

Cowley: This is a tax on businesses, plain and simple. Proponents of the bill portray it as a tax on big, bad tech companies, but in reality, the financial burden will be passed along to businesses both large and small. Utah’s entrepreneurial spirit has spawned an impressive number of online startups. These companies range from a crafty mom’s Etsy shop to a multimillion-dollar enterprise. Both thrive because of their ability to reach customers through social media and online ads, which this bill would tax, increasing their costs.

Pignanelli: Supporters of the legislation compare this to taxes on lifestyle vices that fund remedial programs. The most cited example is the surcharge on cigarettes, which funds tobacco cessation programs and cancer research. SB287 uses the tax on targeted advertising to assist child literacy, youth sports, children's mental health and public park programs. The argument is that social media causes harm that the fees can remediate.

Despite the compelling messaging, online advertising is not tobacco. Almost all businesses and nonprofits that advertise through technology platforms benefit the economy and do not cause harm. Further, the proposed tax is likely preempted by federal law, a critical issue raised by several senators in a committee hearing.

There are numerous other bills dealing with image, data, and other aspects of technology. Where are they headed?

Cowley: HB276 Artificial Intelligence Modifications, sponsored by Rep. Defay, is aimed at addressing the dark and unseemly possibilities of AI. As online sexual abuse becomes more prevalent, it's imperative that we protect victims in the modern age with legal remedies and takedown rights.

Paris Hilton has been advocating for the DEFIANCE Act at the federal level, which allows victims of explicit AI-generated images to sue the deplorable creators. Legislation should punish the true culprits, the creators, not the technological tool being manipulated for perverse purposes. While GROK is the disgusting outlier, intentionally designed to produce filth, most AI tools strive to implement safeguards. The DUI driver is prosecuted, not the distillery or automaker. AI legislation should follow this framework. People, not computers, are the perpetrators.

Pignanelli: I am among the few remaining dinosaurs who review legislation on paper, not screens. Thankfully, Renae and her contemporaries are negotiating good resolutions to these complicated technology matters.

Note: We represent companies and organizations with interests in the issues described above.
