Cynthia Lummis introduces RISE Act, a bill that would require transparency in exchange for legal immunity

Senator Cynthia Lummis (R-WY) has introduced the Responsible Innovation and Safe Expertise (RISE) Act of 2025, a bill designed to clarify liability frameworks for artificial intelligence (AI) used by professionals.
The bill would require transparency from AI developers, while stopping short of mandating that models be open source.
In a press release, Lummis said the RISE Act means that professionals, such as physicians, lawyers, engineers, and financial advisers, will remain legally responsible for the advice they provide, even when it is informed by AI systems.
At the same time, the AI developers who create these systems can only shield themselves from civil liability when things go wrong if they publicly release model cards.
The proposed legislation defines model cards as detailed technical documents that disclose an AI system's training data sources, intended use cases, performance metrics, known limitations, and potential failure modes. All of this is intended to help professionals assess whether the tool is suitable for their work.
“Wyoming values both innovation and accountability; the RISE Act creates predictable standards that encourage safer AI development while preserving professional autonomy,” Lummis said in the press release.
“This legislation does not create blanket immunity for AI,” Lummis continued.
However, the immunity provided under the act has clear boundaries. The legislation excludes protection for developers in cases of recklessness, willful misconduct, fraud, knowing misrepresentation, or when actions fall outside the defined scope of professional use.
In addition, developers face a duty of ongoing accountability under the RISE Act. AI documentation and specifications must be updated within 30 days of releasing new versions or discovering significant failure modes, reinforcing continuous transparency obligations.
Stops short of open source
The RISE Act, as currently written, stops short of mandating that AI models become fully open source.
Developers can withhold proprietary information, but only if the redacted material is unrelated to safety, and each omission is accompanied by a written justification explaining the trade secret exemption.
In an earlier interview with CoinDesk, Simon Kim, the CEO of Hashed, one of Korea’s top VC funds, spoke of the danger of centralized, closed-source AI that is effectively a black box.
“OpenAI is not open, and it is controlled by very few people, so it’s quite dangerous. Making this kind of (closed source) foundation model is like making a ‘god’, but we don’t know how it works,” Kim said at the time.