Several bills addressing concerns over the safe and ethical use of artificial intelligence were introduced during this session of the Oklahoma Legislature, and some measures made it out of the House and over to the Senate.
A Feb. 20 article on the House of Representatives website, titled, “House Committee Passes Numerous AI Regulation Bills,” outlined the various bills passed by the House. Last October, an interim study was conducted on the “ethical, legal and societal implications of AI implementation, including privacy, bias and algorithmic transparency,” states the article.
House Bill 3825 would prohibit the dissemination of deceptive “deepfake media within 90 days of an election,” unless a clear disclosure is made. H.B. 3577 requires healthcare insurance companies to disclose the use of AI-based algorithms in the review process, states the article. Neither bill has seen any action since Feb. 20.
State Rep. Bob Ed Culver, R-District 4, outlined three bills he had knowledge of related to AI.
“House Bill 3453 would establish the Oklahoma Artificial Intelligence Bill of Rights, which defines ‘artificial intelligence’ and ‘real person,’” Culver said. “The bill also outlines eight ways Oklahomans are entitled to information about the use of AI, such as the right to know when they’re interacting with an AI engine rather than a real person, and the right to opt out of their data being used by an AI model.”
This bill passed the Senate Judiciary Committee on March 27, following its second reading.
H.B. 3828 requires the Office of Management and Enterprise Services and the Administrative Office of the Courts to inventory all systems that use AI by Dec. 31, 2024, and each following year, Culver said.
“State agencies that do not use OMES must inventory their own systems and post the inventory list on their website,” Culver said. “The bill also would require assessments of agencies’ AI systems to ensure the systems do not discriminate.”
This bill was referred to the Senate Appropriations Committee April 4.
H.B. 3073 criminalizes publishing or distributing digitized representations of another individual’s name, image, voice or likeness without their written consent and with the intent to harm, Culver said.
This bill was referred to the Senate Judiciary Committee March 19.
Culver said although AI can be a useful tool, it can also be used inappropriately – and in some cases, criminally.
“I’m thankful that my colleagues also take the issue seriously – an interim study was held in October of last year – and I will continue to pay close attention to the issue in the future if further legislation regulating the technology becomes necessary,” he said.
State Rep. David Hardin, R-District 6, said the interim study showed human intelligence dropped by at least 30% as a result of AI.
Brian Woodliff, chief strategy officer of Northeast Oklahoma Management Services, which manages Northeastern Health System, spoke to the effects of AI in the medical field.
“In general, government involvement adds an element of safety but can slow progress and innovation. There needs to be a balance between fostering adoption of AI and its oversight,” Woodliff said.
If patients and staff are given the latest technology to simplify processes and communication, improving satisfaction for both, AI can reshape the relationship between staff and patients, Woodliff said.
“Currently, there is a tremendous labor shortage in the health care industry,” Woodliff said. “If AI is proved effective, it may be one of the solutions to this problem. But health care is still very personal, and will always require caring people.”
On the TDP Saturday Forum March 30, readers were asked to discuss the issue: “The U.N. General Assembly recently approved a resolution on artificial intelligence, giving global support to an international effort to ensure the powerful new technology benefits all nations, respects human rights and is ‘safe, secure and trustworthy.’ Does the proliferation of AI concern you, since it could lead to widespread disinformation, or misconceptions about what’s real and what’s not? What sort of action, if any, should the government at any level take to combat it?”
Alex Cheatham, a Tahlequah resident, said AI was rushed to the market before safety protocols were put into place to control it.
“So once again, the industry has decided to use us all as human guinea pigs. It could be used as a powerful creative tool, but it could also be used as a very destructive weapon,” Cheatham said. “The duality of humans has made its way into almost every new technology in the past, and it should have been given more consideration before this new product was released to the general public.”
Patrick M. Parker said he doesn’t trust the U.N. and although a lot of people are talking about AI, he questioned whether they even “have any clue what will happen.”
Keith Moore, a respondent to the question, said it is a U.N. pipe dream.
“The Democrats will use it to push disinformation and lies about anyone and anything that exposes them,” Moore said.
What you said
In a poll on TDP’s website, readers were asked: “Should the government take steps to regulate ‘artificial intelligence’ technology to protect people from being fooled and/or to safeguard us against its use by hostile foreign governments?” The answer that received the most votes was “Yes, absolutely,” at 86%; “probably, but only when it comes to things like national defense” received 5.3%; “probably not, because it could stifle freedom of expression or create other restrictions” received 1.8%; “definitely not” received 1.8%; and 5.3% were undecided.