AI Legislation, Safety, and Mis-Use Cases

There’s more going on than just legislation – although “generative AI labeling” bills are in committee in both the U.S. Senate and the House. California has just passed a first effort at legislating extreme AI behaviors, and Texas (another bellwether state) also has AI legislation in committee.

So let’s start with a quick recap of WHY this is important – that is, a quick look at “bad boy behaviors.”

This is different from the subject of the previous blogpost, where we looked at AI hallucinations and deliberate deceptions. Instead, this post addresses misbehaviors by the companies that are developing and training their AIs.



Bad Boys Acting Out

It’s not just the AI algorithms – although they have their own problems. (See “AIs Acting Out: Crazy, Malicious, and Other Bad Behaviors” for highlights of not just LLM bad behaviors, but also Meta’s very research-y AI that has no moral scruples … but then, it’s reinforcement learning. All it’s trying to do is get the job done – not “play fair” the way that we understand it.)

No, the “AI and society” problems are not a function of the AI itself. (No AI is going rogue. Nor is one likely to.) Rather, the problems lie in how people are creating and using the AI – or, more specifically, in how they are creating gross intrusions into the intellectual property and identities of others.

Some of the noteworthy news items, of course, are:

  • Identity appropriation and OpenAI: What appears to be a “cloned likeness” of Scarlett Johansson’s voice, resembling her voice-over character in the movie Her. Ignacio de Gregorio Noblejas has a lovely and readable article (albeit behind a paywall), Georgetown University Law Professor Kristelia Garcia weighs in on the issue, and this YouTube seems to capture both the (now rescinded) OpenAI ChatGPT “Sky” voice and a snippet of Ms. Johansson in Her.
  • EVERYONE is grabbing up others’ intellectual property to train their AIs. The long list includes OpenAI, Meta, Anthropic, Mistral, and (most recently) Perplexity, and Patronus AI released a report showing how widespread this practice is. A number of the big-name companies have been making haste to ink deals with major owners of copyrighted materials, but the voracious LLM appetites have apparently trawled through everything in print – and now the multimodal AIs are gobbling up everything audio and visual. Just in the news: Perplexity is getting around code designed to barricade copyrighted materials from web crawlers, as confirmed by developer Robb Knight and by Wired.

The defense offered by OpenAI and Anthropic leaders has been along the lines of “we can’t train our LLMs unless we do this,” which seems a variant of “what’s mine is mine, and what’s yours is also mine.”


US Senate and House: Generative AI Labeling Bills in Committee

There’s AI legislation in various forms, in the U.S. Senate and in the House of Representatives. There are also bills within various state legislatures.

Most of these are … given recent findings … just a little too small, and a little too late.

The most prominent bill was proposed in the U.S. Senate as a bipartisan effort to “require disclosures for AI-generated content,” by Senators Brian Schatz (D-HI) and John Kennedy (R-LA). It was proposed in July 2023, and is still in committee.

Representative Tom Kean, Jr. (R-NJ) introduced a sister bill, H.R. 6466 (essentially the same, with somewhat lengthier wording and restrictions), into the House in November 2023.


US House: Protecting Consumers from Deceptive AI Act

In March 2024, Rep. Anna Eshoo (D-CA-16) and Rep. Neal Dunn (R-FL) introduced a bipartisan bill requiring the labeling of deepfakes.

“The Protecting Consumers from Deceptive AI Act addresses these issues [undermining the democratic process, consumer deception, market disruptions, harms to minors, and national security risks] by directing the development of standards for identifying and labeling AI-generated content and requiring generative AI developers and online content platforms to provide disclosures on AI-generated content.”

Official Website for California Representative Anna Eshoo (D-CA-16), discussing the Protecting Consumers from Deceptive AI Act.

This “Protecting Consumers from Deceptive AI” bill puts NIST (the National Institute of Standards and Technology) in charge of developing standards for identifying and labeling AI-generated content.

As described in this PBS News article, “Key provisions in the legislation would require AI developers to identify content created using their products with digital watermarks or metadata.”


The Schiff Bill: AI Copyright Infringements

Responding to the numerous complaints and lawsuits raised by companies whose intellectual property has been used, without permission, by multiple AI companies seeking to train their LLMs, U.S. House Representative Adam Schiff (D-CA) introduced the “Generative AI Copyright Disclosure Act” on April 9, 2024. This bill would “require a notice to be submitted to the Register of Copyrights prior to the release of a new generative AI system with regard to all copyrighted works used in building or altering the training dataset for that system. The bill’s requirements would also apply retroactively to previously released generative AI systems.”

The bill has strong support from the creator community, with corresponding pushback from the AI content-generator community.


SB 1047 Passed in the California Senate

The California State Senate recently passed SB 1047, State Senator Scott Wiener’s “Safe and Secure Innovation for Frontier Artificial Intelligence Models Act,” which now moves to the State Assembly. Unlike the federal Schiff bill (above), which targets copyright disclosure, SB 1047 deals with the extent to which the most powerful “frontier” AI models can operate, requiring their developers to put safety testing and safeguards in place before release.

Here’s my quick grab-bag of just a few of the (many, MANY) commentaries on SB 1047:

  • Zvi Mowshowitz. 2024. “Why Is Everyone …” (This is the short article – START HERE.) Brief extract: “For a piece of state legislation, the bill has unusual international importance. Most of the companies making such models are headquartered in California.”
  • Zvi Mowshowitz. 2024. “Q&A on California SB 1047.” (Q&A). (Longer article, FAQs.)

Also, here’s a YouTube from the Cognitive Revolution on “Emergency Pod: Examining SB 1047, California’s Proposed AI Legislation” – which I haven’t watched yet; it’s on my docket for later today, and at two hours long, I’ll be doing some chores while listening in.


From the White House: The “Blueprint for an AI Bill of Rights”

To be clear – this is about establishing the rights of the citizens, not about the rights of the AIs. President Biden has published the “Blueprint for an AI Bill of Rights,” identifying five key elements that need to be addressed. This is NOT legislation; it is NOT law. (Laws need to be passed by both houses of Congress, and then either signed into law by the president, or overridden by Congress if vetoed. We’re not there, just yet.)

Instead, this is more in the way of guidelines, “things to think about.”

The five elements of this “Bill of Rights” are:

  • Safe and effective systems: You should be protected from unsafe or ineffective systems. 
  • Algorithmic Discrimination Protections: You should not face discrimination by algorithms and systems should be used and designed in an equitable way. 
  • Data privacy: You should be protected from abusive data practices via built-in protections and you should have agency over how data about you is used.
  • Notice and Explanation: You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you.
  • Human Alternatives, Consideration, and Fallback: You should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter.

Notice that these issues do not deal with intellectual property ownership, which has emerged more recently as a key issue. (See the Schiff Bill, above.)

Nevertheless, we’re seeing a convergence of actions – from the Federal to the state level, from the Executive to both legislative branches, and in all likelihood, we’ll see cases brought before the Supreme Court before long.


AI Legislation in Europe

There is significant focus on AI legislation throughout Europe and the United Kingdom.

On March 13, 2024, the European Parliament adopted the “Artificial Intelligence Act”:

“The new rules ban certain AI applications that threaten citizens’ rights, including biometric categorisation systems based on sensitive characteristics and untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases. Emotion recognition in the workplace and schools, social scoring, predictive policing (when it is based solely on profiling a person or assessing their characteristics), and AI that manipulates human behaviour or exploits people’s vulnerabilities will also be forbidden.”

Artificial Intelligence Act: MEPs adopt landmark law

In an approach remarkably similar to the White House’s “Blueprint for an AI Bill of Rights,” the UK has recently adopted a “cross-sector and outcome-based framework for regulating AI, underpinned by five core principles. These are safety, security and robustness, appropriate transparency and explainability, fairness, accountability and governance, and contestability and redress.”

Spiceworks’ Savio Jacob published an April 30, 2024 article on “AI Regulations around the World.” Similarly, White & Case LLP publishes and updates a “Global AI Tracker,” with information on legislation in various countries; their “AI Watch: Regulatory Tracker” summarizes the status in the U.S.

In summary, there has been a huge step forward in AI legislation, both in the U.S. and around the world. However, many bills are still “in committee.” Getting them passed will require back-room negotiations between the powerful (and well-funded) AI companies and various other factions, and reaching agreement on any one piece of legislation will not solve the entire suite of “how to manage AI” issues.

Many of these “how to manage AI” issues emerge before people can anticipate them. Typically, we don’t have precedents, and the stakes are huge. Thus, legislation will evolve – and be voted into law – but probably not as fast as many people would like.

We can only hope that the “deepfake” legislation passes into law well before the November 2024 elections.

Alianna J. Maren, Ph.D.

Founder and Chief Scientist, Themesis, Inc.

Copyright, 2024.

(Not that the “copyright” notice makes any difference right now!)
