A Plan for When AI-Created Works Infringe Human Works

By Jonathan M. Purow

*Published Nationwide by Law360 on November 4, 2019

Artificial intelligence has arrived.  We are long past the novelty of IBM’s Watson winning on Jeopardy! in 2011, and have entered an era in which artificial intelligence has innumerable impacts on our daily lives.  We are constantly aided by Amazon’s Alexa and Apple’s Siri, guided by Uber’s AI, and flown on planes predominantly piloted by AI.

AI systems work by recognizing patterns in virtually limitless pools of data, and then generating a product that fits the pattern.  Several AI programs have already applied these processes to the creation of works of art.  The famed auction house Christie’s sold Portrait of Edmond Belamy, its first artificial intelligence-created “painting,” for $432,500 at auction in October 2018.[1] An entire industry has arisen around AI programs that create music, including Sony’s Flow Machines, IBM’s Watson Beat and others.[2]  Even smaller parties are involved in the creation of AI programs, as evidenced by the 16-year-old who created an AI program that analyzed Kanye West’s rap lyrics to generate new rhymes.

In general, works wholly created by AI immediately enter the public domain because copyright law requires a human author.[3]  The Copyright Office’s Compendium, while not binding on courts, conditions copyright protection on human authorship. In terms of case law, the famous dispute involving a party on the opposite side of the evolutionary spectrum, Naruto the monkey, reinforced the requirement of a human author for copyright protection.[4]

If, however, a human is actively involved in the creation of the work by the AI program, then that person would likely be considered to have sufficient “authorship” to be the author for purposes of copyright (and therefore be subject to liability for copyright infringement).  This proposition dates back to the Supreme Court’s 1884 decision in Burrow-Giles Lithographic Co. v. Sarony, in which the Court found that a camera was a tool of the photographer, who was entitled to copyright protection because he staged the composition and lighting of the photograph. A modern example would be an author using Microsoft Word to write a book: Microsoft has no claim to authorship simply because its program was used; only the author does.[5]

One industry that may be significantly impacted by the proliferation of AI programs is the fashion world.  Artwork on garments of sufficient originality can be copyrighted, and there is an immense amount of copyright litigation in this field instituted by proactive parties that have invested in obtaining or creating original artwork and registering copyrights in it. This article will focus on one base hypothetical: who should bear liability when the user (“AI User”) of an AI program (“AI Program”), distributed for free use by the program’s proprietor (the “AI Proprietor”), creates thousands of floral artwork designs for clothing (each an “AI Work”), distributes these works or posts them online for third parties to use or purchase (each a “3rd Party User”), and one of these works is substantially similar to an original copyrighted work (the “Human Work”) created by a human author (the “Human Author”)?[6]

The potential problem arising from this hypothetical is that the AI Work may enter the public domain, cannibalize the market for the similar Human Work, and leave the Human Author with limited ability to assert a successful copyright infringement claim against any of the aforementioned parties.

Examining the parties that could potentially bear liability in such a situation, from a practical perspective it is unlikely that the AI Proprietor would ever bear any liability.  First, the contract that any party must accept before using the AI Program will almost certainly require the AI User to indemnify the AI Proprietor and hold it harmless.[7]  Second, AI Proprietors like Google are likely the largest fish in this entire equation, and they could use brute legal or lobbying force (of the sort that produced the DMCA) to ensure that they never bear liability.  Furthermore, if the law were to impose any liability on the AI Proprietor, all the free AI programs, and possibly some paid ones, would no longer be available for use, and the general marketplace would lose these efficiently created designs.

The AI User may not bear liability either, but as discussed above it depends upon the extent of their involvement in the creation of the AI Work.  Because this hypothetical already assumes substantial similarity, under the standard copyright infringement analysis the important question becomes whether the AI Program had access to the Human Work.  The AI User may be less and less active in the process of creating the AI Work, but unless the “inspiration pool” has been automated by the AI Proprietor in some capacity, the AI User has to direct the AI Program to the pool of data that will serve as the inspiration pool.  If the AI User feeds the AI Program a limited set of data that includes the Human Work, then the AI User effectively created access. Admittedly, the program (and its underlying algorithm) failed to create a work sufficiently different from the inspiration pool, but that should not free the AI User from being the party most responsible. Alternatively, what if the AI User did not limit the inspiration pool, and instead let the AI Program loose on the open internet, which resulted in a work substantially similar to the Human Work?  Courts have not automatically presumed that access exists merely because an infringed work was posted on the internet; they typically conduct some analysis to see whether there is a reasonable presumption of access.[8] If the inspiration pool is as broad as the internet itself, perhaps the AI User should not bear liability (and the AI Work would enter the public domain), but reasonable minds could differ.

If a Human Author asserted a copyright infringement claim against the AI User in the above hypothetical, a defense of independent creation could be asserted, but in order to avoid incongruous results it would make sense for this analysis to track the access analysis: if the AI User deliberately fed the AI Program certain works as inspiration, then the AI User should not be considered an innocent infringer.

Under this hypothetical, the 3rd Party User would not be liable for distributing or selling the AI Work if it is in the public domain.  Even if the work is not in the public domain, the 3rd Party User’s potential liability is more tenuous because they were not involved in the creation of the AI Work and therefore had no control over whether the AI Program had access to the Human Work.  Yet the 3rd Party User’s knowledge of the origin of the AI Work could influence the perception of whether liability is warranted: if the AI User posts the AI Work or offers it for sale advertised as an AI-created public domain work, then the 3rd Party User would be considered within its rights to use the AI Work.  But if the AI Work was not marked as AI-created and the 3rd Party User copied it without permission, then they would appear less innocent, though this would not necessarily affect the copyright infringement analysis.

Taking the foregoing into account, I would posit that a new legal code to govern AI Works should include the following elements to increase accountability and make enforcement efforts easier to pursue and adjudicate:

  1. The AI Program should have a built-in function that inserts a digital fingerprint into any AI Work that identifies the AI Proprietor, AI Program and the AI User. If the work is 3D printed, in lieu of a fingerprint, this information would be stamped on the object (just as creators of physical works stamp a copyright notice).  This will introduce a level of accountability, so that even if the AI Work floats through the marketplace and is passed from hand to hand, the Human Author can identify these essential parties in case they need to contact them for enforcement purposes.
  2. The AI Proprietor should save the inspiration pool and the process that the AI Program utilized “under the hood” to create the AI Work. At this point in time, these AI programs do not create works in a vacuum; their “intelligence” lies in their ability to recognize patterns that produce certain outcomes and to mimic these patterns. The ability to look under the hood of these AI programs would let parties determine whether the Human Work was used by the AI Program as “inspiration.”[9] Admittedly, there may be a limitation on the ability to save this information, solely because of the amount of data that these programs may generate.
  3. Simple code should be created for incorporation into Human Works, websites featuring Human Works, or other mediums for transmitting Human Works, so that Human Authors can designate their Human Works as being on a “no-fly” list; AI Programs scouring for inspiration materials would, upon encountering this code, know that they could not add these no-fly Human Works to their inspiration pools. This way, infringement could be avoided in the first place, and any similarity between an AI Work and a Human Work would be purely coincidental and “innocent.”
  4. The application of the notice-and-takedown procedures or a similar structure to ease Human Authors’ ability to enforce against potentially infringing AI Works.
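The “no-fly” designation proposed above could work much like the robots.txt convention that already tells well-behaved web crawlers where not to go. The Python sketch below shows one possible shape, purely for illustration: a hypothetical `ai-inspiration` meta tag (the tag name and its `no-fly` value are invented here; no such standard currently exists) that a cooperating AI Program would check before adding a page’s artwork to its inspiration pool.

```python
from html.parser import HTMLParser

# Hypothetical opt-out signal; both names are assumptions for this sketch.
NO_FLY_META = "ai-inspiration"
NO_FLY_VALUE = "no-fly"


class NoFlyDetector(HTMLParser):
    """Scans a page's <meta> tags for the hypothetical no-fly signal."""

    def __init__(self):
        super().__init__()
        self.opted_out = False

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            attr_map = dict(attrs)
            if (attr_map.get("name") == NO_FLY_META
                    and attr_map.get("content") == NO_FLY_VALUE):
                self.opted_out = True


def may_use_for_inspiration(page_html: str) -> bool:
    """Return False if the Human Author has flagged this page as off-limits."""
    detector = NoFlyDetector()
    detector.feed(page_html)
    return not detector.opted_out


flagged = '<html><head><meta name="ai-inspiration" content="no-fly"></head></html>'
print(may_use_for_inspiration(flagged))  # False: this Human Work is off-limits
```

As with robots.txt, such a signal would only bind cooperating AI Programs; the legal proposal in this article is what would give it teeth.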

With respect to the first two aspects of the proposed system, blockchain technology could be used to introduce a level of accountability for each AI Work.  Blockchain technology could not only document the AI Proprietor, AI Program and AI User, but could even track 3rd Party Users, consumers and their manner of use.  It theoretically could even document the inspiration pool so that an adjudicator could see if the Human Work was in the inspiration pool that produced the AI Work.
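To make the blockchain idea concrete, the following minimal Python sketch approximates a ledger with a simple hash chain: each record stores the AI Proprietor, AI Program, AI User and a digest of the inspiration pool, and links to the hash of the prior record so that later tampering is detectable. The class and field names are invented for this illustration, and a real deployment would use a distributed ledger rather than an in-memory list.

```python
import hashlib
import json


def _hash(record: dict) -> str:
    """Deterministic SHA-256 digest of a record (sorted keys for stability)."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()


class ProvenanceChain:
    """A toy hash chain: each block links to the previous block's hash."""

    def __init__(self):
        self.blocks = []

    def record(self, proprietor, program, user, inspiration_pool_digest):
        """Append a provenance record for one AI Work; returns its hash."""
        prev_hash = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        body = {
            "proprietor": proprietor,
            "program": program,
            "user": user,
            "inspiration_pool": inspiration_pool_digest,
            "prev": prev_hash,
        }
        block = {**body, "hash": _hash(body)}
        self.blocks.append(block)
        return block["hash"]

    def verify(self) -> bool:
        """Recompute every hash and link; any tampering breaks the chain."""
        prev = "0" * 64
        for block in self.blocks:
            body = {k: v for k, v in block.items() if k != "hash"}
            if block["prev"] != prev or _hash(body) != block["hash"]:
                return False
            prev = block["hash"]
        return True
```

An adjudicator (or the Copyright Office) could then check whether a disputed Human Work’s digest appears in the recorded inspiration pool, and whether the record itself has been altered since the AI Work was created.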

This system has several advantages.  The Human Author would be able to remove any infringements via the notice-and-takedown procedures.  Thanks to the fingerprint/blockchain information, the Human Author would be able to notify the AI Proprietor that the Human Work should be added to the “no-fly” list of works (if it had not been already), thereby minimizing the risk of future AI Works that infringe the Human Work and necessitate further enforcement efforts.  The inspiration pool could readily be examined to determine what level of access the AI User gave the AI Program to the Human Work.  An added bonus of the digital fingerprint is that it would prevent unscrupulous AI Users from trying to obtain a copyright registration in an AI Work, because the Copyright Office could quickly check each deposit copy to see if it was created by an AI Program.

This article is intended to start a discussion about the best means to police AI Works and avoid infringement of Human Works.  Then again, if AI Proprietors, Congress and the courts fail to address this problem quickly, perhaps it’s for the best: we can simply hand the predicament over to Skynet and our future AI robot overlords!

[1] https://www.christies.com/features/A-collaboration-between-two-artists-one-human-one-a-machine-9332-1.aspx

[2] https://www.theverge.com/2018/8/31/17777008/artificial-intelligence-taryn-southern-amper-music

[3] U.S. Copyright Office, Compendium of U.S. Copyright Office Practices, § 306 (3d ed. 2017).

[4] Naruto v. Slater, 888 F.3d 418 (9th Cir. 2018).

[5] See Are Works Generated By AI Subject To IP Protection?, https://www.law360.com/articles/1020262

[6] If the past twenty years have proved anything, it is that copyright law has a difficult time keeping up with the pace of technology. Perhaps there is value then to speculating about potential issues, and determining workable solutions, prior to their coming into being. These solutions would preferably be implemented by the technology companies involved in the artificial intelligence field, as they can move faster than Congress or courts can. The law can then catch up and supplement, or rubber-stamp, the prevailing system.

[7] The AI Proprietor would be well served to require more than a simple email address and name from a user prior to use, so requiring the inputting of credit card info by the AI User prior to use might be wiser.

[8] See Building Graphics, Inc. v. Lennar Corp., 866 F. Supp. 2d 530 (W.D.N.C. 2011); Nicholls v. Tufenkian Import/Export Ventures, Inc., 367 F. Supp. 2d 514 (S.D.N.Y. 2005); Chafir v. Carey, No. 06 Civ. 3016 (KMW), 2007 WL 2702211 (S.D.N.Y. Sept. 17, 2007).

[9] When these programs evolve to the point where AI Programs are only using other AI Works (akin to how Google’s AI Go-playing program constantly plays itself to get “smarter” vs. just playing human Go champions), this situation will get even more complicated.