In 2018, two UK professors and a scientist from Missouri filed patent applications in the US, Europe, and the UK for two inventions: an interlocking food container design that is easier for robotic systems to grab, and a warning light with an improved flashing pattern designed to better catch people’s attention. The European and UK patent offices both indicated that the claimed inventions are patentable. Despite this, it is unlikely that any patent will issue because the sole inventor named on the applications is an artificial intelligence system called DABUS (“device for the autonomous bootstrapping of unified sentience”). U.S. patent law, and patent law globally, requires each named inventor to be a person. If there is no human inventor, can the invention be patented? The DABUS system was designed to develop new ideas without human intervention. According to Dr. Stephen Thaler, the developer of DABUS, while he trained DABUS, the container and warning light designs it output were created autonomously, and he had no idea what the result would be. DABUS and other advanced AI systems are pushing the boundaries of what it means to be an “inventor” and raising significant questions about how AI-generated inventions can be protected under intellectual property law.
To be an inventor, there must be conception of the invention. As explained by the Federal Circuit, the appellate court that hears patent appeals, “[c]onception is the touchstone of inventorship, the completion of the mental part of invention. … Conception is complete only when the idea is so clearly defined in the inventor’s mind that only ordinary skill would be necessary to reduce the invention to practice, without extensive research or experimentation”.
The contribution of an AI system to an invention can be considered on a continuum. At one end, the AI is simply used as a tool to verify the outcome or viability of a human-made invention. This is no different in nature from how computer systems have been used by inventors for decades, and it raises no particular new issues. At the other end of the continuum, the AI system itself identifies a problem and proposes a solution without any human intervention. This scenario clearly falls outside current patent law and is unlikely to occur at present. The most likely scenario today, in which AI inventorship issues arise, lies in the middle: a human identifies a particular problem and uses AI to find a solution. At some point, the contribution of the AI system crosses the line from being merely a helper to contributing something that, if done by a person, would be viewed as conception and would qualify them at least as a co-inventor. Likewise, the role of the AI’s user can shift from that of an inventor who conceives of the invention to one who simply issues an ‘invitation to invent’; i.e., one who presents the problem but leaves figuring out how to solve it to the computer.
The possibility of AI making inventive contributions raises many issues. First, how can an invention be protected at all if there is no human to name as inventor? There have been documented cases of patents being granted by the US patent office on computer-generated inventions (no doubt unintentionally). A legal fiction that has been applied relies on the definition in the U.S. patent statute of an “inventor” as “the individual or, if a joint invention, the individuals collectively who invented or discovered the subject matter of the invention.” The application is filed in the name of the person running the computer or otherwise closest to the invention, and the AI output is treated as a “discovery” by the named inventor. While this approach may provide some legal cover, to the best of the author’s knowledge, it has not been tested in court.
One solution is to enact new laws specifying who the inventor would be in this situation. There are many possible choices, including the person who programmed or trained the AI, the person who posed the question and started the process, the person who reviewed the output, or even the AI itself. Under U.S. law, the inventor is by default the legal owner of the invention. Allowing an AI to be named as inventor solves one problem but shifts it to the question of who the default owner should be, which could be any of the above or a company with close ties to the AI, such as the company that trained it or is paying for its use. Any of these default rules could be modified by appropriate contracts.
An inventor must also comply with various legal formalities and obligations, including executing a declaration of inventorship stating, under penalty of perjury, that they are “the original inventor”. The U.S. applications filed in the name of DABUS have been rejected on formalities grounds because a signed declaration of inventorship has not been submitted. U.S. inventors are also required to disclose to the patent examiner known information that may be “material to patentability”. AI systems are often trained on very large data sets of existing materials. It may be difficult, if not impossible, to determine whether a solution output by an AI is actually new or whether the AI is simply reproducing a prior solution found in its training data.
The question of whether to allow patents with an AI inventor raises important policy issues as well. Is patent protection for AI inventions needed to “promote the progress of science and useful arts”, as set forth in the patent and copyright clause of the Constitution? Since AIs could generate inventions at rates that far exceed humans, a company using an AI to invent in a new technology area could overwhelm human inventors and effectively lock that area off from future innovation. Should inventions with AI inventors have a shorter patent term, or be progressively more expensive to file, to restrain the volume of filings? On a more practical level, AI systems think differently than people. When evaluating the obviousness of an invention, the issue is considered from the viewpoint of a person of ordinary skill in the art. What is non-obvious to a person might be obvious to an AI. One solution could be a stricter standard of obviousness for AI-generated inventions.
These are all difficult questions with no definitive answers as yet. Some guidance may be found by exploring how computer-generated copyrighted material is treated. U.S. copyright law requires human expression, so no protection is available for computer-generated content. In contrast, UK copyright law defines the “author” of a computer-generated literary, dramatic, musical, or artistic work as “the person by whom the arrangements necessary for the creation of the work are undertaken.” Differences in the rate of creation and the impact of works protected under each regime may suggest which approach better serves the purpose of IP law.
Various patent offices and IP groups are actively considering the many issues raised by AI inventions. On August 27, 2019, the US Patent Office issued a “Request for Comments on Patenting Artificial Intelligence Inventions.” The questions posed included: How should an “AI invention” be defined, and what would a person need to contribute to be considered a co-inventor? Do the laws governing inventorship need to be revised? Who can, and who should, own an AI invention? What issues arise for patent eligibility, for the “level of ordinary skill in the art” used to evaluate obviousness, and for prior art? Should AI inventions be given a different type of protection altogether?
While AI inventions remain unprotectable, AI systems have already been given person-like status in other situations. For example, in 2014, the Hong Kong venture capital firm Deep Knowledge Ventures appointed an AI-based system named “Vital” to its board of directors because Vital finds trends “not immediately obvious to humans”. The company relies on recommendations from Vital, and corporate investments must be “approved” by the AI system. The AI robot Sophia, known for its ability to hold conversations and make more than fifty facial expressions, was granted citizenship by Saudi Arabia in 2017. While these may be considered stunts, AI systems are ubiquitous and are playing an increasingly important role in business and technology. As truly valuable inventions are generated by AI, economic and social incentives will require adjustments in patent law to balance protecting such inventions against leaving space for human inventors to continue to play a role.
 Sewall v. Walters, 21 F.3d 411 (Fed. Cir. 1994).
 See “Report from the IP5 Expert Round Table on Artificial Intelligence”, Munich, 31 Oct. 2018; https://www.fiveipoffices.org/material/ai_roundtable_2018_report/ai_roundtable_2018_report