Before we all get too deep into using ChatGPT or any other AI tool to create things for us, we need to address some of the questions that arise around content ownership and intellectual property.
Some have declared ChatGPT the most important development since the invention of the printing press, or even the atomic bomb. We will see. But there are real problems with the accuracy and authenticity of the material that AI platforms like ChatGPT create. And given the expectation that ChatGPT and other AI platforms may take over at least some of the work of authors, analysts, and other content creators, we also need to understand the legal implications.
There is no problem with using ChatGPT as a personal conversation assistant. And the rules around using ChatGPT to write term papers seem obvious (don't even think about it). But when it comes to incorporating AI-generated text into content intended for wider distribution, say, marketing material, a white paper, or even an article, the legitimacy is murkier. When it comes to intellectual property, the foundation model underlying ChatGPT is "trained on a corpus of created works, and it is still unclear what the legal precedent may be for reuse of this content if it was derived from the intellectual property of others," according to Bern Elliot, an analyst at Gartner.
To gain a better understanding of where things stand with the use of ChatGPT for shared content, I sought opinions from legal experts on where the law, such as copyright or intellectual property law, comes down. Here is what they said:
ChatGPT tends not to include quotes or attribution for the sources it uses or synthesizes. Is this a problem?
From an intellectual property (IP) perspective, if the source material is not specifically quoted, it likely does not require a citation, says Michael Kelber, co-chair of the IP practice at Neal Gerber Eisenberg. "If a concept is used but not copied, the use may not implicate any copyright or other protected IP. That said, from a research standpoint, citations or attribution would be useful in identifying bias and credibility, as well as other claims to authority."
It is not clear who can copyright or claim ownership of AI-created works. The users who merely employ the tool to create articles, or OpenAI? Who?
Margaret Esquenet, partner with Finnegan, Henderson, Farabow, Garrett & Dunner, LLP, says that for a work to be protected under current U.S. copyright law, "the work must be the result of original and creative authorship by a human author. Absent human creative input, a work is not entitled to copyright protection. As a result, the U.S. Copyright Office will not register a work that was created by an autonomous artificial intelligence tool."
The question "is also percolating through the courts, for example in cases involving photos taken by monkeys, and in cases exploring the concept of AI inventorship in the context of patents," Kelber says. "So far, the courts have been hostile to the idea of non-humans claiming copyright or inventorship, and thereby IP ownership, in both contexts."
Things get more complicated if copyright or IP infringement issues arise over AI-generated content. "Because authors in the United States need to secure a registration, or a refusal of registration, from the Copyright Office in order to enforce their rights against potential infringers, a strategy of appealing a refusal to register AI-generated work would likely face strong headwinds, in light of the legislative history behind the human-authorship requirement and the court decisions affirming that requirement," says Esquenet.
"As a result of the human-authorship requirement under current U.S. law, a work created by AI is likely either (1) a public domain work immediately upon creation, with no owner able to enforce rights against others; or (2) a derivative work of the material on which the AI tool was trained," Esquenet continued. "The derivative-work analysis would turn on who, if anyone, owns the training data set (or each of its components), and on the degree of similarity between any particular work in the training set and the AI-generated work."
Assuming the earlier case law excluding copyright for non-humans is followed, "it could preclude anyone from owning the output, essentially relegating such works to the public domain," Kelber says. "And that result may also be supported by the fact that much of the source material is, at least in large part, publicly available."
There is a counterargument, Kelber adds: "It may be argued that AI is just a tool, and that the person directing the AI should be able to claim ownership of the output, much as an artist does when using drawing software. In the case of ChatGPT, however, the operator's control over the output is limited, and there may be a stronger argument that the output is controlled by ChatGPT's developer than by the operator who initiated the input."
What about when ChatGPT creates the same passage of text for someone else?
"Even assuming that the rightful intellectual property owner is the person whose query prompted the AI to create the work, the doctrine of independent creation could prevent two parties whose identical queries produced the same work from enforcing rights against each other," says Esquenet. "Specifically, a successful copyright infringement claim requires proof of copying, and independent creation is a complete defense. Under this hypothetical, neither party copied the other's work, so no infringement claim is likely to succeed."
After all, if the content created is infringing, who is to blame? "An exact copy of a protected work could create potential liability, which raises another question: who is responsible, the creator of an AI like ChatGPT, or the user who asked the question?" Kelber asks.
If the generated text is used in an article or paper, even in part, should ChatGPT itself be considered and cited as a source? As in "Response generated by ChatGPT, accessed 12/15/2022"?
Properly citing AI-generated content is appropriate. "From a Bluebook citation perspective (the format used in court documents), as well as for literary purposes, citations to sources such as ChatGPT would be appropriate," Kelber noted.