
“File:NYC Public Library Research Room Jan 2006.jpg” by Diliff is licensed under CC BY 2.5.
Lawyers. Amirite?
There are multiple articles making the rounds about a lawyer in big trouble for using ChatGPT to create a brief, thereby presenting completely fictional citations in court.
People like the innocent, oh-so-innocent (if criminally negligent) lazy-ass lawyer Steven Schwartz are shocked, SHOCKED to discover that ChatGPT pulls answers out of its shiny metal ass.
Yes, shock and surprise, despite plenty of information about its unreliability since the bot’s** release, such as Stack Overflow banning ChatGPT-sourced answers last December because they are so often just plain wrong.
The thing is, ChatGPT doesn’t know how to say ‘I don’t know.’
It is only as good as its programming, which reflects a certain tendency that I and many others have noticed to be prevalent in the demographic primarily responsible for funding, building, and providing data for ChatGPT: making shit up when you don’t know anything about it.
It pulls its data from the internet, where people often lie, or mindlessly copy and paste.
It’s a plagiarism speeder-upper.
The creators call plagiarism “predicting text”: producing “the next word, sentence or paragraph, based on its training data’s* typical sequence.” Fancy!
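To be fair, “predicting text” is a real technique, and a toy version makes the key point obvious. The sketch below is a crude bigram model, nothing like the neural networks actually behind ChatGPT, offered purely as an illustration: the model only ever picks a plausible next word from sequences it has already seen, and at no point does it check whether the result is true.

```python
import random
from collections import defaultdict

def build_bigrams(text):
    """Count which word follows which in the training text."""
    words = text.split()
    following = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        following[current].append(nxt)
    return following

def predict_next(following, word):
    """Pick a word that plausibly comes next -- plausibly, not truthfully."""
    candidates = following.get(word)
    if not candidates:
        return None
    return random.choice(candidates)

# Toy "training data": the model can only regurgitate recombinations of this.
model = build_bigrams("the cat sat on the mat the cat ran")
print(predict_next(model, "cat"))  # "sat" or "ran" -- whichever, it sounds fine
```

Scale that idea up a few billion parameters and you get fluent text with zero built-in fact-checking, which is exactly how fictional citations end up in a legal brief.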
And we still have laws against plagiarism.
The US Copyright Office, for instance, won’t allow AI-generated art to be copyrighted. And the way the decision is written, this appears to apply to the written word as well.
In conclusion: You may want to reconsider things that sound too good to be true. And do your own damn research.
*”training data” = vast swaths of copyrighted texts
** I said what I said.