Upriver Press Policy on AI and Publishing
Stated simply, we believe that humans should write, edit, and publish books. Upriver Press will not: (A) publish any content generated by artificial intelligence; (B) use AI for any element of the editorial process; or (C) license our books to AI companies for any reason. This policy protects the intellectual property rights of our authors and their agents.
Our Reasons
There are many types of AI serving many different purposes, and some uses of AI can be helpful. Our reasons for rejecting AI apply specifically to book publishing.
First, we (along with most publishers) oppose the business models that undergird generative AI companies. To train their large language models, most AI companies have stolen troves of copyrighted content—the hard work of journalists, authors, and publishers. When companies blatantly disregard laws that protect intellectual property, they undermine the foundations of a vibrant culture and democracy.
By disregarding copyright law, these companies threaten a foundation of our economy. More than one hundred major lawsuits have now been filed in the US federal court system by victims of copyright infringement. The AI companies and their allies continue to lobby to overturn US copyright laws so that they can have unfettered, free access to the hard work of thousands of scientists, writers, musicians, publishers, and artists. (Ironically, the same companies have fought hard to protect the copyrights of their own algorithms.)
If you think this issue does not affect you, consider this finding from the nonpartisan Congressional Research Service.
“A study by the US Patent and Trademark Office found that copyright-intensive industries—such as computer software, motion pictures, music, publishing, and news reporting—contributed $1.29 trillion to US gross domestic product and directly employed 6.6 million people in 2019” (Congressional Research Service, “Copyright Law: An Introduction and Issues for Congress,” March 7, 2023).
What happens if there are no copyright protections for the people who do all that work?
Second, under current US law, copyright protects only works of human authorship. Machine-generated text receives no copyright protection at all, which is bad for both publishers and authors. It seems obvious, but authors who want their names on a book cover must write the book themselves.
Third, we believe that a flood of AI-generated “slop,” much of it confidently confabulated, will further erode people’s ability to know what is real and factual. When people cannot tell what is true, society becomes more vulnerable to propaganda and disinformation, including the social division and political manipulation that often follow.
Many experts are rightly concerned about what might happen to our culture. We agree with neuroscientist Erik Hoel, who wrote the following.
We find ourselves in the midst of a vast developmental experiment. [The culture is] becoming so inundated with AI creations that when future AIs are trained, the previous AI output will leak into the training set, leading to a future of copies of copies of copies, as content becomes ever more stereotyped and predictable…. Once again we find ourselves enacting a tragedy of the commons: short-term economic self-interest encourages using cheap AI content … which in turn pollutes our culture and even weakens our grasp on reality (The New York Times, March 29, 2024).
Finally, generative AI is mechanistic and lacks any ethical capacity. AI companies ask the public to place blind trust in monolithic ethical “guardrails” designed by a small cadre of people who build the algorithms. Moral and ethical decisions belong with authors and publishers, who are sensitive to each book’s purpose, audience, and cultural context. There is no advantage in offloading that responsibility to a big-tech company.

