AI-Assisted Writing: A Double-Edged Sword for Authors

Writerful Books, 18 November 2024

Now that many writers are using various AI apps to aid their creative work, I believe it’s essential to discuss how these apps handle the data entered into them.

Can the Use of AI Be a Barrier to a Publishing Deal?

If you use AI to improve your manuscript, how can you be sure that a similar book won’t hit the market before yours is complete? Could using AI ultimately hinder your chances of securing a publishing deal?

Some publishers and writing competitions (e.g. The Bath Novel Awards) explicitly state that they do not accept manuscripts created, or even edited, by AI. One reason is likely quality: a manuscript produced largely by AI often lacks depth. Another may be contractual: if significant portions of a manuscript are entered into an AI application, the text might unintentionally become accessible to others. From the perspective of a publisher or production company, signing a contract for an AI-generated manuscript could be a financial risk.

Ownership of data primarily stays with the user…but that’s essentially where it ends.

Microsoft claims that users retain copyright over their data. However, the concept of “copyright” in this context can be misleading: while users have rights to their own data and to the data generated by the AI application, Microsoft also states that if this user data is used elsewhere, copyright protections do not apply.

ChatGPT takes a clear stance on user data ownership, explicitly stating that users own both the data they input into the app and the output it generates. In contrast, Google’s Gemini does not provide explicit information on this issue. (If you come across details about Gemini’s approach, I’d be interested to learn more.)

AI applications often use the data entered into them to improve their models. Microsoft, for example, explicitly states that it uses all available data. Many AI services include broad language in their terms of service, describing how they use data while leaving loopholes that protect them from liability in ambiguous situations. Notably, only ChatGPT and Gemini offer users the option to decide whether their input data is used for model training.

Despite settings that seem to improve data security, such as preventing conversations from being evaluated, this protection can be superficial. For instance, Google stores conversations for three days regardless of user settings. Similarly, ChatGPT does not clearly disclose how long data is stored or how it is used.

Good or Bad AI

Personally, I think sharing a manuscript with AI carries risks. You never truly know where the text you worked so hard to write will end up or how it might be used.

It’s true that AI applications can speed up the completion of a manuscript and even improve it, and the temptation to write more efficiently and quickly is likely to increase their use. I’m not saying you shouldn’t use AI. However, I strongly encourage you to think carefully about how you use AI, which AI application you use, and whether the risk is worth it.

It would be unfortunate if a publishing deal or a contract with a production company fell through because you used AI in writing your manuscript. Even worse would be if someone else benefited from your years of hard work. I view AI like a curious neighbor: don’t share anything you wouldn’t want your neighbor to discover.

Using AI Safely

• Do not share personal or sensitive data.
• Do not share an entire manuscript or even long sections of it.
• Avoid uploading original files.
• Check the privacy settings before using the AI application.
• Edit individual sentences or very short segments at a time.
• Use AI as a tool for information gathering.
• Let AI act as a sparring partner, for example to explore different narrative options.

Mariia Kukkakorpi is a communications professional currently writing a children’s book series and developing a feature film script. She works as a communications manager at a legal tech startup.