Great response, Jo. I have to say, conflating those who question AI's place in creative writing with racism/apartheid, AND calling them zealots for daring to say that maybe outsourcing your thinking to a large language model is not going to produce great work, is quite the reach. But if we're going there, we could also argue that AI is a coloniser, stealing the work of writers for its own benefit and consuming huge volumes of precious water in the process.
Thanks Zoe! Yes, I agree his analogies were not very grounded, and I don't think my position as a whole was overly zealous, but I can see how someone embracing AI would think differently. I agree AI profiteers are predatory at the moment, literally profiting from other people's work (and trying to make that legal), wiping out jobs en masse, and putting huge pressure on already scarce resources. I really hope the powers that be are working on a tax system that will compensate society as a whole for what we're on the verge of losing. I don't know what's in the works in that regard, if anything. It all seems to be moving too quickly for any safeguards or compensation structures to be put in place. But I have my fingers crossed. Of course there will be advantages for many as a result of these tools, but so many others will be floundering, and the big AI honchos will be raking in huge sums. They need to balance the ledger somehow.
I felt the idea of too rigid a division between wrong/right when it comes to AI-assisted writing being comparable to racism was a bit of a stretch. For one thing, there are some serious 'wrongs' on which LLMs were built and continue to be developed. I'm not completely anti-AI. I use some forms of AI in my admin and editing processes. But I'm anti 'faking it' and I'm (I think justifiably) defensive about my career as a writer. It took about 35 years for me to get here and yes, I'm scared my hard work will all have been for nothing if we can't get readers to agree that human-authored work is more valuable than machine-authored work. And I'm also scared about how fast it's all happening and how little discussion, regulation and pause are happening before it becomes completely mainstreamed. Lastly, I'm concerned that the speed of change is deeply tangled up with the rich getting richer. These questions aren't just abstract. There are tech billionaires pushing this uncritical acceptance of gen-AI and once again, the climate and planet are the losers. And creatives are also losing - which is nothing new.
Absolutely, Sasha—the billionaires who are just going to keep getting richer really need to be contributing financially to the toll the whole thing will take on the planet and on the people becoming redundant in workplaces en masse.
Even more thoughtful, calm and nuanced than the first piece.
You are making some important points.
I get that there is a point of blurring the line where you might have glanced at an AI summary while googling to clarify a small piece of research, for example.
But I think it might be useful here to borrow the justice system's language of what a "reasonable person" would expect.
As in, I do not think a reasonable person would define "a 100 per cent human authored piece of work" as one in which the human somehow managed to prevent their eyeballs landing on anything AI generated during the five years it took them to write the book.
But a reasonable person must surely call it using AI if someone has intentionally bounced ideas, specifically regarding the book, off "Claude".
The difference between generative and non-generative AI also has to be in those definitions somewhere.
Thanks Emma! Great points. The reasonable-person test is a good one here, and I need to look more into generative vs non-generative AI. I have kept my head in the sand a bit on this issue, just writing away in my own little neck of the woods, so gen vs non-gen isn't something I've actually teased apart yet. I'm still stunned any time I'm reminded how intimate and co-dependent a lot of people are with their personal and professional chatbots already. I am SO wary, but so many people are ALL IN. Wild times.
There is something incredible about the first time you see just what it can do - pouring out a formatted and plausible summary of something in seconds. It's insane.
BUT once you start to pick it apart, you see repetition and weird phrasing and a hazy lack of clarity, and realise that had you spent time writing it yourself, you would have ironed out those issues through your writing process and better understood what you're trying to say.
I think it's fine for basic thought processes (though there is the water and all the other considerations), e.g. give me three ideas for what to cook for dinner with these ingredients - but even then, why not think about it? Seeing dementia up close makes me very wary of handing over that constant daily brain exercise to a machine wholesale.
But I feel like we're howling into the void here.
Yes, I feel you're probably right about the void-howling. But that's a good point about the impact on human cognition moving forward. If we're outsourcing all our analytical and memory and summarising skills, and getting the bots to speak for us, what will become of our brains?
I'm the same, Jo! I don't even know how to find ChatGPT 😆 but I'm just ... completely fine with that ...
Same! Quite happy to leave it in its box.