Against Purity Politics: Why the AI Conversation in Indie Publishing Needs to Grow Up



Written by L.L. Shore

Assisted by computers probably made with child labour, Ritalin (thanks big pharma), Big Tech (Microsoft Word), AI tools (ProWritingAid, Copilot), coffee probably unethically sourced in Colombia and milk produced at the cost of clean drinking water.

The conversation around generative AI use in indie publishing has become predictably narrow. We should expect this from writers who can tie up messy plots and complex characters into a tidy bow that makes you, the reader, want to buy another book.

The past two weeks have been a riveting slurry of ‘AGAINST THE USE OF AI’ posts, a silent battle cry shouted into the deep ether of forgotten digital web trash, from which, I’d wager, AI got some of its teaching.

You’re “against AI,” or you’re “part of the problem.”

There is no middle.

And doesn’t it feel good to have someone to blame?

Because then it’s not my problem, it’s their problem. And when it’s their problem, it’s their responsibility to fix it. And we have all fallen in love again with the classic consumerist, imperial colonial myth.

That each individual is responsible for their own success or demise in this world.

This behaviour dangerously collapses critical dialogue, and elevates statements fired through technology at people over authentic engagement with people.

And the reality is messier than a chic, visual feast of Instagram carousel posts, catchy repeatable slogans and trending algorithm music. If we care about creators, climate, culture, economy and ethics, then messy is where we should start.


What People Are Actually Afraid Of

Let me start by being honest about why so many authors and artists are deeply uncomfortable with generative AI. Those fears are valid, and there are mortal wounds that no one is fixing.

Most purist objections are not about hating new tools. Creatives know evolution and innovation in their bones. Like can recognise like. The four pillars of objection I can distil in my uniquely antagonistic mind are valid, and they make sense:

  • BIRTH WOUNDS (How AI was created and trained)
    Large-scale AI models were trained on enormous volumes of creative work. The people writing the code reflected a white, male majority. Regulation was debated but barely implemented because of the ‘innovation’ benefit.

  • MOB DONS OF THE AI WORLD (Power & Control)
    Yes, we are talking about organised crime on a different scale. Tech bros and billionaires who can literally buy their way into rooms or out of trouble. We are talking about extreme concentrations of power with minimal accountability. Data privacy, harm impacts, and how the ‘thing’ interacts with humans are all controlled here.

  • THE THREE C’S (Consent, Compensation, Craft)
    Creators didn’t consent to their works being used to teach a tool.
    Creators weren’t compensated for use of that work to teach a tool.
    Creators worry about the diminishing value of craft if oversaturation of AI created art floods the market.

  • IMPACT IMMORTALITY (Environment, People, Marginalised & Under-represented)
    Data centres require extreme amounts of water, an already over-pressured natural resource that is not infinite. Location, labour and communities are all impacted by mega data centres. Ethical dilemmas arise over who is accountable when an individual user is harmed by their use of AI. Barriers remain for Indigenous people, women and ethnic minorities to access, influence and shape this tool when we already start behind the start line. Add data sovereignty, privacy of information and protection issues on top.

Combine this with a creative economy that already underpays and undervalues labour, rewards volume over depth, and a community of creators quietly hoping for the lightning strike. The viral post. The exploding page reads. The movie deal. The actors who “loved” a book they haven’t read.

That would trigger an amygdala response in any normal person’s nervous system. Speaking of that tiny little part of our brain, let’s go back to biology 101. When we have a fear-based reaction, the amygdala tends to hijack all thought. For very good reason – ENSURE SURVIVAL.

But your prefrontal cortex is switched off – yes, the one responsible for critical thinking, rational decision-making and a range of other important things. It has gone to sleep while its cousin, the amygdala, takes its turn in the spotlight.

And while the fear might be helping us survive, it wasn’t designed to attack large, messy and enduring problems such as AI. It was designed to save us from the lion about to eat our face.


You’re “against AI,” or you’re “part of the problem.”

We have switched the conversation from being “I am informed and I choose not to use this” to “anyone who uses this is unethical, should be cancelled AND burnt at the stake for treason.”

It feels good to blame, to say we aren’t painted with the same brush. It feels powerful.

It’s a mistake. Turning a structural and systemic problem into an individual sin is one of capitalism’s favourite tricks.

Point a finger at thy neighbour and none of us will notice the power pouring into the pockets of the people above us.


Tools are not moral agents. Systems are.

A pen doesn’t decide whose stories get amplified. A keyboard does not set royalty rates. A word processor doesn’t design platform algorithms. And generative AI, for all its scale and danger, is still a tool layer sitting on top of much larger systems.

  • Political turbulence
  • Ongoing colonisation
  • Broken labour markets
  • Unregulated industry

Blaming individual authors for navigating those systems does not meaningfully challenge the systems themselves. A single indie author experimenting with AI for brainstorming does not hold the same moral weight as a corporation building models on scraped data and then selling them to the highest bidder.


Intent Still Matters

One of the most common claims I see is that ‘AI removes authorship.’ Authorship has always been about intent and storytelling, not who wrote the words on the page. Let’s be honest here: ghostwriters are out of a job if this is the path we choose to define authorship.

Who chose the story. Who shaped the characters. Who decided the plot or themes and eventually accepted responsibility for what ended up on the page.

A human using a tool does not erase human authorship any more than spellcheck does. What matters is who is deciding, directing, curating, editing and shaping. If it’s a person, then a person is still authoring. This doesn’t make every AI use ethical, but it does mean that authorship isn’t extinct. Nor are ghostwriting and AI ethically identical, but both demonstrate that authorship can be messy.

The ethical question is not: did a machine generate text?

It becomes: Who is exercising creative control? Who accepts responsibility for the result?

AI cannot answer those questions; it is and will always be a person who can. That is where authorship lives.


Accessibility Is an Ethical Issue Too

Some creators use AI because they are inexperienced. Some use it because they are disabled.

They have ADHD. Or chronic illness. Or brain fog. Or limited energy. Or no community around them. Or five jobs, solo parenting two chaos children, with writing helping to fill the cup that is constantly being drained by everyone else.

Access to creative expression has never been equal, and if a tool helps budding, seasoned or future creators reach the start line the rest of the world already occupies, that matters.

Ethics that ignore material conditions tend to only serve people with privilege.


The Market Was Already Collapsing

AI did not invent:

  • Amazon’s race to the bottom pricing
  • Algorithmic discoverability or viral sensationalism
  • Subscription-model devaluation
  • Content oversaturation

Those were designed by humans, and those systems were degrading creative labour long before large language models went mainstream. AI is adding fuel to an already burning fire. And fighting each other over how much fuel you’re adding does nothing to put out what started the fire in the first place.


Environmental Reality Check

If environmental harm is part of this objection, that conversation needs to be honest about individuals, scale, industry and culture.

Generative AI uses electricity and water. Data centres have footprints. Fresh water is a finite, already over-pressured resource. Unless someone has found another critical life source to replace it, we as humanity have been on a collision course with being erased by thirst for generations, thanks to the following:

  • Fossil-fuel transport
  • Industrial agriculture
  • Global shipping
  • Fast fashion
  • Overconsumption culture
  • Profit over people and planet mentality

A lifetime of daily petrol driving can generate approximately 220 tonnes of CO₂-e per person. The cumulative effect of how we live, work and play in this world is the problem. AI is, of course, one of the comorbidities.
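That 220-tonne figure holds up as rough back-of-envelope arithmetic. The numbers below are assumptions chosen for illustration (a commonly cited figure of roughly 4.6 tonnes CO₂-e per year for a typical petrol passenger car, and an assumed 48-year driving lifetime), not sourced statistics:

```python
# Back-of-envelope check of the "~220 tonnes CO2-e per driver lifetime" figure.
# Both inputs are illustrative assumptions, not sourced data.
annual_tonnes = 4.6   # tonnes CO2-e per year for a typical petrol car (assumed)
driving_years = 48    # adult driving lifetime in years (assumed)

lifetime_tonnes = annual_tonnes * driving_years
print(round(lifetime_tonnes))  # -> 221, in the ballpark of the ~220 tonnes quoted above
```

Change either assumption and the total shifts with it; the point is the order of magnitude, not precision.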

If environmental harm is our metric, then it has to be applied consistently. To transport. To food systems. To energy and consumption. Not just AI.

It also looks like advocating locally for climate adaptation. Participating in planting and restoration days. Learning from Indigenous land management practices. Questioning our reliance on Amazon and Kindle.

This is messy and complex, and if we only call out AI while taking no action in any other part of our lives, we risk empty virtue signalling rather than leaving future generations a world that’s burning a little less.

Otherwise, we’re just picking the easiest target.


Harm Reduction Beats Absolutism

Some creators, including myself, draw boundaries like:

  • No publishing raw AI output
  • No mimicking living authors
  • No using AI to replace core creative decisions
  • Using it only for brainstorming, fact checking, outlining, or technical dumping

It’s not blindly drinking the Kool-Aid. It’s taking a tool, knowing it’s not perfect, and trying to minimise impact while using it.

Refusal is one strategy. Conscious choice with guardrails up is another. Neither makes someone morally superior nor does it equate to burning at the stake for treason. Indie authors are already struggling enough.

Public pile-ons and pre-emptive boycotts don’t dismantle systems. They punish individuals.

And there is always a human being, with real labour and real vulnerability, on the other side.


The Real Fight

If we want meaningful change, the targets are not indie authors. They are:

  • Data consent laws
  • Collective licensing frameworks
  • Transparency requirements
  • Environmental impact standards and tight, enforceable regulations
  • Anti-trust enforcement
  • Public accountability 

The pressure needs to be system level, not in comment sections cutting off noses to spite faces.


My Position

I don’t think generative AI is harmless. I don’t think it’s evil incarnate either. It exists inside a deeply broken and hurting system that continually perpetuates harm.

I think creators deserve protection, and that consent and compensation matter. I think our great-grandchildren deserve a better world than the one we are handing them.

And I also think, if we are serious about making change, then we don’t start movements that eat their own. Because they eventually starve.

If you want zero AI use, then I’d start by asking each author about their use of it in their work, and make a personal decision about whether you are comfortable reading it. No judgement either way.

Don’t boycott because you see a dash and it screams AI. Get curious, ask questions. Set personal boundaries and make informed choices.

Most indie authors love to engage in thoughtful, respectful dialogue on real world issues, such as AI. We aren’t always going to agree on everything and that’s okay. Because agreeing on everything would have made humans extinct a long time ago.

Your prefrontal cortex is begging to jump online; don’t let your amygdala hijack its glory.