Monday, February 2, 2026

Why Every Copywriter & Content Writer Needs an AI Usage Policy

 There should be a policy for that

Clients are comparing you to AI: weighing your perceived value against AI’s near-zero cost.

That’s why every copywriter and content writer should consider having a simple AI usage policy for client projects. It doesn’t have to be drawn up by a lawyer; it doesn’t have to be complicated; it just has to be clear.

It should answer three things:

  • Where you use AI
  • Where you don’t
  • And why that protects the client’s brand

This reframes the conversation from “Are you using AI?” to “How are you using AI to make this better?” Now you’re not competing with AI, you’re directing it.

Clients already know AI is cheap. What they don’t know is whether it’s safe, accurate, or on-brand. They’re leery of raw AI output ... they want strategy, judgment, and brand safety. An AI policy shows you’re not replacing yourself with a tool; you’re using a tool under your control. That builds trust and helps justify your fees.

Right now, AI is cheap ... controlled, intentional expertise is what still commands a premium.

Here’s a sample policy. Feel free to copy it and put it to work for you.

AI Usage Policy for [Copywriting/Content Writing] Projects

This brief policy explains how AI is (and is not) used in [my/our] work, and how [I/we] protect the quality, originality, and integrity of your brand content.

Human-led strategy and voice

    • All projects begin with human-led strategy: positioning, messaging, and brand voice decisions are made by [me/us], not by AI.
    • Your brand voice, audience insight, and offer structure are defined and maintained by a human writer throughout every project.

Where AI may be used

    • AI may be used for low-level support tasks such as: generating outline options, exploring angle ideas, reorganizing notes, or producing rough variants for individual lines.
    • AI may assist with light editing support (e.g., clarity suggestions, grammar checks, or length adjustments), always followed by human review and revision.

What AI is never trusted to do alone

    • AI is never used as an unedited “first draft” that is then simply rubber-stamped; all substantive copy is written or heavily rewritten by [me/us].
    • AI is not relied on for factual accuracy, legal or compliance language, or sensitive topics; any AI-assisted suggestions in these areas are verified and, where needed, re-crafted.

Quality, originality, and ownership

    • All final copy delivered to you is curated, edited, and approved by a human, with a focus on originality, clarity, and brand fit.
    • Any AI-assisted phrasing is treated as raw material to be transformed, not as finished content, to protect your distinct voice and reduce the risk of generic or derivative language.

Transparency and customization

    • If a project or context requires stricter constraints on AI use (for legal, ethical, or internal-policy reasons), [I/we] will adapt this policy and follow your specific requirements.
    • If you ever want details on how AI was involved in a given piece of work, [I am/we are] happy to explain where it was used and how the final decisions remained human-led.

 
_______________________


If AI is not part of your process, you can still take control of the conversation with an AI usage policy such as:

AI Usage Policy

  • [I/We] do not use AI tools at any stage of [my/our] writing process.
  • All research, strategy, drafting, and revision are done by [me/us] using [my/our] own expertise and ethical sources.
  • Every deliverable is fully original, human-created work.

  • [I/We] do not consent to [my/our] work being used for AI training or machine-learning purposes.


 

Friday, January 30, 2026

Amazon's "Ask This Book": Thumbs Up or Thumbs Down?

 


Amazon just added a new Kindle feature called Ask This Book.

Reading a book on Kindle and have a question? Type it and get an AI-generated explanation instantly ... inside the book itself. Not a quote. Not a page jump. An interpretation.

Forget who a character is? Ask. Unsure why something matters? Ask. The system explains it in its own words, without the author’s voice, and without leaving the page.

And that raises a real question for writers.

This isn’t a reference tool. It’s not helping readers find what they already read. It’s telling them what the book means. And it’s doing that without the author’s voice.

There’s no opt-out. If your book is on Kindle, this feature can explain it, summarize it, and interpret it. You don’t see the questions. You don’t see the answers. You can’t fix mistakes. You don’t get a say in how your work is framed.

So, writers, how does that feel?

Does this help readers stay immersed or does it put a machine between you and your audience? Is this a useful assist, or the start of something that slowly rewrites how your work is understood?

The Authors Guild is already pushing back (as they should), arguing that this effectively turns books into annotated editions, without permission or new terms. Amazon didn’t negotiate. They just turned it on.

Some people will say it’s accurate. Maybe it is. But accuracy isn’t really the point. The question is control, voice, and where interpretation begins and ends.

Copyright is about control over how work is reused and reshaped. This puts a machine between the writer and the reader. Even if it’s accurate, it’s no longer the author’s voice doing the explaining.

And once this becomes normal, it won’t stop here. Explanations become summaries. Summaries become condensations. Condensations become rewrites. Then something else entirely. And every step moves the reader further from the original work.


Does Ask This Book feel like a helpful tool for your readers or a line that shouldn’t have been crossed?


_________________________

For more details, check out “What Amazon’s ‘Ask This Book’ Feature Means for Authors.”



Thursday, January 29, 2026

How Confident CTAs Drive More Conversions

 

Please


You were taught to say please.

Good manners. Polite. Civilized.

All good things ... except in a Call to Action.

When you say “please” in a CTA, you’re not being courteous. You’re being hesitant.

“Please sign up.”

“Please download.”

“Please consider…”

That one word quietly tells the reader: I’m not fully convinced this is worth your time.

A strong CTA doesn’t beg. It leads. It assumes confidence. It believes in the value. It removes friction instead of adding doubt.

If what you’re offering truly helps, don’t apologize for asking.

Say what to do. Say it clearly. Say it confidently. And save “please” for the dinner table.

 

Call to Action (CTA)

Here are 16 examples of confident, no-apology CTAs ... the kind that lead instead of ask:

Get the guide

Start your free trial

Book your strategy call

Download the checklist

See how it works

Join the newsletter

Claim your spot

Watch the demo

Start risk-free today

Upgrade your plan

Read the case study

Test it for 14 days

Build your first campaign

Access the template

See pricing

Get instant access - no commitment


 

Wednesday, January 28, 2026

Poll Results: Copywriters’ Fears About AI in 2026

In an informal, unscientific poll, respondents indicated that copywriters’ biggest fears about using AI cluster around losing work, losing value, and losing trust in their own (and others’) writing. Many see AI as both a useful tool and a direct threat to how they get paid and how their craft is judged.


Poll Results: Copywriters & AI

1. Job and income security

  • Fear that AI tools will replace human-written ad copy, blogs, emails, and social posts, shrinking demand and pushing rates down.
  • Anxiety that clients will switch to “cheap” AI output and only hire writers to lightly edit, devaluing deeper strategic and creative skills.
  • Worry that entry-level and junior roles will disappear, making it harder to build a career path toward senior creative or strategy roles.


2. Being misjudged or distrusted

  • Fear of being falsely accused of using AI when they did not, especially as AI detectors often flag human work as “machine-written.”
  • Concern that clients and managers will trust AI detectors over the writer’s word, damaging professional reputation and relationships.
  • Unease that audiences may assume polished, efficient copy is “just AI,” making it harder to prove the value of expert human craft.


3. Loss of creative identity and craft

  • Anxiety that AI will homogenize tone and style, flooding channels with same-sounding content and making original voices harder to spot.
  • Fear that writers will be pushed into prompt-tweaking and editing instead of concepting, storytelling, and big-idea development.
  • Worry that constant reliance on AI will dull skills like ideation, structural thinking, and nuanced phrasing over time.


4. Ethical, legal, and IP concerns

  • Fear that their past work has been scraped to train models without consent or credit, undermining ownership of original writing.
  • Concern about accidentally publishing AI-generated material that includes plagiarism, inaccuracies, or fabricated details, with legal or brand consequences.
  • Discomfort with being asked to “just run it through AI” when the underlying data, permissions, or attributions are unclear.


5. Practical quality and workflow worries

  • Worry that AI will confidently generate factual errors or made-up case studies that slip through and damage credibility.
  • Frustration that prompting, checking, and rewriting AI drafts can be time-consuming and sometimes slower than writing from scratch.
  • Concern that clients will overestimate AI’s capabilities, expecting instant, perfect copy and compressing timelines even further.

