A cartoon robot looking shocked and scared as an angry mob of humans waves protest signs at it under a spotlight. Generated by Afitpilot.

Ya Filthy Clanker!

AI assists. Humans decide.

There’s a certain kind of comment that keeps popping up whenever AI enters anything physical.

“Hire a coach.”
“I’d never trust AI with my body.”
“It doesn’t really understand.”
“Until it’s AGI, it’s inferior.”
“Don’t put your running in the hands of a filthy clanker.”

Some of these are emotional.
Some are ideological.
Some are just funny.

But buried underneath the noise is a legitimate concern:

What happens when AI is the only thing in the loop?

That question deserves a real answer.
So here’s ours.


The criticism isn’t wrong

Let’s be clear upfront.

Fully autonomous AI coaching, with no human oversight, is a bad idea.

Not because AI is useless.
But because training is a high-stakes system.

Your body is not a spreadsheet.
It’s not a sandbox.
And it’s definitely not a prompt playground.

Fatigue accumulates.
Injuries don’t announce themselves.
Context matters.
And small errors compound.

Anyone pretending AI should just “decide everything” is either naïve or selling something.

We agree with the critics on this part.


What AI is actually good at

AI is very good at certain things:

  • Processing large amounts of training data
  • Detecting patterns humans miss
  • Tracking trends over weeks and months
  • Suggesting logical progressions
  • Flagging inconsistencies between intent and execution

In other words:
AI is excellent at analysis and suggestion.

What it is not good at is accountability.

It doesn’t feel consequences.
It doesn’t own outcomes.
It doesn’t carry responsibility.

And that distinction matters.


The real problem: “AI-only” thinking

Most of the backlash against AI coaching isn’t really about AI.

It’s about removing humans entirely.

When people say:
“I’d never trust AI with my body”

What they usually mean is:
“I don’t trust a system where no one is responsible.”

That’s fair.

Automation without accountability isn’t coaching.
It’s delegation without ownership.

So instead of trying to argue against that fear, we designed around it.


How Afitpilot actually works

Afitpilot was never built as “AI replaces coaches”.

It was built as:

AI assists. Humans decide.

Here’s what that means in practice:

  • AI generates training suggestions based on your data, feedback, and goals
  • Those suggestions are explicitly marked as suggestions
  • Every workout can be reviewed, modified, or rejected
  • On request, a human coach can step in to verify or validate the plan
  • Adjustments are traceable, explainable, and reversible

AI handles the heavy cognitive lifting.
Humans retain authority.

Always.
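The workflow above — AI proposes, humans review, every decision traceable and reversible — can be sketched in a few lines. This is a minimal illustration, not Afitpilot's actual data model; all names here are hypothetical.

```python
from dataclasses import dataclass, field
from enum import Enum


class Status(Enum):
    SUGGESTED = "suggested"   # AI output: explicitly a suggestion, not a plan
    ACCEPTED = "accepted"     # a human approved it as-is
    MODIFIED = "modified"     # a human edited it before accepting
    REJECTED = "rejected"     # a human turned it down


@dataclass
class WorkoutSuggestion:
    """One AI-generated workout, explicitly marked as a suggestion."""
    description: str
    rationale: str                               # why the AI suggested it (explainable)
    status: Status = Status.SUGGESTED
    history: list = field(default_factory=list)  # every decision logged (traceable, reversible)

    def decide(self, new_status: Status, by: str, note: str = "") -> None:
        """Only an explicit human decision moves a suggestion out of SUGGESTED."""
        self.history.append((self.status, new_status, by, note))
        self.status = new_status


# The AI proposes; the athlete (or coach) decides.
s = WorkoutSuggestion(
    description="6 x 800m @ 5k pace",
    rationale="Threshold trend up for 3 weeks; recovery markers normal.",
)
s.decide(Status.MODIFIED, by="athlete", note="Cut to 5 reps, tired legs.")
```

The design choice is the point: the AI never holds the pen on the final plan, and every override is recorded rather than silently overwritten.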


Why this matters more than AGI debates

A lot of comments fixate on one idea:

“Until it’s AGI, it doesn’t really understand.”

That’s a philosophical question.

Training is a practical one.

You don’t need artificial general intelligence to:

  • Spot under-recovery
  • Notice chronic under- or over-loading
  • Detect adherence drift
  • Flag mismatches between RPE and prescribed intensity

But you do need human judgment to:

  • Decide when to push anyway
  • Interpret life context
  • Balance risk versus reward
  • Take responsibility when something goes wrong

Waiting for AGI before training responsibly misses the point.

Good systems don’t wait for perfection.
They design for reality.
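To make that concrete: one of the checks listed above, flagging mismatches between RPE and prescribed intensity, needs no AGI at all — just arithmetic over logged sessions. A minimal sketch (the function name, the 0–10 RPE scale, and the tolerance value are illustrative assumptions, not Afitpilot's actual logic):

```python
def flag_rpe_mismatch(prescribed_rpe, reported_rpe, tolerance=1.5):
    """Return the indices of sessions where felt effort diverged
    from prescribed effort by more than the tolerance."""
    return [
        i for i, (plan, felt) in enumerate(zip(prescribed_rpe, reported_rpe))
        if abs(felt - plan) > tolerance
    ]


# Two weeks of sessions: prescribed vs. athlete-reported RPE (0-10 scale).
prescribed = [5, 7, 4, 8, 5, 7]
reported   = [5, 9, 4, 8, 7, 9]
print(flag_rpe_mismatch(prescribed, reported))  # -> [1, 4, 5]
```

Deciding what to *do* about sessions 1, 4, and 5 — back off, push through, or dig into life context — is exactly where human judgment takes over.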


“Just hire a coach” misses something important

Hiring a coach is great.
We love coaches.
We work with them.

But let’s be honest about reality:

  • Most people can’t afford 1-on-1 coaching long-term
  • Many coaches are overloaded and slow to adapt plans
  • Feedback loops are often weekly at best
  • Data is fragmented across platforms

AI doesn’t replace coaches.
It augments what coaches wish they had time to do.

And when athletes want human input?
It’s there.

Not removed.
Not hidden.
Not mocked.

Integrated.


This isn’t anti-AI or anti-human

It’s anti-absolutism.

AI-only is reckless.
Human-only doesn’t scale.

The future isn’t choosing sides.
It’s building systems where strengths compound instead of collide.

AI for analysis.
Humans for judgment.
Clear boundaries.
Clear responsibility.


Final word

If you want a system that blindly auto-generates workouts and tells you to trust it:
Afitpilot isn’t for you.

If you want:

  • Transparency
  • Control
  • Explainable decisions
  • And a human backstop when it matters

Then yeah.

No filthy clankers running the show.

AI assists.
Humans decide.
