AI've Got Questions

Human or AI? Jon Gillham Talks Content Integrity and Why He Built Originality.ai

Stacey Epstein Season 1 Episode 8

In this conversation, Stacey Epstein interviews Jon Gillham, founder and CEO of Originality.ai, discussing the implications of AI in content creation. Jon shares his journey from running an SEO agency to developing a tool that ensures transparency in AI-generated content. They explore the risks associated with AI content, the importance of understanding these risks, and the philosophical debates surrounding AI's role in writing. The discussion also touches on the target market for Originality.ai and the future of content creation in a world increasingly influenced by AI.

Stacey Epstein (00:01.519):
Today on our show, I'm excited to welcome Jon Gillham, founder and CEO of Originality.ai. Jon, welcome to the show.

Jon (Originality.ai) (00:13.132):
Thanks, Stacey. Thanks for having me.

Stacey Epstein (00:14.925):
Yeah, it's great to meet you. Let's start by learning a little about you. I always love hearing the journeys of entrepreneurs and founders. Give us a bit of your background.

Jon (00:28.386):
Sure, I’ll give you the short version. I went to school for engineering and worked at a refinery, but I always wanted to get back to my hometown, Collingwood, Ontario. It’s a place where we love to ski, bike, and enjoy the outdoors. The skiing is pretty mediocre—but for Ontario, it’s not bad! From there, I built a portfolio of online businesses, always revolving around content that ranks on Google. I built and sold an SEO-focused agency, and then I saw the wave of generative AI coming—even before ChatGPT—and started building Originality.ai to meet that need.

Stacey (01:06.681):
So it was an SEO agency?

Jon (01:12.537):
Exactly.

Stacey (01:23.821):
So tell us more about the need you saw. What was the problem you recognized that AI could help solve?

Jon (01:30.540):
Inside our agency, we had a division—this was before ChatGPT—that transparently used AI to generate content for clients at a reduced rate. We also had a human-written content division. Clients started asking, “How do you know your human writers aren’t just using AI?” We had a policy, sure, but no real verification process. That’s what sparked the idea for Originality.ai: to drive transparency between the writer, the agency, and the client, so everyone could be confident about whether AI had been used. We actually launched the weekend before ChatGPT did.

Stacey (02:24.559):
Talk about timing!

Jon (02:28.406):
Right? At first, we were nervous because our tool had been trained on output from existing models. But it turned out ChatGPT was mostly a wrapper around GPT-3 and 3.5, so our detection remained accurate—which was a relief.

Stacey (02:49.495):
Okay, so give us the elevator pitch for Originality.

Jon (02:55.950):
Sure. If you’re a marketer paying $100, $300, even $1,000 for a piece of content, you’re probably not happy if it turns out that the writer just pasted it from ChatGPT in 10 seconds. There are real risks associated with publishing AI-generated content—both in terms of Google visibility and brand reputation. Our goal is to make it irresponsible not to check content before publishing.

Stacey (03:29.015):
That’s an important conversation. Content teams are leaning heavily into AI to generate content quickly and at scale. The logic is: faster content equals better lead gen. So let’s dive into the risks. If I play devil’s advocate, I’d say I might lose a bit of ranking, but overall, I’ll generate more leads. Why is it risky to use AI for content?

Jon (04:24.526):
Great question. It’s all about understanding and managing risk. Google is aggressively targeting AI-generated spam and mass-produced content. They’ve been penalizing and even de-indexing websites that rely too heavily on it. We’ve had clients come to us saying, “Our site got hit, but we didn’t use AI!” We run the content through our detector and discover that yes, their writers had used AI—they just didn’t know. The business paid for the content, published it, got penalized, and lost all its traffic.

There's also reputational risk. Take the Chicago Sun-Times, which recently published a summer book list where 10 of the 15 books were hallucinated. Total reputational mess. That could’ve been avoided with a 10-second scan using Originality. So you’ve got both visibility and trust issues—big ones.

Stacey (06:51.319):
So should content teams just avoid AI altogether?

Jon (06:59.726):
Not necessarily. It’s about who makes the decision. It shouldn't be the writers deciding whether to use AI—it should be the risk owner: the marketing lead, the business owner, whoever’s responsible. And then you need tools, policies, and training in place to manage that. We use AI in specific places on our own site, but it's intentional and we're transparent about it.

Stacey (08:10.935):
So you’re saying it’s not about banning AI—it’s about managing risk and having clear policies. And Originality helps enforce that?

Jon (08:20.963):
Exactly.

Stacey (08:36.495):
Okay, let’s go deeper. What if I know exactly what I want to say—it’s my original thought—but I use AI to clean it up into polished paragraphs. What does Originality do in that case?

Jon (09:14.766):
It depends on how much AI was used. We have different detection models. If your policy is “no AI ever,” we offer a strict detection mode. But if light AI editing is acceptable, we have a model for that too. Our system can usually tell if 90% is human and 10% is AI-edited. It flags it accordingly and gives transparency without over-penalizing.

Stacey (10:44.781):
Got it. Now here’s a philosophical one: why should it matter if someone uses AI to write a LinkedIn post, for example? If it’s their message and their intention, does it really matter?

Jon (11:42.574):
This is something society is still figuring out. If I’m reading a product review for a child’s bike seat, I want that review to be real—not fabricated by AI. On LinkedIn, we found 60% of long-form posts are AI-generated. Personally, if I could turn off all AI-written posts, I would. The platforms don’t mind—they want more content. But that doesn’t mean it’s the best content. For me, human perspective matters.

Stacey (13:32.013):
It’s an interesting monetization idea. What if LinkedIn let users pay to only see human-generated posts?

Jon (13:41.016):
Exactly. Or even a browser extension that filters for human-authored content. We don’t know where it’s going, but the public backlash is definitely brewing.

Stacey (14:00.053):
Totally. AI makes marketing more efficient, but there's friction when it comes to the content humans actually want to consume. Especially when it comes to brand trust.

Jon (14:51.224):
Yes—and there are great use cases for AI, like pulling stats together for content. We use AI to format and embed validated stats. We’re transparent, and it works well. But when a company tries to pass off AI content as a personal human message, and readers realize it—that’s where it backfires.

Stacey (16:17.283):
Tell us about your customers. Who’s your target market?

Jon (16:33.718):
Our core market is marketers—anyone publishing content on the web. Whether you’re a writer, editor, or agency, we help create transparency. We do have thousands of academic users, but that’s a tricky use case. Detection tools are accurate, but not perfect. In academic settings, you need near-perfection for disciplinary actions, which makes it a less ideal environment for our tool.

Stacey (18:11.779):
That makes sense. Do you imagine a future where sites display a “Checked by Originality” badge?

Jon (18:24.494):
It’s on our roadmap. But we’re cautious—it might send the wrong message, like “AI is bad.” We’re not anti-AI. We’re pro-transparency. So if AI is used intelligently and disclosed, that’s great. We want to support that—not stigmatize it.

Stacey (18:58.873):
I really like that approach. You’re not taking sides. You’re just helping companies enforce whatever policy they choose.

Jon (19:21.720):
Exactly. Our stance is fairness and transparency.

Stacey (19:31.063):
How easy is it to use? Walk us through it.

Jon (19:36.428):
It’s very similar to a plagiarism checker. You can paste or upload content, or scan an entire website. We offer bulk scanning too, to help you see how much AI-generated content might already exist on your site.

Stacey (20:02.753):
And who are your main users?

Jon (20:07.942):
About 60% are web publishers or freelance writers. We sell to both individuals and companies.

Stacey (20:22.735):
I’ve learned so much. It’s a fascinating time to be in marketing and content. Any thoughts on where this is all headed?

Jon (20:47.342):
It’s evolving quickly. I think authorship will become more important—people will want to know who stands behind the content. I also think we’ll see LLMs handling the “informational” content layer, and websites focusing more on transactional content and human engagement. The marketing funnel is going to compress dramatically.

Stacey (21:43.769):
Agreed. Thanks so much for joining the show. I assume people can find you at originality.ai?

Jon (21:53.782):
Yes—originality.ai, or connect with me on LinkedIn: Jon Gillham. Or email me directly at jon@originality.ai.

Stacey (22:00.687):
Awesome. We’re in the off-season for skiing, but winter’s not far off—maybe we’ll see you on the slopes. Take care!

Jon (22:08.184):
Sounds good. See you!