ChatGPT prompts for SEO: practical library + method (no fluff)

Library of SEO prompts for ChatGPT + Tacmind framework, CITE scoring and a 30-day plan to improve rankings and visibility in AI Overviews.

Updated on

February 20, 2026

Pablo López

Inbound & Web CRO Analyst

Created on

February 19, 2026


If you’re searching for ChatGPT prompts for SEO, you’re almost certainly in one of two phases: either you’re testing random prompts and getting inconsistent results, or you already use it daily but lack a system to make it reliable, measurable, and publishable.

This guide is for the second: a copy-paste prompt library, plus a method so what comes out of ChatGPT meets SEO (crawl/index + policies) and is also easy to select/cite in AI experiences (AI Overviews, source-based answers, etc.).

Why do your prompts “work sometimes” and not others?

Because a prompt without a process usually fails in one (or several) of these areas:

  • Lack of verifiable context: ChatGPT fills gaps when you don’t give data (Search Console, logs, URL list, real brief).
  • Non-operational output: it gives “nice” ideas, but not deliverables (titles, H2s, entities, schema, interlinking).
  • Risk of policy violations: the model can suggest tactics that verge on spam or manipulation.
  • Not designed to be “citable”: even if you rank, your content may be hard to extract, attribute, or link in AI-driven engines.

Tacmind framework: PromptOps SEO (B-C-E-V-P)

This is the framework we use to turn prompts into a repeatable system:

  1. Brief: what goal, for whom, and what “done” means (KPI, page, query, stage).
  2. Context: concrete inputs (SERP notes, URLs, structure, constraints, language, brand).
  3. Evidence: pasted data (GSC extracts, logs, content inventory, real FAQs).
  4. Verification: QA prompts (sources, consistency, policies, cannibalization, intent).
  5. Publish: output in CMS-ready format + final checklist.

This connects to an uncomfortable reality: if your page isn’t crawlable/indexable and doesn’t meet Search Essentials/spam policies, no prompt will fix it.
We tested prompts like “create an SEO article” and the first draft looked perfect… until we reviewed it: unsupported claims and sections that didn’t match the real intent.
We switched to PromptOps: we pasted SERP data + constraints + a claim-verification block.
Result: less “pretty text,” more publishable assets and a much faster editorial review.

— Pablo López, Tacmind
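The five stages can be operationalized as a small template assembler that refuses to run an incomplete prompt. This is a minimal sketch, not a Tacmind specification; the section names mirror B-C-E-V-P, but the field contents and helper are illustrative:

```python
# Minimal sketch of a B-C-E-V-P prompt assembler.
# Structure and example values are illustrative, not a Tacmind spec.
SECTIONS = ["brief", "context", "evidence", "verification", "publish"]

def build_prompt(parts: dict) -> str:
    """Assemble a prompt from the five PromptOps sections.

    Raises if any section is missing or empty, so an incomplete
    prompt never reaches the model (the usual cause of
    prompts that "work sometimes").
    """
    missing = [s for s in SECTIONS if not parts.get(s)]
    if missing:
        raise ValueError(f"Missing sections: {', '.join(missing)}")
    blocks = [f"## {s.upper()}\n{parts[s].strip()}" for s in SECTIONS]
    return "\n\n".join(blocks)

prompt = build_prompt({
    "brief": "Goal: lift CTR on /pricing. Done = new title + meta approved.",
    "context": "B2B SaaS, EN-US, brand voice: direct, no superlatives.",
    "evidence": "GSC last 28d: 12,400 impressions, 1.1% CTR, avg pos 6.3.",
    "verification": "Flag any claim that needs a source. No invented data.",
    "publish": "Output: 3 title tags (<60 chars) + 1 meta (<155 chars).",
})
```

The point of the hard failure on missing sections is the same as the framework's: if you can't fill in Evidence, you aren't ready to prompt yet.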

How to prioritize prompts (and not get lost in an endless list)

You don’t need 200 prompts. You need 15–25 that move metrics and are easy to run with real data. For that, I use a simple scoring system: the CITE Score.

Score each criterion from 0 to 3:

  • C (Context): Does it include concrete inputs (URL, SERP notes, audience, constraints)? 0 = vague / 3 = fully specified
  • I (Impact): Does it directly affect rankings, CTR, indexing, or conversion? 0 = marginal / 3 = core
  • T (Traceability): Can the output be verified (data, sources, checklist, tests)? 0 = subjective / 3 = auditable
  • E (Effort): How much work does it take to execute well each time? 0 = high effort / 3 = low effort

Prioritize prompts with CITE ≥ 9. The rest goes to the backlog.
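The prioritization rule is mechanical enough to script. A minimal sketch, assuming you record each prompt's four 0–3 scores as a tuple (names and input format are my own, not part of the CITE definition):

```python
# CITE scoring sketch. Note that Effort is scored inverted
# (0 = high effort, 3 = low), so a higher total always means "run it".
def cite_score(context: int, impact: int, traceability: int, effort: int) -> int:
    for v in (context, impact, traceability, effort):
        if not 0 <= v <= 3:
            raise ValueError("each criterion is scored 0-3")
    return context + impact + traceability + effort

def prioritize(prompts: dict) -> list:
    """Return prompt names with CITE >= 9, best first; the rest is backlog."""
    keep = {name: cite_score(*scores) for name, scores in prompts.items()
            if cite_score(*scores) >= 9}
    return sorted(keep, key=keep.get, reverse=True)

candidates = {
    "intent-analysis": (3, 3, 3, 2),   # CITE 11: keep
    "random-ideas":    (1, 2, 0, 3),   # CITE 6: backlog
}
shortlist = prioritize(candidates)
```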

If you also want to measure “visibility in AI engines” (not just rankings), you need prompt- and source-level tracking: that’s where a dedicated GEO/AEO stack makes sense.

SEO prompt library for ChatGPT (ready-to-copy templates)

Rule of thumb: every prompt starts with role + goal + inputs + output format + checks.

1) What does the user really want? (intent + SERP format)

Act as an SEO analyst.
Target keyword: [KEYWORD]
Country/language: [EN-US]
Business type: [B2B/B2C/ecommerce/saas/local]
Constraints: don’t invent data; if info is missing, ask.

Task:
1) Classify intent (informational/transactional/navigational/investigation).
2) Suggest the winning content format (guide, category, comparison, glossary, template, tool page).
3) List 8–12 must-have subtopics (H2) aligned with intent.
4) Flag 5 misalignment risks (what people expect vs. what we usually write wrong).
Output in bullets + one proposed H1.

This keeps you aligned with people-first content and avoids writing for the robot.

2) How do I turn one keyword into a content map (topic cluster)?

Act as an SEO content strategist.
Core topic: [TOPIC]
Primary keyword: [KEYWORD]
Goal: build a cluster to earn topical authority without cannibalization.

Give me:
- 1 pillar page (suggested URL + purpose)
- 8–12 satellites (title, intent, focus keyword, differentiating angle)
- Internal linking rules (pillar ↔ satellites)
- E-E-A-T signals the cluster should include (proof, data, authorship, criteria)

If you’re working on AI visibility, this map becomes even more important: AI systems tend to “retrieve” pieces by subtopics and recompose answers.

3) Which entities and terms must I cover to be “citable”?

Act as an entity-focused SEO editor.
URL to optimize: [URL]
Focus keyword: [KEYWORD]
Audience: [AUDIENCE]
Product/service: [PRODUCT]

Generate:
- A list of entities (people, concepts, standards, tools) a search engine would expect to see.
- For each entity: why it matters + where to place it (H2/H3) + a “liftable” definition sentence (1–2 lines).
- A final checklist to ensure clarity and attribution (definitions, examples, boundaries, sources).
Don’t invent business-specific entities if I haven’t provided them.


4) Publishable editorial brief (so production isn’t improvisation)

Act as a content lead.
Keyword: [KEYWORD]
Page goal: [GOAL]
Audience: [AUDIENCE]
Offer: [OFFER]
Tone: clear, no fluff, practical.

Deliver a brief with:
- Main promise (1 sentence)
- Differentiating angle (3 bullets)
- Full H2/H3 structure
- Requirements: examples, checklist, FAQ, soft CTA
- “Don’t do”: things we must NOT claim without sources or that verge on manipulation
- Key definitions (5)
Output in a format ready to hand to a writer.


5) On-page optimization (without keyword stuffing)

Act as an on-page SEO.
Current text (paste excerpt):
[PASTE 800–1500 words]

Goal:
- Suggest H1/H2/H3 improvements
- Rewrite the intro to match intent
- Add missing sections (max. 3)
- Propose 6–10 natural internal-link anchors

Rules:
- Don’t repeat the keyword unnaturally
- Prioritize clarity and usefulness
- Flag “which claims need a source”


6) AEO: direct answer + expandable block (ideal for AI-driven engines)

Act as an AEO specialist.
Question we want to answer: [QUESTION]
Context: [CONTEXT]

Give me:
1) Direct answer (40–60 words, 1 paragraph)
2) Extended answer (200–300 words with steps)
3) List of 6 related FAQs (question + 1–2 line answer)
4) Signals to improve extraction (definition, criteria, examples, exceptions)

AEO and GEO aren’t the same: AEO is “extractable answer”; GEO is “verifiable, citable evidence” around it.

7) What schema makes sense here? (with a policy checklist)

Act as a technical SEO consultant.
Page type: [ARTICLE/PRODUCT/CATEGORY/SERVICE/LOCAL]
Visible elements on the page: [LIST: FAQ, author, rating, price, etc.]

Task:
- Recommend appropriate structured data types (only if applicable)
- List eligibility requirements and “don’t do” items (hidden content, misleading markup, etc.)
- Provide a validation checklist (tests + consistency with visible content)
Don’t generate markup that isn’t supported by real, visible content.

This helps you avoid structured data policy issues and quality problems.
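As an illustration of "markup backed by real, visible content": the sketch below builds FAQPage JSON-LD (a real schema.org type) only from question/answer pairs that are actually rendered on the page, and rejects empty ones. The input structure and validation rule are my own framing of that policy, not an official implementation:

```python
import json

def faq_jsonld(visible_faqs: list) -> str:
    """Build FAQPage JSON-LD from (question, answer) pairs that are
    visibly rendered on the page. Empty pairs are rejected so the
    markup never claims content the page doesn't show."""
    if not visible_faqs or any(not q.strip() or not a.strip()
                               for q, a in visible_faqs):
        raise ValueError("only mark up visible, non-empty Q&A pairs")
    data = {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": q,
                "acceptedAnswer": {"@type": "Answer", "text": a},
            }
            for q, a in visible_faqs
        ],
    }
    return json.dumps(data, indent=2, ensure_ascii=False)

markup = faq_jsonld([
    ("Do prompts replace GSC data?", "No. Paste real data; the model interprets it."),
    ("What makes content citable?", "Clear definitions, criteria, steps, and sources."),
])
```

Still validate the output against Google's structured data tests before shipping; generating syntactically valid JSON-LD is the easy half of eligibility.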

8) Internal linking with intent (and without inventing URLs)

Act as an information architecture SEO.
URL inventory (paste list):
[URL 1]
[URL 2]
...

Goal: improve discovery and distribute authority to: [TARGET URL]

Give me:
- 10 recommended internal links (source → destination)
- Suggested anchor (natural, not forced exact match)
- Reason (intent and semantic relationship)
- 3 rules to avoid cannibalization

We tested “add internal links” and ended up creating cannibalization between two very similar guides.
We adjusted the prompt: we forced intent classification per URL and defined a single “canonical page” per subtopic.
Result: cleaner architecture and more consistent signals for Search and for AI extraction.

— Pablo López, Tacmind
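The "one canonical page per subtopic" rule can be checked mechanically before you add a single link. A minimal sketch, assuming you've already classified each URL by subtopic and intent (the input format is my assumption):

```python
# Sketch of "one canonical URL per subtopic": group URLs by
# (subtopic, intent) and surface every pair claimed by more than
# one URL; each conflict needs a single canonical page.
from collections import defaultdict

def find_cannibalization(inventory: list) -> dict:
    """inventory items: {"url": ..., "subtopic": ..., "intent": ...}.
    Returns {(subtopic, intent): [urls]} for every contested pair."""
    claims = defaultdict(list)
    for page in inventory:
        claims[(page["subtopic"], page["intent"])].append(page["url"])
    return {key: urls for key, urls in claims.items() if len(urls) > 1}

conflicts = find_cannibalization([
    {"url": "/guide-a", "subtopic": "internal linking", "intent": "informational"},
    {"url": "/guide-b", "subtopic": "internal linking", "intent": "informational"},
    {"url": "/pillar",  "subtopic": "seo prompts",      "intent": "informational"},
])
```

Run this on the URL inventory before pasting it into the interlinking prompt; anything in `conflicts` should be merged or differentiated first.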

9) Verification prompt (anti-hallucination + policies)

Act as a critical reviewer.
Draft: [PASTE TEXT]

Checklist:
- Flag “contestable” claims that require an external source.
- Mark potential policy issues (spam, manipulation, unverifiable promises).
- Detect internal contradictions.
- Propose concrete changes to make the text more “people-first”.
Output: prioritized list + rewrite of the 2 riskiest paragraphs.

(Useful when you work with content sensitive to quality and spam policies.)

10) Prompt to “optimize for AI features” (Google AI Overviews/AI Mode)

Act as an editor focused on AI features.
Topic: [TOPIC]
Current draft: [PASTE TEXT]

Goal:
- Rewrite 3 blocks so they’re clear, verifiable, and easy to cite:
 a) definition
 b) criteria/decision
 c) steps/process
- Add 2 mini-examples with boundaries (when it does NOT apply)

Rules:
- Don’t invent data
- Use short, precise sentences
- Keep consistency with the rest of the article

To be eligible for AI features, Google points to the same foundation: technical requirements, SEO best practices, and measurement.

30-day plan: from random prompts to a system that scales

Days 1–7: technical base + inventory

  • Review indexing, crawling, and blocks (robots/noindex/canonicals).
  • Inventory URLs by intent and funnel stage.
  • Select 10 prompts with CITE ≥ 9 (you have candidates above).

Days 8–14: on-page quick wins + AEO

  • Rewrite intros and missing sections on 5–10 priority URLs.
  • Add “direct answer + extended + FAQ” blocks where it fits.
  • Implement a claims checklist (sources required).

Days 15–21: architecture + evidence

  • Interlinking by clusters (pillar ↔ satellites).
  • Strengthen verifiable signals: definitions, criteria, boundaries, sources, authorship.

Days 22–30: measurement + iteration

  • Define KPIs: rankings, CTR, conversions and (if applicable) presence in AI answers.
  • Adjust prompts based on what breaks: insufficient inputs, output format, QA.

If you want to measure continuously in AI engines (prompts, mentions, alternatives, and prioritized actions), you can rely on a dedicated AI visibility platform.

Common mistakes (and how to fix them)

  1. “Write me an article about X” → Add intent, audience, structure, and output format.
  2. Not pasting data → Include GSC excerpts, URLs, SERP notes (even if manual).
  3. Publishing without verification → Use the anti-hallucination prompt and flag claims.
  4. Optimizing for keywords, not questions → Add AEO blocks (definition, criteria, steps).
  5. “Just-in-case” schema → Only if there’s visible content backing it + policies.
  6. Borderline tactics (footer keyword dumps, forced anchors, uncontrolled scaling) → Review spam policies and simplify.
  7. Cannibalization due to poorly defined clusters → One intent = one primary URL.
  8. Confusing SEO with “being cited” → Work AEO + GEO: extractable answer + verifiable evidence.

We tried scaling 30 pieces with the same prompt and the output started to “sound the same”: repeated patterns, soft definitions, little evidence.
Fix: we added a fixed “boundaries and exceptions” block + a source checklist + one real example per section.
Change: the content became more differentiated and review stopped being a battle against déjà vu.

— Pablo López, Tacmind

FAQ about ChatGPT prompts for SEO

Do I need to write prompts in Spanish or English?

In the language you’re producing content in. If your site is EN, prioritize EN to avoid awkward nuance. What matters isn’t the language—it’s context and verification.

Can ChatGPT replace a Search Console or log analysis?

No. It can help interpret them, but you need the real data (and then validate decisions with tools).

How do I prevent the model from suggesting “SEO spam”?

Include explicit rules (“no manipulative tactics”) and add a policy review block.

What makes content more “citable” in AI experiences?

Clear definitions, criteria, steps, boundaries, examples, and verifiable evidence around them (AEO + GEO).

Can I control how AI bots access my site?

In some cases, yes: you can manage crawlers via robots.txt and specific directives per user-agent (for example, OpenAI’s crawlers).
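For example, OpenAI documents the GPTBot user-agent for its crawler. A robots.txt rule like the following would disallow it while leaving other crawlers unaffected (check the crawler's current documentation before relying on any user-agent string, as names and behavior change):

```
User-agent: GPTBot
Disallow: /
```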



Ready to get recommended in AI answers?

Track mentions and competitors—then follow a clear action plan to improve recommendations.