A Definitive Guide for Agencies That Want Automated, Transparent Client QA
Most agencies run SEO audits manually, quarterly at best. Things slip through the cracks: a homepage meta description still referencing services the client dropped months ago, landing pages that should be noindexed but are not, schema markup with a typo in the Instagram URL, neighborhood pages still advertising old services. These are not edge cases. They are the norm at agencies that rely on human memory and quarterly review cycles.
This article documents the exact system we use to run automated weekly SEO audits across every active client, using an AI agent, a Google Sheet, and a project management thread. The agent runs every Friday at 9 AM PST, audits each client website against a standardized checklist, and posts findings directly into the client’s Basecamp project thread so the team can act immediately.
We are publishing this process publicly because the goal is not competitive advantage through secrecy. The goal is a standard for automated client QA that proves the MAA loop (Metrics, Analysis, Action) works at scale, one that any agency can replicate.
What You Need Before You Start
To replicate this system, you need four components: an active client Google Sheet, an AI agent with web browsing capability, a scheduling mechanism, and a project management tool where findings get posted.
The Google Sheet is the run list. Ours has four columns: a row number, the client name, a link to the client’s Basecamp project, and the client’s website URL(s). Some clients have multiple websites, and the agent audits each one. The sheet currently has 29 active clients, ranging from local service businesses like ARDMOR Windows and Doors and Plumbing Pros PA to SaaS companies like Kartra and WebinarJam to personal brands like Ross Franklin and Richard Canfield. The variety matters because the audit criteria must work across verticals, not just for one type of business.
The AI agent is Claude Opus 4.6, configured with web browsing capabilities so it can crawl live websites and analyze what it finds against our audit checklist. The agent needs to be able to read HTML source, check meta tags, validate schema markup, verify robots directives, and assess content relevance. Any capable LLM with browsing tools can do this, but the critical thing is that the agent must be able to access and parse live web pages, not cached versions or screenshots.
The scheduling mechanism triggers the agent every Friday at 9 AM PST. This can be a cron job, a Zapier or Make automation, or any scheduling tool that can invoke the agent on a recurring basis. Consistency matters more than the specific tool.
The project management delivery endpoint is where audit findings get posted. We use Basecamp, but this could be Slack, Asana, Monday, or any tool with an API that allows posting comments to specific projects or threads. The key is that findings are posted directly into the client’s existing project thread, not sent as a separate email or dropped into a shared drive. The team needs to see findings in context alongside the work they are already doing for that client.
The Complete SEO Audit Checklist
The agent crawls each client’s live website and checks the following elements. This is the standardized checklist that every client is measured against, every week. Publishing these criteria openly means everyone, including clients and the public, knows exactly what is being measured and can work toward passing.
Title tags: The agent checks every indexable page for title tag presence, length (under 60 characters to avoid truncation), keyword relevance, and uniqueness across pages. Duplicate title tags across different pages are flagged because they signal sloppy site management and confuse search engines about which page to rank for a given query.
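The title checks reduce to simple rules once the titles have been extracted. Here is a minimal sketch of that logic, assuming titles have already been pulled from each page (the function name, the 60-character threshold, and the sample pages are illustrative, not the production implementation):

```python
def check_titles(titles_by_url, max_len=60):
    """Return (url, issue) findings: missing, over-length, or duplicate titles."""
    findings = []
    seen = {}  # title text -> first URL it appeared on
    for url, title in titles_by_url.items():
        if not title or not title.strip():
            findings.append((url, "missing title tag"))
            continue
        if len(title) > max_len:
            findings.append((url, f"title exceeds {max_len} characters ({len(title)})"))
        if title in seen:
            findings.append((url, f"duplicate title (also on {seen[title]})"))
        else:
            seen[title] = url
    return findings

# Hypothetical sample: one duplicate, one missing title
pages = {
    "/": "Plumbing Services | Example Co",
    "/about": "Plumbing Services | Example Co",
    "/blog": "",
}
issues = check_titles(pages)
```

Pass/fail plus a specific explanation is the output shape every check in the list below produces.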
Meta descriptions: Each page is checked for meta description presence, length, relevance to the page content, and whether the description still accurately reflects current services. A meta description that references services the client no longer offers is a common and damaging issue.
H1 structure: The agent verifies that every page has exactly one H1 tag, that the H1 is relevant to the page content, and that H1 tags are not duplicated across pages. Multiple H1 tags on a single page or missing H1 tags are flagged.
Robots directives: The agent checks robots.txt configuration and per-page robots meta tags. It flags pages that should be indexed but are marked noindex, and pages that should be noindexed but are not.
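The interesting part of this check is the mismatch between intent and reality: a page's robots directives versus whether it is supposed to be indexed. A sketch of that comparison, with illustrative function names (the intended-indexation flag would come from the client's documentation or run list):

```python
def parse_robots_meta(content):
    """Normalize a robots meta content string into a set of directives."""
    return {d.strip().lower() for d in (content or "").split(",") if d.strip()}

def flag_indexation(url, robots_content, should_index):
    """Return a finding string when directive and intent disagree, else None."""
    is_noindexed = "noindex" in parse_robots_meta(robots_content)
    if should_index and is_noindexed:
        return f"{url}: marked noindex but should be indexed"
    if not should_index and not is_noindexed:
        return f"{url}: indexable but should be noindexed"
    return None
```

Both failure directions are flagged, matching the two cases described above.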
Schema markup: The agent validates structured data present on the site, checking for proper JSON-LD formatting, accurate business information (name, address, phone, social profiles), and common errors like typos in URLs.
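A minimal sketch of the schema validation step. The required fields, the known social domains, and the sample snippet are assumptions chosen to illustrate the checks, including catching the kind of social-URL typo described later in the Zooby example:

```python
import json

REQUIRED_FIELDS = ("name", "address", "telephone")  # assumed local-business profile
KNOWN_SOCIAL_DOMAINS = ("instagram.com", "facebook.com", "linkedin.com")

def validate_json_ld(raw):
    """Check a JSON-LD block for parse errors, missing fields, and URL typos."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as exc:
        return [f"invalid JSON-LD: {exc}"]
    findings = [f"missing field: {f}" for f in REQUIRED_FIELDS if f not in data]
    for url in data.get("sameAs", []):
        if not any(d in url for d in KNOWN_SOCIAL_DOMAINS):
            findings.append(f"unrecognized social URL (possible typo): {url}")
    return findings

# Hypothetical snippet with a misspelled Instagram domain
snippet = """{"@type": "LocalBusiness", "name": "Zooby",
  "address": "123 Main St", "telephone": "+1-555-0100",
  "sameAs": ["https://www.instagarm.com/zooby"]}"""
issues = validate_json_ld(snippet)
```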
Image alt text: Every image on audited pages is checked for alt text presence and relevance. Missing alt text is an accessibility violation and a missed SEO opportunity.
Sitemap configuration: The agent checks for the existence and accessibility of XML sitemaps, verifies that important pages are included, and flags pages in the sitemap that return non-200 status codes.
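The sitemap check is two steps: parse the XML, then verify each listed URL responds with a 200. A sketch using only the standard library; in the live run the statuses come from HTTP requests, but here they are passed in so the logic is shown offline (the sample URLs are illustrative):

```python
import xml.etree.ElementTree as ET

SITEMAP_NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

def sitemap_urls(xml_text):
    """Extract <loc> entries from a standard XML sitemap."""
    root = ET.fromstring(xml_text)
    return [loc.text.strip() for loc in root.findall("sm:url/sm:loc", SITEMAP_NS)]

def flag_non_200(urls, status_by_url):
    """Flag sitemap entries whose live status code is not 200."""
    return [u for u in urls if status_by_url.get(u) != 200]

sitemap = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://zooby.com/</loc></url>
  <url><loc>https://zooby.com/old-service</loc></url>
</urlset>"""
urls = sitemap_urls(sitemap)
broken = flag_non_200(urls, {"https://zooby.com/": 200,
                             "https://zooby.com/old-service": 410})
```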
Internal linking: The agent analyzes the internal link structure of the site, checking for orphan pages, broken internal links, and excessively deep page hierarchies. Important pages should be reachable within three clicks from the homepage.
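Click depth and orphan detection fall out of a breadth-first walk over the internal link graph. A sketch, assuming the per-page link lists have already been crawled (the sample site structure is hypothetical):

```python
from collections import deque

def click_depths(links, start="/"):
    """BFS the internal link graph; returns {url: clicks from the homepage}."""
    depths = {start: 0}
    queue = deque([start])
    while queue:
        page = queue.popleft()
        for target in links.get(page, []):
            if target not in depths:
                depths[target] = depths[page] + 1
                queue.append(target)
    return depths

links = {
    "/": ["/services", "/about"],
    "/services": ["/services/plumbing"],
    "/services/plumbing": ["/services/plumbing/drains"],
}
all_pages = set(links) | {t for ts in links.values() for t in ts} | {"/old-promo"}
depths = click_depths(links)
orphans = sorted(all_pages - set(depths))                 # unreachable from "/"
too_deep = sorted(u for u, d in depths.items() if d > 3)  # beyond three clicks
```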
Landing page indexation status: For clients with dedicated landing pages, the agent checks whether each page is actually indexed and accessible.
Content accuracy: This is the check that catches the most issues. The agent reads page content and flags references to services the client no longer offers, outdated year references, incorrect business hours, old team member names, and neighborhood or location references that no longer apply.
Step-by-Step: How the Agent Runs Each Friday
Here is the exact workflow, broken into concrete steps that another agency can implement.
Step 1: Read the client sheet. At 9 AM PST every Friday, the scheduler triggers the agent. The agent’s first action is to open the active client Google Sheet and read the full list: client name, Basecamp project link, and website URL(s). This is the single source of truth for which clients get audited.
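Reading the run list amounts to parsing the sheet's exported rows. A sketch using CSV export (the column names and the sample Basecamp URLs are assumptions standing in for the real sheet; in the live run the CSV text would be fetched from the sheet's export URL):

```python
import csv
import io

def read_client_sheet(csv_text):
    """Parse the exported client sheet into run-list records.
    Semicolons separate multiple website URLs for one client (assumption)."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [{
        "number": row["number"],
        "name": row["client"],
        "project": row["basecamp_url"],
        "sites": [u.strip() for u in row["websites"].split(";") if u.strip()],
    } for row in reader]

sample = """number,client,basecamp_url,websites
1,Zooby,https://example.basecamp.com/projects/1,https://zooby.com
2,Kartra,https://example.basecamp.com/projects/2,https://kartra.com;https://webinarjam.com"""
clients = read_client_sheet(sample)
```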
Step 2: For each client, crawl the live website. The agent navigates to the client’s website and crawls the homepage plus key interior pages. It reads the raw HTML to extract title tags, meta descriptions, H1 tags, robots meta directives, schema markup, image alt attributes, and internal link structure.
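The extraction in Step 2 can be sketched with the standard library's HTML parser; this collects the elements the checklist needs from one page's raw HTML (the sample markup is illustrative):

```python
from html.parser import HTMLParser

class AuditParser(HTMLParser):
    """Collects title, H1s, meta tags, and alt-less images from raw HTML."""
    def __init__(self):
        super().__init__()
        self.title = ""
        self.h1s = []
        self.meta = {}              # meta name -> content (description, robots, ...)
        self.images_missing_alt = 0
        self._in = None

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag in ("title", "h1"):
            self._in = tag
            if tag == "h1":
                self.h1s.append("")
        elif tag == "meta" and "name" in a:
            self.meta[a["name"].lower()] = a.get("content", "")
        elif tag == "img" and not a.get("alt"):
            self.images_missing_alt += 1

    def handle_endtag(self, tag):
        if tag == self._in:
            self._in = None

    def handle_data(self, data):
        if self._in == "title":
            self.title += data
        elif self._in == "h1":
            self.h1s[-1] += data

html = """<html><head><title>Example Co</title>
<meta name="description" content="Plumbing services in PA">
<meta name="robots" content="index, follow"></head>
<body><h1>Welcome</h1><img src="a.png"></body></html>"""
p = AuditParser()
p.feed(html)
```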
Step 3: Run the checklist. The agent evaluates what it found against the standardized audit criteria. Each check produces a pass or fail result with a specific explanation.
Step 4: Compare against last week’s audit. Items that were flagged last week and fixed this week are noted as resolved. Items that were flagged last week and are still broken are escalated to Priority 1 status.
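Step 4 is a set comparison between two audit runs. A sketch, treating each finding as a stable identifier string (the sample findings are hypothetical):

```python
def diff_audits(last_week, this_week):
    """Compare two audits: fixed items, still-broken items, and new items."""
    resolved = last_week - this_week      # flagged last week, gone this week
    escalated = last_week & this_week     # still broken -> Priority 1
    new = this_week - last_week           # first seen this week
    return resolved, escalated, new

last = {"home: stale meta description", "lp-3: not noindexed"}
now = {"lp-3: not noindexed", "schema: bad Instagram URL"}
resolved, escalated, new = diff_audits(last, now)
```

This only works if findings are phrased consistently from week to week, which is one more reason to keep the checklist standardized.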
Step 5: Post findings to Basecamp. The agent compiles the audit results into a structured comment and posts it directly in the client’s Basecamp project thread.
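A sketch of how the structured comment might be rendered before posting; the Basecamp API call itself is omitted, and the section names and sample findings are illustrative:

```python
def format_audit_comment(client, resolved, escalated, new_issues):
    """Render audit results as the comment body posted to the project thread."""
    lines = [f"Weekly SEO Audit: {client}"]
    sections = [
        ("Resolved since last week", resolved),
        ("Still broken (escalated to Priority 1)", escalated),
        ("New this week", new_issues),
    ]
    for heading, items in sections:
        lines.append(f"\n{heading}:")
        if items:
            lines.extend(f"- {i}" for i in sorted(items))
        else:
            lines.append("- none")
    return "\n".join(lines)

comment = format_audit_comment(
    "Zooby",
    resolved=["homepage meta description updated"],
    escalated=["lp-3 still indexable"],
    new_issues=[],
)
```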
Step 6: Move to the next client. The agent repeats Steps 2 through 5 for every client in the sheet. A full run across 29 clients typically takes 30 to 60 minutes.
The MAA Loop: Metrics, Analysis, Action
The weekly SEO audit is a practical implementation of the MAA loop, the framework that separates agencies that measure from agencies that actually improve.
Metrics is what the agent produces: a factual, objective assessment of what is on the live website right now. These are not opinions or recommendations. They are measurements.
Analysis is what the team does with those metrics. Not every issue has the same priority. The team reviews the audit findings and prioritizes based on SERP impact, client sensitivity, and fix difficulty.
Action is the fix. Someone on the team makes the change. The action should happen within the week, before the next Friday audit.
The loop closes when the agent runs again the following Friday and checks whether the fix took effect. This weekly cycle creates accountability without micromanagement. The agent does the measuring. The team does the fixing. The next audit does the verification.
How Your Team Should Respond to Audit Findings
When your team sees an audit comment posted by the agent in a Basecamp thread, treat it like a QA report. The agent identifies what is broken. Your team fixes it. Here is the triage protocol.
Priority 1 items are issues flagged in a previous audit that remain unfixed. These are red flags because they mean either the fix was never deployed or it was overwritten. Priority 1 items get addressed first, ideally the same day.
Priority 2 items are new issues with direct SERP impact: incorrect meta descriptions, missing H1 tags, noindex directives on pages that should be indexed, broken schema markup. These should be resolved within two business days.
Priority 3 items are new issues with indirect or long-term impact: missing image alt text, suboptimal internal linking, content accuracy issues on low-traffic pages. These should be resolved before the next audit.
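The triage protocol above reduces to a small decision function. A sketch, with illustrative parameter names (in practice the SERP-impact classification would come from the finding type):

```python
def triage(weeks_previously_flagged, serp_impact):
    """Map a finding to its triage priority.
    weeks_previously_flagged > 0 means it appeared in an earlier audit."""
    if weeks_previously_flagged > 0:
        return 1   # unfixed repeat: address same day
    if serp_impact == "direct":
        return 2   # new, direct SERP impact: fix within two business days
    return 3       # new, indirect impact: fix before the next audit
```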
What This Looks Like for the Client
Clients see a weekly comment in their project thread that shows exactly what was checked, what passed, and what needs attention. This demonstrates proactive care, creates transparency, builds trust through accountability, and reduces client anxiety about things falling through the cracks.
For agencies that want to go further, you can give clients read access to the audit criteria document itself. When the client understands what is being measured, they become an ally in maintaining quality rather than an adversary questioning whether work is being done.
Real-World Example: The Zooby Audit
The first full audit we ran was on Zooby (zooby.com), a client managed under Jeffrey Eisenberg’s Quickstart project. The agent found 15 issues that were live on the production website, several of which had been there for months without being caught.
The homepage meta description still referenced handyman services, a service category Zooby had stopped offering. Three landing pages were internally documented as noindexed but were actually still indexable. The schema markup contained an Instagram URL with a typo. Multiple neighborhood pages still referenced services Zooby had dropped months ago.
Most of these were 15-minute fixes with immediate SERP impact. The total time to fix all 15 issues was estimated at under three hours. The audit that found them took the agent less than five minutes. This is the leverage of automation: five minutes of agent time uncovering three hours of high-impact work that had been invisible to the human team.
Beyond SEO: This Is One of 18 Scheduled Jobs Every Agency Should Run
The weekly SEO audit is just one scheduled job in a larger system of automated client QA. The same architecture can be applied to other recurring jobs: weekly Google Business Profile audits, weekly social media presence checks, monthly website performance audits, weekly ad account health checks, and monthly content freshness audits.
Each of these follows the same MAA loop: the agent measures, the team analyzes and acts, and the next run verifies. The system is integrated by design through the LDT/CCS framework (Learn, Do, Teach / Content, Community, System), and these SOPs are published as definitive articles so that the process is transparent, teachable, and replicable across any agency.
Getting Started Today
If you want to implement this at your agency, start with these concrete actions. Create your active client Google Sheet with four columns: client number, client name, project management thread, and website URL(s). Choose your AI agent platform. Run your first audit manually on one client to calibrate the checklist, then schedule the recurring Friday run.