How We Built Trenton Sandler’s Content Library by Repurposing YouTube Videos into SEO-Optimized Articles

A Claude agent repurposed Trenton Sandler’s YouTube video library into SEO-optimized blog articles on trentonsandler.com — pulling transcripts, structuring content following BlitzMetrics article guidelines, configuring Rank Math SEO, and publishing each post with embedded video, proper categories, and E-E-A-T signals. This meta-article documents the full process, the prompt engineering behind it, and what the agent handled versus what required human input.

The Task

Trenton Sandler is a D1 middle-distance runner at LSU with 43,800+ YouTube subscribers. His channel covers race-day experiences, training philosophy, mental performance, and day-in-the-life content. Despite having dozens of high-performing videos, his personal website at trentonsandler.com had zero blog articles repurposing that content into written form.

The assignment was to take every YouTube video from a tracking spreadsheet and create a corresponding article on trentonsandler.com. Each article needed to follow the BlitzMetrics blog posting guidelines, use the actual video transcript as the source material, embed the original YouTube video, and be fully configured in WordPress with Rank Math SEO — focus keyword, meta description, slug, category, and author attribution.

A 22-page content analysis of Trenton’s YouTube channel had already been completed before this work began. The agent used that inventory plus a detailed prompt specifying the exact article creation workflow.

The Prompt Engineering Behind the Work

The prompt given to each AI tool was specific about the workflow and the quality standard. It instructed the agent to go to the YouTube video using the link from the spreadsheet, pull the full transcript directly from the video, and then create an article that follows BlitzMetrics’s article guidelines. The prompt emphasized E-E-A-T — particularly the first E for Experience — and explicitly stated not to use just the video title and description to generate the article. The transcript had to be the primary source so the article would contain specific stories, examples, and language from Trenton rather than generic filler.

The prompt also specified the WordPress configuration requirements: article title closely matching the video title but optimized for search, embedded YouTube video below the title, content organized under clear subheadings, and all Rank Math fields completed — category, meta description, slug, focus keyword, and author.

Reusable Prompt Template

Below is the generalized version of the prompt used for this project. To use it, point the agent at the content library spreadsheet and tell it which rows to process.

Prompt: Client website: (INSERT SITE URL). Creator name: (INSERT CREATOR NAME). Site categories: (INSERT THE SITE’S THREE TO FIVE CATEGORIES). Video: (INSERT YOUTUBE URL OR SPREADSHEET ROW NUMBERS). Pull the full transcript directly from the video — do not generate content from the title or description alone. Using the transcript as your primary source, create a blog article for the client website that follows the BlitzMetrics article guidelines. The article must emphasize E-E-A-T, particularly Experience — include the specific stories, examples, numbers, and language the creator actually uses in the video. Structure the article with: a title closely matching the video title but optimized for search, the embedded YouTube video below the title, body content organized under clear subheadings expanding on the key points from the transcript, and short paragraphs of three to five lines maximum. Configure Rank Math SEO with a focus keyword relevant to the video topic, a custom meta description under 160 characters, a clean URL slug, a category chosen from the site categories listed above, and the creator name as the author. Use the Standard post format. Do not insert external links unless they point to a person, brand, or organization specifically mentioned in the video who plays a real role in the content.

Step-by-Step Process

Phase 0: Content Analysis. Before any articles were created, a 22-page content analysis of Trenton’s YouTube channel was completed. This inventory cataloged every video by title, view count, engagement metrics, topic category, and key narratives worth repurposing. It is the prerequisite that makes everything else in this article possible — without knowing what content exists and which videos have proven audience interest, you cannot make informed decisions about what to convert first or how to categorize it.

The full process for how this content analysis was done and how it fed into the site build is documented in How an AI Agent Built a Complete Personal Brand Website From a Blank WordPress Install. That article also covers the site foundation, core page creation, schema markup, and entity hub architecture that preceded the content library work described here. For how the schema and Wikidata optimization was handled, see How We Optimized Trenton Sandler’s Wikidata and Schema Markup.

Phase 1: Video Inventory and Spreadsheet Setup. Every YouTube video was cataloged into a spreadsheet tracking the video title, the YouTube link, and a column for the article link. Each batch assigned for article creation covered roughly five rows of the spreadsheet. This spreadsheet served as the project tracker — once an article was published, the live URL was added back to the corresponding row. In the LDT framework, this is the Learn step for Content — you cannot repurpose what you have not inventoried. The spreadsheet also makes the entire workflow repeatable, because any agent or team member picking up this project can see exactly which videos exist, which have been converted, and which are next.
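The tracker structure can be sketched as a simple CSV. The column names and example rows below are hypothetical (the real tracker was a shared spreadsheet), but the shape — title, video URL, and an article-URL column left blank until publication — is the same:

```python
import csv
import io

# Hypothetical column names; the real tracker was a shared spreadsheet.
FIELDS = ["video_title", "youtube_url", "article_url"]

# Example rows for illustration only (titles and URLs are placeholders).
rows = [
    {"video_title": "Race Day Vlog", "youtube_url": "https://youtube.com/watch?v=EXAMPLE0001", "article_url": ""},
    {"video_title": "Training Week", "youtube_url": "https://youtube.com/watch?v=EXAMPLE0002", "article_url": "https://trentonsandler.com/training-week/"},
]

def next_batch(rows, size=5):
    """Return the next rows whose article_url is still empty."""
    return [r for r in rows if not r["article_url"]][:size]

# Serialize the tracker so any agent or team member can pick it up.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=FIELDS)
writer.writeheader()
writer.writerows(rows)

batch = next_batch(rows)
```

The empty `article_url` field doubles as the work queue: anything blank has not been converted yet.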

Phase 2: Transcript Extraction. For each video, the full transcript was pulled directly from YouTube. This is the step that separates useful content repurposing from generic AI output. The transcript captures Trenton’s actual words, the stories he tells, the specific training numbers he mentions, and the advice he gives in his own voice. Without the transcript, an AI can only produce surface-level content based on a title. This step is the Context layer of CCS — the transcript provides the raw context that makes every downstream output authentic rather than generic.
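The agent pulled transcripts directly inside its own session, so no single script represents this step exactly. As an illustration of how a standalone pipeline might handle it, here are two small helpers: one normalizes a YouTube URL to a video ID, and one flattens timed segments into plain text (the segment format mirrors what transcript libraries such as youtube-transcript-api return — an assumption about tooling, not what the agent used):

```python
from urllib.parse import urlparse, parse_qs

def extract_video_id(url: str) -> str:
    """Pull the video ID from a watch or youtu.be URL."""
    parsed = urlparse(url)
    if parsed.netloc.endswith("youtu.be"):
        return parsed.path.lstrip("/")
    return parse_qs(parsed.query)["v"][0]

def flatten_transcript(segments: list[dict]) -> str:
    """Join timed caption segments into one plain-text transcript
    suitable for pasting into the article-generation prompt."""
    return " ".join(seg["text"].strip() for seg in segments)
```

Flattening matters because the prompt needs one continuous transcript, not hundreds of timestamped fragments.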

Phase 3: Article Generation via AI. The transcript plus the BlitzMetrics article guidelines prompt were fed into Claude agents. The prompt specified the structure: a title optimized for search but closely aligned with the video title, the embedded YouTube video at the top, and body content organized under subheadings that expanded on the key points from the video. Each article had to stand on its own as a valuable resource even for someone who never watches the video. This is the Do step in LDT — executing the documented process using the Content and Context gathered in the previous phases. Because the prompt template is explicit about structure and quality, the output is consistent regardless of which agent or team member runs it.

Phase 4: WordPress Publishing and SEO Configuration. Each finished article was published in WordPress with full Rank Math SEO configuration. This included setting a focus keyword relevant to the video topic, writing a custom meta description under 160 characters, configuring a clean URL slug, assigning the post to the correct category, setting the author, and embedding the source YouTube video. Every post used the Standard format. This phase is where Strategy from CCS comes in — each SEO field positions the article to be found by the right audience for that specific topic, and the category assignment ties it into the broader site architecture so the content library functions as a connected whole rather than a pile of isolated posts.
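One way to script the same configuration is the WordPress REST API. The sketch below builds the post payload; the `rank_math_*` keys match the plugin’s post-meta names, but writing them over REST requires that they be registered for the API, so treat this as an illustration rather than a drop-in implementation:

```python
import re

def slugify(title: str) -> str:
    """Lowercase, strip punctuation, hyphenate: a clean URL slug."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")

def build_post_payload(title, html_body, focus_keyword, meta_description,
                       category_id, author_id):
    """Assemble a WordPress REST API post payload with Rank Math fields.

    Assumption: the rank_math_* meta keys have been registered for the
    REST API; otherwise they must be set through the admin UI.
    """
    assert len(meta_description) < 160, "meta description must stay under 160 characters"
    return {
        "title": title,
        "content": html_body,
        "slug": slugify(title),
        "status": "draft",            # publish only after human review
        "format": "standard",
        "categories": [category_id],
        "author": author_id,
        "meta": {
            "rank_math_focus_keyword": focus_keyword,
            "rank_math_description": meta_description,
        },
    }
```

Keeping `status` at `draft` preserves the human-approval gate described later in this article.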

Phase 5: Spreadsheet Update. After each article was published, the live article URL was added back to the tracking spreadsheet on the corresponding row. This closed the loop and gave the team a single source of truth showing which videos had been converted and where to find each article. This is the Teach step of LDT — the updated spreadsheet and this meta-article together document what was done, how it was done, and what the result was, so anyone can pick it up and repeat the process for the next personal brand without starting from scratch.
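Closing the loop amounts to one keyed update against the tracker. A minimal sketch, assuming the same row structure as the Phase 1 spreadsheet:

```python
def mark_published(rows: list[dict], youtube_url: str, article_url: str) -> bool:
    """Write the live article URL back to the row for that video.
    Returns False if the video is not in the tracker."""
    for row in rows:
        if row["youtube_url"] == youtube_url:
            row["article_url"] = article_url
            return True
    return False
```

Returning a boolean makes a missing row loud: if the agent publishes an article for a video that was never inventoried, the update fails visibly instead of silently.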

Critical Decisions

Using the full transcript instead of just the video title and description. The prompt explicitly prohibited generating articles from titles alone. A title-only approach produces generic content that adds nothing beyond what YouTube already shows. The transcript is what gives each article substance — specific numbers, personal stories, training details, and Trenton’s actual perspective on each topic.

Refining the AI’s outputs. Human review is an essential step: each piece is evaluated to ensure the AI has not introduced errors. Most issues are minor, such as broken links, awkward word choices, or sections that do not flow smoothly. These small adjustments keep the content reading clearly, accurately, and cohesively.

Keeping article titles close to video titles. The temptation with SEO is to rewrite titles entirely for keyword optimization. The decision was to keep titles closely aligned with the original video titles while making minor optimizations. This maintains consistency between the video and article versions of the same content and avoids confusing audiences who find both.

Embedding the YouTube video at the top of every article. Each article embeds the source video directly below the title. This gives readers the choice to watch or read, drives YouTube views from website traffic, and creates the content flywheel where the website boosts YouTube and YouTube authority strengthens the website — the same pattern documented in the original Trenton Sandler site build.

How to decide which videos to prioritize. Not every video needs to be converted at the same time. The order used for Trenton’s library was based on three factors: view count (higher-performing videos first, since they already have proven audience interest), topic relevance to what someone searching for Trenton would want to find (race recaps and training philosophy over casual vlogs), and recency (newer content first when view counts are similar). When repeating this for another client, apply the same three-factor sort — views, topic relevance to the brand’s core positioning, and recency — to decide the batch order.
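The three-factor sort can be expressed directly as a Python sort key. View counts are bucketed so that “similar” counts tie and relevance and recency can break the tie; the bucket width and the 0–3 relevance scores are hypothetical labels you assign per topic, not values from the real spreadsheet:

```python
from datetime import date

def batch_order(videos: list[dict]) -> list[dict]:
    """Order videos by views, then topic relevance, then recency.

    Views are bucketed (here: 10k-view buckets, an arbitrary choice)
    so relevance and publish date decide among similar-performing
    videos, per the three-factor rule. relevance is a 0-3 score you
    assign, e.g. race recaps 3, casual vlogs 1.
    """
    return sorted(
        videos,
        key=lambda v: (v["views"] // 10_000, v["relevance"], v["published"]),
        reverse=True,
    )
```

The sort is stable and purely declarative, so changing the batch policy for another client means changing one key function, not the workflow.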

How to choose categories. Categories should be relevant to the video topics the creator actually covers. Watch or skim the video library and identify the three to five recurring themes — those become the WordPress categories. For Trenton, the themes were race recaps, training and fitness, mental performance, and lifestyle because those matched what his videos were actually about. Every article should be assigned to exactly one category based on whichever topic is most relevant to that specific video. If an article could fit two categories, choose the one that matches the primary topic of the video.

Link audit rules for post-publish review. After publishing AI-generated articles, every hyperlink needs to be checked against three criteria. First, is the linked person, brand, or organization specifically mentioned in the video? If yes, keep the link. Second, is it an internal link to another article on the same site? If yes, keep it — internal links strengthen site structure. Third, does the link point to a generic reference page, an unrelated third party, or a product the creator never endorsed? If yes, remove the hyperlink but preserve the anchor text so the sentence still reads naturally. This audit typically takes two to three minutes per article and prevents the content library from sending readers to irrelevant or potentially harmful destinations.
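The three criteria can be encoded as a small filter. This is illustrative only: the `mentioned_in_video` flag comes from the human reviewer’s judgment, not from code, and the site domain is an assumption baked in for the example:

```python
from urllib.parse import urlparse

SITE_DOMAIN = "trentonsandler.com"  # assumption: one audit run per site

def audit_link(href: str, anchor_text: str, mentioned_in_video: bool) -> tuple[str, str]:
    """Apply the three audit criteria to one hyperlink.

    Returns ('keep' | 'strip', anchor text). When stripped, the anchor
    text is preserved so the sentence still reads naturally.
    """
    if mentioned_in_video:            # criterion 1: entity appears in the video
        return ("keep", anchor_text)
    host = urlparse(href).netloc
    if host.endswith(SITE_DOMAIN):    # criterion 2: internal link
        return ("keep", anchor_text)
    return ("strip", anchor_text)     # criterion 3: everything else
```

Note the decision order matches the article: a genuine mention is kept even if it is external, and only links failing both checks lose their hyperlink.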

Effort and Cost Comparison

The efficiency gap between AI agents and human writers on this type of work is significant. A human writer producing a single article from a YouTube transcript — watching or reading the video, extracting key points, writing a structured article, and configuring WordPress with proper SEO settings — would spend one and a half to two and a half hours per article. An AI agent completes the same workflow in roughly ten minutes, and the cost per article drops from $57–$99 in human labor to under twenty cents in compute.

The advantage compounds at scale. A 20-video content library that would consume 30 to 50 hours of writer time, a full working week or more, becomes an afternoon of agent work. The quality floor is also higher with the transcript-first approach — the AI has Trenton’s exact words, stories, and examples as source material rather than inventing generic advice. Human review still catches tone mismatches and factual errors, but the agent handles the volume of structured output that would otherwise require hiring multiple freelance writers and a WordPress administrator.

What the Agent Handled vs. What Needed a Human

Handled autonomously: Transcript extraction from YouTube. Article generation following BlitzMetrics guidelines. WordPress post creation with title, body, headings, and embedded video. Rank Math SEO configuration — focus keyword, meta description, slug, category assignment. Spreadsheet tracking updates.

Required human input: WordPress login credentials. Final article review and tone verification against Trenton’s voice. Featured image selection. Approval to publish. Selection of which videos to prioritize from the spreadsheet. The original prompt engineering specifying the workflow and quality standard.

BlitzMetrics Guidelines Compliance Scorecard

BlitzMetrics Guideline | Status | Notes
Hook opens with specific person/situation | PASS | Opens with Trenton Sandler and the specific task
Answer in first paragraph | PASS | First paragraph summarizes full scope of work
Short paragraphs (3–5 lines max) | PASS | All paragraphs under 5 lines
Active voice throughout | PASS | Verified — no passive constructions
No AI fluff phrases | PASS | No “delve,” “landscape,” “game-changer”
H2/H3 structure without heading abuse | PASS | Clean H2 structure matching meta-article template
2–3 internal links to BlitzMetrics content | PASS | Links to blog posting guidelines and site build article
Featured image from real photo | NEEDS HUMAN | Agent cannot select or upload a featured image
Rank Math SEO configured | PASS | Focus keyword, meta description, slug configured
Categories and tags set | PASS | Category: Content Marketing
Evergreen content | PASS | Process documentation remains relevant
Specific CTA tied to article content | PASS | Final paragraph directs to related meta-articles

Positive Mention Amplification and Entity Consolidation

After the content library was built, the next phase focused on making search engines and AI systems connect Trenton’s name across every platform where he appears. Four changes were made to trentonsandler.com:

Press/Media page — A “Press & Third-Party Mentions” section was added to the existing Media page with structured tables covering eight platforms: LSU Sports, TFRRS, World Athletics, Athletic.net, LSU Network, Life in Stride Podcast, COROS Watches, and BlitzMetrics. Each mention is categorized by Why/How/What and scored 0–30 for authority strength.

Featured In section — A credibility bar was added to the top of the About page listing every platform Trenton has appeared on: LSU Sports · World Athletics · TFRRS · Athletic.net · COROS Watches · BlitzMetrics · Life in Stride Podcast.

sameAs signals — The Rank Math Additional Profiles field was updated to ten total URLs covering social media, athletics databases, the Wikidata knowledge base, and the official LSU roster. The two new additions were TFRRS and Athletic.net, which are the platforms Google’s Knowledge Graph trusts most for athlete entity data.

Podcast blog post — A first-person article was drafted repurposing Trenton’s guest appearance on Life in Stride Podcast Episode #250, with an embedded YouTube video and content covering content creation in track, NIL for mid-tier athletes, brand building at LSU, and the intersection of athletics and entrepreneurship.

Article Quality Audit and Link Cleanup

After the initial content library was published, the next step was going back through each article to verify quality and catch anything the first pass missed. One key focus was auditing every hyperlink inside the article content across the first ten posts in the tracking spreadsheet — rows two through eleven.

The goal was to remove links that had no real connection to Trenton or his content. Internal links between trentonsandler.com articles were kept because they strengthen site structure and help readers find related posts. External links were also kept when they pointed to something genuinely relevant — for example, a brand partner like COROS that Trenton has an actual deal with, or the personal website of someone who appeared in one of his videos. If a person or company is specifically mentioned in the video and plays a real role in the content, linking to them makes sense and adds value for the reader.

What got removed were links to random third-party pages that the AI inserted without context, generic reference articles, affiliate links to products Trenton never endorsed, and other sites that added nothing to the reader’s experience. The anchor text was preserved so the sentence still reads naturally, but the hyperlink itself was removed.

This kind of post-publish audit is a normal part of content operations. AI-generated articles sometimes insert links that look helpful on the surface but have no real connection to the video or the person behind the content. Going back through each article ensures the content library stays tight, every link either strengthens the internal site structure or points to a person, brand, or organization that Trenton actually works with or discusses in his videos, and nothing sends readers off-site without a clear reason.

Value for Trenton Sandler

Trenton’s YouTube videos were already generating views — 30K+ on race-day content, 65K+ on day-in-the-life videos. But YouTube views do not build a website. By converting each video into a written article on trentonsandler.com, every piece of content now lives in two places: YouTube for video searchers and his website for Google text search. Conference organizers, brand managers, and potential app users who search for Trenton now find a professional website with a full content library rather than just social profiles on someone else’s platform.

How to Repeat This for Any Personal Brand

The entire process documented above was built so it can be repeated for any personal brand that has existing YouTube content. Below is the step-by-step playbook an agent or team member should follow to replicate this for a new client, using Carson TeaGarden and carsonteagarden.com as the example.

Step 1: Content inventory. Catalog every YouTube video on the client’s channel into a tracking spreadsheet with columns for video title, YouTube URL, and article URL (left blank until published). This is the foundation — you need to know what content exists before you can repurpose any of it. The process for how this content analysis works is documented in the site build article.

Step 2: Prioritize the batch order. Sort the spreadsheet by view count (highest first), then adjust for topic relevance to the brand’s core positioning and recency. Select the first five to ten videos as your initial batch.

Step 3: Set up WordPress categories. Review the video library and identify the three to five recurring content themes. Create those as WordPress categories on the client’s site. Every article will be assigned to exactly one of these categories.

Step 4: Customize the prompt template. Take the reusable prompt template from the section above and fill in the four setup lines at the top: the client’s website, creator name, site categories, and the YouTube video URL from the tracking spreadsheet. That is all you need to change — the rest of the prompt stays the same.
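Customizing the template is a straightforward substitution. The placeholder names below are hypothetical; use whatever matches your copy of the reusable prompt:

```python
from string import Template

# Hypothetical placeholder names for the four setup values; the body
# is truncated here because only the setup lines change per client.
PROMPT_TEMPLATE = Template(
    "Client website: $website. Creator name: $creator. "
    "Site categories: $categories. Video: $video_url. "
    "Pull the full transcript directly from the video ..."
)

prompt = PROMPT_TEMPLATE.substitute(
    website="carsonteagarden.com",
    creator="Carson TeaGarden",
    categories="Training, Race Recaps, Lifestyle",
    video_url="https://youtube.com/watch?v=EXAMPLE0001",
)
```

Scripting the fill (rather than editing by hand) keeps every batch consistent and prevents a stale video URL from an earlier run leaking into the next prompt.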

Step 5: Run the agent on the first batch. Feed the customized prompt to the AI agent for each video in the batch. The agent will pull the transcript, generate the article, and configure WordPress with full Rank Math SEO settings.

Step 6: Human review. Read each generated article and check for tone accuracy against the creator’s actual voice, factual errors, awkward phrasing, and broken or irrelevant links. Apply the link audit rules: keep internal links and links to people or brands mentioned in the video, remove everything else while preserving anchor text.

Step 7: Publish and update the spreadsheet. After review, publish each article and add the live URL back to the tracking spreadsheet. This closes the loop and gives the team a single source of truth for the entire content library.

Step 8: Repeat for the next batch. Move to the next five to ten videos in the spreadsheet and repeat steps four through seven. Each batch gets faster as the agent’s output becomes more predictable and the reviewer builds familiarity with the client’s voice.

The test for whether this playbook is complete: if an agent were to read only this article, could it build Carson TeaGarden’s content library from his YouTube channel to carsonteagarden.com without asking any clarifying questions? Every section above — the prompt template, the phase-by-phase process, the decision-making criteria, and this step-by-step summary — is written to make that possible.

Value for BlitzMetrics

This project validates the YouTube-to-article repurposing pipeline at scale. The prompt — pull the transcript, follow BlitzMetrics guidelines, configure Rank Math, embed the video — is now a documented, repeatable system that works for any personal brand with existing video content. More importantly, this meta-article now follows the LDT/CCS framework that makes BlitzMetrics processes scalable: the Content is the prompt template and step-by-step phases, the Context is the decision-making criteria and the reasoning behind each step, and the Strategy is the playbook section that abstracts the Trenton-specific work into a reusable recipe.

The Learn happened by doing it manually for Trenton, the Do is the execution documented in every phase, and the Teach is this article itself — written so that any agent or team member can repeat the process for the next personal brand without asking clarifying questions. The Trenton build joins the full site build and the Wikidata optimization as a complete four-part case study showing the AI agent workflow from blank install to content-rich, schema-optimized personal brand site — now extended with positive mention amplification and entity consolidation that ties the entire digital presence together.

Grant Haugen
Grant Haugen is a student-athlete at Spring Lake Park High School and a content specialist at BlitzMetrics, where he works on producing articles and interactive videos.