
{"id":20799,"date":"2026-01-15T07:00:00","date_gmt":"2026-01-15T12:00:00","guid":{"rendered":"https:\/\/ipullrank.com\/?p=20799"},"modified":"2026-01-26T17:11:07","modified_gmt":"2026-01-26T22:11:07","slug":"misinformation-about-chunking","status":"publish","type":"post","link":"https:\/\/ipullrank.com\/misinformation-about-chunking","title":{"rendered":"Moving from a Google-shaped Web to an Agent-shaped Web: A Refutation of Misinformation about Chunking"},"content":{"rendered":"\t\t<div data-elementor-type=\"wp-post\" data-elementor-id=\"20799\" class=\"elementor elementor-20799\" data-elementor-post-type=\"post\">\n\t\t\t\t<div class=\"elementor-element elementor-element-58ddaaf e-flex e-con-boxed e-con e-parent\" data-id=\"58ddaaf\" data-element_type=\"container\">\n\t\t\t\t\t<div class=\"e-con-inner\">\n\t\t\t\t<div class=\"elementor-element elementor-element-d10f7b5 elementor-widget elementor-widget-spacer\" data-id=\"d10f7b5\" data-element_type=\"widget\" data-widget_type=\"spacer.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t<div class=\"elementor-spacer\">\n\t\t\t<div class=\"elementor-spacer-inner\"><\/div>\n\t\t<\/div>\n\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-d783ab8 elementor-widget elementor-widget-text-editor\" data-id=\"d783ab8\" data-element_type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p><span style=\"font-weight: 400;\">Back in 2011, when I first started writing SEO blog posts for Moz, I was writing walls of text because that was my nature, despite their popularity. 
Then-CMO Jamie Steven instructed me to read Cyrus Shepard\u2019s <\/span><a href=\"https:\/\/moz.com\/blog\/10-super-easy-seo-copywriting-tips-for-link-building\"><span style=\"font-weight: 400;\">10 Super Easy SEO Copywriting Tips for Improved Link Building<\/span><\/a><span style=\"font-weight: 400;\"> for direction on how I should structure what I write for better performance.\u00a0<\/span><\/p>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-178083c elementor-widget elementor-widget-image\" data-id=\"178083c\" data-element_type=\"widget\" data-widget_type=\"image.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t<img fetchpriority=\"high\" decoding=\"async\" width=\"1812\" height=\"845\" src=\"https:\/\/ipullrank.com\/wp-content\/uploads\/2026\/01\/time-on-page-earned-links-comparison.png\" class=\"attachment-full size-full wp-image-20820\" alt=\"Comparison showing average time on page and earned links between two content formats. 
Chunked and not chunked\" srcset=\"https:\/\/ipullrank.com\/wp-content\/uploads\/2026\/01\/time-on-page-earned-links-comparison.png 1812w, https:\/\/ipullrank.com\/wp-content\/uploads\/2026\/01\/time-on-page-earned-links-comparison-300x140.png 300w, https:\/\/ipullrank.com\/wp-content\/uploads\/2026\/01\/time-on-page-earned-links-comparison-1024x478.png 1024w, https:\/\/ipullrank.com\/wp-content\/uploads\/2026\/01\/time-on-page-earned-links-comparison-768x358.png 768w, https:\/\/ipullrank.com\/wp-content\/uploads\/2026\/01\/time-on-page-earned-links-comparison-1536x716.png 1536w\" sizes=\"(max-width: 1812px) 100vw, 1812px\" \/>\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-8b1d74d elementor-widget elementor-widget-text-editor\" data-id=\"8b1d74d\" data-element_type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p><span style=\"font-weight: 400;\">In the article, Cyrus comes out swinging showing this visual comparison of a wall of text versus a very well-structured piece of content with lots of formatting and imagery. Using data to drive the point home, he shows how the two posts (by the same great internet marketer) had dramatically different performance, with 62X the external link capture and nearly 4X the time on page.<\/span><\/p><p><span style=\"font-weight: 400;\">I was hooked and those insights have stuck with me ever since. In fact, you can trace back elements of anything I\u2019ve written over the last 14 years to the formatting lessons of that classic post. I\u2019d go as far as to say I think more about these principles than I do so-called SEO \u201cbest practices.\u201d<\/span><\/p><p><span style=\"font-weight: 400;\">Part of why what Cyrus outlined resonated with me so much is that the principles just make sense. 
Conceptually, it all harkens back to everything we all learned about how humans interact with information when we read <\/span><a href=\"https:\/\/en.wikipedia.org\/wiki\/Don%27t_Make_Me_Think\"><span style=\"font-weight: 400;\">\u201cDon&#8217;t Make Me Think.\u201d<\/span><\/a><span style=\"font-weight: 400;\"> Over time, I\u2019ve seen the specificity and improved content UX he highlighted yield better performance on every human-driven metric we measure, as well as more visibility in search engines and large language models.\u00a0<\/span><\/p>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-240e645 elementor-widget elementor-widget-heading\" data-id=\"240e645\" data-element_type=\"widget\" data-widget_type=\"heading.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t<h2 class=\"elementor-heading-title elementor-size-default\">But\u2026Google Says Don\u2019t Break Your Content Into Bite-Sized Chunks<\/h2>\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-0489fe0 elementor-widget elementor-widget-text-editor\" data-id=\"0489fe0\" data-element_type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p><span style=\"font-weight: 400;\">Recently, on the <\/span><a href=\"https:\/\/search-off-the-record.libsyn.com\/seo-aio-geo-your-site-third-party-support-to-optimize-for-llms\"><span style=\"font-weight: 400;\">Search Off the Record podcast<\/span><\/a><span style=\"font-weight: 400;\">, Danny Sullivan shared his opinion on \u201cchunking\u201d as a tactic to drive visibility in AI Search surfaces (emphasis mine).\u00a0<\/span><\/p>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-516561d elementor-widget__width-initial elementor-blockquote--skin-border elementor-blockquote--button-color-official elementor-widget elementor-widget-blockquote\" 
data-id=\"516561d\" data-element_type=\"widget\" data-widget_type=\"blockquote.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t<blockquote class=\"elementor-blockquote\">\n\t\t\t<p class=\"elementor-blockquote__content\">\n\t\t\t\t\u201cOne of the things I keep seeing over and over in some of the advice and guidance and people are trying to figure out what do we do with the LLMs or whatever, is that turn your content into bite-sized chunks, because LLMs like things that are really bite size, right?<br>\n\n<br>So<b> we don't want you to do that.<\/b> I was talking to some engineers about that. <b>We don't want you to do that. We really don't. We don't want people to have to be crafting anything for Search specifically.<\/b> That's never been where we've been at and we still continue to be that way. <b>We really don't want you to think you need to be doing that or produce two versions of your content,<\/b> one for the LLM and one for the net.\u201d\t\t\t<\/p>\n\t\t\t\t\t<\/blockquote>\n\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-3f35d9c elementor-widget elementor-widget-text-editor\" data-id=\"3f35d9c\" data-element_type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p><span style=\"font-weight: 400;\">After I laughed to myself in the graveyard of AMP POVs and technical specifications, I turned it back on.<\/span><\/p><p><span style=\"font-weight: 400;\">He continued, pre-empting the \u201cbut it works, I\u2019m going to do it anyway\u201d argument Danny offers (emphasis still mine):<\/span><\/p>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-639fc12 elementor-widget__width-initial elementor-blockquote--skin-border elementor-blockquote--button-color-official elementor-widget elementor-widget-blockquote\" data-id=\"639fc12\" data-element_type=\"widget\" 
data-widget_type=\"blockquote.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t<blockquote class=\"elementor-blockquote\">\n\t\t\t<p class=\"elementor-blockquote__content\">\n\t\t\t\t\u201cLet's assume that, in some edge cases, let's even assume maybe in more than some edge cases, you're finding you're getting some advantage here. Maybe tiny degree measure. No, this is my secret weapon. It's doing it.\" <b>Great. That's what's happening now. But tomorrow the systems may change.<\/b><br>\n\n<br>So you've gone through all this effort. You've made all these <b>things that you did specifically for a ranking system, not for a human being<\/b>, because you were trying to be more successful in the ranking system, not staying focused on the human being. And then the systems improve, probably the way the systems always try to improve, to reward content written for humans. All that stuff that you did to please this LLM system that may or may not have worked, may not carry through for the long term.<br>\n\n<br>So was that the best use of your time and your energy? Was that the best use of putting turmoil into your marketing department, your content department, and all your other stuff so that you could say, \"A-ha, I've got the new thing that you wanted, I've brought it down from the mountain and here it is. 
Do these sorts of things.\u201d\t\t\t<\/p>\n\t\t\t\t\t<\/blockquote>\n\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-af124c4 elementor-widget elementor-widget-text-editor\" data-id=\"af124c4\" data-element_type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p><span style=\"font-weight: 400;\">Before I take this where you know I will, let me first say this.<\/span><\/p><p><span style=\"font-weight: 400;\">I deeply respect Danny Sullivan for what he has done for the search marketing community both inside and outside of Google. Full stop.\u00a0<\/span><\/p><p><span style=\"font-weight: 400;\">However, I have two problems with these statements and want to clarify for anyone who is questioning the value of improving content structure (partially) in the service of better visibility:<\/span><\/p><ol><li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Chunking and creating content for users are not mutually exclusive.\u00a0<\/span><\/li><li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">The statements misalign with how Retrieval Augmented Generation technology functions and with where future technologies are going.\u00a0<\/span><\/li><\/ol><p><span style=\"font-weight: 400;\">In the spirit of chunking, let\u2019s add a heading and get to my next series of extractable atomic points.\u00a0<\/span><\/p>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-8377e34 elementor-widget elementor-widget-heading\" data-id=\"8377e34\" data-element_type=\"widget\" data-widget_type=\"heading.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t<h2 class=\"elementor-heading-title elementor-size-default\">Chunking and Writing for Users Are Not Mutually Exclusive<\/h2>\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div 
class=\"elementor-element elementor-element-f8c9914 elementor-widget elementor-widget-text-editor\" data-id=\"f8c9914\" data-element_type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p><span style=\"font-weight: 400;\">First, we need to disambiguate \u201cchunking\u201d as it&#8217;s used to describe an operation in Retrieval Augmented Generation (RAG) systems from how it&#8217;s being used to describe a content optimization action.<\/span><\/p><p><span style=\"font-weight: 400;\">Chunking as it has been co-opted is really structuring content so that its passages and statements perform better when retrieved in a RAG pipeline. I know this <\/span><a href=\"https:\/\/searchengineland.com\/how-search-generative-experience-works-and-why-retrieval-augmented-generation-is-our-future-433393\"><span style=\"font-weight: 400;\">because I\u2019m one of the first people to drag the term from the AI\/IR space<\/span><\/a><span style=\"font-weight: 400;\"> into the SEO space.<\/span><\/p><p><span style=\"font-weight: 400;\">If we\u2019re being reductive (like most \u201cit\u2019s just SEO\u201d arguments are), we\u2019re effectively talking about content design or UX writing. As with everything in search and content marketing, machines are just a subset of the target personas. 
So, the idea of preparing content solely for the machine is still nonsense.\u00a0<\/span><\/p>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-6dc2ba7 elementor-widget elementor-widget-video\" data-id=\"6dc2ba7\" data-element_type=\"widget\" data-settings=\"{&quot;youtube_url&quot;:&quot;https:\\\/\\\/www.youtube.com\\\/watch?v=yqKofAxT8UU&quot;,&quot;video_type&quot;:&quot;youtube&quot;,&quot;controls&quot;:&quot;yes&quot;}\" data-widget_type=\"video.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t<div class=\"elementor-wrapper elementor-open-inline\">\n\t\t\t<div class=\"elementor-video\"><\/div>\t\t<\/div>\n\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-be6208f elementor-widget elementor-widget-text-editor\" data-id=\"be6208f\" data-element_type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p><span style=\"font-weight: 400;\">When we\u2019re talking about chunking in this sense, the act of structuring content overlaps with the content design aspects of Cyrus\u2019s post. However, where it differs is in the reasoning behind the decisions you make in the copy you write.\u00a0<\/span><\/p><p><span style=\"font-weight: 400;\">While the act overlaps with the largely qualitative processes people have historically used, it is not the same. Effective chunking follows all the practices Cyrus discussed, but adds vector analysis to verify improvements. Also, for clarity, chunking is but one of an array of tactics you should apply from the content engineering toolbox. And what differentiates it from UX writing or standard copywriting are the aspects of relevance calculation that must be accounted for at the passage level. 
It\u2019s not <\/span><i><span style=\"font-weight: 400;\">just<\/span><\/i><span style=\"font-weight: 400;\"> chopping paragraphs into smaller paragraphs, adding more headings, and hoping for the best.<\/span><\/p><p><span style=\"font-weight: 400;\">Consider this: no one can reliably determine whether content is AI-generated without watermarks. So Google, needing a more reliable signal, leverages user interactions to determine whether content should continue to rank. The main attribute that yields better content performance is better structure, irrespective of why you do it.<\/span><\/p>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-c83b8a9 elementor-widget elementor-widget-heading\" data-id=\"c83b8a9\" data-element_type=\"widget\" data-widget_type=\"heading.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t<h3 class=\"elementor-heading-title elementor-size-default\">So, What Truly Is Chunking?<\/h3>\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-d7916f4 elementor-widget elementor-widget-text-editor\" data-id=\"d7916f4\" data-element_type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p><span style=\"font-weight: 400;\">Chunking is the action that RAG systems take with content when they capture it to prepare it for the retrieval process. It is the act of breaking content into a series of components that can be individually retrieved based on how relevant they are to a prompt or user query. 
<\/span><\/p>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-00eb38f elementor-widget elementor-widget-image\" data-id=\"00eb38f\" data-element_type=\"widget\" data-widget_type=\"image.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t<img decoding=\"async\" width=\"1812\" height=\"1176\" src=\"https:\/\/ipullrank.com\/wp-content\/uploads\/2026\/01\/semantic-text-chunking-example-highlighted-passages-1.png\" class=\"attachment-full size-full wp-image-20823\" alt=\"\" srcset=\"https:\/\/ipullrank.com\/wp-content\/uploads\/2026\/01\/semantic-text-chunking-example-highlighted-passages-1.png 1812w, https:\/\/ipullrank.com\/wp-content\/uploads\/2026\/01\/semantic-text-chunking-example-highlighted-passages-1-300x195.png 300w, https:\/\/ipullrank.com\/wp-content\/uploads\/2026\/01\/semantic-text-chunking-example-highlighted-passages-1-1024x665.png 1024w, https:\/\/ipullrank.com\/wp-content\/uploads\/2026\/01\/semantic-text-chunking-example-highlighted-passages-1-768x498.png 768w, https:\/\/ipullrank.com\/wp-content\/uploads\/2026\/01\/semantic-text-chunking-example-highlighted-passages-1-1536x997.png 1536w\" sizes=\"(max-width: 1812px) 100vw, 1812px\" \/>\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-56f1d9e elementor-widget elementor-widget-text-editor\" data-id=\"56f1d9e\" data-element_type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p><span style=\"font-weight: 400;\">This is also a function of dense retrieval which Google effectively announced when they revealed their implementation of <\/span><a href=\"https:\/\/blog.google\/products-and-platforms\/products\/search\/search-on\/\"><span style=\"font-weight: 400;\">Passage Indexing<\/span><\/a><span style=\"font-weight: 400;\">. 
In passage indexing, passages are embedded and stored, and so is the query. Approximate Nearest Neighbor (ANN) searches are performed to pull the closest matching passages. This is one of the building blocks of RAG, the primary paradigm behind AI Search.<\/span><\/p>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-961e2ed elementor-widget elementor-widget-image\" data-id=\"961e2ed\" data-element_type=\"widget\" data-widget_type=\"image.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t<img decoding=\"async\" width=\"870\" height=\"1116\" src=\"https:\/\/ipullrank.com\/wp-content\/uploads\/2026\/01\/image4.gif\" class=\"attachment-full size-full wp-image-20805\" alt=\"\" \/>\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-6fcee16 elementor-widget elementor-widget-text-editor\" data-id=\"6fcee16\" data-element_type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p><span style=\"font-weight: 400;\">There are a variety of chunking strategies including, but not limited to, semantic, layout-aware, and fixed-length (token-size) chunking. <\/span><a href=\"https:\/\/metehan.ai\/blog\/reverse-engineering-google-ai-mode\/\"><span style=\"font-weight: 400;\">Metehan\u2019s research into Google\u2019s public Vertex AI offering<\/span><\/a><span style=\"font-weight: 400;\"> suggests that theirs may be a combination of fixed-length and layout-aware chunking with the cascading heading option.<\/span><\/p><p><span style=\"font-weight: 400;\">So, we are using the same term to refer to both the action that the system takes to decompose content and the work that we\u2019re doing to restructure the content so it\u2019s easier to extract. 
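As a minimal sketch of two of those strategies, here is what fixed-length and layout-aware chunking can look like. This is illustrative only; it makes no claim about how Google or any vendor actually implements chunking, and the sample document is invented.

```python
# Illustrative sketches of two chunking strategies. These are
# simplifications for explanation, not any vendor's actual pipeline.
import re

def fixed_length_chunks(text, max_tokens=200, overlap=20):
    """Fixed-length chunking: sliding windows of words with overlap."""
    tokens = text.split()
    chunks = []
    step = max_tokens - overlap
    for start in range(0, len(tokens), step):
        window = tokens[start:start + max_tokens]
        if window:
            chunks.append(" ".join(window))
        if start + max_tokens >= len(tokens):
            break
    return chunks

def layout_aware_chunks(markdown_text):
    """Layout-aware chunking: split on headings, keeping each heading
    attached to the body text beneath it as context."""
    parts = re.split(r"(?m)^(#{1,6} .*)$", markdown_text)
    chunks, current = [], ""
    for part in parts:
        if re.match(r"#{1,6} ", part):
            if current.strip():
                chunks.append(current.strip())
            current = part + "\n"
        else:
            current += part
    if current.strip():
        chunks.append(current.strip())
    return chunks

doc = "# Chunking\nChunking splits content.\n\n# Retrieval\nChunks are retrieved by similarity."
print(layout_aware_chunks(doc))
```

Overlap in the fixed-length variant is the usual hedge against a relevant sentence being cut in half at a window boundary; the layout-aware variant is why clear headings tend to produce cleaner, more self-contained chunks.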
I wanted to clarify that for people who look to invalidate meaningful discussion based on syntax and vocabulary.<\/span><\/p>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-96efb88 elementor-widget elementor-widget-heading\" data-id=\"96efb88\" data-element_type=\"widget\" data-widget_type=\"heading.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t<h2 class=\"elementor-heading-title elementor-size-default\">Why Is Chunking Different from Classic Content Optimization for SEO?<\/h2>\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-8ae3d7f elementor-widget elementor-widget-text-editor\" data-id=\"8ae3d7f\" data-element_type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p><span style=\"font-weight: 400;\">In classic SEO, content boundaries were defined largely by intuition and editorial convention. Even when using content optimization tools, the analysis was typically lexical and page-level, evaluating aggregate term usage rather than the relevance of individual passages. As a result, while pages may have been visually or structurally segmented, those segments were not deliberately optimized as independent units of meaning.<\/span><\/p><p><span style=\"font-weight: 400;\">Chunking as an optimization tactic changes that. With clearer insight into how modern systems evaluate content at the passage level, we can now treat each chunk as a discrete relevance object. This makes it possible to intentionally shape structure, specificity, and context within each passage to influence how it is measured, compared, and selected. 
Instead of optimizing pages holistically and hoping relevance emerges, chunking allows us to precisely adjust content at the level where relevance is actually computed.\u00a0<\/span><\/p><p><span style=\"font-weight: 400;\">Insights that Dan Petrovic shared on <\/span><a href=\"https:\/\/dejan.ai\/blog\/how-big-are-googles-grounding-chunks\/\"><span style=\"font-weight: 400;\">the length of Google\u2019s grounding chunks<\/span><\/a><span style=\"font-weight: 400;\"> and <\/span><a href=\"https:\/\/dejan.ai\/blog\/ai-search-filter\/\"><span style=\"font-weight: 400;\">how much of your content gets used after it makes it through filtering<\/span><\/a><span style=\"font-weight: 400;\"> give us more clarity on the atomicity. We also know that the natural paragraph boundaries we create influence what is considered a chunk in these systems.\u00a0<\/span><\/p><p><span style=\"font-weight: 400;\">Historically, SEO has treated the page as a single context window, with no real measurable way to tell whether your optimizations did anything beyond the rankings themselves. Sure, the various content optimization tools give you a lexical score, but nothing that aligns with the breadth of modern information retrieval. 
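A minimal sketch of what passage-level measurement can look like: here toy bag-of-words vectors stand in for the dense neural embeddings real systems use, and the chunk texts are invented purely for illustration.

```python
# Toy passage-level relevance check: represent each chunk and the query
# as bag-of-words vectors and rank chunks by cosine similarity.
# Real systems use dense neural embeddings; the mechanics are analogous.
import math
from collections import Counter

def embed(text):
    """Toy 'embedding': a sparse word-count vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

chunks = [
    "machine learning models are trained on labeled data",
    "data privacy laws govern how personal data is stored",
    "our company was founded in 1999 in a garage",
]
query = embed("machine learning training data")

# Rank chunks by similarity to the query, most relevant first.
ranked = sorted(chunks, key=lambda c: cosine(query, embed(c)), reverse=True)
print(ranked[0])
```

The useful part is the feedback loop: score a chunk against its target query, rewrite it, and re-score, rather than waiting on rankings to move.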
Chunking offers a direct feedback loop: you can isolate and improve specific blocks of content and influence how they perform in AI surfaces.<\/span><\/p>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-4e5bcbd elementor-widget elementor-widget-heading\" data-id=\"4e5bcbd\" data-element_type=\"widget\" data-widget_type=\"heading.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t<h2 class=\"elementor-heading-title elementor-size-default\">The Google-shaped Web<\/h2>\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-0cf2a2f elementor-widget elementor-widget-text-editor\" data-id=\"0cf2a2f\" data-element_type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p><a href=\"https:\/\/www.theverge.com\/c\/23998379\/google-search-seo-algorithm-webpage-optimization\"><span style=\"font-weight: 400;\">Much<\/span><\/a> <a href=\"https:\/\/thinklikeacoder.org\/blog\/how-googles-search-algorithm-transformed-the-internet-and-shapes-what-we-know\"><span style=\"font-weight: 400;\">has<\/span><\/a> <a href=\"https:\/\/www.polemicdigital.com\/google-shaped-web\"><span style=\"font-weight: 400;\">been<\/span><\/a> <a href=\"https:\/\/www.theverge.com\/23753963\/google-seo-shopify-small-business-ai\"><span style=\"font-weight: 400;\">said<\/span><\/a><span style=\"font-weight: 400;\"> about how the web has conformed to what performs best in Google. That\u2019s expected when Google is the biggest referral channel. However, with the advent of generative AI, websites are no longer adapting to a single set of guidelines or incentives. 
Content is now shaped by multiple platforms and channels, including search engines, AI assistants, recommendation systems, and social feeds, each imposing different structural and semantic pressures on how information is created and how users react to it.<\/span><\/p><p><span style=\"font-weight: 400;\">After two decades of being in the space, I can say definitively that statements like this are how Google keeps marketers as their unpaid workforce, nudging the web toward what works best for Google.<\/span><\/p><p><span style=\"font-weight: 400;\">Googlers often speak as though they are merely extracting natural patterns from the web, positioning themselves as neutral observers. But they are not <\/span><a href=\"https:\/\/en.wikipedia.org\/wiki\/Watcher_(comics)\"><span style=\"font-weight: 400;\">the Watchers<\/span><\/a><span style=\"font-weight: 400;\">. They are <\/span><a href=\"https:\/\/en.wikipedia.org\/wiki\/Watcher_(comics)\"><span style=\"font-weight: 400;\">the Celestials<\/span><\/a><span style=\"font-weight: 400;\">. One watches without interference; the other designs systems that determine what survives. Google\u2019s ranking and retrieval decisions have shaped the web for decades. Entire categories of sites have converged on similar layouts, headings, FAQs, and explanatory formats not because those patterns emerged organically, but because Google\u2019s systems and PR consistently reinforced them.<\/span><\/p><p><span style=\"font-weight: 400;\">It\u2019s not that they \u201cdon\u2019t want people to have to be crafting anything for search specifically.\u201d It\u2019s that they \u201cdon\u2019t want people to have to be crafting things for search that take advantage of Google.\u201d\u00a0<\/span><\/p><p><span style=\"font-weight: 400;\">What changes with generative AI is not that this influence goes away, but that it begins to fragment. Search is no longer only about ranking pages. 
It is about selecting, extracting, and recombining passages across sources. The incentives now favor content that can be easily segmented, understood in isolation, and reused by machines.<\/span><\/p><p><span style=\"font-weight: 400;\">This is still a Google-shaped web. But the shape is starting to loosen, creating the conditions for what comes next.<\/span><\/p>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-9657bf8 elementor-widget elementor-widget-heading\" data-id=\"9657bf8\" data-element_type=\"widget\" data-widget_type=\"heading.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t<h2 class=\"elementor-heading-title elementor-size-default\">Google still has search, but the agent-shaped web is emerging outside their control <\/h2>\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-120692c elementor-widget elementor-widget-text-editor\" data-id=\"120692c\" data-element_type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p><span style=\"font-weight: 400;\">As generative AI becomes a primary interface for information, the incentives that once forced publishers to conform to Google\u2019s preferences are weakening. Users are getting answers without clicks, referral traffic is less reliable, and the payoff for strict adherence to SEO best practices and Google\u2019s guidelines continues to shrink &#8211;\u00a0 even when Google results are a key input for the results. The result is a gradual but meaningful loss of influence over how content is structured and prioritized.<\/span><\/p><p><em><span style=\"font-weight: 400;\">(sidebar: I\u2019ll have you know I wrote that em dash myself in that last paragraph.)<\/span><\/em><\/p><p><span style=\"font-weight: 400;\">In conversations with F100 clients, this shift shows up clearly. 
A few are moving towards abandoning search outright, but many are questioning how much effort it still deserves. Investment is spreading to other formats and channels, and teams are becoming more willing to deviate from rigid SEO conventions. Not because best practices are \u201cwrong,\u201d but because repeated testing shows their impact is increasingly marginal.<\/span><\/p><p><span style=\"font-weight: 400;\">What\u2019s emerging is an agent-shaped web. Content is no longer written primarily to satisfy a single ranking system, but to be usable by agents that retrieve, reason over, and recombine information across sources. These non-Google systems do not publish guidelines. They do not moralize tactics as \u201cwhite hat\u201d or \u201cblack hat.\u201d They simply use the content that works. In that environment, many behaviors Google historically discouraged are not violations. They are advantages.<\/span><\/p><p><span style=\"font-weight: 400;\">This is how Google\u2019s grip loosens. When influence shifts from ranking pages to supplying agents with usable inputs, control fragments. The web stops optimizing for compliance and starts optimizing for utility.<\/span><\/p><p><span style=\"font-weight: 400;\">That\u2019s why, when I\u2019ve asked Google engineers what to do beyond \u201cmake great content\u201d to improve rankings, the answer has consistently been \u201cnothing.\u201d That response only holds if Google remains the central force shaping outcomes. In an agent-shaped web, it isn\u2019t. 
And, that\u2019s why you see them creating fast-follow protocols like <a href=\"https:\/\/developers.googleblog.com\/en\/a2a-a-new-era-of-agent-interoperability\/\">A2A<\/a> after <a href=\"https:\/\/modelcontextprotocol.io\/docs\/getting-started\/intro\">MCP<\/a> and <a href=\"https:\/\/developers.googleblog.com\/under-the-hood-universal-commerce-protocol-ucp\/\">UCP<\/a> after <a href=\"https:\/\/www.agenticcommerce.dev\/\">ACP<\/a>.<\/span><\/p>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-ff89797 elementor-widget elementor-widget-heading\" data-id=\"ff89797\" data-element_type=\"widget\" data-widget_type=\"heading.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t<h2 class=\"elementor-heading-title elementor-size-default\">How Chunking Improves Relevance<\/h2>\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-2c51759 elementor-widget elementor-widget-text-editor\" data-id=\"2c51759\" data-element_type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p><span style=\"font-weight: 400;\">We know that content structure influences people and they should be the primary audience for any content adjustment. Fundamentally though, Danny\u2019s statements do not align with how the underlying technology functions.\u00a0<\/span><\/p><p><span style=\"font-weight: 400;\">Search engines and Large Language Models are both built on the <\/span><a href=\"https:\/\/ipullrank.com\/content-relevance\"><span style=\"font-weight: 400;\">vector space model<\/span><\/a><span style=\"font-weight: 400;\">. Relevance is a function of distance measures between queries\/prompts and documents. 
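A toy demonstration of that distance idea, using word-count vectors as a stand-in for real dense embeddings (the passages here are invented for illustration, not output from any production system):

```python
# Toy demonstration that a focused passage scores a higher cosine
# similarity against a query than a longer passage mixing subjects.
# Word-count vectors stand in for real dense embeddings.
import math
from collections import Counter

def cosine_sim(text_a, text_b):
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(a[t] * b[t] for t in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b)

query = "data privacy regulations"
focused = "data privacy regulations define how companies handle personal data"
broad = ("data privacy regulations define how companies handle personal data "
         "machine learning systems need large datasets "
         "cloud infrastructure costs keep falling every year")

# The focused passage sits closer to the query in vector space.
print(cosine_sim(query, focused) > cosine_sim(query, broad))  # True
```

The broad passage contains every relevant word the focused one does, yet the extra off-topic terms inflate its vector norm and dilute its similarity score. That is the mechanical reason tightly scoped passages tend to win pairwise comparisons.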
Where search engines measure this to rank documents, LLMs use the plotted relationships to predict the next token.<\/span><\/p><p><span style=\"font-weight: 400;\">The distance measures are the values that are compared to determine what to feed the LLM. In synthesis pipelines, <\/span><a href=\"https:\/\/patents.google.com\/patent\/US20250124067A1\/en\"><span style=\"font-weight: 400;\">there is a pairwise determination<\/span><\/a><span style=\"font-weight: 400;\"> where passages are compared side by side to determine what gets sent to the language model. A longer piece of text that covers multiple subjects typically has lower relevance than a shorter piece of text that covers a single subject.<\/span><\/p><p><span style=\"font-weight: 400;\">Let\u2019s illustrate that idea with an actual example <\/span><span style=\"font-weight: 400;\">in my tool BubbaChunk (if you know you know).<\/span><\/p>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-c89561a elementor-widget elementor-widget-image\" data-id=\"c89561a\" data-element_type=\"widget\" data-widget_type=\"image.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t<img loading=\"lazy\" decoding=\"async\" width=\"800\" height=\"571\" src=\"https:\/\/ipullrank.com\/wp-content\/uploads\/2026\/01\/bubbachunk-semantic-editor-layout-chunking-2-1024x731.png\" class=\"attachment-large size-large wp-image-20827\" alt=\"\" srcset=\"https:\/\/ipullrank.com\/wp-content\/uploads\/2026\/01\/bubbachunk-semantic-editor-layout-chunking-2-1024x731.png 1024w, https:\/\/ipullrank.com\/wp-content\/uploads\/2026\/01\/bubbachunk-semantic-editor-layout-chunking-2-300x214.png 300w, https:\/\/ipullrank.com\/wp-content\/uploads\/2026\/01\/bubbachunk-semantic-editor-layout-chunking-2-768x548.png 768w, https:\/\/ipullrank.com\/wp-content\/uploads\/2026\/01\/bubbachunk-semantic-editor-layout-chunking-2-1536x1097.png 1536w, 
https:\/\/ipullrank.com\/wp-content\/uploads\/2026\/01\/bubbachunk-semantic-editor-layout-chunking-2.png 1647w\" sizes=\"(max-width: 800px) 100vw, 800px\" \/>\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-122d858 elementor-widget elementor-widget-text-editor\" data-id=\"122d858\" data-element_type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p><span style=\"font-weight: 400;\">The paragraph above targets the queries [machine learning] and [data privacy]. When I generate embeddings for the queries and for that paragraph, using cosine similarity as my distance measure, I get a 0.541 for [machine learning] and a 0.620 for [data privacy].<\/span><\/p>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-42e498f elementor-widget elementor-widget-image\" data-id=\"42e498f\" data-element_type=\"widget\" data-widget_type=\"image.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t<img loading=\"lazy\" decoding=\"async\" width=\"800\" height=\"571\" src=\"https:\/\/ipullrank.com\/wp-content\/uploads\/2026\/01\/bubbachunk-screenshot-1024x731.png\" class=\"attachment-large size-large wp-image-20824\" alt=\"\" srcset=\"https:\/\/ipullrank.com\/wp-content\/uploads\/2026\/01\/bubbachunk-screenshot-1024x731.png 1024w, https:\/\/ipullrank.com\/wp-content\/uploads\/2026\/01\/bubbachunk-screenshot-300x214.png 300w, https:\/\/ipullrank.com\/wp-content\/uploads\/2026\/01\/bubbachunk-screenshot-768x548.png 768w, https:\/\/ipullrank.com\/wp-content\/uploads\/2026\/01\/bubbachunk-screenshot-1536x1097.png 1536w, https:\/\/ipullrank.com\/wp-content\/uploads\/2026\/01\/bubbachunk-screenshot.png 1647w\" sizes=\"(max-width: 800px) 100vw, 800px\" \/>\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element 
elementor-element-af22671 elementor-widget elementor-widget-text-editor\" data-id=\"af22671\" data-element_type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p><span style=\"font-weight: 400;\">Now, let\u2019s split that paragraph in two and not change anything else. The machine learning paragraph has now improved 19.24% to a 0.645 cosine similarity. The data privacy paragraph improved 1.29% to 0.627.<\/span><\/p>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-5ea2245 elementor-widget elementor-widget-image\" data-id=\"5ea2245\" data-element_type=\"widget\" data-widget_type=\"image.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t<img loading=\"lazy\" decoding=\"async\" width=\"800\" height=\"714\" src=\"https:\/\/ipullrank.com\/wp-content\/uploads\/2026\/01\/machine-learning-chamfer-score-semantic-relevance.png\" class=\"attachment-large size-large wp-image-20828\" alt=\"Chamfer score analysis showing semantic relevance of a machine learning paragraph across multiple distance metrics.\" srcset=\"https:\/\/ipullrank.com\/wp-content\/uploads\/2026\/01\/machine-learning-chamfer-score-semantic-relevance.png 856w, https:\/\/ipullrank.com\/wp-content\/uploads\/2026\/01\/machine-learning-chamfer-score-semantic-relevance-300x268.png 300w, https:\/\/ipullrank.com\/wp-content\/uploads\/2026\/01\/machine-learning-chamfer-score-semantic-relevance-768x685.png 768w\" sizes=\"(max-width: 800px) 100vw, 800px\" \/>\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-af1d804 elementor-widget elementor-widget-image\" data-id=\"af1d804\" data-element_type=\"widget\" data-widget_type=\"image.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t<img loading=\"lazy\" decoding=\"async\" width=\"800\" height=\"1003\" 
src=\"https:\/\/ipullrank.com\/wp-content\/uploads\/2026\/01\/data-privacy-chamfer-score-semantic-relevance-817x1024.png\" class=\"attachment-large size-large wp-image-20829\" alt=\"\" srcset=\"https:\/\/ipullrank.com\/wp-content\/uploads\/2026\/01\/data-privacy-chamfer-score-semantic-relevance-817x1024.png 817w, https:\/\/ipullrank.com\/wp-content\/uploads\/2026\/01\/data-privacy-chamfer-score-semantic-relevance-239x300.png 239w, https:\/\/ipullrank.com\/wp-content\/uploads\/2026\/01\/data-privacy-chamfer-score-semantic-relevance-768x962.png 768w, https:\/\/ipullrank.com\/wp-content\/uploads\/2026\/01\/data-privacy-chamfer-score-semantic-relevance.png 855w\" sizes=\"(max-width: 800px) 100vw, 800px\" \/>\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-82ba842 elementor-widget elementor-widget-text-editor\" data-id=\"82ba842\" data-element_type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p><span style=\"font-weight: 400;\">When compared to passages on both subjects, it now has a better opportunity to perform. Within an environment of a full page, other elements like the heading hierarchy and surrounding passages can be used to influence this. I can further improve the scores with semantic triples, entity salience, and so on, but in isolation, restructuring this content by changing its boundaries improves its retrievability.<\/span><\/p><p><span style=\"font-weight: 400;\">Some folks may be invested in <\/span><a href=\"https:\/\/research.google\/blog\/muvera-making-multi-vector-retrieval-as-fast-as-single-vector-search\/\"><span style=\"font-weight: 400;\">Google\u2019s multi-aspect embedding technique MUVERA<\/span><\/a><span style=\"font-weight: 400;\">. 
BubbaChunk takes a similar approach and MUVERA uses <\/span><a href=\"https:\/\/medium.com\/@sim30217\/chamfer-distance-4207955e8612\"><span style=\"font-weight: 400;\">Chamfer Similarity<\/span><\/a><span style=\"font-weight: 400;\"> as its distance measure. Those are the Chamfer values you see in the screenshots. You\u2019ll note that there are improvements to all distance measures when I\u2019ve made this adjustment.\u00a0<\/span><\/p><p><span style=\"font-weight: 400;\">If you\u2019re curious, adding the headings does improve the scores significantly. Below you\u2019ll see adding the header to the \u201cData Privacy\u201d paragraph improved cosine similarity another 17.54%.<\/span><\/p>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-d73058d elementor-widget elementor-widget-image\" data-id=\"d73058d\" data-element_type=\"widget\" data-widget_type=\"image.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t<img loading=\"lazy\" decoding=\"async\" width=\"800\" height=\"878\" src=\"https:\/\/ipullrank.com\/wp-content\/uploads\/2026\/01\/data-privacy-machine-learning-chamfer-score-comparison-933x1024.png\" class=\"attachment-large size-large wp-image-20832\" alt=\"Chamfer score comparison showing semantic alignment between data privacy content and machine learning concepts.\" srcset=\"https:\/\/ipullrank.com\/wp-content\/uploads\/2026\/01\/data-privacy-machine-learning-chamfer-score-comparison-933x1024.png 933w, https:\/\/ipullrank.com\/wp-content\/uploads\/2026\/01\/data-privacy-machine-learning-chamfer-score-comparison-273x300.png 273w, https:\/\/ipullrank.com\/wp-content\/uploads\/2026\/01\/data-privacy-machine-learning-chamfer-score-comparison-768x843.png 768w, https:\/\/ipullrank.com\/wp-content\/uploads\/2026\/01\/data-privacy-machine-learning-chamfer-score-comparison.png 1071w\" sizes=\"(max-width: 800px) 100vw, 800px\" 
\/>\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-5e4b941 elementor-widget elementor-widget-text-editor\" data-id=\"5e4b941\" data-element_type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p><span style=\"font-weight: 400;\">No matter how you slice it, embed it, or measure it, improving the structure of content yields better scores from machines and better performance with people.<\/span><\/p>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-6904c5c elementor-widget elementor-widget-heading\" data-id=\"6904c5c\" data-element_type=\"widget\" data-widget_type=\"heading.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t<h2 class=\"elementor-heading-title elementor-size-default\">Structured Content is Better in Any Paradigm <\/h2>\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-d074e7c elementor-widget elementor-widget-text-editor\" data-id=\"d074e7c\" data-element_type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p><span style=\"font-weight: 400;\">Danny\u2019s comments suggest that Google may eventually evolve its systems to discourage overt structuring techniques. That assumes structure is a temporary optimization tactic and suggests we\u2019re moving to a world where \u201chigh-quality writing\u201d is a monolith that the algorithm will simply \u201cunderstand.\u201d Google\u2019s own research direction, alongside adjacent work from Meta, Berkeley, and MIT, points in the opposite direction. As systems gain access to more context, memory, and recursion, structure becomes more important, not less. 
Across multiple papers, Google Research is clearly pursuing near-infinite context through memory rather than brute-force attention, and they are building atop the state of the art from other groups in the space.<\/span><\/p>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-d539a2d elementor-widget elementor-widget-image\" data-id=\"d539a2d\" data-element_type=\"widget\" data-widget_type=\"image.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t<img loading=\"lazy\" decoding=\"async\" width=\"800\" height=\"702\" src=\"https:\/\/ipullrank.com\/wp-content\/uploads\/2026\/01\/blockwise-transformer-attention-distributed-compute-1-1024x899.png\" class=\"attachment-large size-large wp-image-20835\" alt=\"Infini-attention architecture showing compressive memory and linear attention for processing long or infinite context.\" srcset=\"https:\/\/ipullrank.com\/wp-content\/uploads\/2026\/01\/blockwise-transformer-attention-distributed-compute-1-1024x899.png 1024w, https:\/\/ipullrank.com\/wp-content\/uploads\/2026\/01\/blockwise-transformer-attention-distributed-compute-1-300x263.png 300w, https:\/\/ipullrank.com\/wp-content\/uploads\/2026\/01\/blockwise-transformer-attention-distributed-compute-1-768x674.png 768w, https:\/\/ipullrank.com\/wp-content\/uploads\/2026\/01\/blockwise-transformer-attention-distributed-compute-1.png 1065w\" sizes=\"(max-width: 800px) 100vw, 800px\" \/>\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-2de5c48 elementor-widget elementor-widget-text-editor\" data-id=\"2de5c48\" data-element_type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p><span style=\"font-weight: 400;\">Reviewing the state of the art, we find that Berkeley\u2019s <\/span><a 
href=\"https:\/\/medium.com\/@ignacio.de.gregorio.noblejas\/is-this-the-secret-to-googles-success-over-chatgpt-b2a545f39ad5\"><span style=\"font-weight: 400;\">Ring Attention<\/span><\/a><span style=\"font-weight: 400;\"> demonstrates how extremely long sequences can be processed by breaking them into rotating segments, where each segment attends locally while passing the summarized state forward. In that structure, the model does not need to see everything at once. It needs to preserve meaning across time. Systems like this rely on the continuity of information within a segment. By structuring content into atomic passages, you ensure each &#8220;rotating segment&#8221; contains a complete, unfragmented unit of meaning.<\/span><\/p>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-4f384bc elementor-widget elementor-widget-image\" data-id=\"4f384bc\" data-element_type=\"widget\" data-widget_type=\"image.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t<img loading=\"lazy\" decoding=\"async\" width=\"800\" height=\"573\" src=\"https:\/\/ipullrank.com\/wp-content\/uploads\/2026\/01\/infini-attention-compressive-memory-linear-attention-1024x734.png\" class=\"attachment-large size-large wp-image-20834\" alt=\"Infini-attention architecture showing compressive memory and linear attention for processing long or infinite context.\" srcset=\"https:\/\/ipullrank.com\/wp-content\/uploads\/2026\/01\/infini-attention-compressive-memory-linear-attention-1024x734.png 1024w, https:\/\/ipullrank.com\/wp-content\/uploads\/2026\/01\/infini-attention-compressive-memory-linear-attention-300x215.png 300w, https:\/\/ipullrank.com\/wp-content\/uploads\/2026\/01\/infini-attention-compressive-memory-linear-attention-768x551.png 768w, https:\/\/ipullrank.com\/wp-content\/uploads\/2026\/01\/infini-attention-compressive-memory-linear-attention.png 1075w\" sizes=\"(max-width: 800px) 100vw, 800px\" 
\/>\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-9138ea0 elementor-widget elementor-widget-text-editor\" data-id=\"9138ea0\" data-element_type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p><a href=\"https:\/\/arxiv.org\/abs\/2404.07143\"><span style=\"font-weight: 400;\">\u201cLeave No Context Behind: Efficient Infinite Context Transformers with Infini-attention\u201d<\/span><\/a><span style=\"font-weight: 400;\"> formalizes this further by introducing compressive memory that allows models to retain and reuse information far beyond a fixed context window. You can\u2019t compress a mess without losing the message. Atomic, legible passages act as high-fidelity signals that survive the compression process, ensuring your information is correctly retrieved later.\u00a0<\/span><\/p>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-de5ba50 elementor-widget elementor-widget-image\" data-id=\"de5ba50\" data-element_type=\"widget\" data-widget_type=\"image.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t<img loading=\"lazy\" decoding=\"async\" width=\"800\" height=\"613\" src=\"https:\/\/ipullrank.com\/wp-content\/uploads\/2026\/01\/llm-memory-tree-construction-navigation-1024x785.png\" class=\"attachment-large size-large wp-image-20836\" alt=\"\" srcset=\"https:\/\/ipullrank.com\/wp-content\/uploads\/2026\/01\/llm-memory-tree-construction-navigation-1024x785.png 1024w, https:\/\/ipullrank.com\/wp-content\/uploads\/2026\/01\/llm-memory-tree-construction-navigation-300x230.png 300w, https:\/\/ipullrank.com\/wp-content\/uploads\/2026\/01\/llm-memory-tree-construction-navigation-768x588.png 768w, https:\/\/ipullrank.com\/wp-content\/uploads\/2026\/01\/llm-memory-tree-construction-navigation.png 1535w\" sizes=\"(max-width: 800px) 100vw, 
800px\" \/>\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-8d918e7 elementor-widget elementor-widget-text-editor\" data-id=\"8d918e7\" data-element_type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p><span style=\"font-weight: 400;\">Meta\u2019s <\/span><a href=\"https:\/\/arxiv.org\/abs\/2310.05029\"><span style=\"font-weight: 400;\">MemWalker<\/span><\/a><span style=\"font-weight: 400;\"> pushes in the same direction by organizing content into memory trees that can be traversed, revisited, and updated. Structure provides the branches. By defining clear boundaries and semantic anchors, you build a &#8220;map&#8221; that helps the agent navigate and reconstruct the mental model of your information.\u00a0<\/span><\/p><p><span style=\"font-weight: 400;\">These approaches make Google\u2019s intent clear. The goal is not just larger windows. It is durable, near-infinite context.<\/span><\/p><p><i><span style=\"font-weight: 400;\">(sidebar: Those last 4 paragraphs were originally a single paragraph. I split them up while editing to isolate each specific idea and align them with the images from the papers. That\u2019s an example of chunking in action.)<\/span><\/i><\/p><p><span style=\"font-weight: 400;\">MIT\u2019s work on <\/span><a href=\"https:\/\/alexzhang13.github.io\/blog\/2025\/rlm\/\"><span style=\"font-weight: 400;\">Recursive Language Models<\/span><\/a><span style=\"font-weight: 400;\"> reaches the same destination through a different path. Rather than expanding context directly, RLMs decompose long inputs into smaller units and recursively invoke the model over the most relevant chunks. In effect, the model reasons over content iteratively, revisiting and recombining passages as needed. This reinforces the same reality. Passages are the unit of interaction. 
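To make that recursive pattern concrete, here is a toy sketch of recursive chunk selection. Simple word overlap stands in for a real model or embedding call, and the function names, split heuristic, and thresholds are illustrative only, not taken from the RLM work:

```python
# Toy sketch of RLM-style recursive retrieval: split a long text at a
# sentence boundary, score each half against the query, and recurse into
# the more relevant half. Word overlap stands in for a real model or
# embedding call; all names and thresholds here are illustrative.

def score(query, chunk):
    # Fraction of query words that appear in the chunk.
    q = set(query.lower().split())
    c = set(chunk.lower().replace('.', ' ').split())
    return len(q & c) / max(len(q), 1)

def recursive_select(query, text, min_len=80):
    # Base case: the passage is small enough to hand to the model whole.
    if len(text) <= min_len:
        return text
    # Toy decomposition: split near the middle at a sentence boundary.
    mid = text.rfind('. ', 0, len(text) // 2 + 40)
    if mid == -1:
        return text
    left, right = text[: mid + 1], text[mid + 2 :]
    best = max((left, right), key=lambda chunk: score(query, chunk))
    return recursive_select(query, best, min_len)

doc = ('Machine learning models learn patterns from data. '
       'Data privacy laws protect personal information from misuse.')
print(recursive_select('data privacy', doc))
```

In this toy, a query like [data privacy] recursively narrows a two-topic text down to the privacy passage alone. That is why self-contained passages win in this regime: they are what the recursion ultimately returns.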
<\/span><\/p>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-c36434c elementor-widget elementor-widget-image\" data-id=\"c36434c\" data-element_type=\"widget\" data-widget_type=\"image.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t<img loading=\"lazy\" decoding=\"async\" width=\"1812\" height=\"989\" src=\"https:\/\/ipullrank.com\/wp-content\/uploads\/2026\/01\/mixture-of-recursions-token-routing-depth.png\" class=\"attachment-full size-full wp-image-20837\" alt=\"Mixture-of-Recursions model showing token-level routing, recursion depth, and conditional computation across layers\" srcset=\"https:\/\/ipullrank.com\/wp-content\/uploads\/2026\/01\/mixture-of-recursions-token-routing-depth.png 1812w, https:\/\/ipullrank.com\/wp-content\/uploads\/2026\/01\/mixture-of-recursions-token-routing-depth-300x164.png 300w, https:\/\/ipullrank.com\/wp-content\/uploads\/2026\/01\/mixture-of-recursions-token-routing-depth-1024x559.png 1024w, https:\/\/ipullrank.com\/wp-content\/uploads\/2026\/01\/mixture-of-recursions-token-routing-depth-768x419.png 768w, https:\/\/ipullrank.com\/wp-content\/uploads\/2026\/01\/mixture-of-recursions-token-routing-depth-1536x838.png 1536w\" sizes=\"(max-width: 1812px) 100vw, 1812px\" \/>\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-344998e elementor-widget elementor-widget-text-editor\" data-id=\"344998e\" data-element_type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p><span style=\"font-weight: 400;\">DeepMind\u2019s Mixture of Recursions (MoR) pushes this idea into the architecture itself. Instead of a fixed depth of computation, tokens are routed through different numbers of recursive steps. Some content is processed shallowly. Other content is revisited repeatedly. 
This is adaptive reasoning at the token level, and it further removes any illusion that content is consumed linearly or holistically. What matters is which pieces survive repeated passes through the system.<\/span><\/p><p><span style=\"font-weight: 400;\">A common rebuttal is that we are entering an era of \u201cInfinite Context.\u201d With models like Gemini 3 Pro offering a 1-million-token context window, why bother chunking when the model can ingest the whole book?<\/span><\/p><p><span style=\"font-weight: 400;\">The answer lies in inference cost and reasoning depth. The MoR paper reveals that not every token needs the same amount of \u201cthinking.\u201d In an agent-shaped web, computation is the new scarcity. Well-structured, atomic content allows the model&#8217;s &#8216;router&#8217; to identify meaning quickly and exit the recursive loop early. Brute-forcing an unstructured 2-million-token wall of text is computationally expensive and prone to &#8216;context rot.&#8217; If you want an agent to pick your content over a competitor\u2019s, you should make it the path of least resistance. 
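Being picked is ultimately a geometry problem. Here is a minimal sketch, with hand-built toy vectors standing in for real embeddings (the axes and values are purely illustrative), of why a focused chunk beats a mixed-topic chunk against each individual query:

```python
from math import sqrt

# Toy embeddings over hypothetical axes [ml, privacy, law, code].
# Real embeddings come from a model; these hand-built vectors only
# illustrate the geometry of single-topic vs mixed-topic chunks.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm

query_ml      = [1.0, 0.0, 0.0, 0.2]
query_privacy = [0.0, 1.0, 0.3, 0.0]

chunk_ml      = [0.9, 0.1, 0.0, 0.3]  # focused on machine learning
chunk_privacy = [0.1, 0.9, 0.4, 0.0]  # focused on data privacy
# A chunk covering both topics sits roughly at their average:
chunk_mixed = [(a + b) / 2 for a, b in zip(chunk_ml, chunk_privacy)]

# Splitting the mixed chunk improves relevance to each individual query.
assert cosine(query_ml, chunk_ml) > cosine(query_ml, chunk_mixed)
assert cosine(query_privacy, chunk_privacy) > cosine(query_privacy, chunk_mixed)
```

Averaging two topics drags the mixed vector away from both queries, which is the same directional effect the BubbaChunk measurements above show with real embeddings.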
You don\u2019t want to be just readable, but computationally efficient to digest.<\/span><\/p>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-3709274 elementor-widget elementor-widget-image\" data-id=\"3709274\" data-element_type=\"widget\" data-widget_type=\"image.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t<img loading=\"lazy\" decoding=\"async\" width=\"1812\" height=\"782\" src=\"https:\/\/ipullrank.com\/wp-content\/uploads\/2026\/01\/hope-architecture-vs-transformers-nested-deep-learning.png\" class=\"attachment-full size-full wp-image-20838\" alt=\"Comparison of the HOPE architecture and standard transformers across nested and deep learning configurations.\" srcset=\"https:\/\/ipullrank.com\/wp-content\/uploads\/2026\/01\/hope-architecture-vs-transformers-nested-deep-learning.png 1812w, https:\/\/ipullrank.com\/wp-content\/uploads\/2026\/01\/hope-architecture-vs-transformers-nested-deep-learning-300x129.png 300w, https:\/\/ipullrank.com\/wp-content\/uploads\/2026\/01\/hope-architecture-vs-transformers-nested-deep-learning-1024x442.png 1024w, https:\/\/ipullrank.com\/wp-content\/uploads\/2026\/01\/hope-architecture-vs-transformers-nested-deep-learning-768x331.png 768w, https:\/\/ipullrank.com\/wp-content\/uploads\/2026\/01\/hope-architecture-vs-transformers-nested-deep-learning-1536x663.png 1536w\" sizes=\"(max-width: 1812px) 100vw, 1812px\" \/>\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-418cd1c elementor-widget elementor-widget-text-editor\" data-id=\"418cd1c\" data-element_type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p><span style=\"font-weight: 400;\">At the bleeding edge, Google\u2019s <\/span><a 
href=\"https:\/\/research.google\/blog\/introducing-nested-learning-a-new-ml-paradigm-for-continual-learning\/\"><span style=\"font-weight: 400;\">Nested Learning<\/span><\/a><span style=\"font-weight: 400;\"> moves beyond retrieval entirely. With the HOPE architecture, passages are no longer just fetched as context. They are used as signals for \u201cmemory infusion,\u201d updating the model\u2019s inner loop. This is where control erodes most clearly. Once content moves from retrieval into synthesis and memory update, our influence largely ends. Just as we can influence how we rank for synthetic queries but not how answers are composed, we can influence which passages are legible and extractable, but not how they are ultimately weighted, combined, or remembered.<\/span><\/p><p><span style=\"font-weight: 400;\">In this environment, atomic legibility is the only survival strategy. If a passage isn&#8217;t self-contained (meaning it lacks its own entity, context, and claim) it fails to \u201cinfuse\u201d correctly. It becomes noisy data. Just as Infini-attention relies on \u201ccompressive memory\u201d to store long-term state, your content must be compressible. You cannot compress a mess without losing the message. Each chunk must stand as a standalone signal so that when the agent tears the binding off so to speak, the individual page survives the transition from retrieval to synthesis.<\/span><\/p><p><span style=\"font-weight: 400;\">Furthermore, the shift to an agent-shaped web isn&#8217;t limited to text. Agentic systems are increasingly multimodal, needing to reconcile text with images, charts, and tables. Without layout-aware structure, these relationships disintegrate during the retrieval process. By defining clear boundaries and semantic anchors, we aren&#8217;t just helping the model read; we are helping it reconstruct the mental model of the information. 
Structure is the glue that ensures a chart and its context remain unified when an agent retrieves them from a near-infinite context window.<\/span><\/p><p><span style=\"font-weight: 400;\">None of this weakens the case for structure. It sharpens it. In every one of these systems, passages remain the atomic unit of meaning. Whether through attention, memory, or recursion, models operate on chunks, not pages.\u00a0<\/span><\/p><p><span style=\"font-weight: 400;\">Google still wants us to produce books: pages that can host ads, preserve attribution, and sustain the economics of the open web. Agentic systems read differently. They tear the binding off the book, ignore the table of contents, and pull out only the pages and paragraphs they need, sometimes revisiting them again and again. In that world, structure is no longer about presentation. It is about making meaning legible at the passage level.<\/span><\/p><p><span style=\"font-weight: 400;\">Across every paradigm, from the first Google-shaped web to the looming agent-shaped web, the best we can do remains the same. But the &#8216;why&#8217; has changed. We are no longer just formatting for &#8216;skimmability&#8217; or &#8216;dwell time.&#8217; We are formatting for Programmatic Legibility. We are building the API of meaning. By designing content so each chunk stands on its own with clear signals, we aren&#8217;t performing a workaround. 
We are ensuring our information survives the recursive, synthetic, and agentic loops that are now defining the web.<\/span><\/p>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-8d9d84e elementor-widget elementor-widget-heading\" data-id=\"8d9d84e\" data-element_type=\"widget\" data-widget_type=\"heading.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t<h2 class=\"elementor-heading-title elementor-size-default\">We\u2019ll Keep Being the Signal through the Noise\n<\/h2>\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-6db045e elementor-widget elementor-widget-text-editor\" data-id=\"6db045e\" data-element_type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p><span style=\"font-weight: 400;\">There&#8217;s a lot of confusion right now as search and generative AI continue to blend. There are also a lot of people who don\u2019t understand the nuances so they keep looking to shoehorn the changes into what they already do and know so they can feel superior or at least relevant.\u00a0<\/span><\/p><p><span style=\"font-weight: 400;\">This also makes it difficult for the community because the only reliable sources of information come from reading patents, white papers, playing with the platform\u2019s public APIs, and then experimenting to see what works. Not everyone is capable of those things, nor do they have the time or wherewithal, so they do what I said above.<\/span><\/p><p><span style=\"font-weight: 400;\">It&#8217;s unfortunate that Google continues to want to play the FUD game and contradict what we can see with our own eyes in their research, patents, and in how their systems react to changes. That behavior further reinforces that we are not partners in making the world&#8217;s information accessible. 
We are the unpaid extension of their workforce.<\/span><\/p><p><span style=\"font-weight: 400;\">For these reasons, <\/span><a href=\"https:\/\/ipullrank.com\/\"><span style=\"font-weight: 400;\">iPullRank<\/span><\/a><span style=\"font-weight: 400;\"> will continue to do the work and the R&amp;D and share what really works and why. Our <\/span><a href=\"https:\/\/ipullrank.com\/ai-search-manual\"><span style=\"font-weight: 400;\">AI Search Manual<\/span><\/a><span style=\"font-weight: 400;\"> is an example of that. We\u2019ll continue to support everyone striving to build in this new world and this will be one of the threads we continue at SEO Week in April.<\/span><\/p><p><span style=\"font-weight: 400;\">So, get your ticket to <\/span><a href=\"https:\/\/www.seoweek.org\"><span style=\"font-weight: 400;\">SEO Week<\/span><\/a><span style=\"font-weight: 400;\"> and hear from the sharpest minds on what&#8217;s actually working for AI Search.<\/span><\/p>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t<div class=\"elementor-element elementor-element-891bacc e-flex e-con-boxed e-con e-parent\" data-id=\"891bacc\" data-element_type=\"container\">\n\t\t\t\t\t<div class=\"e-con-inner\">\n\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t<div class=\"elementor-element elementor-element-b9b2bd5 e-flex e-con-boxed e-con e-parent\" data-id=\"b9b2bd5\" data-element_type=\"container\">\n\t\t\t\t\t<div class=\"e-con-inner\">\n\t\t<div class=\"elementor-element elementor-element-f37e39e e-con-full e-flex e-con e-child\" data-id=\"f37e39e\" data-element_type=\"container\" data-settings=\"{&quot;background_background&quot;:&quot;classic&quot;}\">\n\t\t\t\t<\/div>\n\t\t<div class=\"elementor-element elementor-element-5dfe742 e-con-full e-flex e-con e-child\" data-id=\"5dfe742\" data-element_type=\"container\">\n\t\t\t\t<div class=\"elementor-element elementor-element-1ac9476 elementor-widget elementor-widget-heading\" data-id=\"1ac9476\" 
data-element_type=\"widget\" data-widget_type=\"heading.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t<h6 class=\"elementor-heading-title elementor-size-default\">Want to find out about how Relevance Engineering can help your business?<\/h6>\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-3ef446f elementor-widget elementor-widget-heading\" data-id=\"3ef446f\" data-element_type=\"widget\" data-widget_type=\"heading.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t<h5 class=\"elementor-heading-title elementor-size-default\"><a href=\"https:\/\/ipullrank.com\/relevance-engineering-at-scale\" target=\"_blank\">Learn about iPullRank's Relevance Engineering Services<\/a><\/h5>\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-25b8720 elementor-widget elementor-widget-button\" data-id=\"25b8720\" data-element_type=\"widget\" data-widget_type=\"button.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<div class=\"elementor-button-wrapper\">\n\t\t\t\t\t<a class=\"elementor-button elementor-button-link elementor-size-sm\" href=\"https:\/\/ipullrank.com\/services\/relevance-engineering\" target=\"_blank\">\n\t\t\t\t\t\t<span class=\"elementor-button-content-wrapper\">\n\t\t\t\t\t\t<span class=\"elementor-button-icon\">\n\t\t\t\t<svg xmlns=\"http:\/\/www.w3.org\/2000\/svg\" width=\"25\" height=\"8\" viewBox=\"0 0 25 8\" fill=\"none\"><path id=\"Arrow 1\" d=\"M24.3536 4.20609C24.5488 4.01083 24.5488 3.69425 24.3536 3.49899L21.1716 0.317005C20.9763 0.121743 20.6597 0.121743 20.4645 0.317005C20.2692 0.512267 20.2692 0.82885 20.4645 1.02411L23.2929 3.85254L20.4645 6.68097C20.2692 6.87623 20.2692 7.19281 20.4645 7.38807C20.6597 7.58334 20.9763 7.58334 21.1716 7.38807L24.3536 4.20609ZM0 4.35254H24V3.35254H0V4.35254Z\" 
fill=\"#6F6F6F\"><\/path><\/svg>\t\t\t<\/span>\n\t\t\t\t\t\t\t\t<\/span>\n\t\t\t\t\t<\/a>\n\t\t\t\t<\/div>\n\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t","protected":false},"excerpt":{"rendered":"<p>Back in 2011 when I first started writing SEO blog posts for Moz, despite their popularity I was writing walls of text because that was my nature. Then-CMO Jamie Steven instructed me to read Cyrus Shepard\u2019s 10 Super Easy SEO Copywriting Tips for Improved Link Building for direction on how I should structure what I [&hellip;]<\/p>\n","protected":false},"author":3,"featured_media":20839,"comment_status":"open","ping_status":"open","sticky":false,"template":"elementor_theme","format":"standard","meta":{"_acf_changed":false,"content-type":"","footnotes":""},"categories":[260,26],"tags":[],"diagnosis-deliverable":[],"class_list":["post-20799","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-relevance-engineering","category-seo"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v25.5 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Moving from a Google-shaped Web to an Agent-shaped Web: A Refutation of Misinformation about Chunking -<\/title>\n<meta name=\"description\" content=\"Refuting \u201cdon\u2019t chunk\u201d myths, Mike King explains how RAG and AI agents retrieve passages, not pages, and why structured, user-first content wins in AI search.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/ipullrank.com\/misinformation-about-chunking\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Moving from a Google-shaped Web to an Agent-shaped Web: A Refutation of Misinformation about Chunking -\" \/>\n<meta 
property=\"og:description\" content=\"Refuting \u201cdon\u2019t chunk\u201d myths, Mike King explains how RAG and AI agents retrieve passages, not pages, and why structured, user-first content wins in AI search.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/ipullrank.com\/misinformation-about-chunking\" \/>\n<meta property=\"og:site_name\" content=\"iPullRank\" \/>\n<meta property=\"article:published_time\" content=\"2026-01-15T12:00:00+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2026-01-26T22:11:07+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/ipullrank.com\/wp-content\/uploads\/2026\/01\/Danny-says-not-to-chunk.png\" \/>\n\t<meta property=\"og:image:width\" content=\"1398\" \/>\n\t<meta property=\"og:image:height\" content=\"800\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/png\" \/>\n<meta name=\"author\" content=\"Mike King\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@ipullrankagency\" \/>\n<meta name=\"twitter:site\" content=\"@ipullrankagency\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Mike King\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"19 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/ipullrank.com\/misinformation-about-chunking#article\",\"isPartOf\":{\"@id\":\"https:\/\/ipullrank.com\/misinformation-about-chunking\"},\"author\":{\"name\":\"Mike King\",\"@id\":\"https:\/\/ipullrank.com\/#\/schema\/person\/82831a4b9f4b8be81d5a9bfed4cb9b20\"},\"headline\":\"Moving from a Google-shaped Web to an Agent-shaped Web: A Refutation of Misinformation about Chunking\",\"datePublished\":\"2026-01-15T12:00:00+00:00\",\"dateModified\":\"2026-01-26T22:11:07+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/ipullrank.com\/misinformation-about-chunking\"},\"wordCount\":4069,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\/\/ipullrank.com\/#organization\"},\"image\":{\"@id\":\"https:\/\/ipullrank.com\/misinformation-about-chunking#primaryimage\"},\"thumbnailUrl\":\"https:\/\/ipullrank.com\/wp-content\/uploads\/2026\/01\/Danny-says-not-to-chunk.png\",\"articleSection\":[\"Relevance Engineering\",\"SEO\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\/\/ipullrank.com\/misinformation-about-chunking#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/ipullrank.com\/misinformation-about-chunking\",\"url\":\"https:\/\/ipullrank.com\/misinformation-about-chunking\",\"name\":\"Moving from a Google-shaped Web to an Agent-shaped Web: A Refutation of Misinformation about Chunking 
-\",\"isPartOf\":{\"@id\":\"https:\/\/ipullrank.com\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/ipullrank.com\/misinformation-about-chunking#primaryimage\"},\"image\":{\"@id\":\"https:\/\/ipullrank.com\/misinformation-about-chunking#primaryimage\"},\"thumbnailUrl\":\"https:\/\/ipullrank.com\/wp-content\/uploads\/2026\/01\/Danny-says-not-to-chunk.png\",\"datePublished\":\"2026-01-15T12:00:00+00:00\",\"dateModified\":\"2026-01-26T22:11:07+00:00\",\"description\":\"Refuting \u201cdon\u2019t chunk\u201d myths, Mike King explains how RAG and AI agents retrieve passages, not pages, and why structured, user-first content wins in AI search.\",\"breadcrumb\":{\"@id\":\"https:\/\/ipullrank.com\/misinformation-about-chunking#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/ipullrank.com\/misinformation-about-chunking\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/ipullrank.com\/misinformation-about-chunking#primaryimage\",\"url\":\"https:\/\/ipullrank.com\/wp-content\/uploads\/2026\/01\/Danny-says-not-to-chunk.png\",\"contentUrl\":\"https:\/\/ipullrank.com\/wp-content\/uploads\/2026\/01\/Danny-says-not-to-chunk.png\",\"width\":1398,\"height\":800},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/ipullrank.com\/misinformation-about-chunking#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/ipullrank.com\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Moving from a Google-shaped Web to an Agent-shaped Web: A Refutation of Misinformation about Chunking\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/ipullrank.com\/#website\",\"url\":\"https:\/\/ipullrank.com\/\",\"name\":\"iPullRank\",\"description\":\"Digital Marketing Agency in 
NYC\",\"publisher\":{\"@id\":\"https:\/\/ipullrank.com\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/ipullrank.com\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/ipullrank.com\/#organization\",\"name\":\"iPullRank\",\"url\":\"https:\/\/ipullrank.com\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/ipullrank.com\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/ipullrank.com\/wp-content\/uploads\/2025\/03\/Logo_-_Layers.svg\",\"contentUrl\":\"https:\/\/ipullrank.com\/wp-content\/uploads\/2025\/03\/Logo_-_Layers.svg\",\"width\":177,\"height\":36,\"caption\":\"iPullRank\"},\"image\":{\"@id\":\"https:\/\/ipullrank.com\/#\/schema\/logo\/image\/\"},\"sameAs\":[\"https:\/\/x.com\/ipullrankagency\",\"https:\/\/www.linkedin.com\/company\/ipullrank\/\",\"https:\/\/www.youtube.com\/@iPullRankSEO\"]},{\"@type\":\"Person\",\"@id\":\"https:\/\/ipullrank.com\/#\/schema\/person\/82831a4b9f4b8be81d5a9bfed4cb9b20\",\"name\":\"Mike King\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/ipullrank.com\/#\/schema\/person\/image\/\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/d57e62b40de6db99771f85cbce3ab1d29071b8cd0d643c8dcf2fc55818e1769f?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/d57e62b40de6db99771f85cbce3ab1d29071b8cd0d643c8dcf2fc55818e1769f?s=96&d=mm&r=g\",\"caption\":\"Mike King\"},\"description\":\"Mike King is the Founder and CEO of iPullRank. Deeply technical and highly creative, Mike has helped generate over $4B in revenue for his clients. 
A rapper and recovering big agency guy, Mike's greatest clients are his two daughters: Zora and Glory.\",\"url\":\"https:\/\/ipullrank.com\/author\/ipullrank\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Moving from a Google-shaped Web to an Agent-shaped Web: A Refutation of Misinformation about Chunking -","description":"Refuting \u201cdon\u2019t chunk\u201d myths, Mike King explains how RAG and AI agents retrieve passages, not pages, and why structured, user-first content wins in AI search.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/ipullrank.com\/misinformation-about-chunking","og_locale":"en_US","og_type":"article","og_title":"Moving from a Google-shaped Web to an Agent-shaped Web: A Refutation of Misinformation about Chunking -","og_description":"Refuting \u201cdon\u2019t chunk\u201d myths, Mike King explains how RAG and AI agents retrieve passages, not pages, and why structured, user-first content wins in AI search.","og_url":"https:\/\/ipullrank.com\/misinformation-about-chunking","og_site_name":"iPullRank","article_published_time":"2026-01-15T12:00:00+00:00","article_modified_time":"2026-01-26T22:11:07+00:00","og_image":[{"width":1398,"height":800,"url":"https:\/\/ipullrank.com\/wp-content\/uploads\/2026\/01\/Danny-says-not-to-chunk.png","type":"image\/png"}],"author":"Mike King","twitter_card":"summary_large_image","twitter_creator":"@ipullrankagency","twitter_site":"@ipullrankagency","twitter_misc":{"Written by":"Mike King","Est. 
reading time":"19 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/ipullrank.com\/misinformation-about-chunking#article","isPartOf":{"@id":"https:\/\/ipullrank.com\/misinformation-about-chunking"},"author":{"name":"Mike King","@id":"https:\/\/ipullrank.com\/#\/schema\/person\/82831a4b9f4b8be81d5a9bfed4cb9b20"},"headline":"Moving from a Google-shaped Web to an Agent-shaped Web: A Refutation of Misinformation about Chunking","datePublished":"2026-01-15T12:00:00+00:00","dateModified":"2026-01-26T22:11:07+00:00","mainEntityOfPage":{"@id":"https:\/\/ipullrank.com\/misinformation-about-chunking"},"wordCount":4069,"commentCount":0,"publisher":{"@id":"https:\/\/ipullrank.com\/#organization"},"image":{"@id":"https:\/\/ipullrank.com\/misinformation-about-chunking#primaryimage"},"thumbnailUrl":"https:\/\/ipullrank.com\/wp-content\/uploads\/2026\/01\/Danny-says-not-to-chunk.png","articleSection":["Relevance Engineering","SEO"],"inLanguage":"en-US","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/ipullrank.com\/misinformation-about-chunking#respond"]}]},{"@type":"WebPage","@id":"https:\/\/ipullrank.com\/misinformation-about-chunking","url":"https:\/\/ipullrank.com\/misinformation-about-chunking","name":"Moving from a Google-shaped Web to an Agent-shaped Web: A Refutation of Misinformation about Chunking -","isPartOf":{"@id":"https:\/\/ipullrank.com\/#website"},"primaryImageOfPage":{"@id":"https:\/\/ipullrank.com\/misinformation-about-chunking#primaryimage"},"image":{"@id":"https:\/\/ipullrank.com\/misinformation-about-chunking#primaryimage"},"thumbnailUrl":"https:\/\/ipullrank.com\/wp-content\/uploads\/2026\/01\/Danny-says-not-to-chunk.png","datePublished":"2026-01-15T12:00:00+00:00","dateModified":"2026-01-26T22:11:07+00:00","description":"Refuting \u201cdon\u2019t chunk\u201d myths, Mike King explains how RAG and AI agents retrieve passages, not pages, and why structured, user-first content 
wins in AI search.","breadcrumb":{"@id":"https:\/\/ipullrank.com\/misinformation-about-chunking#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/ipullrank.com\/misinformation-about-chunking"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/ipullrank.com\/misinformation-about-chunking#primaryimage","url":"https:\/\/ipullrank.com\/wp-content\/uploads\/2026\/01\/Danny-says-not-to-chunk.png","contentUrl":"https:\/\/ipullrank.com\/wp-content\/uploads\/2026\/01\/Danny-says-not-to-chunk.png","width":1398,"height":800},{"@type":"BreadcrumbList","@id":"https:\/\/ipullrank.com\/misinformation-about-chunking#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/ipullrank.com\/"},{"@type":"ListItem","position":2,"name":"Moving from a Google-shaped Web to an Agent-shaped Web: A Refutation of Misinformation about Chunking"}]},{"@type":"WebSite","@id":"https:\/\/ipullrank.com\/#website","url":"https:\/\/ipullrank.com\/","name":"iPullRank","description":"Digital Marketing Agency in 
NYC","publisher":{"@id":"https:\/\/ipullrank.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/ipullrank.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/ipullrank.com\/#organization","name":"iPullRank","url":"https:\/\/ipullrank.com\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/ipullrank.com\/#\/schema\/logo\/image\/","url":"https:\/\/ipullrank.com\/wp-content\/uploads\/2025\/03\/Logo_-_Layers.svg","contentUrl":"https:\/\/ipullrank.com\/wp-content\/uploads\/2025\/03\/Logo_-_Layers.svg","width":177,"height":36,"caption":"iPullRank"},"image":{"@id":"https:\/\/ipullrank.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/x.com\/ipullrankagency","https:\/\/www.linkedin.com\/company\/ipullrank\/","https:\/\/www.youtube.com\/@iPullRankSEO"]},{"@type":"Person","@id":"https:\/\/ipullrank.com\/#\/schema\/person\/82831a4b9f4b8be81d5a9bfed4cb9b20","name":"Mike King","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/ipullrank.com\/#\/schema\/person\/image\/","url":"https:\/\/secure.gravatar.com\/avatar\/d57e62b40de6db99771f85cbce3ab1d29071b8cd0d643c8dcf2fc55818e1769f?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/d57e62b40de6db99771f85cbce3ab1d29071b8cd0d643c8dcf2fc55818e1769f?s=96&d=mm&r=g","caption":"Mike King"},"description":"Mike King is the Founder and CEO of iPullRank. Deeply technical and highly creative, Mike has helped generate over $4B in revenue for his clients. 
A rapper and recovering big agency guy, Mike's greatest clients are his two daughters: Zora and Glory.","url":"https:\/\/ipullrank.com\/author\/ipullrank"}]}},"_links":{"self":[{"href":"https:\/\/ipullrank.com\/wp-json\/wp\/v2\/posts\/20799","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/ipullrank.com\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/ipullrank.com\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/ipullrank.com\/wp-json\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/ipullrank.com\/wp-json\/wp\/v2\/comments?post=20799"}],"version-history":[{"count":0,"href":"https:\/\/ipullrank.com\/wp-json\/wp\/v2\/posts\/20799\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/ipullrank.com\/wp-json\/wp\/v2\/media\/20839"}],"wp:attachment":[{"href":"https:\/\/ipullrank.com\/wp-json\/wp\/v2\/media?parent=20799"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/ipullrank.com\/wp-json\/wp\/v2\/categories?post=20799"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/ipullrank.com\/wp-json\/wp\/v2\/tags?post=20799"},{"taxonomy":"diagnosis-deliverable","embeddable":true,"href":"https:\/\/ipullrank.com\/wp-json\/wp\/v2\/diagnosis-deliverable?post=20799"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}