NOTE: This is Part One of a post I drafted back in March of 2019!
My goal at the time was to shed light on what I saw as a lot of misconceptions around how people were viewing core updates. I think the entire paradigm of how we view core updates should be completely different from how we viewed past Google updates and penalties (like Penguin, Panda, etc.).
Even today, I often explain my views on core updates and people nod their heads, but I still see a lot of “off” views on core updates out in the world. The industry at large has gotten better since I first drafted the post below in March of 2019, but I think it is still well worth going into in detail.
So this post will:
- First, break down why overall search traffic is a misleading metric to use when analyzing core updates (and “visibility” from 3rd party tools is even worse)
- Help you make sense of the often elusive “average position” metric in Search Console (the most detailed explanation I’m aware of!) and how to think clearly about what “traffic” actually is
- Show why I believe it’s dangerous and inaccurate to think of core updates as having “winners” or “losers”
- Explain the difference between “algo EAT” and “human EAT” and why this matters for you and your site/content
Let’s begin… the below was mostly written in March 2019 (I made a few edits and noted some updates)
I should also note: this post goes against some things fellow SEOs, and even friends of mine, say. It’s really hard to publish something that may be contrary to what many people I greatly respect believe. My hope is that we can continue to have healthy, productive, and respectful conversations as an industry! And although some of my thoughts may differ, I greatly value and appreciate the input of ALL people and perspectives! DON’T be shy about sharing your views – I will always listen to and consider all of them.
—
The Problem With “Traffic” as an Algo Change Diagnostic Metric
What exactly is “traffic”? This seems like a simple question.
But BEFORE we discuss algo updates and traffic changes, we need to agree on what “traffic” is. After all, traffic is the key metric folks are using to tell the story of “winners” or “losers”.
That’s because “traffic” from a search engine… is not so straightforward.
In Google Analytics we can segment by Google/Organic and look at a daily trend line. But does that actually tell us what happened in an algorithm update?
This is a nutrition site that was “hit” on August 1st 2018, and “recovered” on March 12th 2019. Awesome, right?? The traffic looks great, but this tells us nothing about how or why they “recovered”.
So I want to attempt to break down exactly what “traffic” is.
You have to drill down into the most basic dataset, and the individual metrics, that contribute to “traffic”.
“Query Logs”… The Data You Never See
Consider this hypothetical “query log” (my made-up term) for the query “banana” over one day, against a page fruit.com/bananas:
“Query logs” would be the raw, building-block dataset behind all of the data that results in what we see in Search Console.
At this level of granularity, you can isolate metrics to their most specific level. Each time a human queries Google for “banana”, it creates a new row in the log with the corresponding data. At this level, the click-through rate can only be 100% or 0%.
THEN, the SUMS and AVERAGES of that data from the daily “query log” get rolled into what you’d see as one day of data in Search Console for one query.
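To make that roll-up concrete, here’s a tiny Python sketch of the hypothetical “query log” and how its rows might fold into one day of Search Console-style numbers for a single query. The field names and values are my own made-up illustration, not anything Google actually exposes:

```python
# Toy sketch: rolling a hypothetical per-search "query log" up into one day
# of Search Console-style data for a single query. All fields are made up.

query_log = [
    # one row per time a human searched "banana" and our page was shown
    {"query": "banana", "page": "fruit.com/bananas", "position": 4, "clicked": True},
    {"query": "banana", "page": "fruit.com/bananas", "position": 5, "clicked": False},
    {"query": "banana", "page": "fruit.com/bananas", "position": 4, "clicked": False},
    {"query": "banana", "page": "fruit.com/bananas", "position": 6, "clicked": True},
]

impressions = len(query_log)                              # SUM: one impression per row
clicks = sum(row["clicked"] for row in query_log)         # SUM: the rows where CTR was "100%"
ctr = clicks / impressions                                # AVERAGE of those 0%/100% rows
avg_position = sum(r["position"] for r in query_log) / impressions  # AVERAGE

print(f"banana | impressions={impressions} clicks={clicks} "
      f"ctr={ctr:.0%} avg_position={avg_position:.2f}")
# -> banana | impressions=4 clicks=2 ctr=50% avg_position=4.75
```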
These individual metrics are the seeds of what eventually becomes traffic (an output, or result, of these inputs).
“Searches” is a metric we don’t see, but I’m sure Google must have it (maybe it’s just “search volume” broken out per day). Searches are assumed to be the total number of times a query is searched on that day, although we might not register an impression for every one of them. I just think it’s good to keep this in mind.
The SUMS and AVERAGES of each row end up at the top of our Search Console report, above the chart, as the total numbers.
The above chart looks most like the “queries” report in Search Console:
Next, you can fold the query metrics up again, into what becomes the “pages” report in Search Console:
Keep in mind – impressions per page (when the report is set to one day) is the number of daily query impressions, across all queries, that the page registered. It’s the sum of the sums.
And further, by the time we see the average position or click-through rate for a page, they are averages of averages: every row in the query log is averaged per query, and then those query-level averages are averaged again across all of the page’s queries.
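Here’s another small Python sketch of that folding, from per-query daily rows up to a page-level row, and then to a single site-wide daily total (one “dot” on a traffic chart). Search Console’s actual math may differ (e.g. it may weight position by impressions); this only illustrates the layers of summing and averaging, with made-up numbers:

```python
# Toy sketch: folding daily per-query rows into a "pages"-style row, then
# into a single site-wide daily total. Numbers and field names are made up.

per_query = [  # one day of data, already rolled up per query (as in the sketch above)
    {"page": "fruit.com/bananas", "impressions": 400, "clicks": 40, "avg_position": 4.8},
    {"page": "fruit.com/bananas", "impressions": 100, "clicks": 2,  "avg_position": 12.0},
    {"page": "fruit.com/apples",  "impressions": 250, "clicks": 30, "avg_position": 3.1},
]

pages = {}
for row in per_query:
    p = pages.setdefault(row["page"], {"impressions": 0, "clicks": 0, "positions": []})
    p["impressions"] += row["impressions"]        # sum of the sums
    p["clicks"] += row["clicks"]                  # sum of the sums
    p["positions"].append(row["avg_position"])    # collect the per-query averages

for page, p in pages.items():
    avg_of_avgs = sum(p["positions"]) / len(p["positions"])   # average of averages
    print(page, p["impressions"], p["clicks"], round(avg_of_avgs, 1))

site_clicks_today = sum(p["clicks"] for p in pages.values())  # one "dot" of daily traffic
print("site total clicks for the day:", site_clicks_today)
```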
This all shows up in the Search Console “pages” tab:
And Then Finally We Get Traffic…
TRAFFIC, finally, is the result of rolling the metrics from all pages into a total for each day:
By the time we get “traffic” – all of these metrics have been summed and averaged THREE TIMES.
And to be more particular, traffic from a search engine is only registered if the page is loaded, the GA code fires, AND the referral string is from ‘google’. (This is one reason clicks and traffic can differ a little – although I generally consider them the ‘same thing’, with a margin of error.)
And only then can you have a standard traffic chart like below:
Each “dot” is a:
- a query log table…
- rolled into query metrics…
- rolled into page metrics…
- totaled for the day (and represented by “users” or “sessions” in GA, although similar to “clicks”)
Then a traffic trend is all of this, rolled into “dots”, with changes day by day.
So when search traffic changes, you can ONLY know why when you:
- Isolate the change to a specific metric (i.e.: did avg position go down, or was it impressions?), and then determine why that specific metric changed (impressions can change because of seasonal search volume; a position can change for literally hundreds of reasons)
- AND segment your data, usually down to pages or queries (a rough sketch of this kind of diagnosis follows below)
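Here’s a minimal sketch of what that diagnosis can look like with daily Search Console data, comparing a pre-update window to a post-update window per query. The file name, column names, and update date are assumptions for illustration (e.g. data pulled via the Search Analytics API or a CSV export):

```python
# Minimal sketch: for each query, compare a window before an update vs. after,
# and see WHICH metric actually moved. Column/file names are assumptions.
import pandas as pd

df = pd.read_csv("gsc_daily_by_query.csv", parse_dates=["date"])
# expected columns: date, query, clicks, impressions, ctr, position

update_date = "2019-03-12"
pre = df[df["date"] < update_date].groupby("query").agg(
    clicks=("clicks", "sum"), impressions=("impressions", "sum"), position=("position", "mean"))
post = df[df["date"] >= update_date].groupby("query").agg(
    clicks=("clicks", "sum"), impressions=("impressions", "sum"), position=("position", "mean"))

delta = post.join(pre, lsuffix="_post", rsuffix="_pre")
delta["clicks_change"] = delta["clicks_post"] - delta["clicks_pre"]
delta["impressions_change"] = delta["impressions_post"] - delta["impressions_pre"]
delta["position_change"] = delta["position_post"] - delta["position_pre"]

# Queries that lost clicks WITHOUT losing position point away from "rankings dropped"
# and toward impressions/demand or SERP-feature changes.
print(delta.sort_values("clicks_change").head(20))
```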
So why are we analyzing Google updates with traffic at the domain level?
2020 Update: I’ve seen people doing this less and less, so I think the collective understanding has evolved here, but it could still improve!
“Traffic” (or clicks) is the result of many individual metrics that can all fluctuate for several reasons.
And it’s especially problematic when that “visibility” data comes from third-party tools. I love using them for other purposes, but they do not provide an accurate or reliable way to analyze traffic changes.
The Need For Segmentation Deeper Than Just “Organic Traffic”
In addition to traffic as a metric being problematic, if you really want to see what’s going on, I think algo change analysis requires segmentation by many dimensions such as:
- Pages
- Page Types
- Query
- Query Types
- Device Type
At least those, for a start (a rough segmentation sketch follows below).
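As a rough sketch of what that segmentation might look like in practice, here’s one way to bucket Search Console rows by page type, query type, and device, and compare clicks before and after an update. The file name, columns, and the page-type/query-type rules are all made-up examples – yours would be specific to your site:

```python
# Rough sketch of segmenting a traffic change by dimension, not just "organic traffic".
# Assumes daily Search Console rows with page, query and device columns; the
# page-type / query-type rules below are invented examples.
import pandas as pd

df = pd.read_csv("gsc_daily_page_query_device.csv", parse_dates=["date"])
# expected columns: date, page, query, device, clicks, impressions, position

def page_type(url: str) -> str:
    if "/recipes/" in url:
        return "recipe"
    if "/blog/" in url:
        return "article"
    return "other"

def query_type(q: str) -> str:
    if "brandname" in q:                       # swap in your actual brand terms
        return "branded"
    if q.startswith(("how ", "what ", "why ")):
        return "informational"
    return "other"

df["page_type"] = df["page"].map(page_type)
df["query_type"] = df["query"].map(query_type)
df["period"] = (df["date"] >= "2019-03-12").map({True: "post", False: "pre"})

for dim in ["page_type", "query_type", "device"]:
    pivot = df.pivot_table(index=dim, columns="period", values="clicks", aggfunc="sum")
    pivot["change"] = pivot["post"] - pivot["pre"]
    print(pivot.sort_values("change"), "\n")
```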
Let me explain with another example. Here is the nutrition site which “lost” in August 2018 but “won” in March 2019:
This domain is a “winner” right?
I’d say that’s not quite right. They won on some query adjustments:
But “lost” on others:
They look like a winner when you don’t segment — but in reality just happened to improve on more queries than they lost.
A New(ish?*) Paradigm of Google Updates: There Are No “Winners” or “Losers”
*2020 Update: I think we have all collectively gotten better at this and started to shake off the old paradigm of “which sites won or lost” – so remember this was written in March of 2019, but I think it’s all still very worth reviewing! I WANT you all to be successful at evaluating core updates correctly. A domain “losing” implies there is some site-wide fix that will “fix” your traffic – core updates are not at all like Penguin, Panda, etc.
In this new paradigm, whole sites don’t win or lose. You can still, of course, show domains that lost more traffic than they gained or vice versa – but that doesn’t really tell us anything meaningful or actionable.
Queries, query types, pages, and page types are adjusted (and “win/lose”) – and your whole site may happen to “win” or “lose” when looking at it all in aggregate, depending on several complex factors.
Hypothetically, if core updates were domain-based and punitive, like updates of the past (Penguin, Panda, etc.), you’d see uniform, consistent drops across all pages, all rankings, and all keywords.
But that’s not what I see when analyzing several sites (remember this was back in 2019, but still true today)
And let me throw you for another loop and show the need for segmentation even more.
Here’s the data from one query on the site that “won” in March 2019 above:
They “lost” clicks on this keyword. But it wasn’t even the position that changed. Impressions dropped and click-through rate dropped slightly, yet they remained at position #1.
This could be because of seasonality, or Google simply deciding to show them less often at the top. I’d have to dig in even more to figure out exactly what’s happening.
And that’s my point – when you segment and pick the metrics apart down to the root cause, these things are not as simple as a domain winning or losing.
EAT and The Quality Raters Guidelines
And this brings me to EAT.
I think that because of past algo updates – like Panda, Penguin, and Ads Above the Fold – which were punitive in nature and seemed to hit domains at a site-wide level, we’ve been conditioned as SEOs to look for sitewide issues and fixes.
While I am a HUGE believer in improving your site as a whole – addressing things like architecture, page layout, UX, speed, accessibility, crawl efficiency, supplementary navigation – the list goes on – if you’re not segmenting by pages, queries, device type… you’re leaving a lot on the table.
If your content does not intentionally match a query/topic – if it is not aligned topically, structurally, in quality, and in intent – none of the other stuff matters. There are outlying situations (if you’re a very high authority site, you may have an easier time ranking with slightly off-topic content), but for most sites, content/query matching is THE most important thing.
Because these updates appear to happen at a query and page level, I do not think it’s totally accurate to only focus on possible sitewide issues.
What does this have to do with “EAT”?
Well, SEOs have defined EAT as a sitewide issue (agnostic of query type or page type). (2020 update! Less so now – I think more SEOs are now looking at “EAT” on a page-by-page basis, but it’s worth reiterating that I think it’s very specific to single pages and not sitewide.)
In other words, it is (had been) said that EAT looks at site-level “signals” while completely ignoring keyword and page segments. And this is misleading because – well, let’s go back to the nutrition site that “lost” in August 2018 and “won” in March 2019:
Awesome right?? This is a “recovery” screenshot worth tweeting about!
Except for one thing I didn’t tell you.
They had literally changed nothing since August 2018. In fact, if you were to analyze this site post-“recovery”, you’d still give it a pretty low “EAT” score (by Quality Raters Guidelines standards).
So what is going on? In the past, let’s say you got hit by Penguin. You generally had to fix something before seeing a recovery. Why not now?
Algo E-A-T vs Human E-A-T: For Better Rankings Focus on Algo E-A-T
Consider something.
What if EAT to the algorithm isn’t what we think it is?
If algorithms could easily score pages based upon the EAT factors in the Quality Guidelines, why would Google contract over 10,000 humans to do it manually for them?
What if Google algorithmically analyzes content factors for “EAT”? For example, an analysis of just the words and on-page content (like images). I’m no search engineer, but that seems way more efficient to me than writing an algo to go looking for BBB ratings or verify author credentials, etc. We know that efficiency (and saving costs) through algorithms that scale is hugely important. We also know that eliminating noisy signals is key. NLP and content analysis seem way easier at cost and scale than writing algorithms to do literally what the QRG describes.
For example – maybe experts just … create better content.
And how could Google easily determine how good content is? Well, for one, maybe when analyzed with NLP the content lands closer to the category Google expects for a query?
According to Google’s own natural language tool (er, the free widget version – some people got mad when I called it a tool), here’s the content category of the “winning” site above, for an article in which they rank above Healthline, WebMD, and many other “higher authority” and “higher EAT” sites:
And, here’s the content categorization of the Healthline article which does not rank as well:
Trust me, “home appliances” is not right at all, and “drink and beverage” is also off from “Nutrition”. Perhaps Healthline’s article is treated as lower “authority” and lower “expertise” simply because an NLP analysis of the words does not land it in the category Google expects to see for a high-quality piece of content for that query.
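You can run this kind of check yourself. Here’s a minimal sketch using the Cloud Natural Language API’s classifyText method (the programmatic counterpart to the widget above) – assuming you have the google-cloud-language Python client installed and Google Cloud credentials configured:

```python
# Minimal sketch: check a page's content category with the Cloud Natural Language
# API's classifyText method. Assumes `pip install google-cloud-language` and
# Google Cloud credentials configured in your environment.
from google.cloud import language_v1

def classify(text: str):
    client = language_v1.LanguageServiceClient()
    document = language_v1.Document(
        content=text, type_=language_v1.Document.Type.PLAIN_TEXT
    )
    response = client.classify_text(request={"document": document})
    return [(c.name, round(c.confidence, 2)) for c in response.categories]

article_body = open("my_article.txt").read()   # the plain text of your article
for name, confidence in classify(article_body):
    print(name, confidence)
# For a nutrition query, you'd hope to see a health/nutrition-related category,
# not something like "home appliances".
```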
What if, for “algo EAT”, Google got better at understanding the content – and either saw that it better matched what the user wants, or saw it as having more “expertise” and “authority” because a true expert – someone experienced, passionate, hard-working, or all three – creates content with certain characteristics that combine into “better” content? Algo EAT could, in part, be Google simply analyzing the plain words better with NLP. (Of course, “EAT” could also factor in links, off-site signals, brand search volume, and plenty of other signals.)
I shared these thoughts with AJ Kohn back in March of 2019 and he strongly agreed:
AJ had written what I consider to be one of the most forward-thinking, on-the-money, and important SEO blog posts in recent memory: Algo Analysis in the Age of Embeddings. In fact, I spent hours and hours studying it and the resources he references (including the Natural Language API), and it in large part inspired this post back in 2019 – and how I look at SEO in general.
Other On-page Content “Algo EAT” Signals?
And I think other pure content signals that could align with “Algo EAT” are:
- sentiment (does it match the expected sentiment that represents quality content in a given topical area)
- topical completeness (a term I use to describe how well you address the common themes, sub-topics, questions etc Google expects to find on a high-quality document about a given topic)
- clean/efficient writing (no “hard to read sentences” according to tools like Hemingway, which in my opinion makes the content easier for users and Google to “read” and understand; limiting fluff words, etc. – see the rough sketch after this list)
- document structure (do the topics flow in the structure Google and users expect, and do you answer the main query at the top?)
- intent matching (i.e.: NOT using transactional language on an informational page, or vice versa)
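As a crude illustration of the “clean/efficient writing” item above, here’s a Hemingway-style heuristic that flags long sentences and counts fluff words. The threshold and the word list are arbitrary choices of mine, not anything Google has published:

```python
# Crude sketch of one "clean/efficient writing" check: flag long, hard-to-read
# sentences and count fluff words. Threshold and word list are arbitrary.
import re

FLUFF = {"really", "very", "actually", "basically", "just", "in order to"}

def writing_report(text: str, hard_threshold: int = 30):
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    hard = [s for s in sentences if len(s.split()) > hard_threshold]
    fluff_hits = sum(text.lower().count(w) for w in FLUFF)
    return {
        "sentences": len(sentences),
        "hard_to_read": len(hard),
        "fluff_words": fluff_hits,
    }

print(writing_report(open("my_article.txt").read()))
```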
I have helped clients create and rank dozens, maybe hundreds, of pieces of content by just… surprise… creating really “good content” – content that is topically intentional, fills in topical gaps, shows expertise by being really well written, and so on.
Hopefully, this helps you start to think about what to focus on and ways to make the content itself better!
To Be Continued?
As mentioned, this was a post I had not finished, and clearly it just sort of ends abruptly.
I had sections analyzing Dr. Axe’s “recovery” and showing that it was actually architecture and internal linking updates that helped them recover… not content changes (in fact, they had made no content updates at all post-Medic Update and pre-“recovery”).
i.e.: Classifying all sentences by information type…
And since writing this post, I’ve gotten even more granular in my content approach with clients – for example, categorizing all sentences by their “information type”, let’s call it.
For example, if your article is about “mailchimp alternatives”, the opening sentence should be classifiable as an “alternative statement” – NOT as a general definition sentence about what Mailchimp is.
This site, ranking #1, does a good job:
However, this example below ranks on page two for “mailchimp alternatives”, despite having a high DA of 72 and being authoritative on email marketing. It does NOT stay on topic in its opening – the “info type” categories would be “statements about Mailchimp” and even just broad “statements about email marketing”.
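To show the idea (and only the idea), here’s a toy Python sketch that labels an opening sentence with a rough “information type” using keyword rules. A real approach would use proper NLP or embeddings; the categories and rules here are just my made-up illustration:

```python
# Toy illustration of the "information type" idea for the 'mailchimp alternatives'
# example: label an opening sentence with a rough category. Crude keyword rules,
# purely to show the concept.

def info_type(sentence: str) -> str:
    s = sentence.lower()
    if "alternative" in s or "competitor" in s:
        return "alternative statement"        # what a page targeting 'mailchimp alternatives' should open with
    if "mailchimp is" in s or "mailchimp was" in s:
        return "definition of Mailchimp"      # general background, off the core intent
    if "email marketing" in s:
        return "statement about email marketing"
    return "other / unclear"

good_opening = "Here are the 12 best Mailchimp alternatives we tested this year."
weak_opening = "Mailchimp is an email marketing platform founded in 2001."

print(info_type(good_opening))   # -> alternative statement
print(info_type(weak_opening))   # -> definition of Mailchimp
```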
That’s the idea — which we can go into a lot more depth on…
If you’re not quite following that, it’s because it would require a whole other post… which I’m happy to create if you want me to!
Leave a comment here or ping me on Twitter and let me know what you think: @dan_shure
And remember… I listen to ALL views and value ALL input. Just be nice, respectful, and open to a productive discussion, and I think we can ALL learn from one another.
April 24, 2020
Bill
Hi Dan,
It takes guts to contradict the industry experts, but I do agree with your ideas on how EAT could be measured, and I have used some of the tactics to rank a specific page. Great article!
May 7, 2020
Miron@ROD Koszalin
Yep, lots of good hints here. I can personally confirm that content is a huge part of SEO. The website I am optimizing now is still fresh and has very little content yet. With specific keywords it took a fairly long time to reach position 1; with a more general, shorter keyword, it looks like climbing Everest. It hasn’t even made it to page 1 (the top 10 positions) yet, and even dropped from the top of page 2 to page 3 today. Proper content, and I think the proper content/keyword relation, is maybe even the most important thing for Google. God bless and thanks for the interesting article.