SEO: Testing…Testing…And More Testing…

28 Aug 2016

In a recent blog post, Moz (the site formerly known as SEOmoz) followed up on its excellent 2013 Search Engine Ranking Factors study with a new analysis of the data gleaned in that study – this time examining Google SERPs across search volume and site type.

The blog post unveils some of this new analysis, digging deeper into the data obtained in the previous study, which used a broad keyword set (around 15,000 keywords) along with more than 100 different factors (such as links, anchor text, on-page factors, and social signals, among others) to capture Google’s overall search algorithm.

The blog goes on to address one of the “most frequently asked questions” that arose after publication of the earlier study, namely…

“Do you see any systematic differences in Google’s search results across search volume or topic category?”

It’s a fascinating and insightful blog post, and we encourage you to read it to learn more yourself. But what really stood out to us here at the Fang Digital Marketing blog about this post – and what got us pretty freaking excited to see – is that Moz was able to do some real testing here, primarily because of the massive amounts of data they have to work with in their own database. It also helped that the post’s author, in-house Moz data scientist Dr. Matt Peters, has a deep well of knowledge when it comes to all things SEO. He’s a good bit different from Dr. Eldon Tyrell, but Dr. Peters is still a serious force to be reckoned with in his own right.

Peters has published other outstanding articles and posts like this one over the years, in which he spells out exactly how he did the testing and what the results looked like. Additionally, he would be the first to bring up the “correlation does NOT imply causation” rule of statistics – a rule that is unfortunately all too often ignored, twisted, and abused in the realms of SEO, paid search, and digital marketing today.
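To see just how easy it is to be fooled on that last point, here’s a quick illustrative sketch in Python. The two series below are made up and generated completely independently of each other (nothing here comes from Moz’s data), yet they correlate almost perfectly, simply because both happen to trend upward over time:

```python
# Illustrative only: two series that both trend upward over time will
# correlate strongly even though neither causes the other.
import random

random.seed(42)

# Two independent, upward-trending series (say, a site's +1 counts and
# its rankings improving over the same months for unrelated reasons).
n = 24
series_a = [i + random.gauss(0, 2) for i in range(n)]
series_b = [0.5 * i + random.gauss(0, 2) for i in range(n)]

def pearson(xs, ys):
    """Plain Pearson correlation coefficient, no external libraries."""
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

print(f"correlation: {pearson(series_a, series_b):.2f}")
# Prints a value near 1.0 even though the two series were generated
# completely independently -- correlation, not causation.
```

Any two metrics that both drift in the same direction over time – rankings, +1 counts, traffic, you name it – will produce exactly this kind of impressive-looking correlation, with no causation anywhere in sight.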

At the end of the day, the word “test” is abused all too often – in both SEO and paid search circles. In many (most?) cases, people in our industry simply don’t do nearly enough testing to even get close to publishing credible results. Meanwhile, the good folks over at Moz are actually running enough tests to publish credible – as well as interesting and helpful – results, and they are the first to tell you that the data mostly proves we do not have all the answers.

That should be par for the course when it comes to statistical data reporting and analysis, of course. Alas, this is not always the case these days – certainly not among people, even some so-called “experts,” in our industry. No, too often we see SEOs coming back with some sort of “trick” that they “tested” on a single site…and declaring it a success because they got one page to jump in rank. Wow. Congratulations. Big deal.

In reality, it would take many hundreds – or possibly even thousands – of individual tests to prove most theories, especially a theory about a data set as large and complex as the entire Google index of sites.

Even if a test were repeated, say, a few hundred times, you can run into all sorts of issues if those tests aren’t done concurrently – especially when it comes to paid search, and especially if you’re dealing with something like a seasonal product or service.
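Here’s a hypothetical sketch of that seasonal trap, again in Python. Every number below is invented purely for illustration – the “change” shipped in week 5 does literally nothing, yet a naive before/after comparison crowns it a winner:

```python
# Hypothetical sketch: a naive before/after "test" on a seasonal product.
# All rates and volumes below are made up purely to illustrate the trap.
import random

random.seed(7)

def weekly_conversions(true_rate, visitors=10_000):
    """Simulate one week of conversions at a given true conversion rate."""
    return sum(1 for _ in range(visitors) if random.random() < true_rate)

# The true conversion rate rises every week purely due to seasonality;
# the "change" we ship in week 5 has no effect at all.
seasonal_rates = [0.010, 0.012, 0.014, 0.016, 0.018, 0.020, 0.022, 0.024]

before = [weekly_conversions(r) for r in seasonal_rates[:4]]  # weeks 1-4
after = [weekly_conversions(r) for r in seasonal_rates[4:]]   # weeks 5-8

print(f"avg weekly conversions before change: {sum(before) / 4:.0f}")
print(f"avg weekly conversions after change:  {sum(after) / 4:.0f}")
# The "after" average is far higher, so a naive tester declares victory --
# but the lift is entirely seasonal. A concurrent A/B split would show
# no difference at all.
```

Running the control and the variant concurrently as a proper A/B split is what separates a real test from a seasonality artifact.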

It’s important to remember, then, that proper testing involves many different factors – including establishing and using the proper sample size needed to obtain meaningful, real results.

It’s also important to remember that we are all human, and even the most well-intentioned and intelligent among us can make mistakes from time to time. The good folks at Moz are no different. They are not immune to mistakes, as was illustrated just days ago, when Google Webspam Czar Matt Cutts politely admonished them for publishing an article that linked Google+ “+1”s to better search rankings. Even though the article featured tons of data on the subject, it wasn’t really “true” or “right”…and Cutts had to step in and more or less tell them, “Hey, guys, pay attention here, please. I already said Google+ “+1” was NOT influencing search rankings.” This, then, was clearly a case of correlation not equaling causation. In fact, Cutts led with that very point in his rebuttal to the Moz “findings.”

We can’t tell you how to eliminate potential mistakes from your calculations and testing. But we can tell you that this recent example serves as a very powerful reminder of why all our SEO methods here at Fang Digital Marketing are based on things that come straight from the Google horse’s mouth…and not on any bag of “tricks” that might be circulating around SEO circles as the Hot New Thing Right Now.

And we can also give you a solid piece of advice on how to determine the proper sample size needed for your particular testing. Here at Fang Digital Marketing, we’re big fans of the free calculator from Creative Research Systems that allows you to determine the proper sample size for a test. Check it out right here to learn more…and make sure you’re testing properly.
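For the curious, here’s a minimal sketch in Python of the textbook sample-size formula that calculators like this one are typically built on – an assumption on our part, since we’re not reproducing Creative Research Systems’ actual code:

```python
# Minimal sketch of the standard sample-size formula for estimating a
# proportion, with the usual finite-population correction. This assumes
# the textbook formula, not any particular calculator's internals.
import math

def sample_size(confidence_z=1.96, margin_of_error=0.05, p=0.5,
                population=None):
    """Required sample size for a proportion estimate.

    confidence_z:    z-score for the confidence level (1.96 ~ 95%)
    margin_of_error: acceptable error, e.g. 0.05 for +/-5%
    p:               expected proportion; 0.5 is the worst case
    population:      finite population size, if known
    """
    ss = (confidence_z ** 2) * p * (1 - p) / (margin_of_error ** 2)
    if population is not None:
        # Finite-population correction for small populations.
        ss = ss / (1 + (ss - 1) / population)
    return math.ceil(ss)

# About 385 observations for 95% confidence and a +/-5% margin:
print(sample_size())
# A smaller, known universe (say, 2,000 keywords) shrinks the requirement:
print(sample_size(population=2_000))
```

Notice that worst-case assumption of p = 0.5: if you don’t know what to expect going in, that’s the honest default, and it demands the largest sample.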

So the next time you think you’ve discovered something, make sure you steer your browser on over to this site, and fire up your sample size calculator. And figure out how many more times you need to recreate that test before you publish.

Just remember to follow the old saying…

Testing…Testing…1…2…3.

And then keep going. And going. And going…

Oh and when you are finished, please do let us know. We’d love to take the time to read about another legitimate, insightful and well-tested study.
