
How HubSpot Ranked a Competitive Product Page in 3 Months (for a Product That Didn’t Exist Yet)


SEO is hard. SEO for product pages is even harder. SEO for product pages for products that don’t exist? Seemingly impossible.

However, with the right approach, it’s really not much harder than ranking top-of-the-funnel blog content. In fact, the process is scalable across multiple product pages, and you can get there through quality content creation (not just a high domain authority or thousands of pages of user-generated content).

At HubSpot, Scott Tousley and I took on the challenge of ranking product pages for products that didn’t yet exist. We did it through a traditional HubSpot content-heavy approach, leveraging what is known as our Pillar & Cluster model to feed product page SEO (more details on the model in a bit). We also heavily invested in content promotion and link building tactics, which anyone can do, regardless of resources or company size.

The results in only 3 months (October through December 2017) were solid. Here’s an example of our customer feedback software page ranking at number 3 (above companies with live products):

In this article, I’ll walk through the higher-level strategy behind the approach we chose, and then dive into tactical tips you can apply at whatever scale your business is operating.

Product Page SEO: An Indirect Content-Based Approach

It’s really hard to build links to product pages.

For obvious reasons, people writing content aren’t jumping at the chance to write promotional pieces with product links. Some opportunities exist, but not enough to outrank product pages that have been around for years and have acquired natural editorial links over time.

Similarly, it’s not feasible to create massive content-heavy product pages. We needed something simple, because as I mentioned, the product wasn’t actually live yet.

So to get past those hurdles, we leveraged link equity and site architecture.

Specifically, we followed HubSpot’s Pillar and Cluster model and relied heavily on internal linking and external link building on our massive “Pillar” content.

Let’s step back and define some of those terms, because there’s a bit of jargon here that’s necessary to understand.

  • Link Equity: Also known as “link juice,” it’s the idea that certain links pass value and authority from one page to another.
  • Site Architecture: The planning and structuring of website content.
  • Internal Linking: Hyperlinking content within your own site.
  • Pillar & Cluster model: A model created by HubSpot that values topics over keywords to boost SEO as well as UX.

While some of this is speculative, there are a few general heuristics around link equity that come in handy when you’re trying to rank product or transactional pages. According to an episode of Whiteboard Friday, here are three principles for link equity (a simplified illustration follows the list):

  1. External links generally give more ranking value and potential ranking boosts than internal links.
  2. Well-linked-to pages, both internal and external, pass more link equity than those that are poorly linked to.
  3. Pages with fewer links tend to pass more equity to their targets than pages with more links.
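To make the third principle concrete, here’s a deliberately simplified, hypothetical sketch (in Python) of how equity might flow if every page split its score evenly across its outgoing links. The page names and scores are invented for illustration; real search engines use far more involved models.

```python
# Toy model of link equity flow, in the spirit of PageRank.
# Assumption: each page splits its equity evenly across its outgoing links,
# so a page with fewer links passes more equity per link (principle 3 above).
# Page names and equity scores below are purely illustrative.

pages = {
    # page: (equity score, pages it links to)
    "pillar/customer-feedback": (10.0, ["product/feedback-software",
                                        "blog/survey-mistakes"]),
    "blog/survey-mistakes":     (2.0,  ["pillar/customer-feedback",
                                        "product/feedback-software",
                                        "blog/nps-examples",
                                        "blog/csat-vs-nps"]),
}

received = {}
for page, (equity, outlinks) in pages.items():
    per_link = equity / len(outlinks)  # fewer outlinks -> more equity per link
    for target in outlinks:
        received[target] = received.get(target, 0.0) + per_link

for target, total in sorted(received.items(), key=lambda kv: -kv[1]):
    print(f"{target}: {total:.2f}")
```

In this toy example, the well-linked pillar page with only two outbound links passes far more equity to the product page (5.0 per link) than the cluster post that spreads its smaller score across four links (0.5 per link), which is exactly why we pointed our link building at the pillars.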

With that in mind, we created massive guides for topics with lots of search traffic volume. These are our “pillar pages,” or in other words, our 10X content.

We expected these to attract the most links, and we directed all of our link building efforts towards these pieces.

We complemented these pillar pages with “cluster” content, articles with similar topic themes that focus on longer tail keywords. These linked back to our pillar pages as well as to each other.

Finally, on all of our pillar pages and our cluster content, we linked as high on the page as possible to our product pages.
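To make that architecture concrete, here’s a minimal sketch of the internal linking structure as a plain data structure. The page slugs are hypothetical placeholders, not actual HubSpot URLs or tooling.

```python
# Rough sketch of the internal linking architecture described above.
# Slugs are hypothetical; the point is the shape: pillars and clusters all
# link to the product page, and clusters also link to their pillars and each other.

PRODUCT_PAGE = "customer-feedback-software"

pillars = {
    # pillar slug: pages it links to (product link placed as high on the page as possible)
    "customer-feedback":     [PRODUCT_PAGE],
    "customer-satisfaction": [PRODUCT_PAGE],
}

clusters = {
    # cluster slug: the pillar(s), sibling clusters, and product page it links to
    "customer-feedback-survey-mistakes": ["customer-feedback", PRODUCT_PAGE],
    "how-to-improve-csat-scores":        ["customer-satisfaction",
                                          "customer-feedback-survey-mistakes",
                                          PRODUCT_PAGE],
}

# Sanity check: every pillar and cluster should link to the product page somewhere.
for slug, outlinks in {**pillars, **clusters}.items():
    assert PRODUCT_PAGE in outlinks, f"{slug} is missing a product page link"
```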

Again, this is based on the HubSpot Pillar & Cluster framework, which looks like this, structurally:


Or to get super specific, here’s the actual visualization we used to show our particular efforts:

As a specific illustration, here are three pieces of content we created that actually correspond to this strategic architecture:

This video does a good job explaining the general gist of the idea. Essentially it’s an architectural view of content creation, particularly for blogging and SEO:

This process of content planning is an iteration of website architecture, which is basically the “planning and design of the technical, functional and visual components of a website – before it is designed, developed and deployed.” It’s also a strategic method of content planning that helps build authority on specific topics.


Organizing your site in a logical way isn’t just good for SEO; it’s good for user experience and navigation in general.

With that in mind, here’s how the specific process looked when it came to content creation.

 

Planning Content for SEO Volume and Easier Link Acquisition

To start, we aligned our content from the bottom up, meaning we knew which products we were attempting to rank and had to work from bottom-of-the-funnel up to the top.

So, let’s go back to our example, Customer Feedback Software.

We knew we’d have a product page for this where people could actually sign up. But from there, we worked backwards to research which terms commanded the most search traffic around that theme.

In this case, both Customer Feedback and Customer Satisfaction drove a ton of demand, so we created pillar pages for each, and both linked back to the feedback software product page. Here are the specific pillar pages we ended up creating:

SEO research for the pillar pages also included comprehensive long tail keyword research based on questions we could answer on the broader topic. These longer tail keywords would eventually be spun out into individual cluster posts (i.e. blog posts), but also incorporated into the pillar content itself. Some of these posts included:

…and many, many more of course. These were more specific and lower search volume posts that complemented the larger themes of customer satisfaction and feedback.

So, for a post on Customer Satisfaction, we included sections on things like customer satisfaction software and how to improve customer satisfaction scores:

In addition, we aligned with our content team to create tons and tons of cluster blog posts that linked to and supported the themes set up by the pillar pages. An example is this blog post we published on customer feedback survey mistakes:

Finally, all of these posts – whether pillar or cluster – included product page links, and cluster blog posts also included links to our pillar pages (with exact match anchor text, as you can see above).

We also included CTAs on our pillar pages that led to our product landing pages:

In all cases, no matter what type of content we created, we sought to create 10x content, the kind you’d actually want to link to. Particularly with our pillar pages, this meant including linkable content “hooks,” such as:

  • Original data & stats
  • Original images
  • Charts and graphs
  • Quotes from influencers
  • Frameworks
  • Pros and cons tables

We tried to include anything we could that was outside the typical Wikipedia-style me-too content. We didn’t want to rehash what was already out there; we wanted to be better and different. So, for example, we designed our own survey examples, like this one for NPS:

Or, for example, for our Customer Feedback page, we included pros and cons tables to help visitors decide which type of feedback surveys to use:

Now that we had a solid base of quality content, we built out a distribution and link building process to make sure we rose in the rankings and got some eyes on the pages.

 

Building Links and Distributing Content

Link building is its own monster, and to do it true justice would require its own multi-thousand word blog post.

To summarize, however, we tried all the major link building tactics and some worked better than others.

Generally speaking, the ones that worked best were the least scalable: they involved relationships built over months and years. Similarly, the least effective were the most popular and overused: Skyscraper Technique link building, HARO pitches, roundup posts, etc.

The most important part, in fact, was the process of discovering influencers and link targets to begin with. Since we wanted both high relevance and high authority links, we created a “bullseye” framework to distinguish between Tier 1, 2, and 3 targets.

  • Tier 1 – Blogs and influencers directly related to the Service Hub. These include bloggers who write about customer success and customer success practitioners. It may also include direct competitors to our tools.
  • Tier 2 – Blogs and influencers semi-related to customer success. This includes customer experience, survey tools, and user experience software & experts. It may also include other products that don’t directly compete with ours but are still related to customer success/support.
  • Tier 3 – Larger blogs and influencers who focus on broader marketing and business topics. Not super related, but due to high domain authority, still valuable link building opportunities (plus, there’s no competitive tension with these sites, so they’re more willing to link to us).

There was an inverse relationship between our tiers and the ease of link acquisition.

Tier 1 was the most difficult, mainly because most of the sites and influencers were competing for the same keywords. On the other end of the spectrum, large blogs that write on broad marketing topics generally weren’t too concerned about competing, so it was much easier to work with them.

 

Measuring, Optimizing, and Beyond

Measurement is important in SEO, and in marketing in general. You need to know if you’re moving in the right direction, and if not, how you can possibly remedy that or optimize your efforts.

To do that without going crazy watching too many keywords, we followed only the spearhead topics of our content strategy using Accuranker. We figured that if we ranked for these, the longer tail keywords and more specific cluster posts would easily follow (and if they didn’t, it would be easy enough to optimize them later on).

Here’s what a typical Accuranker report looked like earlier on in our efforts:

That’s much better than where we started, which was from scratch:

You can use other tools for this as well, such as Ahrefs and, I’m sure, a dozen more.

If you’re operating at large scale and want to customize your reports more, you can build a homebrew tool, though if you’re just beginning in your SEO and measurement efforts, it might not hurt to start with a software solution so you can focus on your actual SEO execution.

Ideally, choose a solution that sends you a weekly email report with your rankings. Peeking too often can be tempting but ultimately unhelpful due to natural fluctuations in SERP rankings (especially in the first few months after publishing content).

When you know your weekly ranking trends, you can spot early ineffectiveness and course correct.

For instance, our knowledge base guide wasn’t moving into the top 500 at all, while all our other content was. So we 1) added about 1,000 words and more influencer quotes to our pillar page, 2) changed the title and the H2 subtitles, and 3) went super hard on link building to our pillar page. As of today, it’s sitting at position 5.

Similarly, I saw that we weren’t ranking for the cluster article we wrote on “customer satisfaction surveys,” even though it was a few thousand words, and in my mind, quality content. We simply changed the title and the following week it was ranking.

Finally, we tracked user acquisition as well. You can and should be doing this anyway through your tool of choice (Google Analytics, Amplitude, etc.). As a result of our SEO efforts, our beta requests saw a sharp increase:

 

Conclusion

Good SEO is rarely the result of sporadic hacks and luck; rather, at scale and for acquisition, it’s the result of a solid process and playbook (just like any other aspect of growth marketing).

This playbook should have multiple components, ranging from content strategy and architecture to differentiated and compelling content creation and all the way to promotion and link building. All the pieces matter, some more than others depending on your specific situation.

For instance, a smaller-authority site may have to put a lot more effort into content creation and link building than a large site like HubSpot or Shopify. It all depends on where your competitive advantage lies.

However, with a bit of strategy and content architecture, no matter the size or scale of your company, you can get product pages to rank and actually acquire users from SEO, not just top-of-funnel vanity traffic.

Why Growth Marketers Need Qualitative Data, Too


As growth marketers, we love to swim in quantitative data. We tend to be analytical and love to find hard answers to tough problems.

But the “softer,” qualitative stuff is important, too. In fact, at certain stages and for certain problems, it can be even more valuable than quantitative data.

Qualitative Data: Finding The Why

Quantitative data is great at finding who your customers are, what the problems may be, and where you need to focus your efforts. At the beginning of any project, I set baselines using quantitative data. It’s simply hard to plan for growth otherwise.

Similarly, quantitative data can answer questions related to causality if you’re running A/B tests. This is the gold standard in web and product optimization, and that likely won’t change soon.

But if you’re chasing after answers to “why” and “how” questions, the qualitative stuff is your friend.

Qualitative data can help you answer questions like:

  • Who is my audience (not just demographically, but psychologically)?
  • Why are people not converting?
  • What do I test?
  • What specific language do people use when describing their problem/motivation?
  • What hesitations do people have that I need to address in an experience?
  • What part of an experience brings delight (Aha moment analysis comes to mind)?

Generally speaking, if you’re rigorous and asking important growth questions, qualitative data usually has a place in helping you solve them.

Qualitative insights are an excellent complement to quantitative as well. For example, your form builder tool may even have some analytics that show you the drop-off on form fields and submissions. But running a few user tests will really help you get to the emotion users feel when filling out the form and why they are dropping off. Both forms of data work in tandem.

Note: I’m using the term “qualitative” kind of broadly here, and maybe not specific to its technical definition. For the purpose of this article, qualitative data is basically any data that isn’t behavioral, revenue, demographic, or technically related. It includes things like attitudinal surveys, user tests, and interviews. The results of this data are things like psychographics, user journey maps, “psych” scores, and voice of customer data.

Methods to Gather Qualitative Data

Depending on the creativity of your research design and your specific problem, you can gather qualitative insights in a variety of ways, from the colloquial (coffee shop user testing) to the formal (complex survey design and semantic analysis).

That being said, there are a few that are more common (and cheap) than others. Here are a few favorites that have helped me out of tough problems…

Customer Surveys

Customer surveys are sort of the go-to tool for marketers, probably because they don’t have to actually talk to customers (my cynical answer).

They’re also used often because, nowadays, you can get incredibly granular with who you target and when you do so. Additionally, survey tools today are endlessly customizable, as well as personalizable, so they can be used for almost any means. You can even integrate survey tools with your marketing tools to create better campaigns (but that’s means for a different article).


Customer surveys allow you to quantify data as well using survey response scales. However, in the pursuit of growth, I’ve always found the open-ended qualitative questions to be more valuable.

Customer Interviews

I’m a big fan of customer interviews, for two very different purposes.

First, for the exploratory phase of a campaign, product, or even a company. When you need help determining the direction at all, well-executed conversations with customers (or potential customers) can really help put you on the right path.

Second, for homing in on specific motivations and language the customer uses. This is super useful when you’re setting up paid campaigns, writing copy for landing pages, or building narratives for product marketing.

In an “informal” conversation filled with open-ended questions, you can often get people to lower their guard and you can get closer to the truth (surveys are pretty stuffy after all).

On-Site Polls

On-site polls are another form of survey tool, but they’re usually anonymous and target website visitors at large instead of a segment of your customer, lead, or subscriber base.

These are great for finding “real-time” feedback on your site. For instance, you can slap an on-site poll on your thank you page to find out what factors led to the final purchase (and what almost prevented it).


Or you could use on-site polls to determine if your blog article or knowledge base article is providing enough context and value. While content analytics can provide a lot of insight, there’s nothing like an on-site poll to triage content-based problems.


The use cases here are pretty endless. Use a tool like Qualaroo to accomplish all of this super easily.

Session Replays

Session replays are anonymized videos of users actually using your site or product. Creepy? Somewhat. Useful? Totally.

The first time you watch people interact with a landing page or product you’ve poured time and energy into will probably be a painful experience. Inevitably, you’ll find usability errors, confusing user behavior, and “obvious” fixes to make.


I find that session replays are wonderful for homing in on user experience and usability problems. Sometimes the fixes are obvious (if a button or link is broken, for instance). Sometimes the videos simply make you aware of problems that require further research (e.g. why did these users all drop off after entering all their info? How can we fix this funnel step?).

User Tests

User tests are my favorite qualitative research method. I love both moderated and unmoderated tests, and I think if you run them rigorously, they can deliver the most bang for your buck in terms of cost vs. insight.

Basically, you can run them any way you please, but you’ll be watching users interact with your site or product in a way that you direct. You can set broad prompts (e.g. “find an item you would like to buy”), or you can set specific prompts (e.g. “find a pair of men’s boot cut jeans in size 33/32 for under $80”). I recommend doing both.

You can also ask survey questions after task completions, and you can make this stuff quantitative by timing completion rates and asking quantitative satisfaction survey questions between tasks.
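As a rough illustration of that roll-up, here’s a short sketch; the task names, timings, and 1–7 satisfaction scale are assumptions for the example, not a prescribed format.

```python
# Toy roll-up of user test results. All values below are invented for illustration.
results = [
    {"task": "find boot cut jeans under $80", "completed": True,  "seconds": 94,  "satisfaction": 6},
    {"task": "find boot cut jeans under $80", "completed": False, "seconds": 210, "satisfaction": 2},
    {"task": "find an item you'd like to buy", "completed": True,  "seconds": 48,  "satisfaction": 7},
]

completed = [r for r in results if r["completed"]]
completion_rate = len(completed) / len(results)
avg_time = sum(r["seconds"] for r in completed) / len(completed)
avg_satisfaction = sum(r["satisfaction"] for r in results) / len(results)

print(f"Completion rate: {completion_rate:.0%}")          # 67% for this sample
print(f"Avg time on completed tasks: {avg_time:.0f}s")    # 71s for this sample
print(f"Avg satisfaction (1-7): {avg_satisfaction:.1f}")  # 5.0 for this sample
```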

Much more on user tests and satisfaction in these articles:

 

“Qualitative” Does Not Mean Without Rigor

I mentioned that qualitative data is usually looked at as a “softer” and more malleable form of data than the quantitative stuff. Not totally true. While it’s easy to bias your quantitative analytics with poor tracking, poor analysis, or poor strategy, you can also easily (maybe more easily) pollute your qualitative data.

Like my university English literature professor said while knocking postmodernism, “you can always find what you’re looking for if you look hard enough.”

Think of it like this. If you have a strong suspicion that your form is too long, and you’re running user tests to find out whether that’s the case, it’s very likely you’ll notice every little hesitation a user has when filling out the form. You’ll put on blinders to the testers who just breeze through it without a problem.

Basically, if we’re not careful, we just see what we want to see. That’s a big problem with qualitative data, and there’s not an easy fix to it.

One big way to mitigate biases in decision making is to acknowledge that they exist and that you’re affected, too. No one is exempt.

Another way is to systematize your approach to analyzing qualitative data. Depending on what method of research you’re using, that could look like a few things.

For example, if you’re analyzing session replay videos, perhaps you have a scorecard with specific issues that you mark while watching the videos. You can scribble comments by these points, and every time you analyze videos, you do it with a cross-functional team of 4 other employees.

Perhaps, you’ve systematized surveys to be analyzed according to buckets of issue based on classifying algorithms. Maybe you’re just looking for some new copy to write so you snag a phrase you like and you’re fine with that.

It’s all up to you. My point is that you should think about these things up front or risk post-hoc rationalization.

A lot of what the book Sprint covers is a framework for systematizing user feedback on early prototypes. It’s helpful to outline a system like this in advance and iterate and improve on it over time.


The topic of bias in research, especially qualitative research, runs deep. Here are three great articles on the topic:

In addition, make sure that your qualitative research is done in pursuit of insights you can actually take action on. Any type of information gathering is a cost, and you need to have proportional reward for that cost.

Most of the time, growth marketers are seeking specific solutions to copy, design, or usability fixes, but you can also draw out more complex user psychology maps using frameworks like ELMR.

Whatever the case, make sure you’re actually using this stuff to grow your business, otherwise it’s just work for work’s sake.

Conclusion

As growth marketers, what we’re really working on is decision optimization, and we’re always operating under conditions of uncertainty (h/t Matt Gershoff).

More information, either through qualitative or quantitative data, can reduce that uncertainty (at the cost of collecting it). The best use of resources is to find the cheapest way to make an optimal decision at a certainty level you’re comfortable with.

Sometimes that’s true quantitative data, such as in the case of quick and cheap A/B tests. But sometimes that’s through qualitative data (a few quick user tests is usually cheaper than building a predictive churn model).

Additionally, when you’re just starting out or when you’re trying to actually create better experiences, qualitative data is really your leg to stand on. It’s what can tell you your customers’ desires, fears, hesitations, and doubts better than any Google Analytics report ever could.