ReachLocal: Reverse Proxy Does Not Affect Your Site’s Organic Rank

 

One of the elements that makes the ReachLocal platform powerful is our ability to track the conversions (phone calls, emails, web forms, etc.) generated by the advertising campaigns we run for our clients.  We do this through our patent-pending technology based on a Reverse Proxy architecture.
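To make the general idea of a Reverse Proxy concrete, here is a minimal, illustrative sketch in Python (using Flask and the requests library). This is not ReachLocal's actual implementation: the origin URL, port, and tracking step are hypothetical placeholders. The point is simply that a reverse proxy receives a visitor's request, fetches the same page from the advertiser's original site, and returns that content, which is what allows a tracking layer to observe conversions along the way.

# Illustrative reverse-proxy sketch, NOT ReachLocal's implementation.
# Assumes Flask and requests are installed; ORIGIN and the tracking step
# are hypothetical placeholders.
from flask import Flask, Response, request
import requests

ORIGIN = "https://www.example-advertiser.com"  # hypothetical advertiser site

app = Flask(__name__)

@app.route("/", defaults={"path": ""})
@app.route("/<path:path>")
def proxy(path):
    # Placeholder for conversion tracking (phone calls, emails, web forms, etc.).
    print(f"Tracked request: /{path}")

    # Fetch the same page from the advertiser's original site.
    upstream = requests.get(f"{ORIGIN}/{path}", params=request.args, timeout=10)

    # Return the origin's content to the visitor unchanged.
    return Response(
        upstream.content,
        status=upstream.status_code,
        content_type=upstream.headers.get("Content-Type", "text/html"),
    )

if __name__ == "__main__":
    app.run(port=8080)  # hypothetical local port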

Over the years, some people have made incorrect claims online about our Reverse Proxy technology. One persistent rumor is that it “mirrors” an advertiser’s website. It doesn’t, as we explained in an earlier blog post. But there have also been claims that ReachLocal’s Reverse Proxy negatively impacts the organic rank (SEO) of an advertiser’s website.

The complaint tends to go something like this: When a search bot sees a ReachLocal proxy site as it crawls the web, it indexes the proxy site. Then, Google’s Duplicate Content rule (which refers to multiple sites featuring identical content) starts a chain reaction that ends up penalizing both sites and banishing them from search results, so you’d have to start from square one if you ever want to rank organically again.

Actually, this is quite wrong. The truth is that the ReachLocal Reverse Proxy technology blocks search bots from indexing proxy sites, so the proxy sites don’t interfere with the way search engines rank websites. We use several proven technical methods to accomplish this, including robots meta tags and X-Robots-Tag HTTP headers.

Here’s how they work:

Meta Tag and X-Robots-Tag Header: “noindex, nofollow”

Both the robots meta tag (placed in the page’s HTML) and the X-Robots-Tag header (sent with the page’s HTTP response) carry the same directive: “noindex, nofollow.” That directive tells search bots not to index the proxy URL and not to follow the links on it, as illustrated in the sketch below.
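As a rough illustration of what those two signals look like in practice, the following Python sketch fetches a page and reports both the X-Robots-Tag response header and any robots meta tag found in its HTML. The URL is a hypothetical placeholder, and this is a standalone check for demonstration, not ReachLocal tooling.

# Check a page for the two "noindex, nofollow" signals described above.
# Standard library only; the URL is a hypothetical placeholder.
import urllib.request
from html.parser import HTMLParser

class RobotsMetaParser(HTMLParser):
    """Collects the content of any <meta name="robots" ...> tag."""
    def __init__(self):
        super().__init__()
        self.robots_meta = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("name", "").lower() == "robots":
            self.robots_meta.append(attrs.get("content", ""))

url = "https://proxy.example.com/"  # hypothetical proxy URL
with urllib.request.urlopen(url) as response:
    # Signal 1: the X-Robots-Tag HTTP response header.
    header_value = response.headers.get("X-Robots-Tag", "")
    # Signal 2: the robots meta tag inside the page's HTML.
    parser = RobotsMetaParser()
    parser.feed(response.read().decode("utf-8", errors="replace"))

print("X-Robots-Tag header:", header_value)       # e.g. "noindex, nofollow"
print("robots meta tag(s):", parser.robots_meta)  # e.g. ["noindex, nofollow"]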

The Duplicate Content Rule

So now that you have a better idea of how ReachLocal uses the meta tag and X-Robots-Tag header to keep search engines from indexing proxy sites, how exactly does the Duplicate Content rule factor in? Well, because search bots never end up indexing a ReachLocal proxy site’s content, there’s no way for proxy sites to run afoul of Google’s Duplicate Content rule. Besides, the rule exists less as a vehicle for punishing sites and more as a way to identify the more authentic, or original, site among multiple sites that contain identical content.

Conclusion

Google’s ultimate objective is to determine which site (among multiple sites containing identical content) represents that content “best,” so it can be treated as the source, or origin, site with the higher SEO value. So when weighing a proxy site with no indexable content (at best, only a URL) against an advertiser’s original site whose content Google can index, there’s really no contest between the two, even though to a consumer each website would appear to have identical content. Thanks to the meta and X-Robots-Tag header tags that all our proxy sites carry, there’s no way for them to violate Google’s Duplicate Content rule.

This means that ReachLocal Reverse Proxy technology does not impact the organic ranking of an advertiser’s original website.

We want all local advertisers to understand how this technology works and what we’ve put in place to ensure that our reverse proxy does not put your website at risk when it comes to organic ranking. If you have any technical questions about our Reverse Proxy technology, please leave a comment!


This post has been updated.
