Why Is the SRSLTID Parameter Showing Up in Organic Results, and Why Aren’t Google More Concerned?
There’s been a strange glitch happening in SERPs since late August. URLs that include the SRSLTID parameter – which is normally reserved for auto-tagging in Google’s Merchant Center – have been showing up in non-Shopping organic search results. This means that, for some websites, these unusual versions of URLs are turning up instead of the “normal” page URLs. It isn’t something the average Google user is likely to notice, or care about, but it does potentially cause issues for website owners and SEOs. Yet Google are dismissing it as nothing to worry about. Why is this? Should we just take their word for it and stop worrying, or is there more to be concerned about than they’re letting on?
Simply put, the SRSLTID parameter is a tag added to the end of a URL to identify that it appeared in an organic shopping result. This helps GMC users to track shopping results in their analytics and attribute sales accordingly. But its appearance in “normal” search results confuses the issue. Others seeing this on their sites suggested that disabling auto-tagging in Merchant Center would resolve it, but we’ve seen the tag continue to appear in SERPs for websites where we’ve done exactly that.
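For illustration, here’s roughly what a tagged URL looks like next to its clean counterpart. This is a minimal sketch using Python’s standard library; the domain and the srsltid token value are made-up placeholders:

```python
from urllib.parse import parse_qs, urlsplit

# A hypothetical product URL as it might appear with Merchant Center
# auto-tagging applied (the srsltid value is a placeholder, not a real token).
tagged = "https://example.com/products/blue-widget?srsltid=AfmBOopExampleToken123"
clean = "https://example.com/products/blue-widget"

# The parameter rides along in the query string, so it's trivial to read back:
params = parse_qs(urlsplit(tagged).query)
print(params.get("srsltid"))  # ['AfmBOopExampleToken123']
```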
To us, this doesn’t seem like something that can be dismissed out of hand. In rare cases, it could cause a search result to lead users to a 404 “Page Not Found” error page rather than the working page they’d reach via the “normal” URL. The page is then likely to stop ranking as a result. This won’t happen for most websites, but the fact that it’s possible at all is concerning.
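If you want to spot-check whether the tagged variant of a URL resolves the same way as the clean one, comparing HTTP status codes is enough. A minimal sketch, again assuming a placeholder domain and a made-up srsltid value:

```python
from urllib.error import HTTPError
from urllib.request import Request, urlopen

def status_of(url: str) -> int:
    """Return the HTTP status code for a URL (redirects are followed)."""
    req = Request(url, headers={"User-Agent": "srsltid-spot-check/0.1"})
    try:
        with urlopen(req, timeout=10) as resp:
            return resp.status
    except HTTPError as err:
        return err.code

clean = "https://example.com/products/blue-widget"   # placeholder URL
tagged = clean + "?srsltid=AfmBOopExampleToken123"   # placeholder token

# If these differ (e.g. 200 vs 404), the tagged result is sending users
# somewhere the clean URL does not.
print(status_of(clean), status_of(tagged))
```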
A more widespread issue is that all of these “new” URLs have been crawled and indexed by Googlebot, which is the reason we’re seeing this happen in the first place. If your site is properly optimised for search, then your pages should already have canonical tags, which let crawlers know what the “default” version of the page URL is, regardless of what other versions they crawl. If you haven’t already done this, you have a time-consuming job ahead of you setting them up. I recommend getting it done even if you aren’t yet seeing any issues, as it’s SEO best practice and will future-proof you against a number of potential problems (and possibly eliminate existing ones you don’t yet know about).
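As a rough illustration of the intended outcome: however a crawler arrives at the page, it should find one canonical URL declared. The sketch below (placeholder domain, hypothetical helper name) derives that canonical href by stripping the srsltid parameter:

```python
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

def canonical_href(url: str) -> str:
    """Strip the srsltid parameter so every tagged variant maps back
    to one 'default' URL (hypothetical helper, for illustration only)."""
    parts = urlsplit(url)
    query = [(k, v) for k, v in parse_qsl(parts.query) if k.lower() != "srsltid"]
    return urlunsplit(parts._replace(query=urlencode(query)))

tagged = "https://example.com/products/blue-widget?srsltid=AfmBOopExampleToken123"

# The tag your templates would emit, regardless of how the page was reached:
print(f'<link rel="canonical" href="{canonical_href(tagged)}" />')
# -> <link rel="canonical" href="https://example.com/products/blue-widget" />
```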
Even with canonical tags in place, these URLs are still popping up in SERPs, often leading to repeated changes in which URL is ranking. This creates difficulties with tracking and reporting, as it can look to many analytics tools as though specific URLs are dropping out of SERPs altogether, even when the page has retained the same position. We’re aware that a number of tools have already compensated for this, scrubbing SRSLTID parameters from URLs in their data. That’s helpful for reporting, but could also mean that related issues go unnoticed if your tools are telling you everything’s fine.
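If your own reporting pipeline hasn’t compensated yet, normalising URLs before aggregating is straightforward. A sketch reusing the same stripping idea, on some made-up ranking rows:

```python
from collections import defaultdict
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

def scrub(url: str) -> str:
    """Remove the srsltid parameter so tagged and clean variants
    are counted as the same page."""
    parts = urlsplit(url)
    query = [(k, v) for k, v in parse_qsl(parts.query) if k.lower() != "srsltid"]
    return urlunsplit(parts._replace(query=urlencode(query)))

# Made-up example rows: (ranking URL, clicks)
rows = [
    ("https://example.com/products/blue-widget", 40),
    ("https://example.com/products/blue-widget?srsltid=AfmBOopA", 25),
    ("https://example.com/products/blue-widget?srsltid=AfmBOopB", 10),
]

clicks = defaultdict(int)
for url, n in rows:
    clicks[scrub(url)] += n

print(dict(clicks))  # one entry per page, not one per tagged variant
```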
There are potential ways to avoid, or at least mitigate, this issue. But all of them require time and effort from SEOs and web developers, when the problem clearly originates with Google. So why are they washing their hands of any responsibility in this case? Google have made errors before; while we don’t always get an apology or even an explanation, bugs are usually fixed and controversial changes rolled back fairly quickly. Why not this time?
About a month ago, John Mueller of Google did respond to an SEO on LinkedIn regarding this issue. He basically explained what the URL parameter was for and said that Google are “expanding it to traditional search results”. He also stated that it would not affect crawling, indexing or ranking. But we’re seeing some correlation between this issue and ranking changes. If those URLs are ranking, that means they’ve been indexed, so at some point they must have been crawled. And Mueller has offered no explanation as to why this change has been rolled out to non-Shopping organic results. Won’t this impact analytics for Merchant Center users?
There’s also another challenge, which adds a layer of difficulty to identifying which pages of your site may be affected. Logically, the only pages that should have the SRSLTID parameter applied are individual product pages. If that were the case, you could produce a list of URLs and their potential tags to work through and fix. But we’ve seen category pages and even blog post URLs showing up in SERPs with this tag added. That makes tracking every affected URL much harder – and surely it also makes this a more obvious error that Google should be treating more seriously?
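One practical way to find every affected URL, product page or not, is to mine your own access logs (or a Search Console URL export) for the parameter, rather than guessing from your site structure. A sketch, assuming a combined-format log and a placeholder file path:

```python
import re

# Placeholder path; point this at your real access log instead.
LOG_PATH = "access.log"

# Capture the request path from a combined-format log line, e.g.
# ... "GET /blog/some-post?srsltid=AfmBOop... HTTP/1.1" ...
request_re = re.compile(r'"(?:GET|HEAD) (\S+) HTTP/')

affected = set()
with open(LOG_PATH, encoding="utf-8", errors="replace") as log:
    for line in log:
        match = request_re.search(line)
        if match and "srsltid=" in match.group(1):
            # Keep just the path so tagged variants collapse to one entry.
            affected.add(match.group(1).split("?")[0])

for path in sorted(affected):
    print(path)
```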
Perhaps a more considered response will come from someone at Google, but in the meantime we’re left a little puzzled and frustrated by their nonchalance around this problem.