Google announced a few days ago that they would change the way rel="nofollow" works for them: they would start treating nofollow as a “hint” instead of a “directive”. This means they go from “oh, you don’t want us to go into that room? OK” to “oh, you don’t want us to go into that room? We’ll see about that”. Basically, they went from friendly neighbor to annoying parent real quick.
Now rel="nofollow"
is something you’d attach to links on your site. So I could nofollow one link and not nofollow another. There is also a meta robots
directive, used like this:
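For illustration, a per-link nofollow might look like this (the URLs are made up):
<a href="https://example.com/untrusted" rel="nofollow">an untrusted link</a> <!-- historically a directive: don’t follow this link -->
<a href="https://example.com/trusted">a trusted link</a> <!-- no attribute, so it’s followed normally -->
There is also a page-wide meta robots directive, used like this: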
<meta name="robots" content="noindex,nofollow"/>
This would historically direct Google not to show that page in its index and not to follow its links. Now Gary Illyes tweeted this last night:
When Gary says “meta robots nofollow is a hint now”, I become slightly nervous. Because if nofollow in a meta robots element is a hint, what is noindex? Do they now want to treat that as a hint, or as a directive? Turns out, noindex remains a directive:
But even if Google now says “noindex will remain a directive”, won’t that lead to years and years of discussion? In my experience, even many experienced SEOs don’t always understand the difference between directives and hints, and think they’ve excluded something when they haven’t. This change will only make that worse.
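To make the distinction concrete, here’s a sketch of how the two values behave after this change, going by Google’s statements:
<meta name="robots" content="noindex"/> <!-- still a directive: Google must keep this page out of its index -->
<meta name="robots" content="nofollow"/> <!-- now a hint: Google may still decide to follow this page’s links -->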
Google unilaterally makes changes
My biggest gripe with this is that Google is making these changes unilaterally. Bing, Yandex, Baidu: they all support rel="nofollow", and other search engines probably do too. The same is true for meta robots nofollow. I don’t think it’s a good idea for Google to decide, on its own, to change the “laws” of the web.
Google were the ones to introduce rel="nofollow", which gives them some rights to change that “standard”. However, meta robots nofollow has been around since 1996. In fact, this part of the meta robots page made me chuckle:
“robots can ignore your <META> tag. Especially malware robots that scan the web for security vulnerabilities, and email address harvesters used by spammers will pay no attention.”
Taken from the Robots pages
Apparently, I should consider Googlebot malware from now on 😉
Real world implications
Let’s look at real-world implications: a link in a comment on a WordPress site used to have rel="nofollow" added to it automatically. We’ll now have to change rel="nofollow" to rel="nofollow ugc". We can’t take out the nofollow, because other search engines don’t support the ugc part, but Google, with its market dominance, will urge us to make that ugc change.
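As a sketch (the URL is made up), that change looks like this in a comment’s HTML:
<a href="https://example.com/" rel="nofollow">a commenter’s link</a> <!-- before: what WordPress adds automatically -->
<a href="https://example.com/" rel="nofollow ugc">a commenter’s link</a> <!-- after: keep nofollow for other engines, add ugc for Google -->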
Now Google’s first reply to this will be “you don’t have to change anything if you don’t want to, we even said that in our post”. And they did:
There’s absolutely no need to change any nofollow links that you already have.
Google’s Danny Sullivan in their announcement blog post on the nofollow changes
I read this and chuckled. Obviously Google needs to read up on the murder of Thomas Becket: Henry II supposedly never gave a direct order, yet his knights heard “will no one rid me of this turbulent priest?” and acted on the hint. Because of Google’s market dominance, people will do anything to get into their good graces. They can make changes like this and the web will follow. The real question here is: shouldn’t we have legislation that prevents them from making these changes unilaterally?
In fact, I’d say it’s time to go one step further: the web needs true standards for this. Standards that are preferably turned into law by the European Union, the US and China. But perhaps I dream too much. At the very least, they need to be standards. Standards that all non-malware crawlers, including Google, will adhere to.
I get very nervous when somebody proposes new regulations and laws, because in the end they would be created and imposed by bureaucrats and not by specialists.
Just look at all the joy and happiness that GDPR has brought us.
A few bad actors ruined the party for all of us.
If Google starts acting out more and more, if they start forcing their position, people will move elsewhere.
What stops other companies from competing with Google? Nothing: provide better services, give a lot away for free, and let Google ruin itself.
The world is still run by a market economy.
Supply and demand.
Honestly, in the end, the results of the GDPR are mostly positive. Companies started thinking again about what data they keep on their customers and how they use it. I think that really is a good thing.
The Internet would not be what it is today if GDPR had existed 20 years ago. The world somehow managed to exist without it.
The fact that the Gs, FBs and Cambridge Analyticas of the world abused users’ trust – that is on them. Slap them and slap them hard.
Imposing the same level of regulations on a multi-billion dollar company and on one-man operations is wrong and unfair.
Annoying billions of Internet users with dozens of daily popups on every website is also not “mostly positive”. 99.99% of users really don’t care about cookies or tracking; otherwise the Instagrams and TikToks of the world would not even exist.
That’s why I think that asking for even more regulation from external powers is dangerous and a bad path to take.
“The Internet would not be what it is today if GDPR had existed 20 years ago.” Sure, but what was possible 20 years ago is no longer possible in today’s world. We could say the same about driving cars. Rules become necessary when bad habits become the rule!