This guide explains how to block SemrushBot from crawling a website.
This tutorial is divided into two sections: the first walks through blocking each SemrushBot User-Agent that Semrush uses to crawl website content and link data, and the second contains the full set of rules you can copy and paste into your robots.txt file to block all of Semrush’s bots at once.
By the end of this page, you’ll know how to make your website data unavailable to Semrush users who are trying to analyze the domain for search engine optimization (SEO) and digital marketing campaigns.
Note: If you want to learn more about what SemrushBot is and how it works, then see this related page on SemrushBot. You can also try Semrush for free using my affiliate link to test out all of its capabilities for your SEO campaigns and to verify the blocking rules you set up are properly working for your website.
How to Block SemrushBot
SemrushBot can be blocked by adding a Disallow rule for its User-Agent to the site’s robots.txt file. Semrush’s crawlers follow the Robots Exclusion Protocol, so once a User-Agent is disallowed, that spider will stop crawling the website.
1. Block the Main SemrushBot
Add the following rule to the robots.txt file to block the main SemrushBot, which builds the webgraph of links reported in the Backlink Analytics tool:
User-agent: SemrushBot
Disallow: /
2. Block SiteAuditBot
This rule prevents Semrush’s SiteAuditBot from crawling your website with the Site Audit tool, which checks for technical SEO issues:
User-agent: SiteAuditBot
Disallow: /
3. Block SemrushBot-BA
This robots.txt rule stops SemrushBot from crawling your website with the Backlink Audit tool:
User-agent: SemrushBot-BA
Disallow: /
4. Block SemrushBot-SI
This directive blocks SemrushBot from accessing your website with the On Page SEO Checker tool:
User-agent: SemrushBot-SI
Disallow: /
5. Block SemrushBot-SWA
This robots.txt rule prevents SemrushBot from checking individual web pages for analysis with the SEO Writing Assistant tool:
User-agent: SemrushBot-SWA
Disallow: /
6. Block SemrushBot-CT
This directive stops SemrushBot from crawling your website with the Content Analyzer and Post Tracking tools:
User-agent: SemrushBot-CT
Disallow: /
7. Block SplitSignalBot
This rule blocks the Semrush SplitSignalBot from crawling your website for the SplitSignal tool, which is used to run A/B tests for digital marketing:
User-agent: SplitSignalBot
Disallow: /
8. Block SemrushBot-COUB
This robots.txt directive can be used to block SemrushBot from accessing your website for the Content Outline Builder tool:
User-agent: SemrushBot-COUB
Disallow: /
Blocking All SemrushBot User-Agents
If you want to block every SemrushBot User-Agent from your website, you can copy and paste the following code into the website’s robots.txt file:
User-agent: SemrushBot
Disallow: /

User-agent: SiteAuditBot
Disallow: /

User-agent: SemrushBot-BA
Disallow: /

User-agent: SemrushBot-SI
Disallow: /

User-agent: SemrushBot-SWA
Disallow: /

User-agent: SemrushBot-CT
Disallow: /

User-agent: SplitSignalBot
Disallow: /

User-agent: SemrushBot-COUB
Disallow: /
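Before uploading the file, you can sanity-check the ruleset locally. The sketch below is not part of the original guide; it uses Python’s built-in urllib.robotparser, which implements the Robots Exclusion Protocol, to confirm that every Semrush User-Agent listed above is denied while an unrelated crawler is still allowed:

```python
# Verification sketch: feed the full ruleset from this guide into Python's
# standard-library robots.txt parser and check each Semrush User-Agent.
from urllib.robotparser import RobotFileParser

RULES = """\
User-agent: SemrushBot
Disallow: /

User-agent: SiteAuditBot
Disallow: /

User-agent: SemrushBot-BA
Disallow: /

User-agent: SemrushBot-SI
Disallow: /

User-agent: SemrushBot-SWA
Disallow: /

User-agent: SemrushBot-CT
Disallow: /

User-agent: SplitSignalBot
Disallow: /

User-agent: SemrushBot-COUB
Disallow: /
"""

parser = RobotFileParser()
parser.modified()  # mark the rules as loaded so can_fetch() evaluates them
parser.parse(RULES.splitlines())

semrush_agents = [
    "SemrushBot", "SiteAuditBot", "SemrushBot-BA", "SemrushBot-SI",
    "SemrushBot-SWA", "SemrushBot-CT", "SplitSignalBot", "SemrushBot-COUB",
]
for agent in semrush_agents:
    blocked = not parser.can_fetch(agent, "https://example.com/any-page")
    print(f"{agent}: {'blocked' if blocked else 'allowed'}")  # each prints "blocked"

# A crawler with no matching rule, such as Googlebot, is unaffected:
print(parser.can_fetch("Googlebot", "https://example.com/any-page"))  # True
```

This only proves the rules are written correctly; it does not contact Semrush or your server.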
Note: AhrefsBot can also be blocked using the Disallow directive in the robots.txt file. See this related guide on how to block AhrefsBot if you want to prevent that bot from crawling your website.
Blocking Google Analytics and Search Console
If you’re a current Semrush user and you’ve connected your Google Analytics or Search Console properties to your Semrush account, then you’ll also need to disconnect those websites. Otherwise, Semrush can still access your private website data for reporting purposes.
Details About Blocking Delays
It can take up to one hour or up to 100 requests for the Semrush bots to discover changes made to your robots.txt file and honor those directives when crawling the website. If you want to confirm that SemrushBot is obeying your rules, then you can try Semrush for free and test the various tools yourself to see whether they can still access your site.
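Another way to confirm the bots have backed off is to watch your web server’s access log for Semrush User-Agent strings after the delay has passed. The sketch below is a hypothetical illustration: the sample log lines are fabricated examples in Apache’s “combined” format, and in practice you would point the filter at your real log file instead.

```python
# Hypothetical sketch: filter access-log lines for Semrush crawler requests.
# The sample lines are fabricated; substitute lines read from your own log.
import re

# One pattern covering every bot name blocked in this guide
# (SemrushBot-BA, -SI, -SWA, -CT, and -COUB all contain "SemrushBot").
SEMRUSH_PATTERN = re.compile(r"SemrushBot|SiteAuditBot|SplitSignalBot", re.IGNORECASE)

def semrush_hits(log_lines):
    """Return only the log lines that look like Semrush crawler requests."""
    return [line for line in log_lines if SEMRUSH_PATTERN.search(line)]

sample_lines = [
    '203.0.113.5 - - [01/Jan/2024:00:00:00 +0000] "GET /page HTTP/1.1" 200 512 '
    '"-" "Mozilla/5.0 (compatible; SemrushBot/7~bl; +http://www.semrush.com/bot.html)"',
    '198.51.100.7 - - [01/Jan/2024:00:00:01 +0000] "GET /page HTTP/1.1" 200 512 '
    '"-" "Mozilla/5.0 (Windows NT 10.0) AppleWebKit/537.36"',
]
print(len(semrush_hits(sample_lines)))  # 1 -- only the first line is a Semrush bot
```

If matches keep appearing well after the update, double-check that the robots.txt file is reachable at the root of your domain.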
Block SemrushBot Summary
I hope you enjoyed this guide on how to block SemrushBot.
As you discovered, SemrushBot can be blocked by updating the robots.txt file to Disallow the User-Agent. After blocking the main SemrushBot User-Agent, that spider will no longer crawl the website to log data for the webgraph of links. However, Semrush operates a separate User-Agent for each bot in its toolset that collects website data.
Therefore, you’ll have to block each User-Agent individually (as explained in this guide) from accessing the website to make it unavailable to Semrush users who are trying to analyze the domain for digital marketing and SEO campaigns.