Why does SEMrush say I have duplicate content?
If the Site Audit bot finds multiple pages whose content is at least 80% similar, it flags them as duplicate content.
HTTP & HTTPS / WWW & non-WWW
In most cases, domains have duplicate content due to HTTP/HTTPS issues. According to W3C standards, when the same page is available at two URLs (one on HTTP and the other on HTTPS), those URLs are considered two separate documents.
The same applies when a site serves both a www and a non-www version of a page: search bots see these as two separate documents.
So when SEMrushBot encounters these separate documents, it identifies them as duplicates, because that is how GoogleBot would see them too.
To avoid this issue, add canonical tags on the duplicate pages pointing to the page you have chosen as the canonical (indexed) version.
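A canonical tag is a single line in the page's `<head>`. As a minimal sketch (the domain and path below are placeholders), every duplicate variant would carry the same tag pointing to the preferred HTTPS URL:

```html
<!-- Placed in the <head> of each duplicate variant,
     e.g. http://example.com/page/ and http://www.example.com/page/ -->
<link rel="canonical" href="https://example.com/page/" />
```

Search engines then treat the tagged pages as copies of the canonical URL rather than as independent documents.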
You should also set up a 301 (permanent) redirect from the HTTP version to the HTTPS version so that users and search engine bots only ever reach your HTTPS pages.
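How you configure the redirect depends on your server. As one hypothetical example, on Apache with mod_rewrite enabled, rules like these in `.htaccess` would send all HTTP and www traffic to the HTTPS non-www hostname (adapt for nginx, IIS, or your host's settings):

```apacheconf
RewriteEngine On

# Permanently (301) redirect all HTTP requests to HTTPS
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}/$1 [L,R=301]

# Permanently redirect www to the non-www hostname
RewriteCond %{HTTP_HOST} ^www\.(.+)$ [NC]
RewriteRule ^(.*)$ https://%1/$1 [L,R=301]
```

If you prefer the www hostname as canonical, reverse the second rule so the non-www version redirects to www instead.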
Pages with little content
Another cause is thin content. If two pages share the same header and footer but have very little body content (only a sentence or two), Site Audit may find them at least 80% similar and therefore flag them as duplicates. In this case, expand the content on those pages so that bots can identify them as unique.