If the Site Audit bot identifies multiple pages with 80% similarities in content, it will flag them as duplicate content.
In most cases, domains have duplicate content due to HTTP/HTTPS issues. Because the scheme is part of a URL's identity, two versions of the same URL (one on HTTP and the other on HTTPS) are treated by web standards as two separate documents.
The same goes for when a site has a www version of a page as well as a non-www version of the same page - search bots see these as two separate documents.
So when SemrushBot crawls these two separate documents, it identifies them as duplicates because that is how Googlebot would see them.
To avoid this issue, add a canonical tag to each duplicate page pointing to the version you want indexed (the canonical version).
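For example, if https://www.example.com/page is the version you want indexed (example.com is a placeholder domain here), each duplicate variant would carry a tag like this in its `<head>`:

```html
<head>
  <!-- Tells search engines which URL is the preferred, indexable version -->
  <link rel="canonical" href="https://www.example.com/page" />
</head>
```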
You should also set up a 301 redirect from the HTTP page to the HTTPS page so that users and search engine bots always land on your HTTPS version.
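How the redirect is configured depends on your server. As one possible sketch, assuming an nginx server (the same result can be achieved in Apache, IIS, or at a CDN), a minimal HTTP-to-HTTPS rule looks like this:

```nginx
server {
    listen 80;
    server_name example.com www.example.com;

    # Permanent (301) redirect of all HTTP traffic to the HTTPS version
    return 301 https://$host$request_uri;
}
```

A 301 (permanent) redirect, rather than a 302 (temporary) one, signals to search engines that the HTTPS URL is the one to keep in the index.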
Another cause is thin content: if two pages share the same header and footer but have very little body content (only a sentence or two), the shared template text can make the pages at least 80% similar, so the bot reports them as duplicates. In this case, expand the unique content on each page so that bots can identify them as distinct.