How Does Site Audit Select Pages to Analyze for Core Web Vitals?
Site Audit picks the first ten pages that return a ‘200 OK’ HTTP status code.
When the crawl source is the website itself, the system usually selects the main page and the pages linked from it.
When crawling by sitemap or by a list of URLs, the system takes the first ten pages in order, since these are considered the most important.
The list of analyzed pages doesn’t change with each crawl: Site Audit keeps it fixed so that historical trends can be shown. Please make sure the pages are accessible to bots; otherwise, they won’t be analyzed.
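The selection rule described above can be sketched as a simple filter over an ordered set of crawled URLs. This is an illustrative sketch only: the function name, the `(url, status_code)` input shape, and the ten-page limit as a parameter are assumptions, not Semrush’s actual implementation.

```python
from typing import Iterable, List, Tuple


def select_pages_for_cwv(pages: Iterable[Tuple[str, int]], limit: int = 10) -> List[str]:
    """Return the first `limit` URLs whose HTTP status code is 200 OK.

    `pages` is an ordered iterable of (url, status_code) pairs: for a
    website crawl, the main page followed by pages linked from it; for a
    sitemap or URL-list crawl, the URLs in their listed order.
    """
    selected: List[str] = []
    for url, status in pages:
        if status == 200:  # only pages that return 200 OK qualify
            selected.append(url)
            if len(selected) == limit:
                break
    return selected


# Example: redirects (301) and errors (404) are skipped.
crawl = [
    ("https://example.com/", 200),
    ("https://example.com/old", 301),
    ("https://example.com/blog", 200),
    ("https://example.com/missing", 404),
]
print(select_pages_for_cwv(crawl))
# → ['https://example.com/', 'https://example.com/blog']
```

Because the order of `pages` is preserved, the same crawl input yields the same ten pages, which is what lets the tool track historical trends for a fixed set of URLs.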
Can I Select the Pages Manually?
We are currently working on the ability to change the analyzed pages by specifying the ones you need. This option will be available in the report interface soon.