Screaming Frog SEO Spider

The SEO Spider is a desktop website auditor for PC, Mac or Linux which crawls websites’ links, images, CSS, scripts and apps like a search engine to evaluate onsite SEO.

It allows you to crawl websites’ URLs and fetch key onsite elements to analyse onsite SEO. Download it for free, or purchase a licence for additional advanced features.

What can you do with the SEO Spider Tool?

The SEO Spider is powerful, flexible and able to crawl both small and very large websites efficiently, allowing you to analyse the results in real time. It gathers key onsite data to allow SEOs to make informed decisions; the most common uses are reflected in the crawl data summarised below.

The SEO Spider Tool Crawls & Reports On…

The Screaming Frog SEO Spider is an SEO auditing tool, built by real SEOs and used by thousands of users worldwide. A quick summary of some of the data collected in a crawl includes –

✔ Errors – Client errors such as broken links & server errors (no responses, 4XX, 5XX).
✔ Redirects – Permanent, temporary redirects (3XX responses) & JS redirects.
✔ Blocked URLs – View & audit URLs disallowed by the robots.txt protocol (a rough checking sketch follows this list).
✔ Blocked Resources – View & audit blocked resources in rendering mode.
✔ External Links – All external links and their status codes.
✔ Protocol – Whether the URLs are secure (HTTPS) or insecure (HTTP).
✔ URI Issues – Non-ASCII characters, underscores, uppercase characters, parameters, or long URLs.
✔ Duplicate Pages – Hash value / MD5 checksum algorithmic check for exact duplicate pages (sketched in Python after this list).
✔ Page Titles – Missing, duplicate, over 65 characters, short, pixel width truncation, same as h1, or multiple.
✔ Meta Description – Missing, duplicate, over 156 characters, short, pixel width truncation or multiple.
✔ Meta Keywords – Mainly for reference, as they are not used by Google, Bing or Yahoo.
✔ File Size – Size of URLs & images.
✔ Response Time.
✔ Last-Modified Header.
✔ Page Depth Level.
✔ Word Count.
✔ H1 – Missing, duplicate, over 70 characters, multiple.
✔ H2 – Missing, duplicate, over 70 characters, multiple.
✔ Meta Robots – Index, noindex, follow, nofollow, noarchive, nosnippet, noodp, noydir etc.
✔ Meta Refresh – Including target page and time delay.
✔ Canonical link element & canonical HTTP headers.
✔ X-Robots-Tag.
✔ rel="next" and rel="prev".
✔ Follow & Nofollow – At page and link level (true/false).
✔ hreflang Attributes – Audit missing confirmation links, inconsistent & incorrect language codes, non-canonical hreflang and more.
✔ Rendering – Crawl JavaScript frameworks like AngularJS and React, by crawling the rendered HTML after JavaScript has executed.
✔ AJAX – Select to obey Google’s now deprecated AJAX Crawling Scheme.
✔ Inlinks – All pages linking to a URI.
✔ Outlinks – All pages a URI links out to.
✔ Anchor Text – All link text. Alt text from images with links.
✔ Images – All URIs with the image link & all images from a given page. Images over 100kb, missing alt text, alt text over 100 characters.
✔ User-Agent Switcher – Crawl as Googlebot, Bingbot, Yahoo! Slurp, mobile user-agents or your own custom UA.
✔ Custom HTTP Headers – Supply any header value in a request, from Accept-Language to cookie (see the request sketch after this list).
✔ Redirect Chains – Discover redirect chains and loops (a simple trace sketch follows this list).
✔ Custom Source Code Search – Find anything you want in the source code of a website, whether that’s Google Analytics code, specific text or a code snippet.
✔ Custom Extraction – Scrape any data from the HTML of a URL using XPath, CSS Path selectors or regex (an extraction example follows this list).
✔ Google Analytics Integration – Connect to the Google Analytics API and pull in user and conversion data directly during a crawl.
✔ Google Search Console Integration – Connect to the Google Search Analytics API and collect impression, click and average position data against URLs.
✔ External Link Metrics – Pull external link metrics from Majestic, Ahrefs and Moz APIs into a crawl to perform content audits or profile links.
✔ XML Sitemap Generator – Create an XML Sitemap and an Image Sitemap using the SEO Spider (a minimal writer sketch appears after this list).
✔ Custom robots.txt – Download, edit and test a site’s robots.txt using the custom robots.txt feature.
✔ Rendered Screen Shots – Fetch, view and analyse the rendered pages crawled.
✔ Store & View HTML & Rendered HTML – Essential for analysing the DOM.
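
A few of the points above can be illustrated with short, rough Python sketches. They are approximations of the underlying ideas, not Screaming Frog’s own code, and every URL, selector and value in them is a placeholder.

For the Blocked URLs and custom robots.txt points, the core check is whether a given user-agent is disallowed from fetching a URL. A minimal sketch using only the standard library:

from urllib.robotparser import RobotFileParser

# Fetch and parse the site's robots.txt (example.com is a placeholder).
parser = RobotFileParser("https://example.com/robots.txt")
parser.read()

# Test whether a given user-agent may fetch each URL.
for url in ["https://example.com/", "https://example.com/private/page"]:
    allowed = parser.can_fetch("Googlebot", url)
    print(url, "allowed" if allowed else "disallowed")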
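
The Duplicate Pages check boils down to hashing each page’s HTML and grouping URLs that share a checksum. A sketch of that idea, assuming the requests library:

import hashlib
from collections import defaultdict

import requests

urls = [
    "https://example.com/",
    "https://example.com/index.html",
]

# Group URLs by the MD5 checksum of their raw HTML.
pages_by_hash = defaultdict(list)
for url in urls:
    html = requests.get(url, timeout=10).text
    digest = hashlib.md5(html.encode("utf-8")).hexdigest()
    pages_by_hash[digest].append(url)

# Any checksum shared by two or more URLs flags exact duplicate content.
for digest, group in pages_by_hash.items():
    if len(group) > 1:
        print(digest, group)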
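
Switching user-agents or supplying custom HTTP headers, as in the User-Agent Switcher and Custom HTTP Headers points, amounts to setting request headers before fetching. A sketch with requests (the Googlebot UA string and Accept-Language value are only examples):

import requests

headers = {
    "User-Agent": "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)",
    "Accept-Language": "en-GB",
}
response = requests.get("https://example.com/", headers=headers, timeout=10)
print(response.status_code, len(response.text))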
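
Redirect chains are found by following each 3XX hop and recording the sequence; a loop shows up as a URL repeating in that sequence. An illustrative trace with requests:

import requests

response = requests.get("http://example.com/", allow_redirects=True, timeout=10)

# response.history holds every intermediate 3XX response in order.
for hop in response.history:
    print(hop.status_code, hop.url, "->", hop.headers.get("Location"))
print(response.status_code, response.url)  # final destination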
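
Custom Source Code Search and Custom Extraction both come down to running a selector or pattern against the fetched HTML. A sketch using lxml for XPath and a regex for a Google Analytics ID (the selector and pattern are assumptions for illustration):

import re

import requests
from lxml import html

page = requests.get("https://example.com/", timeout=10)
tree = html.fromstring(page.content)

# XPath: collect the text of every h1 on the page.
h1_texts = tree.xpath("//h1/text()")

# Regex: look for a Google Analytics / gtag measurement ID in the raw source.
ga_ids = re.findall(r"UA-\d{4,10}-\d{1,4}|G-[A-Z0-9]{6,}", page.text)

print(h1_texts, ga_ids)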
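
Finally, an XML Sitemap is essentially a urlset of loc entries, so a minimal writer needs little more than the standard library:

import xml.etree.ElementTree as ET

urls = ["https://example.com/", "https://example.com/about", "https://example.com/contact"]

# Build the <urlset> with one <url><loc> entry per crawled URL.
urlset = ET.Element("urlset", xmlns="http://www.sitemaps.org/schemas/sitemap/0.9")
for url in urls:
    entry = ET.SubElement(urlset, "url")
    ET.SubElement(entry, "loc").text = url

ET.ElementTree(urlset).write("sitemap.xml", encoding="utf-8", xml_declaration=True)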




NO CRACK, NO PATCH!!

YOU WILL GET A 1 YEAR LEGIT LICENSE IN YOUR OWN NAME!!