WebLight is a web crawler that lets you efficiently maintain XML sitemaps and find markup, CSS, and link problems, so you can keep your sites error-free and fully indexed without sacrificing productivity.
Painlessly Analyze Entire Sites
The heart of WebLight is its fast web crawler, which makes validating and mapping entire sites, even very large ones, easy. You simply tell WebLight where to start and use rules, like those in robots.txt, to control which resources it validates and which links it follows. You can use as many starting URLs and rules as necessary to scan all of the resources you want and none that you don't.
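WebLight's own rule syntax is not reproduced here; for reference, robots.txt-style rules take this form (the paths are illustrative):

```
# Apply to all crawlers
User-agent: *
# Skip everything under /private/ ...
Disallow: /private/
# ...except this subtree (Allow is widely, though not universally, supported)
Allow: /private/public/
```

Rules of this shape let a crawler decide, per URL, whether a resource should be fetched and validated.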
Analyze Everything, Everywhere
Unlike most web crawlers, which only scan public HTML pages, WebLight can scan most commonly used web resources - CSS, (X)HTML, Atom, RSS, and XML sitemaps - on local disks and on public and private websites. It supports proxies, cookies, custom headers, and HTTP authentication (basic and digest). If you can browse it, WebLight can analyze it.
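WebLight handles this configuration for you; purely to illustrate what fetching a protected resource with HTTP basic authentication and a custom header involves, here is a minimal Python sketch using only the standard library (the URL, credentials, and User-Agent string are hypothetical):

```python
import base64
import urllib.request

# Hypothetical protected resource -- replace with a real endpoint.
url = "https://example.com/private/sitemap.xml"

# Basic auth is just base64("user:password") in an Authorization header.
credentials = base64.b64encode(b"user:password").decode("ascii")

request = urllib.request.Request(url, headers={
    "Authorization": f"Basic {credentials}",
    "User-Agent": "example-crawler/1.0",  # a custom header, as a crawler might send
})

# urllib.request.urlopen(request) would perform the fetch; it is omitted
# here because the URL above is illustrative, not a live endpoint.
```

Digest authentication adds a challenge-response step on top of this, which is why tools typically implement it for you rather than leaving it to hand-built headers.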
Efficiently Find CSS, Markup & Link Problems
WebLight is like a link checker that validates CSS, HTML, news feeds, and sitemaps while it crawls. But WebLight doesn't just find non-standard code and broken links; it makes locating, evaluating, categorizing, and filtering problems easy, so you can efficiently maintain websites that comply with your standards.
Easily Maintain Perfect Sitemaps
Maintaining XML sitemaps that help search engines efficiently index your sites is easy with WebLight. Its fast, flexible scanner finds all of your documents linked from news feeds, sitemaps, and other documents. Then you can customize your sitemap, setting the priority, change frequency, and last modified date for URLs using WebLight's spreadsheet-like interface. Finally, when your sitemap is ready, WebLight will ping the search engines so they can index your new pages as soon as possible.
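The priority, change frequency, and last modified date fields correspond to the standard sitemaps.org protocol elements. A minimal sitemap with a single URL looks like this (the URL and values are illustrative):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://example.com/page.html</loc>
    <lastmod>2024-01-15</lastmod>
    <changefreq>weekly</changefreq>
    <priority>0.8</priority>
  </url>
</urlset>
```

Only `loc` is required; `lastmod`, `changefreq`, and `priority` are the optional hints a sitemap editor lets you tune per URL.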