v0.6.5: Single Page Fetch Command
New crawler fetch command to grab a single URL with smart, path-based filenames. Skip the full crawl when you only need one page.
What’s New in v0.6.5
Single Page Fetch
The new crawler fetch command grabs a single URL without crawling the entire site. It is the fastest way to capture one page when you do not need a full site crawl.
```
crawler fetch https://example.com/about/team
```
This creates example-com-about-team.crawl - a predictable, path-based filename instead of the domain-only name that crawler crawl produces.
Smart Filenames
The fetch command generates filenames from the URL path so every page gets a unique, readable output file:
| URL | Output file |
|---|---|
| https://example.com/ | example-com-index.crawl |
| https://example.com/about/team | example-com-about-team.crawl |
| https://example.com/blog/my-post.html | example-com-blog-my-post.crawl |
| https://example.com/a/b?q=1 | example-com-a-b.crawl |
Path segments are joined with hyphens. File extensions like .html and .php are stripped. Query parameters and fragments are ignored. Long paths are truncated at a clean hyphen boundary.
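The naming rules above can be sketched in Python. This is an illustrative reimplementation, not Crawler's actual code; the `crawl_filename` name and the 60-character truncation limit are assumptions made for the sketch:

```python
from urllib.parse import urlparse

MAX_LEN = 60  # assumed limit; the real truncation length is not documented here

def crawl_filename(url: str) -> str:
    """Illustrative sketch of the path-based naming rules."""
    parts = urlparse(url)  # query string and fragment are dropped automatically
    host = parts.netloc.replace(".", "-")
    # Split the path into segments, ignoring empties from leading/trailing slashes
    segments = [s for s in parts.path.split("/") if s]
    if not segments:
        segments = ["index"]  # the root URL maps to "index"
    # Strip a known file extension from the last segment
    last = segments[-1]
    for ext in (".html", ".htm", ".php"):
        if last.endswith(ext):
            segments[-1] = last[: -len(ext)]
            break
    name = "-".join([host, *segments])
    # Truncate overly long names at a clean hyphen boundary
    if len(name) > MAX_LEN:
        name = name[:MAX_LEN].rsplit("-", 1)[0]
    return name + ".crawl"
```

Running it against the table above reproduces each output file name.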
CLI Usage
Basic fetch:
```
crawler fetch https://example.com/pricing
```
Override the output filename:
```
crawler fetch -o pricing-page.crawl https://example.com/pricing
```
Fetch as JSON or sitemap format:
```
crawler fetch --format json https://example.com/pricing
```
Disable content extraction for a smaller, faster result:
```
crawler fetch --no-extract https://example.com/pricing
```
All Options
```
crawler fetch [OPTIONS] <URL>

Arguments:
  <URL>  The URL to fetch

Options:
  -o, --output <PATH>      Output file path
  -f, --format <FORMAT>    Output format: ndjson, json, sitemap [default: ndjson]
      --no-extract         Disable content extraction
      --user-agent <UA>    Custom User-Agent header
  -v, --verbose            Enable verbose logging
  -q, --quiet              Suppress all output except errors
```
When to Use Fetch vs Crawl
Use crawler fetch when you need a single page - checking one URL, grabbing content from a specific article, or testing how the crawler handles a particular page. Use crawler crawl when you need to analyze an entire site.
The output format is identical. Files produced by fetch work with all existing commands - crawler info, crawler seo, and crawler export.
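Because the default format is NDJSON (one JSON object per line), a .crawl file can also be consumed directly from a script. A minimal sketch, assuming only that each line is a standalone JSON record; the exact field names inside each record are not documented in this release note:

```python
import json

def read_crawl(path: str) -> list[dict]:
    """Parse an NDJSON .crawl file into a list of records, one JSON object per line."""
    records = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:  # skip blank lines defensively
                records.append(json.loads(line))
    return records
```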
Who Benefits
- Content teams can quickly capture individual pages for review without waiting for a full site crawl
- Developers can test and debug single-page crawl behavior with a lightweight command
- SEO professionals can spot-check specific pages and run targeted crawler seo analysis on them
Wrap-up
A CMS shouldn't slow you down. Crawler aims to fit into your workflow, whether you're coding content models, collaborating on product copy, or launching updates at 2am.
If that sounds like the kind of tooling you want to use, try Crawler.