NAME

simple HTML scraping from the command line


This is a simple program to extract data from HTML by specifying CSS3 or XPath selectors.

SYNOPSIS

    URL selector selector ...

    # Print the page title
    title
    # The Perl Programming Language -

    # Print links with titles, make the links absolute
    a //a/@href --uri=2

    # Print all links to JPG images, make the links absolute
    'a[href$="jpg"]'

    # Print JSON about Amazon prices
        --format json
        --name "title" #productTitle
        --name "price" #priceblock_ourprice
        --name "deal" #priceblock_dealprice

    # Print JSON about Amazon prices for multiple products
        --format json
        --name "title" #productTitle
        --name "price" #priceblock_ourprice
        --name "deal" #priceblock_dealprice


DESCRIPTION

This program fetches an HTML page and extracts the nodes matched by the given XPath or CSS selectors from it.

If URL is -, input will be read from STDIN.
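To picture what the selector extraction does in the STDIN case, here is a minimal, illustrative Python sketch (this program is written in Perl; the code below is not its implementation) that parses HTML and returns the text of its title element using only the standard library:

```python
from html.parser import HTMLParser

class TitleExtractor(HTMLParser):
    """Collect the text content of the first <title> element."""
    def __init__(self):
        super().__init__()
        self.in_title = False
        self.title = ""

    def handle_starttag(self, tag, attrs):
        # Only capture the first <title> encountered.
        if tag == "title" and not self.title:
            self.in_title = True

    def handle_endtag(self, tag):
        if tag == "title":
            self.in_title = False

    def handle_data(self, data):
        if self.in_title:
            self.title += data

def extract_title(html):
    """Return the trimmed text of the page title, or '' if there is none."""
    parser = TitleExtractor()
    parser.feed(html)
    return parser.title.strip()

# Mirroring "URL is -", the document could come from standard input:
#     extract_title(sys.stdin.read())
```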



OPTIONS

--format

Output format; the default is csv. Valid values are csv and json.
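To make the two output formats concrete, here is a small stdlib-only Python sketch. It is illustrative only: the exact column layout, and whether a header row is emitted, are assumptions of this sketch, not taken from this program.

```python
import csv
import io
import json

# One dict per result row; column names as they might be given via --name.
rows = [{"title": "Example product", "price": "19.99"}]

def to_csv(rows, sep="\t"):
    """Render rows as separator-delimited text (csv, the default format)."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(rows[0]), delimiter=sep)
    writer.writeheader()   # header row is an assumption of this sketch
    writer.writerows(rows)
    return buf.getvalue()

def to_json(rows):
    """Render the same rows as JSON (--format json)."""
    return json.dumps(rows)
```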


URL to fetch. This can be given multiple times to fetch multiple URLs in one run. If this is not given, the first argument on the command line will be taken as the only URL to be fetched.


Add the fetched URL as another column with the given name in the output. If you use CSV output, the URL will always be in the first column.


--name

Name of the output column.


Separator character to use for columns. Default is tab.


--uri

Numbers of the columns to convert into absolute URIs, if the known attributes do not cover everything you want.


Switches off the automatic translation to absolute URIs for known attributes like href and src.
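The absolute-URI translation for attributes like href and src is ordinary relative-URL resolution against the fetched page's URL. A quick Python illustration using the standard urllib.parse.urljoin (the base URL here is made up):

```python
from urllib.parse import urljoin

# Hypothetical fetched page, standing in for the URL argument.
base = "http://example.com/shop/item.html"

# Relative attribute values resolve against the base URL:
print(urljoin(base, "images/photo.jpg"))  # http://example.com/shop/images/photo.jpg
print(urljoin(base, "/cart"))             # http://example.com/cart
print(urljoin(base, "../index.html"))     # http://example.com/index.html
```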


REPOSITORY

The public repository of this module is


SUPPORT

The public support forum of this program is


AUTHOR

Max Maischein


COPYRIGHT AND LICENSE

Copyright 2011-2018 by Max Maischein.

This module is released under the same terms as Perl itself.