fetch_story - fetch a story from the internet


version 0.2201


fetch_story [ --help | --manpage | --list ]

fetch_story [ --basename string ] [ --firefox_cookies filename | --wget_cookies filename ] [ --first_is_toc ] [ --toc | --epub ] [ --use_wget ] [ --yaml ] url [ url ] ...


This fetches a story from the net (including multi-part stories), tidies up the HTML, and saves it. If multiple URLs are given, they are assumed to be the URLs for all the chapters of the story. If one URL is given and the story is deduced to be a multi-chapter story, all chapters of the story are downloaded. In both cases, the chapters are saved to separate files.
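As a sketch of the two modes described above (the URLs are placeholders for whatever story you want to fetch):

```shell
# Single URL: if the page is detected as a multi-chapter story,
# all chapters are downloaded, each to its own file.
fetch_story 'https://example.com/stories/12345'

# Multiple URLs: each URL is treated as one chapter, in the order given.
fetch_story 'https://example.com/stories/12345?chapter=1' \
            'https://example.com/stories/12345?chapter=2'
```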

Fetcher Plugins

In order to tidy the HTML and parse the pages for data about the story, site-specific "Fetcher" plugins have been written for various sites, such as LiveJournal. These plugins can scrape meta-information about the story from the given page, including the URLs of all the chapters of a multi-chapter story.


Options

--basename string

This option uses the given string as the "base" name of the story file(s) instead of constructing the name from the title of the story.
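For example (the URL is a placeholder, and the exact file-naming scheme shown in the comment is an assumption; --basename only sets the base part of the name):

```shell
# Save chapters under names built from "mystory" (e.g. mystory_1.html,
# mystory_2.html, ...) instead of names derived from the story title.
fetch_story --basename mystory 'https://example.com/stories/12345'
```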


--epub

Convert the story file(s) into EPUB format.

--firefox_cookies filename

The name of a Firefox 4+ cookie file to use for cookies, for logging in to restricted sites. This option will be ignored if using wget.
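This lets fetch_story reuse an existing Firefox login session. Firefox 4+ stores its cookies in a cookies.sqlite file inside the profile directory; the profile path below is a placeholder, as it varies by system and profile name:

```shell
# Reuse the cookies from a Firefox profile to access a restricted site.
# (Remember: this option is ignored when --use_wget is in effect.)
fetch_story --firefox_cookies ~/.mozilla/firefox/abc123.default/cookies.sqlite \
    'https://example.com/stories/12345'
```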


--help

Print help and exit.


--list

List the available fetcher plugins.


--manpage

Print the manual page and exit. Requires "perldoc" to be installed.


--quiet

Don't be verbose.


--toc

Build a table-of-contents file for the story.


--use_wget

Use wget to download files rather than the default Perl module, LWP. This may be necessary for some sites (such as LiveJournal) which require cookies to access restricted content, but which also don't play nice with the way LWP handles cookies.
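A typical combination for such sites pairs this option with --wget_cookies, since --firefox_cookies is ignored when wget does the downloading (the URL and cookie-file path are placeholders):

```shell
# Fetch a restricted LiveJournal-style story via wget, supplying
# cookies in wget's own (Netscape) cookie format.
fetch_story --use_wget --wget_cookies ~/cookies.txt \
    'https://example-user.livejournal.com/1234.html'
```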


--verbose

Be verbose. If this option is repeated, the output will be even more verbose.

--wget_cookies filename

The name of a cookie file in the format used by wget (also known as the Netscape cookie format), for logging in to restricted sites. This option works whether one is using wget or not.
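One way to produce a Netscape-format cookie file is to log in with wget itself and save the session cookies. The login URL and form field names below are placeholders for the site's real login form:

```shell
# Log in and save the resulting cookies in Netscape format.
wget --save-cookies cookies.txt --keep-session-cookies \
     --post-data 'user=me&password=secret' \
     -O /dev/null 'https://example.com/login'

# The saved file works with or without --use_wget.
fetch_story --wget_cookies cookies.txt 'https://example.com/stories/12345'
```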


--yaml

Put the meta-data into a YAML file.
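This saves whatever the fetcher plugin scraped (for multi-chapter stories, that includes the chapter URLs; the exact set of fields depends on the plugin). The URL is a placeholder:

```shell
# Download the story and also write its scraped meta-data to a YAML file.
fetch_story --yaml 'https://example.com/stories/12345'
```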