# Auto-generated file -- DO NOT EDIT!!!!!

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

=head1 NAME

Lucy::Analysis::RegexTokenizer - Split a string into tokens.

=head1 SYNOPSIS

    my $whitespace_tokenizer
        = Lucy::Analysis::RegexTokenizer->new( pattern => '\S+' );

    # or...
    my $word_char_tokenizer
        = Lucy::Analysis::RegexTokenizer->new( pattern => '\w+' );

    # or...
    my $apostrophising_tokenizer = Lucy::Analysis::RegexTokenizer->new;

    # Then... once you have a tokenizer, put it into a PolyAnalyzer:
    my $polyanalyzer = Lucy::Analysis::PolyAnalyzer->new(
        analyzers => [ $word_char_tokenizer, $normalizer, $stemmer ], );



=head1 DESCRIPTION

Generically, "tokenizing" is a process of breaking up a string into an
array of "tokens".  For instance, the string "three blind mice" might be
tokenized into "three", "blind", "mice".

Lucy::Analysis::RegexTokenizer decides where to break up the text using a
regular expression compiled from the supplied C<< pattern >>, which should
match one token.  If our source string is...

    "Eats, Shoots and Leaves."

... then a "whitespace tokenizer" with a C<< pattern >> of
C<< \S+ >> produces...

    Eats,
    Shoots
    and
    Leaves.

... while a "word character tokenizer" with a C<< pattern >> of
C<< \w+ >> produces...

    Eats
    Shoots
    and
    Leaves

... the difference being that the word character tokenizer skips over
punctuation as well as whitespace when determining token boundaries.
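Because C<< pattern >> uses ordinary Perl regex syntax, the two strategies
can be compared directly in plain Perl, without Lucy installed (a sketch,
not the tokenizer's actual implementation):

```perl
# Compare the two tokenizing strategies using plain Perl regexes.
my $text = "Eats, Shoots and Leaves.";

# Whitespace tokenizer: runs of non-whitespace characters.
my @whitespace_tokens = $text =~ /\S+/g;   # "Eats,", "Shoots", "and", "Leaves."

# Word character tokenizer: runs of word characters only.
my @word_char_tokens  = $text =~ /\w+/g;   # "Eats", "Shoots", "and", "Leaves"
```

The trailing comma and period survive under C<< \S+ >> but are dropped by
C<< \w+ >>, which is usually what you want for search.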

=head1 CONSTRUCTORS

=head2 new( I<[labeled params]> )

    my $word_char_tokenizer = Lucy::Analysis::RegexTokenizer->new(
        pattern => '\w+',    # required
    );

=over

=item *

B<pattern> - A string specifying a Perl-syntax regular expression
which should match one token.  The default value is
C<< \w+(?:[\x{2019}']\w+)* >>, which matches "it's" as well as
"it" and "O'Henry's" as well as "Henry".

=back
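The default pattern's behavior can be checked with an ordinary Perl match.
The sample string below is illustrative only:

```perl
# The default token pattern, as documented above: a run of word
# characters, optionally continued by apostrophe-joined runs.
my $default = qr/\w+(?:[\x{2019}']\w+)*/;

my @tokens = "It's O'Henry's best work" =~ /$default/g;
# Apostrophes inside a word are kept with the token:
# "It's", "O'Henry's", "best", "work"
```

The C<< \x{2019} >> in the character class admits the Unicode right single
quotation mark as well as the ASCII apostrophe.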





=head1 INHERITANCE

Lucy::Analysis::RegexTokenizer isa L<Lucy::Analysis::Analyzer> isa Clownfish::Obj.


=cut