public class EdgeNGramTokenizer extends NGramTokenizer
This Tokenizer creates n-grams from the beginning edge of an input token.

As of Lucene 4.4, this class supports pre-tokenization and correctly handles supplementary characters.
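An informal usage sketch (not part of the Javadoc itself): the Lucene classes and methods below are the standard token-stream API, while the demo class name, input string, and gram sizes are illustrative.

```java
import java.io.IOException;
import java.io.StringReader;

import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.ngram.EdgeNGramTokenizer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

public class EdgeNGramDemo {
  public static void main(String[] args) throws IOException {
    // Emit edge n-grams of length 1 through 3 from the start of each token.
    Tokenizer tokenizer = new EdgeNGramTokenizer(1, 3);
    tokenizer.setReader(new StringReader("hello"));

    CharTermAttribute term = tokenizer.addAttribute(CharTermAttribute.class);
    tokenizer.reset();
    while (tokenizer.incrementToken()) {
      System.out.println(term.toString()); // expected: "h", "he", "hel"
    }
    tokenizer.end();
    tokenizer.close();
  }
}
```

Note the usual TokenStream contract applies: call reset() before the first incrementToken(), then end() and close() when done.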
Nested classes/interfaces inherited from class org.apache.lucene.util.AttributeSource: AttributeSource.State
Modifier and Type | Field and Description |
---|---|
static int | DEFAULT_MAX_GRAM_SIZE |
static int | DEFAULT_MIN_GRAM_SIZE |
Fields inherited from class org.apache.lucene.analysis.ngram.NGramTokenizer: DEFAULT_MAX_NGRAM_SIZE, DEFAULT_MIN_NGRAM_SIZE
Fields inherited from class org.apache.lucene.analysis.Tokenizer: DEFAULT_TOKEN_ATTRIBUTE_FACTORY
Constructor and Description |
---|
EdgeNGramTokenizer(AttributeFactory factory, int minGram, int maxGram) Creates an EdgeNGramTokenizer that can generate n-grams in the sizes of the given range. |
EdgeNGramTokenizer(int minGram, int maxGram) Creates an EdgeNGramTokenizer that can generate n-grams in the sizes of the given range. |
Methods inherited from class org.apache.lucene.analysis.ngram.NGramTokenizer: end, incrementToken, isTokenChar, reset
Methods inherited from class org.apache.lucene.analysis.Tokenizer: close, correctOffset, setReader
Methods inherited from class org.apache.lucene.util.AttributeSource: addAttribute, addAttributeImpl, captureState, clearAttributes, cloneAttributes, copyTo, endAttributes, equals, getAttribute, getAttributeClassesIterator, getAttributeFactory, getAttributeImplsIterator, hasAttribute, hasAttributes, hashCode, reflectAsString, reflectWith, removeAllAttributes, restoreState, toString
public static final int DEFAULT_MAX_GRAM_SIZE
public static final int DEFAULT_MIN_GRAM_SIZE
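For illustration only (not in the original Javadoc): these constants can be passed directly to the two-argument constructor to get the class's default gram sizes; their concrete values depend on the Lucene release. The class name DefaultSizesExample is a placeholder.

```java
import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.ngram.EdgeNGramTokenizer;

class DefaultSizesExample {
  // Sketch: build a tokenizer using the class's own default gram sizes.
  static Tokenizer withDefaults() {
    return new EdgeNGramTokenizer(
        EdgeNGramTokenizer.DEFAULT_MIN_GRAM_SIZE,
        EdgeNGramTokenizer.DEFAULT_MAX_GRAM_SIZE);
  }
}
```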
public EdgeNGramTokenizer(int minGram, int maxGram)
Parameters:
minGram - the smallest n-gram to generate
maxGram - the largest n-gram to generate

public EdgeNGramTokenizer(AttributeFactory factory, int minGram, int maxGram)
Parameters:
factory - AttributeFactory to use
minGram - the smallest n-gram to generate
maxGram - the largest n-gram to generate
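A sketch of the factory-taking constructor (not from the Javadoc): it behaves like the two-argument form, except that attribute instances are created through the supplied AttributeFactory. AttributeFactory.DEFAULT_ATTRIBUTE_FACTORY is Lucene's standard default factory; the gram sizes and the class name FactoryExample are arbitrary placeholders.

```java
import org.apache.lucene.analysis.ngram.EdgeNGramTokenizer;
import org.apache.lucene.util.AttributeFactory;

class FactoryExample {
  static EdgeNGramTokenizer newTokenizer() {
    // Same n-gram behavior as new EdgeNGramTokenizer(2, 5), but attribute
    // implementations are obtained from the given AttributeFactory.
    return new EdgeNGramTokenizer(AttributeFactory.DEFAULT_ATTRIBUTE_FACTORY, 2, 5);
  }
}
```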