As far as ES fundamentals go, this is the last article for now; after this I will move on to MQ. This article is a brief look at ES analyzers. The most important skill is building custom analyzers for your own requirements, which you can explore on your own; this is only the basics. The other important topic is Chinese analyzers, and knowing the few commonly used ones is enough.
Every analyzer, whether built in or custom, is made up of exactly one tokenizer, zero or more token filters, and zero or more character filters.
In practice people mostly talk about "tokenizers": because the filters are used far less often, the term analyzer has become almost interchangeable with tokenizer. Tokenization results are also frequently used together with highlighting.
In Elasticsearch, the analyzer's job is to analyze text data, which means tokenizing and normalizing it. The workflow is: character filters first pre-process the text, for example removing HTML markup from the raw text or replacing text according to character mappings; the tokenizer then splits the text into terms; finally, token filters normalize the terms. When Elasticsearch receives a query, it also runs the query conditions through an analyzer, rebuilds the query from the analysis result, and searches the inverted index to serve the full-text search request.
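As a quick illustration of the three stages working together, the sketch below (the sample text is made up, not from the original article) strips HTML with a character filter, splits with the standard tokenizer, and lowercases with a token filter in a single _analyze call; the resulting tokens should be i, am, from, cn:

GET /_analyze
{
  "char_filter": ["html_strip"],     # stage 1: character filter removes the <p> tags
  "tokenizer": "standard",           # stage 2: tokenizer splits the text into terms
  "filter": ["lowercase"],           # stage 3: token filter lowercases each term
  "text": "<p>I am from CN</p>"
}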
Analysis happens at two points in time: when the inverted index is created (index time) and when a search runs (search time). Only if the same analysis rules are applied at both points will searches return accurate results.
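To make that index-time/search-time pairing explicit, here is a minimal sketch (the index and field names are hypothetical) that pins both analyzers on a field; search_analyzer defaults to analyzer, so listing it here is only for clarity:

PUT my_index
{
  "mappings": {
    "properties": {
      "title": {
        "type": "text",
        "analyzer": "standard",          # used when building the inverted index
        "search_analyzer": "standard"    # used when analyzing the query string
      }
    }
  }
}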
Elasticsearch ships with several built-in character filters (html_strip, mapping, and pattern_replace) that can be used to build custom analyzers.
Reference: Elasticsearch: 分析器中的 character filter 介绍 (an introduction to character filters in analyzers).
The html_strip character filter strips out HTML elements such as <b> and decodes HTML entities such as &amp;.
POST _analyze
{
  "text": "<h1>Where is my cheese?</h1>",
  "tokenizer": "standard",
  "char_filter": ["html_strip"]
}

// Response:
{
  "tokens": [
    { "token": "Where",  "start_offset": 4,  "end_offset": 9,  "type": "<ALPHANUM>", "position": 0 },
    { "token": "is",     "start_offset": 10, "end_offset": 12, "type": "<ALPHANUM>", "position": 1 },
    { "token": "my",     "start_offset": 13, "end_offset": 15, "type": "<ALPHANUM>", "position": 2 },
    { "token": "cheese", "start_offset": 16, "end_offset": 22, "type": "<ALPHANUM>", "position": 3 }
  ]
}
The character filter simply strips the HTML tags from the input field, but sometimes certain tags need to be kept. For example, the business requirement may be to remove <h1> tags from a sentence while keeping the preformatted (<pre>) tags. In that case you need to create the index with a custom analyzer configured accordingly.
PUT INDEX_NAME
{
  "settings": {
    "analysis": {
      "analyzer": {                              # define the analyzer here
        "ANALYZER_NAME": {                       # analyzer name
          "tokenizer": "TOKENIZER",              # choose a tokenizer
          "char_filter": ["CHAR_FILTER_NAME"]    # attach the character filter
        }
      },
      "char_filter": {                           # define the character filter here
        "CHAR_FILTER_NAME": {                    # character filter name
          "type": "html_strip",                  # filter type
          "escaped_tags": ["pre", "br", "..."]   # HTML tags to keep
        }
      }
    }
  }
}
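Filling in the placeholders, a possible configuration (all names below are hypothetical) that strips every tag except <pre> and <br> could look like this, followed by a test request against the new index; the <h1> tags should be removed while the <pre> tags survive in the output:

PUT my_html_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "keep_pre_analyzer": {
          "tokenizer": "keyword",                  # keyword tokenizer keeps the whole value, so the effect is easy to see
          "char_filter": ["strip_but_keep_pre"]
        }
      },
      "char_filter": {
        "strip_but_keep_pre": {
          "type": "html_strip",
          "escaped_tags": ["pre", "br"]            # these tags are not stripped
        }
      }
    }
  }
}

POST my_html_index/_analyze
{
  "analyzer": "keep_pre_analyzer",
  "text": "<h1>Hello</h1> <pre>world</pre>"
}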
The mapping character filter replaces every occurrence of the specified strings with the specified replacements.
POST _analyze
{
  "text": "I am from CN",
  "char_filter": [
    {
      "type": "mapping",
      "mappings": ["CN => 中国"]
    }
  ]
}

// Response:
{
  "tokens": [
    { "token": "I am from 中国", "start_offset": 0, "end_offset": 12, "type": "word", "position": 0 }
  ]
}
Creating an index with a custom analyzer:
PUT INDEX_NAME
{
  "settings": {
    "analysis": {
      "analyzer": {
        "ANALYZER_NAME": {                       # analyzer name
          "tokenizer": "TOKENIZER",              # choose a tokenizer
          "char_filter": ["CHAR_FILTER_NAME"]    # attach the character filter
        }
      },
      "char_filter": {
        "CHAR_FILTER_NAME": {                    # character filter name
          "type": "mapping",                     # filter type
          "mappings": [                          # replacement rules, e.g. "CN => 中国"
            "..."
          ]
        }
      }
    }
  }
}
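A concrete version of this template (index, analyzer, and filter names are made up) that bakes the CN => 中国 rule into an index might look like this:

PUT my_mapping_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "cn_mapping_analyzer": {
          "tokenizer": "standard",
          "char_filter": ["cn_to_chinese"]
        }
      },
      "char_filter": {
        "cn_to_chinese": {
          "type": "mapping",
          "mappings": ["CN => 中国", "US => 美国"]    # replacements applied before tokenizing
        }
      }
    }
  }
}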
The pattern_replace character filter replaces any characters that match a regular expression with the specified replacement.
PUT INDEX_NAME
{
  "settings": {
    "analysis": {
      "analyzer": {
        "ANALYZER_NAME": {
          "tokenizer": "keyword",
          "char_filter": ["CHAR_FILTER_NAME"]
        }
      },
      "char_filter": {
        "CHAR_FILTER_NAME": {
          "type": "pattern_replace",         # filter type
          "pattern": "REGEX_TO_MATCH",       # regular expression to match
          "replacement": "REPLACEMENT"       # replacement string
        }
      }
    }
  }
}
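Since the template above has only placeholders, here is a minimal sketch of the same idea tested directly through _analyze (the input is made up): dashes between digits are rewritten as underscores before tokenizing, so the single returned token should be 123_456_789.

GET /_analyze
{
  "tokenizer": "keyword",
  "char_filter": [
    {
      "type": "pattern_replace",
      "pattern": "-",              # regex to match
      "replacement": "_"           # replacement string
    }
  ],
  "text": "123-456-789"
}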
A tokenizer is the component that actually splits the text: following a predefined set of rules, it breaks the original document into smaller-grained terms, and the granularity depends on the tokenizer's rules.
Setting the analyzer for a field:
PUT /INDEX_NAME
{
  "settings": {},        # custom analyzers can be defined under settings.analysis
  "mappings": {
    "properties": {
      "FIELD_NAME": {
        "type": "text",
        "analyzer": "standard|letter|whitespace|lowercase|..."   # choose the analyzer for this field
      }
    }
  }
}
Below is a brief introduction to a few commonly used built-in tokenizers:
The standard tokenizer (type standard) works for most European languages; it splits documents using the Unicode Text Segmentation algorithm.
// Test:
GET /_analyze
{
  "tokenizer": "standard",
  "text": "I am from CN"
}

// Result:
{
  "tokens": [
    { "token": "I",    "start_offset": 0,  "end_offset": 1,  "type": "<ALPHANUM>", "position": 0 },
    { "token": "am",   "start_offset": 2,  "end_offset": 4,  "type": "<ALPHANUM>", "position": 1 },
    { "token": "from", "start_offset": 5,  "end_offset": 9,  "type": "<ALPHANUM>", "position": 2 },
    { "token": "CN",   "start_offset": 10, "end_offset": 12, "type": "<ALPHANUM>", "position": 3 }
  ]
}
The letter tokenizer (type letter) splits text at non-letter positions, that is, wherever adjacent words are separated by non-letter characters (such as spaces or commas). It is useful for most European languages.
// Test:
GET /_analyze
{
  "tokenizer": "letter",
  "text": "I'm from CN"
}

// Result:
{
  "tokens": [
    { "token": "I",    "start_offset": 0, "end_offset": 1,  "type": "word", "position": 0 },
    { "token": "m",    "start_offset": 2, "end_offset": 3,  "type": "word", "position": 1 },
    { "token": "from", "start_offset": 4, "end_offset": 8,  "type": "word", "position": 2 },
    { "token": "CN",   "start_offset": 9, "end_offset": 11, "type": "word", "position": 3 }
  ]
}
The whitespace tokenizer (type whitespace) splits text on whitespace.
GET /_analyze
{
  "tokenizer": "whitespace",
  "text": "I'm from CN"
}

// Result:
{
  "tokens": [
    { "token": "I'm",  "start_offset": 0, "end_offset": 3,  "type": "word", "position": 0 },
    { "token": "from", "start_offset": 4, "end_offset": 8,  "type": "word", "position": 1 },
    { "token": "CN",   "start_offset": 9, "end_offset": 11, "type": "word", "position": 2 }
  ]
}
The lowercase tokenizer (type lowercase) splits text at non-letter positions and lowercases every resulting term. Functionally it is the combination of the letter tokenizer and the lower case token filter, but it is more efficient because it does both jobs in a single pass.
GET /_analyze
{
  "tokenizer": "lowercase",
  "text": "I'm from CN"
}

// Result:
{
  "tokens": [
    { "token": "i",    "start_offset": 0, "end_offset": 1,  "type": "word", "position": 0 },
    { "token": "m",    "start_offset": 2, "end_offset": 3,  "type": "word", "position": 1 },
    { "token": "from", "start_offset": 4, "end_offset": 8,  "type": "word", "position": 2 },
    { "token": "cn",   "start_offset": 9, "end_offset": 11, "type": "word", "position": 3 }
  ]
}
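To verify the equivalence mentioned above, the same input can be run through the letter tokenizer plus the lowercase token filter; the tokens should match the lowercase tokenizer's output (i, m, from, cn):

GET /_analyze
{
  "tokenizer": "letter",
  "filter": ["lowercase"],
  "text": "I'm from CN"
}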
The classic tokenizer (type classic) splits text based on grammar rules. It works well for English documents and is particularly good at handling acronyms, company names, email addresses, and Internet host names.
GET /_analyze
{
  "tokenizer": "classic",
  "text": "I'm from CN"
}

// Result:
{
  "tokens": [
    { "token": "I'm",  "start_offset": 0, "end_offset": 3,  "type": "<APOSTROPHE>", "position": 0 },
    { "token": "from", "start_offset": 4, "end_offset": 8,  "type": "<ALPHANUM>",   "position": 1 },
    { "token": "CN",   "start_offset": 9, "end_offset": 11, "type": "<ALPHANUM>",   "position": 2 }
  ]
}
There are several other tokenizers that are not listed here because they are rarely used. For the rest, see the official documentation: Tokenizer reference.
The asciifolding token filter converts alphabetic, numeric, and symbolic characters that are not in the Basic Latin Unicode block (the first 127 ASCII characters) to their ASCII equivalents, if such equivalents exist. For example, the filter changes à to a.
GET /_analyze
{
  "tokenizer": "standard",
  "filter": ["asciifolding"],
  "text": "açaí à la carte"
}

// Response:
{
  "tokens": [
    { "token": "acai",  "start_offset": 0,  "end_offset": 4,  "type": "<ALPHANUM>", "position": 0 },
    { "token": "a",     "start_offset": 5,  "end_offset": 6,  "type": "<ALPHANUM>", "position": 1 },
    { "token": "la",    "start_offset": 7,  "end_offset": 9,  "type": "<ALPHANUM>", "position": 2 },
    { "token": "carte", "start_offset": 10, "end_offset": 15, "type": "<ALPHANUM>", "position": 3 }
  ]
}
The length token filter removes tokens shorter or longer than the specified character lengths; it is configurable and can be set in the index settings. For example, you can use the length filter to exclude tokens shorter than 2 characters or longer than 5 characters.
GET _analyze
{
  "tokenizer": "whitespace",
  "filter": [
    {
      "type": "length",
      "min": 0,
      "max": 4
    }
  ],
  "text": "the quick brown fox jumps over the lazy dog"
}

// Response:
{
  "tokens": [
    { "token": "the",  "start_offset": 0,  "end_offset": 3,  "type": "word", "position": 0 },
    { "token": "fox",  "start_offset": 16, "end_offset": 19, "type": "word", "position": 3 },
    { "token": "over", "start_offset": 26, "end_offset": 30, "type": "word", "position": 5 },
    { "token": "the",  "start_offset": 31, "end_offset": 34, "type": "word", "position": 6 },
    { "token": "lazy", "start_offset": 35, "end_offset": 39, "type": "word", "position": 7 },
    { "token": "dog",  "start_offset": 40, "end_offset": 43, "type": "word", "position": 8 }
  ]
}
The lowercase token filter normalizes tokens to lowercase. Through its language parameter it also supports the Greek, Irish, and Turkish lowercase token filters.
GET _analyze
{
  "tokenizer": "standard",
  "filter": ["lowercase"],
  "text": "THE Quick FoX JUMPs"
}

// Response:
{
  "tokens": [
    { "token": "the",   "start_offset": 0,  "end_offset": 3,  "type": "<ALPHANUM>", "position": 0 },
    { "token": "quick", "start_offset": 4,  "end_offset": 9,  "type": "<ALPHANUM>", "position": 1 },
    { "token": "fox",   "start_offset": 10, "end_offset": 13, "type": "<ALPHANUM>", "position": 2 },
    { "token": "jumps", "start_offset": 14, "end_offset": 19, "type": "<ALPHANUM>", "position": 3 }
  ]
}
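The language parameter is not shown above; as a minimal sketch of a language-specific variant (Greek here, with a made-up sample string), the filter can be defined inline in the same way:

GET _analyze
{
  "tokenizer": "standard",
  "filter": [
    {
      "type": "lowercase",
      "language": "greek"        # greek, irish, and turkish are supported
    }
  ],
  "text": "ΚΑΛΗΣΠΕΡΑ ΚΟΣΜΕ"
}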
The uppercase token filter normalizes tokens to uppercase.
GET _analyze
{
  "tokenizer": "standard",
  "filter": ["uppercase"],
  "text": "the Quick FoX JUMPs"
}

// Response:
{
  "tokens": [
    { "token": "THE",   "start_offset": 0,  "end_offset": 3,  "type": "<ALPHANUM>", "position": 0 },
    { "token": "QUICK", "start_offset": 4,  "end_offset": 9,  "type": "<ALPHANUM>", "position": 1 },
    { "token": "FOX",   "start_offset": 10, "end_offset": 13, "type": "<ALPHANUM>", "position": 2 },
    { "token": "JUMPS", "start_offset": 14, "end_offset": 19, "type": "<ALPHANUM>", "position": 3 }
  ]
}
The remaining token filters are not listed one by one; see the official documentation for details.
For Chinese analysis, this article mainly covers the IK analyzer and the Pinyin analyzer.
Installation reference: Elasticsearch: IK 中文分词器 (the IK Chinese analyzer).
IK Analyzer is an open-source, lightweight Chinese word-segmentation toolkit written in Java.
IK offers two segmentation granularities:
ik_max_word: splits the text at the finest granularity. For example, 「我爱北京天安门」 is split into 「我、爱、北京、天安门、天安、门」, exhausting every possible combination.
ik_smart: performs a coarse-grained split. For example, 「我爱北京天安门」 is split into 「我、爱、北京、天安门」.

GET /_analyze
{
  "analyzer": "ik_max_word|ik_smart",
  "text": "我爱北京天安门"
}
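A common pattern (assuming the IK plugin is installed; the index and field names below are made up) is to index with the fine-grained ik_max_word and search with the coarser ik_smart:

PUT my_cn_index
{
  "mappings": {
    "properties": {
      "content": {
        "type": "text",
        "analyzer": "ik_max_word",        # fine-grained segmentation at index time
        "search_analyzer": "ik_smart"     # coarse-grained segmentation at search time
      }
    }
  }
}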
Installation and usage reference: Elasticsearch: Pinyin 分词器 (the Pinyin analyzer).
The Pinyin analyzer lets users type pinyin and still find the related keywords. For example, in a shopping-site search, typing yonghui can match 永辉.
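Assuming the pinyin plugin is installed, it registers a pinyin analyzer that can be tested directly. With default settings the output for 永辉 typically contains the full pinyin of each character plus a first-letter abbreviation (for example yong, hui, yh), though the exact tokens depend on the plugin version and its options:

GET /_analyze
{
  "analyzer": "pinyin",
  "text": "永辉"
}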
Installation and usage reference: Elasticsearch: 简体繁体转换分词器 - STConvert analysis (a Simplified/Traditional Chinese conversion analyzer).