Elasticsearch Learning (7): Elasticsearch Analysis


I. Analysis

1. Analysis

  • First, a block of text is tokenized into the individual terms that will go into the inverted index.
  • Then those terms are normalized into a standard form, which improves their "searchability" (recall).

Analysis is performed by an analyzer.
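Both steps can be observed directly with the _analyze API. A minimal sketch using the built-in standard analyzer (the sample text is made up):

POST _analyze

{
  "analyzer": "standard",
  "text": "The QUICK Brown-Foxes jumped!"
}

This should return the terms the, quick, brown, foxes, jumped: the text is split on word boundaries, the punctuation is dropped, and every term is lowercased.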

2. Analyzers

An analyzer is a pipeline of three kinds of components (they can also be combined ad hoc, as in the sketch after this list):

  • Character filters: pre-process the raw string (for example, removing extra whitespace) so that it is "cleaner" before it is tokenized. An analyzer may contain zero or more character filters.
  • Tokenizer: breaks the string into individual terms (for example, splitting on whitespace to get single words). An analyzer must contain exactly one tokenizer.
  • Token filters: every term then passes through the token filters, which may modify, add, or remove tokens.
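All three stages can be combined on the fly in a single _analyze request, without defining an analyzer first. A minimal sketch (the sample text is made up):

POST _analyze

{
  "char_filter": [ "html_strip" ],
  "tokenizer": "standard",
  "filter": [ "lowercase" ],
  "text": "<p>Hello WORLD</p>"
}

Here html_strip removes the <p> tags, the standard tokenizer splits the remaining text into Hello and WORLD, and the lowercase filter turns them into hello and world.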

An analyzer is only used when a field is a full-text field; when a field holds an exact value, the field is not analyzed.

  • Full-text fields: e.g. string, text
  • Exact values: e.g. numbers, dates
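The distinction is made in the field mapping. A minimal sketch, assuming Elasticsearch 7+ (the index and field names are made up): title is a full-text field and will be analyzed, while status and created_at hold exact values and will not.

PUT myindex

{
  "mappings": {
    "properties": {
      "title":      { "type": "text" },
      "status":     { "type": "keyword" },
      "created_at": { "type": "date" }
    }
  }
}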

II. Custom Analyzers

1. char_filter (character filters)

  • html_strip (strips HTML tags) Parameters:
    • escaped_tags: an array of HTML tags that should not be removed from the original text
  • mapping (replaces characters according to a custom mapping) Parameters:
    • mappings: an array of mappings, each entry in the form key => value
    • mappings_path: an absolute path, or a path relative to the config directory, of a UTF-8 encoded file in which every line is one key => value mapping
  • pattern_replace (matches characters with a regular expression and replaces them with the specified string; see the sketch after this list) Parameters:
    • pattern: the regular expression to match
    • replacement: the replacement string
    • flags: Java regular-expression flags
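A hedged sketch of pattern_replace (the pattern and sample text are chosen only for illustration): the character filter below rewrites hyphen-separated digit groups before tokenization.

POST _analyze

{
  "char_filter": [
    {
      "type": "pattern_replace",
      "pattern": "(\\d+)-(?=\\d)",
      "replacement": "$1_"
    }
  ],
  "tokenizer": "standard",
  "text": "My credit card is 123-456-789"
}

The filter turns 123-456-789 into 123_456_789, so the standard tokenizer keeps it as a single term instead of splitting it at the hyphens.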

2. tokenizer (tokenizers)

Only a few commonly used tokenizers are listed here; see the official documentation for the full list. A comparison of two of them follows after the list.

  • standard (the standard tokenizer, used by default. It splits text on the word boundaries defined by the Unicode Consortium and removes most punctuation, which makes it a good choice for most languages.) Parameters:
    • max_token_length: the maximum token length. If a token exceeds this length, it is split. Defaults to 255.
  • letter (splits whenever it encounters a character that is not a letter) Parameters: none
  • lowercase (like letter, but also lowercases every resulting token) Parameters: none
  • whitespace (splits on whitespace) Parameters: none
  • keyword (effectively no tokenization: it outputs whatever it receives as a single token) Parameters:
    • buffer_size: the buffer size, 256 by default. The buffer grows in increments of this size until all the text has been consumed. Changing this setting is not recommended.
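To see how the tokenizers differ, run the same string through whitespace and standard (a minimal sketch; the sample text is made up):

POST _analyze

{
  "tokenizer": "whitespace",
  "text": "The 2 QUICK Brown-Foxes!"
}

The whitespace tokenizer returns The, 2, QUICK, Brown-Foxes! exactly as written, keeping case and punctuation. Repeating the request with "tokenizer": "standard" returns The, 2, QUICK, Brown, Foxes instead, because the hyphen is a word boundary and the trailing punctuation is dropped.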

3. filter (token filters)

There are too many token filters to cover one by one here; see the official documentation. A small example of two common ones follows.
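As one example, the built-in lowercase and asciifolding filters are often chained after a tokenizer. A minimal sketch (the sample text is made up; asciifolding is just one of the many built-in filters):

POST _analyze

{
  "tokenizer": "standard",
  "filter": [ "lowercase", "asciifolding" ],
  "text": "Zoë's Café"
}

lowercase produces zoë's and café, and asciifolding then folds the accented characters, leaving zoe's and cafe.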

4. Defining a custom analyzer

Create an index whose settings combine the components above into a custom analyzer:

PUT newindex

{
  "settings": {
    "analysis": {
      "char_filter": {
        "my_char_filter": {
          "type": "mapping",
          "mappings": [
            "&=>and",
            ":)=>happy",
            ":(=>sad"
          ]
        }
      },
      "tokenizer": {
        "my_tokenizer": {
          "type": "standard",
          "max_token_length": 5
        }
      },
      "filter": {
        "my_filter": {
          "type": "stop",
          "stopwords": [
            "the",
            "a"
          ]
        }
      },
      "analyzer": {
        "my_analyzer": {
          "type": "custom",
          "char_filter": [
            "html_strip",
            "my_char_filter"
          ],
          "tokenizer": "my_tokenizer",
          "filter": [
            "lowercase",
            "my_filter"
          ]
        }
      }
    }
  }
}

Then analyze a string with the custom analyzer:

POST newindex/_analyze

{
  "analyzer": "my_analyzer",
  "text": "<span>If you are :(, I will be :).</span> The people & a banana",
  "explain": true
}

The response shows every step of the analysis. Note that my_tokenizer splits people into peopl and e, and banana into banan and a, because max_token_length is 5, and that my_filter then removes the stopwords the and a:

{
  "detail": {
    "custom_analyzer": true,
    "charfilters": [
      {
        "name": "html_strip",
        "filtered_text": [
          "if you are :(, I will be :). the people & a banana"
        ]
      },
      {
        "name": "my_char_filter",
        "filtered_text": [
          "if you are sad, I will be happy. the people and a banana"
        ]
      }
    ],
    "tokenizer": {
      "name": "my_tokenizer",
      "tokens": [
        {
          "token": "if",
          "start_offset": 6,
          "end_offset": 8,
          "type": "<ALPHANUM>",
          "position": 0,
          "bytes": "[69 66]",
          "positionLength": 1
        },
        {
          "token": "you",
          "start_offset": 9,
          "end_offset": 12,
          "type": "<ALPHANUM>",
          "position": 1,
          "bytes": "[79 6f 75]",
          "positionLength": 1
        },
        {
          "token": "are",
          "start_offset": 13,
          "end_offset": 16,
          "type": "<ALPHANUM>",
          "position": 2,
          "bytes": "[61 72 65]",
          "positionLength": 1
        },
        {
          "token": "sad",
          "start_offset": 17,
          "end_offset": 19,
          "type": "<ALPHANUM>",
          "position": 3,
          "bytes": "[73 61 64]",
          "positionLength": 1
        },
        {
          "token": "I",
          "start_offset": 21,
          "end_offset": 22,
          "type": "<ALPHANUM>",
          "position": 4,
          "bytes": "[49]",
          "positionLength": 1
        },
        {
          "token": "will",
          "start_offset": 23,
          "end_offset": 27,
          "type": "<ALPHANUM>",
          "position": 5,
          "bytes": "[77 69 6c 6c]",
          "positionLength": 1
        },
        {
          "token": "be",
          "start_offset": 28,
          "end_offset": 30,
          "type": "<ALPHANUM>",
          "position": 6,
          "bytes": "[62 65]",
          "positionLength": 1
        },
        {
          "token": "happy",
          "start_offset": 31,
          "end_offset": 33,
          "type": "<ALPHANUM>",
          "position": 7,
          "bytes": "[68 61 70 70 79]",
          "positionLength": 1
        },
        {
          "token": "the",
          "start_offset": 42,
          "end_offset": 45,
          "type": "<ALPHANUM>",
          "position": 8,
          "bytes": "[74 68 65]",
          "positionLength": 1
        },
        {
          "token": "peopl",
          "start_offset": 46,
          "end_offset": 51,
          "type": "<ALPHANUM>",
          "position": 9,
          "bytes": "[70 65 6f 70 6c]",
          "positionLength": 1
        },
        {
          "token": "e",
          "start_offset": 51,
          "end_offset": 52,
          "type": "<ALPHANUM>",
          "position": 10,
          "bytes": "[65]",
          "positionLength": 1
        },
        {
          "token": "and",
          "start_offset": 53,
          "end_offset": 54,
          "type": "<ALPHANUM>",
          "position": 11,
          "bytes": "[61 6e 64]",
          "positionLength": 1
        },
        {
          "token": "a",
          "start_offset": 55,
          "end_offset": 56,
          "type": "<ALPHANUM>",
          "position": 12,
          "bytes": "[61]",
          "positionLength": 1
        },
        {
          "token": "banan",
          "start_offset": 57,
          "end_offset": 62,
          "type": "<ALPHANUM>",
          "position": 13,
          "bytes": "[62 61 6e 61 6e]",
          "positionLength": 1
        },
        {
          "token": "a",
          "start_offset": 62,
          "end_offset": 63,
          "type": "<ALPHANUM>",
          "position": 14,
          "bytes": "[61]",
          "positionLength": 1
        }
      ]
    },
    "tokenfilters": [
      {
        "name": "lowercase",
        "tokens": [
          {
            "token": "if",
            "start_offset": 6,
            "end_offset": 8,
            "type": "<ALPHANUM>",
            "position": 0,
            "bytes": "[69 66]",
            "positionLength": 1
          },
          {
            "token": "you",
            "start_offset": 9,
            "end_offset": 12,
            "type": "<ALPHANUM>",
            "position": 1,
            "bytes": "[79 6f 75]",
            "positionLength": 1
          },
          {
            "token": "are",
            "start_offset": 13,
            "end_offset": 16,
            "type": "<ALPHANUM>",
            "position": 2,
            "bytes": "[61 72 65]",
            "positionLength": 1
          },
          {
            "token": "sad",
            "start_offset": 17,
            "end_offset": 19,
            "type": "<ALPHANUM>",
            "position": 3,
            "bytes": "[73 61 64]",
            "positionLength": 1
          },
          {
            "token": "i",
            "start_offset": 21,
            "end_offset": 22,
            "type": "<ALPHANUM>",
            "position": 4,
            "bytes": "[69]",
            "positionLength": 1
          },
          {
            "token": "will",
            "start_offset": 23,
            "end_offset": 27,
            "type": "<ALPHANUM>",
            "position": 5,
            "bytes": "[77 69 6c 6c]",
            "positionLength": 1
          },
          {
            "token": "be",
            "start_offset": 28,
            "end_offset": 30,
            "type": "<ALPHANUM>",
            "position": 6,
            "bytes": "[62 65]",
            "positionLength": 1
          },
          {
            "token": "happy",
            "start_offset": 31,
            "end_offset": 33,
            "type": "<ALPHANUM>",
            "position": 7,
            "bytes": "[68 61 70 70 79]",
            "positionLength": 1
          },
          {
            "token": "the",
            "start_offset": 42,
            "end_offset": 45,
            "type": "<ALPHANUM>",
            "position": 8,
            "bytes": "[74 68 65]",
            "positionLength": 1
          },
          {
            "token": "peopl",
            "start_offset": 46,
            "end_offset": 51,
            "type": "<ALPHANUM>",
            "position": 9,
            "bytes": "[70 65 6f 70 6c]",
            "positionLength": 1
          },
          {
            "token": "e",
            "start_offset": 51,
            "end_offset": 52,
            "type": "<ALPHANUM>",
            "position": 10,
            "bytes": "[65]",
            "positionLength": 1
          },
          {
            "token": "and",
            "start_offset": 53,
            "end_offset": 54,
            "type": "<ALPHANUM>",
            "position": 11,
            "bytes": "[61 6e 64]",
            "positionLength": 1
          },
          {
            "token": "a",
            "start_offset": 55,
            "end_offset": 56,
            "type": "<ALPHANUM>",
            "position": 12,
            "bytes": "[61]",
            "positionLength": 1
          },
          {
            "token": "banan",
            "start_offset": 57,
            "end_offset": 62,
            "type": "<ALPHANUM>",
            "position": 13,
            "bytes": "[62 61 6e 61 6e]",
            "positionLength": 1
          },
          {
            "token": "a",
            "start_offset": 62,
            "end_offset": 63,
            "type": "<ALPHANUM>",
            "position": 14,
            "bytes": "[61]",
            "positionLength": 1
          }
        ]
      },
      {
        "name": "my_filter",
        "tokens": [
          {
            "token": "if",
            "start_offset": 6,
            "end_offset": 8,
            "type": "<ALPHANUM>",
            "position": 0,
            "bytes": "[69 66]",
            "positionLength": 1
          },
          {
            "token": "you",
            "start_offset": 9,
            "end_offset": 12,
            "type": "<ALPHANUM>",
            "position": 1,
            "bytes": "[79 6f 75]",
            "positionLength": 1
          },
          {
            "token": "are",
            "start_offset": 13,
            "end_offset": 16,
            "type": "<ALPHANUM>",
            "position": 2,
            "bytes": "[61 72 65]",
            "positionLength": 1
          },
          {
            "token": "sad",
            "start_offset": 17,
            "end_offset": 19,
            "type": "<ALPHANUM>",
            "position": 3,
            "bytes": "[73 61 64]",
            "positionLength": 1
          },
          {
            "token": "i",
            "start_offset": 21,
            "end_offset": 22,
            "type": "<ALPHANUM>",
            "position": 4,
            "bytes": "[69]",
            "positionLength": 1
          },
          {
            "token": "will",
            "start_offset": 23,
            "end_offset": 27,
            "type": "<ALPHANUM>",
            "position": 5,
            "bytes": "[77 69 6c 6c]",
            "positionLength": 1
          },
          {
            "token": "be",
            "start_offset": 28,
            "end_offset": 30,
            "type": "<ALPHANUM>",
            "position": 6,
            "bytes": "[62 65]",
            "positionLength": 1
          },
          {
            "token": "happy",
            "start_offset": 31,
            "end_offset": 33,
            "type": "<ALPHANUM>",
            "position": 7,
            "bytes": "[68 61 70 70 79]",
            "positionLength": 1
          },
          {
            "token": "peopl",
            "start_offset": 46,
            "end_offset": 51,
            "type": "<ALPHANUM>",
            "position": 9,
            "bytes": "[70 65 6f 70 6c]",
            "positionLength": 1
          },
          {
            "token": "e",
            "start_offset": 51,
            "end_offset": 52,
            "type": "<ALPHANUM>",
            "position": 10,
            "bytes": "[65]",
            "positionLength": 1
          },
          {
            "token": "and",
            "start_offset": 53,
            "end_offset": 54,
            "type": "<ALPHANUM>",
            "position": 11,
            "bytes": "[61 6e 64]",
            "positionLength": 1
          },
          {
            "token": "banan",
            "start_offset": 57,
            "end_offset": 62,
            "type": "<ALPHANUM>",
            "position": 13,
            "bytes": "[62 61 6e 61 6e]",
            "positionLength": 1
          }
        ]
      }
    ]
  }
}
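To actually use my_analyzer at index and search time, reference it from a field mapping. A minimal sketch, assuming Elasticsearch 7+ (the field name content is made up):

PUT newindex/_mapping

{
  "properties": {
    "content": {
      "type": "text",
      "analyzer": "my_analyzer"
    }
  }
}

Documents indexed into content, and full-text queries against it, will then go through the same char_filter, tokenizer, and filter pipeline shown above.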