Elasticsearch 7.6.2 Installation (RPM)
Elasticsearch
Installation Preparation
Server Preparation
Three servers were prepared for the Elasticsearch cluster, with the following IP addresses:
Server IP | OS Version |
---|---|
192.168.1.107 | CentOS 6.5 |
192.168.1.108 | CentOS 6.5 |
192.168.1.109 | CentOS 6.5 |
Adjusting System Parameters
Kernel Parameters
vim /etc/sysctl.conf
# Add the following
fs.file-max = 65536
vm.max_map_count = 262144
# Apply the changes
sysctl -p
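The new values can be read back to confirm they are in effect:
sysctl fs.file-max
sysctl vm.max_map_count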
Resource Limits
vim /etc/security/limits.conf
# Modify/add the following
* soft nofile 65536
* hard nofile 65536
* soft nproc 4096
* hard nproc 4096
elasticsearch soft memlock unlimited
elasticsearch hard memlock unlimited
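These limits only apply to new sessions. After logging in again (and, for the memlock entries, once the elasticsearch user exists after the RPM install below), they can be checked with:
# open-file and process limits of the current shell
ulimit -n
ulimit -u
# locked-memory limit as seen by the elasticsearch user
su -s /bin/bash elasticsearch -c 'ulimit -l'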
Thread Limit
vim /etc/security/limits.d/90-nproc.conf
# Find the following line:
* soft nproc 1024
# and change it to
* soft nproc 4096
Elasticsearch Installation and Configuration
Import the Elasticsearch PGP Key
rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
Configure yum
vim /etc/yum.repos.d/elasticsearch.repo
Add the following:
[elasticsearch]
name=Elasticsearch repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
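Optionally refresh the yum metadata so the new repository is visible before installing:
sudo yum clean all
sudo yum makecache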
Install
# List all available versions
sudo yum --showduplicates list elasticsearch
# Install a specific version
sudo yum install elasticsearch-7.6.2-1
Enable Start on Boot
sudo chkconfig --add elasticsearch
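A quick check that the package is installed and registered with chkconfig:
rpm -q elasticsearch
chkconfig --list elasticsearch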
Configuration
JAVA_HOME
# Edit the configuration file
vim /etc/sysconfig/elasticsearch
# Set the value of JAVA_HOME
JAVA_HOME=/opt/jdk1.8.0_181/
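To confirm the configured path points at a working JDK (the path below follows the example above):
/opt/jdk1.8.0_181/bin/java -version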
elasticsearch.yml
vim /etc/elasticsearch/elasticsearch.yml
# Cluster name; must be identical on every node
cluster.name: elasticsearch_production
# Node name; used to tell nodes apart, so it must be unique per node
node.name: node-1

# Data paths; list every data disk used by the node
path.data:
  - /data/dfs/dfs00/dfs/elasticsearch
  - /data/dfs/dfs01/dfs/elasticsearch
  - /data/dfs/dfs02/dfs/elasticsearch

# Log path
path.logs: /var/log/elasticsearch

# Lock the process memory. Elasticsearch slows down badly once the JVM starts swapping, so make sure it never swaps: set the minimum and maximum heap (Xms/Xmx in jvm.options below) to the same value and leave the machine enough memory for Elasticsearch. The process must also be allowed to lock memory, e.g. via ulimit -l unlimited on Linux (the memlock entries in limits.conf above)
bootstrap.memory_lock: true
# CentOS 6 does not support SecComp, while Elasticsearch enables the bootstrap.system_call_filter check by default; the failed check prevents the node from starting, so disable it here
bootstrap.system_call_filter: false
# IP of this host
network.host: 192.168.1.107

# Seed nodes of the cluster
discovery.seed_hosts:
  - 192.168.1.107:9300
  - 192.168.1.108:9300
  - 192.168.1.109:9300
# Initial master-eligible nodes; use the node.name values
cluster.initial_master_nodes:
  - node-1
  - node-2
  - node-3

# Start recovery only after at least 2 nodes (data or master nodes) have joined
gateway.recover_after_nodes: 2
# Start recovery once 3 nodes have joined, or after waiting 10 minutes, whichever comes first
gateway.expected_nodes: 3
gateway.recover_after_time: 10m

search.max_buckets: 200000
action.destructive_requires_name: true
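The data directories above must exist and be writable by the elasticsearch user before the first start, and with bootstrap.memory_lock: true the service also needs permission to lock memory. A minimal sketch, assuming the example paths above (MAX_LOCKED_MEMORY is the variable the packaged SysV init script reads from /etc/sysconfig/elasticsearch):
# create the data directories and hand them to the elasticsearch user
mkdir -p /data/dfs/dfs0{0,1,2}/dfs/elasticsearch
chown -R elasticsearch:elasticsearch /data/dfs/dfs0{0,1,2}/dfs/elasticsearch
# allow the service to lock memory
echo 'MAX_LOCKED_MEMORY=unlimited' >> /etc/sysconfig/elasticsearch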
jvm.options
vim /etc/elasticsearch/jvm.options
# Heap size; tune according to the host's resources, preferably not more than half of total RAM
-Xms16g
-Xmx16g
Start
sudo -i service elasticsearch start
# Check node status
curl -XGET '192.168.1.107:9200/_cat/nodes?v'
# Check cluster health
curl -XGET '192.168.1.107:9200/_cat/health?v'
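Once all three nodes have joined, the cluster can also be polled until it reports green:
curl -XGET '192.168.1.107:9200/_cluster/health?wait_for_status=green&timeout=60s&pretty'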
Kibana Installation
Kibana is installed on 192.168.1.109.
Import the Elasticsearch PGP Key
rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
Configure yum
vim /etc/yum.repos.d/kibana.repo
Add the following:
[kibana]
name=Kibana repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
Install
# List all available versions
sudo yum --showduplicates list kibana
# Install a specific version
sudo yum install kibana-7.6.2-1
Enable Start on Boot
sudo chkconfig --add kibana
Modify the Configuration
vim /etc/kibana/kibana.yml
# IP of this host
server.host: "192.168.1.109"
# Elasticsearch addresses Kibana connects to
elasticsearch.hosts: ["http://192.168.1.107:9200","http://192.168.1.108:9200","http://192.168.1.109:9200"]
Start
sudo -i service kibana start
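Kibana listens on port 5601 by default; its status endpoint can be used as a quick health check:
curl -I http://192.168.1.109:5601/api/status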
Logstash Installation
Import the Elasticsearch PGP Key
rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
Configure yum
vim /etc/yum.repos.d/logstash.repo
Add the following:
[logstash]
name=logstash repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
Install
# Point /usr/bin/java at the JDK under $JAVA_HOME (JDK 8 in this setup); the previous binary is kept as java7
mv /usr/bin/java /usr/bin/java7
ln -s $JAVA_HOME/bin/java /usr/bin/java
# List all available versions
sudo yum --showduplicates list logstash
# Install a specific version
sudo yum install logstash-7.6.2-1
Modify the Configuration
vim /etc/logstash/logstash.yml
# Reload pipeline configuration automatically when it changes
config.reload.automatic: true

# Use the persistent queue
queue.type: persisted
queue.max_bytes: 8gb
Pipeline Configuration
Create an xxx.conf file under /etc/logstash/conf.d and write the pipeline processing logic in it, for example the sketch below.
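A minimal pipeline sketch, assuming events arrive on the Kafka topic log_api written by the Filebeat configuration later in this document; the brokers and Elasticsearch addresses are the ones used elsewhere on this page, and the file name and index name are only illustrative:
# /etc/logstash/conf.d/log_api.conf (hypothetical example)
input {
  kafka {
    bootstrap_servers => "kafka01.bitnei.cn:9092,kafka02.bitnei.cn:9092,kafka03.bitnei.cn:9092"
    topics => ["log_api"]
    # Filebeat ships events to Kafka as JSON
    codec => "json"
  }
}

output {
  elasticsearch {
    hosts => ["http://192.168.1.107:9200", "http://192.168.1.108:9200", "http://192.168.1.109:9200"]
    index => "log_api-%{+YYYY.MM.dd}"
  }
}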
Start
nohup /usr/share/logstash/bin/logstash --path.settings /etc/logstash >/dev/null 2>&1 &
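Logstash exposes a monitoring API on port 9600 by default, which can be used to confirm the process is up:
curl -XGET '127.0.0.1:9600/?pretty'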
Filebeat Installation
Import the Elasticsearch PGP Key
rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
Configure yum
vim /etc/yum.repos.d/filebeat.repo
Add the following:
[filebeat]
name=filebeat repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
Install
sudo yum install filebeat-7.6.2-1
sudo chkconfig --add filebeat
Modify the Configuration
vim /etc/filebeat/filebeat.yml
filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

- type: log

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /opt/web_app/openservice/elk/elk*.log
    #- c:\programdata\elasticsearch\logs\*
  tags: ["log_api"]

  fields:
    log_topic: log_api

#-------------------------- Kafka output ------------------------------
output.kafka:
  # initial brokers for reading cluster metadata
  hosts: ["kafka01.bitnei.cn:9092", "kafka02.bitnei.cn:9092", "kafka03.bitnei.cn:9092"]

  # message topic selection + partitioning
  topic: '%{[fields.log_topic]}'
  #key: '%{[interfaceName]}'

  required_acks: -1
  compression: gzip
  # (bytes) This value should be equal to or less than the broker's message.max.bytes.
  max_message_bytes: 10000000
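Before starting, the configuration and the Kafka output can be verified with Filebeat's built-in test commands:
filebeat test config -c /etc/filebeat/filebeat.yml
filebeat test output -c /etc/filebeat/filebeat.yml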
Start
service filebeat start