
Production Deployment of Filebeat + Kafka + ELK on CentOS 7.2 (with Security Hardening)


01 Architecture Overview

Deploy the Filebeat service on every server whose logs need collecting; it pushes the collected log data into the Kafka cluster. The Logstash service reads the corresponding Kafka topic through its input plugin, optionally applies custom filtering and parsing with filter plugins along the way, and then pushes the data into the Elasticsearch cluster through its output plugin.

Finally, users work through the web interface provided by the Kibana service to aggregate, analyze, search, and visualize the indexed data.
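Putting the pieces together, the data flows like this:

Filebeat (on the log servers)
   |  pushes raw logs
   v
Kafka cluster (topic: elk, SASL/PLAIN + ACLs)
   |  consumed by the kafka input plugin
   v
Logstash (input -> filter -> output)
   |  bulk writes over HTTPS
   v
Elasticsearch cluster (X-Pack TLS + authentication)
   |  queried by
   v
Kibana (HTTPS web UI)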

This article aims at a secure and reliable production architecture: the ELK stack is hardened with X-Pack security, and Kafka is hardened with SASL.

02 Preparation

The ELK stack used in this article is version 7.2.0; the Kafka version is 2.12-2.3.0.

03 Environment Configuration

The main steps are: disable the SELinux security mechanism, stop the firewalld firewall, turn off swap, configure file and memory limits, set JVM parameters, create a regular user, and prepare the data directories on disk. Setting up passwordless SSH between the servers is also recommended.

auto_elk_env.sh

#!/bin/bash
echo "##### Update /etc/hosts #####"
cat >> /etc/hosts <<EOF
192.168.100.83 es83
192.168.100.86 es86
192.168.100.87 es87
EOF

echo "##### Stop firewalld #####"
systemctl stop firewalld
systemctl disable firewalld

echo "##### Disable selinux #####"
setenforce 0
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config

echo "##### Turn off swap #####"
swapoff -a

# Note: changes to this file take effect only after logging in to the terminal again; check with ulimit -a.
echo "##### Modify /etc/security/limits.conf #####"
cat > /etc/security/limits.conf <<EOF
* soft nofile 65536
* hard nofile 131072
* soft nproc 65536
* hard nproc 65536
* soft memlock unlimited
* hard memlock unlimited
EOF

echo "##### Modify /etc/sysctl.conf #####"
cat >> /etc/sysctl.conf <<EOF
vm.max_map_count=562144
EOF
sysctl -p

echo "##### Create user (any password will do) #####"
useradd elkuser
echo 123456 | passwd --stdin elkuser

echo "##### Set up passwordless SSH between the nodes #####"
ssh-keygen   # just press Enter at every prompt
ssh-copy-id 192.168.100.83
ssh-copy-id 192.168.100.86
ssh-copy-id 192.168.100.87
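A quick sanity check of the tuning (run after logging back in; the expected values follow from the script above):

getenforce                # expect Permissive here, Disabled after a reboot
free -m                   # the Swap line should read all zeros
sysctl vm.max_map_count   # expect vm.max_map_count = 562144
ulimit -a                 # check the new nofile/nproc/memlock limits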

04 Elasticsearch Cluster Deployment

Elasticsearch is a distributed, RESTful search and analytics engine. It implements an inverted index for full-text retrieval and indexes every field, so searches are very fast. It is scalable and resilient, can handle a huge number of events per second, and works with all data types: structured, unstructured, geospatial, and so on.

In production, the author gave Elasticsearch 30 GB of heap (never exceed 32 GB) and six 446 GB SSDs, and switched to the G1 garbage collection policy; size the hardware according to your own situation.
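For reference, the jvm.options edits in the deployment block below are meant to leave roughly these lines behind (a sketch, assuming lines 36-38 of the stock 7.2.0 file are the CMS flags):

-Xms30g
-Xmx30g
#-XX:+UseConcMarkSweepGC
#-XX:CMSInitiatingOccupancyFraction=75
#-XX:+UseCMSInitiatingOccupancyOnly
-XX:+UseG1GC
-XX:MaxGCPauseMillis=50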

Note: all software packages were downloaded to the servers in advance. The three es nodes in this article all act as both master and data nodes by default; when X-Pack security is enabled, master nodes must also be data nodes, otherwise the security configuration cannot be written into es storage!

In this article the entire es cluster is deployed from node 83, so read the commands below carefully!

# Download: wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.2.0-linux-x86_64.tar.gz

echo "##### Extract Elasticsearch #####"
[root@es83 ~]# cd /home/elkuser/
[root@es83 elkuser]# tar -xvf elasticsearch-7.2.0-linux-x86_64.tar.gz

echo "##### Modify the jvm options #####"
[root@es83 elkuser]# cd ./elasticsearch-7.2.0/
[root@es83 elasticsearch-7.2.0]# sed -i -e 's/1g/30g/g' -e '36,38s/^-/#&/g' ./config/jvm.options
[root@es83 elasticsearch-7.2.0]# sed -i -e 'N;38 a -XX:+UseG1GC \n-XX:MaxGCPauseMillis=50' ./config/jvm.options

echo "##### Generate the CA certificate #####"
[root@es83 elasticsearch-7.2.0]# ./bin/elasticsearch-certutil ca
......
Please enter the desired output file [elastic-stack-ca.p12]: (press Enter)
Enter password for elastic-stack-ca.p12 : (press Enter)

echo "##### Use the CA to generate a certificate for every es node #####"
[root@es83 elasticsearch-7.2.0]# ./bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12 --ip 192.168.100.83
......
Enter password for CA (elastic-stack-ca.p12) : (press Enter)
Please enter the desired output file [elastic-certificates.p12]: es83.p12
Enter password for es83.p12 : (press Enter)

[root@es83 elasticsearch-7.2.0]# ./bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12 --ip 192.168.100.86
......
Enter password for CA (elastic-stack-ca.p12) : (press Enter)
Please enter the desired output file [elastic-certificates.p12]: es86.p12
Enter password for es86.p12 : (press Enter)

[root@es83 elasticsearch-7.2.0]# ./bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12 --ip 192.168.100.87
......
Enter password for CA (elastic-stack-ca.p12) : (press Enter)
Please enter the desired output file [elastic-certificates.p12]: es87.p12
Enter password for es87.p12 : (press Enter)

echo "##### Use the CA to generate the certificate logstash needs later #####"
[root@es83 elasticsearch-7.2.0]# openssl pkcs12 -in elastic-stack-ca.p12 -clcerts -nokeys > root.cer
[root@es83 elasticsearch-7.2.0]# openssl x509 -in root.cer -out root.pem

echo "##### Use the CA to generate the certificate kibana needs later #####"
[root@es83 elasticsearch-7.2.0]# ./bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12 -name "CN=something,OU=Consulting Team,DC=mydomain,DC=com"
......
Enter password for CA (elastic-stack-ca.p12) : (press Enter)
Please enter the desired output file [CN=something,OU=Consulting Team,DC=mydomain,DC=com.p12]: client.p12
Enter password for client.p12 : (press Enter)

echo "##### Move the generated certificates into the config directory #####"
[root@es83 elasticsearch-7.2.0]# cp *.p12 ./config/

echo "##### Write the es config file #####"
[root@es83 elasticsearch-7.2.0]# cat > ./config/elasticsearch.yml <<EOF
cluster.name: chilu_elk
node.name: es83
node.master: true
node.data: true
path.data: /logdata/data1,/logdata/data2,/logdata/data3,/logdata/data4,/logdata/data5,/logdata/data6
bootstrap.memory_lock: true
bootstrap.system_call_filter: false
network.host: 192.168.100.83
http.port: 9200
transport.tcp.port: 9300
discovery.zen.ping.unicast.hosts: ["192.168.100.83:9300","192.168.100.86:9300","192.168.100.87:9300"]
cluster.initial_master_nodes: ["192.168.100.83:9300","192.168.100.86:9300","192.168.100.87:9300"]
node.max_local_storage_nodes: 256
indices.fielddata.cache.size: 50%
http.cors.enabled: true
http.cors.allow-origin: "*"
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: es83.p12
xpack.security.transport.ssl.truststore.path: elastic-stack-ca.p12
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.keystore.path: es83.p12
xpack.security.http.ssl.truststore.path: elastic-stack-ca.p12
xpack.security.http.ssl.client_authentication: optional
EOF

echo "##### scp the directory to the other nodes and adjust their configs #####"
[root@es83 elasticsearch-7.2.0]# cd ../
[root@es83 elkuser]# scp -r ./elasticsearch-7.2.0 192.168.100.86:/home/elkuser/
[root@es83 elkuser]# scp -r ./elasticsearch-7.2.0 192.168.100.87:/home/elkuser/
[root@es83 elkuser]# ssh 192.168.100.86 "sed -i -e 's/es83/es86/g' -e '8s/192.168.100.83/192.168.100.86/' /home/elkuser/elasticsearch-7.2.0/config/elasticsearch.yml"
[root@es83 elkuser]# ssh 192.168.100.87 "sed -i -e 's/es83/es87/g' -e '8s/192.168.100.83/192.168.100.87/' /home/elkuser/elasticsearch-7.2.0/config/elasticsearch.yml"

echo "##### Change the owner and group of the directories #####"
[root@es83 elkuser]# chown -R elkuser:elkuser /logdata ./elasticsearch-7.2.0
[root@es83 elkuser]# ssh 192.168.100.86 "chown -R elkuser:elkuser /logdata /home/elkuser/elasticsearch-7.2.0"
[root@es83 elkuser]# ssh 192.168.100.87 "chown -R elkuser:elkuser /logdata /home/elkuser/elasticsearch-7.2.0"

echo "##### Switch to the regular user and run elasticsearch in the background #####"
[root@es83 elkuser]# su elkuser
[elkuser@es83 elkuser]$ cd ./elasticsearch-7.2.0/
[elkuser@es83 elasticsearch-7.2.0]$ ./bin/elasticsearch -d
[elkuser@es83 elasticsearch-7.2.0]$ ssh elkuser@192.168.100.86 "/home/elkuser/elasticsearch-7.2.0/bin/elasticsearch -d"
[elkuser@es83 elasticsearch-7.2.0]$ ssh elkuser@192.168.100.87 "/home/elkuser/elasticsearch-7.2.0/bin/elasticsearch -d"

echo "##### Auto-generate the built-in user passwords (save them somewhere safe!) #####"
[elkuser@es83 elasticsearch-7.2.0]$ echo y | ./bin/elasticsearch-setup-passwords auto | tee elk_pwd.log
Initiating the setup of passwords for reserved users elastic,apm_system,kibana,logstash_system,beats_system,remote_monitoring_user.
The passwords will be randomly generated and printed to the console.
Changed password for user apm_system
PASSWORD apm_system = HojN4w88Nwgl51Oe7o12
Changed password for user kibana
PASSWORD kibana = JPYDvJYn2CDmls5gIlNG
Changed password for user logstash_system
PASSWORD logstash_system = kXxmVCX34PGpUluBXABX
Changed password for user beats_system
PASSWORD beats_system = rY90aBHjAdidQPwgX87u
Changed password for user remote_monitoring_user
PASSWORD remote_monitoring_user = 0VxaGROqo255y60P1kBV
Changed password for user elastic
PASSWORD elastic = NvOBRGpUE3DoaSbYaUp3

echo "##### Test the secured es, and check that the cluster status is green #####"
[elkuser@es83 elasticsearch-7.2.0]$ curl --tlsv1 -XGET "https://192.168.100.83:9200/_cluster/health?pretty" --user elastic:NvOBRGpUE3DoaSbYaUp3 -k
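Two more standard _cat endpoints are handy here for confirming that all three nodes joined and which one was elected master (same elastic password as above):

[elkuser@es83 elasticsearch-7.2.0]$ curl --tlsv1 -k -u elastic:NvOBRGpUE3DoaSbYaUp3 "https://192.168.100.83:9200/_cat/nodes?v"
[elkuser@es83 elasticsearch-7.2.0]$ curl --tlsv1 -k -u elastic:NvOBRGpUE3DoaSbYaUp3 "https://192.168.100.83:9200/_cat/health?v"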

05 Kafka Cluster Deployment

Kafka, originally developed at LinkedIn, is a distributed, partitioned, replicated, multi-subscriber messaging system coordinated through ZooKeeper. It offers high throughput, low latency, scalability, durability, reliability, fault tolerance, and high concurrency: it can handle hundreds of thousands of messages with latencies of only a few milliseconds, clustered deployments support hot scaling, messages can be persisted to local disk to prevent data loss, and thousands of clients can read and write simultaneously.

In this architecture, kafka serves as a buffering message queue: it receives logs in real time and feeds them to logstash, decoupling the two sides and smoothing traffic peaks, which prevents the data loss that occurs when logstash cannot consume fast enough. The author uses the zookeeper bundled with kafka, also deployed as a cluster, so there is no need to build a separate zookeeper cluster.

Note: kafka's cluster configuration and state are kept inside the zookeeper process, so zookeeper must be configured and started before kafka!

The author gave the zookeeper service 4 GB of memory, and the kafka service 31 GB of memory plus five SSDs; size the hardware according to your own situation!

# Download: wget https://archive.apache.org/dist/kafka/2.3.0/kafka_2.12-2.3.0.tgz

echo "##### Extract Kafka #####"
[root@es83 ~]# cd /opt/
[root@es83 opt]# tar -xvf ./kafka_2.12-2.3.0.tgz

echo "##### Write the zookeeper config file #####"
[root@es83 opt]# cd ./kafka_2.12-2.3.0/
[root@es83 kafka_2.12-2.3.0]# cat > ./config/zookeeper.properties <<EOF
dataDir=/opt/zookeeper
clientPort=2181
maxClientCnxns=0
tickTime=2000
initLimit=10
syncLimit=5
server.1=192.168.100.83:2888:3888
server.2=192.168.100.86:2888:3888
server.3=192.168.100.87:2888:3888
authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
requireClientAuthScheme=sasl
jaasLoginRenew=3600000
4lw.commands.whitelist=
EOF

echo "##### Create the zookeeper data directory and its myid file #####"
[root@es83 kafka_2.12-2.3.0]# mkdir /opt/zookeeper
[root@es83 kafka_2.12-2.3.0]# echo 1 > /opt/zookeeper/myid

echo "##### Write the kafka config file #####"
[root@es83 kafka_2.12-2.3.0]# cat > ./config/server.properties <<EOF
broker.id=83
listeners=SASL_PLAINTEXT://192.168.100.83:9092
advertised.listeners=SASL_PLAINTEXT://192.168.100.83:9092
num.network.threads=5
num.io.threads=8
socket.send.buffer.bytes=1024000
socket.receive.buffer.bytes=1024000
socket.request.max.bytes=1048576000
log.dirs=/logdata/kfkdata1,/logdata/kfkdata2,/logdata/kfkdata3,/logdata/kfkdata4,/logdata/kfkdata5
num.partitions=1
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=72
log.segment.delete.delay.ms=1000
log.cleaner.enable=true
log.cleanup.policy=delete
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=192.168.100.83:2181,192.168.100.86:2181,192.168.100.87:2181
zookeeper.connection.timeout.ms=60000
group.initial.rebalance.delay.ms=0
delete.topic.enable=true
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.enabled.mechanisms=PLAIN
sasl.mechanism.inter.broker.protocol=PLAIN
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
allow.everyone.if.no.acl.found=true
super.users=User:admin;User:kafka
EOF

echo "##### Create the sasl jaas files for zk and kafka #####"
[root@es83 kafka_2.12-2.3.0]# cat > ./config/zk_server_jaas.conf <<EOF
Server {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="admin"
  password="chilu@rljie"
  user_kafka="chilu@rljie"
  user_producer="chilu@rljie";
};
EOF
[root@es83 kafka_2.12-2.3.0]# cat > ./config/kafka_server_jaas.conf <<EOF
KafkaServer {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="admin"
  password="chilu@rljie"
  user_admin="chilu@rljie"
  user_producer="chilu@rljie"
  user_consumer="chilu@rljie";
};
KafkaClient {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="kafka"
  password="chilu@rljie";
};
Client {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="kafka"
  password="chilu@rljie";
};
EOF

echo "##### Patch the zk and kafka start scripts (add the SASL environment option) #####"
[root@es83 kafka_2.12-2.3.0]# sed -i -e 's/512M/4G/g' -e 's#Xms4G#Xms4G -Djava.security.auth.login.config=/opt/kafka_2.12-2.3.0/config/zk_server_jaas.conf#' ./bin/zookeeper-server-start.sh
[root@es83 kafka_2.12-2.3.0]# sed -i -e 's/1G/31G/g' -e 's#Xms31G#Xms31G -Djava.security.auth.login.config=/opt/kafka_2.12-2.3.0/config/kafka_server_jaas.conf#' ./bin/kafka-server-start.sh

echo "##### Copy the directories to the other two nodes and adjust them #####"
[root@es83 kafka_2.12-2.3.0]# cd ../
[root@es83 opt]# scp -r ./zookeeper ./kafka_2.12-2.3.0 192.168.100.86:/opt/
[root@es83 opt]# scp -r ./zookeeper ./kafka_2.12-2.3.0 192.168.100.87:/opt/
[root@es83 opt]# ssh 192.168.100.86 "echo 2 > /opt/zookeeper/myid ; sed -i '1,3s/83/86/' /opt/kafka_2.12-2.3.0/config/server.properties"
[root@es83 opt]# ssh 192.168.100.87 "echo 3 > /opt/zookeeper/myid ; sed -i '1,3s/83/87/' /opt/kafka_2.12-2.3.0/config/server.properties"

echo "##### Start the zookeeper services in the background #####"
[root@es83 opt]# cd ./kafka_2.12-2.3.0/
[root@es83 kafka_2.12-2.3.0]# ./bin/zookeeper-server-start.sh -daemon ./config/zookeeper.properties
[root@es83 kafka_2.12-2.3.0]# ssh 192.168.100.86 "/opt/kafka_2.12-2.3.0/bin/zookeeper-server-start.sh -daemon /opt/kafka_2.12-2.3.0/config/zookeeper.properties"
[root@es83 kafka_2.12-2.3.0]# ssh 192.168.100.87 "/opt/kafka_2.12-2.3.0/bin/zookeeper-server-start.sh -daemon /opt/kafka_2.12-2.3.0/config/zookeeper.properties"

echo "##### Start the kafka services in the background #####"
[root@es83 kafka_2.12-2.3.0]# ./bin/kafka-server-start.sh -daemon ./config/server.properties
[root@es83 kafka_2.12-2.3.0]# ssh 192.168.100.86 "/opt/kafka_2.12-2.3.0/bin/kafka-server-start.sh -daemon /opt/kafka_2.12-2.3.0/config/server.properties"
[root@es83 kafka_2.12-2.3.0]# ssh 192.168.100.87 "/opt/kafka_2.12-2.3.0/bin/kafka-server-start.sh -daemon /opt/kafka_2.12-2.3.0/config/server.properties"

Once the zk and kafka services are up, first check that the relevant ports are listening:

[root@es83 kafka_2.12-2.3.0]# netstat -antlp | grep -E "2888|3888|2181|9092"

With the cluster fully healthy, you can configure ACLs on any one of the kafka nodes, setting access permissions on the topic and group used by the producer and consumer users; access can also be limited to specific machines only.

Note: mykafka below is a hostname mapped to an IP through /etc/hosts, e.g. 192.168.100.83 mykafka. If you use localhost instead, the commands may fail with NoAuth; if you use a bare IP address, they fail with a CONNECT error!
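For example, on the node where the ACL commands will run:

[root@es83 kafka_2.12-2.3.0]# echo "192.168.100.83 mykafka" >> /etc/hosts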

echo "##### Write the ACL setup script #####"
[root@es83 kafka_2.12-2.3.0]# cat > ./kfkacls.sh <<EOF
#!/bin/bash
/opt/kafka_2.12-2.3.0/bin/kafka-acls.sh --authorizer-properties zookeeper.connect=mykafka:2181 --add --allow-principal User:producer --allow-host 0.0.0.0 --operation Read --operation Write --topic elk
/opt/kafka_2.12-2.3.0/bin/kafka-acls.sh --authorizer-properties zookeeper.connect=mykafka:2181 --add --allow-principal User:producer --topic elk --producer --group chilu
/opt/kafka_2.12-2.3.0/bin/kafka-acls.sh --authorizer-properties zookeeper.connect=mykafka:2181 --add --allow-principal User:consumer --allow-host 0.0.0.0 --operation Read --operation Write --topic elk
/opt/kafka_2.12-2.3.0/bin/kafka-acls.sh --authorizer-properties zookeeper.connect=mykafka:2181 --add --allow-principal User:consumer --topic elk --consumer --group chilu
EOF

echo "##### Run the script #####"
[root@es83 kafka_2.12-2.3.0]# bash ./kfkacls.sh

echo "##### List the ACLs #####"
[root@es83 kafka_2.12-2.3.0]# ./bin/kafka-acls.sh --authorizer-properties zookeeper.connect=mykafka:2181 --list

# Note: the following is an interactive session
echo "##### Lock down the znode ACLs #####"
[root@es83 kafka_2.12-2.3.0]# ./bin/zookeeper-shell.sh mykafka:2181
Welcome to ZooKeeper!
JLine support is disabled

WATCHER::

WatchedEvent state:SyncConnected type:None path:null

# Commands can now be typed straight into this console, e.g. ls / to browse the ZK tree.
# Check the default ACL; by default everyone can read:
getAcl /
# Grant access to the kafka hosts' IPs only:
setAcl / ip:192.168.100.83:cdrwa,ip:192.168.100.86:cdrwa,ip:192.168.100.87:cdrwa
setAcl /kafka-acl ip:192.168.100.83:cdrwa,ip:192.168.100.86:cdrwa,ip:192.168.100.87:cdrwa
# Verify the change:
getAcl /
'ip,'192.168.100.83
: cdrwa
'ip,'192.168.100.86
: cdrwa
'ip,'192.168.100.87
: cdrwa
# Exit the shell:
quit
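A minimal produce/consume smoke test over SASL can confirm the ACLs behave as intended. This is a sketch, not part of the original article: it reuses the jaas file from above via KAFKA_OPTS (the console tools read its KafkaClient section, whose kafka user is a super user), and the client-sasl.properties helper file is our own addition.

# Helper file telling the console clients to speak SASL/PLAIN (our own addition):
[root@es83 kafka_2.12-2.3.0]# cat > ./config/client-sasl.properties <<EOF
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
EOF
[root@es83 kafka_2.12-2.3.0]# export KAFKA_OPTS="-Djava.security.auth.login.config=/opt/kafka_2.12-2.3.0/config/kafka_server_jaas.conf"
# Produce a test message:
[root@es83 kafka_2.12-2.3.0]# ./bin/kafka-console-producer.sh --broker-list 192.168.100.83:9092 --topic elk --producer.config ./config/client-sasl.properties
>hello elk
# Consume it back:
[root@es83 kafka_2.12-2.3.0]# ./bin/kafka-console-consumer.sh --bootstrap-server 192.168.100.83:9092 --topic elk --group chilu --from-beginning --consumer.config ./config/client-sasl.properties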

06 Logstash Service Deployment

Logstash is a free and open server-side data processing pipeline built on a pluggable framework with more than 200 plugins. It supports a wide range of inputs and outputs, parses and transforms data in real time, and is scalable, resilient, and flexible. It is resource-hungry, however, consuming a lot of CPU and memory at runtime, and without a message queue buffering in front of it there is a risk of data loss, so weigh this against your own situation!

In production, the author also gave Logstash 30 GB of memory; size the hardware according to your own situation!

# Download: wget https://artifacts.elastic.co/downloads/logstash/logstash-7.2.0.tar.gz

echo "##### Extract Logstash #####"
[root@es83 ~]# cd /home/elkuser/
[root@es83 elkuser]# tar -xvf ./logstash-7.2.0.tar.gz

echo "##### Adjust the heap size #####"
[root@es83 elkuser]# cd ./logstash-7.2.0/
[root@es83 logstash-7.2.0]# sed -i -e 's/1g/30g/g' ./config/jvm.options

echo "##### Copy the required certificate into the logstash directory #####"
[root@es83 logstash-7.2.0]# cd ./config/
[root@es83 config]# cp /home/elkuser/elasticsearch-7.2.0/root.pem ./

echo "##### Write the logstash config file #####"
[root@es83 config]# cat > ./logstash.yml <<EOF
http.host: "192.168.100.83"
node.name: "logstash83"
xpack.monitoring.elasticsearch.hosts: [ "https://192.168.100.83:9200" ]
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.username: "elastic"
xpack.monitoring.elasticsearch.password: "NvOBRGpUE3DoaSbYaUp3"
xpack.monitoring.elasticsearch.ssl.certificate_authority: config/root.pem
xpack.monitoring.elasticsearch.ssl.verification_mode: certificate
xpack.monitoring.collection.interval: 30s
xpack.monitoring.collection.pipeline.details.enabled: true
EOF

# Note: the username and password must match what was configured in kafka!
echo "##### Write the kafka client jaas file #####"
[root@es83 config]# cat > ./kafka-client-jaas.conf <<EOF
KafkaClient {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="consumer"
  password="chilu@rljie";
};
EOF

echo "##### Example input and output configuration #####"
[root@es83 config]# cat > ./test.cfg <<EOF
input {
  kafka {
    bootstrap_servers => "192.168.100.83:9092,192.168.100.86:9092,192.168.100.87:9092"
    client_id => "chilu83"
    auto_offset_reset => "latest"
    topics => "elk"
    group_id => "chilu"
    security_protocol => "SASL_PLAINTEXT"
    sasl_mechanism => "PLAIN"
    jaas_path => "/home/elkuser/logstash-7.2.0/config/kafka-client-jaas.conf"
  }
}
filter {
}
output {
  elasticsearch {
    hosts => ["192.168.100.83:9200","192.168.100.86:9200","192.168.100.87:9200"]
    user => "elastic"
    password => "NvOBRGpUE3DoaSbYaUp3"
    ssl => true
    cacert => "/home/elkuser/logstash-7.2.0/config/root.pem"
    index => "chilu_elk%{+YYYY.MM.dd}"
  }
}
EOF

echo "##### Start the logstash service #####"
[root@es83 config]# ../bin/logstash -r -f ./test.cfg
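The filter block in test.cfg above is deliberately left empty. If the access logs need parsing, a sketch like the following could go there for combined-format (Apache/Nginx) logs; the grok pattern and date format are illustrative, not from the original article:

filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }    # split the raw line into named fields
  }
  date {
    match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]  # use the log's own time as @timestamp
  }
}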

07 Kibana Service Deployment

Kibana is an open-source analytics and visualization platform that provides efficient search, visual summaries, and multidimensional analysis of the log data delivered by Logstash and Elasticsearch, interacting directly with the data in Elasticsearch indices. Its browser-based interface lets you quickly build dynamic dashboards and monitor the state and changes of Elasticsearch data in real time.

In production, the author gave Kibana 8 GB of memory; size the hardware according to your own situation!

# Download: wget https://artifacts.elastic.co/downloads/kibana/kibana-7.2.0-linux-x86_64.tar.gz

echo "##### Extract Kibana #####"
[root@es83 ~]# cd /home/elkuser/
[root@es83 elkuser]# tar -xvf kibana-7.2.0-linux-x86_64.tar.gz

echo "##### Raise the startup memory #####"
[root@es83 elkuser]# cd ./kibana-7.2.0-linux-x86_64/
[root@es83 kibana-7.2.0-linux-x86_64]# sed -i 's/warnings/warnings --max_old_space_size=8096/' ./bin/kibana

echo "##### Copy the required certificate into the kibana directory #####"
[root@es83 kibana-7.2.0-linux-x86_64]# cd ./config/
[root@es83 config]# cp /home/elkuser/elasticsearch-7.2.0/client.p12 ./

echo "##### Derive the other needed certificates from client.p12 #####"
[root@es83 config]# openssl pkcs12 -in client.p12 -nocerts -nodes > client.key
Enter Import Password: (press Enter)
MAC verified OK
[root@es83 config]# openssl pkcs12 -in client.p12 -clcerts -nokeys > client.cer
Enter Import Password: (press Enter)
MAC verified OK
[root@es83 config]# openssl pkcs12 -in client.p12 -cacerts -nokeys -chain > client-ca.cer
Enter Import Password: (press Enter)
MAC verified OK

echo "##### Upgrade the kibana web UI to https access #####"
[root@es83 config]# cd ../
[root@es83 kibana-7.2.0-linux-x86_64]# openssl req -newkey rsa:2048 -nodes -keyout server.key -x509 -days 3650 -out server.crt -subj "/C=CN/ST=guangzhou/L=rljie/O=chilu/OU=linux/"

echo "##### Write the kibana config file #####"
[root@es83 kibana-7.2.0-linux-x86_64]# cat > ./config/kibana.yml <<EOF
server.name: kibana
server.host: "192.168.100.83"
elasticsearch.hosts: [ "https://192.168.100.83:9200" ]
xpack.monitoring.ui.container.elasticsearch.enabled: true
elasticsearch.username: "elastic"
elasticsearch.password: "NvOBRGpUE3DoaSbYaUp3"
xpack.security.enabled: true
elasticsearch.ssl.certificateAuthorities: config/client-ca.cer
elasticsearch.ssl.verificationMode: certificate
xpack.security.encryptionKey: "4297f44b13955235245b2497399d7a93"
xpack.reporting.encryptionKey: "4297f44b13955235245b2497399d7a93"
server.ssl.enabled: true
server.ssl.certificate: server.crt
server.ssl.key: server.key
EOF

echo "##### Start kibana in the background with nohup (choose your own daemonizing method) #####"
[root@es83 kibana-7.2.0-linux-x86_64]# nohup ./bin/kibana --allow-root &
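Before opening a browser, Kibana's status API offers a quick probe (assuming the default port 5601; -k skips verification of the self-signed certificate):

[root@es83 kibana-7.2.0-linux-x86_64]# curl -k -u elastic:NvOBRGpUE3DoaSbYaUp3 "https://192.168.100.83:5601/api/status"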

Once all of the above is done, browse to the Kibana address https://192.168.100.83:5601 (the default Kibana port) and log in with the elastic user and its password!

curl example (verifying the server certificate against the CA instead of using -k):

curl --tlsv1 -XGET 'https://192.168.100.83:9200/_cluster/health?pretty' --cacert '/home/elkuser/elasticsearch-7.2.0/root.pem' --user elastic:NvOBRGpUE3DoaSbYaUp3

08 Filebeat Service Deployment

Filebeat is a lightweight shipper for forwarding and centralizing log data. Written in Go, it is stable in performance, simple to configure, and uses very few resources. Installed as an agent on your servers, it watches the log files or locations you specify, collects log events, and forwards them to the configured output; the work is done mainly by its prospector (discovery) and harvester (collection) components.

# Download: wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.2.0-linux-x86_64.tar.gz

echo "##### Extract Filebeat #####"
[root@es83 ~]# cd /home/elkuser/
[root@es83 elkuser]# tar -xvf filebeat-7.2.0-linux-x86_64.tar.gz

echo "##### Write the filebeat config file #####"
[root@es83 elkuser]# cd ./filebeat-7.2.0-linux-x86_64/
[root@es83 filebeat-7.2.0-linux-x86_64]# cat > ./filebeat.yml <<\EOF
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/access.log
  close_timeout: 1h
  clean_inactive: 3h
  ignore_older: 2h

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: true

setup.template.settings:
  index.number_of_shards: 3

setup.kibana:

output.kafka:
  hosts: ["192.168.100.83:9092","192.168.100.86:9092","192.168.100.87:9092"]
  topic: elk
  required_acks: 1
  username: "producer"
  password: "chilu@rljie"
EOF

echo "##### Start filebeat in the background with nohup #####"
[root@es83 filebeat-7.2.0-linux-x86_64]# nohup ./filebeat -e -c filebeat.yml &
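Once a few lines land in /var/log/access.log, the whole chain can be verified from the es side by looking for the daily index that the logstash output above creates:

[root@es83 filebeat-7.2.0-linux-x86_64]# curl --tlsv1 -k -u elastic:NvOBRGpUE3DoaSbYaUp3 "https://192.168.100.83:9200/_cat/indices/chilu_elk*?v"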

Reference:

Filebeat+Kafka+ELK production deployment (security hardening): https://mp.weixin.qq.com/s/sJJYgzN6y5L5DBAQOvUq3A
