055. SkyWalking Cluster Environment Setup


1. Environment Preparation

1.1. Three Servers for the SkyWalking Deployment

1.1.1. Servers

  • 10.1.62.78
  • 10.1.62.79
  • 10.1.62.80

1.1.2. Required Ports

  • 11800 (gRPC data collection and communication between cluster nodes)
  • 12800 (SkyWalking UI queries and HTTP data collection)
  • 8080 (SkyWalking UI port)
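
If a host firewall such as firewalld is running on these servers (an assumption; the original notes do not say), the ports can be opened roughly like this:

    firewall-cmd --permanent --add-port=11800/tcp
    firewall-cmd --permanent --add-port=12800/tcp
    firewall-cmd --permanent --add-port=8080/tcp
    firewall-cmd --reload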

1.1.3. JDK Required
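
The start-up commands later in this guide use JDK 8 at /usr/local/java/jdk1.8.0_152; verify a JDK is available on every node, for example:

    /usr/local/java/jdk1.8.0_152/bin/java -version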

1.1.4. Create the skywalking User

  • Run on all three servers.

    useradd skywalking
    passwd skywalking

1.2. Nacos Cluster

  • 10.1.62.78:8848
  • 10.1.63.117:8848
  • 10.1.63.118:8848

1.3. ElasticSearch Cluster

  • 10.1.63.116:9200
  • 10.1.63.116:9201
  • 10.1.63.117:9200

1.4. Nginx

  • 10.1.62.78

2. Installation

  • **Unless explicitly stated otherwise, all subsequent steps are executed on 10.1.62.78 as the skywalking user.**

2.1. Passwordless SSH Login to the Other Nodes

  • Run ssh-keygen -t rsa to generate a key pair.

  • Set up passwordless login.

    ssh-copy-id -p 19222 skywalking@10.1.62.78
    ssh-copy-id -p 19222 skywalking@10.1.62.79
    ssh-copy-id -p 19222 skywalking@10.1.62.80
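
  • A quick sanity check that passwordless login works before continuing (a minimal sketch; -o BatchMode=yes makes ssh fail instead of prompting for a password):

    ssh -p 19222 -o BatchMode=yes skywalking@10.1.62.78 hostname
    ssh -p 19222 -o BatchMode=yes skywalking@10.1.62.79 hostname
    ssh -p 19222 -o BatchMode=yes skywalking@10.1.62.80 hostname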
    

2.2. Configure Environment Variables

2.2.1. Create the Environment Variable Script

cd /home/skywalking

cat > environment.sh << EOF
#!/usr/bin/bash

# Array of SkyWalking cluster node IPs
export SKYWALKING_NODE_IPS=(10.1.62.78 10.1.62.79 10.1.62.80)
export SKYWALKING_NODES="10.1.62.78:12800,10.1.62.79:12800,10.1.62.80:12800"

# Nacos cluster node IPs and ports
export NACOS_NODE_IPS=(10.1.62.78 10.1.63.117 10.1.63.118)
export NACOS_NODE_PORTS=(8848 8848 8848)
export SW_CLUSTER_NACOS_HOST_PORT="10.1.62.78:8848,10.1.63.117:8848,10.1.63.118:8848"

# ES cluster node IPs and ports
export ELASTICSEARCH_NODE_IPS=(10.1.63.116 10.1.63.116 10.1.63.117)
export ELASTICSEARCH_NODE_PORTS=(9200 9201 9200)
# ES cluster name
export ELASTICSEARCH_CLUSTER_NAME="elasticsearch-cluster"
# ES cluster nodes (IP:port, comma separated)
export ELASTICSEARCH_CLUSTER_NODES="10.1.63.116:9200,10.1.63.116:9201,10.1.63.117:9200"
# ES username
export ELASTICSEARCH_CLUSTER_USER="elastic"
# ES password
export ELASTICSEARCH_CLUSTER_PASSWORD="elastic"

EOF
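
A quick sanity check that the script loads and the variables are what you expect:

source /home/skywalking/environment.sh
echo "${SKYWALKING_NODES}"
echo "${SW_CLUSTER_NACOS_HOST_PORT}"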

2.2.2. Distribute the Script to the Other Servers

source /home/skywalking/environment.sh

for node_ip in ${SKYWALKING_NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    scp -P 19222 /home/skywalking/environment.sh skywalking@${node_ip}:/home/skywalking/
  done

2.3. Environment Checks

2.3.1. Nacos Cluster Check

source /home/skywalking/environment.sh

for i in "${!NACOS_NODE_IPS[@]}";
  do
    echo ">>> ${NACOS_NODE_IPS[$i]}"
    echo -e "\n" | telnet ${NACOS_NODE_IPS[$i]} ${NACOS_NODE_PORTS[$i]} | grep Connected
  done
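
If telnet is not installed, the same reachability test can be done with bash's built-in /dev/tcp (a minimal sketch, equivalent to the probe above):

source /home/skywalking/environment.sh

for i in "${!NACOS_NODE_IPS[@]}";
  do
    if timeout 3 bash -c "cat < /dev/null > /dev/tcp/${NACOS_NODE_IPS[$i]}/${NACOS_NODE_PORTS[$i]}"; then
      echo "${NACOS_NODE_IPS[$i]}:${NACOS_NODE_PORTS[$i]} reachable"
    else
      echo "${NACOS_NODE_IPS[$i]}:${NACOS_NODE_PORTS[$i]} NOT reachable"
    fi
  done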

2.3.2. ES Cluster Check

source /home/skywalking/environment.sh

for i in "${!ELASTICSEARCH_NODE_IPS[@]}";
  do
    echo ">>> ${ELASTICSEARCH_NODE_IPS[$i]}:${ELASTICSEARCH_NODE_PORTS[$i]}"
    echo -e "\n" | telnet ${ELASTICSEARCH_NODE_IPS[$i]} ${ELASTICSEARCH_NODE_PORTS[$i]} | grep Connected
  done
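
Beyond the TCP probe, the ES cluster health can be checked over HTTP with the credentials from environment.sh (a sketch; assumes basic authentication is enabled on the cluster):

source /home/skywalking/environment.sh

curl -s -u "${ELASTICSEARCH_CLUSTER_USER}:${ELASTICSEARCH_CLUSTER_PASSWORD}" \
  "http://10.1.63.116:9200/_cluster/health?pretty"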

2.4. Download and Extract the Official SkyWalking Package

2.4.1. Download

mkdir /home/skywalking/Softwares
cd /home/skywalking/Softwares/
wget https://mirror.bit.edu.cn/apache/skywalking/8.1.0/apache-skywalking-apm-es7-8.1.0.tar.gz
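
Optionally, verify the download against the official SHA-512 checksum (assuming the release is still available on the Apache archive; compare the two digests by eye, since the Apache-provided file may not be in sha512sum -c format):

wget https://archive.apache.org/dist/skywalking/8.1.0/apache-skywalking-apm-es7-8.1.0.tar.gz.sha512
sha512sum apache-skywalking-apm-es7-8.1.0.tar.gz
cat apache-skywalking-apm-es7-8.1.0.tar.gz.sha512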

2.4.2. Distribute to the Other Servers

source /home/skywalking/environment.sh

for node_ip in ${SKYWALKING_NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh -p 19222 skywalking@${node_ip} "mkdir -p /home/skywalking/Softwares"
    scp -P 19222 /home/skywalking/Softwares/apache-skywalking-apm-es7-8.1.0.tar.gz skywalking@${node_ip}:/home/skywalking/Softwares/
  done

2.4.3. Extract

source /home/skywalking/environment.sh

for node_ip in ${SKYWALKING_NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh -p 19222 skywalking@${node_ip} "tar zxvf /home/skywalking/Softwares/apache-skywalking-apm-es7-8.1.0.tar.gz -C /home/skywalking/"
  done

2.5. SkyWalking OAP Setup

2.5.1. Back Up the OAP Configuration File

source /home/skywalking/environment.sh

for node_ip in ${SKYWALKING_NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh -p 19222 skywalking@${node_ip} "mv /home/skywalking/apache-skywalking-apm-bin-es7/config/application.yml /home/skywalking/apache-skywalking-apm-bin-es7/config/application.yml.bak"
  done

2.5.2. Create a New OAP Configuration File

cd /home/skywalking/apache-skywalking-apm-bin-es7/config/
touch application.yml

2.5.3. Edit application.yml with vi and Write the Following Content

  • If indentation or comments get mangled while pasting, toggle :set paste and :set nopaste in vi.

    # Licensed to the Apache Software Foundation (ASF) under one or more
    # contributor license agreements. See the NOTICE file distributed with
    # this work for additional information regarding copyright ownership.
    # The ASF licenses this file to You under the Apache License, Version 2.0
    # (the "License"); you may not use this file except in compliance with
    # the License. You may obtain a copy of the License at
    #
    #     http://www.apache.org/licenses/LICENSE-2.0
    #
    # Unless required by applicable law or agreed to in writing, software
    # distributed under the License is distributed on an "AS IS" BASIS,
    # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    # See the License for the specific language governing permissions and
    # limitations under the License.

    cluster:
      selector: ##SW_CLUSTER_SELECTOR##
      standalone:
      # Please check your ZooKeeper is 3.5+, However, it is also compatible with ZooKeeper 3.4.x. Replace the ZooKeeper 3.5+
      # library the oap-libs folder with your ZooKeeper 3.4.x library.
      zookeeper:
        nameSpace: ${SW_NAMESPACE:""}
        hostPort: ${SW_CLUSTER_ZK_HOST_PORT:localhost:2181}
        # Retry Policy
        baseSleepTimeMs: ${SW_CLUSTER_ZK_SLEEP_TIME:1000} # initial amount of time to wait between retries
        maxRetries: ${SW_CLUSTER_ZK_MAX_RETRIES:3} # max number of times to retry
        # Enable ACL
        enableACL: ${SW_ZK_ENABLE_ACL:false} # disable ACL in default
        schema: ${SW_ZK_SCHEMA:digest} # only support digest schema
        expression: ${SW_ZK_EXPRESSION:skywalking:skywalking}
      kubernetes:
        namespace: ${SW_CLUSTER_K8S_NAMESPACE:default}
        labelSelector: ${SW_CLUSTER_K8S_LABEL:app=collector,release=skywalking}
        uidEnvName: ${SW_CLUSTER_K8S_UID:SKYWALKING_COLLECTOR_UID}
      consul:
        serviceName: ${SW_SERVICE_NAME:"SkyWalking_OAP_Cluster"}
        # Consul cluster nodes, example: 10.0.0.1:8500,10.0.0.2:8500,10.0.0.3:8500
        hostPort: ${SW_CLUSTER_CONSUL_HOST_PORT:localhost:8500}
        aclToken: ${SW_CLUSTER_CONSUL_ACLTOKEN:""}
      etcd:
        serviceName: ${SW_SERVICE_NAME:"SkyWalking_OAP_Cluster"}
        # etcd cluster nodes, example: 10.0.0.1:2379,10.0.0.2:2379,10.0.0.3:2379
        hostPort: ${SW_CLUSTER_ETCD_HOST_PORT:localhost:2379}
      nacos:
        serviceName: ##SW_NACOS_SERVICE_NAME##
        hostPort: ##SW_CLUSTER_NACOS_HOST_PORT##
        # Nacos Configuration namespace
        namespace: ##SW_CLUSTER_NACOS_NAMESPACE##
        # the host registered and other oap node use this to communicate with current node
        internalComHost: ##SW_CLUSTER_NACOS_INTERNAL_COM_HOST##
        # the port registered and other oap node use this to communicate with current node
        internalComPort: ##SW_CLUSTER_NACOS_INTERNAL_COM_PORT##

    core:
      selector: ${SW_CORE:default}
      default:
        # Mixed: Receive agent data, Level 1 aggregate, Level 2 aggregate
        # Receiver: Receive agent data, Level 1 aggregate
        # Aggregator: Level 2 aggregate
        role: ${SW_CORE_ROLE:Mixed} # Mixed/Receiver/Aggregator
        restHost: ${SW_CORE_REST_HOST:0.0.0.0}
        restPort: ${SW_CORE_REST_PORT:12800}
        restContextPath: ${SW_CORE_REST_CONTEXT_PATH:/}
        restMinThreads: ${SW_CORE_REST_JETTY_MIN_THREADS:1}
        restMaxThreads: ${SW_CORE_REST_JETTY_MAX_THREADS:200}
        restIdleTimeOut: ${SW_CORE_REST_JETTY_IDLE_TIMEOUT:30000}
        restAcceptorPriorityDelta: ${SW_CORE_REST_JETTY_DELTA:0}
        restAcceptQueueSize: ${SW_CORE_REST_JETTY_QUEUE_SIZE:0}
        gRPCHost: ${SW_CORE_GRPC_HOST:0.0.0.0}
        gRPCPort: ${SW_CORE_GRPC_PORT:11800}
        gRPCSslEnabled: ${SW_CORE_GRPC_SSL_ENABLED:false}
        gRPCSslKeyPath: ${SW_CORE_GRPC_SSL_KEY_PATH:""}
        gRPCSslCertChainPath: ${SW_CORE_GRPC_SSL_CERT_CHAIN_PATH:""}
        gRPCSslTrustedCAPath: ${SW_CORE_GRPC_SSL_TRUSTED_CA_PATH:""}
        downsampling:
          - Hour
          - Day
        # Set a timeout on metrics data. After the timeout has expired, the metrics data will automatically be deleted.
        enableDataKeeperExecutor: ${SW_CORE_ENABLE_DATA_KEEPER_EXECUTOR:true} # Turn it off then automatically metrics data delete will be close.
        dataKeeperExecutePeriod: ${SW_CORE_DATA_KEEPER_EXECUTE_PERIOD:5} # How often the data keeper executor runs periodically, unit is minute
        recordDataTTL: ${SW_CORE_RECORD_DATA_TTL:3} # Unit is day
        metricsDataTTL: ${SW_CORE_METRICS_DATA_TTL:7} # Unit is day
        # Cache metrics data for 1 minute to reduce database queries, and if the OAP cluster changes within that minute,
        # the metrics may not be accurate within that minute.
        enableDatabaseSession: ${SW_CORE_ENABLE_DATABASE_SESSION:true}
        topNReportPeriod: ${SW_CORE_TOPN_REPORT_PERIOD:10} # top_n record worker report cycle, unit is minute
        # Extra model column are the column defined by in the codes, These columns of model are not required logically in aggregation or further query,
        # and it will cause more load for memory, network of OAP and storage.
        # But, being activated, user could see the name in the storage entities, which make users easier to use 3rd party tool, such as Kibana->ES, to query the data by themselves.
        activeExtraModelColumns: ${SW_CORE_ACTIVE_EXTRA_MODEL_COLUMNS:false}
        # The max length of service + instance names should be less than 200
        serviceNameMaxLength: ${SW_SERVICE_NAME_MAX_LENGTH:70}
        instanceNameMaxLength: ${SW_INSTANCE_NAME_MAX_LENGTH:70}
        # The max length of service + endpoint names should be less than 240
        endpointNameMaxLength: ${SW_ENDPOINT_NAME_MAX_LENGTH:150}

    storage:
      selector: ##SW_STORAGE_SELECTOR##
      elasticsearch:
        nameSpace: ${SW_NAMESPACE:""}
        clusterNodes: ${SW_STORAGE_ES_CLUSTER_NODES:localhost:9200}
        protocol: ${SW_STORAGE_ES_HTTP_PROTOCOL:"http"}
        trustStorePath: ${SW_STORAGE_ES_SSL_JKS_PATH:""}
        trustStorePass: ${SW_STORAGE_ES_SSL_JKS_PASS:""}
        user: ${SW_ES_USER:""}
        password: ${SW_ES_PASSWORD:""}
        secretsManagementFile: ${SW_ES_SECRETS_MANAGEMENT_FILE:""} # Secrets management file in the properties format includes the username, password, which are managed by 3rd party tool.
        dayStep: ${SW_STORAGE_DAY_STEP:1} # Represent the number of days in the one minute/hour/day index.
        indexShardsNumber: ${SW_STORAGE_ES_INDEX_SHARDS_NUMBER:1} # Shard number of new indexes
        superDatasetIndexShardsFactor: ${SW_STORAGE_ES_SUPER_DATASET_INDEX_SHARDS_FACTOR:5} # Super data set has been defined in the codes, such as trace segments. This factor provides more shards for the super data set, shards number = indexShardsNumber * superDatasetIndexShardsFactor. Also, this factor effects Zipkin and Jaeger traces.
        indexReplicasNumber: ${SW_STORAGE_ES_INDEX_REPLICAS_NUMBER:0}
        bulkActions: ${SW_STORAGE_ES_BULK_ACTIONS:1000} # Execute the bulk every 1000 requests
        flushInterval: ${SW_STORAGE_ES_FLUSH_INTERVAL:10} # flush the bulk every 10 seconds whatever the number of requests
        concurrentRequests: ${SW_STORAGE_ES_CONCURRENT_REQUESTS:2} # the number of concurrent requests
        resultWindowMaxSize: ${SW_STORAGE_ES_QUERY_MAX_WINDOW_SIZE:10000}
        metadataQueryMaxSize: ${SW_STORAGE_ES_QUERY_MAX_SIZE:5000}
        segmentQueryMaxSize: ${SW_STORAGE_ES_QUERY_SEGMENT_SIZE:200}
        profileTaskQueryMaxSize: ${SW_STORAGE_ES_QUERY_PROFILE_TASK_SIZE:200}
        advanced: ${SW_STORAGE_ES_ADVANCED:""}
      elasticsearch7:
        nameSpace: ##SW_NAMESPACE_ES7##
        clusterNodes: ##SW_STORAGE_ES_CLUSTER_NODES##
        protocol: ${SW_STORAGE_ES_HTTP_PROTOCOL:"http"}
        trustStorePath: ${SW_STORAGE_ES_SSL_JKS_PATH:""}
        trustStorePass: ${SW_STORAGE_ES_SSL_JKS_PASS:""}
        dayStep: ${SW_STORAGE_DAY_STEP:1} # Represent the number of days in the one minute/hour/day index.
        user: ##SW_ES_USER##
        password: ##SW_ES_PASSWORD##
        secretsManagementFile: ${SW_ES_SECRETS_MANAGEMENT_FILE:""} # Secrets management file in the properties format includes the username, password, which are managed by 3rd party tool.
        indexShardsNumber: ${SW_STORAGE_ES_INDEX_SHARDS_NUMBER:1} # Shard number of new indexes
        superDatasetIndexShardsFactor: ${SW_STORAGE_ES_SUPER_DATASET_INDEX_SHARDS_FACTOR:5} # Super data set has been defined in the codes, such as trace segments. This factor provides more shards for the super data set, shards number = indexShardsNumber * superDatasetIndexShardsFactor. Also, this factor effects Zipkin and Jaeger traces.
        indexReplicasNumber: ${SW_STORAGE_ES_INDEX_REPLICAS_NUMBER:0}
        bulkActions: ${SW_STORAGE_ES_BULK_ACTIONS:1000} # Execute the bulk every 1000 requests
        flushInterval: ${SW_STORAGE_ES_FLUSH_INTERVAL:10} # flush the bulk every 10 seconds whatever the number of requests
        concurrentRequests: ${SW_STORAGE_ES_CONCURRENT_REQUESTS:2} # the number of concurrent requests
        resultWindowMaxSize: ${SW_STORAGE_ES_QUERY_MAX_WINDOW_SIZE:10000}
        metadataQueryMaxSize: ${SW_STORAGE_ES_QUERY_MAX_SIZE:5000}
        segmentQueryMaxSize: ${SW_STORAGE_ES_QUERY_SEGMENT_SIZE:200}
        profileTaskQueryMaxSize: ${SW_STORAGE_ES_QUERY_PROFILE_TASK_SIZE:200}
        advanced: ${SW_STORAGE_ES_ADVANCED:""}
      h2:
        driver: ${SW_STORAGE_H2_DRIVER:org.h2.jdbcx.JdbcDataSource}
        url: ${SW_STORAGE_H2_URL:jdbc:h2:mem:skywalking-oap-db}
        user: ${SW_STORAGE_H2_USER:sa}
        metadataQueryMaxSize: ${SW_STORAGE_H2_QUERY_MAX_SIZE:5000}
      mysql:
        properties:
          jdbcUrl: ${SW_JDBC_URL:"jdbc:mysql://localhost:3306/swtest"}
          dataSource.user: ${SW_DATA_SOURCE_USER:root}
          dataSource.password: ${SW_DATA_SOURCE_PASSWORD:root@1234}
          dataSource.cachePrepStmts: ${SW_DATA_SOURCE_CACHE_PREP_STMTS:true}
          dataSource.prepStmtCacheSize: ${SW_DATA_SOURCE_PREP_STMT_CACHE_SQL_SIZE:250}
          dataSource.prepStmtCacheSqlLimit: ${SW_DATA_SOURCE_PREP_STMT_CACHE_SQL_LIMIT:2048}
          dataSource.useServerPrepStmts: ${SW_DATA_SOURCE_USE_SERVER_PREP_STMTS:true}
        metadataQueryMaxSize: ${SW_STORAGE_MYSQL_QUERY_MAX_SIZE:5000}
      influxdb:
        # InfluxDB configuration
        url: ${SW_STORAGE_INFLUXDB_URL:http://localhost:8086}
        user: ${SW_STORAGE_INFLUXDB_USER:root}
        password: ${SW_STORAGE_INFLUXDB_PASSWORD:}
        database: ${SW_STORAGE_INFLUXDB_DATABASE:skywalking}
        actions: ${SW_STORAGE_INFLUXDB_ACTIONS:1000} # the number of actions to collect
        duration: ${SW_STORAGE_INFLUXDB_DURATION:1000} # the time to wait at most (milliseconds)
        fetchTaskLogMaxSize: ${SW_STORAGE_INFLUXDB_FETCH_TASK_LOG_MAX_SIZE:5000} # the max number of fetch task log in a request

    agent-analyzer:
      selector: ${SW_AGENT_ANALYZER:default}
      default:
        sampleRate: ${SW_TRACE_SAMPLE_RATE:10000} # The sample rate precision is 1/10000. 10000 means 100% sample in default.
        slowDBAccessThreshold: ${SW_SLOW_DB_THRESHOLD:default:200,mongodb:100} # The slow database access thresholds. Unit ms.

    receiver-sharing-server:
      selector: ${SW_RECEIVER_SHARING_SERVER:default}
      default:
        host: ${SW_RECEIVER_JETTY_HOST:0.0.0.0}
        contextPath: ${SW_RECEIVER_JETTY_CONTEXT_PATH:/}
        authentication: ${SW_AUTHENTICATION:""}
        jettyMinThreads: ${SW_RECEIVER_SHARING_JETTY_MIN_THREADS:1}
        jettyMaxThreads: ${SW_RECEIVER_SHARING_JETTY_MAX_THREADS:200}
        jettyIdleTimeOut: ${SW_RECEIVER_SHARING_JETTY_IDLE_TIMEOUT:30000}
        jettyAcceptorPriorityDelta: ${SW_RECEIVER_SHARING_JETTY_DELTA:0}
        jettyAcceptQueueSize: ${SW_RECEIVER_SHARING_JETTY_QUEUE_SIZE:0}

    receiver-register:
      selector: ${SW_RECEIVER_REGISTER:default}
      default:

    receiver-trace:
      selector: ${SW_RECEIVER_TRACE:default}
      default:

    receiver-jvm:
      selector: ${SW_RECEIVER_JVM:default}
      default:

    receiver-clr:
      selector: ${SW_RECEIVER_CLR:default}
      default:

    receiver-profile:
      selector: ${SW_RECEIVER_PROFILE:default}
      default:

    service-mesh:
      selector: ${SW_SERVICE_MESH:default}
      default:

    istio-telemetry:
      selector: ${SW_ISTIO_TELEMETRY:default}
      default:

    envoy-metric:
      selector: ${SW_ENVOY_METRIC:default}
      default:
        acceptMetricsService: ${SW_ENVOY_METRIC_SERVICE:true}
        alsHTTPAnalysis: ${SW_ENVOY_METRIC_ALS_HTTP_ANALYSIS:""}

    prometheus-fetcher:
      selector: ${SW_PROMETHEUS_FETCHER:default}
      default:
        active: ${SW_PROMETHEUS_FETCHER_ACTIVE:false}

    kafka-fetcher:
      selector: ${SW_KAFKA_FETCHER:-}
      default:
        bootstrapServers: ${SW_KAFKA_FETCHER_SERVERS:localhost:9092}
        partitions: ${SW_KAFKA_FETCHER_PARTITIONS:3}
        replicationFactor: ${SW_KAFKA_FETCHER_PARTITIONS_FACTOR:2}
        enableMeterSystem: ${SW_KAFKA_FETCHER_ENABLE_METER_SYSTEM:false}
        isSharding: ${SW_KAFKA_FETCHER_IS_SHARDING:false}
        consumePartitions: ${SW_KAFKA_FETCHER_CONSUME_PARTITIONS:""}

    receiver-meter:
      selector: ${SW_RECEIVER_METER:-}
      default:

    receiver-oc:
      selector: ${SW_OC_RECEIVER:-}
      default:
        gRPCHost: ${SW_OC_RECEIVER_GRPC_HOST:0.0.0.0}
        gRPCPort: ${SW_OC_RECEIVER_GRPC_PORT:55678}

    receiver_zipkin:
      selector: ${SW_RECEIVER_ZIPKIN:-}
      default:
        host: ${SW_RECEIVER_ZIPKIN_HOST:0.0.0.0}
        port: ${SW_RECEIVER_ZIPKIN_PORT:9411}
        contextPath: ${SW_RECEIVER_ZIPKIN_CONTEXT_PATH:/}
        jettyMinThreads: ${SW_RECEIVER_ZIPKIN_JETTY_MIN_THREADS:1}
        jettyMaxThreads: ${SW_RECEIVER_ZIPKIN_JETTY_MAX_THREADS:200}
        jettyIdleTimeOut: ${SW_RECEIVER_ZIPKIN_JETTY_IDLE_TIMEOUT:30000}
        jettyAcceptorPriorityDelta: ${SW_RECEIVER_ZIPKIN_JETTY_DELTA:0}
        jettyAcceptQueueSize: ${SW_RECEIVER_ZIPKIN_QUEUE_SIZE:0}

    receiver_jaeger:
      selector: ${SW_RECEIVER_JAEGER:-}
      default:
        gRPCHost: ${SW_RECEIVER_JAEGER_HOST:0.0.0.0}
        gRPCPort: ${SW_RECEIVER_JAEGER_PORT:14250}

    query:
      selector: ${SW_QUERY:graphql}
      graphql:
        path: ${SW_QUERY_GRAPHQL_PATH:/graphql}

    alarm:
      selector: ${SW_ALARM:default}
      default:

    telemetry:
      selector: ${SW_TELEMETRY:none}
      none:
      prometheus:
        host: ${SW_TELEMETRY_PROMETHEUS_HOST:0.0.0.0}
        port: ${SW_TELEMETRY_PROMETHEUS_PORT:1234}

    configuration:
      selector: ${SW_CONFIGURATION:none}
      none:
      grpc:
        host: ${SW_DCS_SERVER_HOST:""}
        port: ${SW_DCS_SERVER_PORT:80}
        clusterName: ${SW_DCS_CLUSTER_NAME:SkyWalking}
        period: ${SW_DCS_PERIOD:20}
      apollo:
        apolloMeta: ${SW_CONFIG_APOLLO:http://106.12.25.204:8080}
        apolloCluster: ${SW_CONFIG_APOLLO_CLUSTER:default}
        apolloEnv: ${SW_CONFIG_APOLLO_ENV:""}
        appId: ${SW_CONFIG_APOLLO_APP_ID:skywalking}
        period: ${SW_CONFIG_APOLLO_PERIOD:5}
      zookeeper:
        period: ${SW_CONFIG_ZK_PERIOD:60} # Unit seconds, sync period. Default fetch every 60 seconds.
        nameSpace: ${SW_CONFIG_ZK_NAMESPACE:/default}
        hostPort: ${SW_CONFIG_ZK_HOST_PORT:localhost:2181}
        # Retry Policy
        baseSleepTimeMs: ${SW_CONFIG_ZK_BASE_SLEEP_TIME_MS:1000} # initial amount of time to wait between retries
        maxRetries: ${SW_CONFIG_ZK_MAX_RETRIES:3} # max number of times to retry
      etcd:
        period: ${SW_CONFIG_ETCD_PERIOD:60} # Unit seconds, sync period. Default fetch every 60 seconds.
        group: ${SW_CONFIG_ETCD_GROUP:skywalking}
        serverAddr: ${SW_CONFIG_ETCD_SERVER_ADDR:localhost:2379}
        clusterName: ${SW_CONFIG_ETCD_CLUSTER_NAME:default}
      consul:
        # Consul host and ports, separated by comma, e.g. 1.2.3.4:8500,2.3.4.5:8500
        hostAndPorts: ${SW_CONFIG_CONSUL_HOST_AND_PORTS:1.2.3.4:8500}
        # Sync period in seconds. Defaults to 60 seconds.
        period: ${SW_CONFIG_CONSUL_PERIOD:60}
        # Consul aclToken
        aclToken: ${SW_CONFIG_CONSUL_ACL_TOKEN:""}
      k8s-configmap:
        period: ${SW_CONFIG_CONFIGMAP_PERIOD:60}
        namespace: ${SW_CLUSTER_K8S_NAMESPACE:default}
        labelSelector: ${SW_CLUSTER_K8S_LABEL:app=collector,release=skywalking}
      nacos:
        # Nacos Server Host
        serverAddr: ${SW_CONFIG_NACOS_SERVER_ADDR:127.0.0.1}
        # Nacos Server Port
        port: ${SW_CONFIG_NACOS_SERVER_PORT:8848}
        # Nacos Configuration Group
        group: ${SW_CONFIG_NACOS_SERVER_GROUP:skywalking}
        # Nacos Configuration namespace
        namespace: ${SW_CONFIG_NACOS_SERVER_NAMESPACE:}
        # Unit seconds, sync period. Default fetch every 60 seconds.
        period: ${SW_CONFIG_NACOS_PERIOD:60}

    exporter:
      selector: ${SW_EXPORTER:-}
      grpc:
        targetHost: ${SW_EXPORTER_GRPC_HOST:127.0.0.1}
        targetPort: ${SW_EXPORTER_GRPC_PORT:9870}

    health-checker:
      selector: ${SW_HEALTH_CHECKER:-}
      default:
        checkIntervalSeconds: ${SW_HEALTH_CHECKER_INTERVAL_SECONDS:5}

2.5.4. Modify the OAP Configuration File

source /home/skywalking/environment.sh

cd /home/skywalking/apache-skywalking-apm-bin-es7/config

sed -i -e 's/##SW_CLUSTER_SELECTOR##/nacos/g' application.yml
sed -i -e 's/##SW_NACOS_SERVICE_NAME##/SkyWalking-OAP-Cluster/g' application.yml
sed -i -e 's/##SW_CLUSTER_NACOS_HOST_PORT##/'${SW_CLUSTER_NACOS_HOST_PORT}'/g' application.yml
sed -i -e 's/##SW_CLUSTER_NACOS_NAMESPACE##/public/g' application.yml
sed -i -e 's/##SW_CLUSTER_NACOS_INTERNAL_COM_PORT##/11800/g' application.yml

sed -i -e 's/##SW_STORAGE_SELECTOR##/elasticsearch7/g' application.yml
sed -i -e 's/##SW_NAMESPACE_ES7##/'${ELASTICSEARCH_CLUSTER_NAME}'/g' application.yml
sed -i -e 's/##SW_STORAGE_ES_CLUSTER_NODES##/'${ELASTICSEARCH_CLUSTER_NODES}'/g' application.yml
sed -i -e 's/##SW_ES_USER##/'${ELASTICSEARCH_CLUSTER_USER}'/g' application.yml
sed -i -e 's/##SW_ES_PASSWORD##/'${ELASTICSEARCH_CLUSTER_PASSWORD}'/g' application.yml
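
After these replacements, the only placeholder left in application.yml should be ##SW_CLUSTER_NACOS_INTERNAL_COM_HOST##, which is filled in per node in step 2.5.6; a quick check:

grep -n '##' application.yml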

2.5.5. Distribute the OAP Configuration File

source /home/skywalking/environment.sh

for node_ip in ${SKYWALKING_NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    scp -P 19222 /home/skywalking/apache-skywalking-apm-bin-es7/config/application.yml skywalking@${node_ip}:/home/skywalking/apache-skywalking-apm-bin-es7/config/
  done

2.5.6. Per-Server Configuration Changes

source /home/skywalking/environment.sh

for node_ip in ${SKYWALKING_NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh -p 19222 skywalking@${node_ip} "sed -i -e 's/##SW_CLUSTER_NACOS_INTERNAL_COM_HOST##/'${node_ip}'/g' /home/skywalking/apache-skywalking-apm-bin-es7/config/application.yml"
  done

2.5.7. Start the OAP Service

source /home/skywalking/environment.sh

for node_ip in ${SKYWALKING_NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh -p 19222 skywalking@${node_ip} "export PATH=\$PATH:/usr/local/java/jdk1.8.0_152/bin/ && /home/skywalking/apache-skywalking-apm-bin-es7/bin/oapService.sh"
  done
  • Adjust the JDK path to match your installation.
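
  • Once the OAP processes have finished starting, each node should be listening on the gRPC (11800) and REST (12800) ports; a quick check on any node (a sketch, assuming ss from iproute2 is available):

    ss -ltn | grep -E '11800|12800'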

2.5.8. Check the Logs

tail -f /home/skywalking/apache-skywalking-apm-bin-es7/logs/skywalking-oap-server.log

2.6. SkyWalking UI Setup

2.6.1. Back Up the UI Configuration File

source /home/skywalking/environment.sh

for node_ip in ${SKYWALKING_NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh -p 19222 skywalking@${node_ip} "mv /home/skywalking/apache-skywalking-apm-bin-es7/webapp/webapp.yml /home/skywalking/apache-skywalking-apm-bin-es7/webapp/webapp.yml.bak"
  done

2.6.2. Create a New UI Configuration File

cd /home/skywalking/apache-skywalking-apm-bin-es7/webapp
touch webapp.yml

2.6.3. Edit webapp.yml with vi and Write the Following Content

  • If indentation or comments get mangled while pasting, toggle :set paste and :set nopaste in vi.

    # Licensed to the Apache Software Foundation (ASF) under one
    # or more contributor license agreements. See the NOTICE file
    # distributed with this work for additional information
    # regarding copyright ownership. The ASF licenses this file
    # to you under the Apache License, Version 2.0 (the
    # "License"); you may not use this file except in compliance
    # with the License. You may obtain a copy of the License at
    #
    #     http://www.apache.org/licenses/LICENSE-2.0
    #
    # Unless required by applicable law or agreed to in writing, software
    # distributed under the License is distributed on an "AS IS" BASIS,
    # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    # See the License for the specific language governing permissions and
    # limitations under the License.

    server:
      port: ##SW_UI_PORT##

    collector:
      path: /graphql
      ribbon:
        ReadTimeout: 10000
        # Point to all backend's restHost:restPort, split by ,
        listOfServers: ##SW_SERVERS_LIST##

2.6.4. Modify the UI Configuration File

source /home/skywalking/environment.sh

cd /home/skywalking/apache-skywalking-apm-bin-es7/webapp/

sed -i -e 's/##SW_UI_PORT##/8080/g' webapp.yml
sed -i -e 's/##SW_SERVERS_LIST##/'${SKYWALKING_NODES}'/g' webapp.yml

2.6.5. Distribute the UI Configuration File

source /home/skywalking/environment.sh

for node_ip in ${SKYWALKING_NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    scp -P 19222 /home/skywalking/apache-skywalking-apm-bin-es7/webapp/webapp.yml skywalking@${node_ip}:/home/skywalking/apache-skywalking-apm-bin-es7/webapp/
  done

2.6.6. Start the UI Service

source /home/skywalking/environment.sh

for node_ip in ${SKYWALKING_NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh -p 19222 skywalking@${node_ip} "export PATH=\$PATH:/usr/local/java/jdk1.8.0_152/bin/ && /home/skywalking/apache-skywalking-apm-bin-es7/bin/webappService.sh"
  done
  • Adjust the JDK path to match your installation.
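
  • A quick check that each UI instance responds (a sketch; the UI can take some seconds to start):

    curl -I http://10.1.62.78:8080
    curl -I http://10.1.62.79:8080
    curl -I http://10.1.62.80:8080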

2.6.7. Check the Logs

tail -f /home/skywalking/apache-skywalking-apm-bin-es7/logs/webapp.log

2.7. Configure the Nginx Reverse Proxy

  • Perform these steps as the user that runs Nginx.

2.7.1. Create the Nginx Log Directory

mkdir -p /home/skywalking/logs/nginx

2.7.2. Create the Configuration File

cd /usr/local/nginx-1.19.2/conf/conf.d
touch skywalking.conf
  • Adjust the Nginx configuration directory to match your installation.

2.7.3. Edit skywalking.conf with vi and Write the Following Content

  • If indentation or comments get mangled while pasting, toggle :set paste and :set nopaste in vi.

    upstream skywalking.com {
        server 10.1.62.78:8080;
        server 10.1.62.79:8080;
        server 10.1.62.80:8080;
    }

    server {
        listen       8090;
        server_name  10.1.62.78;
        access_log   /home/skywalking/logs/nginx/access_skywalking.log main;

        location / {
            proxy_pass http://skywalking.com;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }

2.7.4. Apply the Configuration

/usr/local/nginx-1.19.2/sbin/nginx -s reload
  • Adjust the Nginx installation directory to match your environment.
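
  • If the reload fails or the UI is unreachable, validate the configuration and test the proxy directly (a sketch using the same Nginx path as above):

    /usr/local/nginx-1.19.2/sbin/nginx -t
    curl -I http://10.1.62.78:8090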

2.8. UI Access

Open http://10.1.62.78:8090 in a browser; Nginx load-balances the requests across the three SkyWalking UI instances.
