To verify Hive interoperability across different HDFS clusters (ultimately, to track down a bug), HiveServer2 needs to be able to route requests to the HDFS of two different Hadoop clusters while working around a few pitfalls. That requires editing the configuration files of one of the HDFS clusters, e.g. hdfs-site.xml:
<configuration>

    <!-- All logical nameservices visible to this client -->
    <property><name>dfs.nameservices</name><value>sfbd,sfbdp1,oldsfbdp1,oldsfbd</value></property>

    <!-- nameservice: sfbd -->
    <property><name>dfs.ha.namenodes.sfbd</name><value>nn1,nn2</value></property>
    <property><name>dfs.namenode.rpc-address.sfbd.nn1</name><value>CNSZ17PL1782:8020</value></property>
    <property><name>dfs.namenode.rpc-address.sfbd.nn2</name><value>CNSZ17PL1783:8020</value></property>
    <property><name>dfs.namenode.http-address.sfbd.nn1</name><value>CNSZ17PL1782:50070</value></property>
    <property><name>dfs.namenode.http-address.sfbd.nn2</name><value>CNSZ17PL1783:50070</value></property>
    <property><name>dfs.client.failover.proxy.provider.sfbd</name><value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value></property>
    <property><name>dfs.namenode.shared.edits.dir.sfbd</name><value>qjournal://CNSZ17PL1786:8485;CNSZ17PL1787:8485;CNSZ17PL1788:8485;CNSZ17PL1789:8485;CNSZ17PL1790:8485/sfbd</value></property>

    <!-- nameservice: sfbdp1 -->
    <property><name>dfs.ha.namenodes.sfbdp1</name><value>nn1,nn2</value></property>
    <property><name>dfs.namenode.rpc-address.sfbdp1.nn1</name><value>CNSZ17PL1784:8020</value></property>
    <property><name>dfs.namenode.rpc-address.sfbdp1.nn2</name><value>CNSZ17PL1785:8020</value></property>
    <property><name>dfs.namenode.http-address.sfbdp1.nn1</name><value>CNSZ17PL1784:50070</value></property>
    <property><name>dfs.namenode.http-address.sfbdp1.nn2</name><value>CNSZ17PL1785:50070</value></property>
    <property><name>dfs.client.failover.proxy.provider.sfbdp1</name><value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value></property>
    <property><name>dfs.namenode.shared.edits.dir.sfbdp1</name><value>qjournal://CNSZ17PL1786:8485;CNSZ17PL1787:8485;CNSZ17PL1788:8485;CNSZ17PL1789:8485;CNSZ17PL1790:8485/sfbdp1</value></property>

    <!-- nameservice: oldsfbdp1 -->
    <property><name>dfs.client.failover.proxy.provider.oldsfbdp1</name><value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value></property>
    <property><name>dfs.ha.namenodes.oldsfbdp1</name><value>namenode313,namenode411</value></property>
    <property><name>dfs.namenode.rpc-address.oldsfbdp1.namenode313</name><value>cnsz17pl1206:8020</value></property>
    <property><name>dfs.namenode.http-address.oldsfbdp1.namenode313</name><value>cnsz17pl1206:50070</value></property>
    <property><name>dfs.namenode.rpc-address.oldsfbdp1.namenode411</name><value>cnsz17pl1207:8020</value></property>
    <property><name>dfs.namenode.http-address.oldsfbdp1.namenode411</name><value>cnsz17pl1207:50070</value></property>

    <!-- nameservice: oldsfbd -->
    <property><name>dfs.ha.namenodes.oldsfbd</name><value>nn1,nn2</value></property>
    <property><name>dfs.namenode.rpc-address.oldsfbd.nn1</name><value>cnsz23pl0090:8020</value></property>
    <property><name>dfs.namenode.rpc-address.oldsfbd.nn2</name><value>cnsz23pl0091:8020</value></property>
    <property><name>dfs.namenode.http-address.oldsfbd.nn1</name><value>cnsz23pl0090:50070</value></property>
    <property><name>dfs.namenode.http-address.oldsfbd.nn2</name><value>cnsz23pl0091:50070</value></property>
    <property><name>dfs.client.failover.proxy.provider.oldsfbd</name><value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value></property>

    <!-- storage locations -->
    <property><name>dfs.namenode.name.dir</name><value>file:///data/dfs/nn/local</value></property>
    <property><name>dfs.datanode.data.dir</name><value>/HDATA/12/dfs/local,/HDATA/11/dfs/local,/HDATA/10/dfs/local,/HDATA/9/dfs/local,/HDATA/8/dfs/local,/HDATA/7/dfs/local,/HDATA/6/dfs/local,/HDATA/5/dfs/local,/HDATA/4/dfs/local,/HDATA/3/dfs/local,/HDATA/2/dfs/local,/HDATA/1/dfs/local</value></property>
    <property><name>dfs.journalnode.edits.dir</name><value>/data/dfs/jn</value></property>

    <!-- QJM timeouts -->
    <property><name>dfs.qjournal.start-segment.timeout.ms</name><value>60000</value></property>
    <property><name>dfs.qjournal.prepare-recovery.timeout.ms</name><value>240000</value></property>
    <property><name>dfs.qjournal.accept-recovery.timeout.ms</name><value>240000</value></property>
    <property><name>dfs.qjournal.finalize-segment.timeout.ms</name><value>240000</value></property>
    <property><name>dfs.qjournal.select-input-streams.timeout.ms</name><value>60000</value></property>
    <property><name>dfs.qjournal.get-journal-state.timeout.ms</name><value>240000</value></property>
    <property><name>dfs.qjournal.new-epoch.timeout.ms</name><value>240000</value></property>
    <property><name>dfs.qjournal.write-txns.timeout.ms</name><value>60000</value></property>

    <property><name>dfs.namenode.acls.enabled</name><value>true</value></property>
    <!-- Number of replication for each chunk. -->

    <!-- HA fencing -->
    <property><name>dfs.ha.fencing.methods</name><value>sshfence</value></property>
    <property><name>dfs.ha.fencing.ssh.private-key-files</name><value>/home/hdfs/.ssh/id_rsa</value></property>
    <property><name>dfs.ha.automatic-failover.enabled</name><value>true</value></property>

    <property><name>dfs.permissions.superusergroup</name><value>hadoop</value></property>
    <property><name>dfs.datanode.max.transfer.threads</name><value>8192</value></property>
    <property>
        <name>dfs.hosts.exclude</name>
        <value>/app/hadoop-conf/exclude.list</value>
        <description>List of nodes to decommission</description>
    </property>

    <!-- DataNode volume choosing -->
    <property><name>dfs.datanode.fsdataset.volume.choosing.policy</name><value>org.apache.hadoop.hdfs.server.datanode.fsdataset.AvailableSpaceVolumeChoosingPolicy</value></property>
    <property><name>dfs.datanode.available-space-volume-choosing-policy.balanced-space-threshold</name><value>10737418240</value></property>
    <property><name>dfs.datanode.available-space-volume-choosing-policy.balanced-space-preference-fraction</name><value>0.75</value></property>

    <!-- short-circuit local reads -->
    <property><name>dfs.client.read.shortcircuit.streams.cache.size</name><value>1000</value></property>
    <property><name>dfs.client.read.shortcircuit.streams.cache.expiry.ms</name><value>10000</value></property>
    <property><name>dfs.client.read.shortcircuit</name><value>true</value></property>
    <property><name>dfs.domain.socket.path</name><value>/app/var/run/hadoop-hdfs/dn._PORT</value></property>
    <property><name>dfs.client.read.shortcircuit.skip.checksum</name><value>false</value></property>

    <property><name>dfs.block.size</name><value>134217728</value></property>
    <property><name>dfs.replication</name><value>3</value></property>
    <property><name>dfs.namenode.handler.count</name><value>300</value></property>
    <property><name>dfs.datanode.handler.count</name><value>40</value></property>
    <property><name>dfs.webhdfs.enabled</name><value>true</value></property>
    <property><name>dfs.namenode.datanode.registration.ip-hostname-check</name><value>false</value></property>

</configuration>
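With four nameservices in one file, the keys have to stay mutually consistent: every entry in dfs.nameservices needs a matching dfs.ha.namenodes.&lt;ns&gt; list, and every NameNode ID in that list needs a dfs.namenode.rpc-address.&lt;ns&gt;.&lt;nn&gt; entry, because that is the chain the client walks when it resolves a logical URI such as hdfs://oldsfbd/path. A small script can sanity-check this before restarting anything; the snippet below is a minimal sketch with a trimmed inline config (in practice you would parse the real hdfs-site.xml from the cluster's conf directory):

```python
# Sketch: map each logical nameservice in an hdfs-site.xml to the physical
# NameNode RPC addresses it resolves to. A client follows
#   dfs.nameservices -> dfs.ha.namenodes.<ns> -> dfs.namenode.rpc-address.<ns>.<nn>
# so any break in that chain makes hdfs://<ns>/ unresolvable.
import xml.etree.ElementTree as ET

# Trimmed example; in practice read the cluster's real hdfs-site.xml.
HDFS_SITE = """
<configuration>
  <property><name>dfs.nameservices</name><value>sfbd,oldsfbd</value></property>
  <property><name>dfs.ha.namenodes.sfbd</name><value>nn1,nn2</value></property>
  <property><name>dfs.namenode.rpc-address.sfbd.nn1</name><value>CNSZ17PL1782:8020</value></property>
  <property><name>dfs.namenode.rpc-address.sfbd.nn2</name><value>CNSZ17PL1783:8020</value></property>
  <property><name>dfs.ha.namenodes.oldsfbd</name><value>nn1,nn2</value></property>
  <property><name>dfs.namenode.rpc-address.oldsfbd.nn1</name><value>cnsz23pl0090:8020</value></property>
  <property><name>dfs.namenode.rpc-address.oldsfbd.nn2</name><value>cnsz23pl0091:8020</value></property>
</configuration>
"""

def nameservice_map(xml_text):
    """Return {nameservice: {namenode_id: rpc_address}} from hdfs-site.xml text."""
    props = {}
    for prop in ET.fromstring(xml_text).iter("property"):
        props[prop.findtext("name")] = prop.findtext("value")
    result = {}
    for ns in props["dfs.nameservices"].split(","):
        namenodes = props.get(f"dfs.ha.namenodes.{ns}", "").split(",")
        result[ns] = {
            nn: props[f"dfs.namenode.rpc-address.{ns}.{nn}"]
            for nn in namenodes
            if f"dfs.namenode.rpc-address.{ns}.{nn}" in props
        }
    return result

print(nameservice_map(HDFS_SITE))
```

A nameservice that comes back with an empty inner dict is one the client cannot route, which is exactly the kind of misconfiguration that surfaces as confusing HiveServer2 errors later.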