# Preface
I was stuck on a problem for half a day today, and eventually wondered: could it be that the datanodes were unreachable? With Tuan-ge's guidance... it turned out I had made a silly mistake. Let's not dwell on it. Embarrassing, and far too careless. After these past few days, Hadoop 3.0.3 and Hive 3.0 are finally up and running. It was not easy. And now that it's set up, how do I actually use it? No idea yet....
# On to the Code
1. Download, extract, and rename

```bash
wget http://mirrors.hust.edu.cn/apache/hive/hive-3.0.0/apache-hive-3.0.0-bin.tar.gz
tar -xzvf apache-hive-3.0.0-bin.tar.gz
mv apache-hive-3.0.0-bin hive
```
2. Configure the environment

```bash
vim /etc/profile
```

Append at the end:

```bash
export HIVE_HOME=/home/hive
```
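The original only exports HIVE_HOME; you will usually also want Hive's binaries on your PATH so commands like `hive` and `schematool` resolve. A minimal sketch — the PATH line is my addition, not in the original:

```bash
# Assumption: Hive was moved to /home/hive in step 1
export HIVE_HOME=/home/hive
export PATH=$PATH:$HIVE_HOME/bin

# Reload the profile in the current shell
source /etc/profile
```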
3. Install MySQL and link the JDBC driver

```bash
sudo apt-get install mysql-server
sudo apt-get install libmysql-java
ln -s /usr/share/java/mysql-connector-java.jar $HIVE_HOME/lib/mysql-connector-java.jar
```
4. Set up the metastore database

```bash
$ mysql -u root -p
```

```sql
mysql> CREATE DATABASE metastore;
mysql> USE metastore;
-- The mysql client does not expand shell variables, so use the literal path
-- ($HIVE_HOME is /home/hive per step 1)
mysql> SOURCE /home/hive/scripts/metastore/upgrade/mysql/hive-schema-3.0.0.mysql.sql;
mysql> CREATE USER 'hive'@'%' IDENTIFIED BY 'hive';
mysql> GRANT ALL ON *.* TO 'hive'@'localhost' IDENTIFIED BY 'hive';
mysql> FLUSH PRIVILEGES;
```
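Note the mismatch above: the user is created as 'hive'@'%' but the grant targets 'hive'@'localhost' (which MySQL 5.x silently creates as a second account). If Hive ever connects to MySQL from a different host, the wildcard account needs privileges too; a hedged sketch:

```sql
-- Assumption: only needed when Hive connects from a host other than the MySQL server itself
mysql> GRANT ALL ON metastore.* TO 'hive'@'%' IDENTIFIED BY 'hive';
mysql> FLUSH PRIVILEGES;
```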
5. Configure the Hive environment (/home/hive/conf)

```bash
cp hive-env.sh.template hive-env.sh
vim hive-env.sh
```

Append:

```bash
export HADOOP_HOME=/home/hadoop
export HIVE_CONF_DIR=/home/hive/conf
```
Then create hive-site.xml from the template:

```bash
cp hive-default.xml.template hive-site.xml
vim hive-site.xml   # configure the warehouse paths and the MySQL connection
```
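The original does not show the exact hive-site.xml edits. Here is a minimal sketch of the properties that typically need changing for a MySQL-backed metastore; the host, database name, user, and password are assumptions taken from the earlier steps:

```xml
<!-- hive-site.xml: minimal MySQL metastore settings (values assumed from this walkthrough) -->
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://localhost:3306/metastore?useSSL=false</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hive</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>hive</value>
</property>
<property>
  <name>hive.metastore.warehouse.dir</name>
  <value>/user/hive/warehouse</value>
</property>
```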
6. Create the temporary directories on HDFS

```bash
$HADOOP_HOME/bin/hadoop fs -mkdir -p /tmp
$HADOOP_HOME/bin/hadoop fs -mkdir -p /user/hive/warehouse
$HADOOP_HOME/bin/hadoop fs -chmod g+w /tmp
$HADOOP_HOME/bin/hadoop fs -chmod g+w /user/hive/warehouse
```
7. Initialize the Hive schema

```bash
schematool -dbType mysql -initSchema
```
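To confirm the initialization took, schematool can also report the schema version it finds in MySQL (assuming $HIVE_HOME/bin is on the PATH):

```bash
# Optional sanity check: prints the metastore schema version recorded in the database
schematool -dbType mysql -info
```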
8. Start the metastore service (if it is not running, Hive fails with: `HiveException java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient`)

```bash
./hive --service metastore &
```
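For clients on other nodes to reach this metastore, hive-site.xml generally also needs hive.metastore.uris pointing at it. The host name comes from this cluster's naming, and port 9083 is the Thrift default; both are assumptions here:

```xml
<!-- Assumption: the metastore runs on had001 on the default Thrift port 9083 -->
<property>
  <name>hive.metastore.uris</name>
  <value>thrift://had001:9083</value>
</property>
```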
9. Enter Hive

```bash
$HIVE_HOME/bin/hive
```

Create a table:

```sql
hive (default)> CREATE TABLE IF NOT EXISTS test_table
              (col1 int COMMENT 'Integer Column',
               col2 string COMMENT 'String Column')
              COMMENT 'This is test table'
              ROW FORMAT DELIMITED
              FIELDS TERMINATED BY ','
              STORED AS TEXTFILE;
```
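Because the table is comma-delimited text, data can also be bulk-loaded from a local CSV instead of inserted row by row. The file path below is hypothetical:

```sql
-- /tmp/test_data.csv is a hypothetical file with lines like: 1,aaa
hive (default)> LOAD DATA LOCAL INPATH '/tmp/test_data.csv' INTO TABLE test_table;
```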
List the tables:

```sql
hive (default)> show tables;
tab_name
test_table
```
Insert a row:

```sql
hive (default)> insert into test_table values(1,'aaa');
MapReduce Jobs Launched:
Stage-Stage-1: Map: 1 Reduce: 1 Cumulative CPU: 5.54 sec HDFS Read: 15408 HDFS Write: 243 SUCCESS
Total MapReduce CPU Time Spent: 5 seconds 540 msec
OK
col1 col2
Time taken: 26.271 seconds
```

Query:

```sql
hive (default)> select * from test_table;
test_table.col1 test_table.col2
2 bbb
3 ccc
4 ddd
Time taken: 0.205 seconds, Fetched: 3 row(s)
```
10. jps on had001

```bash
root@had001:/home/hive# jps
6675 SecondaryNameNode
6426 NameNode
6908 ResourceManager
8382 Jps
```

11. jps on had002 and had003

```bash
root@had002:~# jps
3300 DataNode
3430 NodeManager
5610 Jps
```
Check whether the datanodes can reach had001:

```bash
root@had002:# /home/hadoop/bin/hdfs dfsadmin -report
root@had003:# /home/hadoop/bin/hdfs dfsadmin -report
```
When things are healthy, the data directory exists:

```bash
root@had002:~# tree /usr/local/hadoop/tmp
/usr/local/hadoop/tmp
├── dfs
│   └── data
│       ├── current
│       │   ├── BP-1834162669-172.17.252.52-1532682436448
│       │   │   ├── current
│       │   │   │   ├── finalized
```
12. Errors

Error 1:

```
Exception in thread "main" java.lang.RuntimeException: com.ctc.wstx.exc.WstxParsingException: Illegal character entity: expansion character (code 0x8 at [row,col,system-id]: [3213,96,"file:/home/appleyuchi/apache-hive-3.0.0-bin/conf/hive-site.xml"]
```

Fix: character 96 on line 3213 of /home/appleyuchi/apache-hive-3.0.0-bin/conf/hive-site.xml is an illegal character entity; comment out (or remove) that line and the error goes away.
Error 2:

```
could only be written to 0 of the 1 minReplication nodes
```

The cause was that had002 and had003 could not connect to had001.
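A quick way to check connectivity from a datanode; the NameNode RPC port 9000 is an assumption, so use whatever port fs.defaultFS specifies in core-site.xml:

```bash
# On had002/had003: verify the NameNode host resolves and its RPC port is reachable
ping -c 3 had001
nc -zv had001 9000   # port assumed from a typical fs.defaultFS like hdfs://had001:9000
```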