SparkSQL: Installing and Configuring the Hive Data Warehouse

一、Installation Overview

I plan to use the SparkSQL component to read data from Hive. Building on the previous three articles, I already have Hadoop, Spark, and MySQL installed, and all three are indispensable for reading data on HDFS through SparkSQL. To install the Hive data warehouse, I still need to download the Hive distribution and the MySQL driver.

二、Downloading the MySQL Driver

  1. Download page: https://downloads.mysql.com/archives/c-j/
    My MySQL version is 5.7.17, so any 5.x Connector/J from the archive works.
  2. Upload the downloaded driver archive to the hadoop user's home directory with xftp or xsecure.
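    If you prefer the command line to a GUI transfer tool, scp does the same job (a minimal sketch using this article's host and file names; adjust both to your own environment):
# run on the machine where the driver archive was downloaded;
# 172.16.100.26 is the server used throughout this article
scp mysql-connector-java-5.1.49.tar.gz hadoop@172.16.100.26:~/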

三、Downloading Hive

  1. Which Hive release to pick depends on your Hadoop version; check the compatibility notes on the official page: https://hive.apache.org/downloads.html
  2. Download page: https://dlcdn.apache.org/hive/; I chose version 2.3.9.
  3. Upload the installation package to the hadoop user's home directory with xftp or xsecure.

四、Installing Hive

  1. Extract Hive into the hadoop user's home directory
[hadoop@hadoop01 ~]$ tar -zxvf apache-hive-2.3.9-bin.tar.gz
  2. Extract the MySQL connector archive into the hadoop user's home directory
[hadoop@hadoop01 ~]$ tar -zxvf mysql-connector-java-5.1.49.tar.gz
  3. Copy the MySQL driver jar into Hive's lib directory
[hadoop@hadoop01 ~]$ cp mysql-connector-java-5.1.49/mysql-connector-java-5.1.49.jar apache-hive-2.3.9-bin/lib/
  4. Edit hive-env.sh
[hadoop@hadoop01 ~]$ echo $HADOOP_HOME
/home/hadoop/hadoop-2.10.1
[hadoop@hadoop01 ~]$ cd apache-hive-2.3.9-bin/conf/
[hadoop@hadoop01 conf]$ cp hive-env.sh.template hive-env.sh
[hadoop@hadoop01 conf]$ vim hive-env.sh

HADOOP_HOME=/home/hadoop/hadoop-2.10.1
  5. Create hive-site.xml. hive-default.xml.template is the sample for hive-site.xml, but it ships with far too many default settings to edit comfortably, so I created a new file containing only what I need (the MySQL account it assumes is covered right after the config):
[hadoop@hadoop01 conf]$ vim hive-site.xml

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>javax.jdo.option.ConnectionURL</name>
        <value>jdbc:mysql://172.16.100.26:3306/hive?createDatabaseIfNotExist=true</value>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionDriverName</name>
        <value>com.mysql.jdbc.Driver</value>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionUserName</name>
        <value>hive</value>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionPassword</name>
        <value>123456</value>
    </property>
    <property>
        <name>hive.metastore.schema.verification</name>
        <value>false</value>
    </property>
    <property>
        <name>datanucleus.schema.autoCreateAll</name>
        <value>true</value>
    </property>
    <property>
        <name>hive.metastore.warehouse.dir</name>
        <value>/usr/local/warehouse</value>
    </property>
    <property>
         <name>hive.metastore.uris</name>
         <value>thrift://172.16.100.26:9083</value>
    </property>
</configuration>
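    The JDBC settings above assume a MySQL account named hive with password 123456 that is allowed to create and own the hive database. If that account does not exist yet, something along these lines, run as the MySQL root user, sets it up (a sketch; the '%' host wildcard and the simple password are only acceptable because this is a development box):
[hadoop@hadoop01 ~]$ mysql -uroot -p
mysql> CREATE USER 'hive'@'%' IDENTIFIED BY '123456';
mysql> GRANT ALL PRIVILEGES ON hive.* TO 'hive'@'%';
mysql> FLUSH PRIVILEGES;
mysql> exit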
  6. Add Hive's environment variables to .bashrc (a quick check follows the commands)
[hadoop@hadoop01 conf]$ vim ~/.bashrc

export JAVA_HOME=/usr/local/java
export HADOOP_HOME=/home/hadoop/hadoop-2.10.1
export SCALA_HOME=/home/hadoop/scala-2.12.2
export SPARK_HOME=/home/hadoop/spark-3.0.3-bin-hadoop2.7
export HIVE_HOME=/home/hadoop/apache-hive-2.3.9-bin
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$SCALA_HOME/bin:$SPARK_HOME/bin:$HIVE_HOME/bin

[hadoop@hadoop01 conf]$ source ~/.bashrc
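    An easy way to confirm that the new PATH entry took effect (optional):
[hadoop@hadoop01 conf]$ hive --version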
  7. Initialize Hive's metastore schema (an optional schematool check follows the output)
[hadoop@hadoop01 lib]$ cd ../bin
[hadoop@hadoop01 bin]$ ./schematool -initSchema -dbType mysql
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/hadoop/apache-hive-2.3.9-bin/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hadoop/hadoop-2.10.1/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Metastore connection URL:    jdbc:mysql://172.16.100.26:3306/hive?createDatabaseIfNotExist=true
Metastore Connection Driver :    com.mysql.jdbc.Driver
Metastore connection User:   root
Mon Dec 06 09:17:23 CST 2021 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.
Starting metastore schema initialization to 2.3.0
Initialization script hive-schema-2.3.0.mysql.sql
Mon Dec 06 09:17:23 CST 2021 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.
Initialization script completed
Mon Dec 06 09:17:24 CST 2021 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.
schemaTool completed
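    As an optional cross-check, schematool can also report the schema version it finds in MySQL:
[hadoop@hadoop01 bin]$ ./schematool -dbType mysql -info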
  8. Log in to MySQL and check that the Hive metastore database was created according to hive-site.xml
[hadoop@hadoop01 lib]$ mysql -uhive -p123456
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 12
Server version: 5.7.36 MySQL Community Server (GPL)

Copyright (c) 2000, 2021, Oracle and/or its affiliates.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show databases;
mysql> use hive;
mysql> show tables;
+---------------------------+
| Tables_in_hive            |
+---------------------------+
| AUX_TABLE                 |
| BUCKETING_COLS            |
| CDS                       |
......
| TYPES                     |
| TYPE_FIELDS               |
| VERSION                   |
| WRITE_SET                 |
+---------------------------+
57 rows in set (0.01 sec)
  9. Start the Hadoop services (an optional warehouse-directory step follows)
mysql> exit
Bye
[hadoop@hadoop01 lib]$ cd $HADOOP_HOME/sbin
[hadoop@hadoop01 sbin]$ ./stop-all.sh
[hadoop@hadoop01 sbin]$ ./start-dfs.sh
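    Hive normally creates its warehouse directory on HDFS by itself, but if it later complains about a missing or unwritable path, the directory configured in hive-site.xml can be created by hand (optional):
[hadoop@hadoop01 sbin]$ hdfs dfs -mkdir -p /usr/local/warehouse
[hadoop@hadoop01 sbin]$ hdfs dfs -chmod g+w /usr/local/warehouse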
  10. Start the HiveServer2 service (a beeline connectivity check follows the command)
[hadoop@hadoop01 sbin]$ nohup hive --service hiveserver2 &
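    HiveServer2 takes a moment to come up; once it has, beeline (bundled with Hive) can confirm it is reachable over JDBC on the default port 10000 (a quick sanity check; type !quit to leave the beeline prompt):
[hadoop@hadoop01 sbin]$ beeline -u jdbc:hive2://localhost:10000 -n hadoop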
  11. Enter the Hive CLI
[hadoop@hadoop01 sbin]$ hive
which: no hbase in (/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/usr/local/java/jdk1.8.0_311/bin:/usr/local/mysql/bin:/root/bin:/home/hadoop/hadoop-2.10.1/bin:/home/hadoop/hadoop-2.10.1/sbin:/home/hadoop/scala-2.12.2/bin:/home/hadoop/spark-3.0.3-bin-hadoop2.7/bin:/home/hadoop/hadoop-2.10.1/bin:/home/hadoop/hadoop-2.10.1/sbin:/home/hadoop/scala-2.12.2/bin:/home/hadoop/spark-3.0.3-bin-hadoop2.7/bin:/home/hadoop/apache-hive-2.3.9-bin/bin)
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/hadoop/apache-hive-2.3.9-bin/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hadoop/hadoop-2.10.1/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]

Logging initialized using configuration in jar:file:/home/hadoop/apache-hive-2.3.9-bin/lib/hive-common-2.3.9.jar!/hive-log4j2.properties Async: true
Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
hive>
  12. Test the Hive installation (an optional test table is sketched after the output)
hive> create database test;
OK
Time taken: 0.19 seconds
hive> show databases;
OK
default
test
Time taken: 0.017 seconds, Fetched: 2 row(s)
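    Optionally, create a small table now so there is something concrete to query from SparkSQL later (t1 and its rows are purely illustrative, not part of the original setup):
hive> use test;
hive> create table t1(id int, name string);
hive> insert into t1 values (1, 'spark'), (2, 'hive');
hive> select * from t1;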
  13. Verify in the HDFS web UI: http://172.16.100.26:50070/explorer.html#/usr/local/warehouse
  14. Start the metastore service, which Spark uses to talk to Hive over the Thrift protocol (a port check follows the command)
hive> exit
[hadoop@hadoop01 sbin]$ nohup hive --service metastore &
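    To confirm the metastore is up, check that something is listening on the Thrift port configured in hive-site.xml (optional; both services started with nohup should show up as RunJar processes in jps):
[hadoop@hadoop01 sbin]$ jps
[hadoop@hadoop01 sbin]$ ss -lntp | grep 9083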
  15. Configure Spark so it can read the Hive databases: copy hive-site.xml into Spark's conf directory
[hadoop@hadoop01 sbin]$ cd $HIVE_HOME/conf
[hadoop@hadoop01 conf]$ cp hive-site.xml $SPARK_HOME/conf
  16. Use the spark-sql interactive shell to check that the Hive databases and tables are accessible (a spark-shell variant follows the output)
[hadoop@hadoop01 conf]$ cd $SPARK_HOME/bin
[hadoop@hadoop01 bin]$ ./spark-sql 
...
spark-sql> show databases;
21/12/06 10:03:38 INFO CodeGenerator: Code generated in 154.101485 ms
21/12/06 10:03:38 INFO CodeGenerator: Code generated in 5.977354 ms
default
test
Time taken: 1.688 seconds, Fetched 2 row(s)
21/12/06 10:03:38 INFO SparkSQLCLIDriver: Time taken: 1.688 seconds, Fetched 2 row(s)
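    The same data is reachable from ordinary Spark code as well: spark-shell should pick up the copied hive-site.xml and create its session with Hive support in this pre-built distribution. A quick sketch (test.t1 is the optional table suggested in step 12; substitute any table you actually have):
[hadoop@hadoop01 bin]$ ./spark-shell
scala> spark.sql("show databases").show()
scala> spark.sql("select * from test.t1").show()
scala> :quit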

五、Problems I Ran Into

  1. While working on hive-site.xml I had already initialized the metastore schema in MySQL once, and then decided to change the database settings. After the change, running ./schematool -initSchema -dbType mysql again failed.
    Solution: as the MySQL root user, drop the database created by the first initialization, then rerun the command (see the sketch below).
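    A minimal sketch of that clean-up, assuming the metastore database is named hive as in the configuration above:
[hadoop@hadoop01 ~]$ mysql -uroot -p
mysql> DROP DATABASE hive;
mysql> exit
[hadoop@hadoop01 ~]$ cd $HIVE_HOME/bin
[hadoop@hadoop01 bin]$ ./schematool -initSchema -dbType mysql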
  2. Logging in to MySQL with the password written out on the command line triggers a warning, which is safe to ignore:
    mysql: [Warning] Using a password on the command line interface can be insecure.
    The proper way is to let the client prompt for the password:
[hadoop@hadoop01 ~]$ mysql -uhive -p
Enter password: 
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 31
Server version: 5.7.36 MySQL Community Server (GPL)

Copyright (c) 2000, 2021, Oracle and/or its affiliates.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> 

Still, it's harmless on a development box.
