Hive Installation
Hive official site: http://hive.apache.org/
Documentation: https://cwiki.apache.org/confluence/display/Hive/GettingStarted
Downloads: http://archive.apache.org/dist/hive/
GitHub: https://github.com/apache/hive
Extract apache-hive-3.1.2-bin.tar.gz into the /opt/module/ directory
Rename the extracted apache-hive-3.1.2-bin directory to hive
Edit /etc/profile.d/my_env.sh and add the environment variables
tar -zxvf /opt/software/apache-hive-3.1.2-bin.tar.gz -C /opt/module/
mv /opt/module/apache-hive-3.1.2-bin /opt/module/hive
sudo vim /etc/profile.d/my_env.sh
Add the following:
#HIVE_HOME
export HIVE_HOME=/opt/module/hive
export PATH=$PATH:$HIVE_HOME/bin
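For the variables to take effect in the current shell, reload the profile and confirm that HIVE_HOME resolves (a quick sanity check, assuming the path above):
source /etc/profile.d/my_env.sh
echo $HIVE_HOME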
Replace the guava jar shipped with Hive
cp $HADOOP_HOME/share/hadoop/common/lib/guava-27.0-jre.jar $HIVE_HOME/lib/
rm $HIVE_HOME/lib/guava-19.0.jar
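An optional check: list the guava jars left in Hive's lib directory; after the copy and removal above, only guava-27.0-jre.jar should remain:
ls $HIVE_HOME/lib/ | grep guava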
Resolve the logging jar conflict
mv $HIVE_HOME/lib/log4j-slf4j-impl-2.10.0.jar $HIVE_HOME/lib/log4j-slf4j-impl-2.10.0.bak
Configure Hive metadata storage in MySQL
Copy the JDBC driver
cp /opt/software/mysql-connector-java-8.0.23.jar $HIVE_HOME/lib
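You can optionally confirm the driver landed in Hive's lib directory before moving on:
ls $HIVE_HOME/lib/ | grep mysql-connector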
Create the Spark configuration file in Hive's conf directory
vim /opt/module/hive/conf/spark-defaults.conf
Add the following (tasks will be executed with these parameters):
spark.master yarn
spark.eventLog.enabled true
spark.eventLog.dir hdfs://hadoop102:8020/spark-history
spark.executor.memory 1g
spark.driver.memory 1g
Create the following path on HDFS, used to store history logs.
hadoop fs -mkdir /spark-history
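Optionally verify the directory was created; it should show up under the HDFS root:
hadoop fs -ls / | grep spark-history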
Upload the pure Spark jars to HDFS
Note 1: The full Spark 3.0.0 distribution bundles Hive 2.3.7 by default, so using it directly causes compatibility problems with the installed Hive 3.1.2. The pure ("without hadoop") Spark jars contain no Hadoop or Hive dependencies and avoid these conflicts.
Note 2: Hive jobs are ultimately executed by Spark, and Spark resources are scheduled by YARN, so a job may be assigned to any node in the cluster. The Spark dependencies therefore need to be uploaded to an HDFS path that every node can reach.
Upload and extract spark-3.0.0-bin-without-hadoop.tgz
tar -zxvf /opt/software/spark-3.0.0-bin-without-hadoop.tgz
Upload the pure Spark jars to HDFS
hadoop fs -mkdir /spark-jars
hadoop fs -put spark-3.0.0-bin-without-hadoop/jars/* /spark-jars
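To confirm the upload, compare the number of jars on HDFS with the local jars directory (hadoop fs -ls prints one extra "Found N items" header line, so the HDFS count is one higher):
hadoop fs -ls /spark-jars | wc -l
ls spark-3.0.0-bin-without-hadoop/jars/ | wc -l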
Configure the Metastore to use MySQL
Create a new hive-site.xml file in the $HIVE_HOME/conf directory
vim $HIVE_HOME/conf/hive-site.xml
Add the following:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>hive.exec.parallel.thread.number</name>
<value>8</value>
</property>
<property>
<name>hive.spark.client.connect.timeout</name>
<value>1000000ms</value>
</property>
<property>
<name>hive.spark.client.server.connect.timeout</name>
<value>1000000000ms</value>
</property>
<property>
<name>hive.spark.client.future.timeout</name>
<value>1000000000ms</value>
</property>
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://hadoop101:3306/metastore?useSSL=false</value>
</property>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.cj.jdbc.Driver</value>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>root</value>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>root</value>
</property>
<!-- Location of the Hive data warehouse on HDFS -->
<property>
<name>hive.metastore.warehouse.dir</name>
<value>/user/hive/warehouse</value>
</property>
<property>
<name>hive.metastore.schema.verification</name>
<value>false</value>
</property>
<property>
<name>datanucleus.schema.autoCreateAll</name>
<value>true</value>
</property>
<property>
<name>hive.metastore.uris</name>
<value>thrift://hadoop101:9083</value>
</property>
<property>
<name>hive.server2.thrift.port</name>
<value>10000</value>
</property>
<property>
<name>hive.server2.thrift.bind.host</name>
<value>hadoop101</value>
</property>
<property>
<name>hive.metastore.event.db.notification.api.auth</name>
<value>false</value>
</property>
<!-- Location of the Spark dependencies (note: port 8020 must match the NameNode port) -->
<property>
<name>spark.yarn.jars</name>
<value>hdfs://hadoop102:8020/spark-jars/*</value>
</property>
<property>
<name>hive.execution.engine</name>
<value>spark</value>
</property>
</configuration>
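If xmllint happens to be installed, it can catch XML syntax mistakes before Hive tries to parse the file (purely optional; any XML validator works):
xmllint --noout $HIVE_HOME/conf/hive-site.xml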
Configure Hive runtime logging
cd /opt/module/hive/conf/
mv hive-log4j2.properties.template hive-log4j2.properties
vim hive-log4j2.properties
Change the log directory:
hive.log.dir=/opt/module/hive/logs
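Once Hive has been started (see below), the log file should appear under this directory and can be followed when troubleshooting:
tail -f /opt/module/hive/logs/hive.log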
Start Hive
Create the Hive metastore database in MySQL: create database metastore;
Initialize the Hive metastore: bin/schematool -initSchema -dbType mysql -verbose
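If initialization succeeded, the metastore database should now contain a few dozen tables (TBLS, DBS, COLUMNS_V2, and so on). A quick check, assuming the root/root credentials configured in hive-site.xml and MySQL reachable from this node:
mysql -uroot -proot -e "use metastore; show tables;"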
Start metastore and hiveserver2
Write a Hive service start/stop script: vim $HIVE_HOME/bin/hive-service.sh
Add the following:
#!/bin/bash
HIVE_LOG_DIR=$HIVE_HOME/logs
META_PID=$HIVE_HOME/tmp/meta.pid
SERVER_PID=$HIVE_HOME/tmp/server.pid
mkdir -p $HIVE_HOME/tmp
mkdir -p $HIVE_LOG_DIR
function hive_start()
{
nohup hive --service metastore >$HIVE_LOG_DIR/metastore.log 2>&1 &
echo $! > $META_PID
sleep 8
nohup hive --service hiveserver2 >$HIVE_LOG_DIR/hiveserver2.log 2>&1 &
echo $! > $SERVER_PID
}
function hive_stop()
{
if [ -f $META_PID ]
then
cat $META_PID | xargs kill -9
rm $META_PID
else
echo "Meta PID文件丢失,请手动关闭服务"
fi
if [ -f $SERVER_PID ]
then
cat $SERVER_PID | xargs kill -9
rm $SERVER_PID
else
echo "Server2 PID文件丢失,请手动关闭服务"
fi
}
case $1 in
"start")
hive_start
;;
"stop")
hive_stop
;;
"restart")
hive_stop
sleep 2
hive_start
;;
*)
echo Invalid Args!
echo 'Usage: '$(basename $0)' start|stop|restart'
;;
esac
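Remember to make the script executable before using it (it relies on HIVE_HOME being set in the environment, as configured above):
chmod +x $HIVE_HOME/bin/hive-service.sh
hive-service.sh start    # or stop / restart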
Because Hive's execution engine is set to Spark, Spark must be started first.
/opt/module/spark-yarn/sbin/start-master.sh
If Spark has worker (slave) nodes, run /opt/module/spark-yarn/sbin/start-all.sh instead.
Start the Hive services
bin/hive-service.sh start
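Both services run as RunJar processes and take a moment to come up; you can confirm they are listening before connecting (ss may be netstat -nltp on older systems):
jps | grep RunJar
ss -nltp | grep -E '9083|10000'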
Connect to Hive with DataGrip
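DataGrip (or beeline) connects through HiveServer2. Based on the hive-site.xml above, the JDBC URL is jdbc:hive2://hadoop101:10000; the user name is arbitrary here but determines which HDFS user the queries run as. A quick command-line test with beeline:
beeline -u jdbc:hive2://hadoop101:10000 -n root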
Create a test table
create table student(id int, name string);
Insert some data
insert into table student values(1001,"zhangsan");
If you see errors like the following:
Permission denied: user=anonymous, access=WRITE, inode="/user/hive/warehouse/
Permission denied: user=anonymous, access=EXECUTE, inode="/tmp/hadoop-yarn"
Run:
hdfs dfs -chmod -R 777 /user/hive/warehouse/
hdfs dfs -chmod -R 777 /tmp
select * from student;
select id,count(*) from student group by id;
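The group-by query should trigger a Hive-on-Spark job on YARN; while it runs you can watch the application from the command line (or in the YARN web UI):
yarn application -list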
Reference:
https://blog.csdn.net/weixin_43923463/article/details/123736847