Environment Overview
Ubuntu 20.04, single-node deployment, JDK 8; the applications involved are listed below. Hadoop and Hive are both running normally. This article walks through deploying Byzer (白泽) to Yarn in client mode and reading and writing Hive data. All commands are executed as the hadoop user.
Application | Version | Install Directory | Notes |
---|---|---|---|
Apache Hadoop | 3.2.2 | /work/server/hadoop-3.2.2/ | Pseudo-distributed mode |
Apache Hive | 3.1.1 | /work/server/hive | |
MySQL | 5.7.35 | Docker container | Hive metastore DB; Byzer Notebook DB |
Apache Spark | 3.1.1-bin-hadoop3.2 | /work/spark | |
Byzer-lang | 3.0-2.3.0-SNAPSHOT | /work/mlsql-engine_3.0-2.3.0-SNAPSHOT | Byzer engine |
Byzer-notebook | 1.0.1-SNAPSHOT | /work/notebook | Byzer Notebook |
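Before going further, it is worth confirming that the versions on the machine match the table above. A minimal sanity check, assuming the JDK, Hadoop, and Hive binaries are already on the hadoop user's PATH:
# Confirm the versions match the table above
java -version
hadoop version
hive --version
# SPARK_HOME should point at /work/spark
echo $SPARK_HOME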
Create Hive Test Data
# Start the Hive CLI
hive
# Create a test database, a test table, and one row of test data
CREATE DATABASE IF NOT EXISTS zjc_test;
CREATE TABLE IF NOT EXISTS zjc_test.zjc_11 (c1 INT);
INSERT OVERWRITE TABLE zjc_test.zjc_11 SELECT 1;
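Optionally, read the row back from the shell to confirm it landed (a quick check, assuming hive is on PATH):
# Verify the test row
hive -e "SELECT * FROM zjc_test.zjc_11;"
# Expected output: 1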
Spark Configuration
Place the Hive configuration under $SPARK_HOME/conf:
cd $SPARK_HOME/conf
# Create a symlink to the Hive configuration
ln -s /work/server/hive/conf/hive-site.xml hive-site.xml
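To confirm the link resolves correctly (optional):
# The symlink should point back at the Hive config
ls -l $SPARK_HOME/conf/hive-site.xml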
Configure the Hive Metastore
Spark 3.1.1 is built against Hive 2.3.7 by default, which does not match our environment, so the metastore version and jars are configured manually.
# Edit the Spark configuration
cp spark-defaults.conf.template spark-defaults.conf
vi spark-defaults.conf
spark.sql.hive.metastore.version=3.1.2
spark.sql.hive.metastore.jars=path
spark.sql.hive.metastore.jars.path=file:///work/server/hive/lib/hive-standalone-metastore-3.1.2.jar,file:///work/server/hive/lib/hive-exec-3.1.2.jar,file:///work/server/hive/lib/commons-logging-1.0.4.jar,file:///work/server/hive/lib/commons-io-2.4.jar,file:///work/server/hive/lib/javax.servlet-api-3.1.0.jar,file:///work/server/hive/lib/calcite-core-1.16.0.jar,file:///work/server/hive/lib/commons-codec-1.7.jar,file:///work/server/hive/lib/libfb303-0.9.3.jar,file:///work/server/hive/lib/metrics-core-3.1.0.jar,file:///work/server/hive/lib/datanucleus-core-4.1.17.jar,file:///work/server/hive/lib/datanucleus-api-jdo-4.2.4.jar,file:///work/server/hive/lib/javax.jdo-3.2.0-m3.jar,file:///work/server/hive/lib/datanucleus-rdbms-4.1.19.jar,file:///work/server/hive/lib/HikariCP-2.6.1.jar,file:///work/server/hive/lib/mysql-connector-java-5.1.48.jar,file:///work/spark/jars/commons-collections-3.2.2.jar,file:///work/server/hive/lib/antlr-runtime-3.5.2.jar,file:///work/server/hive/lib/jackson-core-2.9.5.jar,file:///work/server/hive/lib/jackson-annotations-2.9.5.jar,file:///work/server/hive/lib/jackson-databind-2.9.5.jar,file:///work/server/hive/lib/jackson-mapper-asl-1.9.13.jar,file:///work/server/hive/lib/jackson-core-asl-1.9.13.jar
Note: the list above is the Hive 3.1.2 standalone metastore jar plus its dependencies, copied from the Hive installation.
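With spark.sql.hive.metastore.jars=path, Spark loads only the jars listed in spark.sql.hive.metastore.jars.path, so every metastore dependency must be enumerated explicitly. If you prefer not to type the list by hand, a sketch like the following can assemble it; the jar names below are examples only and must be extended to the full set shown above for your Hive build:
# Sketch: assemble the comma-separated value for spark.sql.hive.metastore.jars.path
# The jar names here are examples; extend the list to match the one above.
HIVE_LIB=/work/server/hive/lib
REQUIRED_JARS="hive-standalone-metastore-3.1.2.jar hive-exec-3.1.2.jar libfb303-0.9.3.jar"
LIST=""
for j in $REQUIRED_JARS; do
  LIST="${LIST:+${LIST},}file://${HIVE_LIB}/${j}"
done
echo "spark.sql.hive.metastore.jars.path=${LIST}"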
Verify that Spark can access Hive tables
# Start the spark-sql CLI
spark-sql
# Run a query
show databases;
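To go one step further and confirm the test data itself is readable through Spark's Hive support, a one-shot query from the shell works as well:
# Read the test row through Spark
$SPARK_HOME/bin/spark-sql -e "SELECT * FROM zjc_test.zjc_11;"
# Expected output: 1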
Start Byzer-lang
First, create the Delta directory on HDFS:
hdfs dfs -mkdir -p /work/data/mlsql
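This path is the one passed to -streaming.datalake.path in the startup script below, where Byzer-lang keeps its built-in Delta tables. A quick check that the directory now exists:
# Confirm the directory was created on HDFS
hdfs dfs -ls /work/data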
With Hadoop, Spark, and Hive now wired together, Byzer-lang can be started using the following script.
#!/bin/bash
set -e
set -o pipefail
MLSQL_HOME=/work/mlsql-engine_3.0-2.3.0-SNAPSHOT
# Comma-separated list of every jar under libs/, shipped to the executors via --jars
JARS=$(echo ${MLSQL_HOME}/libs/*.jar | tr ' ' ',')
# The Byzer-lang main jar (streamingpro-mlsql)
MAIN_JAR=$(ls ${MLSQL_HOME}/libs | grep 'streamingpro-mlsql')
export DRIVER_MEMORY=${DRIVER_MEMORY:-1g}
echo "##############################"
echo "Run with spark : $SPARK_HOME"
echo "With DRIVER_MEMORY=${DRIVER_MEMORY:-1g}"
echo "JARS: ${JARS}"
echo "MAIN_JAR: ${MLSQL_HOME}/libs/${MAIN_JAR}"
echo "##############################"
# Submit Byzer-lang to Yarn in client mode; the jdwp option below opens a remote-debug port on 8991
nohup $SPARK_HOME/bin/spark-submit --class streaming.core.StreamingApp \
--driver-memory "${DRIVER_MEMORY}" \
--jars "${JARS}" \
--master yarn \
--deploy-mode client \
--name mlsql \
--conf "spark.executor.memory=1024m" \
--conf "spark.executor.instances=1" \
--conf "spark.sql.hive.thriftServer.singleSession=true" \
--conf "spark.kryoserializer.buffer=256k" \
--conf "spark.kryoserializer.buffer.max=64m" \
--conf "spark.serializer=org.apache.spark.serializer.KryoSerializer" \
--conf "spark.scheduler.mode=FAIR" \
--conf "spark.driver.extraJavaOptions=-agentlib:jdwp=transport=dt_socket,address=8991,server=y,suspend=n" \
"${MLSQL_HOME}/libs/${MAIN_JAR}" \
-streaming.name mlsql \
-streaming.platform spark \
-streaming.rest true \
-streaming.driver.port 9004 \
-streaming.spark.service true \
-streaming.thrift false \
-streaming.enableHiveSupport true \
-streaming.datalake.path /work/data/mlsql \
> /work/logs/mlsql-3.0-2.3.0-SNAPSHOT.log 2>&1 &
Once the script runs, the following lines in the Byzer-lang log indicate a successful start:
2022-02-27 23:34:46,573 INFO yarn.Client:
     client token: N/A
     diagnostics: N/A
     ApplicationMaster host: 192.168.50.254
     ApplicationMaster RPC port: -1
     queue: root.hadoop
     start time: 1645976079262
     final status: UNDEFINED
     tracking URL: http://localhost:8088/proxy/application_1645976014642_0001/
     user: hadoop
2022-02-27 23:34:46,574 INFO cluster.YarnClientSchedulerBackend: Application application_1645976014642_0001 has started running.
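Once the engine is up, the REST interface (enabled by -streaming.rest true on the port set by -streaming.driver.port, 9004 here) can be smoke-tested directly. The /run/script endpoint and its sql/owner parameters follow the public Byzer-lang REST API; treat this as a sketch and adjust if your build differs:
# Smoke-test the Byzer-lang REST API on the driver port configured above
curl -XPOST http://127.0.0.1:9004/run/script \
  --data-urlencode 'sql=select 1 as a;' \
  --data-urlencode 'owner=hadoop'
# A JSON payload containing "a":1 indicates the engine is serving requests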
Next, start Byzer Notebook; its configuration and startup script are not covered here. After logging in to the Notebook, the Hive table zjc_11 should appear in the Data Catalog.
Read/Write Test
# Load the test data created earlier
load hive.`zjc_test.zjc_11` as hive_zjc_11;
# Write to a Hive table
select 2 as c1 as new_data;
save overwrite new_data as hive.`zjc_test.zjc_12`;
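To confirm the save actually wrote to Hive, the new table can be read back from the Hive CLI:
# Read back the table written by Byzer-lang
hive -e "SELECT * FROM zjc_test.zjc_12;"
# Expected output: 2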