Hive Execution Plan

1. Execution order of a Hive SQL statement

from ... where ... select ... group by ... having ... order by (this is also the order in which the corresponding operators appear in the explain plan below; an annotated example follows).
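As an illustration only (the table and column names below are hypothetical), each clause can be read against the operator that typically implements it in a MapReduce plan:

```sql
-- Hypothetical query, annotated with the operator that usually implements
-- each clause in the MapReduce execution plan.
SELECT   city, SUM(cnt) AS cnt   -- Select Operator / Group By Operator
FROM     tb_log                  -- TableScan
WHERE    day = '2016-05-28'      -- Filter Operator (map side)
GROUP BY city                    -- Group By Operator (map-side hash + reduce-side merge)
HAVING   SUM(cnt) > 100          -- Filter Operator (reduce side)
ORDER BY cnt DESC;               -- usually an additional stage for the global sort
```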

2. Viewing the execution plan with explain

explain select city, ad_type, device, sum(cnt) as cnt
from tb_pmp_raw_log_basic_analysis
where day = '2016-05-28' and type = 0 and media = 'sohu'
  and (deal_id = '' or deal_id = '-' or deal_id is NULL)
group by city, ad_type, device;

The execution plan it prints is shown below:

STAGE DEPENDENCIES:

  Stage-1 is a root stage

  Stage-0 is a root stage

STAGE PLANS:

  Stage: Stage-1

    Map Reduce

      Map Operator Tree:

          TableScan

            alias: tb_pmp_raw_log_basic_analysis

            Statistics: Num rows: 8195357 Data size: 580058024 Basic stats: COMPLETE Column stats: NONE

            Filter Operator

              predicate: (((deal_id = '') or (deal_id = '-')) or deal_id is null) (type: boolean)

              Statistics: Num rows: 8195357 Data size: 580058024 Basic stats: COMPLETE Column stats: NONE

              Select Operator

                expressions: city (type: string), ad_type (type: string), device (type: string), cnt (type: bigint)

                outputColumnNames: city, ad_type, device, cnt

                Statistics: Num rows: 8195357 Data size: 580058024 Basic stats: COMPLETE Column stats: NONE

                Group By Operator

                  aggregations: sum(cnt)

                  keys: city (type: string), ad_type (type: string), device (type: string)

                  mode: hash

                  outputColumnNames: _col0, _col1, _col2, _col3

                  Statistics: Num rows: 8195357 Data size: 580058024 Basic stats: COMPLETE Column stats: NONE

                  Reduce Output Operator

                    key expressions: _col0 (type: string), _col1 (type: string), _col2 (type: string)

                    sort order: +++

                    Map-reduce partition columns: _col0 (type: string), _col1 (type: string), _col2 (type: string)

                    Statistics: Num rows: 8195357 Data size: 580058024 Basic stats: COMPLETE Column stats: NONE

                    value expressions: _col3 (type: bigint)

      Reduce Operator Tree:

        Group By Operator

          aggregations: sum(VALUE._col0)

          keys: KEY._col0 (type: string), KEY._col1 (type: string), KEY._col2 (type: string)

          mode: mergepartial

          outputColumnNames: _col0, _col1, _col2, _col3

          Statistics: Num rows: 4097678 Data size: 290028976 Basic stats: COMPLETE Column stats: NONE

          Select Operator

            expressions: _col0 (type: string), _col1 (type: string), _col2 (type: string), _col3 (type: bigint)

            outputColumnNames: _col0, _col1, _col2, _col3

            Statistics: Num rows: 4097678 Data size: 290028976 Basic stats: COMPLETE Column stats: NONE

            File Output Operator

              compressed: false

              Statistics: Num rows: 4097678 Data size: 290028976 Basic stats: COMPLETE Column stats: NONE

              table:

                  input format: org.apache.hadoop.mapred.TextInputFormat

                  output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat

                  serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe

  Stage: Stage-0

    Fetch Operator

      limit: -1

**Stage-1, map phase (Map Operator Tree)**

        TableScan: loads the table named in the from clause; the description includes the estimated number of rows, the data size, and other statistics.

        Filter Operator: filters the data according to the where conditions; the description shows the concrete predicate plus the estimated row count and size.

        Select Operator: projects the needed columns; the description lists the column names and types, the output columns, and size estimates.

        Group By Operator: performs the grouping; aggregations lists the functions to compute after grouping, keys lists the grouping columns, and outputColumnNames gives the output column names. Note that columns get fixed default aliases such as _col0.

        Reduce Output Operator: the map-side local reduce; it performs partial aggregation locally and then shuffles each row to the corresponding reducer, partitioned by the key columns (a hedged sketch of this map-side aggregation follows this list).
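The hash-mode Group By Operator on the map side corresponds to Hive's map-side (partial) aggregation. As a minimal sketch, assuming the standard hive.map.aggr setting (default true) and that exact plan output varies between Hive versions, you could rerun the same explain with it disabled and compare the map operator tree:

```sql
-- Sketch: disable map-side aggregation and re-run the explain; the map-side
-- Group By Operator should disappear and all aggregation moves to the reducers.
-- (hive.map.aggr is a standard Hive setting; plan details vary by version.)
SET hive.map.aggr=false;
EXPLAIN
SELECT city, ad_type, device, SUM(cnt) AS cnt
FROM tb_pmp_raw_log_basic_analysis
WHERE day = '2016-05-28' AND type = 0 AND media = 'sohu'
  AND (deal_id = '' OR deal_id = '-' OR deal_id IS NULL)
GROUP BY city, ad_type, device;
```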

**Stage-1, reduce phase (Reduce Operator Tree)**

        Group By Operator: the overall grouping and aggregation, merging on the reduce side the partial results computed on the map side; the description is similar to the map-side operator. mode: mergepartial means the map-side partial results are being merged, whereas the map side used hash mode for its grouping.

        Select Operator: the final column projection used to produce the output.

        File Output Operator: writes the result to a temporary file; the description covers the compression setting and the input/output file formats and serde.

        Stage-0 is a Fetch stage with nothing else to do here (limit: -1 means no limit); this is where a limit 100 in the query would take effect, as in the sketch below.
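As a hedged illustration of that last point (exact explain output differs between Hive versions): adding limit 100 to the same query should make the Stage-0 Fetch Operator report limit: 100 instead of -1, and a Limit operator typically appears at the end of Stage-1 as well.

```sql
-- Sketch: the same query with LIMIT 100; Stage-0's Fetch Operator would then
-- carry limit: 100, and Stage-1 typically gains a Limit operator after the
-- reduce-side Select Operator.
EXPLAIN
SELECT city, ad_type, device, SUM(cnt) AS cnt
FROM tb_pmp_raw_log_basic_analysis
WHERE day = '2016-05-28' AND type = 0 AND media = 'sohu'
  AND (deal_id = '' OR deal_id = '-' OR deal_id IS NULL)
GROUP BY city, ad_type, device
LIMIT 100;
```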
