2019-08-09 NVIDIA DALI

An Introduction to the NVIDIA Data Loading Library

The NVIDIA Data Loading Library (DALI) is a portable, open source library for decoding and augmenting images and videos to accelerate deep learning applications. DALI reduces latency and training time, mitigating bottlenecks, by overlapping training and pre-processing. It provides a drop-in replacement for built-in data loaders and data iterators in popular deep learning frameworks, allowing easy integration or retargeting to different frameworks.

Training neural networks with images requires developers to first normalize those images. Moreover, images are often compressed to save on storage. Developers have therefore built multi-stage data processing pipelines that include loading, decoding, cropping, resizing, and many other augmentation operators. These data processing pipelines, which are currently executed on the CPU, have become a bottleneck, limiting overall throughput.

DALI is a high-performance alternative to built-in data loaders and data iterators. Developers can now run their data processing pipelines on the GPU, reducing the total time it takes to train a neural network. Data processing pipelines implemented using DALI are portable because they can easily be retargeted to TensorFlow, PyTorch and MXNet.

Key Features of DALI

Easy-to-use Python API
Transparently scales across multiple GPUs
Accelerates image classification (ResNet-50) and object detection (SSD) workloads
Flexible graphs let developers create custom pipelines
Supports multiple data formats - LMDB, RecordIO, TFRecord, COCO, JPEG, H.264 and HEVC
Developers can add custom image and video processing operators


The Purpose of DALI: Addressing the CPU Bottleneck

Training deep learning models with vast amounts of data is necessary to achieve accurate results. Data in the wild, or even prepared data sets, is usually not in a form that can be directly fed into a neural network. This is where NVIDIA DALI data preprocessing comes into play.

There are various reasons for that:

(1) Different storage formats
(2) Compression
(3) Data formats and sizes may be incompatible
(4) Limited amount of high-quality data

Addressing these issues requires your training pipeline to provide extensive data preprocessing capabilities, such as loading, decoding, decompression, data augmentation, format conversion, and resizing. You may have used the native implementations of these pre-processing steps in existing machine learning frameworks such as TensorFlow, PyTorch, MXNet, and others. However, this creates portability issues due to framework-specific data formats, the set of available transformations, and their implementations. Training in a truly portable fashion requires a data pipeline whose augmentations work the same way across frameworks.

Data preprocessing for deep learning workloads has garnered little attention until recently, eclipsed by the tremendous computational resources required for training complex models. As such, preprocessing tasks typically ran on the CPU due to their simplicity, flexibility, and the availability of libraries such as OpenCV or Pillow.

Recent advances in GPU architecture, introduced with NVIDIA Volta and Turing, have significantly increased GPU throughput in deep learning tasks. In particular, half-precision arithmetic and Tensor Cores accelerate certain types of FP16 matrix calculations useful for training DNNs. Dense multi-GPU systems like NVIDIA's DGX-1 and DGX-2 can train a model much faster than data can be supplied by the processing framework, leaving the GPUs starved for data.

Today's DL applications include complex, multi-stage data processing pipelines consisting of many serial operations. Relying on the CPU to handle these pipelines limits performance and scalability.

Using DALI

DALI offers a simple Python interface where you can implement a data processing pipeline in a few steps (a minimal sketch follows the list below):

  1. Select operators from the extensive list of supported operators
  2. Define the operation flow as a symbolic graph in an imperative way (as in most of the current deep learning frameworks)
  3. Build an operation pipeline
  4. Run the graph on demand
  5. Integrate with your target deep learning framework through a dedicated plugin
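
A minimal sketch of these steps, using the ops-based Python API that DALI shipped around the time of writing (operator names and arguments may differ in newer releases); the image directory, batch size, and other parameter values below are placeholders:

    from nvidia.dali.pipeline import Pipeline
    import nvidia.dali.ops as ops
    import nvidia.dali.types as types

    class SimplePipeline(Pipeline):
        def __init__(self, batch_size, num_threads, device_id, image_dir):
            super(SimplePipeline, self).__init__(batch_size, num_threads, device_id)
            # Step 1: select operators -- a file reader, a decoder and a resize
            self.input = ops.FileReader(file_root=image_dir, random_shuffle=True)
            # "mixed" decoder accepts encoded JPEGs on the CPU and outputs GPU images
            self.decode = ops.ImageDecoder(device="mixed", output_type=types.RGB)
            self.resize = ops.Resize(device="gpu", resize_x=224., resize_y=224.)

        # Step 2: define the operation flow as a graph, written imperatively
        def define_graph(self):
            jpegs, labels = self.input()
            images = self.decode(jpegs)
            images = self.resize(images)
            return images, labels

    # Steps 3 and 4: build the pipeline, then run the graph on demand
    pipe = SimplePipeline(batch_size=32, num_threads=4, device_id=0,
                          image_dir="/path/to/images")  # placeholder path
    pipe.build()
    images, labels = pipe.run()

For step 5, DALI provides framework plugins; for example, nvidia.dali.plugin.pytorch.DALIGenericIterator can wrap a built pipeline as a PyTorch-style iterator, and similar plugins exist for MXNet and TensorFlow.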

Let us now take a deep dive into the inner workings of DALI and how to use it.

How DALI Works Internally

DALI defines a data pre-processing pipeline as a dataflow graph, with each node representing a data processing operator. DALI has three types of operators:

1. CPU: accepts and produces data on the CPU
2. Mixed: accepts data on the CPU and produces output on the GPU
3. GPU: accepts and produces data on the GPU

Although DALI is developed mostly with GPUs in mind, it also provides a variety of CPU-operator variants. This enables utilizing available CPU cycles for use cases where the CPU/GPU ratio is high or network traffic completely consumes available GPU cycles. You should experiment with CPU/GPU operator placement to find the sweet spot.

For performance reasons, DALI only transfers data in the CPU -> Mixed -> GPU direction, as shown in figure 3.

Figure 3. DALI example pipeline
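
As a hedged sketch of operator placement (again using the ops-based API of that era; the class name and parameters here are illustrative), the pipeline below keeps decoding and resizing on the CPU and only moves the finished batch to the GPU, which also illustrates the CPU-to-GPU-only transfer rule:

    from nvidia.dali.pipeline import Pipeline
    import nvidia.dali.ops as ops
    import nvidia.dali.types as types

    class CpuThenGpuPipeline(Pipeline):
        """Decode and resize on the CPU, hand only the final batch to the GPU."""
        def __init__(self, batch_size, num_threads, device_id, image_dir):
            super(CpuThenGpuPipeline, self).__init__(batch_size, num_threads, device_id)
            self.input = ops.FileReader(file_root=image_dir)
            # CPU variants of the same operators used in the GPU pipeline above
            self.decode = ops.ImageDecoder(device="cpu", output_type=types.RGB)
            self.resize = ops.Resize(device="cpu", resize_x=224., resize_y=224.)

        def define_graph(self):
            jpegs, labels = self.input()
            images = self.resize(self.decode(jpegs))
            # Transfers only go CPU -> GPU; .gpu() moves the data node explicitly
            return images.gpu(), labels

Placing these operators on the CPU can pay off on machines with many CPU cores per GPU, while GPU placement usually wins when the GPUs have spare cycles; experimentation is the only way to find the sweet spot.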

Existing frameworks offer prefetching, which prepares the data needed for upcoming iterations before it is requested. DALI prefetches transparently and lets you define the prefetch queue length flexibly during pipeline construction, as shown in figure 4. This makes it straightforward to hide high variation in batch-to-batch processing time.

Figure 4. How data processing overlaps with training
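
A hedged sketch of setting the queue length at construction time; prefetch_queue_depth is a Pipeline constructor argument in the API of that era, and the class name, depth value, and data path below are illustrative assumptions:

    from nvidia.dali.pipeline import Pipeline
    import nvidia.dali.ops as ops
    import nvidia.dali.types as types

    class PrefetchingPipeline(Pipeline):
        def __init__(self, batch_size, num_threads, device_id, image_dir):
            # prefetch_queue_depth asks DALI to keep this many batches prepared
            # ahead of time, trading memory for smoother batch-to-batch timing.
            super(PrefetchingPipeline, self).__init__(
                batch_size, num_threads, device_id, prefetch_queue_depth=3)
            self.input = ops.FileReader(file_root=image_dir)
            self.decode = ops.ImageDecoder(device="mixed", output_type=types.RGB)

        def define_graph(self):
            jpegs, labels = self.input()
            return self.decode(jpegs), labels

A deeper queue costs more memory but better absorbs occasional slow batches, keeping the GPUs fed.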

DALI Performance

NVIDIA showcases DALI in its implementations of SSD and ResNet-50, since DALI was one of the contributing factors in its MLPerf benchmark success.

Figure 6 compares training performance with and without DALI for the RN50 network running on different GPU configurations:

Note that as the CPU core-to-GPU ratio becomes smaller (a DGX-1V has 5 CPU cores per GPU, while a DGX-2 has only 3), the performance improvement from DALI gets larger.