Preface
When a deep learning model moves from training to deployment and inference, its inference speed and performance become the main concern. Every mainstream DL framework ships its own performance analysis tool; this article introduces PyTorch's: torch.autograd.profiler.
Test Environment
- ubuntu 18.04
- anaconda3 + python 3.7
- NVIDIA GPU / CUDA 10.2 (optional)
- PyTorch 1.6
What a Profiler Is
A profiler is a performance analysis tool used to measure an application's or model's execution time, execution flow, memory consumption, and so on. Beyond deep learning frameworks such as PyTorch and TensorFlow, platforms like NVIDIA CUDA and AMD ROCm also provide their own profilers, e.g. nvprof and rocprofiler.
The PyTorch Profiler
The PyTorch profiler lives in torch.autograd.profiler and currently supports:
- Per-op execution time statistics on the CPU and GPU
- Recording the input tensor shapes of each op on the CPU and GPU
- Per-op memory consumption statistics
The official PyTorch documentation on the profiler:
https://pytorch.org/docs/master/autograd.html
Analyzing CPU- and GPU-Side Op Execution Time
torch.autograd.profiler.profile(use_cuda=False...)
- CPU only: set use_cuda=False
- GPU mode: set use_cuda=True. Note: the model and the input tensors must already reside in GPU memory.
CPU-Only Mode
import time

import torch
from torchvision.models import resnet18

if __name__ == '__main__':
    model = resnet18(pretrained=False)
    device = torch.device('cpu')
    model.eval()
    model.to(device)

    dump_input = torch.ones(1, 3, 224, 224).to(device)

    # Warm-up (no torch.cuda.synchronize() needed: CPU ops are synchronous)
    for _ in range(5):
        start = time.time()
        outputs = model(dump_input)
        end = time.time()
        print('Time: {:.2f} ms'.format((end - start) * 1000))

    with torch.autograd.profiler.profile(enabled=True, use_cuda=False,
                                         record_shapes=False,
                                         profile_memory=False) as prof:
        outputs = model(dump_input)
    print(prof.table())
Profiler output (CPU only):
GPU Mode
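The raw prof.table() output lists every recorded event. In practice it is often more useful to aggregate identical ops and sort by total time via key_averages(). A minimal sketch (using a hypothetical two-layer model in place of resnet18 so it runs quickly):

```python
import torch
from torch.autograd import profiler

# hypothetical tiny model standing in for resnet18
model = torch.nn.Sequential(torch.nn.Linear(64, 64), torch.nn.ReLU())
x = torch.ones(1, 64)

with profiler.profile() as prof:
    model(x)

# key_averages() merges events with the same name; sort by total CPU time
print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=10))
```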
import time

import torch
from torchvision.models import resnet18

if __name__ == '__main__':
    model = resnet18(pretrained=False)
    device = torch.device('cuda')
    model.eval()
    model.to(device)

    dump_input = torch.ones(1, 3, 224, 224).to(device)

    # Warm-up
    for _ in range(5):
        start = time.time()
        outputs = model(dump_input)
        torch.cuda.synchronize()
        end = time.time()
        print('Time: {:.2f} ms'.format((end - start) * 1000))

    with torch.autograd.profiler.profile(enabled=True, use_cuda=True,
                                         record_shapes=False,
                                         profile_memory=False) as prof:
        outputs = model(dump_input)
    print(prof.table())
Profiler output (GPU):
Visualizing Profiler Results with Chrome Tracing
In the examples above, the profiler results were printed directly to the terminal. To dig further into how the ops relate to each other, the PyTorch profiler can emit its output as a Chrome-trace JSON file, which can then be visualized in the Chrome browser.
Just add prof.export_chrome_trace('./resnet_profile.json') at the end of the code above:
import time

import torch
from torchvision.models import resnet18

if __name__ == '__main__':
    model = resnet18(pretrained=False)
    device = torch.device('cuda')
    model.eval()
    model.to(device)

    dump_input = torch.ones(1, 3, 224, 224).to(device)

    # Warm-up
    for _ in range(5):
        start = time.time()
        outputs = model(dump_input)
        torch.cuda.synchronize()
        end = time.time()
        print('Time: {:.2f} ms'.format((end - start) * 1000))

    with torch.autograd.profiler.profile(enabled=True, use_cuda=True,
                                         record_shapes=False,
                                         profile_memory=False) as prof:
        outputs = model(dump_input)
    print(prof.table())
    prof.export_chrome_trace('./resnet_profile.json')
The generated JSON file:
Open Chrome and enter chrome://tracing in the address bar.
Import the JSON file generated by the profiler:
Controls:
Use the w, a, s, d keys to zoom and pan within the profiler timeline.
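Before loading a trace into chrome://tracing, it can be handy to sanity-check that the export is valid JSON. A sketch with a hypothetical tiny model (the temporary file path here is just an example):

```python
import json
import os
import tempfile

import torch
from torch.autograd import profiler

model = torch.nn.Linear(16, 16)
with profiler.profile() as prof:
    model(torch.ones(1, 16))

path = os.path.join(tempfile.mkdtemp(), "trace.json")
prof.export_chrome_trace(path)

# the export is standard Chrome trace-event JSON: either a bare event list
# or an object with a "traceEvents" array, depending on the PyTorch version
with open(path) as f:
    trace = json.load(f)
events = trace if isinstance(trace, list) else trace.get("traceEvents", [])
print(f"{len(events)} trace events written to {path}")
```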
Analyzing the Profiler Results
The sections above covered how to use the PyTorch profiler. What matters more is how to interpret its data: how to use the profiler to locate a model's performance bottlenecks and draw conclusions.
Whole-Model Analysis
- The list of ops executed on the CPU side
- The list of ops executed on the GPU side
The Relationship Between CPU and GPU Ops
CNN, RNN, GAN, and Transformer models are all ultimately composed of many ops. When a GPU device is used, the CPU side handles op scheduling, dispatching each op's computation to the GPU, while the GPU carries out the actual computation. In CUDA terms, the host (CPU) launches GPU kernel functions; once a kernel has been launched, the CPU and GPU execute asynchronously.
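This asynchrony is easy to observe directly: timing a CUDA op with time.time() and no synchronize only measures the CPU-side launch cost, not the GPU execution. A sketch (guarded so it also runs on CPU-only machines):

```python
import time

import torch

if torch.cuda.is_available():
    x = torch.randn(4096, 4096, device="cuda")
    torch.cuda.synchronize()

    t0 = time.time()
    y = x @ x                      # kernel is launched; call returns immediately
    launch_ms = (time.time() - t0) * 1000

    torch.cuda.synchronize()       # block until the GPU actually finishes
    total_ms = (time.time() - t0) * 1000
    print(f"launch: {launch_ms:.3f} ms, synchronized: {total_ms:.3f} ms")
else:
    print("CUDA not available; on a GPU, launch time << synchronized time")
```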
The Difference Between an Op's wall_duration_time and self_time
- wall_duration_time: the op's total execution time
- self_time: the op's own execution time, excluding time spent in the child ops it calls
Take relu_ as an example (relu_ is the in-place ReLU):
Call chain: relu_ -> threshold_
- threshold_: wall_dur = 154.624 us
- relu_: wall_dur = 179 us. Since relu_ in turn calls threshold_, its self_time = 179 - 154.624 ≈ 24.4 us
relu_ op:
threshold_ op:
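The same distinction appears in the profiler table as the self_cpu_time_total and cpu_time_total columns; by construction, an op's self time can never exceed its wall duration. A minimal sketch:

```python
import torch
from torch.autograd import profiler

x = torch.randn(128, 128)
with profiler.profile() as prof:
    torch.relu_(x)  # in-place ReLU; may dispatch to child ops internally

# self_cpu_time_total excludes time spent in child ops,
# cpu_time_total (the wall duration) includes it
for evt in prof.key_averages():
    print(f"{evt.key}: total={evt.cpu_time_total:.1f}us "
          f"self={evt.self_cpu_time_total:.1f}us")
```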
Analyzing Op Input Tensor Shapes
The PyTorch profiler can also record the input tensor shapes of each op (enable this with record_shapes=True).
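Pass record_shapes=True to capture shapes, then key_averages(group_by_input_shape=True) to break the statistics down per input-shape combination. A sketch with a hypothetical single conv layer:

```python
import torch
from torch.autograd import profiler

model = torch.nn.Conv2d(3, 8, kernel_size=3)
with profiler.profile(record_shapes=True) as prof:
    model(torch.ones(1, 3, 32, 32))

# with group_by_input_shape=True, the same op called with different
# input shapes is reported as separate rows
stats = prof.key_averages(group_by_input_shape=True)
print(stats.table(sort_by="cpu_time_total", row_limit=5))
```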
Profiling DNN Model Training
Code example: MNIST training
from __future__ import print_function
import argparse
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms
from torch.optim.lr_scheduler import StepLR
import torch.cuda.nvtx as nvtx
from torch.profiler import profile, ProfilerActivity


class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 32, 3, 1)
        self.conv2 = nn.Conv2d(32, 64, 3, 1)
        self.dropout1 = nn.Dropout(0.25)
        self.dropout2 = nn.Dropout(0.5)
        self.fc1 = nn.Linear(9216, 128)
        self.fc2 = nn.Linear(128, 10)

    def forward(self, x):
        x = self.conv1(x)
        x = F.relu(x)
        x = self.conv2(x)
        x = F.relu(x)
        x = F.max_pool2d(x, 2)
        x = self.dropout1(x)
        x = torch.flatten(x, 1)
        x = self.fc1(x)
        x = F.relu(x)
        x = self.dropout2(x)
        x = self.fc2(x)
        output = F.log_softmax(x, dim=1)
        return output


prof = torch.profiler.profile(
    schedule=torch.profiler.schedule(wait=1, warmup=1, active=10, repeat=1),
    on_trace_ready=torch.profiler.tensorboard_trace_handler('./profile'),
    record_shapes=True,
    with_stack=False)


def train(args, model, device, train_loader, optimizer, epoch):
    model.train()
    for batch_idx, (data, target) in enumerate(train_loader):
        data, target = data.to(device), target.to(device)
        optimizer.zero_grad()
        prof.start()
        nvtx.range_push('Forward')
        output = model(data)
        loss = F.nll_loss(output, target)
        nvtx.range_pop()
        nvtx.range_push('Backward')
        loss.backward()
        optimizer.step()
        nvtx.range_pop()
        prof.step()
        prof.stop()
        if batch_idx % args.log_interval == 0:
            print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
                epoch, batch_idx * len(data), len(train_loader.dataset),
                100. * batch_idx / len(train_loader), loss.item()))
            if args.dry_run:
                break


def test(model, device, test_loader):
    model.eval()
    test_loss = 0
    correct = 0
    with torch.no_grad():
        for data, target in test_loader:
            data, target = data.to(device), target.to(device)
            output = model(data)
            test_loss += F.nll_loss(output, target, reduction='sum').item()  # sum up batch loss
            pred = output.argmax(dim=1, keepdim=True)  # get the index of the max log-probability
            correct += pred.eq(target.view_as(pred)).sum().item()

    test_loss /= len(test_loader.dataset)

    print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
        test_loss, correct, len(test_loader.dataset),
        100. * correct / len(test_loader.dataset)))


def inference(model):
    print('Inference')
    model.eval()
    data = torch.rand(64, 1, 28, 28).cuda()
    output = model(data)
    print(output.size())


def main():
    # Training settings
    parser = argparse.ArgumentParser(description='PyTorch MNIST Example')
    parser.add_argument('--batch-size', type=int, default=64, metavar='N',
                        help='input batch size for training (default: 64)')
    parser.add_argument('--test-batch-size', type=int, default=1000, metavar='N',
                        help='input batch size for testing (default: 1000)')
    parser.add_argument('--epochs', type=int, default=14, metavar='N',
                        help='number of epochs to train (default: 14)')
    parser.add_argument('--lr', type=float, default=1.0, metavar='LR',
                        help='learning rate (default: 1.0)')
    parser.add_argument('--gamma', type=float, default=0.7, metavar='M',
                        help='Learning rate step gamma (default: 0.7)')
    parser.add_argument('--no-cuda', action='store_true', default=False,
                        help='disables CUDA training')
    parser.add_argument('--no-mps', action='store_true', default=False,
                        help='disables macOS GPU training')
    parser.add_argument('--dry-run', action='store_true', default=False,
                        help='quickly check a single pass')
    parser.add_argument('--seed', type=int, default=1, metavar='S',
                        help='random seed (default: 1)')
    parser.add_argument('--log-interval', type=int, default=10, metavar='N',
                        help='how many batches to wait before logging training status')
    parser.add_argument('--save-model', action='store_true', default=False,
                        help='For Saving the current Model')
    args = parser.parse_args()
    use_cuda = not args.no_cuda and torch.cuda.is_available()
    use_mps = not args.no_mps and torch.backends.mps.is_available()

    torch.manual_seed(args.seed)

    if use_cuda:
        device = torch.device("cuda")
    elif use_mps:
        device = torch.device("mps")
    else:
        device = torch.device("cpu")

    train_kwargs = {'batch_size': args.batch_size}
    test_kwargs = {'batch_size': args.test_batch_size}
    if use_cuda:
        cuda_kwargs = {'num_workers': 1,
                       'pin_memory': True,
                       'shuffle': True}
        train_kwargs.update(cuda_kwargs)
        test_kwargs.update(cuda_kwargs)

    transform = transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize((0.1307,), (0.3081,))
    ])
    dataset1 = datasets.MNIST('../data', train=True, download=True,
                              transform=transform)
    dataset2 = datasets.MNIST('../data', train=False,
                              transform=transform)
    train_loader = torch.utils.data.DataLoader(dataset1, **train_kwargs)
    test_loader = torch.utils.data.DataLoader(dataset2, **test_kwargs)

    model = Net().to(device)
    optimizer = optim.Adadelta(model.parameters(), lr=args.lr)

    scheduler = StepLR(optimizer, step_size=1, gamma=args.gamma)
    for epoch in range(1, args.epochs + 1):
        train(args, model, device, train_loader, optimizer, epoch)
        test(model, device, test_loader)
        scheduler.step()

    if args.save_model:
        torch.save(model.state_dict(), "mnist_cnn.pt")


if __name__ == '__main__':
    main()
Running this generates a number of JSON files holding the profile information:
Open them with TensorBoard:
tensorboard --logdir=./profile
Viewing training iteration 2:
Three phases are visible: forward, backward, and optimizer.step (the weight update).
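The schedule(wait=1, warmup=1, active=10, repeat=1) passed to torch.profiler.profile above is what limits capture to a window of training steps: the profiler skips `wait` steps, warms up for `warmup` steps, records `active` steps, and then invokes on_trace_ready. A minimal sketch of this behavior with a hypothetical tiny workload (shorter active window so it finishes quickly):

```python
import torch
from torch.profiler import ProfilerActivity, profile, schedule

traces = []
with profile(activities=[ProfilerActivity.CPU],
             schedule=schedule(wait=1, warmup=1, active=2, repeat=1),
             on_trace_ready=lambda p: traces.append(p.step_num)) as prof:
    for _ in range(6):
        torch.randn(32, 32) @ torch.randn(32, 32)
        prof.step()

# on_trace_ready fires once per completed "active" window; repeat=1 means
# the wait/warmup/active cycle runs only once
print(f"on_trace_ready fired {len(traces)} time(s)")
```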