Preface
Deep learning has become an essential skill for computer vision engineers. Students typically enter the field through the classic LeNet-5, then move on to AlexNet, VGGNet, GoogLeNet, and ResNet. These networks, however, were originally built for the ImageNet competition; in real-world development we usually have far less data and limited hardware. Once you are familiar with them, a small hands-on project is the natural next step, and since every era of technology ultimately revolves around people, down-to-earth projects like face detection and face recognition are good candidates. This project trains a binary face classifier with Caffe on Ubuntu 18.04.
Development environment
. Ubuntu 18.04
. Caffe
. pycaffe
. Anaconda3
Setting up the development environment
Why choose Caffe?
The mainstream deep learning frameworks today include TensorFlow, PyTorch, MXNet, and Caffe.
. Advantages of Caffe:
- In principle you write no code: define the network structure and hyperparameters in prototxt files and you can train a model
- Caffe itself is written in C++, familiar ground for C++ developers, and ports easily to Android, ARM, and other platforms
- Caffe provides Python and MATLAB interfaces that are simple to use
. Drawbacks of Caffe:
- The project is no longer maintained
- Installing and building it is a headache: it depends on many third-party libraries, and building on Windows is especially painful
- The documentation is thin; apart from the examples on the official site there is no systematic reference
- Newer network features are not supported
I strongly recommend installing Caffe on Ubuntu 18.04, where the whole process is a single command.
Installing Caffe on Ubuntu 18.04
sudo apt-get install caffe-cpu
or
sudo apt-get install caffe-gpu
To verify the installation, open a terminal and run:
caffe
If the command prints its usage message (listing subcommands such as train and test), the installation succeeded.
Project overview
1. Goal
Train a binary face classifier (a caffemodel): given an input image, it outputs the probability that the image contains a face (see the inference sketch after this list).
2. Network architecture
A modified DeepID network; DeepID was originally designed for face feature extraction.
3. Training data
1,000k samples, drawn from the AFLW face database.
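To make the goal concrete, here is a minimal pycaffe inference sketch. It is only a sketch: deploy.prototxt, face.caffemodel, test.jpg, the 31x31 input size, and the output blob name "prob" are all illustrative assumptions, and the preprocessing must match however the training lmdb was actually built.

import numpy as np
import caffe

# Hypothetical file names; assumes a deploy prototxt ending in a
# Softmax layer whose top is named "prob".
net = caffe.Net('deploy.prototxt', 'face.caffemodel', caffe.TEST)

img = caffe.io.load_image('test.jpg')           # HxWx3 RGB floats in [0, 1]
img = caffe.io.resize_image(img, (31, 31))      # hypothetical input size
blob = img.transpose(2, 0, 1)[np.newaxis, ...]  # reshape to NxCxHxW

# NOTE: channel order and scaling here must match the lmdb preprocessing.
net.blobs['data'].reshape(*blob.shape)
net.blobs['data'].data[...] = blob
prob = net.forward()['prob'][0][1]              # probability of class "face"
print('face probability: %.4f' % prob)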
Network model
In Caffe a network definition is usually produced in one of two ways: write train_val.prototxt by hand, or define the structure in Python with pycaffe and generate train_val.prototxt from code. I recommend the second approach; a minimal sketch follows, and the full network definition is listed after it.
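A minimal sketch of the pycaffe approach, assuming the lmdb path and batch size below. It generates only a skeleton of the network listed afterwards; the remaining conv blocks, fillers, and the separate TRAIN/TEST data layers are left out for brevity.

import caffe
from caffe import layers as L, params as P

def make_net(lmdb_path, batch_size):
    n = caffe.NetSpec()
    # Data layer reading image/label pairs from an lmdb
    n.data, n.label = L.Data(source=lmdb_path, backend=P.Data.LMDB,
                             batch_size=batch_size, ntop=2,
                             transform_param=dict(mirror=True))
    n.conv1 = L.Convolution(n.data, num_output=20, kernel_size=3, stride=1,
                            pad=1,
                            weight_filler=dict(type='gaussian', std=0.01),
                            bias_filler=dict(type='constant', value=0))
    n.relu1 = L.ReLU(n.conv1, in_place=True)
    # ... conv2-conv4 with LRN and pooling follow the same pattern ...
    n.deepid = L.InnerProduct(n.relu1, num_output=160)
    n.fc7 = L.InnerProduct(n.deepid, num_output=2)
    n.loss = L.SoftmaxWithLoss(n.fc7, n.label)
    return n.to_proto()

with open('train_val.prototxt', 'w') as f:
    f.write(str(make_net('./DATA/train_lmdb', 128)))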
train_val.prototxt
A few fields must be set by hand:
- source: xxxx/train_lmdb — the path to the training lmdb (typically produced with Caffe's convert_imageset tool)
- source: xxxx/val_lmdb — the path to the validation lmdb
- batch_size: 32/64/128/256/512, depending on your GPU memory
- num_output of fc7: 1000 in the original DeepID; change it to 2 for our binary classification
############################# DATA Layer #############################
name: "face_train_val"
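# Two Data layers write to the same tops ("data"/"label"); the
# include { phase: ... } rule selects which one is active during
# training vs. testing.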
layer {
top: "data"
top: "label"
name: "data"
type: "Data"
data_param {
source: "./DATA/train_lmdb"
backend: LMDB
batch_size: 128
}
transform_param {
mirror: true
}
include: { phase: TRAIN }
}
layer {
top: "data"
top: "label"
name: "data"
type: "Data"
data_param {
source: "./DATA/val_lmdb"
backend: LMDB
batch_size: 128
}
transform_param {
mirror: false # no random mirroring when evaluating
}
include: {
phase: TEST
}
}
############################# CONV NET 1 #############################
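# Each block below is conv -> ReLU -> LRN -> 2x2 max pool. Naming the
# learnable blobs (conv1_w, conv1_b, ...) makes them shareable if the
# same names are reused in another layer or net.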
layer {
name: "conv1"
type: "Convolution"
bottom: "data"
top: "conv1"
param {
name: "conv1_w"
lr_mult: 1
decay_mult: 1
}
param {
name: "conv1_b"
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 20
kernel_size: 3
stride: 1
pad: 1
weight_filler {
type: "gaussian"
std: 0.01
}
bias_filler {
type: "constant"
value: 0
}
}
}
layer {
name: "relu1"
type: "ReLU"
bottom: "conv1"
top: "conv1"
}
layer {
name: "norm1"
type: "LRN"
bottom: "conv1"
top: "norm1"
lrn_param {
local_size: 5
alpha: 0.0001
beta: 0.75
}
}
layer {
name: "pool1"
type: "Pooling"
bottom: "norm1"
top: "pool1"
pooling_param {
pool: MAX
kernel_size: 2
stride: 2
}
}
layer {
name: "conv2"
type: "Convolution"
bottom: "pool1"
top: "conv2"
param {
name: "conv2_w"
lr_mult: 1
decay_mult: 1
}
param {
name: "conv2_b"
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 40
kernel_size: 3
pad: 1
weight_filler {
type: "gaussian"
std: 0.01
}
bias_filler {
type: "constant"
value: 0.1
}
}
}
layer {
name: "relu2"
type: "ReLU"
bottom: "conv2"
top: "conv2"
}
layer {
name: "norm2"
type: "LRN"
bottom: "conv2"
top: "norm2"
lrn_param {
local_size: 5
alpha: 0.0001
beta: 0.75
}
}
layer {
name: "pool2"
type: "Pooling"
bottom: "norm2"
top: "pool2"
pooling_param {
pool: MAX
kernel_size: 2
stride: 2
}
}
layer {
name: "conv3"
type: "Convolution"
bottom: "pool2"
top: "conv3"
param {
name: "conv3_w"
lr_mult: 1
decay_mult: 1
}
param {
name: "conv3_b"
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 60
kernel_size: 3
pad: 1
weight_filler {
type: "gaussian"
std: 0.01
}
bias_filler {
type: "constant"
value: 0.1
}
}
}
layer {
name: "relu3"
type: "ReLU"
bottom: "conv3"
top: "conv3"
}
layer {
name: "norm3"
type: "LRN"
bottom: "conv3"
top: "norm3"
lrn_param {
local_size: 5
alpha: 0.0001
beta: 0.75
}
}
layer {
name: "pool3"
type: "Pooling"
bottom: "norm3"
top: "pool3"
pooling_param {
pool: MAX
kernel_size: 2
stride: 2
}
}
layer {
name: "conv4"
type: "Convolution"
bottom: "pool3"
top: "conv4"
param {
name: "conv4_w"
lr_mult: 1
decay_mult: 1
}
param {
name: "conv4_b"
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 80
kernel_size: 3
pad: 1
weight_filler {
type: "gaussian"
std: 0.01
}
bias_filler {
type: "constant"
value: 0.1
}
}
}
layer {
name: "relu4"
type: "ReLU"
bottom: "conv4"
top: "conv4"
}
layer {
name: "norm4"
type: "LRN"
bottom: "conv4"
top: "norm4"
lrn_param {
local_size: 5
alpha: 0.0001
beta: 0.75
}
}
layer {
name: "pool4"
type: "Pooling"
bottom: "norm4"
top: "pool4"
pooling_param {
pool: MAX
kernel_size: 2
stride: 2
}
}
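# Fully connected layer producing the 160-dimensional DeepID feature vector.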
layer {
name: "deepid"
type: "InnerProduct"
bottom: "pool4"
top: "deepid"
param {
name: "fc5_w"
lr_mult: 1
decay_mult: 1
}
param {
name: "fc5_b"
lr_mult: 2
decay_mult: 0
}
inner_product_param {
num_output: 160
weight_filler {
type: "gaussian"
std: 0.005
}
bias_filler {
type: "constant"
value: 0.1
}
}
}
layer {
name: "relu6"
type: "ReLU"
bottom: "deepid"
top: "deepid"
}
layer {
name: "drop6"
type: "Dropout"
bottom: "deepid"
top: "deepid"
dropout_param {
dropout_ratio: 0.5
}
}
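# Classifier head: num_output is reduced to 2 (face / non-face).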
layer {
name: "fc7"
type: "InnerProduct"
bottom: "deepid"
top: "fc7"
param {
name: "fc7_w"
lr_mult: 1
decay_mult: 1
}
param {
name: "fc7_b"
lr_mult: 2
decay_mult: 0
}
inner_product_param {
num_output: 2
weight_filler {
type: "gaussian"
std: 0.01
}
bias_filler {
type: "constant"
value: 0
}
}
}
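# Accuracy is reported only in the TEST phase; SoftmaxWithLoss combines
# the softmax and the cross-entropy loss for training.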
layer {
name: "accuracy"
type: "Accuracy"
bottom: "fc7"
bottom: "label"
top: "accuracy"
include: { phase: TEST }
}
layer {
name: "loss"
type: "SoftmaxWithLoss"
bottom: "fc7"
bottom: "label"
top: "loss"
#loss_weight: 0.5
}
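With the prototxt in place, training can be launched from pycaffe. A minimal sketch, assuming a solver.prototxt (covered separately) whose net field points at the train_val.prototxt above; the command-line equivalent is caffe train --solver=solver.prototxt.

import caffe

caffe.set_mode_gpu()  # use caffe.set_mode_cpu() with the caffe-cpu package
solver = caffe.get_solver('solver.prototxt')  # hypothetical solver file
solver.solve()        # runs training and snapshots .caffemodel files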
Model visualization
The finished prototxt can be pasted into an online visualizer such as Netscope (http://ethereon.github.io/netscope/#/editor) to inspect the graph.
(Figure: partial view of the network graph)