What are the uses or benefits of 1x1 convolutions in convolutional neural networks?

The 1x1 convolution kernel was first proposed in Shuicheng Yan's paper [1312.4400] Network In Network, and was later adopted by the Inception structure of [GoogLeNet 1409.4842] Going Deeper with Convolutions. The precondition for getting away with fewer channels is that the features are fairly sparse; otherwise the effect of the 1x1 convolution will not be very noticeable.

Network in Network and 1×1 convolutions

Lin et al., 2013. Network in network

A 1x1 convolution can compress the number of channels, while pooling compresses the width and height.
A 1x1 convolution also adds non-linearity to the network, and it can reduce, keep, or increase the number of channels.

[Figure: 11c.png]

[Figure: 11c1.png]
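To make that channel-vs-spatial distinction concrete, here is a minimal sketch (PyTorch is used purely for illustration, and the layer sizes are made up): the 1x1 convolution changes only the number of channels, while pooling changes only the width and height.

```python
import torch
import torch.nn as nn

x = torch.randn(1, 64, 28, 28)                       # (batch, channels, height, width)

reduce_channels = nn.Conv2d(64, 16, kernel_size=1)   # 1x1 conv: 64 -> 16 channels
pool = nn.MaxPool2d(kernel_size=2)                   # pooling: 28x28 -> 14x14

print(reduce_channels(x).shape)                      # torch.Size([1, 16, 28, 28])
print(pool(x).shape)                                 # torch.Size([1, 64, 14, 14])
```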

1. Cross-channel interaction and information integration

The 1x1 convolution layer (probably) first drew wide attention through the NIN architecture. Min Lin's idea in that paper was to replace the traditional linear convolution kernel with an MLP, improving the network's expressive power. The paper also explains this from a cross-channel pooling perspective: the proposed MLP is equivalent to appending a cascaded cross-channel parametric pooling (cccp) layer after a traditional convolution kernel, which forms linear combinations of multiple feature maps and thus integrates information across channels. Since a cccp layer is equivalent to a 1x1 convolution, a close look at the Caffe implementation of NIN shows exactly that: every traditional convolution layer is followed by two cccp layers (in other words, two 1x1 convolution layers).

[Figure: 1.png — NIN block: a traditional convolution followed by two cccp (1x1) layers]
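As a rough illustration of the structure just described, the sketch below stacks a traditional convolution with two 1x1 convolutions (the cccp layers), each followed by a ReLU. It is written in PyTorch purely for convenience (NIN's reference implementation is in Caffe), and the channel sizes are illustrative only.

```python
import torch
import torch.nn as nn

nin_block = nn.Sequential(
    nn.Conv2d(3, 96, kernel_size=5, padding=2),   # traditional convolution
    nn.ReLU(inplace=True),
    nn.Conv2d(96, 96, kernel_size=1),             # cccp layer 1 (1x1 convolution)
    nn.ReLU(inplace=True),
    nn.Conv2d(96, 96, kernel_size=1),             # cccp layer 2 (1x1 convolution)
    nn.ReLU(inplace=True),
)

x = torch.randn(1, 3, 32, 32)
print(nin_block(x).shape)   # torch.Size([1, 96, 32, 32])
```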

2. Reducing and increasing the channel dimension

Because 3x3 or 5x5 convolutions are quite expensive on convolution layers with several hundred filters, a 1x1 convolution is used to reduce the dimensionality before the 3x3 or 5x5 computation. The main roles of a 1x1 convolution are therefore:
1. Dimension reduction. For example, applying twenty 1x1 filters to a 500x500 input with a depth of 100 produces an output of size 500x500x20 (see the sketch after this list).
2. Adding non-linearity. Since a convolution layer is followed by an activation layer, the 1x1 convolution adds a non-linear activation on top of the representation learned by the previous layer, improving the network's expressive power.
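The following sketch checks the shapes and parameter count for the 500x500 example in point 1 above (PyTorch is used only for illustration):

```python
import torch
import torch.nn as nn

x = torch.randn(1, 100, 500, 500)            # depth 100, 500x500 spatial size
reduce = nn.Conv2d(100, 20, kernel_size=1)   # twenty 1x1 filters
print(reduce(x).shape)                       # torch.Size([1, 20, 500, 500])

# Each of the 20 filters has 100 weights and 1 bias: 20 * (100 + 1) = 2020 parameters.
print(sum(p.numel() for p in reduce.parameters()))   # 2020
```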

What is Depth of a convolutional neural network?


If the input and output of a convolution were each a single plane, a 1x1 kernel would be meaningless, since it completely ignores the relationship between a pixel and its neighbours. But the input and output of a convolution are volumes, so a 1x1 convolution actually performs, at every pixel, a linear combination across the different channels (information integration). It preserves the original spatial structure while adjusting the depth, and can therefore increase or decrease the number of channels.
As shown in the figure below, a 1x1 convolution layer with 2 filters reduces the data from a depth of 3 to 2; with 4 filters it would instead increase the dimensionality.

[Figure: 1.png — a 1x1 convolution with 2 filters reduces the depth from 3 to 2]
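The per-pixel interpretation can be verified directly: a 1x1 convolution produces exactly the same result as multiplying each pixel's channel vector by a weight matrix. Below is a minimal check in PyTorch; the tensor sizes are arbitrary.

```python
import torch
import torch.nn as nn

conv = nn.Conv2d(3, 2, kernel_size=1, bias=False)   # depth 3 -> depth 2
x = torch.randn(1, 3, 6, 6)

y_conv = conv(x)                                    # (1, 2, 6, 6)

W = conv.weight.view(2, 3)                          # drop the 1x1 spatial dimensions
y_mix = torch.einsum('oi,bihw->bohw', W, x)         # per-pixel channel mixing
print(torch.allclose(y_conv, y_mix, atol=1e-6))     # True
```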

MSRA's ResNet also uses 1x1 convolutions, placing them both before and after the 3x3 convolution layer: one reduces the channel dimension and the other restores it, so the 3x3 layer sees fewer input and output channels and the number of parameters is reduced further, as in the structure shown below.

[Figure: 1.png — ResNet bottleneck: 1x1 reduction, 3x3 convolution, 1x1 expansion]
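A rough sketch of that bottleneck idea is given below, using the common 256 -> 64 -> 64 -> 256 channel pattern (this is only the convolutional core, not a complete residual block with batch norm and the skip connection), together with a parameter-count comparison against a single 3x3 convolution on 256 channels.

```python
import torch.nn as nn

bottleneck = nn.Sequential(
    nn.Conv2d(256, 64, kernel_size=1),             # 1x1: reduce channels
    nn.ReLU(inplace=True),
    nn.Conv2d(64, 64, kernel_size=3, padding=1),   # 3x3 on the reduced channels
    nn.ReLU(inplace=True),
    nn.Conv2d(64, 256, kernel_size=1),             # 1x1: restore channels
)
direct = nn.Conv2d(256, 256, kernel_size=3, padding=1)   # a plain 3x3 for comparison

def count(m):
    return sum(p.numel() for p in m.parameters())

print(count(bottleneck), count(direct))   # 70016 vs 590080
```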

Simple Answer

The simplest explanation is that a 1x1 convolution performs dimensionality reduction. For example, convolving a 200 x 200 feature map with 50 channels against 20 filters of size 1x1 results in an output of size 200 x 200 x 20. But then again, is this the best way to do dimensionality reduction in a convolutional neural network? What about efficacy vs. efficiency?
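On the efficiency question, a back-of-the-envelope multiply count helps. Assume (hypothetically) that the 200 x 200 x 50 volume is to be processed by a 5x5 convolution with 50 output channels; reducing to 20 channels with a 1x1 convolution first cuts the number of multiplies by more than half.

```python
H, W = 200, 200
c_in, c_reduced, c_out, k = 50, 20, 50, 5

# 5x5 convolution applied directly to all 50 input channels
direct = H * W * c_out * k * k * c_in
# 1x1 reduction to 20 channels first, then the 5x5 convolution
reduced = H * W * c_reduced * c_in + H * W * c_out * k * k * c_reduced

print(f"{direct:,} vs {reduced:,} multiplies")   # 2,500,000,000 vs 1,040,000,000
```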


One by One [ 1 x 1 ] Convolution - counter-intuitively useful

Complex Answer

Feature transformation

Although 1x1 convolution is a 'feature pooling' technique, there is more to it than just sum pooling of features across the channels/feature maps of a given layer. 1x1 convolution acts like a coordinate-dependent transformation in the filter space [https://plus.google.com/118431607943208545663/posts/2y7nmBuh2ar]. It is important to note that this transformation is strictly linear, but in most applications of 1x1 convolution it is followed by a non-linear activation layer such as ReLU. The transformation is learned through (stochastic) gradient descent, and an important distinction is that it suffers less from over-fitting due to the small kernel size (1x1).

3. Greatly increasing non-linearity while keeping the feature-map size unchanged (i.e., without losing resolution), which allows the network to be made very deep

Deeper Network

One by One convolution was first introduced in the paper titled Network in Network. The authors' goal was to build a deeper network without simply stacking more layers. It replaces a few filters with a smaller perceptron layer made of a mixture of 1x1 and 3x3 convolutions. In a way, this can be seen as "going wide" instead of "deep", though it should be noted that in machine learning terminology 'going wide' is often taken to mean adding more data to the training. A combination of 1x1 (x F) convolutions is mathematically equivalent to a multi-layer perceptron [https://www.reddit.com/r/MachineLearning/comments/3oln72/1x1_convolutions_why_use_them/cvyxood/].
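That MLP equivalence is easy to check numerically: a stack of 1x1 convolutions applies the same small MLP independently at every pixel. The sketch below (PyTorch, arbitrary layer sizes) copies the convolution weights into an MLP and confirms the two outputs match.

```python
import torch
import torch.nn as nn

conv_stack = nn.Sequential(
    nn.Conv2d(8, 16, kernel_size=1), nn.ReLU(),
    nn.Conv2d(16, 4, kernel_size=1),
)
mlp = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))

# Copy the convolution weights into the MLP so both compute the same function.
mlp[0].weight.data = conv_stack[0].weight.data.view(16, 8)
mlp[0].bias.data = conv_stack[0].bias.data
mlp[2].weight.data = conv_stack[2].weight.data.view(4, 16)
mlp[2].bias.data = conv_stack[2].bias.data

x = torch.randn(1, 8, 5, 5)
y_conv = conv_stack(x)                                    # (1, 4, 5, 5)
y_mlp = mlp(x.permute(0, 2, 3, 1)).permute(0, 3, 1, 2)    # same MLP at every pixel
print(torch.allclose(y_conv, y_mlp, atol=1e-6))           # True
```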

Inception Module

In the GoogLeNet architecture, 1x1 convolution is used for several purposes:

To make the network deeper by adding an "inception module", as in the Network in Network paper described above.
To reduce the dimensions inside this "inception module" (see the sketch after this list).
To add more non-linearity by having a ReLU immediately after every 1x1 convolution.
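Here is a minimal sketch of one such branch, with a 1x1 reduction placed before the more expensive 5x5 convolution. The 192 -> 16 -> 32 channel sizes are meant to mirror the 5x5 branch of GoogLeNet's inception(3a) module, but treat them as illustrative.

```python
import torch
import torch.nn as nn

branch_5x5 = nn.Sequential(
    nn.Conv2d(192, 16, kernel_size=1),             # 1x1 dimension reduction
    nn.ReLU(inplace=True),
    nn.Conv2d(16, 32, kernel_size=5, padding=2),   # 5x5 on the reduced channels
    nn.ReLU(inplace=True),
)

x = torch.randn(1, 192, 28, 28)
print(branch_5x5(x).shape)   # torch.Size([1, 32, 28, 28])
```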

Here is a screenshot from the paper, which illustrates the points above:

[Figure: inception_1x1.png — Inception module from the GoogLeNet paper]

It can be seen from the image on the right that 1x1 convolutions (in yellow) are used specifically before the 3x3 and 5x5 convolutions to reduce the dimensions. Note that a two-step convolution can always be combined into one; but in this case, as in most deep learning networks, each convolution is followed by a non-linear activation, so the convolutions are no longer linear operators and cannot be combined.
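This point can also be checked numerically: two convolutions with no activation in between collapse into a single convolution (their kernels compose), whereas inserting a ReLU breaks the equivalence. A small PyTorch sketch with arbitrary channel sizes:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

conv_1x1 = nn.Conv2d(8, 4, kernel_size=1, bias=False)
conv_3x3 = nn.Conv2d(4, 6, kernel_size=3, padding=1, bias=False)

# Compose the two linear kernels into one equivalent 3x3 kernel.
merged = torch.einsum('omhw,mi->oihw', conv_3x3.weight, conv_1x1.weight.view(4, 8))

x = torch.randn(1, 8, 10, 10)
two_step = conv_3x3(conv_1x1(x))
one_step = F.conv2d(x, merged, padding=1)
print(torch.allclose(two_step, one_step, atol=1e-5))    # True: purely linear steps merge

with_relu = conv_3x3(F.relu(conv_1x1(x)))               # a ReLU in between...
print(torch.allclose(with_relu, one_step, atol=1e-5))   # ...False: no longer mergeable
```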

In designing such a network, it is important that the initial convolution kernel be larger than 1x1, so that it has a receptive field capable of capturing local spatial information. According to the NIN paper, 1x1 convolution is equivalent to a cross-channel parametric pooling layer. From the paper: "This cascaded cross channel parametric pooling structure allows complex and learnable interactions of cross channel information."

Cross-channel information learning (cascaded 1x1 convolutions) is biologically inspired, because the human visual cortex has receptive fields (kernels) tuned to different orientations. For example:

[Figure: RotBundleFiltersListPlot3D.gif]

Receptive-field profiles tuned to different orientations in the human visual cortex (source)

More Uses

A 1x1 convolution can be combined with max pooling:

[Figure: numerical_max_pooling.gif — max pooling example]

A 1x1 convolution with a larger stride reduces the data even further by decreasing the resolution, while losing very little non-spatially-correlated information.

[Figure: no_padding_strides.gif — strided convolution without padding]
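A small sketch of these two uses (PyTorch, arbitrary sizes): a 1x1 convolution applied after max pooling, and a 1x1 convolution with stride 2 that reduces the channel count and the spatial resolution at the same time.

```python
import torch
import torch.nn as nn

x = torch.randn(1, 64, 32, 32)

pool_then_mix = nn.Sequential(nn.MaxPool2d(2), nn.Conv2d(64, 32, kernel_size=1))
print(pool_then_mix(x).shape)                       # torch.Size([1, 32, 16, 16])

strided_1x1 = nn.Conv2d(64, 32, kernel_size=1, stride=2)
print(strided_1x1(x).shape)                         # torch.Size([1, 32, 16, 16])
```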

Fully connected layers can be replaced with 1x1 convolutions; as Yann LeCun puts it, they are the same thing:

"In Convolutional Nets, there is no such thing as 'fully-connected layers'. There are only convolution layers with 1x1 convolution kernels and a full connection table." – Yann LeCun
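In the spirit of that quote, the sketch below shows that a fully connected layer over C input features is the same operation as a 1x1 convolution applied to a 1x1 feature map; the sizes (512 inputs, 10 outputs) are hypothetical.

```python
import torch
import torch.nn as nn

fc = nn.Linear(512, 10)
conv = nn.Conv2d(512, 10, kernel_size=1)
conv.weight.data = fc.weight.data.view(10, 512, 1, 1)   # reuse the same parameters
conv.bias.data = fc.bias.data

feat = torch.randn(1, 512)                              # e.g. globally pooled features
y_fc = fc(feat)
y_conv = conv(feat.view(1, 512, 1, 1)).view(1, 10)
print(torch.allclose(y_fc, y_conv, atol=1e-6))          # True
```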

The convolution GIF images were generated using this wonderful code; more images of 1x1 and 3x3 convolutions can be found here.

References

https://www.zhihu.com/question/56024942
