2021-10-04 - Models Genesis: Generic Autodidactic Models for 3D Medical Image Analysis (MICCAI 2019)
Code: https://github.com/MrGiovanni/ModelsGenesis
Motivation:
3D imaging tasks are often reformulated as 2D problems, which discards rich 3D anatomical information and degrades performance.
To address this, the paper proposes Models Genesis, so named because the models are created ex nihilo (with no manual labeling), self-taught (learned by self-supervision), and generic (serving as source models for generating application-specific target models).
The sophisticated yet recurrent anatomy in medical images can serve as a strong supervision signal, enabling deep models to learn a common anatomical representation automatically via self-supervision.
Given the marked differences between natural images and medical images, we hypothesize that transfer learning can yield more powerful (application-specific) target models if the source models are built directly from medical images.
Can we utilize the large number of available Chest CT images without systematic annotation to train source models that can yield high-performance target models via transfer learning?
Methods
The encoder alone can be fine-tuned for target classification tasks, while the encoder and decoder together can be fine-tuned for target segmentation tasks.
Learning appearance (shape and intensity distribution) via non-linear transformation.
Intensity information can be used as a strong source of pixel-wise supervision.
To preserve the relative intensity information of anatomical structures under the transformation, a Bézier curve is used as the transformation function: it is smooth and monotonic, assigns each pixel a unique value, and guarantees a one-to-one mapping.
[What is a Bézier curve? See note: 贝塞尔曲线 - 简书]
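A minimal NumPy sketch of such a Bézier-based intensity remapping. The function name and the choice of two uniformly random interior control points are my illustrative assumptions (the official repo has its own implementation); it assumes intensities normalized to [0, 1] and, for simplicity, works on arrays of any shape:

```python
import numpy as np

def bezier_intensity_transform(image, rng=None):
    """Smooth intensity remapping via a cubic Bezier curve (a sketch).

    Endpoints (0, 0) and (1, 1) preserve the intensity range; two random
    interior control points bend the curve. Assumes `image` in [0, 1].
    """
    rng = np.random.default_rng() if rng is None else rng
    p0, p3 = np.array([0.0, 0.0]), np.array([1.0, 1.0])  # fixed endpoints
    p1, p2 = rng.random(2), rng.random(2)                # random control points
    t = np.linspace(0.0, 1.0, 1000)
    # Cubic Bezier: B(t) = (1-t)^3 p0 + 3(1-t)^2 t p1 + 3(1-t) t^2 p2 + t^3 p3
    curve = (np.outer((1 - t) ** 3, p0)
             + np.outer(3 * (1 - t) ** 2 * t, p1)
             + np.outer(3 * (1 - t) * t ** 2, p2)
             + np.outer(t ** 3, p3))
    xs, ys = curve[:, 0], curve[:, 1]
    order = np.argsort(xs)  # np.interp requires increasing sample points
    return np.interp(image, xs[order], ys[order])
```

Because the curve passes through (0, 0) and (1, 1), extreme intensities map to themselves, so the relative ordering of anatomy-defining intensities is largely preserved while the distribution in between is warped.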
Learning texture via local pixel shuffling.
Given an original patch, local pixel shuffling randomly samples a window from the patch and shuffles the order of the pixels it contains, yielding a transformed patch. The size of the local window controls the difficulty of the task and is kept smaller than the model's receptive field. (PatchShuffling [5] is a related regularization technique for preventing overfitting.) To recover from local pixel shuffling, the model must memorize local boundaries and texture.
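A sketch of this transformation in NumPy, on a 2D patch for brevity (the paper operates on 3D sub-volumes); the function name and window-count/size parameters are illustrative assumptions:

```python
import numpy as np

def local_pixel_shuffle(patch, n_windows=50, max_window=8, rng=None):
    """Shuffle pixels inside many small random windows (a sketch).

    The windows stay small -- below the model's receptive field -- so
    global anatomy survives while local texture is destroyed.
    """
    rng = np.random.default_rng() if rng is None else rng
    out = patch.copy()
    h, w = out.shape
    for _ in range(n_windows):
        wh = rng.integers(2, max_window + 1)       # window height
        ww = rng.integers(2, max_window + 1)       # window width
        y = rng.integers(0, h - wh + 1)            # top-left corner
        x = rng.integers(0, w - ww + 1)
        window = out[y:y + wh, x:x + ww].flatten() # copy of window pixels
        rng.shuffle(window)                        # permute within window
        out[y:y + wh, x:x + ww] = window.reshape(wh, ww)
    return out
```

Note that the transformation is a pure permutation: the global intensity histogram is unchanged, only the local spatial arrangement is scrambled.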
Learning context via out-painting and in-painting
For self-supervision via out-painting, we generate an arbitrary number of windows of various sizes and overlay them on one another to form a single window of complex shape. All pixels outside this window are then assigned a random value, while the original intensities inside are preserved. For in-painting, the original intensities are kept outside the window and random values are assigned inside it. Out-painting forces Models Genesis to learn the global geometry and spatial layout of organs by extrapolating, whereas in-painting requires Models Genesis to learn the local continuity of organs by interpolating.
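Both variants can share one mask-generation routine; only the side of the mask that gets noised differs. A 2D NumPy sketch (function name, window counts, and size ranges are my assumptions; intensities are assumed normalized to [0, 1]):

```python
import numpy as np

def paint(patch, mode="in", n_windows=3, rng=None):
    """Out-/in-painting distortion (a sketch).

    Several rectangles are overlaid into one complex-shaped region.
    mode="in":  noise inside the region  (model must interpolate locally)
    mode="out": noise outside the region (model must extrapolate globally)
    Assumes a float `patch` normalized to [0, 1].
    """
    rng = np.random.default_rng() if rng is None else rng
    h, w = patch.shape
    mask = np.zeros((h, w), dtype=bool)
    for _ in range(n_windows):
        wh = rng.integers(h // 8, h // 3)   # window height
        ww = rng.integers(w // 8, w // 3)   # window width
        y = rng.integers(0, h - wh + 1)
        x = rng.integers(0, w - ww + 1)
        mask[y:y + wh, x:x + ww] = True     # union of overlapping windows
    region = mask if mode == "in" else ~mask
    out = patch.copy()
    out[region] = rng.random(region.sum())  # replace with uniform noise
    return out
```

The restoration target in both modes is the untouched original patch, so the two tasks plug into the same reconstruction objective.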
Four unique properties:
1)Autodidactic—requiring no manual labeling.
2)Eclectic—learning from multiple perspectives (appearance, texture, context) to learn a more comprehensive representation.
3)Scalable—eliminating proxy-task-specific heads.
If every proxy task required its own decoder, the framework could not accommodate a large number of self-supervised tasks given limited GPU memory. By unifying all tasks into a single image-restoration task, any favorable transformation can easily be incorporated into the framework, overcoming the scalability issue associated with multi-task learning [2], where the network heads are subject to specific proxy tasks.
4)Generic—yielding diverse applications.
Models Genesis learns a general-purpose image representation that can be leveraged for a wide range of target tasks. Specifically, Models Genesis can be utilized to initialize the encoder for target classification tasks and the encoder-decoder for target segmentation tasks.
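The scalability point above—one restoration objective covering all distortions—can be sketched as a single pair-generation step. The function name is mine, and `transforms` stands for any list of shape-preserving distortions such as the three described earlier:

```python
import numpy as np

def make_restoration_pair(patch, transforms, rng=None):
    """Unified proxy task (a sketch): distort a patch with a random
    subset of transformations and use the *original* patch as the
    restoration target. Every transformation maps an image to an image
    of the same shape, so one encoder-decoder trained with a plain L2
    reconstruction loss covers all proxy tasks -- no per-task heads.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = patch.copy()
    for t in transforms:
        if rng.random() < 0.5:  # each distortion applied with prob. 0.5
            x = t(x)
    return x, patch             # (model input, training target)
```

Adding a new self-supervised signal then means appending one callable to `transforms`, with no change to the network or the loss.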
Experiments and results
Experiment protocol.
534 CT scans in LIDC-IDRI and 77,074 X-rays in ChestX-ray8
Models Genesis outperform 3D models trained from scratch.
Models Genesis consistently top any 2D approaches.
Models Genesis (2D) offer performance equivalent to supervised pre-trained models.