PyTorch + sklearn: Splitting a Training/Validation Set


StratifiedShuffleSplit

from sklearn.model_selection import StratifiedShuffleSplit

StratifiedShuffleSplit(n_splits=10, test_size=None, train_size=None, random_state=None)

The parameter n_splits is the number of train/test splits to generate; set it as needed, the default is 10.

The parameters test_size and train_size set the proportions of the test and train parts of each train/test pair. For example:
1. Suppose num = 10 samples are to be split into training and test sets.
2. Set train_size=0.8 and test_size=0.2.
3. Then train_num = num * train_size = 8 and test_num = num * test_size = 2.
4. That is, after the split the 10 samples yield 8 training samples and 2 test samples.
Note: train_num ≥ 2 and test_num ≥ 2 (a stratified split needs at least one sample of each class on each side); test_size + train_size may sum to less than 1.
The parameter random_state controls the random shuffling of the samples (it fixes the random seed).

What the function does and how it works

1. It generates the specified number (n_splits) of independent train/test splits of the dataset.
2. It first shuffles the samples randomly, then carves out a train/test pair according to the configured sizes.
3. Every split it produces preserves the class proportions: if the class ratio in the first training split is 2:1, every subsequent split satisfies the same ratio.

from sklearn.model_selection import StratifiedShuffleSplit
import numpy as np
X = np.array([[1, 2], [3, 4], [1, 2], [3, 4],
              [1, 2], [3, 4], [1, 2], [3, 4]])  # training data, shape 8x2
y = np.array([0, 0, 1, 1, 0, 0, 1, 1])          # class labels, shape 8x1

ss = StratifiedShuffleSplit(n_splits=5, test_size=0.25, train_size=0.75, random_state=0)  # 5 splits, 25% test / 75% train

for train_index, test_index in ss.split(X, y):
    print("TRAIN:", train_index, "TEST:", test_index)  # index arrays for this split
    X_train, X_test = X[train_index], X[test_index]    # training / test samples
    y_train, y_test = y[train_index], y[test_index]    # training / test labels
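
Continuing from the snippet above, a quick sanity check (not part of the original example) that the splits really are stratified: with 8 samples and test_size=0.25, every test fold holds exactly one sample of each class and every training fold holds three of each.

from collections import Counter

for train_index, test_index in ss.split(X, y):
    print(Counter(y[train_index]), Counter(y[test_index]))
    # every split keeps 3 samples per class in train and 1 per class in test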


Using StratifiedShuffleSplit to split the Fashion-MNIST data

import sys
import numpy as np
import torch
import torchvision
import torchvision.transforms as transforms
from sklearn.model_selection import StratifiedShuffleSplit


def load_data_fashion_mnist(batch_size, root='../F_MNIST', use_normalize=False, mean=None, std=None, resize=None):
  trans = []
  if use_normalize:
    # mean and std are scalars for the single-channel Fashion-MNIST images
    normalize = transforms.Normalize(mean=[mean], std=[std])
    trans.append(transforms.RandomCrop(28, padding=2))
    trans.append(transforms.RandomHorizontalFlip())
    if resize:
      trans.append(transforms.Resize(size=resize))
    trans.append(transforms.ToTensor())
    trans.append(normalize)
    trans.append(Cutout(n_holes=1, length=2))  # Cutout is defined at the end of this post

    train_augs = transforms.Compose(trans)
    test_augs = transforms.Compose([
                                    transforms.ToTensor(),
                                    normalize
                                    ])
  else:
    train_augs = transforms.Compose([transforms.ToTensor()])
    test_augs = transforms.Compose([transforms.ToTensor()])

  # Load the training split twice: once with train-time augmentations and once with
  # test-time transforms, so the validation subset is not augmented
  mnist_train = torchvision.datasets.FashionMNIST(root=root, train=True, download=True, transform=train_augs)
  mnist_val = torchvision.datasets.FashionMNIST(root=root, train=True, download=True, transform=test_augs)
  mnist_test = torchvision.datasets.FashionMNIST(root=root, train=False, download=True, transform=test_augs)

  # Stratified 70%/30% split of the 60,000 training images into train/validation indices
  labels = [mnist_train[i][1] for i in range(len(mnist_train))]
  ss = StratifiedShuffleSplit(n_splits=1, test_size=0.3, random_state=0)
  train_indices, valid_indices = list(ss.split(np.array(labels)[:, np.newaxis], labels))[0]
  mnist_train = torch.utils.data.Subset(mnist_train, train_indices)
  mnist_val = torch.utils.data.Subset(mnist_val, valid_indices)


  # Multi-process data loading is problematic on Windows, so fall back to 0 workers there
  if sys.platform.startswith('win'):
    num_workers = 0
  else:
    num_workers = 4

  train_iter = torch.utils.data.DataLoader(mnist_train, batch_size=batch_size, shuffle=True, num_workers=num_workers)
  val_iter = torch.utils.data.DataLoader(mnist_val, batch_size=batch_size, shuffle=False, num_workers=num_workers)
  test_iter = torch.utils.data.DataLoader(mnist_test, batch_size=batch_size, shuffle=False, num_workers=num_workers)

  return train_iter, val_iter, test_iter
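
A minimal usage sketch, not from the original post: the batch size and normalization statistics here are illustrative (the latter are the commonly quoted Fashion-MNIST mean/std), and the Cutout class defined at the end of the post must already be in scope because use_normalize=True adds it to the training transforms.

batch_size = 256
train_iter, val_iter, test_iter = load_data_fashion_mnist(
    batch_size, root='../F_MNIST', use_normalize=True, mean=0.286, std=0.353)

for X, y in train_iter:
    print(X.shape, y.shape)  # torch.Size([256, 1, 28, 28]) torch.Size([256])
    break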

A data augmentation method worth noting: Cutout
GitHub repo: https://github.com/uoguelph-mlrg/Cutout
The idea is simple: randomly occlude part of each training image. This encourages the network to base its decisions on a wider set of secondary features instead of relying on a few dominant ones.

The method has two hyperparameters, n_holes and length, which are the number of masked patches and the side length of each square patch. We define a Cutout class and implement __call__ so the object is callable and can be dropped into a transforms pipeline:

import numpy as np
import torch


class Cutout(object):
    """Randomly mask out one or more patches from an image.
    Args:
        n_holes (int): Number of patches to cut out of each image.
        length (int): The length (in pixels) of each square patch.
    """
    def __init__(self, n_holes, length):
        self.n_holes = n_holes
        self.length = length

    def __call__(self, img):
        """
        Args:
            img (Tensor): Tensor image of size (C, H, W).
        Returns:
            Tensor: Image with n_holes of dimension length x length cut out of it.
        """
        h = img.size(1)
        w = img.size(2)

        mask = np.ones((h, w), np.float32)

        for n in range(self.n_holes):
            # (x, y) is the center of the square patch
            y = np.random.randint(h)
            x = np.random.randint(w)

            y1 = np.clip(y - self.length // 2, 0, h)
            y2 = np.clip(y + self.length // 2, 0, h)
            x1 = np.clip(x - self.length // 2, 0, w)
            x2 = np.clip(x + self.length // 2, 0, w)

            mask[y1: y2, x1: x2] = 0.

        mask = torch.from_numpy(mask)
        mask = mask.expand_as(img)
        img = img * mask

        return img
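
To see what the transform does, here is a quick sketch (assuming the Cutout class above): apply it to an all-ones tensor shaped like a Fashion-MNIST image and count the zeroed pixels; length=8 is used here only to make the masked square clearly visible.

img = torch.ones(1, 28, 28)           # dummy single-channel image
cutout = Cutout(n_holes=1, length=8)
masked = cutout(img)
print((masked == 0).sum().item())     # zeroed pixels: at most 8 * 8 = 64, fewer if the patch is clipped at the border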
