Evolving Image Compositions for
Feature Representation Learning

In this work, we propose PatchMix, a data augmentation method that creates new samples by composing patches from pairs of images in a grid-like pattern.
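The grid-based composition can be sketched as follows. This is a minimal NumPy illustration of the idea, assuming each grid cell is taken from the second image with some probability; the function name `patchmix`, the grid size, and the cell-selection rule are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

def patchmix(img_a, img_b, grid=4, ratio=0.5, rng=None):
    """Compose a new image from grid cells of two images (illustrative sketch).

    img_a, img_b: arrays of shape (H, W, C) with H and W divisible by `grid`.
    Each of the grid x grid cells is taken from img_b with probability `ratio`.
    Returns the mixed image and `lam`, the fraction of pixels from img_b,
    which can be used to mix the labels in a Mixup-style loss.
    """
    rng = rng or np.random.default_rng()
    h, w = img_a.shape[:2]
    ph, pw = h // grid, w // grid
    mixed = img_a.copy()
    mask = rng.random((grid, grid)) < ratio  # which cells come from img_b
    for i in range(grid):
        for j in range(grid):
            if mask[i, j]:
                mixed[i * ph:(i + 1) * ph, j * pw:(j + 1) * pw] = \
                    img_b[i * ph:(i + 1) * ph, j * pw:(j + 1) * pw]
    lam = mask.mean()  # fraction of patches taken from img_b
    return mixed, lam
```

Because the cells tile the image exactly, `lam` equals the pixel fraction drawn from the second image, so it can weight the two labels directly in the training loss.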

[Figure: PatchMix model overview]

We also explore strategies for finding better image compositions, and propose a guided strategy that jointly discovers optimal grid-like patterns and image pairings via evolutionary search, which we call Guided PatchMix.
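The evolutionary search can be illustrated with a toy truncation-selection loop over binary grid masks. This is a hedged sketch only: the `fitness` callback stands in for whatever signal guides the real search (e.g. validation performance), and the mutation scheme and population sizes here are illustrative assumptions, not the actual Guided PatchMix procedure:

```python
import random

def evolve_masks(fitness, grid=4, pop_size=8, generations=30, rng=None):
    """Toy evolutionary search over grid-like mixing patterns (binary masks).

    `fitness` scores a flat binary mask of length grid*grid. Keeps the top
    half of the population each generation and fills the rest with bit-flip
    mutations of the survivors, then returns the best mask found.
    """
    rng = rng or random.Random(0)
    n = grid * grid
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]  # truncation selection
        children = []
        for parent in survivors:
            child = parent[:]
            child[rng.randrange(n)] ^= 1  # single bit-flip mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)
```

Because survivors are carried over unchanged, the best score never decreases across generations; the real search additionally has to co-evolve the class pairings, which this sketch omits.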

Our method outperforms a baseline model on CIFAR-10 (+1.91), CIFAR-100 (+5.31), Tiny ImageNet (+3.52), and ImageNet (+1.16), and also shows gains over previous approaches when evaluating the transfer learning capacity and robustness of a model trained with Guided PatchMix configuration discoveries.

Training Workflow

[Figure: Guided PatchMix training workflow]

Genetic Search Findings

The figure below shows the evolution of class-pair selections
on CIFAR-100 over 250 generations.

As the search systematically discards some class combinations,
interesting choices emerge.

For instance, the model initially selects the pairs (plain, seal),
(chimpanzee, mushroom), but those are discarded after 50 generations
and more informative combinations are selected. After 100 generations,
the model has discovered many of the class combinations that it will use,
such as (chimpanzee, raccoon), (road, tractor), and (seal, shark).

Want to learn more?
Check our paper!
Explore more findings in our Supplemental Material!

@InProceedings{PatchMix_2021_BMVC,
  title     = {Evolving Image Compositions for Feature Representation Learning},
  author    = {Paola Cascante-Bonilla and Arshdeep Sekhon and Yanjun Qi and Vicente Ordonez},
  booktitle = {British Machine Vision Conference (BMVC)},
  month     = {November},
  year      = {2021}
}