CVPR 2021 Workshop on Super-Resolution

Call for CVPR 2021 Workshop Papers

Super-resolution (SR) models are usually trained using pairs of high- and low-resolution images. However, infinitely many high-resolution images can be downsampled to the same low-resolution image, so the problem is ill-posed and cannot be inverted with a deterministic mapping. Instead, this CVPR 2021 NTIRE challenge frames SR as learning a stochastic mapping, capable of sampling from the space of plausible high-resolution images given a low-resolution input.
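The ill-posedness is easy to demonstrate numerically: two high-resolution images that differ only in details removed by the downsampling kernel produce exactly the same low-resolution image. The sketch below uses average pooling as a stand-in for the challenge's actual downsampling operator, which is an assumption for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
scale = 4

# Start from an arbitrary 8x8 low-resolution image.
lr = rng.random((8, 8))

def upsample_with_detail(lr, scale, rng):
    """Upsample by pixel replication, then add noise that is zero-mean
    within each scale x scale block, so block means (and hence the
    average-pool downsampling) are unchanged."""
    hr = np.kron(lr, np.ones((scale, scale)))
    noise = rng.normal(0.0, 0.1, hr.shape)
    h, w = lr.shape
    blocks = noise.reshape(h, scale, w, scale)
    blocks -= blocks.mean(axis=(1, 3), keepdims=True)  # zero-mean per block
    return hr + blocks.reshape(hr.shape)

def downsample(hr, scale):
    h, w = hr.shape
    return hr.reshape(h // scale, scale, w // scale, scale).mean(axis=(1, 3))

hr_a = upsample_with_detail(lr, scale, rng)
hr_b = upsample_with_detail(lr, scale, rng)

# Two visibly different HR images...
assert not np.allclose(hr_a, hr_b)
# ...both downsample to exactly the same LR image.
assert np.allclose(downsample(hr_a, scale), lr)
assert np.allclose(downsample(hr_b, scale), lr)
```

Since the downsampling destroys the high-frequency content, no deterministic mapping can recover which of `hr_a` or `hr_b` was the true original.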

Super-Resolution is ill-posed

Please help us spread the word.

Why participate in a challenge?

A major advantage of taking part in a challenge is that your results are verified by someone else and compared with other methods. This gives your method more credibility. Moreover, the challenge gives more visibility to your work. For example, ESRGAN, the winner of the PIRM 2018 Super-Resolution Challenge, now has more than 726 citations and 3,283 stars on GitHub (Jan. 2021).

CVPR 2021 Super-Resolution Workshop Details

We organize this challenge to stimulate research in the emerging area of learning one-to-many SR mappings that can sample from the space of plausible solutions. The task is therefore to develop a super-resolution method that:

  1. Achieves the highest possible photo-realism, as perceived by humans, in each individual SR prediction.
  2. Is capable of sampling an arbitrary number of SR images capturing meaningful diversity, corresponding to the uncertainty induced by the ill-posed nature of the SR problem together with image priors.
  3. Produces SR predictions that are each consistent with the input low-resolution image.
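In code, the required one-to-many behavior amounts to a conditional sampler: the model maps the low-resolution image plus a random latent to one SR prediction, and drawing new latents yields new samples. The sketch below is purely illustrative (the function name, the nearest-neighbor placeholder, and the linear latent decoding are all assumptions); a real entry would use a learned network such as a conditional normalizing flow or GAN.

```python
import numpy as np

rng = np.random.default_rng(0)
scale = 4

def sample_sr(lr, z, scale=4):
    """Hypothetical stochastic SR model: a deterministic base prediction
    (nearest-neighbor upsampling as a placeholder) plus a latent-driven
    detail component. A real method would decode z with a learned network."""
    base = np.kron(lr, np.ones((scale, scale)))
    detail = 0.05 * z  # z already has HR shape in this toy sketch
    return base + detail

lr = rng.random((8, 8))
samples = [
    sample_sr(lr, rng.normal(size=(8 * scale, 8 * scale)), scale)
    for _ in range(10)
]

# Different latents give different plausible HR predictions (criterion 2),
# while all samples share the same base consistent with the LR input (criterion 3).
assert len(samples) == 10
assert not np.allclose(samples[0], samples[1])
```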

The challenge contains two tracks, targeting 4X and 8X super-resolution respectively. You can download the training and validation data in the table below. At a later stage, the low-resolution images of the test set will be released.

How the Challenge is Evaluated

A method is evaluated by first predicting a set of 10 randomly sampled SR images for each low-resolution image in the dataset. Evaluation metrics corresponding to the three criteria above are then computed on this set. The participating methods are ranked according to each metric, and these ranks are combined into a final score. The three evaluation metrics are described next.

git clone --recursive
python3 OutName path/to/Ground-Truth path/to/Super-Resolution n_samples scale_factor
# n_samples = 10
# scale_factor = 4 for 4X and 8 for 8X
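The rank-based combination described above can be sketched as follows. The metric names, scores, and the rank-sum aggregation here are illustrative assumptions; the organizers' exact metrics and weighting may differ.

```python
# Hypothetical per-method scores (higher is better) for the three criteria.
metrics = {
    "method_A": {"photo_realism": 0.71, "diversity": 0.40, "lr_consistency": 0.95},
    "method_B": {"photo_realism": 0.65, "diversity": 0.55, "lr_consistency": 0.97},
    "method_C": {"photo_realism": 0.80, "diversity": 0.35, "lr_consistency": 0.90},
}

def final_ranking(metrics):
    """Rank methods per metric (rank 1 = best), then order methods by the
    sum of their per-metric ranks (lower sum = better final placement)."""
    names = list(metrics)
    metric_keys = next(iter(metrics.values())).keys()
    rank_sums = {n: 0 for n in names}
    for key in metric_keys:
        ordered = sorted(names, key=lambda n: metrics[n][key], reverse=True)
        for rank, n in enumerate(ordered, start=1):
            rank_sums[n] += rank
    return sorted(names, key=lambda n: rank_sums[n])

print(final_ranking(metrics))  # e.g. the method with the lowest rank sum comes first
```

With these toy numbers, method B wins overall despite never being first in photo-realism, because it ranks well on all three criteria.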
