Usually, super-resolution (SR) is trained using pairs of high- and low-resolution images. However, infinitely many high-resolution images can be downsampled to the same low-resolution image. The problem is therefore ill-posed and cannot be inverted with a deterministic mapping. Instead, this CVPR 2021 NTIRE challenge frames the SR problem as learning a stochastic mapping, capable of sampling from the space of plausible high-resolution images given a low-resolution image.
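The ill-posedness is easy to demonstrate concretely. The following is a minimal sketch (not taken from the challenge code) in which two different "high-resolution" images downsample, via simple 2×2 average pooling as a stand-in degradation model, to exactly the same "low-resolution" image:

```python
import numpy as np

def downsample(hr):
    """2x2 average pooling: a simple stand-in for the SR degradation model."""
    h, w = hr.shape
    return hr.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# Two distinct high-resolution patches ...
hr_a = np.array([[0.0, 1.0],
                 [1.0, 0.0]])
hr_b = np.array([[1.0, 0.0],
                 [0.0, 1.0]])

# ... that map to the identical low-resolution patch.
lr_a = downsample(hr_a)
lr_b = downsample(hr_b)

print(np.array_equal(lr_a, lr_b))  # True: the mapping is many-to-one
```

Since distinct high-resolution inputs collapse to one low-resolution output, no deterministic function can recover "the" high-resolution image; the best one can do is model a distribution over plausible reconstructions.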
Main Paper: PDF | 8 Pages + References | 30 MB | [Source]
Supplementary: PDF or ZIP | 100 MB
Place captions below figures/tables and end them with a period.
The deadline for the main paper is the 16th of November.
However, you must already register your paper, with the title, abstract, authors, and subject areas, by the 9th of November.
The benefits of Normalizing Flows. In this article, we show how we outperformed GANs with Normalizing Flows, using super-resolution as the application. We describe SRFlow, a super-resolution method that outperforms state-of-the-art GAN approaches, and which we explain in detail in our ECCV 2020 paper.
Intuition for Conditional Normalizing Flows. We train a Normalizing Flow model to transform an image into a Gaussian latent space. During inference, we sample a random Gaussian vector to generate an image. This works because the mapping is bijective and therefore outputs an image for every Gaussian vector. …
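The bijection idea can be sketched in a few lines. This is a toy illustration, not SRFlow itself: a single invertible affine map stands in for the stack of learned invertible layers, and the parameters `mu` and `sigma` are hypothetical fixed values rather than learned ones.

```python
import numpy as np

mu, sigma = 0.5, 0.1  # hypothetical parameters; a real flow learns many layers

def to_latent(x):
    """Forward direction: image space -> Gaussian latent space."""
    return (x - mu) / sigma

def to_image(z):
    """Inverse direction: Gaussian latent space -> image space."""
    return mu + sigma * z

# Bijectivity: mapping an "image" to the latent space and back is exact.
x = np.array([0.42, 0.61])
assert np.allclose(to_image(to_latent(x)), x)

# Generation: sample a random Gaussian vector and map it to an "image".
rng = np.random.default_rng(0)
sample = to_image(rng.standard_normal(3))
print(sample)
```

Because every latent vector has exactly one image under the inverse map, sampling from the Gaussian prior always yields a valid output, which is the property the intuition above relies on.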