NIPS 2016 Visual Dynamics: Probabilistic Future Frame Synthesis via Cross Convolutional Networks
Tianfan Xue*1 Jiajun Wu*1 Katherine L. Bouman1 William T. Freeman1,2
1MIT Computer Science and Artificial Intelligence Laboratory     2Google Research
* indicates equal contributions


The precise motion corresponding to a snapshot image in time is often ambiguous. For instance, is the girl's leg in (a) moving up or down? We propose a probabilistic, content-aware motion prediction model (b) that learns the conditional distribution of future frames. Using this model we are able to predict and synthesize various future frames (c) that are all consistent with the observed input image (a).

Video Demo

If you cannot access YouTube, please download our video here in 1080p or 720p.
Abstract

We study the problem of synthesizing a number of likely future frames from a single input image. In contrast to traditional methods, which have tackled this problem in a deterministic or non-parametric way, we propose a novel approach that models future frames in a probabilistic manner. Our probabilistic model makes it possible for us to sample and synthesize many possible future frames from a single input image. Future frame synthesis is challenging, as it involves low- and high-level image and motion understanding. We propose a novel network structure, namely a Cross Convolutional Network to aid in synthesizing future frames; this network structure encodes image and motion information as feature maps and convolutional kernels, respectively. In experiments, our model performs well on synthetic data, such as 2D shapes and animated game sprites, as well as on real-world videos. We also show that our model can be applied to tasks such as visual analogy-making, and present an analysis of the learned network representations.
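To make the cross convolution idea concrete, here is a minimal sketch of the operation: the image is encoded as feature maps, while the motion is decoded into convolution kernels that are applied to those feature maps. This is not the released implementation; the tensor sizes, variable names, and use of PyTorch are illustrative assumptions.

```python
# Minimal sketch of a cross convolution (illustrative, not the authors' code).
import torch
import torch.nn.functional as F

batch, channels, height, width, ksize = 2, 32, 64, 64, 5

image_features = torch.randn(batch, channels, height, width)   # from the image encoder
motion_kernels = torch.randn(batch, channels, ksize, ksize)    # from the kernel decoder

# Apply each sample's kernels to its own feature maps, one kernel per channel.
# A grouped convolution over a reshaped batch implements this "cross convolution".
x = image_features.reshape(1, batch * channels, height, width)
w = motion_kernels.reshape(batch * channels, 1, ksize, ksize)
out = F.conv2d(x, w, padding=ksize // 2, groups=batch * channels)
out = out.reshape(batch, channels, height, width)               # fed to the motion decoder
```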

@inproceedings{visualdynamics16,
  author = {Xue, Tianfan and Wu, Jiajun and Bouman, Katherine L and Freeman, William T},
  title = {Visual Dynamics: Probabilistic Future Frame Synthesis via Cross Convolutional Networks},
  booktitle = {NIPS},
  year = {2016}
}


Network Architecture

Our network consists of five components: (a) a motion encoder, (b) a kernel decoder, (c) an image encoder, (d) a cross convolution layer, and (e) a motion decoder. The image encoder takes images at four scales as input; for simplicity, only two scales are shown in the figure.
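The sketch below shows how the five components might fit together in a single-scale, simplified form. Module names, layer sizes, and the use of PyTorch are assumptions for illustration; it is not the released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VisualDynamicsSketch(nn.Module):
    """Hypothetical, single-scale sketch of the five-component pipeline."""
    def __init__(self, z_dim=3200, channels=32, ksize=5):
        super().__init__()
        self.channels, self.ksize = channels, ksize
        # (a) motion encoder: frame pair -> posterior over the latent motion z
        self.motion_encoder = nn.Sequential(
            nn.Conv2d(6, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 2 * z_dim))
        # (b) kernel decoder: z -> one convolution kernel per feature channel
        self.kernel_decoder = nn.Linear(z_dim, channels * ksize * ksize)
        # (c) image encoder: single frame -> feature maps
        self.image_encoder = nn.Conv2d(3, channels, 5, padding=2)
        # (e) motion decoder: transformed features -> predicted difference image
        self.motion_decoder = nn.Conv2d(channels, 3, 5, padding=2)

    def encode_motion(self, frame_pair):
        # Training-time path for (a): sample z by reparameterization.
        mu, logvar = self.motion_encoder(frame_pair).chunk(2, dim=1)
        return mu + torch.randn_like(mu) * (0.5 * logvar).exp()

    def forward(self, frame, z):
        feats = self.image_encoder(frame)                                   # (c)
        b, c, h, w = feats.shape
        kernels = self.kernel_decoder(z).view(b * c, 1, self.ksize, self.ksize)  # (b)
        # (d) cross convolution: each sample's kernels act on its own feature maps
        mixed = F.conv2d(feats.reshape(1, b * c, h, w), kernels,
                         padding=self.ksize // 2, groups=b * c).reshape(b, c, h, w)
        return frame + self.motion_decoder(mixed)                           # (e)

# At test time, z is sampled from the prior to synthesize a plausible next frame.
model = VisualDynamicsSketch()
frame = torch.randn(1, 3, 64, 64)
next_frame = model(frame, torch.randn(1, 3200))
```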
Talk at NIPS 2016

Slides: PPT PDF
Results

Sampling future frames from one input frame

For each dataset (Shapes, Sprites, and Exercise), we show an input frame followed by three sampled future frames (Sample 1, Sample 2, Sample 3).

Visualizing latent motion representations

Example latent dimensions: dimension 778 (moving up), dimension 2958 (moving down), and dimension 2971 (legs to the right).
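One way to probe what a single latent dimension encodes is to hold the input frame fixed, vary that coordinate of z, and compare the synthesized frames. The sketch below reuses the hypothetical VisualDynamicsSketch module from the Network Architecture section above; the dimension index and values are illustrative.

```python
# Hedged sketch: perturb one latent coordinate and observe the synthesized motion.
import torch

model = VisualDynamicsSketch()              # hypothetical module defined above
image = torch.randn(1, 3, 64, 64)           # placeholder input frame
z = torch.zeros(1, 3200)

frames = []
for value in (-3.0, 0.0, 3.0):
    z_probe = z.clone()
    z_probe[0, 778] = value                 # e.g. dimension 778 ("moving up")
    frames.append(model(image, z_probe))    # differences reveal what this unit controls
```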
Acknowledgement

The authors thank Yining Wang for helpful discussions. This work is supported by NSF Robust Intelligence 1212849, NSF Big Data 1447476, ONR MURI 6923196, Adobe, and Shell Research. The authors would also like to thank Nvidia for GPU donations.