Visual Dynamics: Stochastic Future Generation via Layered Cross Convolutional Networks
IEEE TPAMI 2019; NeurIPS 2016
Tianfan Xue*1 Jiajun Wu*1 Katherine L. Bouman1 William T. Freeman1,2
1MIT Computer Science and Artificial Intelligence Laboratory     2Google Research
* indicates equal contributions


The precise motion implied by a single snapshot in time is often ambiguous. For instance, is the girl's leg in (a) moving up or down? We propose a probabilistic, content-aware motion prediction model (b) that learns the conditional distribution of future frames. Using this model, we can predict and synthesize various future frames (c), all consistent with the observed input image (a).

Video Demo

If you cannot access YouTube, please download our video here in 1080p or 720p.
Abstract

We study the problem of synthesizing a number of likely future frames from a single input image. In contrast to traditional methods that have tackled this problem in a deterministic or non-parametric way, we propose to model future frames in a probabilistic manner. Our probabilistic model makes it possible to sample and synthesize many possible future frames from a single input image. To synthesize realistic movement of objects, we propose a novel network structure, namely a Cross Convolutional Network; this network encodes image and motion information as feature maps and convolutional kernels, respectively. In experiments, our model performs well on synthetic data, such as 2D shapes and animated game sprites, as well as on real-world video frames. We present analyses of the learned network representations, showing that the network implicitly learns a compact encoding of object appearance and motion. We also demonstrate a few of its applications, including visual analogy-making and video extrapolation.
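To make the core idea concrete, the snippet below is a minimal PyTorch sketch (not the released code) of the cross convolution operation: the image encoder's feature maps for each sample are convolved with kernels predicted from that sample's motion representation. The function name, tensor shapes, and the grouped-convolution trick are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def cross_convolution(feature_maps, kernels):
    """Convolve each sample's image feature maps with its own motion kernels.

    feature_maps: (B, C, H, W) feature maps from the image encoder.
    kernels:      (B, C, k, k) per-channel kernels from the kernel decoder.
    Returns:      (B, C, H, W) motion-transformed feature maps.
    """
    B, C, H, W = feature_maps.shape
    k = kernels.shape[-1]
    # Fold the batch into the channel dimension and use a grouped convolution,
    # so each (sample, channel) pair is filtered by its own kernel.
    x = feature_maps.reshape(1, B * C, H, W)
    w = kernels.reshape(B * C, 1, k, k)
    out = F.conv2d(x, w, padding=k // 2, groups=B * C)
    return out.reshape(B, C, H, W)

# Illustrative shapes only: 32 feature channels, 5x5 kernels.
feats = torch.randn(4, 32, 64, 64)
kers = torch.randn(4, 32, 5, 5)
print(cross_convolution(feats, kers).shape)  # torch.Size([4, 32, 64, 64])
```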

@inproceedings{visualdynamics16,
  author    = {Xue, Tianfan and Wu, Jiajun and Bouman, Katherine L and Freeman, William T},
  title     = {Visual Dynamics: Probabilistic Future Frame Synthesis via Cross Convolutional Networks},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2016}
}

@article{visualdynamics,
  author  = {Xue, Tianfan and Wu, Jiajun and Bouman, Katherine L and Freeman, William T},
  title   = {Visual Dynamics: Stochastic Future Generation via Layered Cross Convolutional Networks},
  journal = {IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)},
  volume  = {41},
  number  = {9},
  pages   = {2236--2250},
  year    = {2019}
}


Network Architecture

Our network consists of five components: (a) a motion encoder, (b) a kernel decoder, (c) an image encoder, (d) a cross convolution layer, and (e) a motion decoder. The image encoder takes images at four scales as input; for simplicity, only two scales are shown in the figure.
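For readers who want the data flow at a glance, here is a minimal PyTorch sketch of how these components fit together at test time, where the motion encoder (a) is replaced by sampling a motion code from the prior. The class name, layer sizes, number of scales, and the residual (difference-image) output are illustrative assumptions, not the released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossConvSampler(nn.Module):
    """Test-time sampling sketch; layer sizes and names are hypothetical."""

    def __init__(self, z_dim=3200, channels=32, kernel_size=5, scales=(1.0, 0.5)):
        super().__init__()
        self.z_dim, self.channels, self.k, self.scales = z_dim, channels, kernel_size, scales
        # (c) image encoder, shared across scales
        self.image_encoder = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU())
        # (b) kernel decoder: latent motion code -> one kernel set per scale
        self.kernel_decoder = nn.Linear(z_dim, len(scales) * channels * kernel_size ** 2)
        # (e) motion decoder: transformed features -> difference image
        self.motion_decoder = nn.Conv2d(channels * len(scales), 3, 3, padding=1)

    def forward(self, image, z=None):
        B, _, H, W = image.shape
        if z is None:  # at test time, sample the motion code from the prior
            z = torch.randn(B, self.z_dim, device=image.device)
        kernels = self.kernel_decoder(z).view(B, len(self.scales), self.channels, self.k, self.k)
        outputs = []
        for i, s in enumerate(self.scales):
            scaled = F.interpolate(image, scale_factor=s, mode='bilinear', align_corners=False)
            feats = self.image_encoder(scaled)                      # (c)
            # (d) cross convolution via grouped conv: per-sample, per-channel kernels
            x = feats.reshape(1, B * self.channels, *feats.shape[-2:])
            w = kernels[:, i].reshape(B * self.channels, 1, self.k, self.k)
            out = F.conv2d(x, w, padding=self.k // 2, groups=B * self.channels)
            out = out.reshape(B, self.channels, *feats.shape[-2:])
            outputs.append(F.interpolate(out, size=(H, W), mode='bilinear', align_corners=False))
        diff = self.motion_decoder(torch.cat(outputs, dim=1))       # (e)
        return image + diff  # predicted future frame

model = CrossConvSampler()
frame = torch.randn(1, 3, 64, 64)
print(model(frame).shape)  # torch.Size([1, 3, 64, 64])
```

Calling the model repeatedly on the same input with different sampled codes yields the multiple future frames shown in the results below.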
Talk at NeurIPS 2016

Slides: PPT PDF
Results

Sampling future frames from one input frame

[Image grids: for each of the Shapes, Sprites, Exercise, and PennAction datasets, an input frame followed by future frames sampled from our model (three samples per input for Shapes, Sprites, and Exercise; one sample for PennAction).]

Visualizing latent motion representations

[Image grid: frames synthesized by activating individual dimensions of the latent motion representation: dimension 778 (moving up), dimension 2958 (moving down), and dimension 2971 (legs to the right).]
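As one way such a visualization could be produced, the sketch below varies a single coordinate of the motion code and decodes each variant. Here `model(image, z)` is a hypothetical interface standing in for the sampler sketched above, and the dimension indices are the ones from the figure.

```python
import torch

def visualize_latent_dimension(model, image, dim, values=(-3.0, 0.0, 3.0), z_dim=3200):
    """Decode futures while varying one coordinate of the motion code.

    model(image, z) is assumed to map a (1, 3, H, W) image and a (1, z_dim)
    motion code to a predicted future frame (hypothetical interface).
    """
    frames = []
    for v in values:
        z = torch.zeros(1, z_dim)
        z[0, dim] = v
        with torch.no_grad():
            frames.append(model(image, z))
    return frames

# e.g. frames = visualize_latent_dimension(model, frame, dim=778)
# should show the subject moving up more strongly as the value increases.
```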
Acknowledgement

The authors thank Yining Wang for helpful discussions. This work was supported by NSF Robust Intelligence 1212849, NSF Big Data 1447476, ONR MURI 6923196, Adobe, and Shell Research. The authors also thank NVIDIA for GPU donations.