Learning Pixel Trajectories with Multiscale Contrastive Random Walks
Zhangxing Bian, Allan Jabri, Alexei Efros, Andrew Owens
University of Michigan · UC Berkeley · Johns Hopkins University
[Paper]
[GitHub]
* Accepted to CVPR 2022, New Orleans, USA.

Abstract

A range of video modeling tasks, from optical flow to multiple object tracking, share the same fundamental challenge: establishing space-time correspondence. Yet, the approaches that dominate each space differ. We take a step towards bridging this gap by extending the recent contrastive random walk formulation to much denser, pixel-level space-time graphs. The main contribution is introducing hierarchy into the search problem by computing the transition matrix between two frames in a coarse-to-fine manner, forming a multiscale contrastive random walk when extended in time. This establishes a unified technique for self-supervised learning of optical flow, keypoint tracking, and video object segmentation. Experiments demonstrate that, for each of these tasks, the unified model achieves performance competitive with strong self-supervised approaches specific to that task.
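To make the objective in the abstract concrete, here is a minimal, single-scale sketch of a contrastive random walk in PyTorch. All names, shapes, and the temperature value are illustrative assumptions, not the authors' implementation; in the paper, each transition matrix is additionally computed coarse-to-fine over a dense pixel-level graph rather than densely over all node pairs as done here.

import torch
import torch.nn.functional as F

def transition_matrix(feat_a, feat_b, tau=0.07):
    # Row-stochastic transition matrix between two frames' node
    # embeddings. feat_a, feat_b: (N, D) node features (e.g., pixels);
    # tau is an assumed softmax temperature.
    feat_a = F.normalize(feat_a, dim=-1)
    feat_b = F.normalize(feat_b, dim=-1)
    sim = feat_a @ feat_b.t() / tau      # (N, N) pairwise affinities
    return sim.softmax(dim=-1)           # random-walk probabilities

def cycle_loss(frame_feats):
    # Palindrome cycle-consistency: walk forward in time and back.
    # Each node should return to itself, so the supervision target
    # is the identity mapping.
    seq = frame_feats + frame_feats[-2::-1]   # t0 .. tK .. t0
    walk = None
    for a, b in zip(seq[:-1], seq[1:]):
        A = transition_matrix(a, b)
        walk = A if walk is None else walk @ A
    targets = torch.arange(walk.shape[0])
    return F.nll_loss(walk.clamp(min=1e-8).log(), targets)

# Toy usage: 3 frames, 64 nodes each, 128-dim embeddings.
feats = [torch.randn(64, 128, requires_grad=True) for _ in range(3)]
loss = cycle_loss(feats)
loss.backward()
print(float(loss))

In the multiscale version described above, the dense (N, N) affinity in transition_matrix would instead be built hierarchically: coarse-grid matches first, then finer matches restricted to neighborhoods around them, which keeps the pixel-level search tractable.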



Paper and Supplementary Material

Z. Bian, A. Jabri, A. Efros, A. Owens.
Learning Pixel Trajectories with Multiscale Contrastive Random Walks.

(hosted on arXiv)


[Bibtex]
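For reference, a BibTeX entry assembled from the information on this page (the citation key and booktitle formatting are assumptions):

@inproceedings{bian2022learning,
  title     = {Learning Pixel Trajectories with Multiscale Contrastive Random Walks},
  author    = {Bian, Zhangxing and Jabri, Allan and Efros, Alexei and Owens, Andrew},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2022}
}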