Peeking into the Future:
Predicting Future Person Activities and Locations in Videos
Junwei Liang1,2, Lu Jiang2, Juan Carlos Niebles3, Alexander Hauptmann1, Li Fei-Fei3
1Carnegie Mellon University, 2Google AI, 3Stanford University
In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019.

Demo Video
[Presented at CVPR 2019. Great discussions!]
Deciphering human behavior in videos to predict future paths (trajectories) and activities is important in many applications. Motivated by this idea, this paper studies predicting a pedestrian's future path jointly with their future activities. We propose an end-to-end, multi-task learning system that utilizes rich visual features about human behavior and interactions with the surroundings. To facilitate training, the network is learned with two auxiliary tasks: predicting the future activity and the location in which the activity will happen. Experimental results demonstrate state-of-the-art performance on two public benchmarks for future trajectory prediction. Moreover, our method produces meaningful future activity predictions in addition to the path. The results provide the first empirical evidence that jointly modeling paths and activities benefits future path prediction.
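To make the multi-task setup concrete, below is a minimal sketch of the kind of joint training objective the abstract describes: a trajectory regression loss combined with auxiliary classification losses for the future activity and its location. All function names, the plain-Python losses, and the weights `w_act`/`w_loc` are illustrative assumptions for exposition, not the paper's actual implementation.

```python
# Illustrative sketch of a joint path + activity objective (an assumption,
# not the authors' code): trajectory regression plus two auxiliary
# classification losses, summed with hand-picked weights.
import math

def l2_trajectory_loss(pred, target):
    """Mean squared displacement between predicted and true (x, y) points."""
    return sum((px - tx) ** 2 + (py - ty) ** 2
               for (px, py), (tx, ty) in zip(pred, target)) / len(pred)

def cross_entropy(probs, label):
    """Negative log-likelihood of the true class."""
    return -math.log(max(probs[label], 1e-12))

def total_loss(traj_pred, traj_gt, act_probs, act_label,
               loc_probs, loc_label, w_act=1.0, w_loc=1.0):
    """Joint objective: path prediction plus the two auxiliary tasks
    (future activity label and activity location)."""
    return (l2_trajectory_loss(traj_pred, traj_gt)
            + w_act * cross_entropy(act_probs, act_label)
            + w_loc * cross_entropy(loc_probs, loc_label))
```

The auxiliary terms act as extra supervision signals during training, encouraging the shared features to encode where the person is going and what they intend to do there.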
Figure: Our goal is to jointly predict a person's future path and activity. The green and yellow lines show two possible future trajectories, and the green and yellow boxes show two possible activities. Depending on the future activity, the person (top right) may take different paths, e.g., the yellow path for "loading" and the green path for "object transfer".
Release Log