Video Summarization by Learning Relationships between Action and Scene

Abstract

We propose a novel deep architecture for video summarization in untrimmed videos that simultaneously recognizes action and scene classes for every video segment. Our networks accomplish this through a multi-task fusion approach based on two types of attention modules that explore semantic correlations between action and scene in the videos. The proposed networks consist of feature embedding networks and attention inference networks, which stochastically leverage the inferred action and scene feature representations. Additionally, we design a new center loss function that learns feature representations by minimizing intra-class variations and maximizing inter-class variations. Our model achieves a score of 0.8409 for summarization and an accuracy of 0.7294 for action and scene recognition on the test set of the CoVieW’19 dataset, ranking 3rd.
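To make the loss design concrete, below is a minimal PyTorch sketch of a center-loss variant in the spirit of the abstract: it pulls each feature toward its class center (minimizing intra-class variation) and pushes class centers apart (maximizing inter-class variation). The paper's exact formulation is not reproduced here; the hinge margin, the class name `SeparatingCenterLoss`, and all hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn


class SeparatingCenterLoss(nn.Module):
    """Hypothetical center loss with an added inter-class separation term."""

    def __init__(self, num_classes: int, feat_dim: int, margin: float = 1.0):
        super().__init__()
        # One learnable center per class (e.g., per action or scene class).
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))
        self.margin = margin  # assumed hinge margin between class centers

    def forward(self, features: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # Intra-class term: squared distance of each feature to its own center.
        intra = ((features - self.centers[labels]) ** 2).sum(dim=1).mean()

        # Inter-class term: hinge penalty whenever two distinct centers are
        # closer than the margin, encouraging the centers to spread apart.
        dists = torch.cdist(self.centers, self.centers, p=2)
        num_classes = self.centers.size(0)
        off_diag = ~torch.eye(num_classes, dtype=torch.bool, device=dists.device)
        inter = torch.clamp(self.margin - dists[off_diag], min=0).mean()

        return intra + inter
```

In training, a term like this would typically be added, with a weighting coefficient, to the standard classification losses of the action and scene heads.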

Publication
In IEEE/CVF International Conference on Computer Vision Workshops (ICCVW)
Jungin Park
PhD, Postdoctoral Researcher

My research interests include computer vision, video understanding, multimodal learning, and vision-language models.