Actor and Action Video Segmentation from a Sentence
Kirill Gavrilyuk and Amir Ghodrati and Zhenyang Li and Cees G. M. Snoek
arXiv e-Print archive - 2018 via Local arXiv
Keywords: cs.CV

Summary by Oleksandr Bailo 6 years ago

This paper performs pixel-wise segmentation of an object of interest that is specified by a natural-language sentence. The model is composed of three main components: a textual encoder, a video encoder, and a decoder.

  • Textual encoder: a pre-trained word2vec model followed by a 1D CNN, producing a sentence representation.
  • Video encoder: a 3D CNN that produces a visual representation of the video (it can be combined with optical flow to capture motion information).
  • Decoder: given the sentence representation $T$, a separate filter $f^r = \tanh(W^r_f T + b^r_f)$ is generated for each resolution $r$ of the decoder's feature maps and convolved with the visual features as $S^r_t = f^r * V^r_t$, for each resolution $r$ at timestep $t$. The decoder itself is a sequence of transposed-convolution layers, so the final response map has the same spatial size as the input video frame.
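The dynamic-filter step above can be sketched in a few lines. This is a minimal NumPy illustration, not the authors' implementation: the dimensions, weight names (`W_f`, `b_f`), and random inputs are all assumed for the example, and the sentence-conditioned filter is modeled as a single $1\times1$ filter (a dot product over channels) at one resolution and one timestep.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_dynamic_filter(T, W_f, b_f):
    # f^r = tanh(W^r_f T + b^r_f): a filter generated from the
    # sentence representation T for one resolution r
    return np.tanh(W_f @ T + b_f)

def segment_response(f, V):
    # S^r_t = f^r * V^r_t: a 1x1 convolution of the sentence-conditioned
    # filter with the visual feature map = dot product over the channel axis
    return np.tensordot(f, V, axes=([0], [0]))

d_text, c_vis, H, W = 300, 64, 8, 8            # assumed toy dimensions
T = rng.standard_normal(d_text)                 # sentence embedding (word2vec + 1D CNN output)
W_f = rng.standard_normal((c_vis, d_text)) * 0.01
b_f = np.zeros(c_vis)
V = rng.standard_normal((c_vis, H, W))          # visual features for one frame, one resolution

f = make_dynamic_filter(T, W_f, b_f)            # shape (c_vis,)
S = segment_response(f, V)                      # shape (H, W) response map
print(S.shape)
```

Because the filter weights are a function of the sentence, the same video features yield a different response map for each query sentence; upsampling these maps through the transposed convolutions gives the full-resolution segmentation.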