This paper performs pixel-wise segmentation of an object of interest specified by a natural-language sentence. The model is composed of three main components: a textual encoder, a video encoder, and a decoder.
- The textual encoder is a pre-trained word2vec embedding followed by a 1D CNN (see the sketch after this list).
- The video encoder is a 3D CNN that produces a visual representation of the video (it can be combined with optical flow to capture motion information).
- Decoder. Given a sentence representation $T$, a separate dynamic filter $f^r = \tanh(W^r_f T + b^r_f)$ is generated
to match each feature map in the video frame decoder and is combined with the visual features as $S^r_t = f^r * V^r_t$, for each resolution $r$ and timestep $t$. The decoder itself is a sequence of transposed convolution layers that produces a response map of the same size as the input video frame (a sketch follows this list).
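
A minimal PyTorch sketch of the textual encoder as described (word2vec embeddings followed by a 1D CNN). The embedding dimension, kernel size, and the max-pooling over words used to obtain a single sentence vector $T$ are assumptions for illustration, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class TextEncoder(nn.Module):
    """Embedding table (loaded with pre-trained word2vec vectors in practice)
    followed by a 1D CNN over the word sequence."""
    def __init__(self, vocab_size, embed_dim=300, text_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)  # would hold word2vec weights
        self.conv = nn.Conv1d(embed_dim, text_dim, kernel_size=3, padding=1)

    def forward(self, tokens):                  # tokens: (B, L) word indices
        x = self.embed(tokens).transpose(1, 2)  # (B, embed_dim, L)
        x = torch.relu(self.conv(x))            # (B, text_dim, L)
        return x.max(dim=2).values              # sentence representation T: (B, text_dim)
```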
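And a sketch of the decoder side: a dynamic filter $f^r = \tanh(W^r_f T + b^r_f)$ is generated per resolution and applied to the visual features $V^r_t$ (assumed here to be pre-extracted by the 3D CNN encoder) as a $1 \times 1$ convolution, after which transposed convolutions upsample the response map. The channel counts, number of resolutions, and the 2x scale factor between successive feature maps are assumptions for illustration.

```python
import torch
import torch.nn as nn

class DynamicFilterDecoder(nn.Module):
    """Per-resolution dynamic filters generated from the sentence representation T,
    fused with visual features and upsampled by transposed convolutions."""
    def __init__(self, text_dim=256, vis_channels=(256, 128, 64)):
        super().__init__()
        # f^r = tanh(W_f^r T + b_f^r): one linear generator per resolution r
        self.filter_gen = nn.ModuleList([nn.Linear(text_dim, c) for c in vis_channels])
        # Transposed convolutions double the spatial size after each fusion step
        self.deconv = nn.ModuleList([
            nn.ConvTranspose2d(1, 1, kernel_size=4, stride=2, padding=1)
            for _ in vis_channels
        ])

    def forward(self, T, vis_feats):
        # vis_feats: list of V^r_t from coarse to fine, each (B, C_r, H_r, W_r);
        # successive maps are assumed to differ by a factor of 2 in resolution
        out = None
        for gen, deconv, V in zip(self.filter_gen, self.deconv, vis_feats):
            f = torch.tanh(gen(T))                               # dynamic filter f^r: (B, C_r)
            S = torch.einsum('bc,bchw->bhw', f, V).unsqueeze(1)  # S^r_t = f^r * V^r_t: (B, 1, H_r, W_r)
            out = S if out is None else S + out                  # merge with upsampled coarser response
            out = deconv(out)                                    # upsample by 2x
        return out                                               # response map near input-frame size

# Example with dummy data (shapes are illustrative only):
# T = TextEncoder(vocab_size=10000)(torch.randint(0, 10000, (2, 8)))
# vis = [torch.randn(2, 256, 8, 8), torch.randn(2, 128, 16, 16), torch.randn(2, 64, 32, 32)]
# mask_logits = DynamicFilterDecoder()(T, vis)   # -> (2, 1, 64, 64)
```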