A REVIEW ON DEEP LEARNING TECHNIQUES FOR VIDEO PREDICTION

Abstract: The ability to predict, anticipate and reason about future outcomes is a key component of intelligent decision-making systems. In light of the success of deep learning in computer vision, deep-learning-based video prediction has emerged as a promising research direction. Defined as a self-supervised learning task, video prediction represents a suitable framework for representation learning, as it has demonstrated the potential to extract meaningful representations of the underlying patterns in natural videos. Motivated by the increasing interest in this task, we provide a review of deep learning methods for prediction in video sequences. We first define the fundamentals of video prediction, along with mandatory background concepts and the most widely used datasets. We then carefully analyze existing video prediction models, organized according to a proposed taxonomy, highlighting their contributions and their significance in the field. The summary of datasets and methods is accompanied by experimental results that facilitate a quantitative assessment of the state of the art. The paper concludes by drawing some general conclusions, identifying open research challenges and pointing out future research directions.
EXISTING SYSTEM:
- The conditional Generative Adversarial Network (cGAN) is a conditional version of the GAN in which the generator and discriminator are conditioned on some extra information, e.g. class labels, previous predictions, or multimodal data, among others. cGANs are suitable for video prediction, since the spatiotemporal coherence between the generated frames and the input sequence is guaranteed.
- With that purpose, the challenge provided a dataset that extends UCF101 [115] (trimmed videos with one action) with 2100 untrimmed videos in which one or more actions take place (with the corresponding temporal annotations) and almost 3000 relevant videos containing none of the 101 proposed actions.
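As a minimal illustration of the conditioning idea described above (a sketch, not the review's or the cGAN paper's actual code; all names and sizes here are hypothetical), the generator input in a cGAN can be formed by concatenating a latent noise vector with an encoding of the conditioning information, such as a one-hot class label:

```python
import numpy as np

rng = np.random.default_rng(0)

def condition_input(noise, context):
    """cGAN-style conditioning: concatenate latent noise z with context c."""
    return np.concatenate([noise, context], axis=-1)

z = rng.standard_normal((8, 100))       # batch of 8 latent noise vectors
labels = rng.integers(0, 10, size=8)    # class labels acting as the condition
c = np.eye(10)[labels]                  # one-hot encoding of the labels
g_in = condition_input(z, c)            # shape (8, 110): input to the generator
```

The discriminator receives the same conditioning information alongside the real or generated sample, so both networks learn distributions conditioned on the context.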
DISADVANTAGE:
- A problem was that TCGA and ARCHS4 preprocess the data (from the sequencer output) using slightly different pipelines.
- The cross-entropy loss is standard for classification problems.
- Leveraging that additional information can offer the chance to extract features useful for overcoming the lack of labeled samples and tackling the subtyping problem more efficiently.
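To make the loss mentioned above concrete, a minimal sketch of the cross-entropy loss for a single classification example (illustrative only; the variable names are hypothetical and this is not the reviewed paper's implementation):

```python
import numpy as np

def cross_entropy(probs, target):
    """Negative log-likelihood of the true class under the predicted distribution."""
    return -np.log(probs[target])

probs = np.array([0.7, 0.2, 0.1])   # predicted class probabilities
loss = cross_entropy(probs, 0)      # true class is index 0 -> loss = -log(0.7)
```

The loss is small when the model assigns high probability to the true class and grows without bound as that probability approaches zero.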
PROPOSED SYSTEM:
- We carefully analyze existing video prediction models, organized according to a proposed taxonomy, highlighting their contributions and their significance in the field.
- Most of the existing deep learning-based models in the literature are deterministic. Although the future is uncertain, a deterministic prediction would suffice for easily predictable situations.
- While great strides have been made to mitigate blurriness, most of the existing approaches still rely on distance-based loss functions.
- This has further encouraged authors to reformulate existing deterministic models in a probabilistic fashion.
ADVANTAGE:
- The most widely used evaluation protocols for video prediction rely on image similarity-based metrics such as Mean Squared Error (MSE), Structural Similarity Index Measure (SSIM) [229], and Peak Signal-to-Noise Ratio (PSNR). However, evaluating a prediction according to the mismatch between its visual appearance and the ground truth is not always reliable.
- Finally, the lack of reliable and fair evaluation models makes the qualitative evaluation of video prediction challenging and represents another potential open problem.
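The frame-level metrics named above can be sketched as follows (a simplified illustration under assumed 8-bit pixel ranges; real evaluations typically use library implementations, and SSIM, which involves local windowed statistics, is omitted here):

```python
import numpy as np

def mse(pred, gt):
    """Mean Squared Error between a predicted frame and the ground truth."""
    return float(np.mean((pred.astype(np.float64) - gt) ** 2))

def psnr(pred, gt, max_val=255.0):
    """Peak Signal-to-Noise Ratio in dB; higher means closer to the ground truth."""
    err = mse(pred, gt)
    return float('inf') if err == 0 else 10.0 * np.log10(max_val ** 2 / err)

gt = np.full((4, 4), 255.0)     # toy ground-truth frame
pred = np.full((4, 4), 250.0)   # toy prediction, off by 5 at every pixel
# mse(pred, gt) == 25.0; psnr(pred, gt) == 10*log10(255**2 / 25) ~ 34.15 dB
```

These metrics illustrate the reliability issue noted above: a blurry average of plausible futures can score well on MSE/PSNR while looking clearly wrong to a human observer.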