Temporal Cycle Consistency for Video-to-Video Translation
Publisher: ASTU
Abstract
Recent years have seen enormous progress in image translation through the use of GANs on unpaired datasets. For video translation, however, current GAN-based approaches do not fully leverage the spatio-temporal information in videos. This research examines the use of GANs to exploit spatio-temporal information by extending the unpaired video-to-video translation model ReCycle-GAN. In particular, previous methods suffer from object disappearance, object dislocation, and flickering artifacts. To mitigate these issues, this work adds a feature-preserving loss and a temporal-aware discriminator to CycleGAN and ReCycle-GAN to generate more temporally consistent videos. Extensive qualitative and quantitative assessments demonstrate the notable success of the proposed system against existing methods. In an average human evaluation study, the proposed method outperforms CycleGAN by 60% and ReCycle-GAN by 35%. This paper concludes that adding a feature-preserving constraint and a temporal-aware discriminator does improve the temporal coherency of the generated output video.
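The two additions described in the abstract can be sketched in a few lines. This is a minimal NumPy illustration under assumed interfaces (a fixed feature extractor producing the feature maps, and video frames represented as arrays); the function names and shapes are illustrative, not the thesis's actual implementation.

```python
import numpy as np

def feature_preserving_loss(feat_input, feat_translated):
    """L1 distance between feature maps of the input frame and its
    translation; penalizes the generator when semantic content
    (objects, layout) is lost or dislocated."""
    return float(np.mean(np.abs(feat_input - feat_translated)))

def temporal_stack(frames, t, k=3):
    """Stack k consecutive frames along the channel axis so a
    temporal-aware discriminator judges short clips rather than
    single frames, discouraging flicker between frames."""
    return np.concatenate(frames[t:t + k], axis=-1)

# Illustrative usage with random data standing in for real features/frames.
rng = np.random.default_rng(0)
f_in = rng.standard_normal((8, 8, 64))
loss_same = feature_preserving_loss(f_in, f_in)   # identical features -> 0.0
frames = [rng.standard_normal((8, 8, 3)) for _ in range(5)]
clip = temporal_stack(frames, t=0, k=3)           # shape (8, 8, 9)
```

In training, `feature_preserving_loss` would be added to the generator objective, while the discriminator would receive `temporal_stack` outputs instead of single frames, so that temporally incoherent clips are classified as fake.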
