Temporal Cycle Consistency: For A Video-To-Video Translation

dc.contributor.advisor Yun Koo Chung (Prof)
dc.contributor.author Kirubel, Abebe
dc.date.accessioned 2025-12-17T10:54:00Z
dc.date.issued 2020-10
dc.description.abstract …aged enormous in image translation through the use of GANs on unpaired datasets. As far as video translation is concerned, current GAN-based approaches do not fully leverage the space-time information in videos. This research examines the use of GANs to exploit spatio-temporal information in video by extending the unpaired video-to-video translation model (ReCycle GAN) to enhance spatio-temporal video translation. In particular, previous methods suffer from object disappearance, object dislocation, and flickering artifacts. To mitigate these issues, this work proposes adding a feature-preserving loss and a temporal-aware discriminator to CycleGAN and ReCycle GAN to generate more temporally consistent videos. Extensive qualitative and quantitative assessments demonstrate the notable success of the proposed system against existing methods. An average human evaluation study has shown that the proposed approach is preferred 60% of the time over CycleGAN and 35% over ReCycle GAN. This thesis concludes that adding feature-preserving constraints and a temporal-aware discriminator does improve the temporal coherency of the generated output video. en_US
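The abstract describes augmenting the cycle-consistency objective with a feature-preserving loss so that translated frames retain the source frame's content. A minimal NumPy sketch of how such a combined generator objective could be assembled; the feature extractor `phi`, the weights `lam_cyc`/`lam_feat`, and all function names are illustrative assumptions, not the thesis's actual implementation:

```python
import numpy as np

def cycle_consistency_loss(x, x_rec):
    """L1 reconstruction penalty ||x - G_YX(G_XY(x))||_1,
    the core CycleGAN constraint."""
    return np.mean(np.abs(x - x_rec))

def feature_preserving_loss(phi, x, y_fake):
    """Penalize feature drift between a source frame and its translation.
    `phi` is a stand-in feature extractor (in practice typically a
    pretrained CNN); here any callable mapping frames to features."""
    return np.mean((phi(x) - phi(y_fake)) ** 2)

def total_generator_loss(x, x_rec, y_fake, phi, lam_cyc=10.0, lam_feat=1.0):
    """Weighted sum of the two terms; the weights are illustrative,
    not values reported in the thesis."""
    return (lam_cyc * cycle_consistency_loss(x, x_rec)
            + lam_feat * feature_preserving_loss(phi, x, y_fake))

# Toy example: two 4x4 grayscale "video frames".
rng = np.random.default_rng(0)
x = rng.random((2, 4, 4))
x_rec = x + 0.05 * rng.standard_normal(x.shape)   # imperfect cycle reconstruction
y_fake = x + 0.10 * rng.standard_normal(x.shape)  # translated frames
phi = lambda v: v.mean(axis=(1, 2))               # trivial per-frame "feature"

loss = total_generator_loss(x, x_rec, y_fake, phi)
print(float(loss))
```

A perfect generator (exact reconstruction and no feature drift) drives both terms, and hence the total, to zero; any deviation increases the loss, which is the intuition behind using it as a training signal.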
dc.description.sponsorship ASTU en_US
dc.identifier.uri http://10.240.1.28:4000/handle/123456789/1496
dc.language.iso en en_US
dc.publisher ASTU en_US
dc.subject Cycle GAN, ReCycle GAN, Spatio-Temporal information, Unsupervised Video-to-Video translation. en_US
dc.title Temporal Cycle Consistency: For A Video-To-Video Translation en_US
dc.type Thesis en_US

Files

Original bundle

Name: Kirubel Abebe Senbeto.pdf
Size: 4.58 MB
Format: Adobe Portable Document Format

License bundle

Name: license.txt
Size: 1.71 KB
Format: Plain Text

Collections