Generating Cinemagraphs
Summary
Cinemagraphs are still photographs in which a minor and repeated movement occurs, forming a video clip. They are published as animated GIFs or in other video formats, and can give the illusion that the viewer is watching an animation. Cinemagraphs are made by taking a series of photographs or a video recording and, using image editing software, compositing the photographs or the video frames into a seamless loop of sequential frames. This is done in such a way that motion in part of the subject between exposures is perceived as a repeating or continued motion, in contrast with the stillness of the rest of the image.
Through this project, I intend to go through the available techniques for generating a cinemagraph, all the steps involved such as ROI detection, background warping, and stabilization, and come up with a hybrid rendition of the whole process, borrowing parts from existing research to generate a cinemagraph from a given input video.
Description of Problem:
There are many ways to create cinemagraphs. The two that are the main points of focus for the scope of this project are Selectively De-Animating Video and Automatic Cinemagraph Portraits. The first asks for user input to mark regions of animation and uses a Markov Random Field to compose the final video. The latter does it automatically, emphasizing fine-scale facial motions while discarding large-scale motions in the scene. Creating a cinemagraph roughly comprises recognizing the area of interest, warping it to create a seamless boundary with the immobilized area, and finally animating the area of interest. Doing all these steps, whether in an editing tool like Adobe Photoshop / Premiere or using graphics techniques, requires a great deal of effort and/or computation along with knowledge of concepts like content-aware warping and graph-cut video composition.
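For reference, graph-cut composition methods of this kind (e.g. the energy-minimization framework of Boykov et al. listed in the readings below) minimize a Markov Random Field energy of the general form

E(l) = \sum_p D_p(l_p) + \sum_{(p,q) \in \mathcal{N}} V_{p,q}(l_p, l_q)

where l_p is the label assigned to pixel p (for example, which source frame it is copied from), the data term D_p penalizes labels that contradict the desired still / animated regions, and the smoothness term V_{p,q} penalizes visible seams between neighbouring pixels that take different labels. The exact terms differ from paper to paper; this is only the generic formulation.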
Importance of Problem:
This is a new kind of media that is gaining popularity due to its subtle, artsy appeal. It bridges the gap between a photo and a video and is well liked on social media sites like Instagram. Creating a cinemagraph semi-automatically or automatically is useful because doing it manually requires a certain amount of editing skill and the use of powerful tools, which might be a little too much for a novice or casual user who just wants to generate a cool-looking piece of art without much hassle. This project aims at doing just that, at the click of a button!
Previous Work on Problem:
There has been plenty of research on ways to create a cinemagraph. Most of the prior works (see the Readings list below) employ techniques to stabilize the video and smoothly animate a designated region of interest.
Proposal:
The project's goal is to compare various techniques and provide a hybrid model for cinemagraph generation, so that a user can get the best possible results for a particular scenario / setting of video capture. It also aims to give the user the option to generate the cinemagraph manually or automatically, for cases where more fine-grained control over how the final cinemagraph should look is required.
Originality:
While the techniques mentioned in the referred papers (linked below) are widespread, there is no single technique that works in all scenarios and generates desirable effects without trading off some things (ease of use) for others (performance and accuracy). I plan to pick the best parts of the available techniques to generate the best possible cinemagraph.
Relationship to Graphics:
The papers referred to in this post were presented at SIGGRAPH and deal closely with computer graphics.
List of Goals:
Intermediate Tasks until November 30th
- Readings
- Selectively De-Animating Video
- Automatic Cinemagraph Portraits
- Graphcut textures: image and video synthesis using graph cuts
- Content-preserving warps for 3D video stabilization
- Kanade–Lucas–Tomasi feature tracker
- Markov Random Field (MRF)
- Face Tracking
- Fast approximate energy minimization via graph cuts
- Interactive digital photomontage
- Implementation
- Given an input video, generate a cinemagraph (assuming the input is stabilized)
- immobilizing a hard-coded area in the video frames (a minimal sketch is included after this task list)
- immobilizing / mobilizing area based on user input
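As a starting point for the hard-coded case above, here is a minimal sketch, assuming OpenCV and NumPy, a pre-stabilized input video, and a hypothetical hand-picked rectangle MOTION_RECT: pixels inside the rectangle keep their motion, everything else is frozen to the first frame.

```python
import cv2
import numpy as np

# Hypothetical hard-coded region to keep animated: (x, y, width, height).
MOTION_RECT = (200, 150, 240, 180)

def naive_cinemagraph(in_path, out_path, rect=MOTION_RECT):
    """Freeze every pixel outside `rect` to the first frame of a
    (pre-stabilized) video; pixels inside `rect` keep their motion."""
    cap = cv2.VideoCapture(in_path)
    ok, still = cap.read()          # reference frame for the frozen background
    if not ok:
        raise IOError("could not read input video: " + in_path)

    h, w = still.shape[:2]
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    writer = cv2.VideoWriter(out_path,
                             cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

    # Binary mask: 1 inside the animated rectangle, 0 elsewhere.
    x, y, rw, rh = rect
    mask = np.zeros((h, w, 1), dtype=np.uint8)
    mask[y:y + rh, x:x + rw] = 1

    writer.write(still)             # first frame passes through unchanged
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Composite: animated pixels from the current frame,
        # everything else from the reference (still) frame.
        composite = frame * mask + still * (1 - mask)
        writer.write(composite)

    cap.release()
    writer.release()
```

A hard cut at the rectangle boundary will of course be visible; this is exactly where the warping and graph-cut compositing from the readings come in.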
Final Tasks until December 12th
Having assimilated the various techniques involved in cinemagraph generation, evaluate the performance and quality of the following approaches -
- Having the user stroke the regions of motion / non-motion
- Automatically recognizing the immobile part using KLT tracks and animating only fine-scale motion
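A minimal sketch of the KLT-based idea, assuming OpenCV's Shi–Tomasi corner detector and pyramidal Lucas–Kanade tracker, and a motion_threshold value that is my own assumption rather than something taken from the papers: corners whose tracks barely drift are candidates for the immobilized part, the rest mark the region to keep animated.

```python
import cv2
import numpy as np

def classify_klt_tracks(frames, motion_threshold=2.0):
    """Track Shi-Tomasi corners with pyramidal Lucas-Kanade (KLT) and
    split them into 'static' and 'moving' by how far each track drifts
    from its starting position. `frames` is a list of BGR images."""
    prev_gray = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=400,
                                  qualityLevel=0.01, minDistance=7)
    start = pts.reshape(-1, 2).copy()
    displacement = np.zeros(len(start))       # max drift seen per track
    alive = np.ones(len(start), dtype=bool)   # tracks still being followed

    for frame in frames[1:]:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
        alive &= status.reshape(-1).astype(bool)
        drift = np.linalg.norm(nxt.reshape(-1, 2) - start, axis=1)
        displacement[alive] = np.maximum(displacement[alive], drift[alive])
        pts, prev_gray = nxt, gray

    static = start[alive & (displacement < motion_threshold)]
    moving = start[alive & (displacement >= motion_threshold)]
    return static, moving
```

The static points can then seed the frozen region and the moving points the animated one, e.g. as input to the compositing step sketched earlier.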
Additional Goals
- Switching between manually selecting areas of interest and auto generation
- Use face tracking for portrait cinemagraphs
- Use energy functions for seamless infinitely looping video (see the sketch after this list)
- Use overall video stabilization
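As a rough illustration of the looping goal, here is a minimal sketch under my own simplification (not the energy used in the graphcut-textures paper): pick the pair of frames whose pixel-wise difference is smallest, so the clip can be cut there and looped with the least visible jump.

```python
import numpy as np

def best_loop_points(frames, min_length=15):
    """Pick (start, end) indices minimizing a simple loop 'energy':
    the mean squared pixel difference between frames[end] and frames[start].
    Playing start..end-1 and jumping back to start then looks least abrupt.
    `frames` is a list of equally sized numpy images."""
    stack = np.stack([f.astype(np.float32) for f in frames])
    best, best_cost = (0, len(frames) - 1), np.inf
    for s in range(len(frames) - min_length):
        for e in range(s + min_length, len(frames)):
            cost = float(np.mean((stack[e] - stack[s]) ** 2))
            if cost < best_cost:
                best, best_cost = (s, e), cost
    return best
```

A full solution would also cross-fade or graph-cut across the seam per region, but this already gives a usable starting point for the infinitely looping version.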