
Project Proposal

Generating Cinemagraphs



Summary 

Cinemagraphs are still photographs in which a minor and repeated movement occurs, forming a video clip. They are published as an animated GIF or in other video formats, and can give the illusion that the viewer is watching an animation. Cinemagraphs are made by taking a series of photographs or a video recording, and, using image editing software, compositing the photographs or the video frames into a seamless loop of sequential frames. This is done in such a way that motion in part of the subject between exposures is perceived as a repeating or continued motion, in contrast with the stillness of the rest of the image.

In this project, I intend to survey the available techniques for generating a cinemagraph, including all the steps involved (ROI detection, background warping, stabilization, etc.), and come up with a hybrid rendition of the whole process, borrowing parts from existing research to generate a cinemagraph from a given input video.

Description of Problem:

There are many ways to create cinemagraphs. The two that are the main focus for the scope of this project are Selectively De-Animating Video and Automatic Cinemagraph Portraits. The first asks for user input to mark regions of animation and uses Markov Random Fields to compose the final video. The latter does it automatically, focusing mainly on fine-scale facial motions while discarding large-scale motions in the scene. Creating a cinemagraph roughly comprises recognizing the area of interest, warping that area to create a seamless boundary with the immobilized area, and finally animating the area of interest. Doing all of these steps, whether in an editing tool like Adobe Photoshop / Premiere or using graphics techniques, requires a great deal of effort and/or computation, along with knowledge of concepts like content-aware warping and graph-cut video composition.
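At its core, the final step of this pipeline (freezing everything outside the area of interest while letting the selected region keep moving) is a per-frame composite between a static plate and the live frames. Below is a minimal sketch of that compositing step, assuming the frames are already stabilized and a hand-drawn or automatically detected ROI mask is given; the function name and the choice of the first frame as the static plate are my own illustrative assumptions.

```python
import numpy as np

def composite_cinemagraph(frames, mask):
    """Composite a cinemagraph: pixels inside `mask` stay animated,
    pixels outside are frozen to the first frame.

    frames: list of HxWx3 uint8 arrays (the stabilized clip)
    mask:   HxW float array in [0, 1]; 1 = animate, 0 = freeze
    """
    still = frames[0].astype(np.float32)   # frozen background plate
    m = mask[..., None]                    # broadcast mask over channels
    out = []
    for f in frames:
        # blend: animated region comes from the current frame,
        # the rest from the frozen plate
        blended = m * f.astype(np.float32) + (1.0 - m) * still
        out.append(blended.astype(np.uint8))
    return out
```

In a real implementation the hard boundary would be feathered or solved with graph cuts, as in the referenced papers, to hide the seam between the moving and frozen regions.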

Importance of Problem:

This is a new kind of media that is gaining popularity due to its subtle, artsy appeal. It bridges the gap between a photo and a video and is well liked on social media sites like Instagram. Creating a cinemagraph semi-automatically or automatically is useful because doing it manually requires a certain amount of editing skill and the use of powerful tools, which may be too much for a novice or casual user who just wants to generate a cool-looking piece of art without much hassle. This project aims at doing just that, at the click of a button!

Previous Work on Problem:

There has been considerable research on ways to create cinemagraphs. Most of the following works employ techniques to stabilize the video and smoothly animate a designated region of interest -


Proposal:

The project's goal is to compare various techniques and provide a hybrid model for cinemagraph generation, so that a user can get the best possible results for a particular scenario / setting of video capture. The project's objective is to give the user the option to generate manually or automatically, for cases where the user requires more fine-grained control over how the final cinemagraph should look.

Originality:

While the techniques mentioned in the referred papers (linked below) are widespread, there is no single technique that works in all scenarios to generate desirable effects without trading off some qualities (ease of use) for others (performance and accuracy). I plan to pick the best parts of the available techniques to generate the best possible cinemagraph.

Relationship to Graphics:

The papers referred to in this post were presented at SIGGRAPH and deal closely with computer graphics.

List of Goals:

Intermediate Tasks until November 30th


Final Tasks until December 12th

Having assimilated the various techniques involved in generating a cinemagraph, evaluate the performance and quality with the following approaches -
  • Having the user stroke the regions of motion / non-motion
  • Automatically recognizing the immobile part using KLT tracks and animating only fine-scale motion
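For the second approach, assuming KLT tracks are already available (e.g., from OpenCV's `goodFeaturesToTrack` and `calcOpticalFlowPyrLK`), separating immobile parts from moving ones can start with a simple per-track displacement test. The function name, array layout, and pixel threshold below are illustrative assumptions, not part of the referenced papers' exact formulation.

```python
import numpy as np

def classify_tracks(tracks, motion_thresh=2.0):
    """Split KLT tracks into moving and static sets.

    tracks: array of shape (num_points, num_frames, 2) holding the
            (x, y) position of each tracked point in each frame.
    A point counts as 'moving' if its maximum deviation from its
    mean position exceeds motion_thresh pixels.
    Returns a boolean array: True = moving, False = static.
    """
    mean_pos = tracks.mean(axis=1, keepdims=True)          # (N, 1, 2)
    deviation = np.linalg.norm(tracks - mean_pos, axis=2)  # (N, F)
    return deviation.max(axis=1) > motion_thresh
```

Static tracks could then anchor the stabilizing warp, while the moving tracks delimit the region that stays animated.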

Additional Goals

  • Switching between manually selecting areas of interest and automatic generation
  • Use face tracking for portrait cinemagraphs
  • Use energy functions for a seamless, infinitely looping video
  • Apply overall video stabilization
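The looping goal above can be phrased as an energy minimization: pick loop endpoints whose frames look as similar as possible, so the jump back to the start is invisible. A brute-force sketch of such an energy, adequate only for short clips and using names I chose for illustration, might look like:

```python
import numpy as np

def best_loop(frames, min_len=2):
    """Find loop endpoints (i, j) minimizing the mean squared pixel
    difference between frame i and frame j, so the subclip
    frames[i:j] loops with the least visible seam when frame j-1
    jumps back to frame i.
    """
    n = len(frames)
    best, best_cost = (0, n - 1), float("inf")
    for i in range(n):
        for j in range(i + min_len, n):
            # energy: dissimilarity of the two loop endpoints
            cost = np.mean((frames[i].astype(np.float32)
                            - frames[j].astype(np.float32)) ** 2)
            if cost < best_cost:
                best, best_cost = (i, j), cost
    return best
```

A full system would add temporal smoothness terms and per-pixel loop periods, as in the looping literature, rather than a single global pair of endpoints.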
