Notes: Combining Vision and Computer Graphics For Video Motion Capture

  • 1. Introduction
    • Motivation
      • “Physical simulation offers realism, but control problems for complex creatures are horrendous. Therefore, capturing motion from real creatures is often the best way to achieve the desired effect.”
    • Combining automatic methods with goal-directed, parametric, and interactive techniques
  • 2. Background
    • They use active contours, or “snakes,” as the means of relating the model to the image.
      • snakes are sets of connected control points that track features (usually edges) in the underlying image.
  • 3. The 3D Model
    • For their joint segments, they implemented constraints with reach-cone joint limits.
      • Reach-Cone Joint Limits: the segment’s direction is constrained to stay inside a “reach cone,” a spherical polygon of boundary points on the unit sphere around the joint (see the sketch after this section).
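
A quick sketch of the reach-cone idea, since it drives the joint limits above: a direction is legal while it stays inside the convex spherical polygon. This is my own minimal Python illustration of the inside test, not code from the paper; the function and variable names are mine.

import numpy as np

def inside_reach_cone(direction, boundary_points):
    """Test whether a direction vector lies inside a convex reach cone.

    boundary_points: ordered unit vectors on the sphere (counter-clockwise
    when seen from outside). The direction is inside if it sits on the
    positive side of every boundary plane through the origin.
    """
    d = direction / np.linalg.norm(direction)
    n = len(boundary_points)
    for i in range(n):
        p0, p1 = boundary_points[i], boundary_points[(i + 1) % n]
        # Normal of the plane spanned by two adjacent boundary points.
        if np.dot(np.cross(p0, p1), d) < 0.0:
            return False
    return True

# Example: a square cone around +Z with a 45-degree half-angle.
s = np.sqrt(0.5)
cone = [np.array(v) for v in [(s, 0, s), (0, s, s), (-s, 0, s), (0, -s, s)]]
print(inside_reach_cone(np.array([0.1, 0.1, 1.0]), cone))  # True
print(inside_reach_cone(np.array([1.0, 0.0, 0.1]), cone))  # False
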
  • 4. Feature Tracking
    • 4.1 Low-Level Image Processing
      • 4.1.1: Hue and Intensity tracking
      • 4.1.2: Smoothing by convolving the image with a filter kernel
      • 4.1.3: Edge detection (Sobel method)
        • the primary approach in their research (see the sketches after this section)
    • 4.2 Active Contours
      • Able to drive the animation of the 3D model
      • The more complicated the motion in the footage, the more user interaction is required to keep the snakes tracking properly; if the motion is simple, the snakes track almost automatically.
    • 4.3 Feature tracking with active contours
      • Given one or more initial positions for the contours on the image, the contours must automatically adjust themselves to track these features
        • To do this, virtual forces are applied to the active contours
          • internal forces: represent a contour’s interaction with itself
          • external forces: contour’s interaction with the image
          • gravity forces: pull the contour in a particular direction.
        • The heuristics used to determine these forces are still under development (a generic force-update sketch follows this section).
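
Two small sketches for this section. First, Sobel edge detection (4.1.3): the paper only names the method, so this is the standard pure-NumPy version, not their implementation.

import numpy as np

def sobel_edges(image):
    """Sobel edge-magnitude map for a 2D grayscale float image."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T  # the vertical-gradient kernel is the transpose
    padded = np.pad(image, 1, mode="edge")
    gx = np.zeros(image.shape)
    gy = np.zeros(image.shape)
    for y in range(image.shape[0]):
        for x in range(image.shape[1]):
            # Correlate each 3x3 window with the Sobel kernels
            # (kernel orientation is irrelevant for the magnitude).
            window = padded[y:y + 3, x:x + 3]
            gx[y, x] = np.sum(window * kx)
            gy[y, x] = np.sum(window * ky)
    return np.hypot(gx, gy)

Second, one relaxation step for a snake under the three force types of 4.3. The paper says its force heuristics were still under development, so this is a generic textbook-style update with parameter choices of my own, not their method.

def snake_step(points, edge_map, alpha=0.5, gravity=(0.0, 0.0), step=1.0):
    """One relaxation step for an open active contour (snake).

    points   : (N, 2) float array of vertices in image (x, y) coordinates
    edge_map : 2D edge-magnitude image; its gradient is the external force
    alpha    : weight of the internal (smoothing) force
    gravity  : constant force pulling the whole contour in one direction
    """
    pts = points.copy()
    # Internal force: pull each interior vertex toward its neighbours' midpoint.
    internal = np.zeros_like(pts)
    internal[1:-1] = 0.5 * (pts[:-2] + pts[2:]) - pts[1:-1]
    # External force: climb the gradient of the edge map at each vertex.
    gy, gx = np.gradient(edge_map)
    ix = np.clip(pts[:, 0].astype(int), 0, edge_map.shape[1] - 1)
    iy = np.clip(pts[:, 1].astype(int), 0, edge_map.shape[0] - 1)
    external = np.stack([gx[iy, ix], gy[iy, ix]], axis=1)
    return pts + step * (alpha * internal + external + np.asarray(gravity))
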
  • 5. Animation from Video
    • The 2D active contours that represent feature motion are used to extract 3D positions at joints in their model.
    • This is an extremely under-constrained problem.
    • They rely on user input to make the capture feasible.
    • The steps are as follows:
      • The user adjusts the basic geometry of the generic model to fit the subject in the video.
      • The user aligns the model with image features in one or more key frames where there is a clear correspondence between the model and the image.
      • The active contours are automatically anchored to the model in these anchor frames, producing “fauna snakes.”
      • The fauna snakes are used to “pull” the model into position in the remaining frames.
    • 5.1 Anchoring fauna snakes to the model
      • First, find anchor frames in which to anchor the fauna snakes to the model
        • frames with a clear image
      • Anchoring: find the nearest point on a segment’s longitudinal axis to each fauna snake vertex (see the sketch after this section).
      • Once the nearest point is found in two dimensions, each fauna snake vertex is given a Z-value equal to that of the nearest point on the segment axis.
      • Each vertex is then converted to the local coordinate frame of the anchoring segment, establishing a relationship between the world-space features represented by active contour points and the 3D model.
    • 5.2 Automatically Repositioning the Model
      • 5.2.1 Translation
        • Adjust the position of the model root segment parallel to the image plane.
          • computed from the difference between the projected world-space positions of each contour point and its anchor point
      • 5.2.2 Image-Plane and Euler Rotations
      • 5.2.3 Direct out-of-plane rotation
        • Calculations are done in world space, but applied in local segment space.
    • 5.3 Occlusion and Camera Motion
      • Virtual anatomy can be rendered with the model and used to estimate when contour points become occluded.
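
A sketch of the anchoring math from 5.1 and the translation step from 5.2.1 as I read them. Treating the segment’s longitudinal axis as a 3D line segment and its pose as a 4x4 world matrix are my assumptions, and averaging the offsets in root_translation is my simplification; all function names are mine.

import numpy as np

def anchor_vertex(vertex_xy, axis_start, axis_end):
    """Anchor a 2D fauna-snake vertex to a model segment (Section 5.1).

    Finds the nearest point on the segment's longitudinal axis in the
    image (x, y) plane, then gives the vertex that point's Z-value so it
    becomes a full 3D position.
    """
    a2, b2 = axis_start[:2], axis_end[:2]
    ab = b2 - a2
    t = np.clip(np.dot(vertex_xy - a2, ab) / np.dot(ab, ab), 0.0, 1.0)
    nearest = axis_start + t * (axis_end - axis_start)  # 3D point on the axis
    return np.array([vertex_xy[0], vertex_xy[1], nearest[2]])

def to_segment_local(point_world, segment_world_matrix):
    """Express a world-space point in the anchoring segment's local frame."""
    p = np.append(point_world, 1.0)
    return (np.linalg.inv(segment_world_matrix) @ p)[:3]

def root_translation(contour_pts, anchor_pts):
    """Image-plane translation of the root segment (Section 5.2.1): here,
    the mean 2D offset between tracked contour points and their anchors."""
    offsets = np.asarray(contour_pts)[:, :2] - np.asarray(anchor_pts)[:, :2]
    return offsets.mean(axis=0)
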
  • 6. Results
    • They first adjusted the generic horse model to fit the horse in the video (footage from the front and side).
    • The movement was captured mostly with fauna snakes, with some manual adjustment.

There might be a way to analyze video footage of creatures moving using active contours (snakes): generate the snakes and use their positional information in Maya to drive the rig.
How would this work with an IK solver? I need to investigate this! A rough sketch of the idea follows.
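
A first prototype sketch, assuming the snake data has already been lifted to 3D (as in Section 5.1 of the paper) and exported as per-frame positions. Only the maya.cmds calls are real API; drive_ik_from_snake, snake_track, and the joint names are hypothetical.

# Runs inside Maya's Python interpreter.
import maya.cmds as cmds

def drive_ik_from_snake(snake_track, start_joint, end_joint, name="snakeTarget"):
    """Key a locator from per-frame snake positions and use it as an IK target.

    snake_track : dict mapping frame number -> (x, y, z) world position
                  (hypothetical export from the snake tracker)
    """
    locator = cmds.spaceLocator(name=name)[0]
    for frame, (x, y, z) in sorted(snake_track.items()):
        cmds.setKeyframe(locator, attribute="translateX", time=frame, value=x)
        cmds.setKeyframe(locator, attribute="translateY", time=frame, value=y)
        cmds.setKeyframe(locator, attribute="translateZ", time=frame, value=z)
    # Build an IK handle on the existing joint chain and make it follow the locator.
    handle = cmds.ikHandle(startJoint=start_joint, endEffector=end_joint)[0]
    cmds.pointConstraint(locator, handle)
    return locator, handle

# Example: drive_ik_from_snake({1: (0, 5, 0), 2: (0.2, 4.9, 0)}, "hip", "ankle")
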

Also, you already have this data from the rotoscoped PLD footage. Therefore, I think it’s time to move on to examining a rig to drive.

Start with a prototype.

See if you can find that software and MATLAB code for this analysis: http://gpu4vision.icg.tugraz.at/index.php?content=downloads.php
