Motion Capture Tutorial (Fundamentals)
There are 8 main steps that you should remember for using the motion capture system. These steps serve as a guideline for all scenarios.
This manual is divided into 8 different sections (listed below). Each section describes its sub-tasks with pictures.
* This is an ongoing tutorial; therefore, if you have any suggestions, please send them to EVL support or contact James (sjames at evl dot uic dot edu) or Katie (kcrupp2 at uic dot edu) directly.
Contents
First, boot up the system and start all necessary components. This can be done in four steps.
a. Power on Cameras
The power switch for the Vicon hardware can be found on the back panel of the Vicon Ultranet (the thin black box; figures 1-a, 1-b).
Flip on the power switch. All eight cameras will turn on their IR light rings one by one. Once all cameras' LEDs are lit, the system hardware is ready.
b. Start Vicon iQ software
If the Vicon host PC is not on, power it on and log in with either your EVL domain account or UIC netid (you must select the proper domain name in the login window corresponding to your choice of account). Find the Vicon iQ software icon on the host PC's desktop and double-click it to launch the software (figure 2).
c. Create Eclipse Database & new mocap session
If this is the first time you have used the Vicon software under your account, you will see the Eclipse database creation window after starting up Vicon iQ. (figure 3. Note: sometimes this database window hides behind the Vicon software splash image. If you do not see the database creation window, check the taskbar.) If you have used Vicon iQ before under your account, iQ will automatically load your last database file and show its contents.
To create a new database, click the 'new' button on this window, then fill out the database information (figure 4.)
You can set the DB directory by clicking the 'browse' button and selecting where you wish to store your data (figure 4. first field).
Give the database a name and click the create button.
Once you create a new database, the new database item will be shown in the open database window (figure 5). Select this item and click 'open.'
The first screen you will see in iQ is the data management operation mode (figure 6). Since you just created a new database, nothing is listed in the data management window. The next step is to create our capture session in the database.
Capture sessions are organized in a hierarchical manner in the database. (This is a simple directory structure that Eclipse refers to.) In descending order, they are organized as: project, day, session and subsession. You need to create a project first; then you can create a day and session by clicking the small icons sequentially (figure 7).
The session is the base unit used for storing your motion capture data. If necessary, it can also be divided into subsessions.
Figure 8. shows more details of the Vicon iQ software interface.
d. Connect Vicon realtime engine
To connect to the Vicon realtime engine, select the Setup operation mode (figure 9. a) and click the realtime connect button (figure 9. b).
[note: Before connecting to the realtime engine, all 8 cameras must be recognized by the Vicon software. When you start the software, all cameras will be initialized again and recognized automatically. (The cameras' IR lights turn off and then on again.) You are then ready to connect to the realtime engine.]
If a previously calibrated camera configuration exists, you will see all 8 cameras in the Live 3D workspace.
In this step, you calibrate the Vicon hardware (cameras). This sets up a threshold and locates all cameras properly in 3D space.
a. Threshold
Make sure that you are in the Setup operation mode. Then, select the camera view and select all cameras. Click on the thresholds activity bar (figure 10. a, b, c).
In the 3D workspace, there are eight camera views. In general, there should not be any visible objects or dots in any of the 8 camera views if a proper previous configuration exists (this configuration is stored in the camera hardware and is publicly accessible).
If you see anything in any of the 8 camera views, first make sure that there is no IR-reflective material in the capture volume, such as shiny material on athletic shoes or sportswear.
If you still see something in any of the 8 camera views, you can filter it out by setting the threshold. Open the View option control bar and turn on the threshold grid (figure 11. a, b).
Click start recording background (figure 11. c), and stop recording the background after a few seconds (the start recording button changes to a stop recording button after starting, so click this button again).
Now you can see the small regions marked with an x, indicating that the threshold is set. After this background recording, check that all camera views are clean and that nothing appears anymore. (Turn off the threshold grid and check each camera view.) If you want to see a camera view in more detail, select any camera number in the menu; a single camera view will then fill the entire workspace. (Figure 12)
Once you have checked all cameras' thresholds, save out the threshold settings (figure 13).
b. Camera Calibration (Dynamic and Static calibration)
To calibrate the cameras, select the calibration operation mode (figure 14. a). If there is no previous camera calibration setting, you will see all cameras aligned along the x-axis on the ground (figure 14).
There are two calibration steps. The first one is dynamic calibration and the second is static calibration.
Dynamic calibration establishes the relative location of each camera with respect to the others, and static calibration sets the world origin. Two different calibration objects are used to perform these steps (figures 15, 16).
The 240 mm wand is used for dynamic calibration and the L-Frame for static calibration (Figure 16).
Do not present any markers or calibration objects in the capture volume other than the exact one you need for each step! (All other objects should remain in the case.)
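Conceptually, the two steps fit together like this: dynamic calibration recovers camera positions relative to each other in some arbitrary frame, and static calibration re-expresses them relative to the L-Frame location, which becomes the world origin. The sketch below is a deliberate simplification (real calibration also solves for camera rotations), and all positions are invented.

```python
# Simplified, translation-only illustration of static calibration:
# shift every camera position so the L-Frame point becomes (0, 0, 0).
def set_world_origin(camera_positions, l_frame_position):
    return {
        cam: tuple(c - o for c, o in zip(pos, l_frame_position))
        for cam, pos in camera_positions.items()
    }

# Invented camera positions in the arbitrary post-wand-wave frame.
cameras = {1: (3.0, 2.0, 2.5), 2: (-1.0, 4.0, 2.5)}
world = set_world_origin(cameras, (1.0, 1.0, 0.0))
print(world[1])  # (2.0, 1.0, 2.5)
```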
[Dynamic Calibration]
Select 240 mm Wand in the wand combo box (figure 17. a) and change the workspace view to Camera (figure 17. b).
Bring the 240 mm calibration wand inside the capture volume and click the Start Wand Wave button (figure 18. a).
Wave the wand around the capture volume, trying to cover as much space for each camera as possible (figure 19). You can see the visuals in each camera viewing window as you move the wand around.
Each camera must capture at least 500 frames of the calibration object. Once a camera gets more than 500 effective frames, the text color of its frame count changes from red to green in the status report area (figure 19. a). When you have captured enough frames, click the stop wand wave button (figure 19. b).
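The 500-frame rule can be sketched as a small check: given the effective wand-frame counts per camera (the numbers shown in the status report area), list the cameras that still need more coverage. The counts below are invented for the example.

```python
# Minimum effective wand frames per camera, per the rule above.
MIN_EFFECTIVE_FRAMES = 500

def cameras_needing_more(frame_counts):
    """frame_counts: dict mapping camera id -> effective frame count."""
    return sorted(cam for cam, n in frame_counts.items()
                  if n < MIN_EFFECTIVE_FRAMES)

# Invented status-report numbers for the 8 cameras.
counts = {1: 812, 2: 640, 3: 495, 4: 503, 5: 377, 6: 901, 7: 566, 8: 620}
print(cameras_needing_more(counts))  # [3, 5]
```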
The system will start the calculation of relative camera positions and angles (progress: figure 20. a). After processing, the statuses of all 8 cameras should be "Good" or better (figure 20. b). Occasionally you may get "excellent" or "awesome" as a status.
Now you can see how the cameras are positioned in the 3D workspace. Select the Live 3D workspace view to see this. All cameras should be located properly relative to each other, but they will look strange in 3D space because an origin has not yet been set up for the cameras (Figure 21).
[Static Calibration]
After the dynamic calibration is complete, you can set the origin for the cameras (static calibration). Make sure to put the 240 mm wand back in the calibration kit case before taking the L-Frame to the center of the capture volume, to avoid excess "noise". Position the L-Frame at the origin (center) of the capture volume (refer to figure 22).
Select Ergo 9.5mm LFrame (figure 23. a), click the Track L-Frame button (figure 23. b), then click the Set Origin button (figure 23. c).
Now you have finished the calibration, and all cameras will be located properly in 3D space (figure 24). Save out the calibration result. (The button is located in the Manage Calibration Files area below the status report.)
Since all necessary hardware/software setup has been completed at this point, it is time to begin the subject calibration. The subject is the person whose motions will be recorded.
a. Mocap Suit
There are four different sizes of suits in the studio: XL, L, M and S. Select the best size for the subject and have them wear the jacket, pants and hat.
In general, a tight suit is better, as long as it still allows the subject to move around uninhibited. If some parts of the suit are loose, Velcro can be used to tighten them. (Velcro is kept in the marker storage box on the shelf near the dressing area.)
b. Vicon Skeleton Template (.VST file)
The Vicon Skeleton Template file is the generic definition of a skeleton. It includes all joints and the markers attached to the body. For full-body human motion capture, there are two template files that can be used: EVL_Fullbody_1.vst and EVL_Fullbody_2.vst. Template files are located in the D:\Resources\VST directory.
We will use this file as reference to attach markers in the following section.
In Vicon iQ software, go to the file menu, click open and select the template file that you would like to use. You will then be able to see the template structure in the Modeling operation mode. (see Figure 25. and Figure 26)
c. Attach Markers
Attach markers to the subject as required by the subject template file. There are different types of markers, each with a different size (sphere diameter). The size of a marker does not matter for the subject definition; however, you should use smaller markers for smaller parts of the body, such as the feet and hands. All markers can be found in the clear box on the shelf near the suit hanger. Figure 27 shows each of these.
You can find these markers on the shelf in the dressing area. (Figure 28.)
To see more details of the full-body template configurations, look at the following two pages.
Range of Motion (ROM) is a special type of motion that is used to process the subject calibration (refer to the following section 5).
Every motion capture session needs its own ROM capture for the subject calibration. Here a "session" refers to motion capture that occurs without any changes to the subject. For instance, if the subject takes off the suit for some reason and wears it again later, another session would begin, since the markers' positions may have changed.
The objective of the ROM is to find the largest possible range of subject motion so that the system can calculate all the possible variables for each marker and joint based on the template configuration. This information will be used to process the rest of the motion capture data.
A ROM capture should start with the T-Pose and end with the T-Pose. Change the operation mode to Capture (Figure 29. a).
The subject should stand in the T-Pose at the center of the capture volume, facing the Y-axis. Set your trial file details, including name, type and description (Figure 29. b. This is optional. The trial description is useful later, since it will appear with the trial information in the data management operation mode). Click the start button (Figure 29. c). The subject should now move all of their joints as much as possible.
An example video of ROM capture can be found here (thanks to Lance for this footage.)
Lance_ROM_640.mov (640 x 480 resolution, 18.3 MB 1:08)
After moving all joints around as much as possible, the subject should return to the T-Pose and the capture can be stopped. The system will automatically save all data to the hard disk.
Subject Calibration is a special type of post processing task for the ROM data. The point of subject calibration is to generate a calibrated Skeleton File (.VSK) that includes all the details for subject specific data in addition to the subject template information.
a. Open ROM data
Once you have finished the ROM motion acquisition, go to the data management operation mode. You should see that a new trial entry has been created under your project session. Right click on this ROM entry, select open, and then select raw capture data (Figure 30). Vicon iQ will load the raw data (x2d) file and the operation mode will change to post processing.
In post processing mode, you will not see any marker data in the 3D workspace, because at first this data exists only as 2D images. You can see this 2D data in the camera view mode. (Figure 31.)
b. Reconstruct marker
The first task after loading the ROM file is to reconstruct the 2D markers into 3D space. Select the reconstruct activity bar and click the reconstruct run button (Figure 32). Once the data is reconstructed, you will be able to see all the markers in 3D space (as shown in Figure 33 in the next section).
c. Load Template file (.VST)
Before you can start labeling the markers, you need to load the Vicon Skeleton Template file (.vst) to read all the necessary marker information (including names and color codings). Go to the Subject activity bar (Figure 33.) and click Create Vicon Skeleton Template button.
Figure 34 shows the VST selection window. First, select the location of your .vst file by clicking Change Dir. Once you specify the location, Vicon iQ will show the template pull-down menu (if there are multiple vst files in that location, you will see the others in the pull-down menu). In Figure 34, the template file Fullbody_James.vst has been used. Type a subject name to use (i.e. "James" here) and click OK.
Once you have created a skeleton template file, you should be able to see it in the Active Subject area as a VST entry (Figure 35.). At the same time, your skeleton template should appear in 3D space. You can change the visibility of each element type in the View Option area under -> Subjects.
d. Labeling (+ Label Range Of Motion)
Now you are ready to begin labeling the markers. Go to the Labeling activity bar. All marker names are listed in the Labels area (Figure 36. c).
Make sure to change the labeling mode to Sequence and Rules to Whole (or Forward) (Figure 36. a, b) so that iQ labels markers through the whole range of motion for as long as it is continuous.
You should start labeling at the first T-Pose frame. Typically this is the first frame of the motion, since you should always start with the T-Pose (however, if you are capturing yourself this may not be the case).
If there is some useless data before the T-Pose (i.e. you did the capture by yourself and had to move from the operating workstation to the center of the capture volume after starting the capture), you can move the time slider bar to the first T-Pose frame.
The labeling sequence begins with the first marker in the template file (Figure 36. c). Vicon iQ will highlight the first marker in the list.
Once you have selected this marker (left click on a white marker in the 3D workspace), the second one in the sequence will be highlighted. You can finish all marker labeling by selecting each one in the workspace.
White markers are unlabeled. Once you label a marker, its color changes based on the color setting in the template file, and it will connect to the other markers (see Figure 37).
If you are confused by a marker's abbreviation, hover your mouse cursor over the markers in the skeleton template. A small popup window will show the full name of the marker. This information can be used to find each marker. If you mislabeled a marker, select it in the Labels area (click the marker name in the list) and then select the correct marker in the 3D workspace.
Figure 38 shows the end result of labeling (all markers are labeled and colored.).
e. Autolabel Range of Motion
Now you should have at least one frame fully labelled. Vicon iQ can automatically label all the other frames. The idea here is that labeling may have gaps where markers are discontinuous, and the Autolabel Range of Motion operation can help solve this.
This time you can use one of the pipeline operations (you can find more details about this in the advanced material section).
Go to the pipeline activity bar. If you do not see a pull down menu enabled in the pipeline control, you will need to locate the saved pipeline files (.plf).
Click the Browse for folder button (Figure 39. a) and select the D:\resources\pipeline directory (Figure 39. This directory has some useful predefined pipeline operations).
Select the SubjectCalibration pipeline in the pull-down menu (Figure 39. b). Right click on the second entry of the pipeline task, Autolabel Range of Motion, and select Run Selected Op (Figure 40. c).
Before calibration, you must check the whole sequence of ROM data to make sure that no markers are mislabelled or unlabelled. Move the timeline slider bar or click the play button and visually examine the markers. If you find a problematic marker, go to the labeling activity bar again, select that marker's correct name from the list and click on that marker in the 3D workspace (correcting the label). If all labels are correct, you can go on to the next step.
f. Subject Calibration
After Autolabel Range of Motion, you can calibrate the subject based on the skeleton template file. Go to the Subject activity bar. Before you can start the calibration, you must set the T-Pose event.
Make sure that you are on a T-Pose frame. If not, move the time slider to find the first T-Pose frame. Click the General event icon (Figure 41. a) and then select the T-Pose icon (Figure 41. b). You will then see a blue T at the current frame, right below the time slider.
There are two ways to perform the subject calibration. One can be done in pipeline, the other is done manually.
- pipeline: In the pipeline activity bar, right click on Subject Calibration and then Run Selected Op.
- manual: In the subject activity bar, click the Calibrate Subject button in the Calibration area (Figure 41. c).
Once the calibration is done, you will see the subject type change in the Active Subject area (Figure 41. a): it should change from a VST (template) file to a VSK (calibrated) file.
Save out the VSK file (Figure 41. b). The file will be stored in your session directory.
Make sure to save out your intermediate file (.trial) to avoid losing any work that you've done so far. You can do this by going to the File menu and selecting save.
g. Trajectory Label
Now you must label all of the trajectories by running the Trajectory Label pipeline operation. Right click on this entry in the pipeline activity bar and select Run Selected Op.
h. Kinematic Fit
If you plan to use the skeleton animation in other software, you should run the Kinematic Fit to calculate all of the bone animation data in addition to the markers.
Execute the Kinematic Fit pipeline operation.
Once you finish all the ROM processing, the subject's bones should move along with the markers, and you can export the skeleton animation in other file formats (i.e. v file / bvh file ...).
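The BVH format mentioned above is a plain-text format: a skeleton HIERARCHY section (declared with ROOT and JOINT entries) followed by per-frame MOTION data. As a minimal sketch, the joint names of an exported file can be pulled out like this; the sample hierarchy below is invented for illustration.

```python
# Minimal sketch: extract joint names from the HIERARCHY section of a
# BVH file. BVH declares the skeleton with ROOT/JOINT keywords before
# the MOTION section; this scan is enough to list the joints.
def bvh_joint_names(text):
    names = []
    for line in text.splitlines():
        tokens = line.split()
        if len(tokens) >= 2 and tokens[0] in ("ROOT", "JOINT"):
            names.append(tokens[1])
    return names

sample = """HIERARCHY
ROOT Hips
{
    JOINT Spine
    {
        JOINT Head
    }
}
MOTION
Frames: 2
"""
print(bvh_joint_names(sample))  # ['Hips', 'Spine', 'Head']
```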
You are now ready to capture real motion data for your scenario or plan. All motion trials should still start with the T-Pose and end with the T-Pose.
Motion acquisition is a much simpler process. All that is necessary is to click the start button when the subject is ready (starting T-Pose) and click the stop button when the performance is over (ending T-Pose). The system automatically names each trial and records the data on the hard disk. If you wish to have better names and descriptions for your trial files, you can modify these in each capture trial before it starts, as demonstrated in the ROM section.
Post processing for motion data (not ROM data) is much simpler than ROM processing, since you will already have a calibrated subject file (.vsk). The VSK file helps the system process the rest of the motion data fairly accurately. This is why the subject calibration is important for each session. This automated task sometimes does not provide the quality and accuracy needed, though. If this is the case, you can re-do the labeling manually as instructed in the ROM processing section.
There are four main steps.
- Reconstruct markers
- Trajectory Labeling
- Kinematic Fit
- Save trial file
* Before you start the above steps, you need to make sure that a calibrated skeleton file is loaded in Vicon iQ. In the post processing operation mode, go to the subject bar and check for a loaded VSK file. If one is not loaded, you can load one by clicking the import vicon skeleton (vsk) button.
* To speed up this process, you can use the pipeline feature. The pipeline feature batches multiple jobs with assigned pipeline procedures. Refer to the Advanced Pipeline material in the left menu list for more details.
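The idea behind the pipeline feature can be sketched as a saved, ordered list of operations run over each trial in a batch. The four operation names mirror the steps listed above; the functions here are illustrative stand-ins that simply mark the trial, not real Vicon calls.

```python
# Stand-in operations mirroring the four post-processing steps.
def reconstruct(trial):      trial["reconstructed"] = True
def trajectory_label(trial): trial["labeled"] = True
def kinematic_fit(trial):    trial["fitted"] = True
def save_trial(trial):       trial["saved"] = True

# A "pipeline" is just an ordered list of operations.
PIPELINE = [reconstruct, trajectory_label, kinematic_fit, save_trial]

def run_pipeline(trials, ops=PIPELINE):
    for trial in trials:
        for op in ops:
            op(trial)
    return trials

batch = [{"name": "Trial01"}, {"name": "Trial02"}]
run_pipeline(batch)
print(batch[0]["saved"])  # True
```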
The Vicon iQ software supports various file formats for exporting. A truncated list is available here.
Format | Description
.X2D | Vicon raw capture data (2D image)
.C3D | Vicon motion data. Only includes markers and labels.
.V / .VSK | Vicon motion data format. Markers with labels and skeleton animation (does not include the structure of the skeleton; the subject calibration file (.vsk) is necessary for this skeleton info).
.trial | Vicon intermediate file format.
.fbx | Filmbox (MotionBuilder) native file format.
.csv | Comma-Separated Values.
.csm | Optical data for Character Studio (3D Studio Max).
.trc | Motion Analysis file format.
In the File menu, select the export menu and choose the file format type you want to export.
If you require an unsupported format, you may use the MotionBuilder software to export the data. MotionBuilder supports almost all available motion formats. A more detailed description of motion file formats can be found at Wikipedia.
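Exported .csv files can be read back with standard tools for your own analysis. The column layout below (frame, marker, x, y, z) is an assumption for illustration only; check the header row of your actual export, since the exact columns depend on the export settings.

```python
import csv
import io

# Sketch of loading exported marker data from CSV text.
# Column names are hypothetical; inspect your own export's header.
def load_marker_rows(csv_text):
    return list(csv.DictReader(io.StringIO(csv_text)))

sample = ("frame,marker,x,y,z\n"
          "1,LFHD,0.12,1.54,0.33\n"
          "1,RFHD,0.18,1.55,0.31\n")
rows = load_marker_rows(sample)
print(rows[0]["marker"], rows[0]["z"])  # LFHD 0.33
```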
There is a video clip from Vicon about the file export features (refer to the Resources menu).
If you have any trouble in using Mocap Studio, please contact EVL support.
Please clean up the studio after using it. You might want to apply some deodorizer to the Mocap Suit. If you feel a suit is not clean enough (too much sweat? bad smell?), we wash them periodically; please send a request to support. This is your responsibility.
This manual only covers the very basic and fundamental tasks. You may find further details and help in the Advanced or Resources menu in the left hand list.