How can I edit motion capture footage



Contents

1. Introduction
   Motivation
   Structure
2. What is motion capturing?
   Definition
   Recording techniques
   Types of systems
   Areas of application
3. Historical overview
   Motion capturing
   Vicon and OMG
   MotionBuilder by Kaydara
4. Components of the Vicon system
   Hardware: MX cameras, MX Ultranet, host computer, Calibration Kit
   Software: Vicon iq
5. Workflow
   Pipelines: Vicon iq pipeline, MotionBuilder pipeline
   Pre-production
   Involvement of the performer
   Camera calibration
   Labeling and subject calibration
   Capturing the movements
   Post- and data processing
   Further use
Appendix
   Bibliography
   List of figures
   Guide

1. Introduction

1.1 Motivation

A Vicon MX motion capturing system enables a production team to undertake ambitious projects. So that you do not have to struggle with the complexity of the hardware and software, this guide is intended to help you understand the processes involved, work quickly, save time and achieve the desired result.

1.2 Structure

This work is divided into a theoretical and a practical part. The theory is covered in the main part; the practical part can be found in the appendix as a guide, which accompanies the user during the motion capture sessions to be carried out. The theory discussed below serves exclusively to familiarize you with the topic and its terminology.

2. What is motion capturing?

2.1 Definition

Motion
The term motion [mō-shən] is derived from the Latin verb movere, which translates as to move (German: bewegen). The root of the word can be found in the Indo-European term meuə. In English, motion is defined as: "The act or process of changing position or place."1 According to the vocabulary project2 of the University of Leipzig, the most significant right neighbor of the word motion is the term capture; in other words, both words predominantly appear together in the German language.

Capture
The word capture [kăp-chər] comes from the Latin term captūra and can be translated as catch. The root of the word lies in an Indo-European term.3 A suitable definition from English is: "The act of catching, taking, or holding a particle or impulse."4

Motion Capture (MoCap)
The combination of the two expressions above finally results in a new term, Motion Capture, which is understood as the recording or detection of movements. At this point it should be mentioned that motion capture or motion capturing is often abbreviated to MoCap, as is the case in this bachelor thesis. Alberto Menache, the author of the book Understanding Motion Capture for Computer Animation and Video Games (Morgan Kaufmann, 1999), describes motion capturing as "the process of recording a live motion event and translating it into usable mathematical terms by tracking a number of key points in space over time and combining them to obtain a single three-dimensional representation of the performance."

1 The American Heritage Dictionary of the English Language, Fourth Edition.
2 Project Der Deutsche Wortschatz, University of Leipzig (website, access from:).
3 The American Heritage Dictionary of the English Language, Fourth Edition.
4 The American Heritage Stedman's Medical Dictionary.

The technology of motion capturing makes it possible to record the movements of a real object (human, animal, machine, etc.) in real time and to save them as a three-dimensional movement sequence in the computer, in order to then transfer them to a virtual character. One of the best-known applications is the animation of computer-generated characters in films and computer games. Further areas of application are listed in chapter 2.4.

In motion capturing, the so-called performer wears a suit equipped with markers or sensors whose changes in location are recorded by special cameras. From the analysis of the marker/sensor positions and orientations over time, three-dimensional movement recordings are generated. After post-processing, i.e. the editing of the movement data combined with the correction of faulty records, the data can be imported into 3D systems. There they are linked to the skeleton of a 3D body (mapping), which adopts the movements and reproduces them accordingly. The outstanding benefit of motion capturing is the realistic, complex movement of the virtual character. By contrast, the technique of interpolating between keyframes previously used by animation studios looks far less natural.

2.2 Recording techniques

The systems for MoCap recording on the market can be divided into three types:

1. Inside-in systems: Both the sensors and the signal recording are located on the body of the performer. An example is an electromechanical exoskeleton, in which tracking takes place via potentiometers.

2. Inside-out systems: The sensors are on the performer's body, but the signal pick-up is external. In the XSens system5, for example, an externally generated magnetic field strength is measured; the data are transmitted by radio.

5 XSens Motion Technologies (website).
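To make the contrast with motion capture concrete, the following sketch shows the traditional keyframe technique mentioned above: a joint value is defined only at a few keyframes, and every frame in between is linearly interpolated. The function name and values are illustrative, not taken from any animation package.

```python
# A minimal sketch of traditional keyframe interpolation: poses exist only
# at a few keyframes, and in-between frames are linearly blended. MoCap,
# by contrast, supplies a measured value for every single frame.

def interpolate_keyframes(keyframes, frame):
    """Return a linearly interpolated value for `frame`.

    `keyframes` is a sorted list of (frame_number, value) pairs.
    """
    if frame <= keyframes[0][0]:
        return keyframes[0][1]
    if frame >= keyframes[-1][0]:
        return keyframes[-1][1]
    for (f0, v0), (f1, v1) in zip(keyframes, keyframes[1:]):
        if f0 <= frame <= f1:
            t = (frame - f0) / (f1 - f0)      # blend factor inside segment
            return v0 + t * (v1 - v0)

# Example: a joint angle keyed at frames 0, 10 and 20.
keys = [(0, 0.0), (10, 90.0), (20, 45.0)]
print(interpolate_keyframes(keys, 5))   # halfway between 0 and 90 -> 45.0
```

Every in-between value is a straight mathematical blend, which is exactly why purely keyframed motion can look mechanical compared to captured motion.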

3. Outside-in systems: The sensors are aimed at the performer (or at a room in which the performer is located). Optical systems that track reflective markers belong to this category.

2.3 Types of systems

Electromechanical systems
These systems use electromechanical suits (exoskeletal systems). Potentiometers measure the rotations and orientations of the joints. The advantages of this system are its low weight and low cost, and there are no occlusion problems.6 The disadvantages are that the performer's freedom of movement is restricted by the skeleton, changes to sensor positions are problematic, and the number of sensors is predetermined.7

Electromagnetic systems
In this system, an electromagnetic field is generated by a transmitter, and receivers attached to the performer determine position and orientation. Disadvantages of electromagnetic systems are the small movement space, complex calibration and the risk of distortion of the magnetic field by cables and metals.

Acoustic systems
In acoustic systems, ultrasonic transmitters attached to the performer's body send out impulses. Ultrasonic receivers set up nearby register these impulses and measure their positions. This system suffers from occlusion and reflection problems, and even the quality of the air matters, as it can influence the results.

Fiber optic systems
Here, the flexion angles of the joints are measured optically using fiber optic cables. Photo sensors determine the intensity with which light escapes when the cables are bent. Since the change in light intensity is used for the measurement, even minor external influences can easily cause errors.

6 Cf. Jackel, Neunreither, Wagner: Methoden der Computeranimation, Springer-Verlag.
7 Cf. ibid.

Image acquisition systems
These systems attempt to extract a person's movements from videos or images. For this purpose, suitable points in the image are identified and then linked after an analysis, from which animation data are extracted.

Optical systems
The Vicon system we use is an optical system with tracking of passive markers. Optical systems are among the best-known approaches; the decision to purchase this type of system was made after weighing the aspects described below.

Optical systems have in common that they use special cameras, but they are divided into two different types of recording:

1. Tracking with active markers: Pulsating light-emitting diodes are located on the performer's body and are recorded by the cameras.

2. Tracking with passive markers: Reflective, non-self-luminous markers are located on the performer's body and are made to reflect by the infrared pulses of the special cameras.

Advantages of optical systems are: the high accuracy that results from sensor detection in the infrared range; flexibility, since the number of markers can be expanded as required; the freedom of movement of the performer, as there are no cables to obstruct him; and the possibility of using higher frequencies to increase the frame rate (images per second) and obtain a more detailed recording. The system also has disadvantages, essentially occlusions, where markers are not seen by enough cameras, and overlaps, where the system confuses markers. It can also be difficult to find a room that is dark and free of reflections, as stray reflections could be recognized as ghost markers.

2.4 Areas of application

Motion capturing is used in a wide variety of areas, which are roughly outlined below and provided with examples:

Entertainment industry
Entertainment is now the dominant area, with character animation in computer games, films and commercials, character previews on set and full motion video8. Examples of the first successful films using MoCap technology are Jurassic Park (1993), Toy Story (1995), Titanic (1997), Batman and Robin (1997) and The Mummy (1999). At the beginning of the 21st century came the definitive breakthrough for motion capturing, with films such as Gladiator (2000), The Patriot (2000), The Lord of the Rings: The Fellowship of the Ring (2001), Star Wars Episode 1 (2001), Final Fantasy: The Spirits Within (2001), Pearl Harbor (2001) etc. As one can easily guess, motion capturing has become standard in today's big film productions. The same applies to computer games, in which the animation of virtual characters is obtained using MoCap technology; the best examples are the sports games from EA Sports.

Biomedicine
Motion analysis in orthopedics, pediatrics and neurology, physical therapies, object tracking in medical environments.

Engineering
Visualization of simulations, virtual prototyping and virtual reality.9

Sports
Optimizing the performance of athletes (for example improving a golfer's swing or training for cyclists and skiers).

Military
Technical training; use in the practice game America's Army.10

Justice
Reconstruction and animation of courses of action.

8 FMV is a pre-recorded television-quality video or computer-generated animation that is incorporated into a computer game. (See Vicon Product Glossary, page 10.)
9 Vicon MX Hardware System Reference.
10 America's Army (website, accessed on:).

3. Historical overview

3.1 Motion capturing

When researching the subject of motion capturing, one often comes across the two-dimensional motion recordings of humans and animals carried out in the 19th century by Eadweard Muybridge and Etienne-Jules Marey. In the following, however, the history of the MoCap technology known today is briefly outlined, which is characterized by the fact that the movements of a real person are transferred to a three-dimensional character.

It all started when several American universities independently developed different systems for motion recording in the late 1970s and early 1980s. At the time, the universities saw the purpose of motion capturing less in entertainment than in medicine (prostheses, sports medicine).11 An example of such a university is Simon Fraser University, Canada, which equipped people with potentiometers in its biomechanical laboratory, recorded their movements and used the data obtained to drive and graphically display computer-animated figures.12

With Brilliance, a commercial for canned food, MoCap technology was used commercially for the first time in 1984. The commercial, produced by Robert Abel, showed a three-dimensional robot woman who moved like a real woman. To achieve this, the production team attached black markers to the joints of a real actress and photographed her movements from different perspectives using Polaroid cameras. The footage was used to analyze the marker positions with the help of computers and to derive movement algorithms. The motion algorithms were applied to the joints, combined and exported as vector graphics.

11 See Matt Liverman: The Animator's Motion Capture Guide: Organizing, Managing, Editing, Charles River Media, 1st edition.
12 T. W. Calvert, J. Chapman and A. Patla, "Aspects of the kinematic simulation of human movement," IEEE Computer Graphics and Applications, Vol. 2, No. 9, November 1982.

A wireframe model of the robot woman could be driven with this data. The spot, which had a production time of four weeks and a length of 30 seconds, became known as Sexy Robot.13

Figure 1 - Scene from the Brilliance commercial

In the following years, further projects were developed by Pacific Data Images, for example the Waldo C. project in 1988 (real-time animation of a polygonal doll) and the exoskeleton that was used for animation in the film Toys. DeGraf-Wahrman Inc. developed Mike The Talking Head in 1988, in which a puppeteer controls an animated character in real time. Another sensational project was the music video Dozo - Don't Touch Me14, produced in 1989 by the animation studio Kleiser-Walczak, in which a singer was completely modeled and animated from MoCap movements. The first female synthespian (synthetic actor) was created. An optical system with reflective markers was used for the MoCap recording.

Figure 2 - Dozo from the music video Don't Touch Me

13 Cf. Alberto Menache: Understanding Motion Capture for Computer Animation and Video Games, Morgan Kaufmann.
14 Available on YouTube (accessed from:).

The music video Steam15, produced in 1993, is also worth mentioning; it received a Grammy for its special effects using motion capturing technology. In 1996, EA Games' FIFA 97 was the first computer game to contain MoCap animations. EA Games has since used MoCap technology in its sports games to make them appear as realistic as possible.16 In the mid-1990s, the film industry began using the now professional MoCap systems to create special effects in its films. This was the breakthrough in the entertainment sector.

3.2 Vicon and OMG

The hardware and software we use for the MoCap sessions come from Vicon. This company is the world leader in motion capture systems and has existed for more than two decades. Here is a brief historical overview: In 1984, Julian Morris founded Oxford Metrics Ltd. (today: OMG plc17) in Great Britain, together with its subsidiary Vicon Motion Systems, which after just a few years became the world market leader in motion capturing. In the beginning, Vicon Motion Systems sold its systems exclusively to research groups in the fields of biomechanics, orthopedics and gait analysis. In the mid-1990s, however, the business was expanded to include entertainment, and since then MoCap systems have also been sold to animation studios.18 Peak Performance Technologies Inc., a company established in Colorado, USA, in 1984 that produced computer- and video-based biomechanical analysis tools for athletes19, was incorporated into the Oxford Metrics Group in 2005 and merged with Vicon Motion Systems into a single company: Vicon.

Figure 3 - OMG and Vicon logos

15 Available on YouTube (accessed from:).
16 Movement Research Lab (accessed on:).
17 OMG Oxford Metrics Group (website).
18 OMG website (accessed:).
19 Vicon website (accessed:).

Awards: In 1996 and 2001, Vicon Motion Systems won the Queen's Award for International Trade, and in 2005 one of the Academy Scientific and Technical Awards was given in recognition of the development of the Vicon motion capture technology.20

3.3 MotionBuilder by Kaydara

MotionBuilder is the software we use for data processing. It is an essential part of the MoCap workflow, as it provides a variety of functions that greatly facilitate the fine-grained editing of animations. This will be discussed in detail in later chapters; first, a brief look at its history: Kaydara Inc., founded in 1993 in Montreal (Canada), developed a product called FiLMBOX aimed at animation studios which, among other things, could import MoCap and 3D data, edit them in real time and export them again. FiLMBOX was very successful and was used in films, on television, for commercials, on the web and in game development. The software advanced to become the industry standard for editing, cleaning and modifying MoCap and animation data.21 In March 2004, Kaydara released MotionBuilder, a further development of FiLMBOX with a new user interface, extended functions, and import and export options for a variety of formats.22 As a growing number of animation studios using motion capture technology worked with MotionBuilder and the software increasingly dominated the market, Alias finally bought Kaydara in October 2004. In January 2006, Alias was in turn taken over by Autodesk, so that the software is now called Autodesk MotionBuilder and is currently in version 7.5.23

Conclusion: With the modern Vicon MX system and the current version of the MotionBuilder software, a production team is technically able to achieve high-quality results in the area of animation.

20 Academy of Motion Picture Arts and Sciences (accessed on:).
21 Measurand: History Of Animation Companies (accessed by:).
22 Golem: Kaydara announces MotionBuilder 4.0 animation software.
23 Autodesk MotionBuilder (accessed by:).

4. Components of the Vicon system

4.1 Hardware

The Vicon system available to us consists of the hardware components described below, which are collectively referred to as the Vicon MX Architecture.24

Figure 4 - Vicon MX Architecture with 6 cameras

4.1.1 MX cameras

The six Vicon MX3 cameras we use are equipped with several high-speed processors, which enable image processing in real time. The resolution per camera is 659 x 494 pixels (aspect ratio 4:3), the maximum frame rate is 242 frames per second, and the sensor is of the CMOS type.25 In addition to the video cameras and their stroboscopic units, the camera system comprises suitable lenses, optical filters and cables.

24 The technical data listed in this chapter are taken from the Vicon MX Hardware System Reference, Revision 1.3.
25 Complementary Metal Oxide Semiconductor.

Figure 5 - Vicon MX camera
Figure 6 - MX camera rear panel

Camera lenses and strobe unit
The strobe unit is attached to the front of the camera and illuminates the capture volume26 by generating an infrared light pulse that makes the markers attached to the performer reflect. The reflected light passes through an optical camera filter that only lets light of the required wavelength through to the lens. The camera lens picks up the reflected light from the scene and forms a sharp image on the sensor surface. The camera then converts the light pattern into digital data representing the position and radius of each marker in the image and forwards it to the MX Ultranet unit.

MX cables
Proprietary MX cables connect the system components and carry a combination of power, Ethernet communication, synchronization and video signals.27

26 The capture volume is the space in which motion is detected and which can be captured by the cameras. This volume is reconstructed in Vicon iq's 3D workspace and displayed in 3D.
27 Detailed information can be found in the Vicon MX Hardware System Reference, Revision 1.3.
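The reduction of a camera image to "position and radius of each marker" can be illustrated with a toy blob detector. This is an assumption about the general principle, not the MX camera's actual firmware: bright pixel clusters in a thresholded grayscale frame are reduced to a centre and an area-equivalent radius.

```python
# A toy sketch (not the on-camera processing of the Vicon MX3) of reducing
# a grayscale frame to per-marker (x, y, radius) data: bright pixels are
# grouped into blobs by flood fill, then each blob becomes one tuple.

import math

def detect_markers(frame, threshold=200):
    """Reduce a 2-D grayscale frame (list of rows of intensities) to a
    list of (x, y, radius) tuples, one per bright 4-connected blob."""
    h, w = len(frame), len(frame[0])
    seen = [[False] * w for _ in range(h)]
    markers = []
    for y in range(h):
        for x in range(w):
            if frame[y][x] >= threshold and not seen[y][x]:
                stack, pixels = [(x, y)], []
                seen[y][x] = True
                while stack:                      # flood fill one blob
                    px, py = stack.pop()
                    pixels.append((px, py))
                    for nx, ny in ((px+1, py), (px-1, py), (px, py+1), (px, py-1)):
                        if 0 <= nx < w and 0 <= ny < h and \
                           frame[ny][nx] >= threshold and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((nx, ny))
                cx = sum(p[0] for p in pixels) / len(pixels)
                cy = sum(p[1] for p in pixels) / len(pixels)
                r = math.sqrt(len(pixels) / math.pi)  # area-equivalent radius
                markers.append((cx, cy, r))
    return markers

frame = [[0] * 8 for _ in range(8)]
for y, x in ((2, 2), (2, 3), (3, 2), (3, 3)):
    frame[y][x] = 255                             # one 2x2 "marker"
print(detect_markers(frame))                      # one blob centred at (2.5, 2.5)
```

Sending only these few numbers per marker, instead of whole images, is what keeps the data rate between camera and MX Ultranet manageable.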

4.1.2 MX Ultranet

The MX Ultranet unit records the data from the cameras and forwards it to the host computer. It provides power to the cameras and synchronized communication between cameras and host computer.

Figure 7 - MX Net front panel
Figure 8 - MX Net rear panel

For information on the rear connections, please refer to the Vicon MX Hardware System Reference, Revision 1.3.

4.1.3 Host computer

The host PC records the MoCap data, visualizes it and processes it with the Vicon iq software installed on it. Processor speed and memory are the most important factors in system performance; Vicon therefore recommends a fast hard disk and 1 GB of RAM. A high-quality graphics card should also be used so that the high graphics demands of the Vicon RealTime Engine can be satisfied. The Vicon MX system uses Gigabit Ethernet connections, so the host PC needs a suitable network card (a PC with a dedicated Ethernet port) to communicate with the cameras.

4.1.4 Calibration Kit

The Calibration Kit contains tools for the exact calibration of the system. The kit can be used to carry out the two necessary types of calibration: camera calibration using a wand (calibration rod) and calibration of the global coordinate system using an L-frame.

Figure 9 - Calibration Kit

The following is an overview of the elements in the Calibration Kit. Only devices 1, 2, 8 and 11 are used by us and are listed here:

No. | Description
1   | 240 mm wand spacer bar: a 3-marker stick with 14 mm markers.
2   | Wand handle: can be used with the 240 mm rod (1) or the 390 mm rod (5).
8   | L-frame (also called Ergo Calibration Frame): static calibration object with four 9.5 mm markers (exchange with 14 mm or 25 mm markers possible). Compare (11).
11  | Calibration Marker Pack: contains marker sets of sizes 9.5 mm, 14 mm and 25 mm that are attached to the Ergo Calibration Frame (8).

Static calibration object
The L-frame (see no. 8 in the table above) is a static calibration object that is used to set up the global coordinate system in the capture volume. 9.5 mm markers, which we also use for our motion capture sessions, are attached as standard.

Accessory Kit
The Accessory Kit contains utensils that are required before starting to use the system: adhesive tapes (Micropore, gaffer) for fixing markers, the HASP license dongle, Velcro rollers (hook-and-loop fastener), the Vicon Lycra suit and additional reflective markers.

4.2 Vicon iq software

The software belonging to the Vicon system, which we use to record the performer's movements as well as for processing and post-processing, is called Vicon iq and is available in version 2.5. Vicon iq is used to manage a MoCap production: it supports the MoCap team in working through the workflow and is used to manage the data obtained. Thanks to the integrated real-time engine Tarsus, Vicon iq can record the 2D data from the MX cameras and display it three-dimensionally in real time. Various post-processing operations are also available, including reconstruction, labeling of the movement paths, filling gaps and fitting to a kinematic model. This is discussed in detail in the guide in connection with the batch pipeline.28

At this point it should be noted that the Vicon MX Architecture can also be operated with software other than Vicon iq, including BodyBuilder, Workstation, Polygon and Tracker.

28 See Vicon (accessed by:).

5. Workflow

The motion capturing process can be roughly divided into: 1. planning, 2. movement recording, 3. data cleansing, 4. post-processing and 5. mapping the movement onto a figure to be animated. The following overview adds a few levels of detail and helps to give a better picture of the MoCap production process:

Figure 10 - Workflow of a MoCap production

5.1 Pipelines

Vicon iq pipeline
After the hardware has been set up (see guide), the workflow of the Vicon iq software can be started. This process can be represented with the following pipeline:

1. Creation of the database, start of the real-time engine
2. Camera calibration
3. Calibration of the capture volume
4. Range of motion
5. 3D reconstruction of the data
6. Labeling of the skeleton (using the Vicon Skeleton Template)
7. Subject calibration (calculating the structure of the Vicon Skeleton)
8. Recording movements (motion capturing)
9. Cleaning up the MoCap data (occlusions and malfunctions)
10. Export as a C3D file

MotionBuilder pipeline
The work steps to be carried out in MotionBuilder can generally be mapped as follows:

1. Import the MoCap data (C3D file)
2. Create an Actor
3. Connect the MoCap data with the Actor
4. Import the 3D figure with skeleton from Maya
5. Characterize the skeleton of the 3D figure
6. Connect Actor and skeleton
7. Optional: (a) bug fixes, (b) modification of movements, (c) combination of motion tracks
8. Plot animation onto the skeleton
9. Export for further processing
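Because every stage of these pipelines depends on the one before it, a production script can treat them as ordered checklists that abort on the first failure. The sketch below is an organizational aid only; the step names follow the list above, and nothing here calls any actual Vicon iq or MotionBuilder API.

```python
# A hedged sketch of keeping the Vicon iq pipeline as an ordered checklist.
# `execute` stands in for whatever performs (or confirms) each step.

VICON_IQ_PIPELINE = [
    "create database / start real-time engine",
    "camera calibration",
    "capture volume calibration",
    "range of motion",
    "3D reconstruction",
    "labeling (Vicon Skeleton Template)",
    "subject calibration (VSK)",
    "record movements",
    "clean up MoCap data",
    "export C3D",
]

def run_pipeline(steps, execute):
    """Run `execute(step)` for every step in order, aborting on the first
    failure, since each stage depends on the previous one."""
    for i, step in enumerate(steps, 1):
        if not execute(step):
            raise RuntimeError(f"step {i} failed: {step}")
        print(f"[{i}/{len(steps)}] {step} done")

run_pipeline(VICON_IQ_PIPELINE, lambda step: True)  # dry run
```

In practice the `execute` callback would be replaced by manual confirmation or by calls into whatever tooling drives the session.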

5.2 Pre-production

1. Performer
Frequent recordings of full-body animation include: walking, running, jumping, dancing, falling, fighting, acrobatics. Before starting a session, you should prepare a script in which you determine which movements are required and which should be recorded. When performing the movements, the performer should have internalized the virtual character particularly well in order to best represent its properties. Important characteristics include: age, gender, height, stature and weight, impaired movement, habits and intelligence, clothing, mood, etc.

2. Script
The script should consist of the following elements:
- Names: list of all characters.
- Scene heading: a brief description of the location and the time of day.
- Action: a description of all occurring scene elements and actions.
- Dialogues: conversations between characters in the scene. In the dialogues, the name of the speaking character appears at the beginning of the line.
- Transitions: comments on the camera transitions, such as a dissolve (the camera image becomes blurred and changes to another scene).

3. Storyboard
A storyboard is a conceptual schedule that, in the style of a comic strip, breaks a script's story down into individual images. It is used to visually represent the basic ideas, including the actions of the character, the timing of movements and the transitions between scenes. From the comic strip, a motion list can be created that gives the production team the required recordings.

4. List of recordings
It should be noted in which take the camera recorded which scene. For this purpose, a numbered list of all recordings (takes) is to be created, with an explanation of the performance shown in each. The camera settings intended for the respective recording can also be noted (tracking, close-up, wide shot).

Cf. course ATEC 6351 of the University of Texas at Dallas, Professor Midori Kitagawa, website: /assignments/assignment_1.htm (accessed on:).

5.3 Involvement of the performer

The person whose movements are recorded is called the trial subject in English; we use the term performer. In principle, any object can be captured (including animals and machines). For the sake of simplicity, and because it is the most frequent case, a human performer is assumed in the following. The figure below shows the workflow in which the performer is involved:

Figure 11 - Workflow for the performer

The steps shown in this workflow are described in detail in the guide.

5.4 Camera calibration

The camera calibration (also known as system calibration or dynamic calibration) is the first essential step in a MoCap production. It is divided into two phases: In the dynamic phase, the calibration wand is used to measure the physical position and orientation of each individual Vicon camera in the system and, if necessary, to correct lens settings. In the static phase, a static calibration object (L-frame) is used to set the global coordinate system for the capture volume. The camera calibration is necessary in order to reconstruct the 3D movement data without errors.30

Calibration process
The performance of the Vicon MX system depends heavily on the accuracy with which the system has been calibrated. The calibration process comprises the acquisition of internal and external camera parameters. Internal parameters include the aperture distances and the image distortion31; external parameters include the camera positions and orientations. The dynamic calibration determines all of the above parameters. A linearization method is used in which the optical distortions of the camera lenses are measured and a correction matrix is created. The corrections are then applied to each data frame for each camera.32

Calibration object
This is a device used to calibrate the capture volume. As mentioned before, there are two types: the L-frame and the wand. Both are made of metal and have reflective markers attached to them. Vicon iq uses the known physical dimensions of these objects to calculate the calibration parameters.

30 Cf. Vicon iq System Reference Volume I, Revision 1.0, pages 211 f.
31 Image distortion is the distortion in the measurement image; in addition to object distortion, it contains all other deviations from the central projection (DIN) that have become effective in the measurement image.
32 See Vicon MX Hardware System Reference, Revision 1.3, page
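To give an idea of what "measuring lens distortion and applying a correction to each frame" means in practice, the following sketch uses a first-order radial distortion model. This is a common textbook model and an assumption on our part; the actual Vicon linearization method is not documented here, and the coefficient value is purely illustrative.

```python
# A minimal sketch (assumption, not Vicon's linearization) of a radial
# lens-distortion correction: a detected 2-D marker position is mapped
# back toward its ideal position using a coefficient k1 that calibration
# would estimate.

def undistort(point, center, k1):
    """Correct a point for first-order radial distortion.

    `point` and `center` are (x, y) in pixels; `k1` is the radial
    coefficient determined during calibration.
    """
    dx, dy = point[0] - center[0], point[1] - center[1]
    r2 = dx * dx + dy * dy            # squared distance from image centre
    scale = 1.0 + k1 * r2             # first-order radial term
    return (center[0] + dx / scale, center[1] + dy / scale)

# Example: a marker detected near the image border is pulled slightly
# inward once the correction is applied.
corrected = undistort((600.0, 450.0), (329.5, 247.0), k1=1e-7)
```

Distortion grows with the distance from the image centre, which is why markers near the border of the image benefit most from this per-frame correction.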

5.5 Labeling and subject calibration

Reconstructed three-dimensional movement data consist of movement paths (trajectories) that indicate the marker positions in space over time. Vicon iq calculates the trajectories by linking the marker positions from frame to frame. The reconstructed image can be viewed in Vicon iq's 3D Live Workspace. In order to assign the markers to a hierarchy and to make them distinguishable from one another, further steps are necessary:

1. Range of Motion (RoM): Before a subject calibration can be carried out, an essential initial recording, the so-called Range of Motion, is necessary. During this recording, the performer moves his limbs and joints in every imaginable way so that Vicon iq can calculate the marker positions on the suit and their relative dimensions from the large amount of data obtained.

2. Labeling: Labeling means that every marker shown in Vicon iq's 3D Live Workspace is labeled using the predefined markers of a Vicon Skeleton Template. The production team must do this manually.

3. Subject calibration: Based on the Vicon Skeleton Template, the Vicon iq software assigns the marker names to all subsequent frames on its own. It then calibrates the template and adapts it to the structure of the performer's skeleton. This proportioned data is saved as a Vicon Skeleton (VSK) and can then be processed with various operations in the batch pipeline.

5.6 Capturing the movements

If the calibration described in the previous chapters has been successful, recording the movements can begin. To do this, the performer stands in the middle of the capture volume and the capture process is started in Vicon iq. During the MoCap process, the performer performs a number of movements in the capture volume as directed by the production team. During this time, the Vicon MX cameras record all markers and send the data to the MX Ultranet, the central unit of the Vicon system, which collects and processes the 2D video data. It then combines the data with the camera coordinates and determines a three-dimensional image, with the three-dimensional data existing separately for each marker. The 3D data is then combined from frame to frame in such a way that trajectories are created.

Ideally, this process would produce continuous, complete movements. In reality, however, occlusions occur when not enough cameras can see a marker, or markers come so close together (crossover) that they cannot be interpreted. Conversely, the cameras may perceive reflections that do not originate from the markers but are nevertheless displayed as markers (ghost markers). Furthermore, misrepresentations can occur if the performer leaves the boundaries of the capture volume. As a result of these possible errors, the movement data is never completely available; the gaps and missing areas must be compensated for in post-processing with various operations.

5.7 Post- and data processing

After the Vicon system has been set up and calibrated, the subject has been calibrated and the MoCap has been recorded, the data obtained is post-processed. This is necessary because the technical errors described in the previous section can occur. Post-processing includes operations such as the 3D reconstruction of the 2D camera data (generation of movement paths), the adjustment of the MoCap data (clean-up, fill gaps) and their further processing (labeling, Vicon Skeleton). Data processing, on the other hand, refers to the use of the finished data.

Post-processing
This process includes a wide range of possible operations in Vicon iq, to name just a few: filtering out noise in unclear recordings, filling discontinuous or missing trajectories through linear, quadratic or spline interpolations (fill gaps), and kinematic fitting, in which an incorrectly reconstructed model is restored on the basis of a generic skeleton model.
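The frame-to-frame linking that turns per-frame marker positions into trajectories (section 5.5) can be illustrated with a toy nearest-neighbour matcher. This is a simplification we assume for illustration, not Vicon's reconstruction algorithm; real trackers use prediction and multi-camera consistency, but the sketch shows where occlusion gaps come from.

```python
# A toy sketch (not Vicon's algorithm) of linking marker positions from
# frame to frame: each marker from the previous frame is matched to the
# closest position in the new frame, if one is close enough.

import math

def link_frame(prev_positions, new_positions, max_jump=50.0):
    """Match each previous 3-D marker to its nearest new position.

    Returns one entry per previous marker: the matched new position, or
    None when no candidate lies within `max_jump` (e.g. the marker was
    occluded), which is exactly what later shows up as a gap.
    """
    linked = []
    for p in prev_positions:
        best, best_d = None, max_jump
        for q in new_positions:
            d = math.dist(p, q)          # Euclidean distance in 3-D
            if d < best_d:
                best, best_d = q, d
        linked.append(best)
    return linked

prev_f = [(0.0, 0.0, 1000.0), (200.0, 0.0, 1000.0)]
next_f = [(2.0, 1.0, 1001.0)]            # second marker occluded
print(link_frame(prev_f, next_f))        # second entry is None
```

When two markers pass within `max_jump` of each other, this greedy matching can also pick the wrong candidate, which is the crossover confusion described above.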
The accompanying guide details the various processes available.
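The gap-filling operation mentioned above can be pictured as interpolating one coordinate of a marker track across the frames where the marker was not reconstructed. The following sketch uses linear interpolation only; it is a simplified stand-in for the Vicon iq operation, which also offers quadratic and spline variants:

```python
def fill_gaps(track):
    """Linearly interpolate missing samples (None) in a marker track.

    track: one coordinate of a marker, sampled once per frame; None marks
    frames where the marker could not be reconstructed (occlusion,
    crossover). Gaps at the very start or end are left untouched,
    since there is no second anchor point to interpolate toward.
    """
    filled = list(track)
    known = [i for i, v in enumerate(filled) if v is not None]
    for a, b in zip(known, known[1:]):
        for i in range(a + 1, b):
            t = (i - a) / (b - a)          # position inside the gap, 0..1
            filled[i] = filled[a] + t * (filled[b] - filled[a])
    return filled

# A marker occluded on frames 2-3: the gap is bridged linearly.
print(fill_gaps([0.0, 1.0, None, None, 4.0]))  # [0.0, 1.0, 2.0, 3.0, 4.0]
```

In practice each marker has three such tracks (x, y, z), and spline variants give smoother results for longer gaps, at the price of possible overshoot.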

Data processing

In data processing, the MoCap data can be combined with the skeleton of a 3D figure in MotionBuilder. This is done via the intermediate instance of a so-called actor. The following changes can then be made with the help of the software:

- Movements can be modified (for example, the jump height can be increased significantly).
- Inaccuracies and errors can be compensated for with various tools.
- Movements from different recordings can be combined into an overall movement without noticeable transitions (motion blending).
- Constraints are possible (this includes, for example, keeping a part of the body from moving).
- Keyframing is available for the existing animation clip.
- Objects can be built in and moved in the same way as the animated figure.
- Intermediate poses can be defined, which can also be animated.
- With the help of control rigs, the proportions of the skeleton are maintained and distortions and unnatural movements (e.g. a knee buckling backwards) are prevented. Weights determine how much the rig affects the 3D body.
- Inverse kinematics techniques create movement dependencies between the various limbs (for example, raising the thigh also moves the tibia). In inverse kinematics, the movement of the other limbs is calculated from the position of an end effector.
- Very important is the possibility of retargeting, in which the proportions of the performer in the movement data are adapted to those of the virtual figure. Retargeting allows the MoCap data to be reused on any figure.
- Finally, refinements should be mentioned, including, for example, the movement of clothing, breathing movements or muscle deformations.

Cf. Jackel, Neunreither, Wagner: Methods of Computer Animation, Springer-Verlag 2006, pages 176 f.
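The inverse-kinematics dependency described above, where the position of an end effector determines the placement of the other limbs, can be illustrated with a minimal planar two-bone solver (thigh plus shin reaching an ankle target). This is a generic textbook construction using the law of cosines, not MotionBuilder's actual solver:

```python
import math

def two_bone_ik(target_x, target_y, l1, l2):
    """Solve a planar two-bone IK chain (e.g. thigh + shin).

    Given the hip at the origin, bone lengths l1 (thigh) and l2 (shin),
    and an ankle target, return (hip_angle, knee_bend) in radians.
    knee_bend is 0 for a fully straightened leg.
    """
    d = math.hypot(target_x, target_y)
    # Clamp the target onto the reachable annulus [|l1 - l2|, l1 + l2].
    d = max(abs(l1 - l2) + 1e-9, min(l1 + l2 - 1e-9, d))
    # Law of cosines: interior angle at the knee.
    cos_knee = (l1 * l1 + l2 * l2 - d * d) / (2 * l1 * l2)
    knee = math.pi - math.acos(max(-1.0, min(1.0, cos_knee)))
    # Hip angle = direction to the target + offset inside the triangle.
    cos_hip = (l1 * l1 + d * d - l2 * l2) / (2 * l1 * d)
    hip = math.atan2(target_y, target_x) + math.acos(max(-1.0, min(1.0, cos_hip)))
    return hip, knee

def forward(hip, knee, l1, l2):
    """Forward kinematics: ankle position for the given joint angles."""
    kx, ky = l1 * math.cos(hip), l1 * math.sin(hip)
    return kx + l2 * math.cos(hip - knee), ky + l2 * math.sin(hip - knee)
```

Round-tripping through `forward(*two_bone_ik(x, y, l1, l2), l1, l2)` recovers the target, which is exactly the dependency the text describes: moving the ankle target forces consistent thigh and shin rotations. Production solvers additionally handle 3D pole vectors, joint limits and soft constraints.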

5.8 Further use

Once post- and data processing are complete, the finished MoCap animation can be exported. Before doing this, however, the animation and the skeleton of the 3D figure must be linked, or merged, in MotionBuilder. This process is called plotting (also known as baking or burning). Once this has been done, only the 3D figure (referred to as a character in MotionBuilder) remains, whose skeleton has taken over the movement data. The animation can now be exported and used in other programs such as Autodesk Maya, Autodesk 3ds Max, Cinema 4D, LightWave 3D and Softimage XSI. The FBX format (a file interchange format for 3D content), which is supported by most 3D software, is ideal for export.
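Plotting (baking) can be pictured as sampling the constraint- or rig-driven pose once per frame and storing the results as plain keyframes, so that the exported file no longer depends on the rig or the actor. The sketch below illustrates only this idea; `toy_rig` is a hypothetical stand-in, not a MotionBuilder API:

```python
def bake(pose_fn, start_frame, end_frame):
    """Sample a procedurally driven pose once per frame.

    pose_fn: frame -> {joint_name: rotation}, computed live by a rig,
    constraint, or retargeting setup (hypothetical stand-in). The
    returned per-frame keyframes no longer depend on that machinery.
    """
    return {f: dict(pose_fn(f)) for f in range(start_frame, end_frame + 1)}

# A toy "rig" that swings two joints over time.
def toy_rig(frame):
    return {"hip": 10.0 * frame, "knee": 5.0 * frame}

keys = bake(toy_rig, 0, 2)
print(keys[1]["hip"])  # 10.0 -- now stored as data, not computed by the rig
```

This is why a plotted character can be moved to Maya or 3ds Max: the FBX file carries explicit keyframes on the skeleton rather than live constraints.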

6. Appendix

6.1 Bibliography

[1] Alberto Menache: Understanding Motion Capture for Computer Animation and Video Games. Morgan Kaufmann, 1999
[2] Matthew Liverman: The Animator's Motion Capture Guide: Organizing, Managing, and Editing. Charles River Media, Inc., 2004
[3] Dietmar Jackel, Stephan Neunreither, Friedrich Wagner: Methods of Computer Animation. Springer-Verlag, 2006

6.2 List of Figures

[1] Scene from the Brilliance commercial. Source: Computer Animation, Neal Weinstock, /abel.html
[2] Dozo from the music video Don't Touch Me. Source: Kleiser-Walczak, Timeline
[3] Logos of OMG and Vicon. Source: Vicon Management Team
[4] Vicon MX architecture with 6 cameras. Source: Vicon MX Hardware System Reference, Revision 1.3
[5] Vicon MX camera
[6] MX camera rear panel
[7] MX Net front panel
[8] MX Net rear panel
[9] Calibration kit. Source: Vicon MX Hardware System Reference, Revision 1.3
[10] Workflow of a MoCap production
[11] Workflow for the performer. Source: Vicon MX Hardware System Reference, Revision 1.3