Overview
As part of the Brunswick Immersive Experience showcased at CES 2025, we needed realistic swimming and boating scenes featuring human characters. Given our small team, we built a pipeline that combined Unreal Engine's MetaHuman system with Mixamo's motion-capture animations to deliver high-quality character motion without authoring custom animation sets from scratch.
I was responsible for creating a diverse set of human characters in MetaHuman Creator and integrating various animations, including swimming, idle, and seated poses, into the MetaHuman rigs. This involved retargeting animations from Mixamo's skeleton onto the MetaHuman skeleton, enabling seamless playback of third-party animations within the MetaHuman framework.
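At its core, retargeting pairs each Mixamo bone (or bone chain) with its MetaHuman counterpart; in UE5 this is configured through an IK Retargeter asset rather than written by hand. The engine-agnostic sketch below only illustrates the naming correspondence: the source names use Mixamo's standard `mixamorig:` prefix, the targets follow the UE5/MetaHuman body skeleton, the spine mapping is approximate because the two rigs have different spine bone counts, and `retarget_bone` is an illustrative helper, not part of any Unreal API.

```python
from typing import Optional

# Sketch: correspondence between Mixamo bone names and their UE5/MetaHuman
# equivalents. A production retarget (e.g. an IK Retargeter asset in UE5)
# pairs whole chains and handles differing proportions, not just names.
MIXAMO_TO_METAHUMAN = {
    "mixamorig:Hips": "pelvis",
    "mixamorig:Spine": "spine_01",       # spine counts differ; approximate
    "mixamorig:Spine1": "spine_02",
    "mixamorig:Spine2": "spine_03",
    "mixamorig:Neck": "neck_01",
    "mixamorig:Head": "head",
    "mixamorig:LeftShoulder": "clavicle_l",
    "mixamorig:LeftArm": "upperarm_l",
    "mixamorig:LeftForeArm": "lowerarm_l",
    "mixamorig:LeftHand": "hand_l",
    "mixamorig:RightShoulder": "clavicle_r",
    "mixamorig:RightArm": "upperarm_r",
    "mixamorig:RightForeArm": "lowerarm_r",
    "mixamorig:RightHand": "hand_r",
    "mixamorig:LeftUpLeg": "thigh_l",
    "mixamorig:LeftLeg": "calf_l",
    "mixamorig:LeftFoot": "foot_l",
    "mixamorig:RightUpLeg": "thigh_r",
    "mixamorig:RightLeg": "calf_r",
    "mixamorig:RightFoot": "foot_r",
}

def retarget_bone(mixamo_bone: str) -> Optional[str]:
    """Return the MetaHuman bone for a Mixamo bone, or None if unmapped."""
    return MIXAMO_TO_METAHUMAN.get(mixamo_bone)
```

Unmapped bones (fingers, twist bones) are exactly where manual alignment work tends to concentrate.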
The final implementation featured animated characters swimming around the boat and relaxing on deck, enhancing the realism and storytelling of the immersive simulation.
Tools and Techniques
1. Unreal Engine 5 – Animation retargeting, character integration, scene setup
2. MetaHuman Creator – Generation of high-fidelity, customizable digital humans
3. Mixamo – Source of motion capture animations (e.g., swim, idle, sit)
Challenges
1. Rig Compatibility: Mixamo and MetaHuman use different skeletons, with different bone names and proportions, requiring precise retargeting and manual alignment of joint chains for believable results.
2. Animation Transitions: Smoothly blending between various animations (e.g., swimming → treading water → idle) required additional pose editing and timing control.
3. Visual Cohesion: Ensuring the animations read well within the water simulation and remained physically believable across varying camera angles.
4. Optimization for Real-Time Performance: Maintaining visual quality while ensuring the animations were lightweight enough for real-time playback in a live demo environment.
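The transition problem in challenge 2 reduces to cross-fading pose weights over a short window, which Unreal handles with blend nodes in Animation Blueprints; smoothstep easing is a common weight curve for this. Below is a minimal, engine-agnostic sketch under those assumptions: `blend_weight` and the linear pose interpolation are illustrative, not Unreal API, and real engines spherically interpolate joint rotations rather than lerping raw values.

```python
def blend_weight(elapsed: float, duration: float) -> float:
    """Smoothstep weight for cross-fading from clip A (0.0) to clip B (1.0)."""
    if duration <= 0.0:
        return 1.0
    t = min(max(elapsed / duration, 0.0), 1.0)  # clamp to [0, 1]
    return t * t * (3.0 - 2.0 * t)              # ease in and out

def blend_pose(pose_a, pose_b, w):
    """Interpolate matching joint values, e.g. swimming -> treading water.
    Plain lerp keeps the sketch short; engines slerp rotations instead."""
    return [(1.0 - w) * a + w * b for a, b in zip(pose_a, pose_b)]
```

The timing control mentioned above corresponds to choosing `duration` per transition: a swim-to-tread blend can afford a longer window than a reactive pose change.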
Learnings
1. Gained hands-on experience with animation retargeting in Unreal Engine, bridging third-party mocap data with advanced rig systems like MetaHuman.
2. Learned to build rapid animation pipelines for simulation-ready characters without the need for custom keyframe animation.
3. Improved understanding of pose matching and transition blending, contributing to more natural-looking animated sequences.
4. Developed strategies for asset reuse and scalability, enabling future character-based scenes to be prototyped and iterated quickly.
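The pose matching in learning 3 can be reduced to a simple idea: pick the frame in the target clip whose joint values sit closest to the current pose, and start the cross-fade there instead of at frame zero. A toy sketch under that assumption; poses are flat lists of joint values, and `best_transition_frame` is an illustrative helper, not an engine function.

```python
def pose_distance(pose_a, pose_b):
    """Sum of squared differences between matching joint values."""
    return sum((a - b) ** 2 for a, b in zip(pose_a, pose_b))

def best_transition_frame(current_pose, target_clip):
    """Index of the frame in target_clip closest to current_pose, so a
    blend into the new clip starts from the least jarring point."""
    return min(range(len(target_clip)),
               key=lambda i: pose_distance(current_pose, target_clip[i]))
```

Searching for the nearest frame is what makes clip reuse scale: the same swim, tread, and idle clips can be chained in any order without hand-authoring a transition for every pair.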