Animated characters who use sign language should not have to be cut from a production because of "subs and dub" (subtitles and dubbing). Signal Flair not only captures a Deaf actor's performance for a character, it also permits that character to be "dubbed" into different sign languages for international distribution.
Signal Flair explores how human-driven motion data can be used to generate accurate, culturally grounded CG animations of sign languages. By combining motion capture, procedural animation, and linguistic modelling (in particular, the role of non-manual signals), the project is developing a system that translates real human gestures into high-fidelity digital avatars while preserving the grammatical, spatial, and expressive features unique to each sign language, so that the same animation can be "dubbed" into each country's own sign language.
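
To make the "dubbing" idea concrete, the toy sketch below shows one way manual and non-manual channels could be kept as separate, swappable units per sign language. It is purely illustrative: the SignSegment class, the gloss labels, and the ISL/BSL inventories are hypothetical, and real translation between sign languages also involves different sign orders and grammar, which this sketch deliberately ignores. It is only meant to show why the two channels are modelled separately.

```python
"""Toy illustration: keep manual and non-manual channels separate so an
utterance can be rebuilt in another sign language. Every name here
(SignSegment, the glosses, the ISL/BSL inventories) is hypothetical."""

from dataclasses import dataclass


@dataclass
class SignSegment:
    gloss: str        # shared lexical label, e.g. "BOOK"
    manual: str       # clip id for handshape, movement, orientation, location
    non_manual: str   # clip id for facial expression / head movement (carries grammar)


# Hypothetical per-language inventories; in practice each entry would come
# from a Deaf signer's captured performance in that language.
ISL = {"BOOK": SignSegment("BOOK", "isl_book_manual", "isl_book_nonmanual"),
       "WHERE": SignSegment("WHERE", "isl_where_manual", "isl_where_nonmanual")}
BSL = {"BOOK": SignSegment("BOOK", "bsl_book_manual", "bsl_book_nonmanual"),
       "WHERE": SignSegment("WHERE", "bsl_where_manual", "bsl_where_nonmanual")}


def dub_utterance(glosses, target_inventory):
    """Rebuild an utterance sign-by-sign from the target language's inventory.
    Both channels are replaced: non-manual grammar differs between sign
    languages just as manual forms do, so neither can simply be copied over."""
    return [target_inventory[g] for g in glosses if g in target_inventory]


# e.g. dub_utterance(["BOOK", "WHERE"], BSL) rebuilds a glossed utterance
# from the BSL inventory instead of the ISL one.
```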

Support from the Animation Innovation and Immersion Fund enabled AnamoLABS to undertake the core research and early testing required to build this pipeline. The funding made it possible to allocate dedicated time to community engagement, sign-language awareness, and technical experimentation across motion capture, animation workflows, and extended-reality interaction.
A key component of the project involved consultation with Deaf organisations, including the Cork Deaf Association, Deaf Rebels Cork, and interactions with the Irish Deaf Society. These engagements provided essential insights into communication barriers in media, education, and public environments. The team deepened its understanding of the visual and linguistic features of sign languages—handshape, movement, orientation, location, facial expression, and the non-linear grammatical structures that differ significantly from spoken languages. The process also strengthened the team's cultural awareness and grounding in Deaf community practices.
The primary technical objective was to design a modular CG animation pipeline using marker-less motion capture to acquire, process, and retarget sign language performances by trained actors. Motion data, including fine hand articulation, facial nuance, and upper-body dynamics, is captured using computer-vision and machine-learning–based pose estimation. The data is then cleaned, smoothed, and fitted to a skeletal model before being retargeted to rigged 3D characters.
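
As a rough illustration of the capture stage, the sketch below runs marker-less pose estimation over a recorded performance and applies a simple temporal filter. It assumes MediaPipe Holistic as the pose-estimation backend and OpenCV for video input, with an exponential moving average standing in for whatever cleaning and smoothing the real pipeline uses; the function and file names are hypothetical.

```python
"""Sketch of the capture stage: marker-less pose estimation on a video of a
signing performance, followed by simple temporal smoothing. MediaPipe Holistic,
OpenCV, and the EMA filter are assumptions, not the project's actual tooling."""

import cv2
import mediapipe as mp
import numpy as np

mp_holistic = mp.solutions.holistic


def landmarks_to_array(landmark_list, count):
    """Flatten a MediaPipe landmark list to a (count, 3) array, or NaNs if missing."""
    if landmark_list is None:
        return np.full((count, 3), np.nan)
    return np.array([[lm.x, lm.y, lm.z] for lm in landmark_list.landmark])


def capture_performance(video_path, alpha=0.5):
    """Return per-frame landmark arrays for body, both hands, and face,
    smoothed with an exponential moving average (alpha = smoothing strength)."""
    frames = []
    prev = None
    cap = cv2.VideoCapture(video_path)
    with mp_holistic.Holistic(model_complexity=2, refine_face_landmarks=True) as holistic:
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            results = holistic.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            # Body, both hands, and face are kept as separate channels so that
            # fine hand articulation and facial nuance survive downstream steps.
            current = np.concatenate([
                landmarks_to_array(results.pose_landmarks, 33),
                landmarks_to_array(results.left_hand_landmarks, 21),
                landmarks_to_array(results.right_hand_landmarks, 21),
                landmarks_to_array(results.face_landmarks, 478),
            ])
            if prev is not None:
                # Exponential smoothing: blend with the previous frame to reduce jitter.
                blended = alpha * current + (1 - alpha) * prev
                # Keep whichever frame actually has data when the other is missing.
                fallback = np.where(np.isnan(current), prev, current)
                current = np.where(np.isnan(blended), fallback, blended)
            prev = current
            frames.append(current)
    cap.release()
    return frames


# e.g. clip = capture_performance("signed_performance.mp4")
```

The landmark arrays would then be fitted to the pipeline's skeletal model before retargeting, a step not shown here.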
The resulting pipeline is being built to support multi-avatar translation, enabling a single motion sequence to be accurately applied across different character rigs while maintaining linguistic fidelity and expressive detail. This framework aims to streamline the production of sign language animation, reduce the need for manual keyframing, and provide a scalable technical foundation for accessible digital and immersive content.
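
Retargeting a single motion sequence onto multiple rigs can be sketched as a per-joint "delta from rest pose" transfer, as below. The bone-name map, the SciPy-based rotation maths, and the function itself are illustrative assumptions rather than the project's actual retargeting implementation.

```python
"""Sketch of applying one captured frame to a second character rig by copying
each joint's rotation relative to its rest pose. The bone-name map and the
rig dictionaries are hypothetical."""

from scipy.spatial.transform import Rotation

# Hypothetical mapping from capture-skeleton joint names to a rig's bone names.
BONE_MAP = {
    "left_wrist": "hand_l",
    "right_wrist": "hand_r",
    "left_index_mcp": "index_01_l",
    # ...one entry per captured joint
}


def retarget_frame(source_frame, source_rest, target_rest, bone_map=BONE_MAP):
    """Map one frame of captured joint rotations onto a target rig.

    source_frame:  {source_joint: Rotation} captured pose for this frame
    source_rest:   {source_joint: Rotation} rest pose of the capture skeleton
    target_rest:   {target_bone: Rotation} rest pose of the character rig
    Returns        {target_bone: Rotation} posed rig for this frame
    """
    posed = {}
    for src_name, rot in source_frame.items():
        tgt_name = bone_map.get(src_name)
        if tgt_name is None or tgt_name not in target_rest:
            continue  # this rig has no counterpart for the captured joint
        # The delta from the capture skeleton's rest pose carries the sign's
        # handshape and movement; re-applying it on top of the rig's own rest
        # pose keeps the pose readable even when proportions differ.
        delta = source_rest[src_name].inv() * rot
        posed[tgt_name] = target_rest[tgt_name] * delta
    return posed
```

Because only the delta from the rest pose travels across, the same clip can drive rigs with very different proportions, which is the core of the multi-avatar requirement described above.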



