Abstract
Methods: Muscle activity of seven facial muscles was measured bilaterally with sEMG in six volunteers. A triple-camera set-up recorded 3D lip movement. The generic face model in ArtiSynth was adapted to our needs, and we controlled the model using the volunteer-specific muscle activation patterns (MAPs). Three activation strategies were tested: activating all muscles; activating the three muscles with the highest bilateral activity, determined by averaging the left and right signals of each muscle and selecting the three with the highest variance; and activating only the muscles considered most relevant for each instruction, bilaterally. The model's lip movement was compared with the actual lip movement performed by the volunteers using 3D correlation coefficients.
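The two quantitative steps above — selecting the three most active muscle pairs and scoring simulated against measured lip movement — can be sketched as follows. This is a minimal illustration, not the authors' code: the muscle names and array shapes are hypothetical, and the 3D correlation coefficient is assumed here to be Pearson's r over the concatenated x/y/z coordinates, which may differ from the paper's exact definition.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sEMG envelopes, shape (n_samples, n_muscles) per side.
# Muscle labels are illustrative, not the paper's exact montage.
muscles = ["DAO", "OOI", "OOS", "ZYG", "RIS", "MEN", "LLS"]
left = rng.random((500, len(muscles)))
right = rng.random((500, len(muscles)))

# Bilateral mean activity per muscle, then pick the three muscles
# whose averaged signal shows the highest variance over time.
bilateral = (left + right) / 2.0
variances = bilateral.var(axis=0)
top3 = [muscles[i] for i in np.argsort(variances)[::-1][:3]]

def cc_3d(measured: np.ndarray, simulated: np.ndarray) -> float:
    """Assumed 3D correlation coefficient: Pearson's r over the
    flattened (n_points, 3) trajectories of both recordings."""
    return float(np.corrcoef(measured.ravel(), simulated.ravel())[0, 1])
```

With identical trajectories `cc_3d` returns 1.0, and it decreases toward 0 as simulated lip movement diverges from the measurement.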
Results: With all muscles activated, the median correlation coefficient between simulations and measurements was 0.77. The three-muscle strategy gave a median of 0.78, whereas with the instruction-specific strategy the median decreased to 0.45.
Conclusion: We demonstrated that MAPs derived from noninvasive sEMG measurements can control lip movement in a generic finite-element face model with a median correlation coefficient of 0.78. Ultimately, this is important for showing patient-specific residual movement using the patient's own MAPs. Once the required treatment tools and personalisation techniques for geometry and anatomy become available, this may enable surgeons to test the functional results of wedge excisions for lip cancer in a virtual environment and to weigh surgery against organ-sparing radiotherapy or photodynamic therapy.
Original language | English |
---|---|
Pages (from-to) | 47-59 |
Journal | International journal of computer assisted radiology and surgery |
Volume | 13 |
Issue number | 1 |
DOIs | |
Publication status | Published - 2018 |
Bibliographical note
All raw data (excluding raw videos) are available from the Open Science Framework (Eskes, M. (2017, August 20). Simulation of facial expressions using person-specific sEMG signals controlling a biomechanical face model. Retrieved from osf.io/dux3w. doi: 10.17605/OSF.IO/DUX3W).

Funding
Acknowledgements: The authors gratefully acknowledge all technical medicine students for their contributions (www.virtualtherapy.nl/publications). They also thank all volunteers for participating in this study. We thank John Lloyd, Sidney Fels, and the ArtiSynth team for providing the simulation platform for this work (www.artisynth.org). In particular, the authors express their gratitude to the Maurits en Anna de Kock Foundation (www.mauritsenannadekockstichting.nl) for funding the triple-camera set-up and the Porti EMG system. Lastly, we thank the reviewers for their constructive feedback, which helped us significantly improve the manuscript.
Funders | Funder number
---|---
Maurits en Anna de Kock Foundation |
University of Twente |