We build algorithms that detect emotional states and physical conditions purely from sound data, especially the human voice. The technology is based on quantitative techniques rooted in musicology: audio signals are analyzed with musical parameters from the domains of loudness, articulation, tempo, rhythm, melody and timbre. The applications cover a broad range of topics, from the medical sector to the industrial field.
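To illustrate what such musically motivated parameters might look like in practice, the sketch below extracts simple proxies for loudness, melody and timbre from a short mono sample. It is a minimal illustration under our own assumptions, not the actual proprietary method; the function and feature names are hypothetical.

```python
import numpy as np

def extract_features(signal, sr):
    """Compute simple loudness, melody and timbre proxies from a mono signal."""
    # Loudness: root-mean-square level in dB relative to full scale.
    rms = np.sqrt(np.mean(signal ** 2))
    loudness_db = 20 * np.log10(rms + 1e-12)

    # Melody proxy: fundamental frequency from the autocorrelation peak,
    # searched within a plausible voice range of roughly 60-400 Hz.
    ac = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
    lo, hi = sr // 400, sr // 60
    lag = lo + np.argmax(ac[lo:hi])
    f0_hz = sr / lag

    # Timbre proxy: zero-crossing rate (brighter, noisier sounds cross more often).
    zcr = np.mean(np.abs(np.diff(np.sign(signal))) > 0)

    return {"loudness_db": loudness_db, "f0_hz": f0_hz, "zcr": zcr}

# Usage on a synthetic 200 Hz tone standing in for a short voice sample.
sr = 16000
t = np.arange(sr // 2) / sr                  # 0.5 s of audio
tone = 0.5 * np.sin(2 * np.pi * 200 * t)
features = extract_features(tone, sr)
```

In a real system these low-level proxies would be replaced by the full set of articulation, tempo and rhythm parameters described above.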
The algorithms are used in software applications that run in real time on the basis of short voice samples. The method is robust against impairments such as poor sound quality or different dialects, and can be applied over the telephone. Because our method differs substantially from traditional approaches to sound analysis, we find solutions to problems that some may consider impossible to solve.