We build algorithms that detect emotional states and physical conditions purely from sound data, especially the human voice. The technology is based on quantitative techniques rooted in musicology. Audio signals are converted to the spectral domain via the Fourier transform and then analyzed with musical parameters from the domains of loudness, articulation, tempo, rhythm, melody, and timbre.
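As a minimal illustration of this kind of pipeline (not the actual proprietary method), the sketch below frames an audio signal, applies a Fourier transform to obtain a magnitude spectrogram, and derives two simple musical descriptors: RMS energy as a loudness proxy and the spectral centroid as a timbre proxy. The function name, frame sizes, and feature choices are illustrative assumptions, using only NumPy.

```python
import numpy as np

def spectral_features(signal, sr, frame_len=1024, hop=512):
    """Illustrative sketch: slice the signal into overlapping frames,
    apply a Hann window, and take the FFT of each frame to obtain a
    magnitude spectrogram; derive per-frame RMS energy (a loudness
    proxy) and the spectral centroid (a crude timbre proxy)."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack(
        [signal[i * hop : i * hop + frame_len] for i in range(n_frames)]
    )
    # Magnitude spectrogram via the real FFT of each windowed frame
    spectra = np.abs(np.fft.rfft(frames * window, axis=1))
    freqs = np.fft.rfftfreq(frame_len, d=1.0 / sr)
    # Loudness proxy: root-mean-square energy per frame
    rms = np.sqrt(np.mean(frames ** 2, axis=1))
    # Timbre proxy: magnitude-weighted mean frequency per frame
    centroid = (spectra @ freqs) / (spectra.sum(axis=1) + 1e-12)
    return rms, centroid

# Synthetic stand-in for a short voice sample: one second of a 220 Hz tone
sr = 16000
t = np.arange(sr) / sr
tone = 0.5 * np.sin(2 * np.pi * 220.0 * t)
rms, centroid = spectral_features(tone, sr)
```

For a pure 220 Hz tone the spectral centroid stays near 220 Hz; for real speech, trajectories of such descriptors over time would feed the downstream classification.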
The algorithms are used in software applications that run in real time on short voice samples. The method is robust against impairments such as poor sound quality or differing dialects and can be applied over the telephone. Because our method differs substantially from traditional approaches to sound analysis, we can solve problems that some consider impossible.