Emovere is Max/MSP-based software that generates sound from data received in real time over both the MIDI and Open Sound Control (OSC) protocols. Incoming data can be mapped to different sound-generation strategies, and the output configuration can be adjusted for two or more speakers, including stereo and quadraphonic setups, among others. It was developed during an 18-month research project for a performance of the same name – Emovere – featuring four dancers connected to 16 physiological sensors, including EMG and ECG sensors.
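The core idea of mapping incoming sensor data to a sound-generation parameter can be sketched outside Max/MSP as a simple range mapping. The function below is an illustration only; the parameter names and ranges are hypothetical, not Emovere's actual mappings.

```python
def scale(value, in_lo, in_hi, out_lo, out_hi):
    """Linearly map a sensor value into a synthesis-parameter range,
    clamping to the input range first (like Max's [scale] with limits)."""
    value = min(max(value, in_lo), in_hi)
    return out_lo + (value - in_lo) * (out_hi - out_lo) / (in_hi - in_lo)

# e.g. map a normalized EMG amplitude (0..1) to an oscillator
# frequency range; the 110-880 Hz range is a made-up example
freq = scale(0.5, 0.0, 1.0, 110.0, 880.0)  # midpoint of the output range
```

In a Max patch the same step would typically be done with `[scale]` or `[zmap]` objects feeding a signal-rate parameter.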
The results of this project were presented at the International Conference on New Interfaces for Musical Expression (NIME) 2016 in Brisbane, Australia.
- Real-time data sonification algorithms with one or more sound outputs
- MIDI controller support for live performances
- OSC protocol for receiving data and controlling sonification objects
- VST/AU support for each audio channel
- JSON presets for full session recall
- JSON presets for single object recall
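Since Emovere receives data over OSC, a sensor bridge only needs to emit standard OSC packets over UDP. Below is a minimal, stdlib-only sketch of OSC message encoding (null-padded address and type-tag strings followed by big-endian float32 arguments, per the OSC 1.0 format); the `/emovere/sensor/1` address is a hypothetical example, not Emovere's documented namespace.

```python
import struct

def _osc_string(s: str) -> bytes:
    """OSC-string: ASCII bytes, NUL-terminated, padded to a 4-byte boundary."""
    b = s.encode("ascii") + b"\x00"
    return b + b"\x00" * (-len(b) % 4)

def osc_message(address: str, *floats: float) -> bytes:
    """Encode an OSC message whose arguments are all float32."""
    typetags = "," + "f" * len(floats)          # e.g. ",f" for one float
    payload = _osc_string(address) + _osc_string(typetags)
    for v in floats:
        payload += struct.pack(">f", v)         # big-endian float32
    return payload

msg = osc_message("/emovere/sensor/1", 0.42)    # one EMG reading
```

Sending `msg` via a UDP socket (`socket.sendto`) to the host and port the Max patch listens on (e.g. a `[udpreceive]` object) would deliver the reading; in practice a library such as python-osc does this encoding for you.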
Press and related links
- Emovere: Body, sound and movement
- E. Gómez and J. Jaimovich, “Designing a flexible workflow for complex real-time interactive performances,” in Proceedings of the International Conference on New Interfaces for Musical Expression, Brisbane, Australia, 2016, pp. 305-309.
- Writing: Designing a Flexible Interaction Platform for Complex Interactive Performances in Real Time
- Emovere Flickr Album
- Emovere on the website of the Arts Faculty of Universidad de Chile
- Emovere in El Mercurio
- Emovere in CNN Chile