Control Group
Members
Roger Moore, Alistair Edwards, Rob Clark, Jim Gilbert and Rob Mackay
Proposal
The Control Group is going to build one or more systems that allow a person or people to control, in real time, the way that speech is performed.
We will experiment with a variety of input paradigms and speech generators, and the most successful one(s) will form the outcome. It is hoped that much of the work will be done by Masters students as their projects, so to some extent what is achieved will depend on the take-up of projects.
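By way of illustration, here is a minimal sketch (in Python) of the kind of control loop we have in mind: one input dimension, taken from any of the devices listed under Inputs below, is mapped onto pitch and speaking rate and pushed to a synthesizer several times a second. The names here (Controller, Synth, set_prosody) are hypothetical placeholders, not the API of any of the systems mentioned on this page.

    import time

    class Controller:
        """Stand-in for any input device (Kinect skeleton, switch, slider)."""
        def read(self) -> float:
            # A real device driver would return a live sensor reading here.
            return 0.5  # fixed placeholder value in [0, 1]

    class Synth:
        """Stand-in for whichever speech generator is being driven."""
        def set_prosody(self, pitch_hz: float, rate: float) -> None:
            print(f"pitch={pitch_hz:.0f} Hz, rate={rate:.2f}x")

    def control_loop(controller: Controller, synth: Synth,
                     steps: int = 300, fps: int = 30) -> None:
        """Map one control dimension onto pitch and speaking rate."""
        for _ in range(steps):
            x = controller.read()                     # 0.0 .. 1.0
            synth.set_prosody(pitch_hz=80 + 160 * x,  # 80-240 Hz
                              rate=0.5 + x)           # 0.5x to 1.5x speed
            time.sleep(1 / fps)                       # ~30 updates per second

    control_loop(Controller(), Synth())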
Inputs
Kinetic
Rob Mackay, Rob Clark and Roger
Assorted switches
Alistair (with assistance from Jude, also possibly Mark and/or Ben from Apollo Creative)
Speech generation
HMM-based synthesizer (http://www.youtube.com/watch?v=HxQuSczW0rE)
Rob Clark
Formant-based synthesizer (see the sketch below)
Roger
CereProc
Alistair (with whatever help I can get from Chris and his people)
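To make the formant-based option concrete, here is a toy source-filter sketch in the Klatt style: an impulse train at the pitch frequency is passed through a cascade of second-order resonators, one per formant. The formant frequencies and bandwidths are rough textbook estimates for an /a/-like vowel, chosen purely for illustration; this is not the code of any of the synthesizers above. Driving f0 or the formant values from a live controller is exactly the sort of real-time manipulation being proposed.

    import numpy as np
    from scipy.signal import lfilter

    def resonator(x, freq, bw, fs):
        """Klatt-style second-order digital resonator at freq (Hz), bandwidth bw."""
        r = np.exp(-np.pi * bw / fs)
        b1 = 2 * r * np.cos(2 * np.pi * freq / fs)
        b2 = -r * r
        a0 = 1 - b1 - b2                       # scale for unity gain at DC
        return lfilter([a0], [1, -b1, -b2], x)

    def synth_vowel(f0=120.0, formants=((700, 130), (1220, 70), (2600, 160)),
                    dur=0.5, fs=16000):
        """Impulse-train source shaped by a cascade of formant resonators."""
        n = int(dur * fs)
        source = np.zeros(n)
        source[::int(fs / f0)] = 1.0           # glottal impulses at f0
        out = source
        for freq, bw in formants:              # cascade the resonators
            out = resonator(out, freq, bw, fs)
        return out / np.max(np.abs(out))       # normalise to +/-1

    audio = synth_vowel()  # changing f0 on the fly changes the 'performance'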
Demonstration
The plan is to mount performances in which speeches are controlled by users. Imagine, perhaps, the delivery of Hamlet's soliloquy being controlled by a performer gesturing in front of a Kinect.
The spoken content of the performance will be pre-stored (e.g. Hamlet's words), but every performance will be different, as with any live performance.
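As a sketch of what "pre-stored but different every time" could mean in practice, the fragment below fixes the words of the soliloquy while a (here simulated) gesture reading sets the pitch and pacing of each phrase on every run. gesture_height() is a hypothetical stand-in for a real Kinect skeleton query, and the print call stands in for sending the phrase to a synthesizer.

    import random

    SOLILOQUY = [
        "To be, or not to be,",
        "that is the question:",
        "Whether 'tis nobler in the mind to suffer",
        "The slings and arrows of outrageous fortune,",
    ]

    def gesture_height() -> float:
        """Placeholder for a live sensor reading in [0, 1]."""
        return random.random()

    def perform(phrases):
        for phrase in phrases:
            h = gesture_height()
            # The words never change, but the delivery does on every run.
            print(f"{phrase!r} -> pitch={80 + 160 * h:.0f} Hz, "
                  f"pause={0.2 + h:.2f} s")

    perform(SOLILOQUY)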
Dialogues between two or more participants might also be enabled.
It is to be hoped that other members of the Network might be inspired to write pieces specifically for this style of performance.
Collaboration with some of the other Working Groups is also to be encouraged; Voice Expressivity and Emotion are obvious candidates.
Some people may become skilled performers with such a device, but it will also be possible to let members of the public play and experiment.