Print version ISSN 0012-7353
In this paper, a novel approach for mouth-gesture-based control of three degrees of freedom of a robot is proposed. The selected gestures are recorded in video sequences, which are processed and classified in real time. Several image processing techniques are applied to each frame in order to achieve appropriate feature extraction and gesture classification. The output of the classifier is then fed to a state machine, which stabilizes the command selection and sends the selected operation to the robot's command interface. The method proves very effective for real-time applications, providing both sufficient speed and reliable gesture detection.
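The abstract does not give the internals of the stabilizing state machine, but its stated role (smoothing the per-frame classifier output before a command reaches the robot) can be illustrated with a minimal sketch. The class name `CommandStabilizer`, the `hold_frames` threshold, and the gesture labels below are all hypothetical, not taken from the paper:

```python
class CommandStabilizer:
    """Hypothetical sketch of a command-stabilizing state machine: a new
    command is issued only after the classifier reports the same gesture
    for `hold_frames` consecutive frames, filtering out single-frame
    misclassifications."""

    def __init__(self, hold_frames=3, idle_label="none"):
        self.hold_frames = hold_frames
        self.candidate = idle_label   # label currently being counted
        self.count = 0                # consecutive frames with that label
        self.current = idle_label     # command currently sent to the robot

    def update(self, label):
        """Feed one per-frame classifier output; return the stable command."""
        if label == self.candidate:
            self.count += 1
        else:
            self.candidate = label
            self.count = 1
        # Transition only after the label has persisted long enough.
        if self.count >= self.hold_frames and label != self.current:
            self.current = label
        return self.current


# Example: a single spurious "right" frame does not reach the robot.
sm = CommandStabilizer(hold_frames=3)
frames = ["none", "left", "left", "left", "right", "left", "left", "left"]
stable = [sm.update(f) for f in frames]
```

In this run the output stays at `"none"` until three consecutive `"left"` frames arrive, and the isolated `"right"` frame is suppressed entirely.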
Keywords: Human-machine interface; mouth segmentation; gesture detection.