Abstract:
In recent years, the development of algorithms that assist in communicating with deaf people has become an important challenge. The development of automatic systems to translate sign language is a current research topic. However, it involves several processes, ranging from video capture and pre-processing to the identification or classification of the sign. Building systems capable of extracting discriminative features that enhance the generalization power of a classifier remains a very challenging problem. The meaning of a sign is the combination of the hand movement, the hand shape, and the point of contact of the hand on the body. This paper presents a method to detect and translate hand gestures. First, we sample 15 frames per word and extract three regions of interest (both hands and the face), from which we compute geometric features. Finally, we apply several classification techniques and present the experimental results.
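As a rough illustration of the pipeline outlined above (frame sampling, region-of-interest extraction, geometric features, classification), the following Python sketch uses OpenCV and scikit-learn. The face-only ROI, the specific geometric descriptors, and the SVM classifier are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the described pipeline, under assumed tools (OpenCV,
# scikit-learn). Function names, the Haar-cascade face detector, and the
# choice of features/classifier are illustrative assumptions.
import cv2
import numpy as np
from sklearn.svm import SVC

N_FRAMES = 15  # frames sampled per signed word, as stated in the abstract

def sample_frames(video_path, n_frames=N_FRAMES):
    """Return n_frames evenly spaced grayscale frames from a video."""
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    indices = np.linspace(0, max(total - 1, 0), n_frames).astype(int)
    frames = []
    for idx in indices:
        cap.set(cv2.CAP_PROP_POS_FRAMES, int(idx))
        ok, frame = cap.read()
        if ok:
            frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
    cap.release()
    return frames

def geometric_features(roi):
    """Toy geometric descriptors of a region of interest: centroid, area, aspect ratio."""
    x, y, w, h = roi
    return [x + w / 2.0, y + h / 2.0, w * h, w / float(h)]

def video_to_feature_vector(video_path, face_detector):
    """Concatenate per-frame ROI features into one fixed-length vector (face ROI only here)."""
    features = []
    for frame in sample_frames(video_path):
        faces = face_detector.detectMultiScale(frame, 1.3, 5)
        roi = faces[0] if len(faces) else (0, 0, 1, 1)  # fallback when no detection
        features.extend(geometric_features(roi))
    return np.array(features)

# Illustrative training step; `videos` and `labels` are assumed to exist.
# detector = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
# X = np.stack([video_to_feature_vector(v, detector) for v in videos])
# clf = SVC(kernel="rbf").fit(X, labels)
```

In the paper's actual method, features from both hand regions would be extracted alongside the face region; the sketch keeps only the face ROI to stay short, and any hand-detection step would replace or extend the cascade-based detection shown here.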