Abstract

Deaf people generally use sign language as their primary means of communication. Sign language emphasizes visual communication: its users convey meaning through the orientation, shape, and movement of the hands, arms, and body, together with facial expressions. However, this mode of communication often hinders or limits interaction with hearing people, because it is poorly understood by the other party. Overcoming this limitation requires translating sign language into spoken language, which enables easier communication between the deaf community and the general public.

For this final project, a webcam is used to capture images of the user's hand. The technique captures the position of the hand, extracts its shape, and then classifies it. To locate the hand in each frame, the author uses a Haar classifier that has been trained beforehand. The hand shape is then extracted using skin detection and noise removal, followed by thresholding and normalization. Once the hand shape is obtained, its binary image is classified against a collection of hand-sign images used as training data. The classification algorithm used by the author is K-Nearest Neighbors.

The system is able to recognize 19 of the 26 targeted letter signs. Its average accuracy is 89.68%. This accuracy may vary depending on the consistency of the training data and the noise produced.

Keywords: Skin detection, HaarClassifier, OpenCV, Machine Learning.
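The extraction step described above (skin detection followed by thresholding into a binary hand mask) can be sketched in pure Python. The RGB skin rule shown is a common published heuristic, not necessarily the exact rule used in this project, and the tiny two-pixel frame is purely illustrative:

```python
def is_skin(r, g, b):
    # Classic RGB skin heuristic: red channel bright and dominant,
    # with enough spread between the channels.
    return (r > 95 and g > 40 and b > 20
            and max(r, g, b) - min(r, g, b) > 15
            and abs(r - g) > 15 and r > g and r > b)

def binarize(image):
    """Turn an RGB image (list of rows of (r, g, b) tuples)
    into a binary mask: 1 for skin pixels, 0 otherwise."""
    return [[1 if is_skin(*px) else 0 for px in row] for row in image]

# Tiny synthetic frame: one skin-toned pixel, one background pixel.
frame = [[(220, 170, 140), (30, 60, 200)]]
print(binarize(frame))  # → [[1, 0]]
```

In practice the same thresholding is usually done in a color space less sensitive to lighting (e.g. YCrCb or HSV) and followed by morphological noise removal, as the abstract indicates.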

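The final classification step, matching a normalized binary hand image against labeled training images with K-Nearest Neighbors, can be illustrated with a minimal pure-Python sketch. The 2x2 "images", labels, and k value are hypothetical stand-ins for the project's actual training data:

```python
from collections import Counter

def hamming(a, b):
    # Distance between two flattened binary images.
    return sum(x != y for x, y in zip(a, b))

def knn_classify(sample, training_data, k=3):
    """training_data: list of (binary_image, label) pairs.
    Returns the majority label among the k nearest neighbors."""
    neighbors = sorted(training_data, key=lambda item: hamming(sample, item[0]))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

# Toy training set: flattened 2x2 binary "hand shapes".
train = [
    ((1, 1, 0, 0), "A"),
    ((1, 0, 0, 0), "A"),
    ((0, 0, 1, 1), "B"),
    ((0, 1, 1, 1), "B"),
]
print(knn_classify((1, 1, 1, 0), train, k=3))  # → A
```

Because the inputs are thresholded and normalized binary images of fixed size, a simple per-pixel distance such as Hamming distance is a reasonable choice for KNN here.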
