Sign Language Recognition Using Deep Learning

Authors

  • Subhashini R., Assistant Professor, Department of Computer Science & Engineering, Cambridge Institute of Technology (CIT), Bengaluru, Karnataka, India
  • Sushmita Manna, Student, Department of Computer Science & Engineering, Cambridge Institute of Technology (CIT), Bengaluru, Karnataka, India
  • Pratik Sarkar, Student, Department of Computer Science & Engineering, Cambridge Institute of Technology (CIT), Bengaluru, Karnataka, India
  • Shashank A. M., Student, Department of Computer Science & Engineering, Cambridge Institute of Technology (CIT), Bengaluru, Karnataka, India
  • M.V. Keerthana, Student, Department of Computer Science & Engineering, Cambridge Institute of Technology (CIT), Bengaluru, Karnataka, India

Keywords

LSTM, MediaPipe, OpenCV, TensorFlow, Keras

Abstract

People with hearing or speech disabilities use sign language to communicate their thoughts and feelings, but people who do not know sign language often find these hand gestures hard to comprehend. This project aims to create a system that uses MediaPipe, LSTM and Keras for sign language recognition. A webcam captures video of people performing sign language gestures, and MediaPipe extracts and identifies the hand landmarks and their motion from the video. The extracted features are then processed with an LSTM, a sequence-modelling technique: a deep learning LSTM model implemented in Keras is trained to recognize the different sign language gestures. This system can potentially help people with hearing impairments communicate with others quickly and easily.
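A minimal Python sketch of the described pipeline is given below. The sequence length of 30 frames, the two 64-unit LSTM layers, the 10-class gesture vocabulary, and the use of the MediaPipe Hands solution are illustrative assumptions rather than details taken from the paper.

    import cv2
    import mediapipe as mp
    import numpy as np
    import tensorflow as tf

    SEQ_LEN = 30        # frames per gesture clip (assumed)
    NUM_CLASSES = 10    # size of the sign vocabulary (assumed)
    FEATURES = 21 * 3   # 21 hand landmarks, each with x, y, z coordinates

    mp_hands = mp.solutions.hands

    def extract_landmark_sequence(video_path):
        """Read a clip with OpenCV and return a (SEQ_LEN, FEATURES) array of
        MediaPipe hand landmarks; frames with no detected hand become zeros."""
        frames = []
        cap = cv2.VideoCapture(video_path)
        with mp_hands.Hands(max_num_hands=1, min_detection_confidence=0.5) as hands:
            while len(frames) < SEQ_LEN:
                ok, frame = cap.read()
                if not ok:
                    break
                result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
                if result.multi_hand_landmarks:
                    lm = result.multi_hand_landmarks[0].landmark
                    frames.append([c for p in lm for c in (p.x, p.y, p.z)])
                else:
                    frames.append([0.0] * FEATURES)
        cap.release()
        # Pad short clips with zero frames so every sequence has SEQ_LEN steps.
        while len(frames) < SEQ_LEN:
            frames.append([0.0] * FEATURES)
        return np.array(frames, dtype=np.float32)

    # LSTM classifier over landmark sequences, defined with the Keras API.
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(SEQ_LEN, FEATURES)),
        tf.keras.layers.LSTM(64, return_sequences=True),
        tf.keras.layers.LSTM(64),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # model.fit(X_train, y_train, epochs=..., validation_data=(X_val, y_val))

In practice, extract_landmark_sequence would be run over a labelled set of recorded gesture clips to build the training arrays passed to model.fit; the same extraction step can then be applied to live webcam frames at inference time.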

References

Tolentino LK, Juan RS, Thio-ac AC, Pamahoy MA, Forteza JR, Garcia XJ. Static sign language recognition using deep learning. Int J Mach Learn Comput. 2019 Dec; 9(6): 821–7.

Tripathi K, Nandi NB. Continuous Indian sign language gesture recognition and sentence formation. Procedia Comput Sci. 2015 Jan 1; 54: 523–31.

Harish N, Poonguzhali S. Design and development of hand gesture recognition system for speech impaired people. In 2015 IEEE International Conference on Industrial Instrumentation and Control (ICIC). 2015 May 28; 1129–1133.

Suganya R, Meeradevi T. Design of a communication aid for physically challenged. In 2015 IEEE 2nd International Conference on Electronics and Communication Systems (ICECS). 2015 Feb 26; 818–822.

Papastratis I, Chatzikonstantinou C, Konstantinidis D, Dimitropoulos K, Daras P. Artificial intelligence technologies for sign language. Sensors. 2021 Aug 30; 21(17): 5843.

Hochreiter S, Schmidhuber J. Long short-term memory. Neural Comput. 1997 Nov 15; 9(8): 1735–80.

Dhivyasri S, KBKH, Akash M, Sona M, Divyapriya S, Krishnaveni V. An efficient approach for interpretation of Indian sign language using machine learning. In 2021 IEEE 3rd International Conference on Signal Processing and Communication (ICPSC). 2021 May 13; 130–133.

Nikam AS, Ambekar AG. Sign language recognition using image based hand gesture recognition techniques. In 2016 IEEE Online International Conference on Green Engineering and Technologies (IC-GET). 2016 Nov 19; 1–5.

Rani RS, Rumana R, Prema R. A Review Paper on Sign Language Recognition for The Deaf and Dumb. Int J Eng Res Technol. 2021 Nov; 10(10): 329–332.

Al-Hammadi M, Muhammad G, Abdul W, Alsulaiman M, Bencherif MA, Mekhtiche MA. Hand gesture recognition for sign language using 3DCNN. IEEE Access. 2020 Apr 27; 8: 79491–509.

Sawant SN, Kumbhar MS. Real time sign language recognition using PCA. In 2014 IEEE International Conference on Advanced Communications, Control and Computing Technologies. 2014 May 8; 1412–1415.

Published

2023-08-31