Abstract
Communication is the exchange of thoughts, ideas, and feelings. In this paper we propose a method in which human speech is converted into digital input. The digitized sound is fed into the proposed models, and each speaker's voice is classified into discrete emotional characteristics by its intensity, pitch, timbre, speech rate, and pauses. In the proposed method, we apply multi-scale area attention in a deep 2D-CNN connected to a dense DNN to obtain emotional characteristics at a wide range of granularities, so that the classifier can predict a broad range of emotions.
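The abstract's architecture (a 2D-CNN feature extractor, multi-scale area attention, and a dense classifier) can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the layer sizes, the area sizes (1, 2, 4), the pooling scheme, and the number of emotion classes are all assumptions made for the example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleAreaAttention(nn.Module):
    """Toy multi-scale area attention: pool the feature map into areas of
    several sizes, score every area, and return a softmax-weighted sum."""
    def __init__(self, channels, area_sizes=(1, 2, 4)):  # area sizes assumed
        super().__init__()
        self.area_sizes = area_sizes
        self.score = nn.Linear(channels, 1)  # one attention score per area

    def forward(self, x):                    # x: (batch, channels, H, W)
        areas = []
        for s in self.area_sizes:
            pooled = F.avg_pool2d(x, kernel_size=s, stride=s)  # (B, C, H/s, W/s)
            areas.append(pooled.flatten(2).transpose(1, 2))    # (B, N_s, C)
        areas = torch.cat(areas, dim=1)                        # (B, N, C)
        w = torch.softmax(self.score(areas), dim=1)            # (B, N, 1)
        return (w * areas).sum(dim=1)                          # (B, C)

class SpeechEmotionNet(nn.Module):
    """Sketch: 2D-CNN front end over a spectrogram, area attention,
    then a dense DNN classifier over emotion classes."""
    def __init__(self, n_emotions=8):        # class count assumed
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.attn = MultiScaleAreaAttention(32)
        self.dense = nn.Sequential(nn.Linear(32, 64), nn.ReLU(),
                                   nn.Linear(64, n_emotions))

    def forward(self, spec):                 # spec: (B, 1, n_mels, frames)
        return self.dense(self.attn(self.conv(spec)))

model = SpeechEmotionNet(n_emotions=8)
logits = model(torch.randn(2, 1, 64, 64))   # batch of 2 dummy spectrograms
```

Pooling the same feature map at several area sizes is one simple way to expose features at multiple granularities before attention, which mirrors the "wide range of granularities" idea described in the abstract.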
Recommended Citation
Roy, Saumya; Ghoshal, Sayak; and Basak, Rituparna (2024) "Sentiment Analysis of Human Speech using CNN and DNN," American Journal of Science & Engineering (AJSE): Vol. 2, Iss. 4, Article 1.
Available at: https://research.smartsociety.org/ajse/vol2/iss4/1