Abstract

Communication is the exchange of thoughts, ideas, and feelings, and emotion is an integral part of it. In this paper we propose a method in which human speech is converted into digital input. The digitized audio is fed into the proposed models, and each speaker's voice is classified into discrete emotional categories according to its intensity, pitch, timbre, speech rate, and pauses. In the proposed method, the authors apply multiscale area attention in a deep 2D CNN connected to a dense DNN to obtain emotional characteristics at a wide range of granularities, so that the classifier can predict emotions across a broad range of classes.
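To make the multiscale area-attention idea concrete, below is a minimal NumPy sketch, not the authors' implementation: attention is computed not over individual cells of a CNN feature map but over all contiguous "areas" up to a maximum height and width, with each area summarized by the mean of its cells. The function name, the choice of mean pooling for both keys and values, and the `max_h`/`max_w` parameters are illustrative assumptions.

```python
import numpy as np

def area_attention(feat, query, max_h=2, max_w=2):
    """Single-query multiscale area attention over a 2-D feature map.

    feat  : (H, W, d) array of features (e.g. a CNN output).
    query : (d,) query vector.
    Areas are all h-by-w windows with 1 <= h <= max_h and
    1 <= w <= max_w; each area's key and value are the mean of
    its cells (an illustrative pooling choice).
    """
    H, W, d = feat.shape
    keys, values = [], []
    for h in range(1, max_h + 1):
        for w in range(1, max_w + 1):
            for i in range(H - h + 1):
                for j in range(W - w + 1):
                    block = feat[i:i + h, j:j + w].reshape(-1, d)
                    keys.append(block.mean(axis=0))
                    values.append(block.mean(axis=0))
    K = np.stack(keys)            # (n_areas, d)
    V = np.stack(values)          # (n_areas, d)
    scores = K @ query / np.sqrt(d)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()      # softmax over all areas
    return weights @ V            # (d,) attended summary
```

Increasing `max_h` and `max_w` adds coarser-grained areas to the attention pool, which is the mechanism by which the model can capture emotional cues at multiple granularities.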
