This paper is published in Volume 3, Issue 4, 2018
Area
Image Processing
Author
Rashmika. R
Co-authors
Sudha Mercy, Sangeetha Kamatchi. C, Preethi. R
Org/Univ
Easwari Engineering College, Chennai, Tamil Nadu, India
Pub. Date
20 April, 2018
Paper ID
V3I4-1250
Keywords
Face recognition, Binary pattern, Emotion recognition, Edge detection.

Citations

IEEE
Rashmika. R, Sudha Mercy, Sangeetha Kamatchi. C, Preethi. R, "An automated method for characterization of facial expression," International Journal of Advance Research, Ideas and Innovations in Technology, vol. 3, no. 4, 2018, www.IJARnD.com.

APA
Rashmika. R, Sudha Mercy, Sangeetha Kamatchi. C, Preethi. R (2018). An automated method for characterization of facial expression. International Journal of Advance Research, Ideas and Innovations in Technology, 3(4). www.IJARnD.com.

MLA
Rashmika. R, Sudha Mercy, Sangeetha Kamatchi. C, Preethi. R. "An automated method for characterization of facial expression." International Journal of Advance Research, Ideas and Innovations in Technology 3.4 (2018). www.IJARnD.com.

Abstract

This paper presents a new facial expression descriptor, the local directional ternary pattern (LDTP), for facial emotion recognition. LDTP efficiently encodes expression-related features (i.e., eyes, eyebrows, nose, mouth, and lips) by combining directional information with a ternary pattern, exploiting the robustness of ternary patterns in edge regions while overcoming the weakness of other methods in smooth regions. Unlike existing face description methods, which divide the face into multiple regions and sample the codes uniformly, our proposal uses a two-level grid to construct the face descriptor, sampling emotion information at different scales: one grid for stable codes (highly related to non-expression) and another for active codes (highly related to expression). This multi-level approach enables a finer description of facial motions while still characterizing the other features of the expression. Moreover, we learn the active pattern codes from the expression-related facial regions. We evaluated the method with both person-dependent and person-independent cross-validation schemes and show that it improves the overall accuracy of facial expression recognition on six data sets.
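As a rough illustration of the kind of directional ternary coding the abstract describes, the sketch below computes a per-pixel code from eight Kirsch compass responses: the index of the dominant direction plus a ternary quantization of the two adjacent directional responses. The specific masks, the threshold `tau`, and the exact code layout are assumptions for illustration only, not the authors' published definition of LDTP.

```python
import numpy as np

# Eight Kirsch compass masks (E, NE, N, NW, W, SW, S, SE) -- a standard
# set of directional edge operators; the paper's exact masks are assumed here.
KIRSCH = [np.array(m) for m in [
    [[-3, -3,  5], [-3, 0,  5], [-3, -3,  5]],   # E
    [[-3,  5,  5], [-3, 0,  5], [-3, -3, -3]],   # NE
    [[ 5,  5,  5], [-3, 0, -3], [-3, -3, -3]],   # N
    [[ 5,  5, -3], [ 5, 0, -3], [-3, -3, -3]],   # NW
    [[ 5, -3, -3], [ 5, 0, -3], [ 5, -3, -3]],   # W
    [[-3, -3, -3], [ 5, 0, -3], [ 5,  5, -3]],   # SW
    [[-3, -3, -3], [-3, 0, -3], [ 5,  5,  5]],   # S
    [[-3, -3, -3], [-3, 0,  5], [-3,  5,  5]],   # SE
]]

def ldtp_code(patch, tau=10):
    """Simplified directional ternary code for a 3x3 patch: dominant
    Kirsch direction plus ternary signs of the two neighbouring responses."""
    resp = np.array([(m * patch).sum() for m in KIRSCH])
    k = int(np.argmax(np.abs(resp)))          # principal direction, 0..7

    def tern(v):                              # ternary quantization {0, 1, 2}
        return 0 if v > tau else (1 if v < -tau else 2)

    t1 = tern(resp[(k - 1) % 8])              # response one step counter-clockwise
    t2 = tern(resp[(k + 1) % 8])              # response one step clockwise
    return k * 9 + t1 * 3 + t2                # compact code in 0..71

patch = np.array([[10, 10, 200],
                  [10, 10, 200],
                  [10, 10, 200]], dtype=float)  # strong vertical edge, bright right side
print(ldtp_code(patch))                         # → 0 (East-dominant, both side responses positive)
```

In the full method, codes like these would be histogrammed over grid cells (the paper's two-level grid of stable and active codes) to form the face descriptor; this sketch covers only the per-pixel encoding step.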