
    Time-Efficient Hybrid Approach for Facial Expression Recognition

    Facial expression recognition is an emerging research area for improving human-computer interaction. This research plays a significant role in social communication, commercial enterprise, law enforcement, and other computer interactions. In this paper, we propose a time-efficient hybrid design for facial expression recognition that combines image pre-processing steps with different Convolutional Neural Network (CNN) structures, providing better accuracy and greatly reduced training time. We predict the seven basic emotions of human faces: sadness, happiness, disgust, anger, fear, surprise, and neutral. The model performs well on challenging cases of facial expression recognition in which the expressed emotion could be one of several with quite similar facial characteristics, such as anger, disgust, and sadness. The model was tested across multiple databases and different facial orientations and, to the best of our knowledge, achieved an accuracy of about 89.58% on the KDEF dataset, 100% on the JAFFE dataset, and 71.975% on a combined (KDEF + JAFFE + SFEW) dataset across these scenarios. Performance was evaluated with cross-validation techniques to avoid bias towards a specific set of images from a database.
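The cross-validation protocol mentioned above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the `train_fn`/`predict_fn` hooks are hypothetical placeholders standing in for the hybrid CNN pipeline, and the fold count is an assumption.

```python
# Hypothetical sketch of k-fold cross-validation for evaluating a facial
# expression classifier; emotion labels follow the seven classes in the paper.
import random

EMOTIONS = ["sadness", "happiness", "disgust", "anger",
            "fear", "surprise", "neutral"]

def k_fold_indices(n_samples, k=10, seed=0):
    """Shuffle sample indices and split them into k near-equal folds."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def cross_validate(samples, labels, train_fn, predict_fn, k=10):
    """Return the per-fold accuracies of a classifier under k-fold CV."""
    folds = k_fold_indices(len(samples), k)
    accuracies = []
    for test_idx in folds:
        test_set = set(test_idx)
        train_idx = [j for j in range(len(samples)) if j not in test_set]
        model = train_fn([samples[j] for j in train_idx],
                         [labels[j] for j in train_idx])
        correct = sum(predict_fn(model, samples[j]) == labels[j]
                      for j in test_idx)
        accuracies.append(correct / len(test_idx))
    return accuracies
```

Averaging the per-fold accuracies gives a score that does not depend on any single train/test split, which is the bias the abstract refers to.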

    A Study on the Recognition of Seabed Environments Employing Sonar Images

    The ocean covers approximately 70% of the Earth's surface, and its waters and coastal areas sustain many species, including humans. Ocean resources are used for fish farming, land reclamation, and a variety of other purposes. Seabed resources such as oil, natural gas, methane hydrates, and manganese nodules remain largely unexploited on the bottom of the sea. Maps are critical to development activities such as construction, mining, offshore drilling, marine traffic control, security, environmental protection, and tourism. Accordingly, more topographic and other types of mapping information are needed for marine and submarine investigations. Both waterborne and airborne survey techniques show promise for collecting data on marine and submarine environments, and these techniques can be classified into four main categories. First, remote sensing by satellites or aircraft is a widely used technique that can yield important data such as information on sea levels and coastal sediment transport. Second, investigations may collect direct information by remotely operated vehicles (ROVs), autonomous underwater vehicles (AUVs), and divers. While the quality of data obtained from these techniques is high, the data are often limited to relatively shallow and small geographic areas. Third, sediment profile imagery can be used to collect photographs that contain detailed information about the seabed. Lastly, acoustic investigations that use sonar are popular in marine mapping studies, especially in coastal areas. In particular, acoustic investigations that employ ultrasound technology can yield rich information about variations in bathymetry. Unlike air, water has physical properties that make it difficult for light or electromagnetic waves to pass through; sound waves, however, propagate readily in water.
Therefore, sound waves are used in a wide range of technical applications to detect underwater structures that are difficult to observe with light-based techniques. In the dark depths of the ocean, the use of acoustic technology is essential, and marine acoustic technology continues to expand. In addition to the basic physics of acoustic waves, much research has been dedicated to related basic and applied fields such as electronics, physical oceanography, signal processing, and biology. New sonar systems that utilize advanced detection algorithms can be expected to contribute to major breakthroughs in oceanographic research requiring deployment to novel marine environments and other areas of natural-resource interest. In this study, the author focuses on side-scan sonar, an imaging technology that employs sound to determine the state of the seabed, and conducts research on imaging algorithms for discrimination. The proposed discrimination method was coupled to a high-speed method for detecting reefs installed on the seabed. The method is also capable of detecting unknown objects using Haar-like features computed over rectangular regions of a certain size, with machine learning by AdaBoost and fast elimination of non-object regions through a cascade structure. Side-scan and forward-looking sonars are among the most widely used imaging systems for obtaining large-scale images of the seafloor, and their application continues to expand rapidly with their increasing deployment on AUVs. However, it can be difficult to extract quantitative information from the resulting images, in particular for detecting objects and extracting information about them. Hence, this study analyzes features that are common to most undersea objects projected in side-scan sonar images to improve information processing.
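The Haar-like features at the heart of such an AdaBoost cascade reduce to fast rectangle sums over an integral image. The following is a minimal sketch of that core operation, assuming a grayscale sonar-intensity array; it is illustrative only and not the thesis implementation.

```python
# Illustrative sketch: evaluating a two-rectangle Haar-like feature on a
# sonar-intensity image via an integral image, the basic operation behind
# Haar/AdaBoost cascade detectors of the kind described above.
import numpy as np

def integral_image(img):
    """Cumulative sums so that any rectangle sum costs four lookups."""
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, top, left, h, w):
    """Sum of pixels in the h x w rectangle whose corner is (top, left)."""
    total = ii[top + h - 1, left + w - 1]
    if top > 0:
        total -= ii[top - 1, left + w - 1]
    if left > 0:
        total -= ii[top + h - 1, left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]
    return total

def haar_two_rect(ii, top, left, h, w):
    """Left-minus-right two-rectangle feature (an edge response)."""
    half = w // 2
    return (rect_sum(ii, top, left, h, half)
            - rect_sum(ii, top, left + half, h, half))
```

Because each feature costs only a handful of lookups regardless of window size, a cascade can evaluate thousands of candidate windows per image and reject non-object regions early.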
By using a k-means-based technique to determine the Haar-like features, the number of Haar-like feature patterns was minimized, and the proposed method detected undersea objects faster than existing methods. This study demonstrates the effectiveness of the method by applying it to the detection of real objects imaged on the seabed (i.e., sandy and muddy ground). Attempts are also made to automate the proposed method for discriminating objects lying on the seafloor from surficial sediments. During undersea exploration, a thorough understanding of the state of the seafloor surrounding objects of interest is important; therefore, a method is proposed in this study to automatically determine seabed sediment characteristics. Traditionally, a variety of techniques have been used to collect information about seabed sediments, including depth measurements, bathymetry evaluations, and seabed image analyses using the co-occurrence of gray values in the image. Unfortunately, such data cannot be estimated from the object image itself, and it can take a long time to obtain the required information; these techniques are therefore not currently suitable for real-time identification of objects on the seafloor. For practical purposes, automatic techniques should follow a simple procedure that yields highly precise and accurate classifications. The technique proposed here uses the subspace method, a supervised pattern-recognition method, applied to higher-order local autocorrelation features. The most important feature of this method is that it uses only acoustic images obtained from the side-scan sonar, which opens up the possibility of installing the technology in small unmanned digital devices.
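Higher-order local autocorrelation (HLAC) features of the kind fed to the subspace method can be sketched as follows. The mask set below is a small illustrative subset of the standard HLAC family, not the exact configuration used in the study.

```python
# Minimal sketch of higher-order local autocorrelation (HLAC) features:
# each feature sums, over all image positions, the product of pixel values
# at a fixed set of offsets. The masks here are an assumed small subset.
import numpy as np

# Each mask lists the (dy, dx) offsets whose pixel values are multiplied.
MASKS = [
    [(0, 0)],                       # order 0: total intensity
    [(0, 0), (0, 1)],               # order 1: horizontal correlation
    [(0, 0), (1, 0)],               # order 1: vertical correlation
    [(0, 0), (1, 1)],               # order 1: diagonal correlation
]

def hlac_features(img):
    """Sum, over all valid positions, of the product of offset pixels."""
    h, w = img.shape
    feats = []
    for mask in MASKS:
        dy_max = max(dy for dy, _ in mask)
        dx_max = max(dx for _, dx in mask)
        prod = np.ones((h - dy_max, w - dx_max))
        for dy, dx in mask:
            prod = prod * img[dy:h - dy_max + dy, dx:w - dx_max + dx]
        feats.append(prod.sum())
    return np.array(feats)
```

A useful property for sediment classification is shift invariance: translating a texture patch within the image leaves the feature vector unchanged, so the descriptor characterizes the texture itself rather than its position.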
In this study, the classification accuracy of the proposed automation method is compared to that of traditional methods to show the usefulness of the technology. In addition, the proposed method is applied to real-world images of the seabed to evaluate its effectiveness in marine surveys. The thesis is organized as follows. In Chapter 1, the purpose of this study is presented and previous studies relevant to this research are reviewed. In Chapter 2, an overview of underwater sound is given and key principles of sound wave technology are explained. In Chapter 3, a new method for detecting and discriminating objects on the seafloor is proposed. In Chapter 4, the possibility of automating the discrimination method is explored. Finally, Chapter 5 summarizes the findings of this study and proposes new avenues for future research. (Kyushu Institute of Technology doctoral dissertation, academic year 2013; degree number 工博甲第364号; degree conferred March 25, 2014. Contents: Chapter 1 Introduction | Chapter 2 Underwater acoustics | Chapter 3 Detection of underwater objects based on machine learning | Chapter 4 Automatic classification of seabed sediments using HLAC | Chapter 5 Conclusion.)

    Facial feature representation and recognition

    Facial expression provides an important behavioral measure for studies of emotion, cognitive processes, and social interaction. Facial expression representation and recognition have become a promising research area in recent years, with applications including human-computer interfaces, human emotion analysis, and medical care. In this dissertation, the fundamental techniques are first reviewed, and novel algorithms and theorems are then presented. The objective of the proposed algorithm is to provide a reliable, fast, and integrated procedure to recognize either the seven prototypical, emotion-specified expressions (e.g., happy, neutral, angry, disgust, fear, sad, and surprise in the JAFFE database) or the action units in the Cohn-Kanade AU-coded facial expression image database. A new application area developed by the Infant COPE project is the recognition of neonatal facial expressions of pain (e.g., air puff, cry, friction, pain, and rest in the Infant COPE database). It has been reported in the medical literature that health care professionals have difficulty distinguishing a newborn's facial expressions of pain from facial reactions to other stimuli. Since pain is a major indicator of medical problems and the quality of patient care depends on the quality of pain management, it is vital that the methods developed accurately distinguish an infant's signal of pain from a host of minor distress signals. The evaluation protocol used in the Infant COPE project considers two conditions: person-dependent and person-independent. In the person-dependent condition, some data of a subject are used for training and the subject's remaining data for testing. In the person-independent condition, the data of all subjects except one are used for training and the left-out subject is used for testing. In this dissertation, both evaluation protocols are used in experiments.
The Infant COPE research on neonatal pain classification is a first attempt at applying state-of-the-art face recognition technologies to actual medical problems. The objective of the Infant COPE project is to bypass these observational problems by developing a machine classification system to diagnose neonatal facial expressions of pain. Since machine assessment of pain is based on pixel states, such a system will remain objective and will exploit the full spectrum of information available in a neonate's facial expressions. Furthermore, it will be capable of monitoring a neonate's facial expressions when he or she is left unattended. Experimental results using the Infant COPE database and evaluation protocols indicate that the application of face classification techniques to pain assessment and management is a promising area of investigation. One of the challenging problems in building an automatic facial expression recognition system is how to automatically locate the principal facial parts, since most existing algorithms capture the necessary face parts by cropping images manually. In this dissertation, two systems are developed to detect facial features, especially the eyes. The purpose is to develop a fast and reliable system to detect facial features automatically and correctly. By incorporating the proposed facial feature detection, the facial expression and neonatal pain recognition systems can be made robust and efficient.
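The person-independent condition described above is a leave-one-subject-out split, which can be sketched as follows; the subject-labeling scheme is an illustrative assumption, not the Infant COPE project's code.

```python
# Hedged sketch of the person-independent (leave-one-subject-out) protocol:
# for each subject, train on every other subject's samples and test on the
# held-out subject, so test identities are never seen during training.
def person_independent_splits(subject_of):
    """subject_of[i] is the subject ID of sample i; returns (train, test)
    index lists, one pair per held-out subject."""
    subjects = sorted(set(subject_of))
    splits = []
    for held_out in subjects:
        train = [i for i, s in enumerate(subject_of) if s != held_out]
        test = [i for i, s in enumerate(subject_of) if s == held_out]
        splits.append((train, test))
    return splits
```

The person-dependent condition, by contrast, would split each subject's own samples between the training and test sets, so the same identities appear on both sides.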

    Voices from Saint Lucia: A Dialogue on Curriculum Change in a Small Island State

    This research aims to identify the issues pertinent to the implementation of new curricula in the small island state of Saint Lucia and focuses in particular on the Organization of the Eastern Caribbean States (OECS) Harmonized Language Arts Curriculum, which was developed as part of the OECS Education Reform project. The intention of this research is to fill gaps in significant information on and knowledge of how implementation processes work in post-colonial, small island states, in particular those of the OECS sub-region, by giving voice to those hitherto unheard in the reform process. The key question posed by the research is: "How is the curriculum implementation process represented by insider voices in curriculum discourse in Saint Lucia?" The study is qualitative in nature, using a dialogic approach to collecting data by way of audio/video-taped conversations, focus groups, and a panel discussion. Data were collected over a seven-month period through conversations with participants representative of various strata of the education system, from policy makers through education officers, principals, and teachers. Data were analysed using the constant comparative method (Glaser & Strauss, 1967; Strauss & Corbin, 1990; Charmaz, 2006) and sorted, classified, and coded through a combination of electronic and manual processes. The results indicate that despite the plethora of reform initiatives in the region, there remains an absence of mutually intelligible dialogue within, between, and among the various groups involved in the process of implementing curriculum. The findings also illustrate the need for developing collaborative systems designed to facilitate institutional support, strategic preparation, ongoing professional development, and organized instructional supervision.

    On the 3D point cloud for human-pose estimation

    Get PDF
    This thesis aims at investigating methodologies for estimating a human pose from a 3D point cloud that is captured by a static depth sensor. Human-pose estimation (HPE) is important for a range of applications, such as human-robot interaction, healthcare, surveillance, and so forth. Yet, HPE is challenging because of the uncertainty in sensor measurements and the complexity of human poses. In this research, we focus on addressing challenges related to two crucial components in the estimation process, namely, human-pose feature extraction and human-pose modeling. In feature extraction, the main challenge involves reducing feature ambiguity. We propose a 3D-point-cloud feature called viewpoint and shape feature histogram (VISH) to reduce feature ambiguity by capturing geometric properties of the 3D point cloud of a human. The feature extraction consists of three steps: 3D-point-cloud pre-processing, hierarchical structuring, and feature extraction. In the pre-processing step, 3D points corresponding to a human are extracted and outliers from the environment are removed to retain the 3D points of interest. This step is important because it allows us to reduce the number of 3D points by keeping only those points that correspond to the human body for further processing. In the hierarchical structuring, the pre-processed 3D point cloud is partitioned and replicated into a tree structure as nodes. Viewpoint feature histogram (VFH) and shape features are extracted from each node in the tree to provide a descriptor to represent each node. As the features are obtained based on histograms, coarse-level details are highlighted in large regions and fine-level details are highlighted in small regions. Therefore, the features from the point cloud in the tree can capture coarse level to fine level information to reduce feature ambiguity. 
In human-pose modeling, the main challenges involve reducing the dimensionality of human-pose space and designing appropriate factors that represent the underlying probability distributions for estimating human poses. To reduce the dimensionality, we propose a non-parametric action-mixture model (AMM). It represents high-dimensional human-pose space using low-dimensional manifolds in searching human poses. In each manifold, a probability distribution is estimated based on feature similarity. The distributions in the manifolds are then redistributed according to the stationary distribution of a Markov chain that models the frequency of human actions. After the redistribution, the manifolds are combined according to a probability distribution determined by action classification. Experiments were conducted using VISH features as input to the AMM. The results showed that the overall error and standard deviation of the AMM were reduced by about 7.9% and 7.1%, respectively, compared with a model without action classification. To design appropriate factors, we consider the AMM as a Bayesian network and propose a mapping that converts the Bayesian network to a neural network called NN-AMM. The proposed mapping consists of two steps: structure identification and parameter learning. In structure identification, we have developed a bottom-up approach to build a neural network while preserving the Bayesian-network structure. In parameter learning, we have created a part-based approach to learn synaptic weights by decomposing a neural network into parts. Based on the concept of distributed representation, the NN-AMM is further modified into a scalable neural network called NND-AMM. A neural-network-based system is then built by using VISH features to represent 3D-point-cloud input and the NND-AMM to estimate 3D human poses. The results showed that the proposed mapping can be utilized to design AMM factors automatically. 
The NND-AMM can provide more accurate human-pose estimates with fewer hidden neurons than both the AMM and the NN-AMM can. Both the NN-AMM and NND-AMM can adapt to different types of input, showing the advantage of using neural networks to design factors.
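The redistribution step of the AMM relies on the stationary distribution of a Markov chain modeling how frequently human actions occur. A minimal sketch of that computation follows; the transition matrix and the simple power-iteration solver are illustrative assumptions, not the thesis implementation.

```python
# Illustrative sketch: computing the stationary distribution of a Markov
# chain over human actions, then using it to weight per-manifold (per-action)
# probability distributions as in the AMM's redistribution step.
import numpy as np

def stationary_distribution(P, tol=1e-12, max_iter=10_000):
    """Power-iterate a row-stochastic transition matrix P to its fixed
    point pi, satisfying pi = pi @ P."""
    pi = np.full(P.shape[0], 1.0 / P.shape[0])
    for _ in range(max_iter):
        nxt = pi @ P
        if np.abs(nxt - pi).sum() < tol:
            break
        pi = nxt
    return nxt

def redistribute(manifold_dists, P):
    """Weight each action manifold's distribution by its stationary mass."""
    pi = stationary_distribution(P)
    return [w * d for w, d in zip(pi, manifold_dists)]
```

For an ergodic chain the iteration converges to a unique fixed point, so actions that the chain visits often contribute proportionally more probability mass when the manifolds are combined.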