
    Toward a flexible facial analysis framework in OpenISS for visual effects

    Facial analysis, including tasks such as face detection, facial landmark detection, and facial expression recognition, is a significant research domain in computer vision for visual effects. It is used in areas such as facial feature mapping for movie animation, biometrics/face recognition for security systems, and driver fatigue monitoring for transportation safety assistance. Most applications perform basic face and landmark detection as a preliminary analysis step before proceeding to further specialized processing. Plenty of implementations and resources are available to researchers for each task, but the key properties missing from nearly all of them are flexibility and usability. Integrating functionality components involves complex configuration at each connection point, which is typically problematic, with poor reusability and adjustability. This lack of support for integrating different functionality components greatly increases the research effort and cost for individual researchers, which motivates a framework solution that addresses the issue once and for all. To this end, we propose a user-friendly and highly expandable facial analysis framework. It consists of a core that provides the framework's fundamental services and a facial analysis module composed of implementations of the facial analysis tasks. We evaluate our solution by instantiating the specialized facial analysis framework, which performs face detection, facial landmark detection, and facial expression recognition. As a whole, this framework addresses the industry's lack of an execution platform for integrated facial analysis implementations and fills this gap in the visual effects industry.
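    A rough illustration of the kind of component integration such a framework aims to simplify is sketched below in Python. This is not the OpenISS API; the pipeline and module names are hypothetical, and only the detection stage is backed by a real library (OpenCV's bundled Haar cascade).

        # Minimal sketch of a pluggable facial analysis pipeline.
        # Hypothetical structure, not the OpenISS API; only the face
        # detector is backed by a real library (OpenCV).
        import cv2

        class FaceDetector:
            """Detection stage using OpenCV's bundled Haar cascade."""
            def __init__(self):
                path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
                self.cascade = cv2.CascadeClassifier(path)

            def process(self, frame, result):
                gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
                result["faces"] = self.cascade.detectMultiScale(gray, 1.1, 5)
                return result

        class LandmarkDetector:
            """Placeholder: a real module would wrap e.g. dlib or MediaPipe."""
            def process(self, frame, result):
                result["landmarks"] = [None for _ in result.get("faces", [])]
                return result

        class Pipeline:
            """Runs interchangeable modules in order over a shared result dict."""
            def __init__(self, *modules):
                self.modules = modules

            def run(self, frame):
                result = {}
                for module in self.modules:
                    result = module.process(frame, result)
                return result

        # Usage: modules can be swapped without touching the pipeline core.
        pipeline = Pipeline(FaceDetector(), LandmarkDetector())
        # result = pipeline.run(cv2.imread("subject.jpg"))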

    3D Face Reconstruction and Emotion Analytics with Part-Based Morphable Models

    3D face reconstruction and facial expression analytics using 3D facial data are new and active research topics in computer graphics and computer vision. In this proposal, we first review the background for emotion analytics using 3D morphable face models, including geometry feature-based methods, statistical model-based methods, and more advanced deep learning-based methods. Then, we introduce a novel 3D face modeling and reconstruction solution that robustly and accurately acquires 3D face models from a pair of images captured by a single smartphone camera. Two selfie photos of a subject, taken from the front and the side, guide our Non-Negative Matrix Factorization (NMF) induced part-based face model to iteratively reconstruct an initial 3D face of the subject. An iterative detail updating method is then applied to the initial 3D face to reconstruct facial details by optimizing lighting parameters and local depths. Our iterative 3D face reconstruction method permits fully automatic registration of a part-based face representation to the acquired face data and uses the detailed 2D/3D features to build a high-quality 3D face model. The NMF part-based face representation, learned from a 3D face database, facilitates alternating global fitting and adaptive local detail fitting. Our system is flexible and allows users to capture in any uncontrolled environment; we demonstrate this by letting users capture and reconstruct their 3D faces by themselves.

    Based on the reconstructed 3D face model, we can analyze the facial expression and the related emotion in 3D space. We present a novel approach to analyzing facial expressions from images and a quantitative information visualization scheme for exploring this type of visual data. From the reconstruction produced by the NMF part-based morphable 3D face model, basis parameters and a displacement map are extracted as features for facial emotion analysis and visualization. Based on these features, two Support Vector Regressions (SVRs) are trained to determine fuzzy Valence-Arousal (VA) values that quantify the emotions. The continuously changing emotional state can be intuitively analyzed by visualizing the VA values in VA-space. Our emotion analysis and visualization system, based on the 3D NMF morphable face model, detects expressions robustly across various head poses, face sizes, and lighting conditions, and fully automatically computes VA values from images or video sequences with varying facial expressions. To evaluate our method, we test the system on publicly available databases and evaluate the emotion analysis and visualization results. We also apply it to quantifying emotion changes during motivational interviews. These experiments and applications demonstrate the effectiveness and accuracy of our method.

    To improve expression recognition accuracy, we present a facial expression recognition approach based on a 3D Mesh Convolutional Neural Network (3DMCNN), together with a visual analytics guided 3DMCNN design and optimization scheme. The geometric properties of the surface are computed from the 3D face model of a subject with facial expressions. Instead of using a regular Convolutional Neural Network (CNN) to learn intensities of facial images, we convolve the geometric properties on the surface of the 3D model using the 3DMCNN. We design a geodesic distance-based convolution method to overcome the difficulties raised by the irregular sampling of the face surface mesh. We further present an interactive visual analytics scheme for designing and modifying the network, analyzing the learned features, and clustering similar nodes in the 3DMCNN. By removing low-activity nodes, the performance of the network is greatly improved. We compare our method with a regular CNN-based method by interactively visualizing each layer of the networks and analyze its effectiveness through representative cases. Testing on public datasets, our method achieves higher recognition accuracy than traditional image-based CNNs and other 3D CNNs. The presented framework, including the 3DMCNN and the interactive visual analytics of the CNN, can be extended to other applications.
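    The two-SVR valence-arousal step described above can be sketched in a few lines. This is a minimal sketch assuming the NMF basis parameters and displacement-map values have already been flattened into a feature matrix; random placeholders stand in for the real features and annotations.

        # Sketch of the two-SVR valence-arousal estimation: one regressor
        # per affective dimension, as described in the abstract. X stands
        # in for precomputed features (NMF basis parameters plus a
        # flattened displacement map); y_va for ground-truth VA labels.
        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVR

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 64))             # placeholder features
        y_va = rng.uniform(-1, 1, size=(200, 2))   # columns: valence, arousal

        svr_valence = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=1.0))
        svr_arousal = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=1.0))
        svr_valence.fit(X, y_va[:, 0])
        svr_arousal.fit(X, y_va[:, 1])

        # Each frame maps to a point in VA-space, so a video traces a
        # trajectory of emotional state that can be visualized directly.
        va = (svr_valence.predict(X[:1])[0], svr_arousal.predict(X[:1])[0])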
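    The geodesic distance-based convolution can likewise be sketched under simplifying assumptions: geodesic distance is approximated by shortest paths along mesh edges, and the "convolution" is reduced to a distance-weighted average of per-vertex features within a geodesic radius. This illustrates the idea only; it is not the 3DMCNN layer itself.

        # Illustrative geodesic neighborhood aggregation on an irregular
        # mesh: edge-length shortest paths approximate geodesic distance,
        # and each vertex takes a distance-weighted feature average.
        import numpy as np
        from scipy.sparse import csr_matrix
        from scipy.sparse.csgraph import dijkstra

        def geodesic_average(verts, edges, feats, radius):
            n = len(verts)
            w = np.linalg.norm(verts[edges[:, 0]] - verts[edges[:, 1]], axis=1)
            rows = np.concatenate([edges[:, 0], edges[:, 1]])
            cols = np.concatenate([edges[:, 1], edges[:, 0]])
            graph = csr_matrix((np.concatenate([w, w]), (rows, cols)), shape=(n, n))
            dist = dijkstra(graph, directed=False, limit=radius)
            out = np.zeros_like(feats)
            for v in range(n):
                nbrs = np.isfinite(dist[v])
                weights = np.exp(-dist[v, nbrs] / radius)  # nearer vertices weigh more
                out[v] = (weights[:, None] * feats[nbrs]).sum(0) / weights.sum()
            return out

        # Tiny example: a single triangle with one feature channel.
        verts = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0]])
        edges = np.array([[0, 1], [1, 2], [2, 0]])
        feats = np.array([[1.0], [2.0], [3.0]])
        print(geodesic_average(verts, edges, feats, radius=2.0))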

    Benchmarking of Embedded Object Detection in Optical and RADAR Scenes

    A portable, real-time vital sign estimation prototype is developed using neural network-based localization, multi-object tracking, and embedded processing optimizations. The system estimates the heart and respiration rates of multiple subjects using direction-of-arrival techniques on RADAR data, and is useful in many civilian and military applications, including search and rescue. The primary contribution of this work is the implementation and benchmarking of neural networks for real-time detection and localization on various systems, including tests of eight neural networks on a discrete GPU and on Jetson Xavier devices. Mean average precision (mAP) and inference speed benchmarks were performed, showing fast and accurate detection and tracking on synthetic and real RADAR data. Another major contribution is the quantification of the relationship between neural network mAP and data augmentations; as an example, we focused on image and video compression methods such as JPEG, WebP, H264, and H265. The results show that WebP at a quantization level of 50 and H265 at a constant rate factor of 30 provide the best balance between compression and acceptable mAP. Other, minor contributions enhance the functionality of the real-time prototype system, including the implementation and benchmarking of neural network optimizations such as quantization and pruning. Furthermore, an appearance-based synthetic RADAR dataset and a real RADAR dataset are developed; the latter contains simultaneous optical and RADAR data capture with cross-modal labels. Finally, multi-object tracking methods are benchmarked and a support vector machine is used for cross-modal association. In summary, the implementation, benchmarking, and optimization of detection and tracking methods enabled a real-time vital sign system on a low-profile embedded device. Additionally, this work established the relationship between compression methods and different neural networks for optimal file compression and network performance, and implemented methods for RADAR and optical data collection and cross-modal association.
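    A sweep like the one behind the WebP/H265 findings can be sketched as follows: re-encode validation images at several WebP quality levels and record the size/accuracy trade-off. Pillow handles the real re-encoding; evaluate_map is a hypothetical hook standing in for the full mAP evaluation the benchmark itself would supply.

        # Sketch of a compression-vs-mAP sweep. Pillow does the WebP
        # round-trip; evaluate_map is a hypothetical evaluation hook.
        import io
        from PIL import Image

        def recompress_webp(image, quality):
            """Round-trip an image through WebP at a given quality level."""
            buf = io.BytesIO()
            image.save(buf, format="WEBP", quality=quality)
            buf.seek(0)
            return Image.open(buf).convert("RGB"), buf.getbuffer().nbytes

        def sweep(images, detector, evaluate_map, qualities=(30, 50, 70, 90)):
            results = []
            for q in qualities:
                pairs = [recompress_webp(img, q) for img in images]
                compressed = [im for im, _ in pairs]
                size = sum(nbytes for _, nbytes in pairs)
                # Record the trade-off: bytes on disk vs. detection mAP.
                results.append((q, size, evaluate_map(detector, compressed)))
            return results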

    The Functional Architecture of the Brain Underlies Strategic Deception in Impression Management

    Impression management, one of the most essential social skills, impacts one's survival and success in human societies; however, the neural architecture underpinning this skill remains poorly understood. Using a two-person bargaining game, we exposed three strategies, involving distinct cognitive processes, for social impression management with different levels of strategic deception. We employed a novel adaptation of Granger causality accounting for signal-dependent noise (SDN), which captured the directional connectivity underlying impression management during the bargaining game. We found that sophisticated strategists engaged stronger directional connectivity from both the dorsal anterior cingulate cortex and the retrosplenial cortex to the rostral prefrontal cortex, and that the strengths of these directional influences were associated with higher levels of deception during the game. Using the directional connectivity as a neural signature, a machine-learning classifier identified strategic deception with 80% accuracy. These results suggest that different social strategies are supported by distinct patterns of directional connectivity among key brain regions for social cognition.
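    The signal-dependent-noise adaptation of Granger causality is this work's own contribution and is not reproduced here; the sketch below shows only the generic shape of the analysis, using the standard Granger test from statsmodels to build directional-connectivity features and an off-the-shelf classifier, with random data as placeholders throughout.

        # Generic shape of the analysis: standard (not SDN-adapted)
        # Granger causality as a directional-connectivity feature, then
        # a classifier over such features. Placeholder data throughout.
        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import SVC
        from statsmodels.tsa.stattools import grangercausalitytests

        def directional_strength(source, target, maxlag=4):
            """F-statistic for 'source Granger-causes target' at the best lag."""
            data = np.column_stack([target, source])  # column order: effect, cause
            res = grangercausalitytests(data, maxlag=maxlag, verbose=False)
            return max(res[lag][0]["ssr_ftest"][0] for lag in res)

        rng = np.random.default_rng(1)
        # One feature vector per subject, e.g. dACC->rPFC, RSC->rPFC, and
        # the reverse directions; random series stand in for brain signals.
        X = np.array([[directional_strength(rng.normal(size=200),
                                            rng.normal(size=200))
                       for _ in range(4)] for _ in range(40)])
        y = rng.integers(0, 2, size=40)  # strategy label per subject
        print(cross_val_score(SVC(), X, y, cv=5).mean())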

    Linear Regression Models Applied to Imperfect Information Spacecraft Pursuit-evasion Differential Games

    Within satellite rendezvous and proximity operations lie pursuit-evasion differential games between two spacecraft. The extent of possible outcomes can be mathematically bounded by differential games in which each player employs optimal strategies. A linear regression model is developed from a large data set of optimal control solutions. The model maps pursuer relative starting positions to final capture positions and estimates capture time, and is 3.8 times faster than the indirect heuristic method for arbitrary pursuer starting positions on an initial relative orbit about the evader. The linear regression model is shown to be well suited for on-board implementation in autonomous mission planning.
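    The regression surrogate is straightforward to reproduce in shape: an ordinary least squares model mapping a pursuer's relative starting position to the capture position and time. A minimal sketch follows, with random placeholders standing in for the data set of optimal control solutions.

        # Sketch of the surrogate: least squares from relative starting
        # position to capture position and time. Training pairs would
        # come from optimal-control solutions; placeholders used here.
        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(2)
        starts = rng.normal(size=(500, 3))            # relative x, y, z
        labels = starts @ rng.normal(size=(3, 4))     # capture x, y, z, time

        model = LinearRegression().fit(starts, labels)

        # On-board use is one matrix multiply per query, which is where
        # the speedup over re-solving the game comes from.
        x, y, z, t_capture = model.predict(rng.normal(size=(1, 3)))[0]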

    Computer Vision Based Structural Identification Framework for Bridge Health Monitoring

    The objective of this dissertation is to develop a comprehensive Structural Identification (St-Id) framework with damage detection for bridge-type structures using cameras and computer vision technologies. Traditional St-Id frameworks rely on conventional sensors; in this study, the input and output data employed in the St-Id system are acquired through a series of vision-based measurements. The following novelties are proposed, developed, and demonstrated in this project: a) vehicle load (input) modeling using computer vision; b) bridge response (output) measurement using a fully non-contact video/image processing approach; c) image-based structural identification using input-output measurements and new damage indicators. The input (loading) data due to vehicles, such as vehicle weights and vehicle locations on the bridge, are estimated by computer vision algorithms (detection, classification, and localization of objects) applied to video images of the vehicles. Meanwhile, the output data, structural displacements, are obtained by defining and tracking image key-points at the measurement locations. Subsequently, the input and output data sets are analyzed to construct a novel type of damage indicator, named the Unit Influence Surface (UIS). Finally, a new damage detection and localization framework is introduced that does not require a dense network of sensors but only a much smaller number of them. The main research significance is the first development of algorithms that transform measured video images into a form that is highly damage- and change-sensitive for bridge assessment, within the context of Structural Identification with input and output characterization. The study exploits a unique attribute of computer vision systems: the signal is continuous in space. This requires new adaptations and transformations to handle computer vision data/signals for structural engineering applications. This research significantly advances sensor-based structural health monitoring with computer vision techniques, leading to practical applications for damage detection of complex structures with a novel approach. By using computer vision algorithms and cameras as special sensors for structural health monitoring, this study proposes an advanced approach to bridge monitoring through which certain types of data that cannot be collected by conventional sensors, such as vehicle loads and locations, can be obtained practically and accurately.
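    The non-contact displacement measurement step can be sketched with standard optical-flow tracking. A minimal example using OpenCV's Lucas-Kanade tracker follows; the file name and pixel coordinates are hypothetical, and both scale calibration and the UIS construction are omitted.

        # Sketch of vision-based displacement measurement: track one
        # key-point at a measurement location across frames with
        # Lucas-Kanade optical flow. Pixel-to-length calibration and the
        # Unit Influence Surface construction are omitted.
        import cv2
        import numpy as np

        cap = cv2.VideoCapture("bridge.mp4")  # hypothetical recording
        ok, frame = cap.read()
        prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

        # One key-point at a chosen measurement location (pixel coords).
        point = np.array([[[640.0, 360.0]]], dtype=np.float32)
        displacements = []

        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            new_point, status, _ = cv2.calcOpticalFlowPyrLK(
                prev_gray, gray, point, None, winSize=(21, 21), maxLevel=3)
            if status[0][0] == 1:
                # Vertical pixel displacement between consecutive frames.
                displacements.append(float(new_point[0, 0, 1] - point[0, 0, 1]))
                point, prev_gray = new_point, gray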