    A coordinated optical and X-ray spectroscopic campaign on HD179949: searching for planet-induced chromospheric and coronal activity

    HD179949 is an F8V star orbited by a close-in giant planet with a period of ~3 days. Previous studies suggested that the planet enhances the magnetic activity of the parent star, producing a chromospheric hot spot which rotates in phase with the planet's orbit. However, this phenomenon is intermittent, since it was observed in several but not all seasons. Long-term monitoring of the magnetic activity of HD179949 is required to study the amplitude and time scales of star-planet interactions. In 2009 we performed a simultaneous optical and X-ray spectroscopic campaign to monitor the magnetic activity of HD179949 during ~5 orbital periods and ~2 stellar rotations. We analyzed the CaII H&K lines as a proxy for chromospheric activity, and we studied the X-ray emission in search of flux modulations and to determine basic properties of the coronal plasma. A detailed analysis of the flux in the cores of the CaII H&K lines and a similar study of the X-ray photometry show evidence of source variability, including one flare. The analysis of the time series of chromospheric data indicates a modulation with a ~11 day period, compatible with the stellar rotation period at high latitudes. The X-ray light curve, instead, suggests a signal with a period of ~4 days, consistent with the presence of two active regions on opposite hemispheres. The observed variability is most likely explained by rotational modulation and by intrinsic evolution of chromospheric and coronal activity. There is no clear signature related to the orbital motion of the planet, but the possibility that a fraction of the chromospheric and coronal variability is modulated with the orbital period of the planet, or with the star-planet beat period, cannot be excluded. We conclude that any effect due to the presence of the planet is difficult to disentangle from the intrinsic variability of the star.
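    Period searches of the kind described above (the ~11 day chromospheric modulation and the ~4 day X-ray signal) are typically carried out with a periodogram suited to unevenly sampled data. Below is a minimal sketch using astropy's Lomb-Scargle implementation; the epochs, flux values, and period are synthetic placeholders, not the campaign data.

```python
# Minimal sketch of a period search on an unevenly sampled flux time
# series, as used for the chromospheric and X-ray light curves above.
# All data here are synthetic placeholders.
import numpy as np
from astropy.timeseries import LombScargle

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 60.0, 80))        # observation epochs [days]
true_period = 11.0                             # rotation-like signal [days]
flux = (1.0 + 0.05 * np.sin(2 * np.pi * t / true_period)
            + 0.01 * rng.normal(size=t.size))

# Lomb-Scargle handles irregular sampling without interpolation.
frequency, power = LombScargle(t, flux).autopower(
    minimum_frequency=1 / 30.0,                # periods up to 30 days
    maximum_frequency=1 / 1.0,                 # periods down to 1 day
)
best_period = 1.0 / frequency[np.argmax(power)]
print(f"Best-fit period: {best_period:.2f} days")  # recovers ~11 days
```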

    Non-Intrusive Affective Assessment in the Circumplex Model from Pupil Diameter and Facial Expression Monitoring

    Automatic methods for affective assessment seek to enable computer systems to recognize the affective state of their users. This dissertation proposes a system that uses non-intrusive measurements of the user's pupil diameter and facial expression to characterize his/her affective state in the Circumplex Model of Affect. This affective characterization is achieved by estimating the affective arousal and valence of the user's affective state. In the proposed system the pupil diameter signal is obtained from a desktop eye gaze tracker, while the facial expression components, called Facial Animation Parameters (FAPs), are obtained from a Microsoft Kinect module, which also captures the face surface as a cloud of points. Both types of data are recorded 10 times per second. This dissertation implemented pre-processing methods and feature extraction approaches that yield a reduced number of features representative of discrete 10-second recordings, used to estimate the level of affective arousal and the type of affective valence experienced by the user in those intervals. The dissertation uses a machine learning approach, specifically Support Vector Machines (SVMs), as a model that yields estimates of valence and arousal from the features derived from the recorded data. Pupil diameter and facial expression recordings were collected from 50 subjects who volunteered to participate in an FIU IRB-approved experiment to capture their reactions to the presentation of 70 pictures from the International Affective Picture System (IAPS) database, which have been used in large calibration studies and therefore have associated mean arousal and valence values. Additionally, each of the 50 volunteers in the data collection experiment provided their own subjective assessment of the levels of arousal and valence elicited in him/her by each picture. This process resulted in a set of face and pupil data records, along with the expected reaction levels of arousal and valence, i.e., the “labels”, for the data used to train and test the SVM classifiers. The trained SVM classifiers achieved 75% accuracy in valence estimation and 92% accuracy in arousal estimation, confirming the initial viability of non-intrusive affective assessment systems based on pupil diameter and facial expression monitoring.
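    The classification stage described above can be illustrated with a short scikit-learn sketch: one SVM per affective dimension, trained on per-interval feature vectors. The feature matrix, labels, and hyperparameters below are synthetic placeholders, not the dissertation's actual data or settings.

```python
# Minimal sketch of the classification stage: a support vector machine
# mapping per-interval features (pupil diameter and FAP statistics) to
# discrete arousal labels. Features, labels, and hyperparameters are
# synthetic placeholders, not the dissertation's data or settings.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_intervals, n_features = 500, 20                # 10-s recordings x features
X = rng.normal(size=(n_intervals, n_features))   # placeholder feature matrix
y_arousal = (X[:, 0] + 0.3 * rng.normal(size=n_intervals) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y_arousal, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_train, y_train)
print(f"Arousal accuracy: {clf.score(X_test, y_test):.2f}")
# A second SVC, trained the same way on valence labels, would complete
# the (arousal, valence) estimate in the Circumplex Model.
```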

    Computationally efficient deformable 3D object tracking with a monocular RGB camera

    Monocular RGB cameras are present in most settings and devices, including embedded environments like robots, cars and home automation. Most of these environments have in common a significant presence of human operators with whom the system has to interact. This context provides the motivation to use the captured monocular images to improve the understanding of the operator and the surrounding scene for more accurate results and applications. However, monocular images do not have depth information, which is a crucial element in understanding the 3D scene correctly. Estimating the three-dimensional information of an object in the scene from a single two-dimensional image is already a challenge. The challenge grows if the object is deformable (e.g., a human body or a human face) and there is a need to track its movements and interactions in the scene. Several methods attempt to solve this task, including modern regression methods based on Deep Neural Networks. However, despite their strong results, most are computationally demanding and therefore unsuitable for several environments. Computational efficiency is a critical feature for computationally constrained setups like the embedded or onboard systems present in robotics and automotive applications, among others. This study proposes computationally efficient methodologies to reconstruct and track three-dimensional deformable objects, such as human faces and human bodies, using a single monocular RGB camera. To model the deformability of faces and bodies, it considers two types of deformations: non-rigid deformations for face tracking, and rigid multi-body deformations for body pose tracking. Furthermore, it studies their performance on computationally restricted devices like smartphones and the onboard systems used in the automotive industry. The information extracted from such devices gives valuable insight into human behaviour, a crucial element in improving human-machine interaction. We tested the proposed approaches in different challenging application fields like onboard driver monitoring systems, human behaviour analysis from monocular videos, and human face tracking on embedded devices.
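    The abstract does not specify the exact deformation models, but a common way to parameterise non-rigid face deformation of the kind mentioned above is a linear blendshape model projected into the monocular image. The following numpy sketch is purely illustrative; the mean shape, deformation basis, and camera parameters are random placeholders.

```python
# Purely illustrative sketch of a linear blendshape (deformable shape)
# model projected into a monocular image. The mean shape, deformation
# basis, and camera parameters are random placeholders; the thesis's
# actual face and body models are not specified in the abstract.
import numpy as np

rng = np.random.default_rng(0)
n_vertices, n_basis = 100, 10
mean_shape = rng.normal(size=(n_vertices, 3))      # neutral 3D vertices
basis = rng.normal(size=(n_basis, n_vertices, 3))  # deformation directions
coeffs = rng.normal(scale=0.1, size=n_basis)       # per-frame weights

# Non-rigid deformation: mean shape plus a weighted sum of basis shapes.
shape_3d = mean_shape + np.tensordot(coeffs, basis, axes=1)

# Weak-perspective projection: rotate, keep x/y, scale and translate.
R, _ = np.linalg.qr(rng.normal(size=(3, 3)))       # placeholder orthogonal matrix
scale, t2d = 500.0, np.array([320.0, 240.0])       # placeholder camera
shape_2d = scale * (shape_3d @ R.T)[:, :2] + t2d   # pixel coordinates
print(shape_2d.shape)                              # (100, 2)
```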

    A methodology for airplane parameter estimation and confidence interval determination in nonlinear estimation problems

    An algorithm for maximum likelihood (ML) estimation is developed with an efficient method for approximating the sensitivities. The ML algorithm relies on a new optimization method referred to as a modified Newton-Raphson with estimated sensitivities (MNRES). MNRES determines sensitivities by using slope information from local surface approximations of each output variable in parameter space. With the fitted surface, sensitivity information can be updated at each iteration with less computational effort than that required by either a finite-difference method or integration of the analytically determined sensitivity equations. MNRES eliminates the need to derive sensitivity equations for each new model, and thus provides flexibility to use model equations in any convenient format. A random search technique for determining the confidence limits of ML parameter estimates is applied to nonlinear estimation problems for airplanes. The confidence intervals obtained by the search are compared with Cramer-Rao (CR) bounds at the same confidence level. The degree of nonlinearity in the estimation problem is an important factor in the relationship between CR bounds and the error bounds determined by the search technique. Beale's measure of nonlinearity is developed in this study for airplane identification problems; it is used to empirically correct confidence levels and to predict the degree of agreement between CR bounds and search estimates.
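    The core idea of MNRES, estimating sensitivities from the slopes of local surface fits in parameter space rather than from finite differences or analytic sensitivity equations, can be sketched as follows. The model, data, probe settings, and noise level below are toy placeholders, not the airplane equations of motion.

```python
# Toy sketch of the MNRES idea: estimate output sensitivities from the
# slope of a local surface fitted to model evaluations at scattered
# points in parameter space, then take modified Newton-Raphson steps.
import numpy as np

rng = np.random.default_rng(0)

def model(theta, t):
    """Toy nonlinear output: a damped oscillation in two parameters."""
    return np.exp(-theta[0] * t) * np.sin(theta[1] * t)

t = np.linspace(0.0, 5.0, 50)
theta_true = np.array([0.5, 2.0])
sigma = 0.01
z = model(theta_true, t) + sigma * rng.normal(size=t.size)   # measurements

def surface_jacobian(theta, n_pts=8, radius=1e-2):
    """Slope of a linear surface fitted to perturbed model outputs."""
    d_theta = radius * rng.normal(size=(n_pts, theta.size))  # scattered probes
    d_y = np.stack([model(theta + d, t) - model(theta, t) for d in d_theta])
    J, *_ = np.linalg.lstsq(d_theta, d_y, rcond=None)        # (n_par, n_time)
    return J.T                                               # (n_time, n_par)

theta = np.array([0.4, 1.8])            # initial guess
for _ in range(20):                     # modified Newton-Raphson iterations
    r = z - model(theta, t)
    J = surface_jacobian(theta)
    theta += np.linalg.solve(J.T @ J, J.T @ r)
print(theta)                            # should land near [0.5, 2.0]

# Cramer-Rao lower bounds on the parameter standard errors, from the
# inverse Fisher information under a white-noise assumption; the paper's
# random search probes how far true confidence limits deviate from these
# bounds as the problem becomes more nonlinear.
cr_bounds = sigma * np.sqrt(np.diag(np.linalg.inv(J.T @ J)))
print(cr_bounds)
```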

    Deep Learning for Detecting Multiple Space-Time Action Tubes in Videos

    In this work, we propose an approach to the spatiotemporal localisation (detection) and classification of multiple concurrent actions within temporally untrimmed videos. Our framework is composed of three stages. In stage 1, appearance and motion detection networks are employed to localise and score actions from colour images and optical flow. In stage 2, the appearance network detections are boosted by combining them with the motion detection scores, in proportion to their respective spatial overlap. In stage 3, sequences of detection boxes most likely to be associated with a single action instance, called action tubes, are constructed by solving two energy maximisation problems via dynamic programming. In the first pass, action paths spanning the whole video are built by linking detection boxes over time using their class-specific scores and their spatial overlap; in the second pass, temporal trimming is performed by enforcing label consistency across all constituent detection boxes. We demonstrate the performance of our algorithm on the challenging UCF-101, J-HMDB-21 and LIRIS-HARL datasets, achieving new state-of-the-art results across the board and significantly increasing detection speed at test time. Compared to the state of the art, we report gains of 20% and 11% in mAP (mean average precision) on the UCF-101 and J-HMDB-21 datasets, respectively. (Accepted by the British Machine Vision Conference 2016.)
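    The first-pass linking step described above, building action paths that maximise per-box class scores plus spatial overlap between temporally adjacent boxes, can be sketched as a Viterbi-style dynamic program. The boxes and scores below are toy placeholders, not detector outputs.

```python
# Minimal sketch of first-pass action-path linking: choose one box per
# frame so that the sum of class scores plus IoU bonuses between
# consecutive boxes is maximised, via dynamic programming.
import numpy as np

def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def link_boxes(boxes, scores, overlap_weight=1.0):
    """Viterbi-style pass; boxes[t] lists the candidate boxes in frame t."""
    best = [np.asarray(scores[0], dtype=float)]   # best path score per box
    back = []                                     # backpointers per frame
    for t in range(1, len(boxes)):
        trans = np.array([[iou(p, q) for p in boxes[t - 1]] for q in boxes[t]])
        total = best[-1][None, :] + overlap_weight * trans   # (n_t, n_{t-1})
        back.append(total.argmax(axis=1))
        best.append(np.asarray(scores[t]) + total.max(axis=1))
    # Trace the highest-scoring path backwards through the frames.
    path = [int(best[-1].argmax())]
    for bp in reversed(back):
        path.append(int(bp[path[-1]]))
    return path[::-1]                             # chosen box index per frame

boxes = [[(10, 10, 50, 50), (200, 200, 240, 240)],
         [(12, 11, 52, 51), (198, 202, 238, 242)],
         [(15, 12, 55, 52), (60, 60, 100, 100)]]
scores = [[0.9, 0.4], [0.8, 0.5], [0.7, 0.6]]
print(link_boxes(boxes, scores))                  # -> [0, 0, 0]
```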