13 research outputs found

    Design and Comparison of Attitude Control Modes for ESTCube-2

    This thesis presents the attitude control problem of ESTCube-2. ESTCube-2 is a 3U CubeSat with a size of 10 x 10 x 30 cm and a mass of about 4 kg. It is the second satellite to be developed by the ESTCube team and will be equipped with the E-sail payload for the plasma brake experiment, an Earth observation camera, a high-speed communication system, and a cold gas propulsion module. The satellite will use 3 electromagnetic coils, 3 reaction wheels, and the cold gas thruster as actuators. The primary purpose of this work was to develop and compare control laws to fulfill the attitude control requirements of the ESTCube-2 mission. To achieve this, the spacecraft dynamics and environmental models are derived and analyzed. PD-like controllers and LQR optimal controllers are designed to fulfill the pointing requirements of the satellite, in addition to the B-dot detumbling control law. An angular rate control law to spin up the satellite for tether deployment is also derived and presented. Simulations of the different controllers show their performance with disturbances added to the system. Finally, recommendations and optimal control situations are presented based on the results.
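    The B-dot detumbling law mentioned above has a simple closed form: the commanded magnetic dipole opposes the measured rate of change of the body-frame magnetic field, m = -k dB/dt. A minimal sketch (the gain k, sampling interval, and field values below are illustrative, not ESTCube-2 parameters):

```python
import numpy as np

def bdot_dipole(b_now, b_prev, dt, k=1e4):
    """B-dot control: command a magnetic dipole opposing the measured
    rate of change of the body-frame magnetic field (m = -k * dB/dt)."""
    b_dot = (b_now - b_prev) / dt      # finite-difference estimate of dB/dt
    return -k * b_dot

# A tumbling satellite sees the field direction rotate between samples.
b_prev = np.array([2.0e-5, 0.0, 3.0e-5])   # tesla
b_now = np.array([1.9e-5, 0.5e-5, 3.0e-5])
m = bdot_dipole(b_now, b_prev, dt=0.1)     # commanded dipole, A*m^2
torque = np.cross(m, b_now)                # coil torque tau = m x B
```

    Crossing the dipole with the local field gives the coil torque tau = m x B, which drains rotational kinetic energy as the field direction sweeps through the body frame.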

    Anti-Windup Compensator Approach to Nanosatellite Fault Tolerant Architecture

    The anti-windup (AW) compensator in this study is designed to work with control systems experiencing actuator saturation. Working alongside an existing controller, the AW compensator prevents performance degradation during saturation and allows the system to recover optimal performance after saturation. In addition, the fault-tolerant capability of a proposed integrated fault-tolerant architecture is studied with the AW compensator.
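    The abstract does not specify which anti-windup scheme is used; a common choice that matches the description (stop windup during saturation, recover quickly afterwards) is back-calculation. A minimal sketch for a PI controller, with all gains illustrative:

```python
def pi_step(e, state, kp=2.0, ki=1.0, kb=1.0, dt=0.01, u_min=-1.0, u_max=1.0):
    """One step of a PI controller with back-calculation anti-windup:
    when the actuator saturates, the term kb * (u_sat - u) drives the
    integrator back, so it stops accumulating during saturation."""
    u = kp * e + ki * state["i"]               # unsaturated control
    u_sat = max(u_min, min(u_max, u))          # actuator limits
    state["i"] += dt * (e + kb * (u_sat - u))  # back-calculation term
    return u_sat

# Drive a large constant error: with kb > 0 the integrator settles near
# a bounded value instead of winding up without limit.
state = {"i": 0.0}
for _ in range(200):
    u = pi_step(5.0, state)
```

    Without the back-calculation term (kb = 0), the same loop accumulates integrator state for as long as the error persists, producing a large overshoot once the error changes sign.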

    Automatic Recognition of Facial Displays of Unfelt Emotions

    Humans modify their facial expressions in order to communicate their internal states and sometimes to mislead observers regarding their true emotional states. Evidence in experimental psychology shows that discriminative facial responses are short and subtle. This suggests that such behavior would be easier to distinguish when captured in high resolution at an increased frame rate. We propose SASE-FE, the first dataset of facial expressions that are either congruent or incongruent with underlying emotional states. We show that, overall, the problem of recognizing whether facial movements are expressions of authentic emotions or not can be successfully addressed by learning spatio-temporal representations of the data. For this purpose, we propose a method that aggregates features along fiducial trajectories in a deeply learnt space. Performance of the proposed model shows that, on average, it is easier to distinguish among genuine facial expressions of emotion than among unfelt facial expressions of emotion, and that certain emotion pairs, such as contempt and disgust, are more difficult to distinguish than the rest. Furthermore, the proposed methodology improves state-of-the-art results on the CK+ and OULU-CASIA datasets for video emotion recognition, and achieves competitive results when classifying facial action units on the BP4D dataset.
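    The method aggregates deep features along fiducial (landmark) trajectories. As a toy illustration of the data layout only, not the paper's network, one can summarise each landmark's feature track over time and concatenate the results into a clip-level descriptor:

```python
import numpy as np

rng = np.random.default_rng(0)
T, L, D = 30, 68, 16      # frames, facial landmarks, feature dimension (illustrative)
feats = rng.standard_normal((T, L, D))   # per-frame features at each landmark

# Aggregate along each fiducial trajectory: summarise every landmark's
# feature track over time, then concatenate into one clip descriptor.
traj_mean = feats.mean(axis=0)    # (L, D)
traj_std = feats.std(axis=0)      # (L, D)
clip_descriptor = np.concatenate([traj_mean, traj_std], axis=1).ravel()
```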

    Action recognition using single-pixel time-of-flight detection

    Action recognition is a challenging task that plays an important role in many robotic systems, which depend heavily on visual input feeds. However, due to privacy concerns, it is important to find a method that can recognise actions without using a visual feed. In this paper, we propose a concept for detecting actions while preserving the test subject's privacy. Our proposed method relies only on recording the temporal evolution of light pulses scattered back from the scene. The data trace recording one action contains a sequence of one-dimensional arrays of voltage values acquired by a single-pixel detector at a 1 GHz repetition rate. Information about both the distance to the object and its shape is embedded in the traces. We apply machine learning in the form of recurrent neural networks for data analysis and demonstrate successful action recognition. The experimental results show that our proposed method achieves, on average, 96.47% accuracy on the actions walking forward, walking backwards, sitting down, standing up, and waving a hand, using recurrent neural networks.
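    As an illustration of how such traces can feed a recurrent model, here is a toy vanilla-RNN forward pass over a simulated sequence of single-pixel voltage frames (the weights are random and the dimensions illustrative; the paper's trained architecture is not specified here):

```python
import numpy as np

def rnn_forward(trace, Wx, Wh, Wo, b, bo):
    """Vanilla RNN: fold each 1-D voltage frame into a hidden state,
    then map the final state to per-action scores."""
    h = np.zeros(Wh.shape[0])
    for x in trace:                   # one voltage array per light pulse
        h = np.tanh(Wx @ x + Wh @ h + b)
    return Wo @ h + bo                # logits over action classes

rng = np.random.default_rng(1)
seq_len, frame_dim, hidden, n_actions = 50, 64, 32, 5
trace = rng.standard_normal((seq_len, frame_dim))   # simulated data trace
Wx = rng.standard_normal((hidden, frame_dim)) * 0.1
Wh = rng.standard_normal((hidden, hidden)) * 0.1
Wo = rng.standard_normal((n_actions, hidden)) * 0.1
b, bo = np.zeros(hidden), np.zeros(n_actions)
pred = int(np.argmax(rnn_forward(trace, Wx, Wh, Wo, b, bo)))
```

    The five output classes correspond to the five actions listed above; a trained model would learn the weight matrices from labelled traces.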

    Stabilised LQR control and optimised spin rate control for nanosatellites

    This paper presents the design and study of cross-product control, Linear-Quadratic Regulator (LQR) optimal control, and high spin rate control algorithms for the ESTCube-2/3 missions. The three-unit CubeSat is required to spin up in order to centrifugally deploy a 300-m long tether for a plasma brake deorbiting experiment. The algorithm is designed to spin up the satellite to one rotation per second, which is achieved in 40 orbits. The LQR optimal controller is designed based on closed-loop step response with controllability and stability analysis to meet the pointing requirement of less than 0.1° for the Earth observation camera and the high-speed communication system. The LQR is based on linearised satellite dynamics with an actuator model. The preliminary simulation results show that the controllers fulfil the requirements set by the payloads. While ESTCube-1 used only electromagnetic coils for high spin rate control, ESTCube-2 will make use of electromagnetic coils, reaction wheels, and cold gas thrusters to demonstrate technologies for the deep-space mission ESTCube-3. The attitude control algorithms will be demonstrated in low Earth orbit on ESTCube-2 as a stepping stone for ESTCube-3, which is planned to be launched to lunar orbit, where magnetic control is not available. Peer reviewed.
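    The LQR gain referred to above solves a discrete algebraic Riccati equation for the linearised dynamics. A minimal sketch using a fixed-point Riccati iteration on a toy double-integrator axis (the model and weights are illustrative, not the ESTCube-2 dynamics):

```python
import numpy as np

def dlqr_gain(A, B, Q, R, iters=500):
    """Discrete-time LQR: iterate the Riccati recursion to a fixed
    point, then return the state-feedback gain K (u = -K x)."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

# Toy double-integrator axis: state [angle, rate], torque input.
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt**2], [dt]])
Q = np.diag([10.0, 1.0])   # penalise pointing error more than rate
R = np.array([[1.0]])      # penalise actuator effort
K = dlqr_gain(A, B, Q, R)
eigs = np.linalg.eigvals(A - B @ K)   # closed-loop poles
```

    Stability requires all closed-loop eigenvalues to lie inside the unit circle; the Q/R ratio trades pointing accuracy against actuator effort.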

    Ethical AI in facial expression analysis: racial bias

    Facial expression recognition using deep neural networks has become very popular due to its successful performance. However, the datasets used during the development and testing of these methods lack a balanced distribution of races among the sample images. This leaves open the possibility that the methods are biased toward certain races, raising a concern about fairness, and the lack of research investigating racial bias only increases that concern. Moreover, such bias would decrease real-world performance due to poor generalization. For these reasons, in this study we investigated the racial bias within popular state-of-the-art facial expression recognition methods such as Deep Emotion, Self-Cure Network, ResNet50, InceptionV3, and DenseNet121. We compiled an elaborated dataset with images of different races and cross-checked for bias by training the methods on images of some races and testing them on images of people of other races. We observed that the methods are inclined towards the races included in the training data, and that an increase in performance also increases the bias if the training dataset is imbalanced. Some methods can compensate for the bias if enough variance is provided in the training set; however, this does not mitigate the bias completely. Our findings suggest that unbiased performance can be obtained by adding the missing races to the training data in equal proportions.
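    The suggested remedy, equal representation of races in the training data, can be approximated by downsampling every group to the size of the smallest one. A minimal sketch (the `race` key and sample counts are hypothetical):

```python
import random
from collections import Counter

def balance_by_group(samples, key, seed=0):
    """Downsample each group to the size of the smallest one so every
    race (or other protected attribute) is equally represented."""
    rng = random.Random(seed)
    groups = {}
    for s in samples:
        groups.setdefault(key(s), []).append(s)
    n = min(len(g) for g in groups.values())
    balanced = []
    for g in groups.values():
        balanced.extend(rng.sample(g, n))
    return balanced

# Hypothetical imbalanced dataset: 300/120/60 images across three groups.
data = [{"race": "A"}] * 300 + [{"race": "B"}] * 120 + [{"race": "C"}] * 60
counts = Counter(s["race"] for s in balance_by_group(data, key=lambda s: s["race"]))
```

    Downsampling discards data; collecting more images of the under-represented groups, as the study recommends, is preferable when feasible.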
