133 research outputs found

    Design of power device sizing and integration for solar-powered aircraft application

    The power device comprises the PV cells, a rechargeable battery, and a maximum power point tracker (MPPT). Solar aircraft often lack proper power device sizing to provide adequate energy for low- and high-altitude, long-endurance flight. This paper conducts power device sizing and integration for a solar-powered aircraft application (an Unmanned Aerial Vehicle). The solar radiation model, the aerodynamic model, the energy and mass balance model, and the adopted aircraft configuration were used to determine the power device sizing, integration, and application. The input variables were: aircraft mass 3 kg, wingspan 3.2 m, chord 0.3 m, aspect ratio 11.25, solar radiation 825 W/m², lift coefficient 0.913, total drag coefficient 0.047, day time 12 hours, and night time 12 hours. The input variables were incorporated into an MS Excel program to determine the output variables. The output variables were: power required 10.92 W, total electrical power 19.47 W, total electrical energy 465.5 Wh, daily solar energy 578.33 Wh, solar cell area 0.62 m², number of PV cells 32, and number of rechargeable batteries 74. The power device was developed with Maxeon Gen III PV cells for high efficiency, a lithium-sulfur rechargeable battery for high energy density, and a neural-network MPPT algorithm for a smart, efficient response. The power device sizing was validated against three existing designs; the validation results show a 20% reduction in the required number of PV cells and rechargeable batteries and a 30% increase in flight duration.
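The abstract does not reproduce the sizing equations, but the reported power required (10.92 W) is consistent with the standard steady-level-flight power formula applied to the stated inputs. The sketch below is an illustrative check under assumed sea-level air density and standard gravity, neither of which is stated in the abstract.

```python
import math

# Inputs reported in the paper
m = 3.0            # aircraft mass [kg]
b, c = 3.2, 0.3    # wingspan and chord [m]
C_L, C_D = 0.913, 0.047

# Assumed constants (not stated in the abstract)
g = 9.81           # gravitational acceleration [m/s^2]
rho = 1.225        # sea-level air density [kg/m^3]

S = b * c          # wing area [m^2]
# Standard power required for steady level flight:
# P = (C_D / C_L^1.5) * sqrt(2 (m g)^3 / (rho S))
P_req = (C_D / C_L**1.5) * math.sqrt(2 * (m * g)**3 / (rho * S))
print(round(P_req, 2))  # ~11.2 W, close to the paper's 10.92 W
```

The small gap from the reported 10.92 W is plausibly explained by a different assumed density or an efficiency term in the paper's energy balance model.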

    Preparation and evaluation of a module for implementing electrical engineering exhibitions

    This study aimed to produce an Electrical Engineering Exhibition Implementation Module to serve as a guide and reference for the programme's committee members. The module was then evaluated in terms of the form and features suited to user needs. The evaluation was based on four features: the use of flowcharts, sample appendices and Gantt charts; the language used; the module's content; and the need for the module. The study was designed as a sampled survey using a questionnaire as the research instrument. Fifty-nine Electrical Engineering students at Kolej Universiti Teknologi Tun Hussein Onn who were involved in the Electrical Engineering Club programme were randomly selected as respondents. The data were analysed with SPSS Version 11.0 and reported as mean scores and percentages. Overall, the findings show that most respondents agreed with the form and features used in the Electrical Engineering Exhibition Implementation Module: the overall means for all the features studied fell within the high-suitability range of 3.81 to 5.00.

    Infrared face recognition: a comprehensive review of methodologies and databases

    Automatic face recognition is an area with immense practical potential which includes a wide range of commercial and law enforcement applications. Hence it is unsurprising that it continues to be one of the most active research areas of computer vision. Even after over three decades of intense research, the state of the art in face recognition continues to improve, benefitting from advances in a range of different research fields such as image processing, pattern recognition, computer graphics, and physiology. Systems based on visible spectrum images, the most researched face recognition modality, have reached a significant level of maturity with some practical success. However, they continue to face challenges in the presence of illumination, pose and expression changes, as well as facial disguises, all of which can significantly decrease recognition accuracy. Amongst the various approaches which have been proposed in an attempt to overcome these limitations, the use of infrared (IR) imaging has emerged as a particularly promising research direction. This paper presents a comprehensive and timely review of the literature on this subject. Our key contributions are: (i) a summary of the inherent properties of infrared imaging which make this modality promising in the context of face recognition, (ii) a systematic review of the most influential approaches, with a focus on emerging common trends as well as key differences between alternative methodologies, (iii) a description of the main databases of infrared facial images available to the researcher, and lastly (iv) a discussion of the most promising avenues for future research. (Comment: Pattern Recognition, 2014. arXiv admin note: substantial text overlap with arXiv:1306.160)

    Self-adversarial Multi-scale Contrastive Learning for Semantic Segmentation of Thermal Facial Images

    Segmentation of thermal facial images is a challenging task, because facial features often lack salience due to high-dynamic-range thermal scenes and occlusion issues. The limited availability of datasets from unconstrained settings further limits the use of state-of-the-art segmentation networks, loss functions and learning strategies that have been built and validated for RGB images. To address this challenge, we propose the Self-Adversarial Multi-scale Contrastive Learning (SAM-CL) framework as a new training strategy for thermal image segmentation. The SAM-CL framework consists of a SAM-CL loss function and a thermal image augmentation (TiAug) module as a domain-specific augmentation technique. We use the Thermal-Face-Database to demonstrate the effectiveness of our approach. Experiments conducted on existing segmentation networks (UNET, Attention-UNET, DeepLabV3 and HRNetv2) evidence consistent performance gains from the SAM-CL framework. Furthermore, we present a qualitative analysis with the UBComfort and DeepBreath datasets to discuss how our proposed methods perform in handling unconstrained situations. (Comment: Accepted at the British Machine Vision Conference (BMVC), 202)

    A system for recognizing human emotions based on speech analysis and facial feature extraction: applications to Human-Robot Interaction

    With advances in Artificial Intelligence, humanoid robots have started to interact with ordinary people on the basis of a growing understanding of psychological processes. Accumulating evidence in Human-Robot Interaction (HRI) suggests that research is focusing on emotional communication between human and robot, to create social perception, cognition, desired interaction and sensation. Furthermore, robots need to perceive human emotion and optimize their behavior to help and interact with human beings in various environments. The most natural way to recognize basic emotions is to extract sets of features from human speech, facial expression and body gesture. A system for recognizing emotions based on speech analysis and facial feature extraction can have interesting applications in Human-Robot Interaction. Thus, the Human-Robot Interaction ontology explains how knowledge from these fundamental sciences is applied in physics (sound analysis), mathematics (face detection and perception), philosophical theory (behavior) and the robotic science context. In this project, we carry out a study to recognize basic emotions (sadness, surprise, happiness, anger, fear and disgust), and we propose a methodology and a software program for classifying emotions based on speech analysis and facial feature extraction. The speech analysis phase investigated the appropriateness of using acoustic (pitch value, pitch peak, pitch range, intensity and formant) and phonetic (speech rate) properties of emotive speech with the freeware program PRAAT, and consists of generating and analyzing a graph of speech signals. The proposed architecture investigated the appropriateness of analyzing emotive speech with minimal use of signal processing algorithms.
Thirty participants in the experiment had to repeat five sentences in English (with durations typically between 0.40 s and 2.5 s) in order to extract data on pitch (value, range and peak) and rising-falling intonation. Pitch alignments (peak, value and range) were evaluated and the results compared with intensity and speech rate. The facial feature extraction phase uses a mathematical formulation (Bézier curves) and geometric analysis of the facial image, based on measurements of a set of Action Units (AUs), to classify the emotion. The proposed technique consists of three steps: (i) detecting the facial region within the image, (ii) extracting and classifying the facial features, and (iii) recognizing the emotion. The new data were then merged with reference data in order to recognize the basic emotion. Finally, we combined the two proposed algorithms (speech analysis and facial expression) to design a hybrid technique for emotion recognition. This technique has been implemented in a software program, which can be employed in Human-Robot Interaction. The efficiency of the methodology was evaluated by experimental tests on 30 individuals (15 female and 15 male, 20 to 48 years old) from different ethnic groups, namely: (i) ten European adults, (ii) ten Asian (Middle East) adults and (iii) ten American adults. Ultimately, the proposed technique made it possible to recognize the basic emotion in most cases.
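The abstract names Bézier curves for modelling facial feature contours but does not give the formulation. The following is a minimal sketch of evaluating a Bézier curve via de Casteljau's algorithm; the control points are hypothetical, standing in for detected facial landmarks (e.g. an upper-lip contour).

```python
def de_casteljau(points, t):
    """Evaluate a Bezier curve at parameter t in [0, 1] from its control points."""
    pts = [list(p) for p in points]
    while len(pts) > 1:
        # Repeatedly interpolate between consecutive control points
        pts = [[(1 - t) * a + t * b for a, b in zip(p, q)]
               for p, q in zip(pts, pts[1:])]
    return tuple(pts[0])

# Hypothetical cubic control points approximating a lip contour (pixel coords)
lip = [(0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)]
print(de_casteljau(lip, 0.5))  # curve midpoint: (2.0, 1.5)
```

Sampling such curves at several `t` values gives a compact geometric description of a feature's shape, from which distances between landmarks (the basis of Action Unit measurements) can be computed.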

    RGB-D And Thermal Sensor Fusion: A Systematic Literature Review

    In the last decade, the computer vision field has seen significant progress in multimodal data fusion and learning, where multiple sensors, including depth, infrared, and visual, are used to capture the environment across diverse spectral ranges. Despite these advancements, there has been no systematic and comprehensive evaluation of fusing RGB-D and thermal modalities to date. While autonomous driving using LiDAR, radar, RGB, and other sensors has garnered substantial research interest, along with the fusion of RGB and depth modalities, the integration of thermal cameras and, specifically, the fusion of RGB-D and thermal data, has received comparatively less attention. This might be partly due to the limited number of publicly available datasets for such applications. This paper provides a comprehensive review of both state-of-the-art and traditional methods used in fusing RGB-D and thermal camera data for various applications, such as site inspection, human tracking, fault detection, and others. The reviewed literature has been categorised into technical areas, such as 3D reconstruction, segmentation, object detection, available datasets, and other related topics. Following a brief introduction and an overview of the methodology, the study delves into calibration and registration techniques, then examines thermal visualisation and 3D reconstruction, before discussing the application of classic feature-based techniques as well as modern deep learning approaches. The paper concludes with a discourse on current limitations and potential future research directions. It is hoped that this survey will serve as a valuable reference for researchers looking to familiarise themselves with the latest advancements and contribute to the RGB-DT research field. (Comment: 33 pages, 20 figures)

    Mobile Thermography-based Physiological Computing for Automatic Recognition of a Person’s Mental Stress

    This thesis explores the use of Mobile Thermography, a significantly less investigated sensing capability, with the aim of reliably extracting a person's multiple physiological signatures and recognising mental stress in an automatic, contactless manner. Mobile thermography has greater potential for real-world applications because of its light weight and low computational cost. In addition, thermography itself does not require the sensors to be worn directly on the skin; it raises fewer privacy concerns and is less sensitive to ambient lighting conditions. The work presented in this thesis is structured through a three-stage approach that aims to address the following challenges: i) thermal image processing for mobile thermography in variable thermal range scenes; ii) creation of rich and robust physiological measurements; and iii) automated stress recognition based on such measurements. Through the first stage (Chapter 4), this thesis contributes new processing techniques to address the negative effects of environmental temperature changes upon automatic tracking of regions of interest and measurement of surface temperature patterns. In the second stage (Chapters 5, 6 and 7), the main contributions are: robustness in tracking respiratory and cardiovascular thermal signatures in both constrained and unconstrained settings (e.g. respiration: strong correlation with ground truth, r = 0.9987), and investigation of novel cortical thermal signatures associated with mental stress.
The final stage (Chapters 8 and 9) contributes automatic stress inference systems that focus on capturing richer dynamic information about physiological variability: firstly, a novel respiration representation-based system (which achieved state-of-the-art performance: 84.59% accuracy, two stress levels), and secondly, a novel cardiovascular representation-based system using short-term measurements of nasal thermal variability together with heart rate variability from another sensing channel (78.33% accuracy from 20-second measurements). Finally, this thesis contributes software libraries and incrementally built labelled datasets of thermal images in both constrained and everyday ubiquitous settings. These are used to evaluate the performance of the proposed computational methods across the three stages.

    Development and evaluation of thermal imaging techniques for non-contact respiration monitoring.

    Respiration rate is one of the main indicators of an individual's health, and therefore it requires accurate quantification. Its value can be used to predict life-threatening conditions such as sudden infant death syndrome and heart attacks. Current respiration rate monitoring methods are contact based, i.e. a sensing device needs to be attached to the person's body. Physically constraining infants and young children with a sensing device can be stressful, which in turn affects their respiration rate. Therefore, measuring respiration rate in a non-contact manner (i.e. without attaching the sensing device to the subject) has distinct benefits. Currently there is no non-contact respiration rate monitoring available for use in the medical field. The aim of this study was to investigate thermal imaging as a means of non-contact respiration rate monitoring. Thermal imaging is safe and easy to deploy. Twenty children, aged from 6 months to 17 years, were enrolled in the study at Sheffield Children's Hospital. They slept comfortably in a bed during the recordings. A high-resolution, high-sensitivity (0.08 kelvin) thermal camera (FLIR A40) was used for the recordings. The image capture rate was 50 frames per second and the recording duration per subject was two minutes (i.e. 6000 image frames). A median digital low-pass filter was used to remove unwanted frequency components from the images. An important issue was to localize and track the area centered on the tip of the nose (i.e. the respiration region of interest, ROI). A number of approaches were developed for this purpose. The most effective approach was to use the warmest facial point (i.e. the point where the bridge of the nose meets the corner of one of the eyes). A novel method to analyse the selected ROI was devised. This involved segmenting the ROI into eight equal segments centred on the tip of the nose.
A respiration signal was produced for each segment across the 6000 recorded images from each subject. The study demonstrated that dividing the ROI into eight segments improves determination of respiration rate. The respiration signals were processed in both the time and frequency domains to determine respiration rates for the 20 subjects included in the study; the respiration values obtained from the two domains were close. During each recording, respiration rate was also monitored using conventional contact methods (e.g. a nostril thermistor and abdomen and chest movement sensors). There was a close correlation (correlation coefficient 0.99) between respiration values obtained by thermal imaging and those obtained using the conventional contact methods. The novel aspects of the study relate to the development of techniques that establish thermal imaging as an effective non-contact respiration rate monitor in both normal and patient subject groups.
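The frequency-domain processing described above can be sketched as finding the dominant spectral peak of the ROI temperature trace. The signal below is synthetic (the study's recordings are not available) and the 0.1-1.0 Hz search band is an assumed plausible breathing range; the 50 Hz frame rate and two-minute duration match the abstract.

```python
import numpy as np

np.random.seed(0)
fs = 50.0                    # camera frame rate [Hz], as in the study
t = np.arange(6000) / fs     # two-minute recording (6000 frames)

# Synthetic mean ROI temperature: 0.3 Hz breathing (18 breaths/min) plus noise
roi_temp = 34.0 + 0.1 * np.sin(2 * np.pi * 0.3 * t) \
                + 0.02 * np.random.randn(t.size)

x = roi_temp - roi_temp.mean()           # remove the DC (baseline) component
spectrum = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(x.size, d=1 / fs)

band = (freqs > 0.1) & (freqs < 1.0)     # assumed plausible breathing band
f_resp = freqs[band][np.argmax(spectrum[band])]
print(round(f_resp * 60))                # dominant frequency -> breaths per minute
```

In the study this estimate would be computed per ROI segment and cross-checked against the time-domain (peak-counting) value.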