A Comparative Emotions-detection Review for Non-intrusive Vision-Based Facial Expression Recognition
Affective computing advocates for the development of systems and devices that can recognize, interpret, process, and simulate human emotion. In computing, the field seeks to enhance the user experience by finding less intrusive automated solutions. However, initiatives in this area focus on solitary emotions, which limits the scalability of the approaches. Prior reviews in this area have likewise focused on solitary emotions, making their recommendations difficult for future researchers to adopt. This review aims to highlight gaps in the application areas of Facial Expression Recognition techniques by conducting a comparative analysis of the emotion detection datasets, algorithms, and results reported in existing studies. The systematic review adopted the PRISMA model and analyzed eighty-three publications. Findings show that different emotions call for different Facial Expression Recognition techniques, which should be analyzed when conducting Facial Expression Recognition.
Keywords: Facial Expression Recognition, Emotion Detection, Image Processing, Computer Vision
Enhancing Performance of Low-Cost Sensors Using an Infant Care Usecase
The drive toward citizen observatories, remote monitoring, and early warning systems has resulted in numerous Internet of Things (IoT) innovations. However, the affordability and availability of these solutions challenge their sustainability in the areas where they are needed most. While low-cost sensors address this challenge, their reliability is often questionable. In this respect, this study set out to evaluate techniques that can enhance the efficiency of low-cost sensors in a bid to identify ways of developing sustainable IoT solutions. Experiments conducted using an infant postnatal care prototype demonstrate the potential of the identified techniques. The results showed that sensor calibration, configuration, fabrication, fusion, and improvisation techniques have the potential to enhance the quality of low-cost sensors. Future work in this area will scale the solution to other use cases.
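As an illustrative sketch only, not the study's implementation, the snippet below shows one common calibration technique for a low-cost sensor: fitting a linear correction against co-located readings from a reference instrument. The sample values and variable names are hypothetical.

```python
# Hedged sketch: linear calibration of a low-cost temperature sensor against a
# reference instrument using least squares. All data values are hypothetical.
import numpy as np

# Co-located readings (degrees Celsius): low-cost sensor vs. trusted reference.
raw_readings = np.array([35.1, 36.0, 36.8, 37.5, 38.2, 39.0])
reference = np.array([35.6, 36.4, 37.0, 37.6, 38.1, 38.7])

# Fit reference = gain * raw + offset (ordinary least squares).
gain, offset = np.polyfit(raw_readings, reference, deg=1)

def calibrate(raw):
    """Apply the fitted linear correction to a raw sensor reading."""
    return gain * raw + offset

print(f"gain={gain:.3f}, offset={offset:.3f}")
print("corrected 37.0 ->", round(calibrate(37.0), 2))
```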
Optimizing Hyperparameters for Enhanced LSTM-Based Prediction System Performance
This research paper explores the application of deep learning and supervised machine learning algorithms, specifically Long Short-Term Memory (LSTM), to stock market prediction. The study focuses on the closing prices of three companies (Tata Steel, Apple, and Powergrid), using a dataset sourced from Yahoo Finance. Performance of the LSTM model was evaluated using RMSE, MAPE, and accuracy metrics, along with hyperparameter calibration to determine the optimal model parameters. The findings indicate that a single-layer LSTM model outperformed a multilayer LSTM model across all companies and evaluation metrics. Furthermore, a comparison with existing research demonstrated the superiority of the proposed model. The study emphasizes the effectiveness of LSTM models for stock price prediction, underscores the significance of proper hyperparameter tuning for optimal performance, and concludes that a single-layer LSTM model can yield superior results compared to a multilayer model.
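A minimal sketch of the kind of single-layer LSTM pipeline described above, not the authors' exact code: the window length, layer size, epochs, and the input file name are illustrative assumptions.

```python
# Minimal single-layer LSTM for closing-price prediction (illustrative sketch).
# Hyperparameters and the input file are assumptions, not the paper's values.
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

def make_windows(series, window=60):
    """Turn a 1-D price series into (samples, window, 1) inputs and next-step targets."""
    X, y = [], []
    for i in range(len(series) - window):
        X.append(series[i:i + window])
        y.append(series[i + window])
    return np.array(X)[..., np.newaxis], np.array(y)

closing_prices = np.loadtxt("closing_prices.csv")          # hypothetical input file
scaler = MinMaxScaler()
scaled = scaler.fit_transform(closing_prices.reshape(-1, 1)).ravel()
X, y = make_windows(scaled)

model = Sequential([
    LSTM(50, input_shape=(X.shape[1], 1)),  # a single LSTM layer, as favoured by the study
    Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=20, batch_size=32, validation_split=0.1)

# Evaluate on the last few windows, converting predictions back to price scale.
preds = scaler.inverse_transform(model.predict(X[-10:])).ravel()
truth = scaler.inverse_transform(y[-10:].reshape(-1, 1)).ravel()
print("RMSE on the last 10 windows:", float(np.sqrt(np.mean((preds - truth) ** 2))))
```

In practice, hyperparameter calibration of the kind the paper reports would wrap this model in a search over units, window size, batch size, and epochs, selecting the combination with the lowest validation RMSE or MAPE.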
Virosome: A vector in vaccine delivery
Vaccines are currently considered the most economical and effective technique for preventing and managing infectious diseases such as human papillomavirus (HPV) and meningitis, as well as allergies, autoimmune disorders, microbial infections, and many viral diseases. Millions of lives are saved each year from deadly infections and related disorders through proper immunization with vaccines. However, many diseases are not yet preventable by vaccination, and current vaccines could be improved through several approaches. One of these is the virosome, which is produced when purified haemagglutinin and neuraminidase projections from the influenza virus surface are removed from the viral envelope and deposited on the exterior of unilamellar liposomes. Studies have shown that in order to attain the desired therapeutic effect, a physical attachment between the target antigen and the virosomal carrier is necessary. A virosome can bind to and “infect” host cells, delivering the antigen directly into the antigen-processing pathway; alternatively, the virosome may be phagocytosed by an antigen-presenting cell (APC). This review presents the virosome as a vector technology for vaccine delivery, covering its properties, methods of preparation, and importance, as well as applications of different virosomal vaccine discoveries.
Localization of Multi-Class On-Road and Aerial Targets Using mmWave FMCW Radar
mmWave radars play a vital role in autonomous systems such as unmanned aerial vehicles (UAVs), unmanned surface vehicles (USVs), and ground station control and monitoring systems. A challenging task when using mmWave radars is estimating an accurate angle of arrival (AoA) for targets, due to the limited number of receivers. In this paper, we present a novel AoA estimation technique using mmWave FMCW radars operating in the 77–81 GHz frequency range and utilizing mechanical rotation. Rotating the radar also increases the field of view in both azimuth and elevation. The proposed method estimates the AoA of targets using only a single transmitter and receiver. Measurements are carried out in a variety of practical scenarios, including pedestrians, a car, and a UAV (drone). From the measured data, range-angle maps are created, and morphological operators are used to estimate the AoA of the targets. We also process the radar range-angle images for improved visual representation. The proposed method will be extremely beneficial for practical ground stations and for traffic control and monitoring frameworks for both on-ground and airborne vehicles.
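The abstract does not give implementation details, but the general flow it describes (range profiles collected at each mechanical rotation angle, assembled into a range-angle map, cleaned with morphological operators, and reduced to target AoAs) can be sketched as follows. The data source, threshold, and structuring-element size are assumptions.

```python
# Sketch of AoA estimation from a mechanically rotated single-channel FMCW radar.
# The recordings, threshold, and structuring element are illustrative assumptions.
import numpy as np
from scipy import ndimage

# Suppose beat-signal frames were recorded at each rotation angle:
# iq_frames has shape (num_angles, num_samples); angles_deg has shape (num_angles,).
iq_frames = np.load("iq_frames.npy")          # hypothetical recording
angles_deg = np.load("angles_deg.npy")

# Range FFT per rotation angle -> normalized range-angle magnitude map.
range_profiles = np.abs(np.fft.rfft(iq_frames, axis=1))
range_angle_map = range_profiles / range_profiles.max()

# Threshold the map and apply morphological opening to suppress isolated speckle.
binary = range_angle_map > 0.4                                 # assumed threshold
binary = ndimage.binary_opening(binary, structure=np.ones((3, 3)))

# Treat each connected component as one target; its centroid gives the AoA and range bin.
labels, num_targets = ndimage.label(binary)
for centroid in ndimage.center_of_mass(binary, labels, range(1, num_targets + 1)):
    angle_idx, range_bin = centroid
    print(f"target at AoA ~{angles_deg[int(round(angle_idx))]:.1f} deg, "
          f"range bin {int(round(range_bin))}")
```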
Recommended from our members
Development of SARS CoV-2 entry inhibitors: Peptide-derived inhibitors of receptor binding domain and viral fusion
Responsible for millions of deaths and hospitalizations in the U.S. alone, the COVID-19 pandemic had an immense impact on all lives, creating an urgent need for effective interventions. COVID-19 is caused by the SARS-CoV-2 virus, which contains a spike (S) protein responsible for viral fusion to human ACE2 receptors (hACE2), thereby infecting the cell. This project aimed to inhibit this viral fusion by testing peptide mimics of the S protein designed either to block receptor binding domains (RBDs) or heptad repeat (HR) regions on the S protein. In the experiment, cells expressing hACE2 were plated in 24-well plates and grown to confluency overnight. The following day, a pseudovirus expressing the SARS-CoV-2 spike (S) protein and green fluorescent protein (GFP) was incubated with the peptides for fifteen minutes and then added to the cells for one hour. Flow cytometry was conducted the next day to assess viral infectivity and the peptides' percent inhibition. The results showed that the peptides were ineffective in inhibiting infection, likely due to the need for higher peptide concentrations and insufficient peptide binding affinity. These findings suggest that future research should explore optimizing peptide concentration and binding efficiency to improve inhibitory effects.
Keywords: COVID-19, SARS-CoV-2, peptide mimics, spike protein, viral fusion
Bibliometric Review on Liver and Tumour Segmentation using Deep Learning
The liver is one of the major organs in the body and a frequent site of tumors. Malignant liver tumors pose a serious hazard to human life and health. Manual segmentation of the liver and its tumors from computed tomography (CT) scans is difficult, time-consuming, and biased by the clinician's experience, yet it is essential for hepatic surgical planning. Segmenting liver tumors from CT images is difficult for several reasons: the contrast between the tumor and healthy tissue is low and the boundary is indistinct, and the appearance of liver tumors varies widely in size, shape, and location. Among the many medical imaging techniques developed over the years, each with its own advantages and disadvantages, such as MRI, ultrasonography (US), and computed tomography (CT), CT is often preferred due to its high sensitivity (93%) and specificity (100%).
Vision-Autocorrect: A Self-Adapting Approach towards Relieving Eye-Strain Using Facial-Expression Recognition
The last two years have seen a rapid rise in the amount of time that both adults and children spend on screens, driven by the recent COVID-19 pandemic. A key adverse effect is digital eye strain (DES). Recent trends in human-computer interaction and user experience have proposed voice- or gesture-guided designs that present more effective and less intrusive automated solutions. These approaches inspired the design of a solution that uses facial expression recognition (FER) techniques to detect DES and autonomously adapt the application to enhance the user’s experience. This study sourced and adapted popular open FER datasets for DES studies, trained convolutional neural network models for DES expression recognition, and designed a self-adaptive solution as a proof of concept. Initial experimental results yielded a model with an accuracy of 77% and resulted in the adaptation of the user application based on the FER classification results. We also provide the developed application, model source code, and adapted dataset for further improvements in the area. Future work should focus on detecting posture, ergonomics, and distance from the screen.
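A minimal sketch of the kind of CNN classifier described above; the architecture, input size, and the two-class labelling are assumptions for illustration, and the released model and dataset should be consulted for the actual setup.

```python
# Illustrative CNN for facial-expression-based eye-strain (DES) detection.
# Architecture and the two-class setup ("strained" vs. "not strained") are assumptions.
from tensorflow.keras import layers, models

def build_des_model(input_shape=(48, 48, 1), num_classes=2):
    """Small CNN in the spirit of common FER baselines."""
    return models.Sequential([
        layers.Conv2D(32, 3, activation="relu", input_shape=input_shape),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),
    ])

model = build_des_model()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()

# In a self-adapting application, the predicted class would then drive UI changes,
# e.g. increasing font size or dimming the screen when strain is detected.
```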
Breathing Rate Classification Using Piezoresistive Sensor Utilizing Continuous Wavelet Transform and Lightweight CNN
Breathing rate can now be monitored remotely due to advancements in digital stethoscope sensor technology, signal processing, and machine learning. Automatic breathing rate classification, in turn, provides additional benefits in medical diagnostics. In this paper, a lightweight convolutional neural network is proposed for automatic breathing rate classification utilizing piezoresistive sensor data. In the proposed work, the raw signals from the piezoresistive sensor are pre-processed using a continuous wavelet transform to generate the corresponding images. These images are then fed into a lightweight convolutional neural network, which efficiently classifies the breathing rate into six classes based on the number of breaths per minute. Through extensive experiments, we show that the proposed model achieves a classification accuracy of 96.40%, which is higher than all the benchmark models considered in this paper. We also evaluate the performance of the proposed model on edge computing devices such as the Raspberry Pi, Nvidia AGX Xavier, and Nvidia Jetson Nano.
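A hedged sketch of the described pipeline, from a raw piezoresistive window to a CWT scalogram image to a small six-class CNN; the wavelet, scales, sampling rate, and layer sizes are illustrative assumptions rather than the paper's configuration.

```python
# Sketch: continuous wavelet transform of a piezoresistive breathing signal,
# followed by a lightweight CNN classifying six breathing-rate classes.
# Wavelet choice, scales, and network sizes are assumptions.
import numpy as np
import pywt
from tensorflow.keras import layers, models

def signal_to_scalogram(signal, scales=np.arange(1, 65), wavelet="morl"):
    """Return a (scales, time) CWT magnitude image for one breathing-signal window."""
    coeffs, _ = pywt.cwt(signal, scales, wavelet)
    scalogram = np.abs(coeffs)
    return scalogram / (scalogram.max() + 1e-9)

# Hypothetical 10 s window sampled at 100 Hz (synthetic breathing-like signal).
window = np.sin(2 * np.pi * 0.3 * np.arange(0, 10, 0.01)) + 0.05 * np.random.randn(1000)
image = signal_to_scalogram(window)[..., np.newaxis]        # shape (64, 1000, 1)

model = models.Sequential([
    layers.Conv2D(16, 3, activation="relu", input_shape=image.shape),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.GlobalAveragePooling2D(),                        # keeps the model lightweight
    layers.Dense(6, activation="softmax"),                  # six breaths-per-minute classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

A model of this size keeps the parameter count small enough for inference on edge devices such as the Raspberry Pi or Jetson Nano, which is the deployment setting the paper evaluates.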