Human mobility monitoring in very low resolution visual sensor network
This paper proposes an automated system for monitoring mobility patterns using a network of very low resolution visual sensors (30×30 pixels). The use of very low resolution sensors reduces privacy concerns, cost, computational requirements, and power consumption. The core of our proposed system is a robust people tracker that uses the low resolution videos provided by the visual sensor network. The distributed processing architecture of our tracking system allows all image processing tasks to be done on the digital signal controller in each visual sensor. In this paper, we experimentally show that reliable tracking of people is possible using very low resolution imagery. We also compare the performance of our tracker against a state-of-the-art tracking method and show that our method outperforms it. Moreover, mobility statistics such as total distance traveled and average speed derived from the trajectories are compared with those derived from ground truth given by Ultra-Wide Band sensors. The results of this comparison show that the trajectories from our system are accurate enough to obtain useful mobility statistics.
Non-contact respiration monitoring using optical sensors
The main goal of this work is to classify approaches to non-contact respiration monitoring and to propose a structure for a monitoring system with rejection of facial-expression artifacts. All available techniques were divided into two main groups: methods that reconstruct respiration from a 3-D image of the subject and methods based on 2-D image processing. A structure for a respiration monitoring system based on optical sensors, with the ability to remove facial-expression artifacts, was developed. The new approach improves respiration monitoring for subjects lying supine and in a sitting position.
Development and Validation of a Three-Dimensional Optical Imaging System for Chest Wall Deformity Measurement
Congenital chest wall deformities (CWD) are malformations of the thoracic cage that become more pronounced during early adolescence. Pectus excavatum (PE) is the most common CWD, characterized by an inward depression of the sternum and adjacent costal cartilage. Cross-sectional computed tomography (CT) images are mainly used to calculate thoracic indices. Physicians use these indices to quantify PE deformity, prescribe surgical or non-surgical therapies, and evaluate treatment outcomes. However, the use of CT increasingly concerns physicians because of the radiation doses administered to young patients, and repeated radiographic imaging is an unsafe and expensive way to evaluate non-surgical treatments involving gradual chest wall changes. Flexible tape or a dowel-shaped ruler can be used to measure changes on the anterior side of the thorax; however, these methods are subjective, prone to human error, and cannot accurately measure small changes. This study aims to fill this gap by exploring three-dimensional optical imaging techniques to capture patients' chest surfaces.
The dissertation describes the development and validation of a cost-effective and safe method for objectively evaluating treatment progress in children with chest deformities.
First, a study was conducted to evaluate the performance of low-cost 3D scanning technologies in measuring the severity of CWD. Second, a multitemporal surface mesh registration pipeline was developed for aligning 3D torso scans taken at different clinical appointments. Surface deviations were assessed between closely aligned scans. Optical indices were calculated without exposing patients to ionizing radiation, and changes in chest shape were visualized on a color-coded heat map. Additionally, a statistical model of chest shape built from healthy subjects was proposed to assess progress toward normal chest and aesthetic outcomes.
The system was validated with 3D and CT datasets from a multi-institutional cohort. The findings indicate that optical scans can detect differences on a millimeter scale, and optical indices can be applied to approximate radiographic indices. In addition to improving patient awareness, visual representations of changes during nonsurgical treatment can enhance patient compliance.
Fusion of Unobtrusive Sensing Solutions for Home-Based Activity Recognition and Classification using Data Mining Models and Methods
This paper proposes the fusion of Unobtrusive Sensing Solutions (USSs) for human Activity Recognition and Classification (ARC) in home environments. It also considers the use of data mining models and methods for cluster-based analysis of datasets obtained from the USSs. The ability to recognise and classify activities performed in home environments can help monitor health parameters in vulnerable individuals. This study addresses five principal concerns in ARC: (i) users' privacy, (ii) wearability, (iii) data acquisition in a home environment, (iv) actual recognition of activities, and (v) classification of activities from single to multiple users. Timestamp information from contact sensors mounted at strategic locations in a kitchen environment helped obtain the time, location, and activity of 10 participants during the experiments. A total of 11,980 thermal blobs gleaned from privacy-friendly USSs, such as ceiling and lateral thermal sensors, were fused using data mining models and methods. Experimental results demonstrated cluster-based activity recognition, classification, and fusion of the datasets with an average regression coefficient of 0.95 for the tested features and clusters. In addition, a pooled mean accuracy of 96.5% was obtained using classification-by-clustering and statistical methods for models such as Neural Network, Support Vector Machine, K-Nearest Neighbour, and Stochastic Gradient Descent on the evaluation test set.
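The classification-by-clustering idea mentioned in the abstract above can be sketched as follows. This is a hypothetical Python illustration on synthetic data, not the paper's pipeline or dataset; the feature layout and cluster count are assumptions made for the sketch.

```python
# Hypothetical sketch of classification by clustering: cluster unlabeled
# sensor-derived features, then score how well the clusters correspond to
# the known activity labels. All data below are synthetic stand-ins.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(1)

# Synthetic stand-in for thermal-blob features (e.g. centroid, area, time);
# three well-separated activity groups of 50 samples each.
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(50, 3)) for c in (0, 2, 4)])
y_true = np.repeat([0, 1, 2], 50)  # activity labels, used for scoring only

# Cluster without labels, then evaluate cluster-label agreement.
labels = KMeans(n_clusters=3, n_init=10, random_state=1).fit_predict(X)
print("adjusted Rand index:", round(adjusted_rand_score(y_true, labels), 2))
```

The point of the design is that the clustering step never sees the labels; the labels enter only when measuring how well the discovered clusters match the annotated activities.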
Imparting 3D representations to artificial intelligence for a full assessment of pressure injuries.
In recent decades, researchers have shown great interest in machine learning techniques for extracting meaningful information from the large amounts of data collected each day. In the medical field especially, images play a significant role in the detection of several health issues. Medical image analysis therefore contributes substantially to the diagnosis process and is a natural domain for intelligent systems. Deep Learning (DL) has recently captured the interest of researchers, as it has proven efficient at detecting underlying features in data and has outperformed classical machine learning methods. The main objective of this dissertation is to demonstrate the efficiency of Deep Learning techniques in tackling, through medical imaging, one of the important health issues facing our society. Pressure injuries are a dermatology-related health issue associated with increased morbidity and health care costs. Managing pressure injuries appropriately is increasingly important for all professionals in wound care. Using 2D photographs and 3D meshes of these wounds, collected from collaborating hospitals, our mission is to create intelligent systems for a full non-intrusive assessment of these wounds. Five main tasks were achieved in this study: a literature review of wound imaging methods using machine learning techniques, classification and segmentation of the tissue types inside a pressure injury, segmentation of the wounds, design of an end-to-end system that measures all the necessary quantitative information from 3D meshes for an efficient assessment of PIs, and integration of the assessment imaging techniques into a web-based application.
Hyperspectral Data Analysis in R: The hsdar Package
Hyperspectral remote sensing is a promising tool for a variety of applications including ecology, geology, analytical chemistry, and medical research. This article presents the new hsdar package for the R statistical software, which performs a variety of analysis steps taken during a typical hyperspectral remote sensing approach. The package introduces a new class for efficiently storing large hyperspectral data sets, such as hyperspectral cubes, within R. The package includes several important hyperspectral analysis tools, such as continuum removal and normalized ratio indices, and integrates two widely used radiative transfer models. In addition, the package provides methods to directly use the functionality of the caret package for machine learning tasks. Two case studies demonstrate the package's range of functionality: first, plant leaf chlorophyll content is estimated, and second, cancer in the human larynx is detected from hyperspectral data.
Concurrent fNIRS and EEG for brain function investigation: A systematic, methodology-focused review
Electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS) stand as state-of-the-art techniques for non-invasive functional neuroimaging. On a unimodal basis, EEG has poor spatial resolution but high temporal resolution; in contrast, fNIRS offers better spatial resolution but is constrained by poor temporal resolution. One important merit shared by EEG and fNIRS is favorable portability: both modalities can be integrated into a compatible experimental setup, providing compelling ground for the development of a multimodal fNIRS-EEG integration analysis approach. Despite a growing number of studies using concurrent fNIRS-EEG designs reported in recent years, the methodological reference of past studies remains unclear. To fill this knowledge gap, this review critically summarizes the analysis methods currently used in concurrent fNIRS-EEG studies, providing an up-to-date overview and guideline for future concurrent fNIRS-EEG projects. A literature search was conducted using PubMed and Web of Science through 31 August 2021. After screening and qualification assessment, 92 studies involving concurrent fNIRS-EEG data recordings and analyses were included in the final methodological review. Three methodological categories of concurrent fNIRS-EEG data analyses were identified and explained in detail: EEG-informed fNIRS analyses, fNIRS-informed EEG analyses, and parallel fNIRS-EEG analyses. Finally, we highlight current challenges and potential directions for concurrent fNIRS-EEG data analyses in future research.
Footwear-integrated force sensing resistor sensors: A machine learning approach for categorizing lower limb disorders
Lower limb disorders are a substantial contributor to both disability and lower standards of living. The prevalent disorders affecting the lower limbs include osteoarthritis of the knee, hip, and ankle. The present study focuses on the use of footwear incorporating force-sensing resistor sensors to classify lower limb disorders affecting the knee, hip, and ankle joints. The research collected data from 117 participants who wore footwear integrated with force-sensing resistor sensors while walking on a predetermined 9-meter walkway. Extensive preprocessing and feature extraction techniques were applied to form a structured dataset, and several machine learning classifiers were trained and evaluated. According to the findings, the Random Forest model exhibited the highest performance on the balanced dataset with an accuracy of 96%, while the Decision Tree model achieved 91%; the Logistic Regression, Gaussian Naive Bayes, and Long Short-Term Memory models scored comparatively lower. K-fold cross-validation was also performed to evaluate the models' performance. The results indicate that integrating force-sensing resistor sensors into footwear, together with machine learning techniques, can accurately categorize lower limb disorders, offering valuable information for developing customized interventions and treatment plans.
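A minimal sketch of the classification step that the abstract above describes (a Random Forest evaluated with k-fold cross-validation) might look like the following. The data here are synthetic stand-ins, not the study's gait features, so the resulting accuracy will not match the reported 96%; the feature dimensions and class labels are assumptions made for illustration.

```python
# Hypothetical sketch (not the study's code): classifying lower-limb
# disorder categories from force-sensing-resistor (FSR) gait features
# with a Random Forest and k-fold cross-validation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in for extracted gait features (e.g. mean/peak pressure
# per FSR sensor, stance time); 117 samples to mirror the cohort size.
n_samples, n_features = 117, 12
X = rng.normal(size=(n_samples, n_features))
y = rng.integers(0, 3, size=n_samples)  # 0=knee, 1=hip, 2=ankle disorder

clf = RandomForestClassifier(n_estimators=200, random_state=0)

# 5-fold cross-validation; random synthetic data will not reproduce
# the paper's reported accuracy.
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.2f}")
```

In practice the feature matrix would come from the preprocessing and feature-extraction stage the abstract mentions, and the cross-validation folds would be stratified by participant to avoid leaking one subject's gait across folds.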
Vision Science and Technology at NASA: Results of a Workshop
A broad review is given of vision science and technology within NASA. The subject is defined, and its applications both within NASA and in the nation at large are noted. A survey of current NASA efforts is given, noting the strengths and weaknesses of the NASA program.