A sub-mW IoT-endnode for always-on visual monitoring and smart triggering
This work presents a fully-programmable Internet of Things (IoT) visual
sensing node that targets sub-mW power consumption in always-on monitoring
scenarios. The system features a spatial-contrast binary
pixel imager with focal-plane processing. The sensor, when working at its
lowest power mode ( at 10 fps), provides as output the number of
changed pixels. Based on this information, a dedicated camera interface,
implemented on a low-power FPGA, wakes up an ultra-low-power parallel
processing unit to extract context-aware visual information. We evaluate the
smart sensor on three always-on visual triggering application scenarios.
Triggering accuracy comparable to RGB image sensors is achieved at nominal
lighting conditions, while consuming an average power between and
, depending on context activity. The digital sub-system is extremely
flexible, thanks to a fully-programmable digital signal processing engine, but
still achieves 19x lower power consumption compared to MCU-based cameras with
significantly lower on-board computing capabilities. Comment: 11 pages, 9 figures, submitted to IEEE IoT Journal
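The smart-triggering scheme described above can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: the imager reports only the number of changed binary pixels per frame, and the camera interface wakes the processing unit when activity crosses a threshold. The threshold, resolution, and frame sequence are all assumed values.

```python
def should_wake(changed_pixels: int, total_pixels: int,
                threshold: float = 0.02) -> bool:
    """Wake the parallel processing unit when the fraction of changed
    binary pixels exceeds `threshold` (assumed value)."""
    return changed_pixels / total_pixels > threshold

# Changed-pixel counts over four frames (illustrative), for an assumed
# 128x64 binary pixel array.
frames = [10, 50, 3000, 20]
wake = [should_wake(c, 128 * 64) for c in frames]
```

Keeping the always-on path down to a single scalar comparison per frame is what lets the rest of the digital sub-system stay asleep at sub-mW budgets.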
Technology-Assisted Rehabilitation of Writing Skills in Parkinson’s Disease: Visual Cueing versus Intelligent Feedback
Recent research showed that visual cueing can have both beneficial and detrimental effects on handwriting of patients with Parkinson's disease (PD) and healthy controls depending on the circumstances. Hence, using other sensory modalities to deliver cueing or feedback may be a valuable alternative. Therefore, the current study compared the effects of short-term training with either continuous visual cues or intermittent intelligent verbal feedback. Ten PD patients and nine healthy controls were randomly assigned to one of these training modes. To assess transfer of learning, writing performance was assessed in the absence of cueing and feedback on both trained and untrained writing sequences. The feedback pen and a touch-sensitive writing tablet were used for testing. Both training types resulted in improved writing amplitudes for the trained and untrained sequences. In conclusion, these results suggest that the feedback pen is a valuable tool to implement writing training in a tailor-made fashion for people with PD. Future studies should include larger sample sizes and different subgroups of PD for long-term training with the feedback pen.
Compact recurrent neural networks for acoustic event detection on low-energy low-complexity platforms
Outdoor acoustic event detection is an exciting research field, but it is
challenged by the need for complex algorithms and deep learning techniques
that typically require substantial computational, memory, and energy resources. This
challenge discourages IoT implementation, where an efficient use of resources
is required. However, current embedded technologies and microcontrollers have
increased their capabilities without penalizing energy efficiency. This paper
addresses the application of sound event detection at the edge, by optimizing
deep learning techniques on resource-constrained embedded platforms for the
IoT. The contribution is two-fold: firstly, a two-stage student-teacher
approach is presented to make state-of-the-art neural networks for sound event
detection fit on current microcontrollers; secondly, we test our approach on an
ARM Cortex-M4, focusing particularly on issues related to 8-bit quantization.
Our embedded implementation can achieve 68% accuracy in recognition on
Urbansound8k, not far from state-of-the-art performance, with an inference time
of 125 ms for each second of the audio stream, and power consumption of 5.5 mW
in just 34.3 kB of RAM.
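A minimal sketch of the kind of post-training 8-bit quantization step the paper relies on to fit a network into a Cortex-M4's memory budget. The symmetric per-tensor scale below is an assumption for illustration; real deployments typically use per-layer or per-channel calibration.

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric post-training quantization to the INT8 range [-127, 127].
    Scale choice (max-abs) is an illustrative assumption."""
    scale = np.max(np.abs(w)) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximate float tensor from INT8 values and the scale."""
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.27, 0.01], dtype=np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)  # approximate reconstruction of w
```

Quantizing weights this way cuts storage 4x versus float32, which is what makes the reported 34.3 kB RAM footprint plausible for a recurrent model.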
MOCA: A Low-Power, Low-Cost Motion Capture System Based on Integrated Accelerometers
Human-computer interaction (HCI) and virtual reality applications pose the challenge of enabling real-time interfaces for natural interaction. Gesture recognition based on body-mounted accelerometers has been proposed as a viable solution for translating patterns of movement associated with user commands, thus substituting point-and-click methods or other cumbersome input devices. On the other hand, cost and power constraints make the implementation of a natural and efficient interface suitable for consumer applications a critical task. Even though several gesture recognition solutions exist, their use in the HCI context has been poorly characterized. For this reason, in this paper, we consider a low-cost, low-power wearable motion tracking system based on integrated accelerometers, called motion capture with accelerometers (MOCA), which we evaluated for navigation in virtual spaces. Recognition is based on a geometric algorithm that enables efficient and robust detection of rotational movements. Our objective is to demonstrate that such a low-cost, low-power implementation is suitable for HCI applications. To this purpose, we characterized the system from both quantitative and qualitative points of view. First, we performed static and dynamic assessments of movement recognition accuracy. Second, we evaluated the effectiveness of the user experience using a 3D game application as a test bed.
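A hedged sketch of the kind of geometric computation such accelerometer-based systems perform: recovering roll and pitch angles from the gravity component of a static 3-axis reading. Axis conventions and function names here are assumptions for illustration, not MOCA's actual algorithm.

```python
import math

def tilt_angles(ax: float, ay: float, az: float):
    """Return (roll, pitch) in degrees from a static 3-axis accelerometer
    reading, treating the measured vector as gravity."""
    roll = math.degrees(math.atan2(ay, az))
    pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    return roll, pitch

# Device lying flat: gravity falls entirely on the z axis.
roll, pitch = tilt_angles(0.0, 0.0, 1.0)
```

Because the computation is a pair of arctangents per sample, it runs comfortably on the low-power microcontrollers such wearables use.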
Adaptable and Robust EEG Bad Channel Detection Using Local Outlier Factor (LOF)
Electroencephalogram (EEG) data are typically affected by artifacts. The detection and removal of bad channels (i.e., with poor signal-to-noise ratio) is a crucial initial step. EEG data acquired from different populations require different cleaning strategies due to the inherent differences in the data quality, the artifacts' nature, and the employed experimental paradigm. To deal with such differences, we propose a robust EEG bad channel detection method based on the Local Outlier Factor (LOF) algorithm. Unlike most existing bad channel detection algorithms that look for the global distribution of channels, LOF identifies bad channels relative to the local cluster of channels, which makes it adaptable to any kind of EEG. To test the performance and versatility of the proposed algorithm, we validated it on EEG acquired from three populations (newborns, infants, and adults) and using two experimental paradigms (event-related and frequency-tagging). We found that LOF can be applied to all kinds of EEG data after calibrating its main hyperparameter: the LOF threshold. We benchmarked the performance of our approach with the existing state-of-the-art (SoA) bad channel detection methods. We found that LOF outperforms all of them by improving the F1 Score, our chosen performance metric, by about 40% for newborns and infants and 87.5% for adults.
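The approach described above can be sketched with scikit-learn's `LocalOutlierFactor`: each channel becomes one sample (its time course), and channels whose local density deviates from that of their neighbours are flagged. The synthetic data and `n_neighbors` setting are assumptions; the paper calibrates the LOF threshold per population rather than using the default.

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(0)
eeg = rng.standard_normal((8, 1000))  # 8 channels x 1000 time samples
eeg[3] *= 20.0                        # inject one high-amplitude (bad) channel

# LOF compares each channel's local density to that of its nearest
# neighbours; fit_predict returns -1 for outliers.
lof = LocalOutlierFactor(n_neighbors=4)
labels = lof.fit_predict(eeg)
bad_channels = np.flatnonzero(labels == -1)
```

Because the score is relative to a local cluster rather than a global distribution, the same procedure transfers across newborn, infant, and adult recordings once the threshold is tuned.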
Efficient Low-Frequency SSVEP Detection with Wearable EEG Using Normalized Canonical Correlation Analysis
Recent studies show that the integrity of core perceptual and cognitive functions may be tested in a short time with Steady-State Visual Evoked Potentials (SSVEP) with low stimulation frequencies, between 1 and 10 Hz. Wearable EEG systems provide unique opportunities to test these brain functions on diverse populations in out-of-the-lab conditions. However, they also pose significant challenges as the number of EEG channels is typically limited, and the recording conditions might induce high noise levels, particularly for low frequencies. Here we tested the performance of Normalized Canonical Correlation Analysis (NCCA), a frequency-normalized version of CCA, to quantify SSVEP from wearable EEG data with stimulation frequencies ranging from 1 to 10 Hz. We validated NCCA on data collected with an 8-channel wearable wireless EEG system based on BioWolf, a compact, ultra-light, ultra-low-power recording platform. The results show that NCCA correctly and rapidly detects SSVEP at the stimulation frequency within a few cycles of stimulation, even at the lowest frequency (4 s recordings are sufficient for a stimulation frequency of 1 Hz), outperforming a state-of-the-art normalized power spectral measure. Importantly, no preliminary artifact correction or channel selection was required. Potential applications of these results to research and clinical studies are discussed.
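The CCA backbone of such detectors can be sketched in plain NumPy: correlate multichannel EEG against sine/cosine references at each candidate frequency, then pick the frequency with the highest canonical correlation. The normalization shown (dividing by the mean score at neighbouring frequencies) reflects NCCA's general idea but is an assumed form, not the paper's exact formulation.

```python
import numpy as np

def canonical_corr(X: np.ndarray, Y: np.ndarray) -> float:
    """Largest canonical correlation between the column spaces of X and Y."""
    Qx, _ = np.linalg.qr(X - X.mean(0))
    Qy, _ = np.linalg.qr(Y - Y.mean(0))
    return float(np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0])

def ssvep_score(eeg: np.ndarray, fs: float, f: float, harmonics: int = 2) -> float:
    """CCA between EEG (samples x channels) and sin/cos references at
    frequency f and its harmonics."""
    t = np.arange(eeg.shape[0]) / fs
    refs = np.column_stack([fn(2 * np.pi * h * f * t)
                            for h in range(1, harmonics + 1)
                            for fn in (np.sin, np.cos)])
    return canonical_corr(eeg, refs)

def ncca_score(eeg: np.ndarray, fs: float, f: float, delta: float = 1.0) -> float:
    """Score at f normalized by the mean score at f +/- delta
    (assumed form of the frequency normalization)."""
    neigh = (ssvep_score(eeg, fs, f - delta) + ssvep_score(eeg, fs, f + delta)) / 2
    return ssvep_score(eeg, fs, f) / neigh

# Synthetic check: 4 s of 2-channel EEG carrying a 7 Hz SSVEP plus noise.
fs, f_stim = 250.0, 7.0
t = np.arange(int(4 * fs)) / fs
rng = np.random.default_rng(1)
eeg = np.column_stack([np.sin(2 * np.pi * f_stim * t)
                       + 0.5 * rng.standard_normal(t.size) for _ in range(2)])
scores = {f: ssvep_score(eeg, fs, f) for f in (5.0, 7.0, 9.0)}
detected = max(scores, key=scores.get)
```

Normalizing by neighbouring frequencies compensates for the 1/f shape of the EEG spectrum, which is what makes detection workable at stimulation frequencies as low as 1 Hz.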
A low-power colour-based skin detector for smart environments
We describe an embedded optical system that detects human skin under a wide range of illumination conditions. Our attention to such a system is justified by the many applications for which skin detection is needed, e.g. automatic people monitoring and tracking for security purposes, or hand gesture recognition for fast and natural human-machine interaction. The presented system consists of a low-power RGB sensor connected to an energy-efficient microcontroller. The RGB sensor acquires the RGB signal from a region in front of it over a wide dynamic range, converts it to the rg chromaticity space directly on chip, and delivers the processed data to the microcontroller. The latter classifies the input signal as skin or non-skin by testing its membership in a skin locus, i.e. a compact set representing the chromaticities of human skin tones acquired under several illumination conditions. The system architecture distributes the computational load of skin detection across both hardware and software, providing reliable skin detection with limited energy consumption. This makes the system suitable for use in smart environments, where energy efficiency is highly desired in order to keep the sensors always ready to receive, process, and transmit data without affecting performance.
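The rg-chromaticity skin test described above can be sketched as follows. The rectangular skin locus used here is a deliberately crude stand-in for illustration; the paper's locus is a compact region fitted over many illuminants.

```python
def rg_chromaticity(r: int, g: int, b: int):
    """Project an RGB triple onto the intensity-invariant rg plane."""
    s = r + g + b
    if s == 0:
        return 0.0, 0.0
    return r / s, g / s

def is_skin(r: int, g: int, b: int) -> bool:
    """Membership test against an assumed rectangular skin locus
    (bounds are illustrative, not the paper's fitted region)."""
    rc, gc = rg_chromaticity(r, g, b)
    return 0.35 <= rc <= 0.55 and 0.28 <= gc <= 0.36

skin_like = is_skin(200, 140, 110)  # warm skin-like tone
non_skin = is_skin(30, 90, 200)     # saturated blue
```

Dividing out the total intensity is what gives the test its robustness to illumination level, leaving only a cheap 2D membership check for the microcontroller.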
THE INFLUENCE OF COUNTRY OF ORIGIN, BRAND IMAGE, AND BRAND TRUST ON THE PERCEIVED QUALITY OF SOUTH KOREAN ETUDE HOUSE PRODUCTS
The aim of this research is to determine the influence of country of origin, brand image,
and brand trust on the perceived quality of Etude House of South Korea products.
The total sample comprises 100 respondents, all users of Etude House. The technique of
analysis used is Structural Equation Modeling (SEM) with AMOS 18. The results
show that country of origin has an effect on perceived product quality, brand
image has an effect on perceived product quality, and brand trust can
positively mediate country of origin and the perception of product quality.
Low-Complexity Acoustic Scene Classification in DCASE 2022 Challenge
This paper presents an analysis of the Low-Complexity Acoustic Scene Classification task in the DCASE 2022 Challenge. The task was a continuation of previous years' editions, but the low-complexity requirements were changed to the following: the maximum number of allowed parameters, including zero-valued ones, was 128 K, with parameters represented in the INT8 numerical format, and the maximum number of multiply-accumulate operations at inference time was 30 million. Although the dataset is the same as in previous years, the audio samples were shortened from 10 seconds to 1 second for this year's challenge. The provided baseline system is a convolutional neural network that employs post-training quantization of parameters, resulting in 46.5 K parameters and 29.23 million multiply-and-accumulate operations (MMACs). Its performance on the evaluation data is 44.2% accuracy and 1.532 log-loss. In comparison, the top system in the challenge obtained an accuracy of 59.6% and a log loss of 1.091, with 121 K parameters and 28 MMACs. The task received 48 submissions from 19 different teams, most of which outperformed the baseline system.
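The complexity limits stated above reduce to a simple feasibility check that entrants could run against their own models. The figures below come from the abstract itself; the interpretation of "128 K" as 128,000 is an assumption.

```python
# DCASE 2022 low-complexity limits (K taken as 1000, an assumption).
MAX_PARAMS = 128_000      # INT8 parameters, zero-valued ones included
MAX_MACS = 30_000_000     # multiply-accumulate operations per inference

def within_limits(n_params: int, n_macs: int) -> bool:
    """True when a model satisfies both challenge constraints."""
    return n_params <= MAX_PARAMS and n_macs <= MAX_MACS

# Figures reported in the abstract.
baseline_ok = within_limits(46_500, 29_230_000)   # baseline CNN
top_ok = within_limits(121_000, 28_000_000)       # top-ranked system
```

Both reported systems sit just under the MAC ceiling, which suggests the 30 MMAC budget, rather than the parameter count, was the binding constraint for this task.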