Ambient Intelligence in Healthcare: A State-of-the-Art
Advances in information technology have led to an innovative paradigm called Ambient Intelligence (AmI), in which a digital environment is aware of individuals' behaviors, needs, emotions, and gestures. Applications of AmI systems in healthcare environments have attracted many researchers. AmI is considered one of the recent technologies that supports hospitals, patients, and specialists in personal healthcare with the aid of artificial intelligence techniques and wireless sensor networks. Improvements in wearable devices, mobile devices, embedded software, and wireless technologies open the door to advanced applications of the AmI paradigm. Wireless sensor networks (WSN) and body area networks (BAN) collect the medical data on which these adaptive intelligent systems are built. The current study outlines the role of AmI in healthcare with respect to its relational and technological nature.
Simplified inverse filter tracked affective acoustic signals classification incorporating deep convolutional neural networks
Facial expressions, verbal and behavioral cues such as limb movements, and physiological features are vital channels of affective human interaction. Over the past decades, researchers have given machines the ability to recognize affective communication through these modalities. In addition to facial expressions, changes in the level, strength, weakness, and turbulence of sound also convey affect. Extracting affective feature parameters from acoustic signals has been widely applied in customer service, education, and the medical field. In this research, an improved AlexNet-based deep convolutional neural network (A-DCNN) is presented for acoustic signal recognition. First, the signals were preprocessed using simplified inverse filter tracking (SIFT) and the short-time Fourier transform (STFT); Mel-frequency cepstral coefficients (MFCC) and waveform-based segmentation were then used to create the input for the deep neural network (DNN), a preprocessing approach widely applied for most neural networks. Second, acoustic signals were acquired from the public Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS). Using these preprocessing tools, the basic features of the sound signals were calculated and extracted. The proposed DNN based on the improved AlexNet achieves 95.88% accuracy in classifying eight affective classes of acoustic signals. Compared with linear classifiers, such as decision table (DT) and Bayesian inference (BI), and with other deep neural networks, such as AlexNet+SVM and the recurrent convolutional neural network (R-CNN), the proposed method achieves high accuracy (A), sensitivity (S1), positive predictive value (PP), and F1-score (F1).
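The MFCC stage of the preprocessing pipeline described above (STFT, mel filterbank, log compression, discrete cosine transform) can be sketched in plain numpy. This is a minimal illustration only; the frame size, hop length, and filter counts below are common default values assumed for the example, not the parameters used in the paper.

```python
import numpy as np

def mfcc(signal, sr=16000, n_fft=512, hop=256, n_mels=26, n_mfcc=13):
    """MFCC features via STFT -> mel filterbank -> log -> DCT-II.
    Minimal numpy-only sketch; parameter values are illustrative."""
    # Frame the signal with a Hann window, take the power spectrum (STFT).
    window = np.hanning(n_fft)
    frames = [signal[i:i + n_fft] * window
              for i in range(0, len(signal) - n_fft + 1, hop)]
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2  # (n_frames, n_fft//2+1)

    # Triangular mel filterbank spanning 0 .. sr/2.
    def hz_to_mel(f): return 2595.0 * np.log10(1.0 + f / 700.0)
    def mel_to_hz(m): return 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        left, center, right = bins[m - 1], bins[m], bins[m + 1]
        for k in range(left, center):
            fbank[m - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):
            fbank[m - 1, k] = (right - k) / max(right - center, 1)

    # Log mel energies, then DCT-II to decorrelate into cepstral coefficients.
    mel_energy = np.log(power @ fbank.T + 1e-10)
    n = np.arange(n_mels)
    dct = np.cos(np.pi * np.outer(np.arange(n_mfcc), 2 * n + 1) / (2 * n_mels))
    return mel_energy @ dct.T  # (n_frames, n_mfcc)

# Example: a 1-second 440 Hz tone yields one 13-coefficient vector per frame.
sr = 16000
t = np.arange(sr) / sr
feats = mfcc(np.sin(2 * np.pi * 440.0 * t), sr=sr)
print(feats.shape)  # (61, 13)
```

In practice each frame's coefficient vector (or a 2-D stack of frames) becomes one input sample for the convolutional network; the SIFT glottal-excitation step from the paper is omitted here for brevity.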
Affective recognition and classification of acoustic signals can potentially be applied in industrial product design by measuring consumers' affective responses to products: relevant affective sound data can be collected to gauge a product's popularity and, in turn, to improve its design and increase market responsiveness.