
    Deep Learning Based Abnormal Gait Classification System Study with Heterogeneous Sensor Network

    Gait is one of the important biological characteristics of the human body. Abnormal gait is mostly related to the lesion site and has been demonstrated to play a guiding role in clinical research such as medical diagnosis and disease prevention. To promote research on automatic gait pattern recognition, this paper introduces the research status of abnormal gait recognition and systematically analyzes common gait recognition technologies. On this basis, two gait information extraction methods, sensor-based and vision-based, are studied, covering wearable system design and deep neural network-based algorithm design. In the sensor-based study, we proposed a lower limb data acquisition system and designed an experiment to collect acceleration signals and sEMG signals under normal and pathological gaits. Specifically, wearable hardware based on the MSP430 and upper computer software based on LabVIEW are designed. The hardware system consists of an sEMG foot ring, a high-precision IMU, and a pressure-sensitive intelligent insole. Walking data from 15 healthy persons and 15 hemiplegic patients were collected. Gait classification was carried out based on sEMG, and the average accuracy reached 92.8% for a CNN. For the IMU signals, five kinds of abnormal gait were trained on three models: BPNN, LSTM, and CNN. The experimental results show that the system combined with the neural networks can classify different pathological gaits well, and the average accuracy on the six-class task reached 93%. In the vision-based research, using human keypoint detection technology, we obtain the precise locations of the keypoints through the fusion of heat maps and offsets, and thus extract the spatio-temporal information of the keypoints. However, the results show that even the state of the art is not good enough to replace the IMU in gait analysis and classification.
The good news is that the rhythm wave can be observed within 2 m, which proves that the extracted spatio-temporal keypoint information is highly correlated with the acceleration information collected by the IMU, paving the way for vision-based abnormal gait classification algorithms.
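As an illustration of how windowed tri-axial acceleration signals could be fed to a small 1-D CNN for a six-class gait task like the one above, the following NumPy sketch implements a forward pass. The architecture, layer sizes, and random weights are purely hypothetical and are not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, w, b):
    """Valid 1-D convolution: x is (C_in, T), w is (C_out, C_in, K), b is (C_out,)."""
    c_out, c_in, k = w.shape
    t_out = x.shape[1] - k + 1
    out = np.empty((c_out, t_out))
    for i in range(t_out):
        out[:, i] = np.tensordot(w, x[:, i:i + k], axes=([1, 2], [0, 1])) + b
    return out

def relu(x):
    return np.maximum(x, 0.0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def gait_cnn_forward(window, params):
    """window: (3, T) tri-axial acceleration; returns six class probabilities."""
    h = relu(conv1d(window, params["w1"], params["b1"]))
    h = h.mean(axis=1)                      # global average pooling over time
    logits = params["w2"] @ h + params["b2"]
    return softmax(logits)

# Hypothetical sizes: 3 accelerometer axes, 128-sample window, 6 gait classes.
params = {
    "w1": rng.normal(0, 0.1, (8, 3, 9)),
    "b1": np.zeros(8),
    "w2": rng.normal(0, 0.1, (6, 8)),
    "b2": np.zeros(6),
}
window = rng.normal(size=(3, 128))
probs = gait_cnn_forward(window, params)
```

A trained model would learn `w1`/`w2` from labeled gait windows; this sketch only shows the data flow from a raw IMU window to class probabilities.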

    Avian muscle development and growth mechanisms: association with muscle myopathies and meat quality Volume II

    Given the significant interest in Volume I, it was decided to launch Volume II of the Research Topic "Avian Muscle Development and Growth Mechanisms: Association With Muscle Myopathies and Meat Quality." The broiler industry is still facing an unsustainable occurrence of growth-related muscular abnormalities that mainly affect fast-growing genotypes selected for high growth rate and breast yield. Research interest in these issues continues, as proven by the temporal trend of published papers during the past decade (Figure 1). Even though meat affected by white striping, wooden breast, and spaghetti meat abnormalities is not harmful for human nutrition, these conditions impair quality traits of both raw and processed meat products, causing severe economic losses in the poultry industry worldwide (Petracci et al., 2019; Velleman, 2019). Since the Research Topic "Avian Muscle Development and Growth Mechanisms: Association With Muscle Myopathies and Meat Quality" is quite diverse, contributions in this second volume reflect the broad scope of areas of investigation related to muscle growth and development, with 11 original research papers and one mini-review from prominent scientists in the sector. We hope that this collection will instigate novel questions in the minds of our readers and will be helpful in facilitating the development of the field. Massimiliano Petracci; Sandra G. Velleman

    Smart System for Prediction of Accurate Surface Electromyography Signals Using an Artificial Neural Network

    Bioelectric signals are used to measure electrical potential, and there are different types of such signals. Electromyography (EMG) is a type of bioelectric signal used to monitor and record the electrical activity of the muscles. The current work aims to model and reproduce surface EMG (sEMG) signals using an artificial neural network. Such research can aid studies into life enhancement for those suffering from damage or disease affecting their nervous system. The sEMG signal is collected from the surface above the biceps muscle through dynamic (concentric and eccentric) contraction with various loads. In this paper, we use time-domain features to analyze the relationship between the amplitude of sEMG signals and the load. We extract features (e.g., mean absolute value, root mean square, variance, and standard deviation) from the collected sEMG signals to estimate the biceps muscle force for the various loads. Further, we use the R-squared value of a linear fit to quantify the correlation between the sEMG amplitude and the muscle loads. The best-performing ANN model, with 60 hidden neurons, gave mean square errors of 1.145, 1.3659, and 1.4238 for the three loads used (3 kg, 5 kg, and 7 kg), respectively. The R-squared values observed are 0.9993, 0.99999, and 0.99999 for predicting (reproducing) smooth sEMG signals.
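The time-domain features named above (mean absolute value, root mean square, variance, standard deviation) and the R-squared measure are standard definitions and can be sketched in a few lines of NumPy. The synthetic input window below is illustrative only; real use would take windowed sEMG samples per load condition:

```python
import numpy as np

def semg_features(signal):
    """Time-domain amplitude features commonly extracted from sEMG windows."""
    x = np.asarray(signal, dtype=float)
    return {
        "mav": float(np.mean(np.abs(x))),        # mean absolute value
        "rms": float(np.sqrt(np.mean(x ** 2))),  # root mean square
        "var": float(np.var(x, ddof=1)),         # sample variance
        "std": float(np.std(x, ddof=1)),         # sample standard deviation
    }

def r_squared(y, y_fit):
    """Coefficient of determination between observations and a fitted curve."""
    y, y_fit = np.asarray(y, float), np.asarray(y_fit, float)
    ss_res = np.sum((y - y_fit) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

# Toy example on synthetic noise standing in for one sEMG window.
rng = np.random.default_rng(1)
window = rng.normal(0.0, 0.5, 1000)
feats = semg_features(window)
```

Note that RMS is never smaller than MAV for the same window (power-mean inequality), which is a quick sanity check when validating a feature pipeline.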

    GuavaNet: A deep neural network architecture for automatic sensory evaluation to predict degree of acceptability for Guava by a consumer

    This thesis is divided into two parts. Part I: Analysis of Fruits, Vegetables, Cheese and Fish based on Image Processing using Computer Vision and Deep Learning: A Review. It consists of a comprehensive review of image processing, computer vision, and deep learning techniques applied to the analysis of fruits, vegetables, cheese, and fish. This part also serves as a literature review for Part II. Part II: GuavaNet: A deep neural network architecture for automatic sensory evaluation to predict degree of acceptability for Guava by a consumer. This part introduces an end-to-end deep neural network architecture that can predict the degree of acceptability of a guava by a consumer based on sensory evaluation.

    Out-of-plane action unit recognition using recurrent neural networks

    A dissertation submitted to the Faculty of Science, University of the Witwatersrand, Johannesburg, in fulfilment of the requirements for the degree of Master of Science. Johannesburg, 2015. The face is a fundamental tool to assist in interpersonal communication and interaction between people. Humans use facial expressions to consciously or subconsciously express their emotional states, such as anger or surprise. As humans, we are able to easily identify changes in facial expressions even in complicated scenarios, but the task of facial expression recognition and analysis is complex and challenging for a computer. The automatic analysis of facial expressions by computers has applications in several scientific subjects such as psychology, neurology, pain assessment, lie detection, intelligent environments, psychiatry, and emotion and paralinguistic communication. We look at methods of facial expression recognition, and in particular, the recognition of the Facial Action Coding System's (FACS) Action Units (AUs). FACS encodes movements of individual facial muscles from slight, instantaneous changes in facial appearance; contractions of specific facial muscles are related to a set of units called AUs. We make use of Speeded Up Robust Features (SURF) to extract keypoints from the face and use the SURF descriptors to create feature vectors. SURF provides smaller feature vectors than other commonly used feature extraction techniques. SURF is comparable to or outperforms other methods with respect to distinctiveness, robustness, and repeatability, and is also much faster than other feature detectors and descriptors. The SURF descriptor is scale- and rotation-invariant and is unaffected by small viewpoint or illumination changes. We use the SURF feature vectors to train a recurrent neural network (RNN) to recognize AUs from the Cohn-Kanade database.
An RNN is able to handle temporal data received from image sequences in which an AU or combination of AUs is shown to develop from a neutral face. We recognize AUs because they provide a fine-grained means of measurement that is independent of age, ethnicity, gender, and differences in expression appearance. In addition to recognizing FACS AUs from the Cohn-Kanade database, we use our trained RNNs to recognize the development of pain in human subjects. We make use of the UNBC-McMaster pain database, which contains image sequences of people experiencing pain. In some cases, the pain results in the face moving out-of-plane or undergoing some degree of in-plane movement. The temporal processing ability of RNNs can assist in classifying AUs where the face is occluded or not facing frontally for some part of the sequence. Results are promising when tested on the Cohn-Kanade database. We see higher overall recognition rates for upper face AUs than lower face AUs. Since keypoints are globally extracted from the face in our system, local feature extraction could provide improved recognition results in future work. We also see satisfactory recognition results when tested on samples with out-of-plane head movement, demonstrating the temporal processing ability of RNNs.
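The temporal processing described above can be illustrated with a minimal Elman-style RNN forward pass over a sequence of per-frame descriptor vectors. This is a generic sketch, not the dissertation's network: the descriptor dimension stands in for a SURF-based feature vector, and all sizes and weights are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)

def rnn_forward(seq, params):
    """Elman RNN over a (T, D) sequence of per-frame descriptors.

    Returns per-frame AU scores in (0, 1) after a sigmoid output layer,
    one score per action unit.
    """
    h = np.zeros(params["w_hh"].shape[0])  # hidden state carried across frames
    scores = []
    for x in seq:
        h = np.tanh(params["w_xh"] @ x + params["w_hh"] @ h + params["b_h"])
        y = params["w_hy"] @ h + params["b_y"]
        scores.append(1.0 / (1.0 + np.exp(-y)))  # sigmoid per AU
    return np.stack(scores)

# Hypothetical sizes: 64-dim frame descriptors, 16 hidden units, 5 AUs.
D, H, K = 64, 16, 5
params = {
    "w_xh": rng.normal(0, 0.1, (H, D)),
    "w_hh": rng.normal(0, 0.1, (H, H)),
    "b_h": np.zeros(H),
    "w_hy": rng.normal(0, 0.1, (K, H)),
    "b_y": np.zeros(K),
}
seq = rng.normal(size=(30, D))   # a 30-frame sequence from neutral to apex
scores = rnn_forward(seq, params)
```

Because the hidden state `h` persists across frames, scores at later frames depend on earlier ones, which is what lets an RNN bridge frames where the face is occluded or turned away.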

    TOWARD INTELLIGENT WELDING BY BUILDING ITS DIGITAL TWIN

    To meet increasing requirements for individualized, efficient, and high-quality production, traditional manufacturing processes are evolving toward smart manufacturing with support from information technology advancements including cyber-physical systems (CPS), the Internet of Things (IoT), big industrial data, and artificial intelligence (AI). A prerequisite for integrating these advanced information technologies is to digitalize manufacturing processes so that they can be analyzed, controlled, and interfaced with other digitalized components. The digital twin is developed as a general framework for doing so by building digital replicas of physical entities. This work takes welding manufacturing as the case study to accelerate its transition to intelligent welding by building its digital twin, and contributes to digital twins in two aspects: (1) increasing information analysis and reasoning ability by integrating deep learning; (2) enhancing human users' operative ability over physical welding manufacturing via digital twins by integrating human-robot interaction (HRI). Firstly, a digital twin of pulsed gas tungsten arc welding (GTAW-P) is developed by integrating deep learning to offer strong feature extraction and analysis ability. In this system, direct information, including weld pool images, arc images, welding current, and arc voltage, is collected by cameras and arc sensors. Indirect information determining the welding quality, i.e., weld joint top-side bead width (TSBW) and back-side bead width (BSBW), is computed by a traditional image processing method and a deep convolutional neural network (CNN), respectively. Based on that, the weld joint geometric size is controlled to meet the quality requirement in various welding conditions.
    Meanwhile, this digital twin is visualized to offer a graphical user interface (GUI) that gives human users effective and intuitive perception of the physical welding processes. Secondly, to enhance human operative ability over the physical welding processes via digital twins, HRI is integrated, taking virtual reality (VR) as an interface that transmits information bidirectionally, i.e., transmitting human commands to the welding robots and visualizing the digital twin to human users. Six welders, skilled and unskilled, tested this system by completing the same welding job, demonstrating different operation patterns and resulting welding qualities. To differentiate their skill levels (skilled or unskilled) from their demonstrated operations, a data-driven approach, FFT-PCA-SVM, combining the fast Fourier transform (FFT), principal component analysis (PCA), and a support vector machine (SVM), is developed and achieves 94.44% classification accuracy. The robot can also work as an assistant that helps human welders complete welding tasks by recognizing and executing the intended welding operations. This is done by a human intention recognition algorithm based on a hidden Markov model (HMM), and the welding experiments show that the developed robot-assisted welding helps improve welding quality. To further exploit the advantages of robots, i.e., movement accuracy and stability, the robot's role is upgraded from assistant to collaborator, completing a subtask independently, i.e., torch weaving and automatic seam tracking in weaving GTAW. The other subtask, moving the welding torch along the weld seam, is completed by the human user, who can adjust the travel speed to control the heat input and ensure good welding quality. In this way, the advantages of humans (intelligence) and robots (accuracy and stability) are combined under this human-robot collaboration framework.
    The developed digital twin for welding manufacturing helps promote next-generation intelligent welding and can be readily applied, with small modifications, to other similar manufacturing processes, including painting, spraying, and additive manufacturing.
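The FFT-PCA-SVM skill classifier described above can be sketched as a scikit-learn pipeline on synthetic stand-in signals. The data, dimensions, and pipeline parameters here are hypothetical illustrations of the three-stage idea, not the thesis's actual features or settings; the toy classes are separated by their dominant motion frequency:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import FunctionTransformer
from sklearn.svm import SVC

rng = np.random.default_rng(3)

def fft_magnitude(X):
    """Replace each raw signal window with its FFT magnitude spectrum."""
    return np.abs(np.fft.rfft(X, axis=1))

# Synthetic stand-in for welder torch-motion traces: the "skilled" class is
# dominated by a low frequency, the "unskilled" class by a higher one.
n, t = 80, 256
time = np.arange(t) / t
skilled = np.sin(2 * np.pi * 3 * time) + 0.3 * rng.normal(size=(n, t))
unskilled = np.sin(2 * np.pi * 11 * time) + 0.3 * rng.normal(size=(n, t))
X = np.vstack([skilled, unskilled])
y = np.array([0] * n + [1] * n)

clf = make_pipeline(
    FunctionTransformer(fft_magnitude),  # FFT: raw windows -> spectra
    PCA(n_components=10),                # PCA: spectra -> compact components
    SVC(kernel="rbf"),                   # SVM: components -> skill label
)
clf.fit(X, y)
acc = clf.score(X, y)
```

The pipeline ordering mirrors the method's name: the spectrum makes the dominant-frequency difference explicit, PCA compresses it, and the SVM draws the decision boundary. A real evaluation would of course score on held-out data rather than the training set.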

    Upper extremity electromyography signal feature extraction and classification

    Get PDF

    Advances in Sensors, Big Data and Machine Learning in Intelligent Animal Farming

    Animal production (e.g., milk, meat, and eggs) provides valuable protein for humans and animals. However, animal production faces several challenges worldwide, such as environmental impacts and animal welfare/health concerns. In animal farming operations, accurate and efficient monitoring of animal information and behavior can help analyze the health and welfare status of animals and identify sick or abnormal individuals at an early stage, reducing economic losses and protecting animal welfare. In recent years, there has been growing interest in animal welfare. At present, sensors, big data, machine learning, and artificial intelligence are used to improve management efficiency, reduce production costs, and enhance animal welfare. Although these technologies still have challenges and limitations, their application and exploration in animal farms will greatly promote the intelligent management of farms. Therefore, this Special Issue collects original papers with novel contributions based on technologies such as sensors, big data, machine learning, and artificial intelligence to study animal behavior monitoring and recognition, environmental monitoring, health evaluation, etc., to promote intelligent and accurate animal farm management.

    A survey of the application of soft computing to investment and financial trading
