9 research outputs found

    Automated Blood Cell Detection and Counting via Deep Learning for Microfluidic Point-of-Care Medical Devices

    Automated in-vitro cell detection and counting have been a key theme in automated and intelligent biological analysis such as biopsy, drug analysis and disease diagnosis. Along with the rapid development of microfluidics and lab-on-chip technologies, in-vitro live cell analysis has become one of the critical tasks for both the research and industry communities. However, it remains a great challenge to obtain and then predict precise information about live cells from numerous microscopic videos and images. In this paper, we investigated the in-vitro detection of white blood cells using deep neural networks, and discussed how state-of-the-art machine learning techniques could fulfil the needs of medical diagnosis. Our approach was based on Faster Region-based Convolutional Neural Networks (Faster R-CNNs), with transfer learning applied to adapt this technique to the microscopic detection of blood cells. Our experimental results demonstrated that automated microscopic imaging can analyse blood cells with much better accuracy and speed than conventional methods, implying a promising future for this technology in microfluidic point-of-care medical devices.
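
    A rough sketch of the transfer-learning setup this abstract describes, using torchvision's off-the-shelf Faster R-CNN with a COCO-pretrained backbone and a replaced detection head; the class count, image size and weights choice are illustrative assumptions rather than details from the paper.

```python
# Hedged sketch only: Faster R-CNN with transfer learning, via torchvision.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

def build_blood_cell_detector(num_classes=2):  # background + white blood cell (assumed)
    # Start from a detector pretrained on COCO, i.e. the transfer-learning step.
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    # Replace the box-classification head so it predicts the cell classes.
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    return model

model = build_blood_cell_detector()
model.eval()
frame = [torch.rand(3, 480, 640)]  # one dummy microscope frame, values in [0, 1]
with torch.no_grad():
    detections = model(frame)  # per-image dicts of boxes, labels and scores
print(len(detections[0]["boxes"]))
```

    Counting then reduces to thresholding the detection scores and tallying the surviving boxes per frame.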

    EEG-based Deep Emotional Diagnosis: A Comparative Study

    Emotion is an important part of people's daily life and is particularly relevant to mental health. Emotional diagnosis is closely related to the nervous system and can reflect people's mental condition in response to the surrounding environment or the development of various neurodegenerative diseases, so emotion recognition can support the medical diagnosis of mental health. In recent years, EEG-based emotion recognition has attracted the attention of many researchers alongside the continuous development of artificial intelligence and brain-computer interface technology. In this paper, we compared the performance of three deep learning techniques on EEG classification: DNN, CNN and CNN-LSTM. The DEAP dataset was used in our experiments. EEG signals were first transformed from the time domain to the frequency domain, and features were then extracted to classify emotions. Our results show that these deep learning techniques can achieve good accuracy on emotional diagnosis.
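
    The sketch below follows the two stages the abstract names, a time-to-frequency transform followed by feature extraction and classification; the band edges, layer sizes and valence labels are illustrative assumptions, while the 128 Hz rate, 32 channels and 60 s trials match the preprocessed DEAP release.

```python
# Hedged sketch: frequency-domain band-power features, then a small DNN
# (one of the three models the paper compares).
import numpy as np
from scipy.signal import welch
import torch
import torch.nn as nn

FS = 128  # DEAP's preprocessed sampling rate (Hz)
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def band_power_features(eeg):
    """eeg: (channels, samples) -> (channels * n_bands,) feature vector."""
    freqs, psd = welch(eeg, fs=FS, nperseg=FS * 2, axis=-1)
    feats = []
    for lo, hi in BANDS.values():
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(psd[:, mask].mean(axis=-1))  # mean power per band
    return np.concatenate(feats)

dnn = nn.Sequential(
    nn.Linear(32 * len(BANDS), 128), nn.ReLU(),
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, 2),  # e.g. high/low valence (assumed labels)
)

x = band_power_features(np.random.randn(32, FS * 60))  # one dummy 60 s trial
logits = dnn(torch.tensor(x, dtype=torch.float32))
```

    The CNN and CNN-LSTM variants would consume the per-channel spectral maps directly rather than a flattened feature vector.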

    In-house deep environmental sentience for smart homecare solutions toward ageing society.

    With an increasing number of elderly people needing home care around the clock, care workers cannot keep up with the demand to provide maximum support to those who require it. As the medical costs of home care increase and the quality of care suffers as a result of staff shortages, a solution is desperately needed to make the valuable care time of these workers more efficient. This paper proposes a system that makes use of currently available deep learning resources to produce a base system that could address many of the problems care homes and staff face today. Transfer learning was conducted on a deep convolutional neural network to recognize common household objects. The system showed promising results, with an accuracy, sensitivity and specificity of 90.6%, 0.90977 and 0.99668 respectively. Real-time applications were also considered, with the system achieving a maximum speed of 19.6 FPS on an MSI GTX 1060 GPU with 4 GB of VRAM allocated.
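
    A minimal sketch of the final-layer retraining this abstract describes; the backbone (ResNet-18), the object list and the hyperparameters are assumptions for illustration, since the paper's exact network is not specified here.

```python
# Hedged sketch: freeze a pretrained CNN, retrain only a new final layer
# for household-object recognition.
import torch
import torch.nn as nn
import torchvision

CLASSES = ["cup", "kettle", "remote", "keys", "phone"]  # hypothetical labels

model = torchvision.models.resnet18(weights="DEFAULT")  # ImageNet backbone
for p in model.parameters():
    p.requires_grad = False  # freeze everything except the new head
model.fc = nn.Linear(model.fc.in_features, len(CLASSES))  # trainable head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch.
images = torch.rand(8, 3, 224, 224)
labels = torch.randint(0, len(CLASSES), (8,))
optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
```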

    3D Printed Brain-Controlled Robot-Arm Prosthetic via Embedded Deep Learning From sEMG Sensors

    In this paper, we present our work on developing a robot-arm prosthetic via deep learning. We propose using transfer learning applied to the Google Inception model to retrain the final layer for surface electromyography (sEMG) classification. Data were collected using the Thalmic Labs Myo Armband and used to generate graph images comprising 8 subplots per image, each containing sEMG data captured from 40 data points per sensor, corresponding to the armband's array of 8 sEMG sensors. The captured data were then classified into four categories (Fist, Thumbs Up, Open Hand, Rest) using a deep learning model, Inception-v3, with transfer learning to train the model for accurate prediction of each gesture on real-time input of new data. The trained model was then deployed to an ARM-processor-based embedded system to enable the brain-controlled robot-arm prosthetic manufactured on our 3D printer. To test the functionality of the method, a robotic arm was produced using a 3D printer and controlled with off-the-shelf hardware. SSH communication protocols are employed to execute Python files hosted on an embedded Raspberry Pi with ARM processors, triggering the predicted gesture's movement on the robot arm.
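
    One distinctive step here is rendering each sEMG window as an 8-subplot graph image for Inception-v3 to classify. The sketch below shows a minimal version of that step, assuming matplotlib for rendering; the figure size and styling are illustrative guesses.

```python
# Hedged sketch: 8 Myo sEMG channels, 40 samples each, rendered as one
# 8-subplot image that the retrained classifier can consume.
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render off-screen, e.g. on the Raspberry Pi
import matplotlib.pyplot as plt

def semg_window_to_image(window, path):
    """window: (8, 40) array, one row per Myo sEMG sensor."""
    fig, axes = plt.subplots(8, 1, figsize=(3, 6), sharex=True)
    for ch, ax in enumerate(axes):
        ax.plot(window[ch])
        ax.set_axis_off()  # the classifier only needs the waveform shapes
    fig.savefig(path, dpi=100)
    plt.close(fig)

semg_window_to_image(np.random.randn(8, 40), "gesture_sample.png")
```

    The saved images can then be fed to the retrained Inception-v3 like any other image dataset.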

    A Deep Learning Based Wearable Healthcare Iot Device for AI-Enabled Hearing Assistance Automation

    With the recent booming of artificial intelligence (AI), particularly deep learning techniques, digital healthcare is one of the prevalent areas that could benefit from AI-enabled functionality. This research presents a novel AI-enabled Internet of Things (IoT) device, built on the ESP-8266 platform, capable of assisting those who suffer from hearing impairment or deafness to communicate with others in conversation. In the proposed solution, a server application leverages Google's online speech recognition service to convert received conversations into text, which is then sent to a micro-display attached to the user's glasses, enabling deaf users to follow and take part in conversation as normal with the general population. Furthermore, to raise alerts in traffic or other dangerous scenarios, an 'urban-emergency' classifier was developed using a deep learning model, Inception-v4, with transfer learning to detect and recognize alerting or alarming sounds, such as a car horn or a fire alarm, with text generated to alert the prospective user. The training of Inception-v4 was carried out on a consumer desktop PC and the model was then implemented in the AI-based IoT application. The empirical results indicate that the developed prototype system achieves an accuracy rate of 92% for sound recognition and classification with real-time performance.
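
    A loose sketch of the server-side speech-to-text step, assuming the common `SpeechRecognition` Python package as a wrapper around Google's online recognizer; the paper's exact service integration is not specified here, and the file path is a placeholder.

```python
# Hedged sketch: transcribe one recorded conversation chunk into text
# destined for the glasses-mounted micro-display.
import speech_recognition as sr

recognizer = sr.Recognizer()

def transcribe_chunk(wav_path):
    with sr.AudioFile(wav_path) as source:
        audio = recognizer.record(source)
    try:
        return recognizer.recognize_google(audio)  # online recognition
    except sr.UnknownValueError:
        return ""  # nothing intelligible in this chunk
    except sr.RequestError:
        return "[speech service unreachable]"

print(transcribe_chunk("conversation_chunk.wav"))  # placeholder file name
```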

    Shallow Unorganized Neural Networks Using Smart Neuron Model for Visual Perception

    The recent success of Deep Neural Networks (DNNs) has revealed the significant capability of neural computing in many challenging applications. Although DNNs are derived from emulating biological neurons, there still exist doubts over whether or not DNNs are the final and best model for emulating the mechanisms of human intelligence. In particular, there are two discrepancies between computational DNN models and the observed behaviour of biological neurons. First, human neurons are interconnected randomly, while DNNs need carefully designed architectures to work properly. Second, human neurons usually have a long spiking latency (∼100 ms), which implies that not many layers can be involved in making a decision, while DNNs could have hundreds of layers to guarantee high accuracy. In this paper, we propose a new computational model, shallow unorganized neural networks (SUNNs), in contrast to ANNs/DNNs. The proposed SUNNs differ from standard ANNs and DNNs in three fundamental aspects: 1) SUNNs are based on an adaptive neuron cell model, Smart Neurons, that allows each artificial neuron to respond adaptively to its inputs rather than carrying out a fixed weighted-sum operation like the classic neuron model in ANNs/DNNs; 2) SUNNs can cope with computational tasks using very shallow architectures; 3) SUNNs have a natural topology with random interconnections, as the human brain does, and as proposed in Turing's B-type unorganized machines. We implemented the proposed SUNN architecture and tested it on a number of unsupervised early-stage visual perception tasks. Surprisingly, such simple shallow architectures achieved very good results in our experiments. The success of our new computational model makes it the first workable example of Turing's B-type unorganized machine that can achieve comparable or better performance than state-of-the-art algorithms.
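
    The smart-neuron mechanism itself is not detailed in this abstract, so the sketch below illustrates only the stated structural ideas: a very shallow network with fixed random interconnections, in the spirit of Turing's B-type unorganized machines. The input-dependent gating standing in for the plain weighted sum is a placeholder assumption, not the paper's actual cell model.

```python
# Hedged structural sketch: shallow network, fixed random wiring.
import torch
import torch.nn as nn

class RandomlyWiredLayer(nn.Module):
    def __init__(self, n_in, n_out, wiring_density=0.1):
        super().__init__()
        # Fixed random wiring: each neuron sees a sparse random input subset.
        self.register_buffer("mask", (torch.rand(n_out, n_in) < wiring_density).float())
        self.weight = nn.Parameter(torch.randn(n_out, n_in) * 0.01)
        self.gate = nn.Parameter(torch.randn(n_out, n_in) * 0.01)

    def forward(self, x):
        # Input-dependent gain stands in for the adaptive "smart neuron"
        # response (a placeholder, not the paper's mechanism).
        pre = x @ (self.weight * self.mask).t()
        gain = torch.sigmoid(x @ (self.gate * self.mask).t())
        return torch.tanh(pre * gain)

# A very shallow network, per the SUNN premise that few layers suffice.
net = nn.Sequential(RandomlyWiredLayer(784, 64), nn.Linear(64, 10))
out = net(torch.rand(2, 784))
```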

    Social Behavioral Phenotyping of Drosophila with a 2D-3D Hybrid CNN Framework

    Behavioural phenotyping of Drosophila is an important means in biological and medical research to identify genetic, pathological or psychological impacts on animal behaviour. Automated behavioural phenotyping from videos has long been a desired capability that can obviate the tedious manual work of behavioural analysis. In this paper, we introduced deep learning to this challenging topic and proposed a new 2D+3D hybrid CNN framework for Drosophila's social behavioural phenotyping. In the proposed multitask learning framework, action detection and localization of Drosophila are carried out jointly with action classification: a given video is divided into clips of fixed length, each clip is fed into the system, and a 2D CNN extracts features at the frame level. Features extracted from adjacent frames are then concatenated and fed into a 3D CNN with a spatial region-proposal layer for classification. In this 2D+3D hybrid framework, Drosophila detection at the frame level enables action analysis over varying durations instead of a fixed period. We tested our framework with different base layers and classification architectures, and validated the proposed 3D-CNN-based social behavioural phenotyping framework under various models, detectors and classifiers.
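
    A structural sketch of the 2D+3D hybrid described above: a 2D CNN extracts per-frame features, adjacent frames' features are stacked along a time axis, and a 3D CNN classifies the clip. All layer sizes are illustrative assumptions, and the spatial region-proposal stage for localization is omitted for brevity.

```python
# Hedged sketch: per-frame 2-D features feeding a 3-D clip classifier.
import torch
import torch.nn as nn

class Hybrid2D3D(nn.Module):
    def __init__(self, n_actions=4):
        super().__init__()
        self.frame_cnn = nn.Sequential(            # per-frame 2-D features
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.clip_cnn = nn.Sequential(             # spatio-temporal 3-D stage
            nn.Conv3d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(64, n_actions)

    def forward(self, clip):                       # clip: (B, T, 3, H, W)
        b, t = clip.shape[:2]
        feats = self.frame_cnn(clip.flatten(0, 1)) # (B*T, C, H', W')
        feats = feats.reshape(b, t, *feats.shape[1:]).permute(0, 2, 1, 3, 4)
        return self.classifier(self.clip_cnn(feats).flatten(1))

logits = Hybrid2D3D()(torch.rand(2, 8, 3, 64, 64))  # 2 dummy clips of 8 frames
```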