22 research outputs found

    A Dense Network Model for Outlier Prediction Using Learning Approaches

    There are various sub-categories of outlier prediction, and researchers have paid less attention to related domains such as outliers in audio recognition, video recognition, and music recognition. This research, however, is specific to medical data analysis: it concentrates on predicting outliers from a medical database. Feature mapping and representation are achieved with a stacked LSTM-based CNN, and the extracted features are fed to a Linear Support Vector Machine (SVM) for classification. The analysis shows a strong correlation between features related to an individual's emotions, which can be analyzed in both a static and a dynamic manner; the two learning approaches are combined so that each offsets the drawbacks of the other. The statistical analysis is done in a MATLAB 2016a environment, where metrics such as ROC, MCC, AUC, correlation coefficient, and prediction accuracy are evaluated and compared with existing approaches such as standard CNN, standard SVM, logistic regression, and multi-layer perceptrons. The proposed learning model shows superior outcomes, and particular care is taken to select an emotion recognition dataset connected with all the sub-domains.
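
    The linear-SVM classification stage described above can be sketched in numpy alone, using Pegasos-style subgradient descent on the hinge loss. This is an illustrative stand-in, not the paper's solver: the stacked LSTM-CNN feature extractor is assumed to have already produced the feature matrix, and the function name and hyperparameters below are assumptions.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Train a linear SVM (hinge loss + L2 regularisation) with
    Pegasos-style subgradient descent. Labels y must be in {-1, +1}."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)            # decaying step size
            margin = y[i] * (X[i] @ w + b)
            if margin < 1:                   # inside margin: hinge gradient
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
                b += eta * y[i]
            else:                            # only the regulariser acts
                w = (1 - eta * lam) * w
    return w, b

def predict(w, b, X):
    """Sign of the decision function, mapped to {-1, +1}."""
    return np.where(X @ w + b >= 0, 1, -1)
```

On well-separated feature clusters this converges quickly; a real pipeline would tune `lam` on a validation split.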

    FPGA Implementation of Hand-written Number Recognition Based on CNN

    Convolutional Neural Networks (CNNs) are the state-of-the-art in computer vision for different purposes such as image and video classification, recommender systems and natural language processing. The connectivity pattern between CNN neurons is inspired by the structure of the animal visual cortex. To perform the processing, they are realized with multiple parallel 2-dimensional FIR filters that convolve the input signal with the learned feature maps. For this reason, a CNN implementation requires highly parallel computations that cannot be achieved using traditional general-purpose processors, which is why CNNs benefit from a very significant speed-up when mapped and run on Field Programmable Gate Arrays (FPGAs). This is because FPGAs offer the capability to design fully customizable hardware architectures, providing high flexibility and the availability of hundreds to thousands of on-chip Digital Signal Processing (DSP) blocks. This paper presents an FPGA implementation of a hand-written number recognition system based on a CNN. The system has been characterized in terms of classification accuracy, area, speed, and power consumption. The neural network was implemented on a Xilinx XC7A100T FPGA, and it uses 29.69% of slice LUTs, 4.42% of slice registers and 52.50% of block RAMs. We designed the system using a 9-bit representation that avoids the use of DSP blocks; for this reason, multipliers are implemented using LUTs. The proposed architecture can be easily scaled to different FPGA devices thanks to its regularity. The CNN reaches a classification accuracy of 90%.
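
    The 9-bit representation that lets the design avoid DSP blocks can be illustrated in software with fixed-point arithmetic. The Q-format split below (1 sign bit, 6 fractional bits) is an assumption for illustration; the paper does not state its exact scaling.

```python
import numpy as np

# 9-bit signed fixed point: 1 sign bit, FRAC fractional bits (assumed split).
BITS, FRAC = 9, 6
QMIN, QMAX = -(1 << (BITS - 1)), (1 << (BITS - 1)) - 1   # -256 .. 255

def quantize(x):
    """Round a float array to 9-bit fixed point, saturating at the range ends."""
    q = np.clip(np.round(np.asarray(x) * (1 << FRAC)), QMIN, QMAX)
    return q.astype(np.int32)

def dequantize(q):
    """Map 9-bit integers back to floats."""
    return q / float(1 << FRAC)

def fixed_point_mac(weights, activations):
    """Integer multiply-accumulate, as the LUT-based multipliers would do it.
    The product of two Q-format numbers carries 2*FRAC fractional bits."""
    acc = np.sum(quantize(weights).astype(np.int64) *
                 quantize(activations).astype(np.int64))
    return acc / float(1 << (2 * FRAC))
```

Values exactly representable in the format (multiples of 2^-6) round-trip with no error; everything else incurs at most half an LSB of rounding per operand.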

    Wavelet Transform and Convolutional Neural Network Based Techniques in Combating Sudden Cardiac Death

    Sudden cardiac death (SCD) is a global threat that demands our attention and research. Statistics show that 50% of cardiac deaths are sudden cardiac deaths. Therefore, early cardiac arrhythmia detection may lead to timely and proper treatment, saving lives. We propose a less complex, fast, and more efficient algorithm that quickly and accurately detects heart abnormalities. First, we carefully examined 23 ECG signals of patients who died from SCD to detect their arrhythmias. Then, we trained a deep learning model to auto-detect and distinguish the most lethal arrhythmias in SCD, Ventricular Tachycardia (VT) and Ventricular Fibrillation (VF), from Normal Sinus Rhythm (NSR). Our work combines two techniques: the Wavelet Transform (WT) and a pre-trained Convolutional Neural Network (CNN). The WT converts an ECG signal into a scalogram, and the CNN performs feature extraction and arrhythmia classification. When evaluated on the MIT-BIH Normal Sinus Rhythm, MIT-BIH Malignant Ventricular Ectopy, and Creighton University Ventricular Tachyarrhythmia databases, the proposed methodology obtained an accuracy of 98.7% and an F-score of 0.9867, while remaining computationally inexpensive and simple to execute.
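
    The scalogram step can be sketched with a plain numpy continuous wavelet transform. The real-valued Morlet mother wavelet and the scale grid below are illustrative assumptions; the paper's exact wavelet family and scales are not reproduced here.

```python
import numpy as np

def morlet(t, scale, w0=6.0):
    """Real Morlet mother wavelet sampled at times t for a given scale."""
    x = t / scale
    return np.exp(-0.5 * x ** 2) * np.cos(w0 * x) / np.sqrt(scale)

def scalogram(signal, scales, fs=250.0):
    """CWT magnitude: one row per scale, each row being the signal
    convolved with the correspondingly dilated wavelet. The resulting
    2-D array is the image that would be fed to the CNN."""
    n = len(signal)
    t = (np.arange(n) - n // 2) / fs
    rows = []
    for s in scales:
        kernel = morlet(t, s)
        rows.append(np.abs(np.convolve(signal, kernel, mode="same")))
    return np.array(rows)   # shape: (len(scales), n)
```

Small scales respond to fast transients (e.g. VF-like oscillations), large scales to slow morphology, which is what makes the scalogram a useful CNN input.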

    Revolutionizing Healthcare Image Analysis in Pandemic-Based Fog-Cloud Computing Architectures

    The emergence of pandemics has significantly emphasized the need for effective solutions in healthcare data analysis. One particular challenge in this domain is the manual examination of medical images, such as X-rays and CT scans. This process is time-consuming and involves the logistical complexities of transferring these images to centralized cloud computing servers. Additionally, the speed and accuracy of image analysis are vital for efficient healthcare image management. This research paper introduces an innovative healthcare architecture that tackles the challenges of analysis efficiency and accuracy by harnessing the capabilities of Artificial Intelligence (AI). Specifically, the proposed architecture utilizes fog computing and presents a modified Convolutional Neural Network (CNN) designed specifically for image analysis. Different architectures of CNN layers are thoroughly explored and evaluated to optimize overall performance. To demonstrate the effectiveness of the proposed approach, a dataset of X-ray images is utilized for analysis and evaluation. Comparative assessments are conducted against recent models such as VGG16, VGG19, MobileNet, and related research papers. Notably, the proposed approach achieves an accuracy rate of 99.88% in classifying normal cases, accompanied by a validation rate of 96.5%, precision and recall rates of 100%, and an F1 score of 100%. These results highlight the potential of fog computing and modified CNNs in healthcare image analysis and diagnosis, not only during pandemics but also in the future. By leveraging these technologies, healthcare professionals can enhance the efficiency and accuracy of medical image analysis, leading to improved patient care and outcomes.

    Comparison of multi-distance signal level difference Hjorth descriptor and its variations for lung sound classifications

    Biological signals have multi-scale and complexity properties. Many studies have used signal complexity calculation methods and multi-scale analysis to analyze biological signals such as lung sounds. Signal complexity methods used in biological signal analysis include entropy, fractal analysis, and the Hjorth descriptor, while commonly used multi-scale methods include wavelet analysis, the coarse-grained procedure, and empirical mode decomposition (EMD). One multi-scale method for biological signal analysis is the multi-distance signal level difference (MSLD), which calculates the difference between two signal samples at a specific distance. In previous studies, MSLD was combined with the Hjorth descriptor for lung sound classification; MSLD can be developed further by modifying its fundamental equation. This study presents a comparison of MSLD and its variations, combined with the Hjorth descriptor, for lung sound classification. The results show that MSLD and its variations achieve the highest accuracy of 98.99% on five classes of lung sound data. This study thus provides several alternative multi-scale signal complexity analysis methods for biological signals.
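
    The MSLD-plus-Hjorth pipeline can be sketched compactly in numpy. The MSLD reading below (absolute difference between samples a distance d apart) is one plausible interpretation of the abstract's description, and the distance set is an assumption.

```python
import numpy as np

def msld(x, d):
    """Multi-distance signal level difference: absolute difference
    between two samples a distance d apart (assumed form)."""
    x = np.asarray(x, dtype=float)
    return np.abs(x[d:] - x[:-d])

def hjorth(x):
    """Hjorth descriptors: activity (variance), mobility, complexity."""
    x = np.asarray(x, dtype=float)
    dx = np.diff(x)
    ddx = np.diff(dx)
    activity = np.var(x)
    mobility = np.sqrt(np.var(dx) / activity)
    complexity = np.sqrt(np.var(ddx) / np.var(dx)) / mobility
    return activity, mobility, complexity

def msld_hjorth_features(x, distances=(1, 2, 4, 8)):
    """Feature vector: Hjorth descriptors of the MSLD at each distance,
    concatenated into one vector for a downstream classifier."""
    return np.concatenate([hjorth(msld(x, d)) for d in distances])
```

Varying the distance d gives the multi-scale view: small d captures fine crackle-like detail, larger d captures coarser envelope changes.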

    IOT Based Continuous Glucose Monitoring for Diabetes Mellitus using Deep Siamese Domain Adaptation Convolutional Neural Network

    The phrase "Internet of Things" (IoT) refers to the forthcoming generation of the Internet, which facilitates interaction among networked devices. IoT functions as an assistant in medicine and is critical to a variety of applications that monitor healthcare facilities. The pattern of observed parameters can be used to predict the type of disease. Health experts and technologists have developed systems that employ commonly used technologies such as wearable devices, wireless channels, and other remote devices to deliver cost-effective medical surveillance for people suffering from a range of diseases. Network-connected sensors worn on the body or placed in living areas collect large amounts of data to assess the patient's physical and mental wellbeing. In this manuscript, IoT-based Continuous Glucose Monitoring for Diabetes Mellitus using a Deep Siamese Domain Adaptation Convolutional Neural Network (CGM-DM-DSDACNN) is proposed. The goal of the described work is to investigate whether an IoT-based Continuous Glucose Monitoring System (CGMS) is both non-intrusive and secure. The task is to build an IoT-based architecture that extends from the sensor model to the back-end and displays blood glucose level, body temperature, and contextual data to end users such as patients and doctors in graphical and text formats. A higher level of energy economy is attained by tailoring the long-range Sigfox communication protocol to the glucose monitoring device. Additionally, the energy usage of a sensor device is analysed and energy-harvesting components are created for it. Finally, a Deep Siamese Domain Adaptation Convolutional Neural Network (DSDACNN) is presented for alerting patients and medical professionals in the event of anomalous circumstances, such as a too-low or too-high glucose level.
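
    The back-end alerting step (notify patient and doctor when glucose is too low or too high) reduces to a small decision rule. The thresholds below are hypothetical placeholders, not clinical guidance, and the function name is an assumption; in the paper this decision is made by the DSDACNN rather than fixed thresholds.

```python
# Hypothetical alert thresholds in mg/dL; real limits must come from
# the care team, not from this sketch.
HYPO_MG_DL = 70.0
HYPER_MG_DL = 180.0

def glucose_alert(readings_mg_dl):
    """Classify the latest CGM reading and report whether the patient
    and doctor should be notified (mimics the anomaly-alert step)."""
    latest = readings_mg_dl[-1]
    if latest < HYPO_MG_DL:
        return ("hypoglycemia", True)    # too-low glucose: alert
    if latest > HYPER_MG_DL:
        return ("hyperglycemia", True)   # too-high glucose: alert
    return ("normal", False)             # in range: no notification
```

In the described architecture this check would run server-side after the Sigfox uplink delivers each reading, pushing graphical and text alerts to both user roles.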

    Histopathology Image Classification Using Hybrid Parallel Structured Deep-CNN Models

    The healthcare industry is one of many that could benefit greatly from advances in the technology it utilizes. Artificial intelligence (AI) technologies are especially integral, specifically deep learning (DL), a highly useful data-driven technology. It is applied through a variety of methods, depending mainly on the structure of the available data; with varying applications, this technology produces data in different contexts with particular connotations. Scan images play a great role in identifying the existence of disease in a patient, and automating their processing with CNN-based models is highly effective in reducing the human errors that otherwise arise from large volumes of data. Hence, this study presents a hybrid deep learning architecture to classify histopathology images and identify the presence of cancer in a patient. The proposed models are parallelized using the TensorFlow-GPU framework to accelerate the training of these deep CNN (Convolutional Neural Network) architectures. The study uses transfer learning during training, and early stopping criteria are applied to avoid overfitting. The models impose a parallel LSTM layer on four considered architectures: MobileNet, VGG16, and ResNet with 101 and 152 layers. The experimental results show that the Hybrid ResNet101 and Hybrid ResNet152 architectures are highly suitable, with accuracies of 90% and 92%, respectively. The study concludes that the proposed Hybrid ResNet-152 architecture is highly efficient in classifying histopathology images. This well-focused and detailed experimental study will further help researchers understand how deep CNN architectures can be applied in application development.
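
    The early-stopping criterion mentioned above can be sketched framework-independently; the class below is a generic patience-based monitor (an illustrative assumption, not the study's exact TensorFlow callback).

```python
class EarlyStopping:
    """Stop training when the monitored validation loss has not improved
    by at least min_delta for `patience` consecutive epochs."""

    def __init__(self, patience=3, min_delta=0.0):
        self.patience, self.min_delta = patience, min_delta
        self.best = float("inf")   # best validation loss seen so far
        self.stale = 0             # epochs since last improvement

    def step(self, val_loss):
        """Record one epoch's validation loss; return True to stop."""
        if val_loss < self.best - self.min_delta:
            self.best = val_loss
            self.stale = 0
        else:
            self.stale += 1
        return self.stale >= self.patience
```

A training loop calls `step` once per epoch and breaks when it returns True, which is what prevents the hybrid models from overfitting past their best validation epoch.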

    Development of Speech Command Control Based TinyML System for Post-Stroke Dysarthria Therapy Device

    Post-stroke dysarthria (PSD) is a widespread outcome of a stroke. To help with the objective evaluation of dysarthria, the development of pathological voice recognition technology has received a lot of attention. Soft robotic therapy devices have been proposed as an alternative for rehabilitation and hand grasp assistance to improve activities of daily living (ADL). Despite significant progress in this field, most soft robotic therapy devices are complex and bulky, lack a pathological voice recognition model, demand large computational power, and rely on a stationary controller. This study aims to develop a portable wireless multi-controller with simulated dysarthric vowel speech in Bahasa Indonesia and non-dysarthric micro-speech recognition, using a tiny machine learning (TinyML) system for hardware efficiency. The speech interface uses an INMP441 microphone, and inference runs on a lightweight Deep Convolutional Neural Network (DCNN) embedded in an ESP32. Features are extracted with the Short-Time Fourier Transform (STFT) and fed into the CNN. This method has proven useful for micro-speech recognition with low computational power in both speech scenarios, with an accuracy above 90%. Real-time inference on the ESP32 with a hand prosthetic, at three levels of household noise intensity (24 dB, 42 dB, and 62 dB), yielded accuracies of 95%, 85%, and 50%, respectively. Wireless communication latency with both controllers is around 0.2 to 0.5 ms.
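
    The STFT feature stage can be sketched in numpy; frame length, hop, and the Hann window below are common micro-speech defaults and are assumptions, not the study's stated parameters.

```python
import numpy as np

def stft_magnitude(signal, frame_len=256, hop=128):
    """Hann-windowed short-time Fourier transform magnitude: the
    spectrogram-style feature map a micro-speech CNN consumes.
    Returns an array of shape (n_frames, frame_len // 2 + 1)."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop: i * hop + frame_len] * window
                       for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=1))
```

On a microcontroller the same computation is done frame by frame with a fixed-point FFT, but the shape of the feature map fed to the DCNN is identical.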

    Hybrid CNN+LSTM Deep Learning Model for Intrusions Detection Over IoT Environment

    The connectivity of devices through the internet plays a remarkable role in our daily lives. Many network-based applications are utilized in different domains, e.g., health care, smart environments, and businesses. These applications offer a wide range of services to large groups of users. Therefore, the safety of network-based applications has always been an area of research interest for academia and industry alike. The evolution of deep learning has enabled us to explore new areas of research. Hackers exploit vulnerabilities in networks and attempt to gain access to confidential systems and information; such access can be very harmful and cause losses beyond comprehension. Therefore, detection of these network intrusions is of the utmost importance. Deep learning-based techniques require minimal inputs while exploring every possible feature set in the network. Thus, in this paper, we present a hybrid CNN+LSTM deep learning model for the detection of network intrusions. We detect DDoS-type network intrusions, i.e., R2L, U2R, and Probe, which belong to the active attack category, and PortScan, which falls in the passive attack category. For this purpose, we used the benchmark CICIDS2017 dataset for conducting the experiments and achieved an accuracy of 99.82%, as demonstrated in the experimental results.
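
    The forward pass of a hybrid CNN+LSTM can be sketched in numpy: a 1-D convolution extracts local patterns from each window of flow features, and an LSTM summarizes the resulting sequence. This is a minimal structural sketch with random weights, not the paper's trained model; all shapes and names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d_relu(x, kernels):
    """Valid 1-D convolution with ReLU, the 'CNN' half of the hybrid.
    x: (time, features); kernels: (k, features, out_channels)."""
    k, f, out = kernels.shape
    steps = x.shape[0] - k + 1
    y = np.stack([np.tensordot(x[t:t + k], kernels, axes=([0, 1], [0, 1]))
                  for t in range(steps)])
    return np.maximum(y, 0.0)            # (steps, out_channels)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_last_state(seq, Wx, Wh, b):
    """Run one LSTM layer over seq (time, features) and return the final
    hidden state; gate weights are stacked in order [i, f, g, o]."""
    h_dim = Wh.shape[0]
    h, c = np.zeros(h_dim), np.zeros(h_dim)
    for x_t in seq:
        z = x_t @ Wx + h @ Wh + b
        i, f, g, o = np.split(z, 4)
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        c = f * c + i * np.tanh(g)       # cell state update
        h = o * np.tanh(c)               # hidden state
    return h
```

A classifier head (e.g. a sigmoid over `h`) would then score each traffic window as benign or malicious; training such a model on CICIDS2017 requires a real framework.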