
    FPGA based technical solutions for high throughput data processing and encryption for 5G communication: A review

    Field programmable gate array (FPGA) devices are ideal solutions for high-speed processing applications, given their flexibility, parallel processing capability, and power efficiency. In this review paper, an overview of the key applications of FPGA-based platforms in 5G networks/systems is first presented, highlighting the improved performance offered by such devices. FPGA-based implementations of cloud radio access network (C-RAN) accelerators, network function virtualization (NFV)-based network slicers, cognitive radio systems, and multiple input multiple output (MIMO) channel characterizers are the main applications considered that can benefit from the high processing rate, power efficiency, and flexibility of FPGAs. Furthermore, implementations of encryption/decryption algorithms on the Xilinx Zynq UltraScale+ MPSoC ZCU102 FPGA platform are discussed, and our high-speed, lightweight implementation of the well-known AES-128 algorithm, developed on the same FPGA platform, is then introduced and compared with similar solutions already published in the literature. The comparison results indicate that our AES-128 implementation enables efficient hardware usage for a given data rate (up to 28.16 Gbit/s), resulting in higher efficiency (8.64 Mbps/slice) than the other considered solutions. Finally, applications of the ZCU102 platform for high-speed processing are explored, such as image and signal processing, visual recognition, and hardware resource management.
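As a rough, back-of-the-envelope illustration of how the reported figures relate, the sketch below assumes a fully pipelined AES-128 core that accepts one 128-bit block per clock cycle; the clock frequency and slice count it derives are not stated in the abstract and are inferred here purely for illustration.

```python
# Hedged sanity check of the reported AES-128 figures (28.16 Gbit/s, 8.64 Mbps/slice).
# Assumes a fully pipelined core processing one 128-bit block per clock cycle;
# the implied clock and slice count are inferred, not reported in the paper.

BLOCK_BITS = 128                     # AES block size in bits
throughput_bps = 28.16e9             # reported throughput, bit/s
efficiency_mbps_per_slice = 8.64     # reported efficiency, Mbps/slice

implied_clock_hz = throughput_bps / BLOCK_BITS                        # ~220 MHz
implied_slices = (throughput_bps / 1e6) / efficiency_mbps_per_slice   # ~3,259 slices

print(f"implied clock:  {implied_clock_hz / 1e6:.0f} MHz")
print(f"implied slices: {implied_slices:.0f}")
```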

    Neuro-critical multimodal Edge-AI monitoring algorithm and IoT system design and development

    In recent years, with the continuous development of neurocritical medicine, the success rate of treating patients with traumatic brain injury (TBI) has continued to increase, and the prognosis has also improved. The condition of TBI patients is usually very complicated, and after treatment patients often need an extended time to recover; the degree of recovery is also related to the prognosis. However, as a young discipline, neurocritical medicine still has many shortcomings. In most hospitals, the condition of the Neuro-intensive Care Unit (NICU) is uneven, the equipment has limited functionality, and there is no unified data specification. Most of the instruments are cumbersome and expensive, and patients often need to pay high medical expenses. Recent years have seen a rapid development of big data and artificial intelligence (AI) technology, which are advancing the medical IoT field; however, further development and a wider range of applications of these technologies are needed to achieve widespread adoption. Based on the above premises, the main contributions of this thesis are the following. First, the design and development of a multi-modal brain monitoring system acquiring 8-channel electroencephalography (EEG) signals, dual-channel near-infrared spectroscopy (NIRS) signals, and intracranial pressure (ICP) signals. Furthermore, an integrated platform for displaying and analyzing the multi-modal physiological data in real time was designed. This thesis also introduces the use of the Qt signal/slot event-processing mechanism and multi-threading to improve the real-time performance of data processing. In addition, multi-modal electrophysiological data storage and processing was realized on a cloud server. The system also includes a custom-built Django cloud server that realizes real-time transmission between the server and a WeChat applet; based on the WebSocket protocol, the data transmission delay is less than 10 ms. The analysis platform can be equipped with deep learning models to monitor patients with epileptic seizures and assess the level of consciousness of Disorders of Consciousness (DOC) patients. This thesis combines the standard open-source CHB-MIT data set, a clinical data set provided by Huashan Hospital, and additional data collected by the system described in this thesis; these data sets are merged to build deep learning network models and develop related applications for automatic disease diagnosis in smart medical IoT systems. This mainly includes using the clinical data to analyze the EEG characteristics of DOC patients and building a CNN model to evaluate a patient's level of consciousness automatically. Since epilepsy is also a common condition in neuro-intensive care, this thesis analyzes how various deep learning models perform on the CHB-MIT data set versus the clinical data set for epilepsy monitoring, in order to select the most appropriate model for the system being designed and developed. Finally, this thesis verifies the AI-assisted analysis models. The results show that the CNN model for consciousness assessment reaches 82% accuracy on the clinical data set, and the CNN+STFT model for epilepsy monitoring reaches 90% accuracy on clinical data. The multi-modal brain monitoring system built is also fully verified: the EEG signals collected by this system have a high signal-to-noise ratio and strong anti-interference ability, and the system performs well in terms of real-time operation and stability.
    Keywords: TBI, neurocritical care, multi-modal, consciousness assessment, seizure detection, deep learning, CNN, IoT
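The abstract does not detail the CNN+STFT preprocessing used for epilepsy monitoring; the sketch below illustrates one common way such a front end can be built, converting a window of multi-channel EEG into log-magnitude spectrograms for a 2-D CNN. The sampling rate and window length are assumed values, not taken from the thesis.

```python
# Illustrative STFT front end for EEG seizure detection (assumed parameters;
# the thesis' actual preprocessing and network architecture are not described here).
import numpy as np
from scipy.signal import stft

FS = 256          # assumed sampling rate, Hz
WINDOW_S = 4      # assumed analysis window length, seconds
N_CHANNELS = 8    # matches the 8-channel EEG front end described above

def eeg_window_to_spectrograms(window: np.ndarray) -> np.ndarray:
    """Convert one (channels, samples) EEG window into a stack of
    log-magnitude spectrograms shaped (channels, freq_bins, time_frames)."""
    assert window.shape == (N_CHANNELS, FS * WINDOW_S)
    _, _, Z = stft(window, fs=FS, nperseg=128, noverlap=64, axis=-1)
    return np.log1p(np.abs(Z))  # compress dynamic range before feeding the CNN

# Example with synthetic data standing in for real recordings:
dummy = np.random.randn(N_CHANNELS, FS * WINDOW_S)
print(eeg_window_to_spectrograms(dummy).shape)  # (8, 65, 17): input tensor for a 2-D CNN
```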

    Early Flame Detection System Using Real-Time Machine-Vision and Image Analysis

    From 2010 to 2019, 110,811 fires with losses were reported to the Office of the Fire Marshal and Emergency Management in Ontario. In the USA, local fire departments responded to 1,338,500 fires in 2020; these fires caused 3,500 civilian deaths, 15,200 civilian injuries, and $21.9 billion in property damage, and a fire occurs in a structure in the USA every 64 seconds. These and similar recent statistics from different parts of the world indicate that current point-type fire detection technology has failed to eliminate the hazards of death, injury, and economic loss caused by fire. This research aims to utilize the latest digital video processing and computer vision technology to develop a more efficient flame detection system. Due to rapid developments in digital cameras, IoT, and 5G telecommunication technologies, computer-vision-based fire detection has received growing attention from researchers in recent years. Computer-vision-based fire detection can be as simple as a single IoT camera that detects a fire early, before it gets out of control and turns into a threatening risk, triggers a local alarm, and sends remote warning signals to the fire department and emergency management officials. The proposed system does not require high capital costs nor high operation and maintenance costs, since it runs on top of the existing infrastructure of the digital security & surveillance system network. Moreover, the proposed system has broad potential for indoor and outdoor applications in urban areas, and it is easily expandable by adding more IP cameras to the existing network. The proposed system incorporates two stages: Stage I detects fire candidate regions in the live video stream based on colour and motion information, and Stage II passes each candidate region to a trained Convolutional Neural Network (CNN) classification model to classify the region as fire or non-fire. The main innovation in this approach is its simplicity and suitability for real-time use without compromising accuracy. The experimental results show that the system training and validation accuracies reach 100% and 98%, respectively. Applying the proposed framework as an additional layer of protection integrated into existing indoor and outdoor digital security & surveillance systems is expected to provide early fire detection, allow firefighters and rescue teams to arrive at the scene at an early stage, and offer precious minutes for attendees or building occupants to evacuate hazardous locations. This proposal will save lives and minimize economic losses in public and private properties.
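The abstract does not give the exact colour or motion rules for Stage I; the sketch below shows one plausible form of that stage, combining an assumed HSV colour range for flame-like pixels with background-subtraction-based motion, using OpenCV. The resulting candidate regions would then be cropped and passed to the Stage II CNN.

```python
# Illustrative Stage-I candidate detection (colour + motion); the thresholds and
# rules below are assumptions, not the values used by the proposed system.
import cv2
import numpy as np

back_sub = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)

def fire_candidate_mask(frame_bgr: np.ndarray) -> np.ndarray:
    """Return a binary mask of pixels that look fire-coloured AND are moving."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Assumed HSV range for bright reddish/orange/yellow flame colours.
    colour_mask = cv2.inRange(hsv, (0, 80, 180), (35, 255, 255))
    motion_mask = back_sub.apply(frame_bgr)      # foreground = moving pixels
    candidate = cv2.bitwise_and(colour_mask, motion_mask)
    # Remove small speckles before extracting connected candidate regions.
    kernel = np.ones((5, 5), np.uint8)
    return cv2.morphologyEx(candidate, cv2.MORPH_OPEN, kernel)
```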

    Vision-Based Soft Mobile Robot Inspired by Silkworm Body and Movement Behavior

    Designing an inexpensive, low-noise mobile robot that is safe for individuals and has an efficient vision system represents a challenge. This paper proposes a soft mobile robot inspired by the silkworm's body structure and moving behavior. Two identical pneumatic artificial muscles (PAMs) are used to form the body of the robot by sewing the PAMs together longitudinally. The proposed robot moves forward, left, and right in steps depending on the relative contraction ratio of the actuators; the connection between the two artificial muscles provides steering when different air pressures are applied to each PAM. A camera (eye) integrated into the proposed soft robot helps it control its motion and direction, so the silkworm soft robot can detect a specific object and track it continuously. The proposed vision system supports automatic tracking based on a deep learning platform fed by a real-time IR camera. The object detection platform YOLOv3 is used effectively to solve the challenge of detecting small, high-speed objects such as tennis balls, and the model is trained on a dataset consisting of images of tennis balls. The work is simulated in Google Colab and then tested in real time on an embedded device with a fast GPU, the Jetson Nano development kit. The presented object-follower robot is cheap, tracks quickly, and is friendly to the environment. The system reaches a 99% accuracy rate during training and testing, and validation results are obtained and recorded to prove the effectiveness of this novel silkworm soft robot. The research contribution is the design and implementation of a soft mobile robot with an effective vision system.
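The abstract does not describe how detections are mapped to motion commands; the sketch below shows one minimal way a YOLOv3 model, loaded here through OpenCV's DNN module with hypothetical weight/config file names, could turn a detected ball's horizontal position into a left/right/forward steering decision for the two-PAM body.

```python
# Minimal, illustrative detect-and-steer loop; the file names, class index and the
# mapping to PAM pressures are hypothetical, not taken from the paper.
import cv2

net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")  # assumed files
model = cv2.dnn_DetectionModel(net)
model.setInputParams(size=(416, 416), scale=1 / 255.0, swapRB=True)

BALL_CLASS_ID = 32  # "sports ball" in COCO; a custom tennis-ball class may differ

def steering_command(frame) -> str:
    """Return 'left', 'right' or 'forward' based on where the ball appears."""
    class_ids, scores, boxes = model.detect(frame, confThreshold=0.5)
    for cid, box in zip(class_ids, boxes):
        if int(cid) == BALL_CLASS_ID:
            x, _, w, _ = box
            centre = x + w / 2
            third = frame.shape[1] / 3
            if centre < third:
                return "left"       # e.g. inflate the right PAM more than the left
            if centre > 2 * third:
                return "right"      # e.g. inflate the left PAM more than the right
            return "forward"        # contract both PAMs equally
    return "forward"                # no ball seen: keep stepping forward
```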