
    Evaluation of ultra-wideband in vivo radio channel and its effects on system performance

    This paper presents a bit-error-rate (BER) performance analysis, and its improvement using equalizers, for an in vivo radio channel at ultra-wideband frequencies (3.1 GHz to 10.6 GHz). Simulations over a 50 MHz bandwidth show that the in vivo radio channel is affected by small-scale fading. This fading produces intersymbol interference: delayed versions of the transmitted symbols arrive at the receiver, corrupt subsequent symbols, and increase the BER. A 29-tap channel was observed from data measured experimentally on a human cadaver, and the BER was calculated for the measured in vivo channel response alongside the ideal additive white Gaussian noise and Rayleigh channel models. Linear and nonlinear adaptive equalizers, i.e., the least mean square (LMS) equalizer and the decision feedback equalizer (DFE), were used to improve the BER performance of the in vivo radio channel. Both equalizers improve the BER, but the DFE outperforms the LMS equalizer, with performance gains of 2 dB at Eb/No = 12 dB and 4 dB at Eb/No = 14 dB. These findings will help guide future researchers and designers in enhancing the system performance of ultra-wideband in vivo wireless systems.
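
    A minimal sketch of the LMS adaptive equalization idea discussed above, assuming a short arbitrary multipath channel and BPSK signalling rather than the measured 29-tap in vivo response; the channel taps, noise level, step size, and equalizer length are illustrative values, not parameters from the paper.

```python
# Minimal LMS adaptive equalizer over an assumed multipath channel (not the
# measured in vivo response). All numerical values here are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_sym = 5000
symbols = 2.0 * rng.integers(0, 2, n_sym) - 1.0       # BPSK symbols (+/-1)

channel = np.array([1.0, 0.45, 0.25, 0.10])           # assumed multipath taps
rx = np.convolve(symbols, channel)[:n_sym]            # channel output
rx += 0.05 * rng.standard_normal(n_sym)               # additive white Gaussian noise

n_taps, mu = 11, 0.01                                 # equalizer length and LMS step size
delay = n_taps // 2                                   # decision delay
w = np.zeros(n_taps)
detected = np.zeros(n_sym)

for k in range(n_taps, n_sym):
    x = rx[k - n_taps:k][::-1]                        # most recent sample first
    y = w @ x                                         # equalizer output
    e = symbols[k - delay] - y                        # training error (known symbols)
    w += mu * e * x                                   # LMS weight update
    detected[k] = np.sign(y)

errors = np.sum(detected[n_taps:] != symbols[n_taps - delay:n_sym - delay])
print(f"BER after LMS equalization: {errors / (n_sym - n_taps):.4f}")
```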

    Location Dependent Channel Characteristics for Implantable Devices

    This paper presents the impact of the placement and location of the ex-vivo antenna on the in-vivo channel. Three different body parts are considered in the simulations, which use measured data over a 500 MHz bandwidth. The results show that the in-vivo channel is highly location dependent with respect to the position of the ex-vivo antenna. These findings can inform the system design of future implantable devices to be placed inside the human body.

    Development of doubled haploid maize lines by using in vivo haploid technique

    Doubled haploid technology is now an integral component of modern maize breeding programs. In this study, the maternal haploid induction (gynogenesis) method was used to derive doubled haploid (DH) lines from elite maize germplasm adapted to Turkey. Temperate haploid inducers (RWS, RWK-76, RWS x RWK-76 and WS14) were used as pollinators, and a set of 30 single crosses (in FAO 650-700 maturity groups) were used as source materials. Putative haploid seeds were selected based on expression of the R1-nj anthocyanin color marker. The highest haploid induction rate (20.42%) was recorded with the inducer line RWK-76, and the lowest haploid induction rate (17.75%) was obtained with WS14. Putative haploid seeds were germinated, and the seedlings were treated with a 0.06% colchicine + 0.5% dimethyl sulfoxide solution. Following transfer of the seedlings to the field, 2178 D0 plants were obtained out of a total of 3012 treated haploids. Live plants accounted for 89% of the 2178 seedlings planted in the field, fertile plants made up 57% of the live plants, inbreeding succeeded in 31.23% of the fertile plants, and only 7.8% of the inbred plants produced seeds. Consequently, 27 doubled haploid lines were developed.
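
    As a quick illustration of how the stage-wise percentages lead to roughly 27 lines, the short check below multiplies them out, assuming rounding to whole plants at each stage; it is only an arithmetic sketch of the figures quoted above.

```python
# Illustrative check of the reported stage-wise percentages (rounded to whole plants).
d0_plants = 2178                         # D0 plants transferred to the field
live = round(d0_plants * 0.89)           # 89% survived              -> ~1938
fertile = round(live * 0.57)             # 57% of live were fertile  -> ~1105
inbred = round(fertile * 0.3123)         # 31.23% successfully inbred -> ~345
with_seed = round(inbred * 0.078)        # 7.8% produced seed        -> ~27
print(live, fertile, inbred, with_seed)  # 1938 1105 345 27
```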

    Analysis & Numerical Simulation of Indian Food Image Classification Using Convolutional Neural Network

    Recognition of Indian food can be considered a fine-grained visual task owing to the recognition properties of the various food classes. It is therefore important to provide an optimized approach to segmentation and classification for the different applications based on food recognition. Food computation mainly uses computer-science approaches that need food data from various outlets, such as real-time images, social platforms, food journaling, and food datasets, for different modalities. To use Indian food images in a range of applications, we need a proper analysis of food images with state-of-the-art techniques, and appropriate segmentation and classification methods are required to deliver relevant and up-to-date analysis. Since accurate segmentation leads to proper recognition and identification, we first consider segmentation of food items from images. With a basic convolutional neural network (CNN) model, edge and shape constraints influence the segmentation outcome at the edges, so an edge-adaptive CNN (EA-CNN) is developed to address this. Having addressed food segmentation with the CNN, we turn to food classification, which is important for many types of applications: food analysis is the primary component of health-related applications and is needed in our day-to-day life. A CNN can directly predict the score function from image pixels; the input layer produces the tensor outputs, and the convolution layers learn their kernels through back-propagation. In this method, feature extraction and max-pooling are applied over multiple layers, and outputs are obtained using a softmax layer. The proposed implementation achieves 92.89% accuracy on data drawn from the Yummly dataset and our own prepared dataset. Since further improvement in food image classification was still needed, we concatenate the segmentation features of the EA-CNN with the features of our custom Inception-V3 model to provide an optimized classification; this strengthens the contribution of important features to the subsequent classification process. As an extension, we considered South Indian food classes with our own collected food image dataset and obtained 96.27% accuracy. The accuracy obtained for the considered dataset compares very well with our previous method and with state-of-the-art techniques.
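
    A minimal sketch of the feature-concatenation step described above, assuming a Keras Inception-V3 backbone fused with a segmentation-derived feature vector before a softmax classifier; the 20-class output, the 256-dimensional segmentation feature, and the layer sizes are illustrative assumptions, not values from the paper.

```python
# Fuse Inception-V3 image features with segmentation-derived features, then classify.
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import InceptionV3

NUM_CLASSES = 20          # assumed number of food classes
SEG_FEATURE_DIM = 256     # assumed size of the segmentation feature vector

image_in = layers.Input(shape=(299, 299, 3), name="food_image")
seg_in = layers.Input(shape=(SEG_FEATURE_DIM,), name="segmentation_features")

backbone = InceptionV3(include_top=False, weights="imagenet", input_tensor=image_in)
x = layers.GlobalAveragePooling2D()(backbone.output)   # Inception-V3 image descriptor

fused = layers.Concatenate()([x, seg_in])              # concatenate the two feature sets
fused = layers.Dense(512, activation="relu")(fused)
out = layers.Dense(NUM_CLASSES, activation="softmax")(fused)

model = Model(inputs=[image_in, seg_in], outputs=out)
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()
```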

    From Sophisticated Analysis to Colorimetric Determination: Smartphone Spectrometers and Colorimetry

    Smartphone-based spectrometers and colorimetry have been gaining relevance due to the widespread advances of devices with increasing computational power, their relatively low cost and portable designs with user-friendly interfaces, and their compatibility with data acquisition and processing for “lab-on-a-chip” systems. They find applications in interdisciplinary fields, including but not limited to medical science, water monitoring, agriculture, and chemical and biological sensing. However, spectrometer and colorimetry designs are challenging tasks in real-life scenarios as several distinctive issues influence the quantitative evaluation process, such as ambient light conditions and device independence. Several approaches have been proposed to overcome the aforementioned challenges and to enhance the performance of smartphone-based colorimetric analysis. This chapter aims at providing researchers with a state-of-the-art overview of smartphone-based spectrometers and colorimetry, which includes hardware designs with 3D printers and sensors and software designs with image processing algorithms and smartphone applications. In addition, assay preparation to mimic real-life testing environments and performance metrics for quantitative evaluation of proposed designs are presented, along with a list of new and future trends in this field.
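
    A minimal sketch of the kind of colorimetric image-processing step surveyed in this chapter: averaging the RGB values inside a region of interest and mapping one channel to concentration through a linear calibration curve; the synthetic frame, ROI coordinates, and calibration points are illustrative assumptions.

```python
# Average the RGB values in a region of interest (ROI) and apply an assumed
# linear calibration of green-channel intensity against known concentrations.
import numpy as np

def roi_mean_rgb(image, box):
    """Mean (R, G, B) inside box = (top, bottom, left, right) of an HxWx3 array."""
    top, bottom, left, right = box
    return image[top:bottom, left:right].reshape(-1, 3).mean(axis=0)

# Synthetic smartphone frame standing in for a captured assay photo.
rng = np.random.default_rng(42)
frame = rng.integers(80, 200, size=(480, 640, 3)).astype(float)

# Assumed calibration: green-channel intensity measured at known concentrations (a.u.).
known_conc = np.array([0.0, 1.0, 2.0, 4.0])
green_ref = np.array([210.0, 180.0, 150.0, 95.0])
slope, intercept = np.polyfit(green_ref, known_conc, 1)   # linear calibration curve

r, g, b = roi_mean_rgb(frame, box=(200, 280, 280, 360))
print(f"Mean RGB: ({r:.1f}, {g:.1f}, {b:.1f}); estimated concentration: {slope * g + intercept:.2f}")
```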

    3D Multimodal Brain Tumor Segmentation and Grading Scheme based on Machine, Deep, and Transfer Learning Approaches

    Glioma is one of the most common tumors of the brain. The detection and grading of glioma at an early stage are critical for increasing the survival rate of patients. Computer-aided detection (CADe) and computer-aided diagnosis (CADx) systems are essential and important tools that provide more accurate and systematic results and speed up the decision-making process of clinicians. In this paper, we introduce a method combining variations of machine, deep, and transfer learning approaches for effective brain tumor (i.e., glioma) segmentation and grading on the multimodal brain tumor segmentation (BraTS) 2020 dataset. We apply the popular and efficient 3D U-Net architecture for the brain tumor segmentation phase. For the tumor grading phase, we also utilize 23 different combinations of deep feature sets and machine learning/fine-tuned deep learning CNN models based on Xception, IncResNetv2, and EfficientNet, drawing on 4 different feature sets and 6 learning models. The experimental results demonstrate that the proposed method achieves a 99.5% accuracy rate for slice-based tumor grading on the BraTS 2020 dataset. Moreover, our method shows competitive performance against similar recent works.
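
    A minimal sketch of one "deep features plus classical learner" combination of the kind described above, assuming Xception features feeding a support vector machine for slice-based grading; the input shape, the two-grade labels, and the SVM settings are illustrative assumptions, not the paper's exact configuration.

```python
# Extract deep Xception descriptors from slices and grade them with an SVM.
import numpy as np
from tensorflow.keras.applications import Xception
from tensorflow.keras.applications.xception import preprocess_input
from sklearn.svm import SVC

extractor = Xception(include_top=False, weights="imagenet", pooling="avg")

def deep_features(slices_rgb):
    """slices_rgb: array of shape (n, 299, 299, 3) built from pre-processed MRI slices."""
    return extractor.predict(preprocess_input(slices_rgb.astype("float32")), verbose=0)

# Placeholder data standing in for pre-processed BraTS slices and grade labels.
X = np.random.rand(16, 299, 299, 3) * 255.0
y = np.random.randint(0, 2, 16)                          # 0 = low grade, 1 = high grade (assumed)

feats = deep_features(X)                                 # 2048-dim Xception descriptors
clf = SVC(kernel="rbf", C=1.0).fit(feats, y)             # classical learner on deep features
print("Training accuracy:", clf.score(feats, y))
```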

    De-Noising Signals using Wavelet Transform in Internet of Underwater Things

    The Internet of Underwater Things (IoUT) is an emerging field within the Internet of Things (IoT) in the context of smart cities. IoUT has applications in monitoring underwater structures as well as marine life. This paper presents preliminary work in which sensor nodes with wireless capability were built on the Arduino Uno platform with temperature and pressure sensors. The sensor nodes were then tested in the Flumes of the COAST laboratory to determine the maximum depth achievable in fresh water before the signal is lost, as radio frequencies are susceptible to interference under water. Further, the received signals were de-noised using the wavelet transform with Daubechies wavelets and thresholding at decomposition level 5. Preliminary results suggest that the signal was lost at a depth of 30 cm and that de-noising was achieved with very small errors (mean squared errors of 0.106 and 0.000446 and peak signal-to-noise ratios of 70.18 dB and 58.83 dB for the pressure and temperature signals, respectively). Results from this study will lay the foundation for further investigations of wireless sensor networks in the IoUT that integrate these de-noising techniques.
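
    A minimal sketch of wavelet de-noising with a Daubechies wavelet at decomposition level 5, in the spirit of the approach above; the specific wavelet ('db4'), the soft universal threshold, and the synthetic signal are illustrative assumptions rather than the settings used in the study.

```python
# Daubechies wavelet de-noising at level 5 with soft thresholding, plus MSE/PSNR.
import numpy as np
import pywt

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 1024)
clean = np.sin(2 * np.pi * 5 * t)                       # stand-in sensor signal
noisy = clean + 0.2 * rng.standard_normal(t.size)

coeffs = pywt.wavedec(noisy, "db4", level=5)            # 5-level Daubechies decomposition
sigma = np.median(np.abs(coeffs[-1])) / 0.6745          # noise estimate from finest scale
thr = sigma * np.sqrt(2 * np.log(noisy.size))           # universal threshold
coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
denoised = pywt.waverec(coeffs, "db4")[: noisy.size]

mse = np.mean((denoised - clean) ** 2)
psnr = 10 * np.log10(np.max(np.abs(clean)) ** 2 / mse)  # peak signal-to-noise ratio
print(f"MSE = {mse:.5f}, PSNR = {psnr:.2f} dB")
```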

    Enhancing Retinal Scan Classification: A Comparative Study of Transfer Learning and Ensemble Techniques

    Ophthalmic diseases are a significant health concern globally, causing visual impairment and blindness in millions of people, particularly in dispersed populations. Among these diseases, retinal fundus diseases are a leading cause of irreversible vision loss, and early diagnosis and treatment can prevent this outcome. Retinal fundus scans have become an indispensable tool for doctors to diagnose multiple ocular diseases simultaneously. In this paper, the results of a variety of deep learning models (DenseNet-201, ResNet125V2, XceptionNet, EfficientNet-B7, MobileNetV2, and EfficientNetV2M) and ensemble learning approaches are presented, which can accurately detect 20 common fundus diseases by analyzing retinal fundus scan images. The proposed model is able to achieve a remarkable accuracy of 96.98% for risk classification and 76.92% for multi-disease detection, demonstrating its potential for use in clinical settings. By utilizing the proposed model, doctors can provide swift and accurate diagnoses to patients, improving their chances of receiving timely treatment and preserving their vision.
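
    A minimal sketch of a soft-voting ensemble over pretrained backbones of the kind listed above; the three backbones chosen here, the 20-class output, and the untrained classification heads are illustrative assumptions, not the paper's trained models.

```python
# Soft-voting ensemble: average the softmax outputs of several transfer-learning classifiers.
import numpy as np
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import DenseNet201, MobileNetV2, Xception

NUM_CLASSES = 20                                        # assumed number of fundus disease classes

def build_classifier(backbone_cls, name):
    base = backbone_cls(include_top=False, weights="imagenet",
                        input_shape=(224, 224, 3), pooling="avg")
    out = layers.Dense(NUM_CLASSES, activation="softmax")(base.output)  # head is untrained here
    return Model(base.input, out, name=name)

members = [build_classifier(DenseNet201, "densenet201"),
           build_classifier(MobileNetV2, "mobilenetv2"),
           build_classifier(Xception, "xception")]

def ensemble_predict(batch):
    """Average the members' class probabilities (soft voting) and return argmax labels."""
    probs = np.mean([m.predict(batch, verbose=0) for m in members], axis=0)
    return np.argmax(probs, axis=1)

dummy = np.random.rand(2, 224, 224, 3).astype("float32")  # placeholder fundus images
print(ensemble_predict(dummy))
```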

    Ambient Assisted Living: Scoping Review of Artificial Intelligence Models, Domains, Technology, and Concerns

    Background: Ambient assisted living (AAL) is a common name for various artificial intelligence (AI)-infused applications and platforms that support their users in need in multiple activities, from health to daily living. These systems use different approaches, known as AI models, to learn about their users and make automated decisions for personalizing their services and improving outcomes. Given the numerous systems developed and deployed for people with different needs, health conditions, and dispositions toward the technology, it is critical to obtain clear and comprehensive insights concerning the AI models used, along with their domains, technology, and concerns, to identify promising directions for future work. Objective: This study aimed to provide a scoping review of the literature on AI models in AAL. In particular, we analyzed the specific AI models used in AAL systems, the target domains of the models, the technology using the models, and the major concerns from the end-user perspective. Our goal was to consolidate research on this topic and inform end users, health care professionals and providers, researchers, and practitioners in developing, deploying, and evaluating future intelligent AAL systems. Methods: This study was conducted as a scoping review to identify, analyze, and extract the relevant literature. It used a natural language processing toolkit to retrieve the article corpus for an efficient and comprehensive automated literature search. Relevant articles were then extracted from the corpus and analyzed manually. This review included 5 digital libraries: IEEE, PubMed, Springer, Elsevier, and MDPI. Results: We included a total of 108 articles. The annual distribution of relevant articles showed a growing trend for all categories from January 2010 to July 2022. The AI models mainly used unsupervised and semisupervised approaches. The leading models are deep learning, natural language processing, instance-based learning, and clustering. Activity assistance and recognition were the most common target domains of the models. The models were mainly implemented in ambient sensing, mobile technology, and robotic devices. Older adults were the primary beneficiaries, followed by patients and frail persons of various ages. Availability was a top beneficiary concern. Conclusions: This study presents the analytical evidence of AI models in AAL and their domains, technologies, beneficiaries, and concerns. Future research on intelligent AAL should involve health care professionals and caregivers as designers and users, comply with health-related regulations, improve transparency and privacy, integrate with health care technological infrastructure, explain their decisions to the users, and establish evaluation metrics and design guidelines. Trial Registration: PROSPERO (International Prospective Register of Systematic Reviews) CRD42022347590; https://www.crd.york.ac.uk/prospero/display_record.php?ID=CRD42022347590. This work was part of and supported by GoodBrother, COST Action 19121 (Network on Privacy-Aware Audio- and Video-Based Applications for Active and Assisted Living).