191 research outputs found

    Bessel beam generation using dielectric planar lenses at millimeter frequencies

    In this work a dielectric planar lens is proposed to generate a Bessel beam. The lens operates in the Ka-band and produces a non-diffracting range within the Fresnel region of the antenna. The methodology for designing the aperture antenna at millimetre or microwave frequencies is presented. It is applied to a dielectric planar lens made up of cells that shape the radiated near field by adjusting the unit-cell response. An approach based on a second-order polynomial is proposed to account for the angular dependence of the phase-shift response of the cell during the design process. To implement the lens physically, two novel cells, based on rectangular and hexagonal prisms, are proposed and their performance is compared. The cells realise the variation of the dielectric index by using air gaps to control the overall density of the material. After fully characterizing the cells, a design is carried out for the two proposed types of cells. The requirement for the Bessel beam is a depth-of-field of 650 mm at 28 GHz. After evaluating the design in a full-wave simulation, both prototypes were manufactured using a 3-D printing technique. Finally, the prototypes were measured in a planar acquisition range to evaluate the performance of the Bessel beam. Both lenses show good agreement between simulations and measurements, yielding promising results for Bessel beam generation by index-graded dielectric lenses at Ka-band.
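The quoted 650 mm depth-of-field at 28 GHz can be related to the lens geometry through the textbook axicon relations for Bessel-beam launchers (not the paper's exact design procedure). A minimal sketch, assuming a hypothetical aperture radius of 60 mm:

```python
import math

# Standard axicon relations for Bessel-beam launchers (textbook, not the
# paper's design method): a conical phase phi(rho) = -k*rho*sin(theta)
# yields a non-diffracting range z_max ~ R / tan(theta) for aperture radius R.
f = 28e9                      # design frequency, Hz (from the abstract)
c = 299_792_458.0
lam = c / f                   # free-space wavelength, ~10.7 mm
z_max = 0.650                 # target depth-of-field, m (from the abstract)
R = 0.060                     # assumed aperture radius, m (hypothetical)

theta = math.atan(R / z_max)  # required axicon (cone) half-angle
k = 2 * math.pi / lam         # free-space wavenumber

def conical_phase(rho):
    """Phase shift (radians) the lens must impose at radius rho (metres)."""
    return -k * rho * math.sin(theta)

print(f"wavelength      : {lam * 1e3:.2f} mm")
print(f"axicon angle    : {math.degrees(theta):.2f} deg")
print(f"edge phase shift: {conical_phase(R):.2f} rad")
```

With these assumed numbers, the required cone half-angle comes out to roughly 5.3 degrees; the conical phase profile is what the graded-index unit cells would have to approximate.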

    Experimental Validation of Microwave Tomography with the DBIM-TwIST Algorithm for Brain Stroke Detection and Classification

    We present an initial experimental validation of a microwave tomography (MWT) prototype for brain stroke detection and classification using the distorted Born iterative method with two-step iterative shrinkage thresholding (DBIM-TwIST) algorithm. The validation study consists of first preparing and characterizing gel phantoms which mimic the structure and the dielectric properties of a simplified brain model with a haemorrhagic or ischemic stroke target. Then, we measure the S-parameters of the phantoms in our experimental prototype and process the scattered signals from 0.5 to 2.5 GHz using the DBIM-TwIST algorithm to estimate the dielectric properties of the reconstruction domain. Our results demonstrate that we are able to detect the stroke target in scenarios where the initial guess of the inverse problem is only an approximation of the true experimental phantom. Moreover, the prototype can differentiate between haemorrhagic and ischemic strokes based on the estimation of their dielectric properties.
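The TwIST inner solver mentioned above can be illustrated on a toy sparse linear problem. This is a generic sketch of two-step iterative shrinkage/thresholding, not the paper's DBIM-TwIST implementation (there, TwIST sits inside the distorted Born iterative method, which repeatedly re-linearizes the electromagnetic inverse problem), and the tuning values `lam`, `alpha`, and `beta` are assumptions:

```python
import numpy as np

# Toy sparse recovery problem y = A x + noise, solved with a generic TwIST
# iteration. All problem sizes and tuning parameters here are hypothetical.
rng = np.random.default_rng(0)
m, n, k = 40, 100, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true + 0.01 * rng.standard_normal(m)

def soft(v, t):                              # proximal operator of the L1 norm
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

lip = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
step, lam = 1.0 / lip, 0.02                  # step size and sparsity weight (assumed)
alpha, beta = 1.5, 1.0                       # TwIST two-step weights (assumed)

x_prev = np.zeros(n)
x = soft(step * (A.T @ y), step * lam)       # plain IST step as initialization
for _ in range(300):
    ist = soft(x + step * A.T @ (y - A @ x), step * lam)   # one IST update
    # TwIST combines the previous two iterates with the IST update:
    x, x_prev = (1 - alpha) * x_prev + (alpha - beta) * x + beta * ist, x

err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(f"relative reconstruction error: {err:.3f}")
```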

    PROCESS Data Infrastructure and Data Services

    Due to energy limitations and high operational costs, it is likely that exascale computing will not be achieved by one or two datacentres but will require many more. A simple calculation shows that aggregating the computation power of the 2017 Top500 supercomputers reaches only 418 petaflops. A company like Rescale, which claims 1.4 exaflops of peak computing power, describes its infrastructure as composed of 8 million servers spread across 30 datacentres. Any proposed solution to the exascale computing challenge has to take these facts into consideration and should, by design, aim to support the use of geographically distributed and likely independent datacentres. It should also consider, whenever possible, co-allocating storage with computation, as it would take about 3 years to transfer 1 exabyte over a dedicated 100 Gb Ethernet connection. This means we have to be smart about managing data that is increasingly geographically dispersed and spread across different administrative domains. As the natural setting of the PROCESS project is to operate within the European Research Infrastructure and serve the European research communities facing exascale challenges, it is important that the PROCESS architecture and solutions are well positioned within the European computing and data management landscape, namely PRACE, EGI, and EUDAT. In this paper we propose a scalable and programmable data infrastructure that is easy to deploy and can be tuned to support various data-intensive scientific applications.
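The 3-year transfer figure is easy to verify with back-of-envelope arithmetic, assuming an ideal, fully utilized link with no protocol overhead:

```python
# Back-of-envelope check of the transfer-time claim (idealized link:
# no protocol overhead, no contention, decimal units).
exabyte_bits = 1e18 * 8          # 1 EB expressed in bits
link_bps = 100e9                 # dedicated 100 Gb Ethernet
seconds = exabyte_bits / link_bps
years = seconds / (365 * 24 * 3600)
print(f"{years:.2f} years")      # about 2.5 years, i.e. roughly the 3 quoted
```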

    Colour Constancy for Images of Non-Uniformly Lit Scenes

    Digital camera sensors are designed to record all incident light from a captured scene, but they are unable to distinguish between the colour of the light source and the true colour of objects. The resulting captured image exhibits a colour cast toward the colour of the light source. This paper presents a colour constancy algorithm for images of scenes lit by non-uniform light sources. The proposed algorithm uses a histogram-based method to determine the number of colour regions. It then applies the K-means++ algorithm to the input image, dividing the image into segments. The proposed algorithm computes the Normalized Average Absolute Difference (NAAD) for each segment and uses it as a measure of whether the segment has sufficient colour variation. Initial colour constancy adjustment factors are then calculated for each segment with sufficient colour variation. The Colour Constancy Adjustment Weighting Factors (CCAWF) for each pixel of the image are determined by fusing the CCAWFs of the segments, weighted by the normalized Euclidean distance of the pixel from the centre of each segment. Results show that the proposed method outperforms statistical techniques and that its images exhibit significantly higher subjective quality than those of the learning-based methods. In addition, the execution time of the proposed algorithm is comparable to that of statistics-based techniques and much lower than that of state-of-the-art learning-based methods.
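The abstract does not give the exact NAAD formula; under one plausible reading (mean absolute deviation from the segment's mean colour, normalized by that mean), the sufficient-variation test could be sketched as:

```python
import numpy as np

# Hypothetical sketch of the per-segment NAAD test. The definition below is
# one plausible reading of the abstract, not the paper's exact formula.
def naad(segment_pixels):
    """segment_pixels: (N, 3) float array of RGB values in one segment."""
    mu = segment_pixels.mean(axis=0)
    return np.mean(np.abs(segment_pixels - mu) / (mu + 1e-9))

rng = np.random.default_rng(1)
flat = np.full((500, 3), 120.0) + rng.normal(0, 1.0, (500, 3))   # near-uniform patch
varied = rng.uniform(0, 255, (500, 3))                           # colourful patch

print(f"flat   NAAD: {naad(flat):.4f}")    # small -> insufficient colour variation
print(f"varied NAAD: {naad(varied):.4f}")  # large -> keep for factor estimation
```

Segments whose NAAD falls below a chosen threshold would be excluded from the estimation of the adjustment factors.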

    Role of machine learning in early diagnosis of kidney diseases.

    Machine learning (ML) and deep learning (DL) approaches have been used as indispensable tools in modern artificial intelligence-based computer-aided diagnostic (AI-based CAD) systems that can provide non-invasive, early, and accurate diagnosis of a given medical condition. These AI-based CAD systems have proven themselves to be reproducible and to generalize to new unseen cases across several diseases and medical conditions in different organs (e.g., kidneys, prostate, brain, liver, lung, breast, and bladder). In this dissertation, we focus on the role of such AI-based CAD systems in the early diagnosis of two kidney diseases, namely acute rejection (AR) post kidney transplantation and renal cancer (RC). A new renal computer-assisted diagnostic (Renal-CAD) system was developed to precisely diagnose AR post kidney transplantation at an early stage. The developed Renal-CAD system performs the following main steps: (1) auto-segmentation of the renal allograft from surrounding tissues in diffusion-weighted magnetic resonance imaging (DW-MRI) and blood oxygen level-dependent MRI (BOLD-MRI); (2) extraction of image markers, namely voxel-wise apparent diffusion coefficients (ADCs), calculated from DW-MRI scans at 11 different low and high b-values and represented as cumulative distribution functions (CDFs), and transverse relaxation rate (R2*) values extracted from the segmented kidneys using BOLD-MRI scans at different echo times; (3) integration of the multimodal image markers with the associated clinical biomarkers, serum creatinine (SCr) and creatinine clearance (CrCl); and (4) diagnosis of the renal allograft status as non-rejection (NR) or AR by feeding these integrated biomarkers to a deep learning classification model built on stacked auto-encoders (SAEs).
Using a leave-one-subject-out cross-validation approach along with SAEs on a total of 30 patients with transplanted kidneys (AR = 10 and NR = 20), the Renal-CAD system demonstrated 93.3% accuracy, 90.0% sensitivity, and 95.0% specificity in differentiating AR from NR. Robustness of the Renal-CAD system was also confirmed by an area under the curve (AUC) value of 0.92. Using a stratified 10-fold cross-validation approach, the Renal-CAD system demonstrated its reproducibility and robustness with a diagnostic accuracy of 86.7%, sensitivity of 80.0%, specificity of 90.0%, and AUC of 0.88. In addition, a new renal cancer CAD (RC-CAD) system for the precise diagnosis of RC at an early stage was developed, which incorporates the following main steps: (1) estimating morphological features by applying a new parametric spherical harmonic technique; (2) extracting appearance-based features, namely first-order textural features, and second-order textural features computed from the gray-level co-occurrence matrix (GLCM); (3) estimating functional features by constructing wash-in/wash-out slopes to quantify the enhancement variations across different contrast-enhanced computed tomography (CE-CT) phases; and (4) integrating all the aforementioned features and modeling a two-stage multilayer perceptron artificial neural network (MLP-ANN) classifier to classify the renal tumor as benign or malignant and to identify the malignancy subtype. On a total of 140 RC patients (malignant = 70 patients (ccRCC = 40 and nccRCC = 30) and benign angiomyolipoma tumors = 70), the developed RC-CAD system was validated using a leave-one-subject-out cross-validation approach. The developed RC-CAD system achieved a sensitivity of 95.3% ± 2.0%, a specificity of 99.9% ± 0.4%, and a Dice similarity coefficient of 0.98 ± 0.01 in differentiating malignant from benign renal tumors, as well as an overall accuracy of 89.6% ± 5.0% in the sub-typing of RCC.
The diagnostic abilities of the developed RC-CAD system were further validated using a randomly stratified 10-fold cross-validation approach. The results obtained using the proposed MLP-ANN classification model outperformed other machine learning classifiers (e.g., support vector machines, random forests, and relational functional gradient boosting) as well as other approaches from the literature. In summary, machine and deep learning approaches have shown their potential for building AI-based CAD systems, as evidenced by the promising diagnostic performance obtained by both the Renal-CAD and RC-CAD systems. For the Renal-CAD, the integration of functional markers extracted from multimodal MRIs with clinical biomarkers using the SAE classification model improved the final diagnostic results, as evidenced by high accuracy, sensitivity, and specificity. The developed Renal-CAD demonstrated high feasibility and efficacy for early, accurate, and non-invasive identification of AR. For the RC-CAD, integrating morphological, textural, and functional features extracted from CE-CT images using an MLP-ANN classification model enhanced the final results in terms of accuracy, sensitivity, and specificity, making the proposed RC-CAD a reliable non-invasive diagnostic tool for RC. The early and accurate diagnosis of AR or RC will help physicians provide early intervention with an appropriate treatment plan to prolong the life span of the diseased kidney, increase the survival chances of the patient, and thus improve healthcare outcomes in the U.S. and worldwide.
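The voxel-wise ADC markers in step (2) of the Renal-CAD pipeline follow the standard mono-exponential diffusion model (a textbook relation; the dissertation's exact fitting procedure may differ):

```python
import numpy as np

# Standard mono-exponential diffusion model for voxel-wise ADC maps:
#   S(b) = S0 * exp(-b * ADC)  =>  ADC = ln(S0 / S(b)) / b
# All values below are hypothetical, for illustration only.
s0 = 1000.0                      # signal at b = 0 (hypothetical voxel)
adc_true = 1.8e-3                # mm^2/s, a plausible renal tissue value
b_values = np.array([50, 100, 200, 300, 500, 700, 1000], dtype=float)
signals = s0 * np.exp(-b_values * adc_true)   # noiseless simulated signals

# Per-b-value ADC estimates; in the pipeline these voxel-wise values are
# then summarized as cumulative distribution functions (CDFs).
adc_est = np.log(s0 / signals) / b_values
print("ADC estimates (mm^2/s):", np.round(adc_est, 5))
```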

    Advances of deep learning in electrical impedance tomography image reconstruction

    Electrical impedance tomography (EIT) has been widely used in biomedical research because of its advantages of real-time imaging and its non-invasive, radiation-free nature. Additionally, it can reconstruct the distribution of, or changes in, electrical properties within the sensing area. Recently, with significant advancements in the use of deep learning in intelligent medical imaging, EIT image reconstruction based on deep learning has received considerable attention. This study introduces the basic principles of EIT and summarizes the application progress of deep learning in EIT image reconstruction with regard to three aspects: single-network reconstruction, deep learning combined with traditional algorithmic reconstruction, and multi-network hybrid reconstruction. In the future, optimizing the datasets may be the main challenge in applying deep learning to EIT image reconstruction. Adopting better network structures, focusing on the joint reconstruction of EIT and traditional algorithms, and using multimodal deep learning-based EIT may be the solutions to existing problems. In general, deep learning offers a fresh approach to improving the performance of EIT image reconstruction and could be the foundation for building an intelligent integrated EIT diagnostic system in the future.

    Real-time vision-based multiple object tracking of a production process : industrial digital twin case study

    The adoption of Industry 4.0 technologies within the manufacturing and process industries is widely accepted to benefit production cycles, increase system flexibility, and give production managers more options on the production line through reconfigurable systems. A key enabler of Industry 4.0 technology is the rise of Cyber-Physical Systems (CPS) and Digital Twins (DTs). Both technologies connect the physical to the cyber world in order to generate smart manufacturing capabilities. State-of-the-art research accurately describes the frameworks, challenges, and advantages surrounding these technologies but fails to deliver the testbeds and case studies that can be used for development and validation. This research demonstrates a novel proof-of-concept Industry 4.0 production system which lays the foundations for future research in DT technologies, process optimisation, and manufacturing data analytics. Using a connected system of commercial off-the-shelf cameras to retrofit a standard programmable-logic-controlled production process, a digital simulation is updated in real time to create the DT. The system can identify and accurately track the product through the production cycle whilst updating the DT in real time. The implemented system is a lightweight, low-cost, customisable, and scalable design solution which provides a testbed for practical Industry 4.0 research for both academic and industrial purposes.
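The multi-object tracking step can be illustrated with a minimal nearest-centroid tracker. This is a simplified, hypothetical stand-in, since the system's actual detection and matching pipeline is not described in the abstract:

```python
import math

# Minimal nearest-centroid multi-object tracker (hypothetical sketch):
# each frame's detected centroids are greedily matched to the closest
# existing track; unmatched detections start new tracks.
class CentroidTracker:
    def __init__(self, max_dist=50.0):
        self.next_id, self.objects, self.max_dist = 0, {}, max_dist

    def update(self, detections):
        """detections: list of (x, y) centroids from the current frame."""
        assigned = {}
        for det in detections:
            best = min(self.objects.items(),
                       key=lambda kv: math.dist(kv[1], det),
                       default=None)
            if best and math.dist(best[1], det) < self.max_dist \
                    and best[0] not in assigned:
                assigned[best[0]] = det               # continue existing track
            else:
                assigned[self.next_id] = det          # start a new track
                self.next_id += 1
        self.objects = assigned
        return assigned

tracker = CentroidTracker()
print(tracker.update([(10, 10), (200, 50)]))   # two new tracks: ids 0 and 1
print(tracker.update([(14, 12), (205, 55)]))   # the same ids follow the objects
```

A production implementation would add track time-outs for occluded products and a globally optimal assignment, but the core identify-then-associate loop is the same.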

    Machine learning approaches for early prediction of hypertension.

    Get PDF
    Hypertension afflicts one in every three adults and was a leading cause of mortality in 516,955 patients in the USA. The chronic elevation of cerebral perfusion pressure (CPP) changes the cerebrovasculature of the brain and disrupts its vasoregulation mechanisms. Reported correlations between changes in smaller cerebrovascular vessels and hypertension may be used to diagnose hypertension in its early stages, 10-15 years before the appearance of symptoms such as cognitive impairment and memory loss. Specifically, recent studies hypothesized that changes in the cerebrovasculature and CPP precede the systemic elevation of blood pressure. Currently, sphygmomanometers are used to take repeated brachial artery pressure measurements to diagnose hypertension after its onset. However, this method cannot detect the cerebrovascular alterations that may lead to adverse events prior to the onset of hypertension. The early detection and quantification of these cerebral vascular structural changes could help predict which patients are at high risk of developing hypertension as well as other cerebral adverse events. This may enable early medical intervention before the onset of hypertension, potentially mitigating vascular-initiated end-organ damage. The goal of this dissertation is to develop a novel, efficient, noninvasive computer-aided diagnosis (CAD) system for the early prediction of hypertension. The developed CAD system analyzes magnetic resonance angiography (MRA) data of human brains gathered over years to detect and track cerebral vascular alterations correlated with the development of hypertension. This CAD system can make decisions based on available data to help physicians predict potential hypertensive patients before the onset of the disease.