
    Coronary Artery Centerline Extraction in Cardiac CT Angiography Using a CNN-Based Orientation Classifier

    Coronary artery centerline extraction in cardiac CT angiography (CCTA) images is a prerequisite for evaluation of stenoses and atherosclerotic plaque. We propose an algorithm that extracts coronary artery centerlines in CCTA using a convolutional neural network (CNN). A 3D dilated CNN is trained to predict the most likely direction and radius of an artery at any given point in a CCTA image based on a local image patch. Starting from a single seed point placed manually or automatically anywhere in a coronary artery, a tracker follows the vessel centerline in two directions using the predictions of the CNN. Tracking is terminated when no direction can be identified with high certainty. The CNN was trained using 32 manually annotated centerlines in a training set consisting of 8 CCTA images provided in the MICCAI 2008 Coronary Artery Tracking Challenge (CAT08). Evaluation using 24 test images of the CAT08 challenge showed that extracted centerlines had an average overlap of 93.7% with 96 manually annotated reference centerlines. Extracted centerline points were highly accurate, with an average distance of 0.21 mm to reference centerline points. In a second test set consisting of 50 CCTA scans, 5,448 markers in the coronary arteries were used as seed points to extract single centerlines. This showed strong correspondence between extracted centerlines and manually placed markers. In a third test set containing 36 CCTA scans, fully automatic seeding and centerline extraction led to extraction of on average 92% of clinically relevant coronary artery segments. The proposed method is able to accurately and efficiently determine the direction and radius of coronary arteries. The method can be trained with limited training data, and once trained allows fast automatic or interactive extraction of coronary artery trees from CCTA images. Comment: Accepted in Medical Image Analysis.
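
    A minimal sketch of the tracking loop described above, assuming a hypothetical predict(volume, point) callable that stands in for the paper's 3D dilated CNN and returns a (direction, radius, confidence) triple for the local patch around a point; the step size and confidence threshold are illustrative choices, not the paper's settings.

        import numpy as np

        def track_centerline(volume, seed, predict, step=0.5, min_conf=0.9, max_steps=2000):
            # Follow a vessel centerline in both directions from a seed point,
            # stopping when no direction is identified with high certainty.
            centerline = [np.asarray(seed, dtype=float)]
            for sign in (+1.0, -1.0):                    # trace both directions
                point = np.asarray(seed, dtype=float)
                prev_dir = None
                for _ in range(max_steps):
                    direction, radius, confidence = predict(volume, point)
                    if confidence < min_conf:            # terminate on low certainty
                        break
                    direction = sign * np.asarray(direction, dtype=float)
                    # Avoid doubling back on the path already traced.
                    if prev_dir is not None and np.dot(direction, prev_dir) < 0:
                        direction = -direction
                    point = point + step * direction / np.linalg.norm(direction)
                    prev_dir = direction
                    if sign > 0:                         # keep points ordered
                        centerline.append(point.copy())
                    else:
                        centerline.insert(0, point.copy())
            return np.array(centerline)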

    Three-dimensional reconstruction of a masonry building through electrical and seismic tomography validated by biological analyses

    In this paper, we present an integrated approach for assessing the condition of an ancient Roman building affected by rising damp and cracking phenomena. The combination of high-resolution geophysical methods, such as seismic and electrical tomography, with biological information allowed a more detailed evaluation of the state of conservation of the masonry building. A preliminary three-dimensional electrical survey was conducted to detect the existing building foundations and to determine the variation of the resistivity in the ground. Then, electrical and seismic tomography investigations were carried out on an inner wall of opus caementicium subjected to rising damp effects and cracks. This approach was adopted to obtain a high-resolution image of the wall, which made it possible to identify the inner mortar and the outer brick components from resistivity and velocity contrasts. Furthermore, the geophysical results revealed evidence of wall fractures (indicated by low velocity and high resistivity values) and a significant volume where rising damp was taking place (resulting in a low resistivity zone). Biological analyses validated the geophysical model: the biological proliferation occurred up to a height of 0.75 m, where the interface between high and low resistivity values was recovered. This approach can be employed to reconstruct a three-dimensional model of masonry structures in order to plan recovery actions.
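
    A toy sketch of the interpretation rule stated above (fractures: low velocity and high resistivity; rising damp: low resistivity) applied to co-located tomography grids. The threshold values are invented for illustration and are not taken from the study.

        import numpy as np

        def classify_wall(resistivity, velocity, res_lo=50.0, res_hi=500.0, vel_lo=1500.0):
            # resistivity in ohm*m, velocity in m/s; thresholds are hypothetical.
            labels = np.full(resistivity.shape, "intact", dtype=object)
            labels[(velocity < vel_lo) & (resistivity > res_hi)] = "fracture"
            labels[resistivity < res_lo] = "damp"
            return labels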

    A comparison of aircraft-based surface-layer observations over Denmark Strait and the Irminger Sea with meteorological analyses and QuikSCAT winds

    A compilation of aircraft observations of the atmospheric surface layer is compared with several meteorological analyses and QuikSCAT wind products. The observations were taken during the Greenland Flow Distortion Experiment, in February and March 2007, during cold-air outbreak conditions and moderate to high wind speeds. About 150 data points spread over six days are used, with each data point derived from a 2-min run (equivalent to a 12 km spatial average). The observations were taken 30–50 m above the sea surface and are adjusted to standard heights. Surface-layer temperature, humidity and wind, as well as sea-surface temperature (SST) and surface turbulent fluxes, are compared against co-located data from the ECMWF operational analyses, NCEP Global Reanalyses, NCEP North American Regional Reanalyses (NARR), Met Office North Atlantic European (NAE) operational analyses, two MM5 hindcasts, and two QuikSCAT products. In general, the limited-area models are better at capturing the mesoscale high wind speed features and their associated structure, although the models often underestimate the highest wind speeds and gradients. The most significant discrepancies are: a poor simulation of relative humidity by the NCEP global and MM5 models, a cold bias in 2 m air temperature near the sea-ice edge in the NAE model, and an overestimation of wind speeds above 20 m s−1 in the QuikSCAT wind products. In addition, the NCEP global, NARR and MM5 models all have significant discrepancies associated with the parametrisation of surface turbulent heat fluxes. A high-resolution prescription of the SST field is crucial in this region, although such fields were not generally in use at this time.
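
    Two of the steps above lend themselves to a short sketch: adjusting winds observed at 30–50 m to a standard level, and summarising model-observation agreement with bias, RMSE and correlation. The neutral logarithmic profile and roughness length below are simplifying assumptions (the study's own adjustment procedure is not detailed in the abstract), so this is illustrative only.

        import numpy as np

        def adjust_wind_to_10m(u_obs, z_obs, z0=2e-4):
            # Neutral log-profile adjustment from observation height z_obs (m)
            # to the standard 10 m level; z0 is an assumed sea-surface roughness.
            return u_obs * np.log(10.0 / z0) / np.log(z_obs / z0)

        def comparison_stats(obs, model):
            # Bias, RMSE and Pearson correlation for co-located obs/model pairs.
            obs, model = np.asarray(obs, float), np.asarray(model, float)
            diff = model - obs
            return diff.mean(), np.sqrt((diff ** 2).mean()), np.corrcoef(obs, model)[0, 1]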

    Learning Moore Machines from Input-Output Traces

    The problem of learning automata from example traces (but no equivalence or membership queries) is fundamental in automata learning theory and practice. In this paper we study this problem for finite state machines with inputs and outputs, and in particular for Moore machines. We develop three algorithms for solving this problem: (1) the PTAP algorithm, which transforms a set of input-output traces into an incomplete Moore machine and then completes the machine with self-loops; (2) the PRPNI algorithm, which uses the well-known RPNI algorithm for automata learning to learn a product of automata encoding a Moore machine; and (3) the MooreMI algorithm, which directly learns a Moore machine using PTAP extended with state merging. We prove that MooreMI has the fundamental identification in the limit property. We also compare the algorithms experimentally in terms of the size of the learned machine and several notions of accuracy, introduced in this paper. Finally, we compare with OSTIA, an algorithm that learns a more general class of transducers, and find that OSTIA generally does not learn a Moore machine, even when fed with a characteristic sample.
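
    A compact sketch of the PTAP construction described above: build a prefix-tree Moore machine from input-output traces, then complete missing transitions with self-loops. The dictionary encoding and the assumption that traces are consistent (a single output per state) are ours, not the paper's.

        def ptap(traces):
            # Each trace is (inputs, outputs) with len(outputs) == len(inputs) + 1;
            # outputs[0] is the output of the initial state.
            transitions = {0: {}}   # state -> {input symbol -> successor state}
            outputs = {}            # state -> output symbol
            alphabet = set()
            for inputs, outs in traces:
                state = 0
                outputs.setdefault(0, outs[0])
                for sym, out in zip(inputs, outs[1:]):
                    alphabet.add(sym)
                    if sym not in transitions[state]:
                        new_state = len(transitions)
                        transitions[state][sym] = new_state
                        transitions[new_state] = {}
                    state = transitions[state][sym]
                    outputs.setdefault(state, out)   # assumes consistent traces
            for state, edges in transitions.items():
                for sym in alphabet:
                    edges.setdefault(sym, state)     # complete with self-loops
            return transitions, outputs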

    Experiments using semantics for learning language comprehension and production

    Several questions in natural language learning may be addressed by studying formal language learning models. In this work we hope to contribute to a deeper understanding of the role of semantics in language acquisition. We propose a simple formal model of meaning and denotation using finite state transducers, and an algorithm that learns a meaning function from examples consisting of a situation and an utterance denoting something in the situation. We describe the results of testing this algorithm in a domain of geometric shapes and their properties and relations in several natural languages: Arabic, English, Greek, Hebrew, Hindi, Mandarin, Russian, Spanish, and Turkish. In addition, we explore how a learner who has learned to comprehend utterances might go about learning to produce them, and present experimental results for this task. One concrete goal of our formal model is to be able to give an account of interactions in which an adult provides a meaning-preserving and grammatically correct expansion of a child's incomplete utterance.
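
    The learning setting pairs a situation with an utterance denoting something in it. The paper's model is built on finite state transducers; the co-occurrence counter below is only a simplified, hypothetical illustration of how word meanings can be narrowed down across situations.

        from collections import Counter, defaultdict

        def learn_meanings(examples):
            # examples: iterable of (situation, utterance) pairs, where a
            # situation is a set of predicates (e.g. {"red", "circle"}) and an
            # utterance is a list of words.
            counts = defaultdict(Counter)
            for situation, utterance in examples:
                for word in utterance:
                    counts[word].update(situation)
            # Map each word to the predicate it co-occurs with most often.
            return {w: c.most_common(1)[0][0] for w, c in counts.items()}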

    Artificial intelligence and automation in valvular heart diseases

    Artificial intelligence (AI) is gradually changing every aspect of social life, and healthcare is no exception. Clinical procedures that previously could only be handled by human experts can now be carried out by machines in a more accurate and efficient way. The coming era of big data and the advent of supercomputers provide great opportunities for the development of AI technology to enhance diagnosis and clinical decision-making. This review provides an introduction to AI and highlights its applications in the clinical workflow of diagnosing and treating valvular heart diseases (VHDs). More specifically, this review first introduces some key concepts and subareas in AI. Secondly, it discusses the application of AI in heart sound auscultation and medical image analysis for assistance in diagnosing VHDs. Thirdly, it introduces the use of AI algorithms to identify risk factors and predict mortality after cardiac surgery. This review also describes the state-of-the-art autonomous surgical robots and their roles in cardiac surgery and intervention.

    How Efficient Is Model-to-Model Data Assimilation at Mitigating Atmospheric Forcing Errors in a Regional Ocean Model?

    This paper examines the efficiency of a recently developed Nesting with Data Assimilation (NDA) method at mitigating errors in heat and momentum fluxes at the ocean surface coming from external forcing. The analysis uses a set of 19 numerical simulations, all using the same ocean model and exactly the same NDA process. One simulation (the reference) uses the original atmospheric data, and the other eighteen simulations are performed with intentionally introduced perturbations in the atmospheric forcing. The NDA algorithm uses model-to-model data assimilation instead of assimilating observations directly. Therefore, it requires a good-quality, albeit coarser-resolution, data-assimilating parent model. All experiments are carried out in the South East Arabian Sea. The variables under study are sea surface temperature, kinetic energy, relative vorticity and enstrophy. The results show significant improvement in bias, root-mean-square error, and correlation coefficients between the reference and the perturbed models when they are run in the data-assimilating configurations. Residual post-assimilation uncertainties are similar to or lower than the uncertainties of satellite-based observations. Varying the length of the DA cycle between 1 and 8 days has little effect on the accuracy of the results.
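
    A minimal sketch of the model-to-model idea: at each assimilation cycle the regional (child) field is relaxed toward the coarser data-assimilating parent field. The plain linear blend and the relaxation weight are illustrative choices, not the paper's NDA scheme.

        import numpy as np

        def nudge_to_parent(child_field, parent_field, weight=0.1):
            # One assimilation step: blend the child model field toward the
            # parent analysis; weight=0 leaves the child untouched, 1 replaces it.
            child = np.asarray(child_field, dtype=float)
            parent = np.asarray(parent_field, dtype=float)
            return (1.0 - weight) * child + weight * parent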

    An operational analysis of Lake Surface Water Temperature

    Operational analyses of Lake Surface Water Temperature (LSWT) have many potential uses, including the improvement of numerical weather prediction (NWP) models on regional scales. In November 2011, LSWT was included in the Met Office Operational Sea Surface Temperature and Ice Analysis (OSTIA) product for 248 lakes globally. The OSTIA analysis procedure, which has been optimised for oceans, has also been used for the lakes in this first version of the product. Infra-red satellite observations of lakes and in situ measurements are assimilated. The satellite observations are based on retrievals optimised for Sea Surface Temperature (SST) which, although they may introduce inaccuracies into the LSWT data, are currently the only near-real-time information available. The LSWT analysis has a global root mean square difference of 1.31 K and a mean difference of 0.65 K (including a cool skin effect of 0.2 K) compared to independent data from the ESA ARC-Lake project for a 3-month period (June to August 2009). It is demonstrated that the OSTIA LSWT is an improvement over the use of climatology in capturing the day-to-day variation in global lake surface temperatures.
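
    A sketch of the verification statistics quoted above, assuming the bulk analysis is converted to a skin temperature by subtracting a constant cool-skin effect before differencing against the independent data; how the 0.2 K effect enters the actual comparison is not specified in the abstract, so both the offset handling and its placement here are assumptions.

        import numpy as np

        def lswt_verification(analysis_bulk, reference_skin, skin_offset=0.2):
            # RMS difference and mean difference (K) of an LSWT analysis against
            # independent skin-temperature data; skin_offset is an assumed
            # constant cool-skin correction.
            diff = (np.asarray(analysis_bulk, float) - skin_offset) - np.asarray(reference_skin, float)
            return np.sqrt((diff ** 2).mean()), diff.mean()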

    Temporal evolution of temperatures in the Red Sea and the Gulf of Aden based on in situ observations (1958–2017)

    The Red Sea holds one of the most diverse marine ecosystems in the world, although fragile and vulnerable to ocean warming. Several studies have analysed the spatio-temporal evolution of temperature in the Red Sea using satellite data, thus focusing only on the surface layer and covering the last ∼30 years. To better understand the long-term variability and trends of temperature in the whole water column, we produce a 3-D gridded temperature product (TEMPERSEA) for the period 1958–2017, based on a large number of in situ observations, covering the Red Sea and the Gulf of Aden. After a specific quality control, a mapping algorithm based on optimal interpolation has been applied to homogenize the data. Also, an estimate of the uncertainties of the product has been generated. The calibration of the algorithm and the uncertainty computation have been done through sensitivity experiments based on synthetic data from a realistic numerical simulation. TEMPERSEA has been compared to satellite observations of sea surface temperature for the period 1981–2017, showing good agreement, especially in those periods when a reasonable number of observations were available. Also, very good agreement has been found between air temperatures and reconstructed sea temperatures in the upper 100 m for the whole period 1958–2017, enhancing confidence in the quality of the product. The product has been used to characterize the spatio-temporal variability of the temperature field in the Red Sea and the Gulf of Aden at different timescales (seasonal, interannual and multidecadal). Clear differences have been found between the two regions, suggesting that the Red Sea variability is mainly driven by air–sea interactions, while in the Gulf of Aden the lateral advection of water plays a relevant role. Regarding long-term evolution, our results show only positive trends above 40 m depth, with maximum trends of 0.045 ± 0.016 °C decade−1 at 15 m, and the largest negative trends at 125 m (−0.072 ± 0.011 °C decade−1). Multidecadal variations have a strong impact on the trend computation, and restricting it to the last 30–40 years of data can bias the trend estimates high.
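
    A minimal sketch of the per-depth trend estimates reported above: a least-squares linear fit to a temperature series at one depth, expressed in °C per decade. The fit below ignores the uncertainty estimation and the multidecadal sensitivity discussed in the paper.

        import numpy as np

        def decadal_trend(years, temps):
            # Least-squares slope of temperature (degC) against time (years),
            # scaled to degC per decade.
            slope_per_year = np.polyfit(np.asarray(years, float), np.asarray(temps, float), 1)[0]
            return 10.0 * slope_per_year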