
    On Acceleration of Evolutionary Algorithms Taking Advantage of A Posteriori Error Analysis

    A variety of important engineering and scientific tasks can be formulated as non-linear, constrained optimization problems. Their solution often demands high computational power, which can be obtained through appropriate hardware, software, or algorithmic improvements. The Evolutionary Algorithm (EA) approach to such problems is considered here. EAs are relatively slow methods, but their main advantage appears in the case of non-convex problems. Particularly high efficiency is demanded when solving large optimization problems. Engineering examples of such problems include the analysis of residual stresses in railroad rails and vehicle wheels, as well as the Physically Based Approximation (PBA) approach to smoothing experimental and/or numerical data. With such analyses in mind, our current research focuses on a significant increase in EA efficiency. Acceleration of an EA is understood here, first of all, as decreasing the total computational time required to solve an optimization problem. Such acceleration may be obtained in various ways, and it offers at least two gains: (i) saving computational time, and (ii) opening the possibility of solving larger optimization problems than would be feasible with a standard EA. In our recent research we preliminarily proposed several new speed-up techniques based on simple concepts. In this paper we mainly develop acceleration techniques based on averaging simultaneous solutions, supported by a non-standard application of parallel computation and by a posteriori solution error analysis. Knowledge of the solution error is used to accelerate the EA by appropriately modifying standard evolutionary operators such as selection, crossover, and mutation. The efficiency of the proposed techniques is evaluated on several benchmark tests, which indicate a significant speed-up of the optimization process. Further concepts and improvements are currently being developed and tested.
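    The averaging idea above can be illustrated with a toy sketch: several independent EA runs (standing in for the parallel runs mentioned in the abstract) are combined by averaging their best individuals. Everything here, including the sphere benchmark, population size, and mutation scale, is an illustrative assumption, not the authors' implementation.

```python
import random

def sphere(x):
    """Benchmark objective: sum of squares (minimum 0 at the origin)."""
    return sum(v * v for v in x)

def evolve(pop_size=20, dim=3, gens=50, seed=0):
    """Tiny EA: truncation selection plus Gaussian mutation (elitist)."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=sphere)
        parents = pop[: pop_size // 2]                   # keep the best half
        children = [[v + rng.gauss(0, 0.3) for v in p]   # mutate each parent
                    for p in parents]
        pop = parents + children
    return min(pop, key=sphere)

def averaged_solution(n_runs=4):
    """Average the best individuals of independent runs
    (a stand-in for the simultaneous-solutions averaging idea)."""
    bests = [evolve(seed=s) for s in range(n_runs)]
    dim = len(bests[0])
    return [sum(b[i] for b in bests) / n_runs for i in range(dim)]

avg = averaged_solution()
print(sphere(avg))  # typically close to 0
```

    Averaging several cheap, short runs can beat one long run on smooth landscapes, since the independent errors partially cancel; this is one plausible reading of the averaging technique described above.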

    Carotid arteries segmentation in CT images with use of a right generalized cylinder model

    The arterial lumen is modeled by a continuous right generalized cylinder with piecewise-constant parameters. The method is based on identifying the parameters of each piece from a series of contours extracted along an approximate axis of the artery. This curve is defined by a minimal path between the artery end-points. The contours are extracted using the Fast Marching algorithm. The identification of the axial parameters is based on a geometrical analogy with helical curves, while the identification of the surface parameters uses the Fourier series decomposition of the contours. The parameters thus identified are used as observations in a Kalman optimal estimation scheme that enforces spatial consistency from one piece to the next. The method was evaluated on 46 datasets from the MICCAI 3D Segmentation in the Clinic Grand Challenge: Carotid Bifurcation Lumen Segmentation and Stenosis Grading (http://cls2009.bigr.nl).
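    As a rough illustration of the estimation step, a scalar Kalman filter with a random-walk model can smooth a sequence of noisy piecewise parameters along the axis. The noise variances below are illustrative assumptions, not values from the paper.

```python
def kalman_smooth(observations, q=1e-3, r=1e-1):
    """Minimal scalar Kalman filter: each cylinder-piece parameter
    observation is treated as a noisy measurement of a slowly varying
    state (random-walk model). q: process noise variance,
    r: measurement noise variance (both illustrative)."""
    x, p = observations[0], 1.0       # initial state and covariance
    out = []
    for z in observations:
        p = p + q                     # predict (random-walk model)
        k = p / (p + r)               # Kalman gain
        x = x + k * (z - x)           # update with measurement z
        p = (1 - k) * p
        out.append(x)
    return out

# A radius-like parameter with one outlier piece; the filter damps it:
smoothed = kalman_smooth([2.0, 2.1, 1.9, 2.05, 3.5, 2.0])
```

    The outlier observation (3.5) is pulled back toward its neighbors, which is the kind of piece-to-piece spatial consistency the estimation scheme is meant to enforce.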

    Progressive Attenuation of the Longitudinal Kinetics in the Common Carotid Artery: Preliminary in Vivo Assessment

    Longitudinal kinetics (LOKI) of the arterial wall consists of the shearing motion of the intima-media complex over the adventitia layer, in the direction parallel to the blood flow, during the cardiac cycle. The aim of this study was to investigate the local variability of LOKI amplitude along the length of the vessel. Using a previously validated motion-estimation framework, 35 in vivo longitudinal B-mode ultrasound cine loops of healthy common carotid arteries were analyzed. Results indicated that LOKI amplitude is progressively attenuated along the length of the artery: it is larger in regions located on the proximal side of the image (i.e., toward the heart) and smaller in regions located on the distal side of the image (i.e., toward the head), with an average attenuation coefficient of −2.5 ± 2.0 %/mm. Reported for the first time in this study, this phenomenon is likely to be of great importance in improving the understanding of atherosclerosis mechanisms, and has the potential to be a novel index of arterial stiffness.
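    Such an attenuation coefficient can be illustrated as the least-squares slope of LOKI amplitude versus longitudinal position, expressed as a percentage of the mean amplitude per mm. The data below are hypothetical, and the paper's exact estimator may differ.

```python
def attenuation_coefficient(positions_mm, amplitudes_mm):
    """Least-squares slope of amplitude vs. position, expressed in
    percent of the mean amplitude per mm (illustrative estimator)."""
    n = len(positions_mm)
    mx = sum(positions_mm) / n
    my = sum(amplitudes_mm) / n
    sxy = sum((x - mx) * (y - my)
              for x, y in zip(positions_mm, amplitudes_mm))
    sxx = sum((x - mx) ** 2 for x in positions_mm)
    slope = sxy / sxx              # mm of motion per mm of position
    return 100.0 * slope / my      # %/mm relative to the mean amplitude

# Amplitude decreasing from the proximal (x = 0) to the distal side:
coef = attenuation_coefficient([0, 10, 20, 30], [0.50, 0.45, 0.40, 0.35])
print(round(coef, 2))  # -1.18
```

    A negative coefficient, as here, corresponds to the proximal-to-distal attenuation reported in the study.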

    Reliability of the nitrogen washin-washout technique to assess end-expiratory lung volume at variable PEEP and tidal volumes

    Background: End-expiratory lung volume measurement by the nitrogen washin-washout technique (EELVWI-WO) may help titrate positive end-expiratory pressure (PEEP) during acute respiratory distress syndrome (ARDS). Validation of this technique has previously been performed against computed tomography (EELVCT), but at mild PEEP levels and relatively low fractions of inspired oxygen (FiO2), which may have insufficiently challenged its validity. The aims of this study were (1) to evaluate the reliability of EELVWI-WO measurements at different PEEP and tidal volumes (VT) during experimental ARDS, and (2) to evaluate the trending ability of EELVWI-WO to detect EELV changes over time.
    Methods: ARDS was induced in 14 piglets by saline lavage. Optimal PEEP was selected during a decremental PEEP trial, based on best compliance, best EELVWI-WO, or a PEEP-FiO2 table. Eight VT (4 to 20 mL/kg) were finally applied at optimal PEEP. EELVWI-WO and EELVCT were determined after ARDS onset, at variable PEEP and VT.
    Results: EELVWI-WO underestimated EELVCT with a non-constant linear bias, which decreased with increasing EELV. Limits of agreement for the bias were ±398 mL. The bias between methods was greater at high PEEP, and increased further when high PEEP was combined with low VT. The concordance rate of EELV changes between consecutive measurements was fair (79%). Diagnostic accuracy was good for the detection of absolute EELV changes above 200 mL (AUC = 0.79).
    Conclusions: The reliability of the WI-WO technique is critically dependent on ventilatory settings, but sufficient to accurately detect EELV changes greater than 200 mL.
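    The bias and limits of agreement reported above are standard Bland-Altman quantities for comparing two measurement methods. A minimal sketch with made-up EELV values (in mL) follows; the numbers are not from the study.

```python
def bland_altman(method_a, method_b):
    """Bias and 95% limits of agreement between paired measurements
    from two methods (e.g. EELV by washin-washout vs. CT)."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    n = len(diffs)
    bias = sum(diffs) / n
    var = sum((d - bias) ** 2 for d in diffs) / (n - 1)  # sample variance
    sd = var ** 0.5
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired EELV measurements (mL):
bias, (low, high) = bland_altman([510, 620, 480, 700],
                                 [530, 600, 510, 720])
```

    In the study the limits of agreement were ±398 mL around a non-constant bias, which is why the authors additionally assessed trending ability rather than relying on agreement alone.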

    Vessel Segmentation on Computed Tomography Angiography

    This short paper describes our contribution to the research aimed at model-based vessel segmentation on CTA. Although each partner was involved in one main subject among those that follow, the contribution is a joint effort of all the partners, resulting from regular visits in France and Israel, as well as between partners in each country. The French hospital partner in Lyon provided a large set of CTA studies, including sets with two studies performed on each patient and about 20 studies suitable for work on other aspects of cardiac vessel segmentation.

    Adaptation of the MARACAS algorithm for carotid artery segmentation and stenosis quantification in CT images

    This article describes the adaptations made to the MARACAS algorithm to segment and quantify vascular structures in CT images of the carotid artery. The MARACAS algorithm, based on an elastic model and on an analysis of the eigenvalues and eigenvectors of the inertia matrix, was initially designed to segment a single artery in MRA images. The modifications mainly address the specificities of CT images, as well as the presence of bifurcations. The algorithms implemented in this new version fall into two levels. (1) Low-level processing (filtering of noise and directional artifacts, pre-segmentation, and enhancement) intended to improve image quality and pre-segment the image; these techniques are based on a priori information about noise, artifacts, and the typical gray-level ranges of the lumen, the background, and calcifications. (2) High-level processing to extract the artery centerline, segment the lumen, and quantify the stenosis; at this level, a priori knowledge about the shape and anatomy of vascular structures is applied. The method was evaluated on the 31 images supplied in the "Carotid Lumen Segmentation and Stenosis Grading Grand Challenge" 2009. The segmentation results yielded a mean Dice similarity coefficient of 80.4% with respect to the reference segmentation, and the mean stenosis quantification error was 14.4%.
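    The Dice similarity coefficient used to score the segmentation above can be sketched for binary masks as follows (toy masks, not challenge data):

```python
def dice(seg_a, seg_b):
    """Dice similarity coefficient between two flat binary masks:
    2 * |A ∩ B| / (|A| + |B|)."""
    inter = sum(1 for a, b in zip(seg_a, seg_b) if a and b)
    return 2.0 * inter / (sum(seg_a) + sum(seg_b))

print(dice([1, 1, 1, 0, 0], [0, 1, 1, 1, 0]))  # 2*2/(3+3) ≈ 0.667
```

    A value of 1.0 means perfect overlap with the reference segmentation; the 80.4% reported above therefore indicates substantial but imperfect agreement.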

    Comparing algorithms for automated vessel segmentation in computed tomography scans of the lung: the VESSEL12 study

    The VESSEL12 (VESsel SEgmentation in the Lung) challenge objectively compares the performance of different algorithms to identify vessels in thoracic computed tomography (CT) scans. Vessel segmentation is fundamental in computer-aided processing of data generated by 3D imaging modalities. As manual vessel segmentation is prohibitively time consuming, any real-world application requires some form of automation. Several approaches exist for automated vessel segmentation, but judging their relative merits is difficult due to a lack of standardized evaluation. We present an annotated reference dataset containing 20 CT scans and propose nine categories to perform a comprehensive evaluation of vessel segmentation algorithms from both academia and industry. Twenty algorithms participated in the VESSEL12 challenge, held at the International Symposium on Biomedical Imaging (ISBI) 2012. All results have been published at the VESSEL12 website http://vessel12.grand-challenge.org. The challenge remains ongoing and open to new participants. Our three contributions are: (1) an annotated reference dataset available online for the evaluation of new algorithms; (2) a quantitative scoring system for objective comparison of algorithms; and (3) a performance analysis of the strengths and weaknesses of the various vessel segmentation methods in the presence of various lung diseases.
    Rudyanto, R. D.; Kerkstra, S.; van Rikxoort, E. M.; Fetita, C.; Brillet, P.; Lefevre, C.; Xue, W.... (2014). Comparing algorithms for automated vessel segmentation in computed tomography scans of the lung: the VESSEL12 study. Medical Image Analysis, 18(7), 1217-1232. doi:10.1016/j.media.2014.07.003

    Accounting for discontinuities in motion estimation: a review


    Advances in assessing arterial-wall kinetics from ultrasound-image sequences

    Invited conference

    Detection of moving objects in natural scenes

    We address the problem of detecting moving objects in images captured by a static camera. No restriction is placed on the speed of the object under study; however, any illumination changes are assumed to be gradual. No prior knowledge of the scene is available, nor is an image of the scene taken in the absence of the moving object. Consequently, image differencing, which eliminates the stationary background, must be combined with another operation to determine the current position of the moving object. Several relevant methods are reviewed. Accumulative algorithms make it possible to reconstruct a reference image representing only the stationary objects, and hence to locate the moving object by the absolute difference between the current image and the reference image; however, the result is only available after analyzing many frames. Edge coincidence provides a result from the second frame onward, but disoccluded background is interpreted as "moving", even if it was visible at the beginning of the image sequence. The proposed methods aim to combine the advantages of both categories and to improve the detection of coincident edges. Thanks to an original operation, coincidence is computed on gradients, which makes it possible to locate the moving object from the second image onward. Static edges visible in at least two consecutive frames are accumulated to form a historical image of the background, thereby avoiding their misinterpretation during disocclusions. The techniques developed for non-binary edges are extrapolated to binary edges detected in real time. Possible applications are briefly discussed.
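    The combination of an accumulated reference image and frame differencing described above can be sketched on 1-D "images" as follows; the learning rate and threshold are illustrative assumptions, not values from the thesis.

```python
def update_background(background, frame, alpha=0.05):
    """Running-average background model: pixels are accumulated over
    time so that the stationary scene emerges (alpha is an
    illustrative learning rate)."""
    return [(1 - alpha) * b + alpha * f
            for b, f in zip(background, frame)]

def moving_mask(background, frame, threshold=20):
    """Absolute difference against the reference image flags pixels
    belonging to the moving object (illustrative threshold)."""
    return [abs(f - b) > threshold for b, f in zip(background, frame)]

bg = [100.0] * 4                       # flat background, 4 pixels
frame = [100, 100, 200, 100]           # bright object covers pixel 2
mask = moving_mask(bg, frame)
print(mask)  # [False, False, True, False]
bg = update_background(bg, frame)      # object only slowly "burns in"
```

    As in the accumulative methods reviewed above, a small learning rate keeps a briefly present object from contaminating the reference image, at the cost of needing many frames before the background stabilizes.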