
    Individual and group dynamic behaviour patterns in bound spaces

    The behaviour analysis of individual and group dynamics in closed spaces is a subject of extensive research in both academia and industry. However, despite recent technological advancements, the problem of implementing the existing methods for visual behaviour data analysis in production systems remains difficult, and applications are available only in special cases where resourcing is not a problem. Most approaches concentrate on direct extraction and classification of visual features from the video footage to recognise the dynamic behaviour directly from the source. Such an approach allows recognising the elementary actions of moving objects directly, which is a difficult task on its own. The major factor that limits the performance of methods for video analytics is the necessity to combine processing of an enormous volume of video data with complex analysis of this data using computationally resource-demanding analytical algorithms. This is not feasible for the many applications that must work in real time. In this research, an alternative simulation-based approach to behaviour analysis has been adopted. It can potentially reduce the requirements for extracting information from real video footage for the purpose of analysing the dynamic behaviour. This is achieved by combining only limited data extracted from the original video footage with symbolic data about the events registered on the scene, generated by a 3D simulation synchronized with the original footage. Additionally, by incorporating some physical laws and the logic of dynamic behaviour directly in the 3D model of the visual scene, the framework allows the behavioural patterns to be captured using simple syntactic pattern recognition methods.
Extensive experiments with the prototype implementation demonstrate convincingly that the 3D simulation generates sufficiently rich data to allow analysing the dynamic behaviour in real time with sufficient adequacy, without the need for precise physical data, using only limited data about the objects on the scene, their locations and their dynamic characteristics. This research has wide applicability in areas where video analytics is necessary, ranging from public safety and video surveillance to marketing research, computer games and animation. Its limitations are linked to the dependence on some preliminary processing of the video footage, which is nevertheless less detailed and less computationally demanding than methods that use the video frames of the original footage directly.
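As a toy illustration of the syntactic pattern recognition idea, behaviour can be matched against symbol strings built from scene events. The event alphabet and the "loitering" pattern below are hypothetical, chosen only to show the mechanism, and are not the alphabet used in this research.

```python
import re

# Hypothetical event alphabet emitted by the synchronized 3D simulation:
# E = enter zone, S = stand still, M = move, X = exit zone.
def detect_loitering(events, min_still=3):
    """Flag a toy 'loitering' pattern: an entry followed by a run of at
    least `min_still` consecutive still events before any exit."""
    symbols = "".join(events)
    pattern = re.compile(r"E[MS]*S{%d,}" % min_still)
    return bool(pattern.search(symbols))

assert detect_loitering(list("EMMSSSSX"))      # long still run after entry
assert not detect_loitering(list("EMMSMX"))    # keeps moving, no loitering
```

Because the simulation reduces the scene to a symbol stream, recognisers like this stay cheap enough for real-time use.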

    Development, Implementation and Pre-clinical Evaluation of Medical Image Computing Tools in Support of Computer-aided Diagnosis: Respiratory, Orthopedic and Cardiac Applications

    Over the last decade, image processing tools have become crucial components of all clinical and research efforts involving medical imaging and its applications. The imaging data available to radiologists continue to increase their workload, raising the need for efficient identification and visualization of the image data required for clinical assessment. Computer-aided diagnosis (CAD) in medical imaging has evolved in response to the need for techniques that assist radiologists in increasing throughput while reducing human error and bias, without compromising the outcome of screening, diagnosis or disease assessment. More intelligent, yet simple, consistent and less time-consuming methods will become more widespread, reducing user variability while revealing information in a clearer, more visual way. Several routine image processing approaches, including localization, segmentation, registration, and fusion, are critical to enhancing and enabling the development of CAD techniques. However, changes in clinical workflow require significant adjustments and re-training, and despite the efforts of the academic research community to develop state-of-the-art algorithms and high-performance techniques, their footprint often hampers their clinical use. Currently, the main challenge seems to be not the lack of tools and techniques for medical image processing, analysis, and computing, but rather the lack of clinically feasible solutions that leverage the already developed tools and techniques, as well as a demonstration of the potential clinical impact of such tools. Recently, more and more effort has been dedicated to devising new algorithms for localization, segmentation or registration, while their intended clinical use and actual utility are dwarfed by scientific, algorithmic and developmental novelty that results only in incremental improvements over already existing algorithms.
In this thesis, we propose and demonstrate the implementation and evaluation of several methodological guidelines that ensure the development of image processing tools --- localization, segmentation and registration --- and illustrate their use across several medical imaging modalities --- X-ray, computed tomography, ultrasound and magnetic resonance imaging --- and several clinical applications: (1) lung CT image registration in support of the assessment of pulmonary nodule growth rate and disease progression from thoracic CT images; (2) automated reconstruction of standing X-ray panoramas from multi-sector X-ray images for the assessment of long limb mechanical axis and knee misalignment; and (3) left and right ventricle localization, segmentation, reconstruction and ejection fraction measurement from cine cardiac MRI or multi-plane trans-esophageal ultrasound images for cardiac function assessment. When devising and evaluating the developed tools, we use clinical patient data to illustrate the inherent clinical challenges associated with highly variable imaging data that need to be addressed before potential pre-clinical validation and implementation. In an effort to provide plausible solutions to the selected applications, the proposed methodological guidelines ensure the development of image processing tools that achieve sufficiently reliable solutions which not only have the potential to address the clinical needs, but are sufficiently streamlined to be translated into eventual clinical tools given proper implementation. G1: Reduce the number of degrees of freedom (DOF) of the designed tool, for example by avoiding inefficient non-rigid image registration methods. This guideline addresses the risk of artificial deformation during registration and aims at reducing complexity and the number of degrees of freedom.
G2: Use shape-based features to represent the image content most efficiently, for example by using edges instead of, or in addition to, intensities and motion where useful. Edges capture the most useful information in the image and can be used to identify the most important image features; as a result, this guideline ensures more robust performance when key image information is missing. G3: Implement the method efficiently. This guideline focuses on efficiency in terms of the minimum number of steps required and on avoiding the recalculation of terms that only need to be computed once in an iterative process; an efficient implementation leads to reduced computational effort and improved performance. G4: Commence the workflow with an optimized initialization and gradually converge toward the final acceptable result. This guideline aims to ensure reasonable outcomes in a consistent way: it avoids convergence to local minima while gradually ensuring convergence to the global minimum. These guidelines lead to the development of interactive, semi-automated or fully automated approaches that still enable clinicians to perform final refinements, while reducing overall inter- and intra-observer variability and ambiguity, increasing accuracy and precision, and yielding mechanisms that can aid in providing a more consistent diagnosis in a timely fashion.
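Guideline G3's point about not recalculating loop-invariant terms can be shown with a minimal sketch. The Gaussian negative log-likelihood here is only an example objective, not one taken from the thesis; the point is that the normalisation term depends on sigma but not on the data, so it should be computed once.

```python
import math

def gaussian_nll_naive(residuals, sigma):
    """Per-element loop that recomputes the constant log-normalisation
    term on every pass (the pattern G3 warns against)."""
    total = 0.0
    for r in residuals:
        total += 0.5 * (r / sigma) ** 2 + math.log(sigma * math.sqrt(2 * math.pi))
    return total

def gaussian_nll_hoisted(residuals, sigma):
    """Same objective with the sigma-only terms hoisted out of the loop (G3)."""
    log_norm = math.log(sigma * math.sqrt(2 * math.pi))  # computed once
    inv_two_var = 0.5 / sigma ** 2                       # computed once
    return sum(inv_two_var * r * r + log_norm for r in residuals)

r = [0.1, -0.4, 0.25]
assert abs(gaussian_nll_naive(r, 0.5) - gaussian_nll_hoisted(r, 0.5)) < 1e-12
```

The two functions return identical values; the hoisted version simply does less work per iteration, which is what matters when such a term sits inside a registration or segmentation loop.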

    Fast volumetric registration method for tumor follow-up in pulmonary CT exams

    An oncological patient may undergo several tomographic acquisitions over a period of time, which must be appropriately registered. We propose an automatic volumetric intra-patient registration method for tumor follow-up in pulmonary CT exams. The performance of our method is evaluated and compared with other registration methods based on optimization techniques. We also compared the behaviour of several metrics to determine which is most sensitive to changes due to the presence of lung tumors.

    Fusion of interventional ultrasound & X-ray

    With an aging population, the treatment of structural heart disease is becoming increasingly important. Constant improvements in medical imaging technologies and the introduction of new catheter devices have driven the trend of replacing conventional open-heart surgery with minimally invasive interventions. These advanced interventions must be guided by different medical imaging modalities, the two main ones being X-ray fluoroscopy and transesophageal echocardiography (TEE). While X-ray provides good visualization of inserted catheters, which is essential for catheter navigation, TEE can display soft tissue, especially anatomical structures such as heart valves. Both modalities provide real-time imaging and are necessary for the success of minimally invasive heart surgery. Usually, the two systems are detached and not connected.
It is conceivable that a fusion of both worlds would create a strong benefit for the physicians: better communication within the clinical team and, potentially, new surgical workflows. Because of the completely different characteristics of the image data, a direct fusion seems impossible; therefore, an indirect registration of ultrasound and X-ray images is used. The TEE probe is usually visible in the X-ray image during these minimally invasive interventions, so it becomes possible to register the TEE probe in the fluoroscopic images and to establish its 3D position. The relationship of the ultrasound image to the ultrasound probe is known from calibration. To register the TEE probe on 2D X-ray images, a 2D-3D registration approach is chosen in this thesis. Several contributions are presented that improve the common 2D-3D registration algorithm, both for the task of ultrasound and X-ray fusion and for general 2D-3D registration problems. One approach is the introduction of planar parameters, which increase robustness and speed when registering an object on two non-orthogonal views. Another is to replace the conventional generation of digitally reconstructed radiographs, an integral part of 2D-3D registration but also a performance bottleneck, with fast triangular mesh rendering, which results in a significant speed-up. It is also shown that combining fast learning-based detection algorithms with 2D-3D registration increases the accuracy and the capture range compared with employing either alone for the registration/detection of a TEE probe. Finally, a first clinical prototype that employs the presented approaches is described and first clinical results are shown.
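At its core, 2D-3D registration searches for the pose that makes a projection of a 3D model match the observed 2D image. The toy below shows this with a single rotational degree of freedom, an orthographic projection and made-up model points; it is a conceptual sketch, not the thesis's probe-registration method.

```python
import math

# Hypothetical 3D point model of an object (e.g. a probe), coordinates made up.
MODEL = [(1.0, 0.0, 0.5), (0.0, 2.0, -0.3), (-1.5, 1.0, 0.2)]

def project(points, angle):
    """Rotate about the z-axis by `angle` (radians), then project
    orthographically onto the x-y plane."""
    c, s = math.cos(angle), math.sin(angle)
    return [(c * x - s * y, s * x + c * y) for x, y, _ in points]

def registration_cost(observed, angle):
    """Sum of squared 2D distances between observed and predicted projections."""
    return sum((ox - px) ** 2 + (oy - py) ** 2
               for (ox, oy), (px, py) in zip(observed, project(MODEL, angle)))

# Synthetic "fluoroscopy" observation generated at a known ground-truth pose.
truth = math.radians(25.0)
observed = project(MODEL, truth)

# Coarse exhaustive search over the single pose parameter (degrees).
best = min((registration_cost(observed, math.radians(a)), a) for a in range(360))
print(best[1])  # → 25
```

Real 2D-3D registration replaces the orthographic projection with a perspective X-ray model and the grid search with a proper optimizer, but the cost-minimisation structure is the same.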

    Computed-Tomography (CT) Scan

    A computed tomography (CT) scan uses X-rays and a computer to create detailed images of the inside of the body. CT scanners measure X-ray attenuation at different angles as the beam passes through different tissues, by rotating both the X-ray tube and a row of X-ray detectors mounted in the gantry. These measurements are then processed by computer algorithms to reconstruct tomographic (cross-sectional) images. CT can produce detailed images of many structures inside the body, including the internal organs, blood vessels, and bones. This book presents a comprehensive overview of CT scanning. Chapters address such topics as instrumentation basics, CT imaging in coronavirus disease, radiation and risk assessment in chest imaging, positron emission tomography (PET), and feature extraction.
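The attenuation measurement described above follows the Beer-Lambert law: for a ray crossing tissues with attenuation coefficients mu_i over path lengths d_i, the transmitted intensity is I = I0 * exp(-sum(mu_i * d_i)). A minimal sketch (with made-up coefficients) shows how the log-transformed detector reading recovers the line integral that reconstruction algorithms consume:

```python
import math

def transmitted_intensity(i0, segments):
    """Beer-Lambert law along one ray.
    segments: list of (mu, path_length) pairs for the tissues crossed."""
    line_integral = sum(mu * d for mu, d in segments)
    return i0 * math.exp(-line_integral)

def projection_value(i0, i_measured):
    """Log-transform of the measurement, recovering the line integral of mu
    (the quantity fed into tomographic reconstruction)."""
    return -math.log(i_measured / i0)

# Illustrative, made-up coefficients: soft tissue, bone, soft tissue.
ray = [(0.2, 5.0), (0.5, 2.0), (0.2, 4.0)]
i = transmitted_intensity(1000.0, ray)
assert abs(projection_value(1000.0, i) - 2.8) < 1e-9  # 0.2*5 + 0.5*2 + 0.2*4
```

Collecting such projection values for every detector at every gantry angle yields the sinogram from which the cross-sectional image is reconstructed.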

    Estimation of affine transformations directly from tomographic projections in two and three dimensions

    This paper presents a new approach to estimating two- and three-dimensional affine transformations from tomographic projections. Instead of estimating the deformation from the reconstructed data, we introduce a method that works directly in the projection domain, using parallel and fan beam projection geometries. We show that any affine deformation can be compensated analytically, and we develop an efficient multiscale estimation framework based on the normalized cross-correlation. The accuracy of the approach is verified on simulated and experimental data, and we demonstrate that the new method needs fewer projection angles and has a much lower computational complexity than approaches based on the standard reconstruction technique.
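The normalized cross-correlation used as the similarity measure can be written in a few lines. This is a generic 1-D implementation for illustration, not the paper's code; it also demonstrates the property that makes NCC attractive for registration, namely invariance to affine intensity changes.

```python
import math

def ncc(a, b):
    """Normalized cross-correlation between two equal-length 1-D signals.
    Returns a value in [-1, 1]; 1 means a perfect (affine) intensity match."""
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    da = [x - mean_a for x in a]
    db = [y - mean_b for y in b]
    num = sum(x * y for x, y in zip(da, db))
    den = math.sqrt(sum(x * x for x in da) * sum(y * y for y in db))
    return num / den

sig = [0.0, 1.0, 3.0, 2.0, 0.5]
scaled = [2 * x + 7 for x in sig]              # affine intensity change
assert abs(ncc(sig, scaled) - 1.0) < 1e-12     # NCC is unaffected
assert abs(ncc(sig, [-x for x in sig]) + 1.0) < 1e-12  # anti-correlated
```

In a multiscale framework, such a measure is evaluated on coarse, downsampled projections first and refined at finer scales, which is one reason fewer projection angles can suffice.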

    Development and validation of HRCT airway segmentation algorithms

    Direct measurements of airway lumen and wall areas are potentially useful as a diagnostic tool and as an aid to understanding the pathophysiology underlying lung disease. Direct measurements can be made from images created by high-resolution computed tomography (HRCT) using computer-based algorithms to segment airways, but current validation techniques cannot adequately establish the accuracy and precision of these algorithms. A detailed review of HRCT airway segmentation algorithms was undertaken, from which three candidate algorithm designs were developed. A custom Windows-based software program was implemented to facilitate multi-modality development and validation of the segmentation algorithms, and the performance of the algorithms was examined in clinical HRCT images. A centre-likelihood (CL) ray-casting algorithm was found to be the most suitable due to its speed and reliability in semi-automatic segmentation and tracking of the airway wall. Several novel refinements were demonstrated to improve the CL algorithm's robustness in HRCT lung data. The performance of the CL algorithm was then quantified in two-dimensional simulated data to optimise customisable parameters such as the edge-detection method, interpolation and the number of rays. Novel correction equations to counter the effects of volume averaging and airway orientation angle were derived and demonstrated in three-dimensional simulated data. The optimal CL algorithm was validated on HRCT data using a plastic phantom and a pig lung phantom matched to micro-CT. Accuracy was found to be improved compared with previous studies using similar methods. The volume averaging correction was found to improve precision and accuracy in the plastic phantom but not in the pig lung phantom. When tested in a clinical setting, the results of the optimised CL algorithm were in agreement with the results of other measures of lung function.
The thesis concludes that the relative contributions of confounders of airway measurement have been quantified in simulated data, and that the CL algorithm's performance has been validated in a plastic phantom as well as in an animal model. This validation protocol has improved the accuracy and precision of measurements made using the CL algorithm.
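One iteration of a centre-likelihood style ray-casting step might look like the sketch below: from a seed point, rays are cast outward, each ray records where it first crosses an intensity threshold marking the airway wall, and the centre estimate is updated from those wall hits. The synthetic image, threshold and ray count are illustrative assumptions, not parameters from the thesis.

```python
import math

def cast_rays(image, seed, n_rays=16, threshold=0.5, max_r=20):
    """Cast n_rays from the seed; record the first pixel on each ray whose
    intensity reaches the (hypothetical) wall threshold."""
    cx, cy = seed
    hits = []
    for k in range(n_rays):
        theta = 2 * math.pi * k / n_rays
        for r in range(1, max_r):
            x = int(round(cx + r * math.cos(theta)))
            y = int(round(cy + r * math.sin(theta)))
            if not (0 <= y < len(image) and 0 <= x < len(image[0])):
                break
            if image[y][x] >= threshold:   # first wall crossing on this ray
                hits.append((x, y))
                break
    return hits

def recenter(hits):
    """Re-estimate the lumen centre as the centroid of the wall hits."""
    return (sum(x for x, _ in hits) / len(hits),
            sum(y for _, y in hits) / len(hits))

# Synthetic slice: dark circular lumen of radius 5 centred at (15, 15),
# bright wall everywhere else.
img = [[0.0 if (x - 15) ** 2 + (y - 15) ** 2 <= 25 else 1.0
        for x in range(30)] for y in range(30)]
cx, cy = recenter(cast_rays(img, (13, 14)))   # deliberately off-centre seed
```

A single step pulls the estimate from the off-centre seed toward the true centre; iterating the cast/recenter pair until the estimate stabilises gives the tracked lumen centre, after which the per-ray edge positions yield the lumen and wall measurements.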

    Automatic Estimation of Coronary Blood Flow Velocity Step 1 for Developing a Tool to Diagnose Patients With Micro-Vascular Angina Pectoris

    Aim: Our aim was to automatically estimate the blood velocity in coronary arteries using a cine X-ray angiographic sequence. Estimating the coronary blood velocity is a key approach in investigating patients with angina pectoris and no significant coronary artery disease, and blood velocity estimation is central to assessing coronary flow reserve. Methods and Results: A multi-step automatic method for blood flow velocity estimation, based on the information extracted solely from the cine X-ray coronary angiography sequence obtained by invasive selective coronary catheterization, was developed. The method includes (1) an iterative process of segmenting the coronary arteries, modeling and removing the heart motion using non-rigid registration; (2) measuring the area of the segmented arteries in each frame; (3) fitting the measured sequence of areas with a 7th-degree polynomial to find the start and stop times of dye propagation; and (4) estimating the blood flow velocity from the time of dye propagation and the length of the artery tree. To evaluate the method, coronary angiography recordings from 21 patients with no obstructive coronary artery disease were used. In addition, coronary flow velocity was measured in the same patients using a modified transthoracic Doppler assessment of the left anterior descending artery. We found a moderate but statistically significant correlation between flow velocity assessed by transthoracic Doppler and the proposed method, applying both Spearman and Pearson tests. Conclusion: Measures of coronary flow velocity using a novel fully automatic method that utilizes the information from the X-ray coronary angiographic sequence were statistically significantly correlated to measurements obtained with transthoracic Doppler recordings.
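Step (4) reduces to a simple computation once the dye propagation interval and the artery-tree length are known: mean velocity is the traversed length divided by the propagation time. The sketch below uses illustrative numbers, not patient data.

```python
def flow_velocity_mm_per_s(artery_length_mm, start_frame, stop_frame, fps):
    """Mean flow velocity from the dye propagation interval (step 4).
    start_frame/stop_frame are the frames bounding dye propagation;
    fps is the angiographic acquisition frame rate."""
    propagation_time_s = (stop_frame - start_frame) / fps
    return artery_length_mm / propagation_time_s

# e.g. a 120 mm artery tree filled between frames 12 and 36 at 15 frames/s
v = flow_velocity_mm_per_s(120.0, 12, 36, 15.0)
assert abs(v - 75.0) < 1e-9   # (36-12)/15 = 1.6 s; 120/1.6 = 75 mm/s
```

The hard part of the pipeline is of course steps (1)-(3), which supply reliable start/stop frames and an accurate artery-tree length; the final velocity estimate itself is this one division.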