64 research outputs found

    3D Modelling and Rapid Prototyping for Cardiovascular Surgical Planning – Two Case Studies

    In recent years, cardiovascular diagnosis, surgical planning and intervention have benefited from 3D modelling and rapid prototyping techniques. The starting data for the whole process is medical imagery, in particular, but not exclusively, computed tomography (CT) or multi-slice CT (MCT) and magnetic resonance imaging (MRI). On the medical imagery, regions of interest, i.e. heart chambers, valves, aorta, coronary vessels, etc., are segmented and converted into 3D models, which can finally be turned into physical replicas through a 3D printing procedure. In this work, an overview of modern approaches for automatic and semi-automatic segmentation of medical imagery for 3D surface model generation is provided. The issue of checking the accuracy of surface models is also addressed, together with the critical aspects of converting digital models into physical replicas through 3D printing techniques. A patient-specific 3D modelling and printing procedure (Figure 1) for surgical planning in cases of complex heart disease was developed, and applied to two case studies for which MCT scans of the chest were available. The article provides a detailed description of the implemented patient-specific modelling procedure, along with a general discussion of the potential and future developments of personalized 3D modelling and printing for surgical planning and surgeons' practice.
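
    As a rough illustration of the kind of pipeline described above, the sketch below segments a volume by simple thresholding, extracts a surface with marching cubes, and exports an STL file for printing. It uses scikit-image and trimesh purely as stand-in tools; the threshold, spacing values and function names are illustrative assumptions, not the authors' implementation.

    # Minimal sketch of a patient-specific 3D modelling pipeline, assuming the
    # MCT scan has already been loaded into a NumPy volume (e.g. via pydicom or
    # SimpleITK). Threshold, spacing and file names are illustrative only.
    import numpy as np
    from skimage import measure
    import trimesh

    def volume_to_printable_mesh(volume, spacing=(1.0, 1.0, 1.0),
                                 hu_threshold=200, out_path="region_of_interest.stl"):
        """Segment a region of interest by thresholding and export a surface mesh."""
        # 1. Semi-automatic segmentation: a plain intensity threshold stands in
        #    for the more elaborate segmentation approaches surveyed in the paper.
        mask = volume > hu_threshold

        # 2. Surface extraction: marching cubes turns the binary mask into a
        #    triangulated surface, honouring the voxel spacing of the scan.
        verts, faces, normals, _ = measure.marching_cubes(
            mask.astype(np.uint8), level=0.5, spacing=spacing)

        # 3. Conversion to a printable model: export as STL for the 3D printer.
        surface = trimesh.Trimesh(vertices=verts, faces=faces)
        surface.export(out_path)
        return surface

    # Example with a synthetic volume standing in for an MCT chest scan.
    if __name__ == "__main__":
        dummy = np.zeros((64, 64, 64))
        dummy[20:44, 20:44, 20:44] = 300  # fake high-intensity region
        volume_to_printable_mesh(dummy, spacing=(0.5, 0.5, 0.5))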

    Challenges in 3D scanning: Focusing on Ears and Multiple View Stereopsis


    Advanced Image Acquisition, Processing Techniques and Applications

    "Advanced Image Acquisition, Processing Techniques and Applications" is the first book of a series that provides image processing principles and practical software implementation on a broad range of applications. The book integrates material from leading researchers on Applied Digital Image Acquisition and Processing. An important feature of the book is its emphasis on software tools and scientific computing in order to enhance results and arrive at problem solution

    Vegetation Detection and Classification for Power Line Monitoring

    Electrical network maintenance inspections must be carried out regularly to ensure a continuous distribution of electricity. In heavily forested countries, the electrical network is mostly located within the forest. For this reason, during these inspections it is also necessary to ensure that vegetation growing close to the power line does not endanger it, provoking forest fires or power outages. Several remote sensing techniques have been studied in recent years to replace the labor-intensive and costly traditional approaches, be they field-based or airborne surveillance. Besides the previously mentioned disadvantages, these approaches are also prone to error, since they depend on a human operator's interpretation. In recent years, the applicability of Unmanned Aerial Vehicle (UAV) platforms for this purpose has been under debate, due to their flexibility and potential for customisation, as well as the fact that they can fly close to the power lines. The present study proposes a vegetation management and power line monitoring method using a UAV platform. The method starts with the collection of point cloud data in a forest environment composed of power line structures and nearby vegetation. Multiple steps then follow: detection of objects in the working environment; classification of those objects into their respective class labels, either vegetation or power line structures, using a feature-based classifier; and optimisation of the classification results using point cloud filtering or segmentation algorithms. The method is tested on both synthetic and real data of forested areas containing power line structures. The overall accuracy of the classification process is about 87% for synthetic data and 97-99% for real data. After the optimisation process, these values are refined to 92% for synthetic data and nearly 100% for real data. A detailed comparison and discussion of the results is presented, providing the most important evaluation metrics and visual representations of the attained results.
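
    The sketch below illustrates one plausible form of the feature-based classification step: per-point covariance (eigenvalue) features separate linear power-line structures from scattered vegetation and are fed to a random forest. The neighbourhood size, feature set and classifier are assumptions made for illustration rather than the study's actual classifier, and the training data here are synthetic placeholders.

    # Illustrative sketch of feature-based point cloud classification:
    # local PCA eigenvalue features + random forest, with hypothetical labels
    # (0 = vegetation, 1 = power line structure).
    import numpy as np
    from scipy.spatial import cKDTree
    from sklearn.ensemble import RandomForestClassifier

    def eigen_features(points, k=20):
        """Linearity / planarity / scattering features from local PCA."""
        tree = cKDTree(points)
        _, idx = tree.query(points, k=k)
        feats = np.zeros((len(points), 3))
        for i, neighbours in enumerate(idx):
            local = points[neighbours] - points[neighbours].mean(axis=0)
            evals = np.linalg.eigvalsh(np.cov(local.T))[::-1]  # l1 >= l2 >= l3
            l1, l2, l3 = np.maximum(evals, 1e-9)
            feats[i] = [(l1 - l2) / l1,   # linearity  (high for wires)
                        (l2 - l3) / l1,   # planarity
                        l3 / l1]          # scattering (high for canopy)
        return feats

    # Placeholder training / test clouds standing in for labelled survey data.
    train_xyz, train_labels = np.random.rand(500, 3), np.random.randint(0, 2, 500)
    test_xyz = np.random.rand(200, 3)

    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(eigen_features(train_xyz), train_labels)
    predicted = clf.predict(eigen_features(test_xyz))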

    Probabilistic partial volume modelling of biomedical tomographic image data


    Clearing the Clouds: Extracting 3D information from amongst the noise

    Advancements permitting the rapid extraction of 3D point clouds from a variety of imaging modalities across the global landscape have provided a vast collection of high-fidelity digital surface models. This has created a situation with an unprecedented overabundance of 3D observations, which greatly outstrips our current capacity to manage them and infer actionable information. While years of research have removed some of the manual analysis burden for many tasks, human analysis is still a cornerstone of 3D scene exploitation. This is especially true for complex tasks which necessitate comprehension of scale, texture and contextual learning. In order to ameliorate the interpretation burden and enable scientific discovery from this volume of data, new processing paradigms are necessary to keep pace. With this context, this dissertation advances fundamental and applied research in 3D point cloud pre-processing and deep learning from a variety of platforms. We show that the representation of 3D point data is often not ideal and sacrifices fidelity, context or scalability. First, ground-scanning terrestrial Light Detection And Ranging (LiDAR) models are shown to have an inherent statistical bias, and a state-of-the-art method is presented for correcting it while preserving data fidelity and maintaining semantic structure. This technique is assessed in the dense canopy of Micronesia, where it is the best at retaining high levels of detail under extreme down-sampling (< 1%). Airborne systems are then explored with a method to pre-process data so as to preserve global contrast and semantic content for deep learners. This approach is validated on a building footprint detection task using airborne imagery captured in Eastern TN from the 3D Elevation Program (3DEP), where it achieves significant accuracy improvements over traditional techniques. Finally, topography data spanning the globe is used to assess past and present global land cover change. Utilizing Shuttle Radar Topography Mission (SRTM) and Moderate Resolution Imaging Spectroradiometer (MODIS) data, paired with the airborne pre-processing technique described previously, a model for predicting land-cover change from topography observations is described. The culmination of these efforts has the potential to enhance the capabilities of automated 3D geospatial processing, substantially lightening the burden on analysts, with implications for improving our responses to global security, disaster response, climate change, structural design and extraplanetary exploration.
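
    One way to read the airborne pre-processing step is as a normalisation that keeps heights comparable across tiles instead of rescaling each tile independently. The sketch below is a minimal interpretation under that assumption, with the ground percentile and global range chosen purely for illustration; it is not the dissertation's actual procedure.

    # A minimal sketch: express elevations as height above a coarse local ground
    # estimate, then scale with a fixed global range shared by every tile, so
    # relative contrast survives tiling (unlike per-tile min-max normalisation).
    import numpy as np

    def normalise_elevation_tile(tile, ground_percentile=5, global_range_m=50.0):
        """Convert raw elevations to height-above-ground scaled to [0, 1]."""
        ground = np.percentile(tile, ground_percentile)   # coarse ground estimate
        height_above_ground = tile - ground
        # A fixed global range keeps a 10 m building equally bright in every tile.
        return np.clip(height_above_ground / global_range_m, 0.0, 1.0)

    # Example: two tiles at very different absolute elevations produce
    # comparable values for structures of the same height.
    flat_tile = 120.0 + np.random.rand(256, 256)           # low-lying terrain
    hill_tile = 480.0 + np.random.rand(256, 256)           # elevated terrain
    hill_tile[100:140, 100:140] += 10.0                     # a 10 m "building"
    a = normalise_elevation_tile(flat_tile)
    b = normalise_elevation_tile(hill_tile)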

    Modelling appearance and geometry from images

    Acquisition of realistic and relightable 3D models of large outdoor structures, such as buildings, requires the modelling of detailed geometry and visual appearance. Recovering these material characteristics can be very time consuming and needs specially dedicated equipment. Alternatively, surface detail can be conveyed by textures recovered from images, whose appearance is only valid under the originally photographed viewing and lighting conditions. Methods to easily capture locally detailed geometry, such as cracks in stone walls, and visual appearance require control of lighting conditions, and are usually restricted to small portions of surfaces captured at close range. This thesis investigates the acquisition of high-quality models from images, using simple photographic equipment and modest user intervention. The main focus of this investigation is on approximating detailed local depth information and visual appearance, obtained using a new image-based approach, and combining this with gross-scale 3D geometry. This is achieved by capturing these surface characteristics in small accessible regions and transferring them to the complete façade. This approach yields high-quality models, imparting the illusion of measured reflectance. In this thesis, we first present two novel algorithms for surface detail and visual appearance transfer, where these material properties are captured for small exemplars using an image-based technique. Second, we develop an interactive solution to the problems of performing the transfer over a large change in scale and to the different materials contained in a complete façade. Aiming to completely automate this process, a novel algorithm to differentiate between materials in the façade and associate them with the correct exemplars is introduced, with promising results. Third, we present a new method for texture reconstruction from multiple images that optimises texture quality by choosing the best view for every point and minimising seams. Material properties are transferred from the exemplars to the texture map, approximating reflectance and meso-structure. The combination of these techniques results in a complete working system capable of producing realistic relightable models of full building façades, containing high-resolution geometry and plausible visual appearance.
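
    For the texture reconstruction step, the sketch below shows the simplest version of choosing the best view for every point: assign each surface point to the camera most frontal to its normal. The camera layout and function name are hypothetical, and the seam-minimisation part of the method is omitted.

    # Illustrative best-view selection for per-point texturing.
    import numpy as np

    def best_view_per_point(points, normals, camera_centres):
        """Return, for every point, the index of the most frontal camera."""
        # Unit vectors from each point towards every camera: shape (P, C, 3).
        to_cam = camera_centres[None, :, :] - points[:, None, :]
        to_cam /= np.linalg.norm(to_cam, axis=2, keepdims=True)
        # Frontality score: cosine between the surface normal and the view ray.
        scores = np.einsum('pcj,pj->pc', to_cam, normals)
        return np.argmax(scores, axis=1)

    # Toy façade: points on a plane facing +z, three hypothetical camera centres.
    pts = np.column_stack([np.random.rand(100), np.random.rand(100), np.zeros(100)])
    nrm = np.tile([0.0, 0.0, 1.0], (100, 1))
    cams = np.array([[0.5, 0.5, 2.0], [3.0, 0.5, 0.5], [0.5, 3.0, 1.0]])
    assignment = best_view_per_point(pts, nrm, cams)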

    Visual Analysis of Extremely Dense Crowded Scenes

    Visual analysis of dense crowds is particularly challenging due to the large number of individuals, occlusions, clutter, and the few pixels per person, conditions which rarely occur in ordinary surveillance scenarios. This dissertation aims to address these challenges in images and videos of extremely dense crowds containing hundreds to thousands of humans. The goal is to tackle the fundamental problems of counting, detecting and tracking people in such images and videos using visual and contextual cues that are automatically derived from the crowded scenes.

    For counting in an image of an extremely dense crowd, we propose to leverage multiple sources of information to estimate the number of individuals present in the image. Our approach relies on sources such as low-confidence head detections, repetition of texture elements (using SIFT), and frequency-domain analysis to estimate counts, along with the confidence associated with observing individuals, in an image region. Furthermore, we employ a global consistency constraint on counts using a Markov Random Field, which caters for disparity in counts in local neighborhoods and across scales. We tested this approach on crowd images with head counts ranging from 94 to 4543 and obtained encouraging results. Through this approach, we are able to count people in images of high-density crowds, unlike previous methods which are only applicable to videos of low- to medium-density crowded scenes. However, the counting procedure outputs just a single number for a large patch or an entire image, and with counts alone it is difficult to measure the counting error for a query image with an unknown number of people. For this, we propose to localize humans by finding repetitive patterns in the crowd image. Starting with detections from an underlying head detector, we correlate them within the image after selecting them through several criteria: in a pre-defined grid, locally, or at multiple scales by automatically finding the patches that are most representative of recurring patterns in the crowd image. Finally, the set of generated hypotheses is selected using binary integer quadratic programming with Special Ordered Set (SOS) Type 1 constraints.

    Human detection is another important problem in the analysis of crowded scenes, where the goal is to place a bounding box on the visible parts of individuals. Primarily applicable to images depicting medium- to high-density crowds containing several hundred humans, it is a crucial prerequisite for many other visual tasks, such as tracking, action recognition or detection of anomalous behaviors exhibited by individuals in a dense crowd. For detecting humans, we explore context in dense crowds in the form of a locally-consistent scale prior, which captures the similarity in scale in local neighborhoods with smooth variation over the image. Using the scale and confidence of detections obtained from an underlying human detector, we infer scale and confidence priors using a Markov Random Field. In an iterative mechanism, the confidences of detections are modified to reflect consistency with the inferred priors, and the priors are updated based on the new detections. The final set of detections is then reasoned for occlusion using Binary Integer Programming, where overlaps and relations between parts of individuals are encoded as linear constraints. Both human detection and occlusion reasoning in this approach are solved with local neighbor-dependent constraints, thereby respecting the inter-dependence between individuals that is characteristic of dense crowd analysis. In addition, we propose a mechanism to detect different combinations of body parts without requiring annotations for individual combinations.

    Once human detection and localization have been performed, we use them for tracking people in dense crowds. Similar to the use of context as a scale prior for human detection, we exploit it in the form of motion concurrence for tracking individuals in dense crowds. The proposed tracking method provides an alternative and complementary approach to methods that require modeling of crowd flow and, by relying minimally on previous frames, is less likely to fail in the case of dynamic crowd flows and anomalies. The approach begins with the automatic identification of prominent individuals in the crowd that are easy to track. Then, we use Neighborhood Motion Concurrence to model the behavior of individuals in a dense crowd, which predicts the position of an individual based on the motion of its neighbors. When an individual moves with the crowd flow, we use Neighborhood Motion Concurrence to predict its motion, while leveraging five-frame instantaneous flow in the case of dynamically changing flow and anomalies. All these aspects are then embedded in a framework which imposes a hierarchy on the order in which the positions of individuals are updated. Results are reported on eight sequences of medium- to high-density crowds, and our approach performs on par with existing approaches without learning or modeling patterns of crowd flow. We experimentally demonstrate the efficacy and reliability of our algorithms by quantifying the performance of counting, localization, human detection and tracking on new and challenging datasets containing hundreds to thousands of humans in a given scene.
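
    As a rough sketch of the neighbourhood-motion idea used for tracking, the snippet below predicts an individual's next position from a distance-weighted average of its neighbours' displacements. The Gaussian weighting and bandwidth are assumptions; the full method additionally uses prominence-based initialisation, instantaneous flow and a hierarchical update order.

    # Minimal sketch of neighbour-based motion prediction for crowd tracking.
    import numpy as np

    def predict_from_neighbours(target_pos, neighbour_pos, neighbour_disp, sigma=30.0):
        """Predict the target's next position from neighbouring trajectories."""
        d2 = np.sum((neighbour_pos - target_pos) ** 2, axis=1)
        weights = np.exp(-d2 / (2.0 * sigma ** 2))      # closer neighbours count more
        weights /= weights.sum()
        predicted_disp = weights @ neighbour_disp        # weighted mean displacement
        return target_pos + predicted_disp

    # Toy example: three neighbours drifting to the right by roughly 2 px per frame.
    target = np.array([100.0, 50.0])
    n_pos = np.array([[95.0, 48.0], [105.0, 52.0], [140.0, 60.0]])
    n_disp = np.array([[2.0, 0.1], [1.8, -0.2], [2.2, 0.0]])
    print(predict_from_neighbours(target, n_pos, n_disp))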

    Learning based biological image analysis

    The fate of contemporary scientific research in biology and medicine is bound to advancements in computational methods. The unprecedented data explosion in microscopy and the growing interest of life scientists in studying more complex and more subtle interactions stimulate research into innovative computational solutions for challenging real-world applications. Extensions and novel formulations of generic and flexible methods based on learning and inference are necessary to cope with the large variety of the produced data and to avoid continuous reimplementation and heavy parameter tuning. This thesis exploits cutting-edge machine learning methods based on structured probabilistic models and weakly supervised learning to provide four novel solutions in the areas of large-scale microscopic imaging and multiple object tracking. Chapter 2 introduces a weakly supervised learning framework to tackle the problem of detecting defect images while mining massive microscopic imagery databases, and demonstrates accurate prediction with low user annotation effort. Chapter 3 presents a learning approach for counting overlapping objects in images based on local structured predictors; this problem has numerous applications in high-throughput microscopy screening, such as cell counting for drug toxicity assays. Chapter 4 develops a deterministic graphical model to impose temporal consistency on object counts when dealing with a video sequence, and shows that global (temporal and spatial) structured inference consistently improves over local (only spatial) predictions. The method developed in Chapter 4 is used in a novel downstream tracking algorithm introduced in Chapter 5, which tackles, for the first time, the difficult problem of tracking heavily overlapping, translucent and indistinguishable objects. The mutual occlusion events of such objects are handled through a novel structured inference problem based on the minimization of a convex multi-commodity flow energy. The optimal weights of the energy terms are learned with partial user supervision using structured learning with latent variables. To support behavioral biologists, we apply this method to the problem of tracking a community of interacting Drosophila larvae.
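
    The counting problem of Chapter 3 is commonly approached by regressing a density map whose integral gives the count; the sketch below follows that general recipe with simple multi-scale blur features and ridge regression, as an illustration of the idea rather than the chapter's local structured predictors. The feature set, kernel width and regressor are all assumed choices.

    # Hedged sketch of density-map counting for overlapping objects.
    import numpy as np
    from scipy.ndimage import gaussian_filter
    from sklearn.linear_model import Ridge

    def pixel_features(image, sigmas=(1, 2, 4)):
        """Simple per-pixel features: the image blurred at several scales."""
        return np.stack([gaussian_filter(image, s) for s in sigmas], axis=-1)

    def train_counter(image, dot_annotations, sigma=2.0):
        """Fit a regressor mapping pixel features to an annotation density map."""
        density = gaussian_filter(dot_annotations.astype(float), sigma)
        X = pixel_features(image).reshape(-1, 3)
        return Ridge(alpha=1.0).fit(X, density.ravel())

    def count_objects(model, image):
        """Predicted count = integral of the predicted density map."""
        X = pixel_features(image).reshape(-1, 3)
        return float(model.predict(X).sum())

    # Toy usage: dots mark object centres; the count is the sum of the density.
    img = np.random.rand(64, 64)
    dots = np.zeros((64, 64)); dots[10, 10] = dots[30, 40] = dots[50, 20] = 1
    counter = train_counter(img, dots)
    print(count_objects(counter, img))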