
    Biharmonic fields and mesh completion

    We discuss biharmonic fields which approximate signed distance fields, and conclude that the biharmonic field approximation can be a powerful tool for mesh completion in general and in complex cases. We present an adaptive, multigrid algorithm to extrapolate signed distance fields. By defining a volume mask in a closed region bounding the area that must be repaired, the algorithm computes a signed distance field in well-defined regions and uses it as an over-determined boundary condition constraint for the biharmonic field computation in the remaining regions. The algorithm operates locally, within an expanded bounding box of each hole, and therefore scales well with the number of holes in a single, complex model. We discuss this approximation through practical examples on triangular meshes resulting from laser scan acquisitions which require massive hole repair. We conclude that the proposed algorithm is robust and general, and is able to deal with complex topological cases.
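    To make the core idea concrete, the following is a minimal 2D sketch (not the authors' adaptive, multigrid, 3D implementation): the signed distance values computed in the well-defined regions enter a discrete biharmonic system as weighted, over-determined constraints, and the field is extrapolated into the masked region by a least-squares solve. The grid size, constraint weight and toy circular SDF are illustrative assumptions.

```python
# Minimal 2D sketch of biharmonic extrapolation of a signed distance field.
# Illustrative only: a single dense-grid least-squares solve, not the paper's
# adaptive multigrid 3D algorithm.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lsqr

def biharmonic_fill(sdf, known):
    """sdf: (n, n) array with valid signed distances where known is True.
    Returns a field that keeps the known values (as soft, over-determined
    constraints) and is biharmonic elsewhere."""
    n = sdf.shape[0]
    N = n * n
    idx = np.arange(N).reshape(n, n)

    # 5-point graph Laplacian over the grid.
    rows, cols, vals = [], [], []
    for di, dj in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
        src = idx[max(0, -di):n - max(0, di), max(0, -dj):n - max(0, dj)]
        dst = idx[max(0, di):n - max(0, -di), max(0, dj):n - max(0, -dj)]
        rows.extend(src.ravel()); cols.extend(dst.ravel()); vals.extend([1.0] * src.size)
        rows.extend(src.ravel()); cols.extend(src.ravel()); vals.extend([-1.0] * src.size)
    L = sp.coo_matrix((vals, (rows, cols)), shape=(N, N)).tocsr()
    B = L @ L                                   # discrete biharmonic operator

    # Over-determined system: biharmonic smoothness everywhere, plus heavily
    # weighted equations pinning the known SDF samples.
    w = 100.0
    C = sp.eye(N, format="csr")[np.flatnonzero(known.ravel())]
    A = sp.vstack([B, w * C])
    b = np.concatenate([np.zeros(N), w * sdf.ravel()[known.ravel()]])
    x = lsqr(A, b, atol=1e-10, btol=1e-10)[0]
    return x.reshape(n, n)

# Toy example: SDF of a circle, with a square region masked out for repair.
n = 64
yy, xx = np.mgrid[0:n, 0:n]
sdf = np.hypot(xx - n / 2, yy - n / 2) - n / 4
known = np.ones((n, n), bool)
known[20:44, 20:44] = False                     # region to extrapolate into
filled = biharmonic_fill(np.where(known, sdf, 0.0), known)
```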

    Algorithms for the reconstruction, analysis, repairing and enhancement of 3D urban models from multiple data sources

    Over the last few years, there has been notable growth in the field of digitization of 3D buildings and urban environments. The substantial improvement of both scanning hardware and reconstruction algorithms has led to the development of representations of buildings and cities that can be remotely transmitted and inspected in real time. Among the applications that implement these technologies are several GPS navigators and virtual globes such as Google Earth or the tools provided by the Institut Cartogràfic i Geològic de Catalunya. In particular, in this thesis, we conceptualize cities as a collection of individual buildings. Hence, we focus on the individual processing of one structure at a time, rather than on the larger-scale processing of urban environments. Nowadays, there is a wide diversity of digitization technologies, and the choice of the appropriate one is key for each particular application. Roughly, these techniques can be grouped into three main families:
    - Time-of-flight (terrestrial and aerial LiDAR).
    - Photogrammetry (street-level, satellite, and aerial imagery).
    - Human-edited vector data (cadastre and other map sources).
    Each of these has its advantages in terms of covered area, data quality, economic cost, and processing effort. Plane- and car-mounted LiDAR devices are optimal for sweeping huge areas, but acquiring and calibrating such devices is not a trivial task. Moreover, the capturing process is done by scan lines, which need to be registered using GPS and inertial data. As an alternative, terrestrial LiDAR devices are more accessible but cover smaller areas, and their sampling strategy usually produces massive point clouds with over-represented flat regions. A more inexpensive option is street-level imagery. A dense set of images captured with a commodity camera can be fed to state-of-the-art multi-view stereo algorithms to produce realistic-enough reconstructions. Another advantage of this approach is that it captures high-quality color data, although the resulting geometric information is usually of lower quality. In this thesis, we analyze in depth some of the shortcomings of these data-acquisition methods and propose new ways to overcome them. Mainly, we focus on the technologies that allow high-quality digitization of individual buildings: terrestrial LiDAR for geometric information and street-level imagery for color information. Our main goal is the processing and completion of detailed 3D urban representations. For this, we work with multiple data sources and combine them when possible to produce models that can be inspected in real time. Our research has focused on the following contributions:
    - Effective and feature-preserving simplification of massive point clouds.
    - Normal estimation algorithms explicitly designed for LiDAR data (a baseline sketch follows this abstract).
    - Low-stretch panoramic representation for point clouds.
    - Semantic analysis of street-level imagery for improved multi-view stereo reconstruction.
    - Color improvement through heuristic techniques and the registration of LiDAR and imagery data.
    - Efficient and faithful visualization of massive point clouds using image-based techniques.
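    As an illustration of the building block that the LiDAR-specific normal estimation contribution starts from, the sketch below shows the standard PCA-on-k-nearest-neighbours estimator with sensor-oriented flipping. The neighbourhood size k and the use of scipy's cKDTree are assumptions here, not the thesis' algorithm.

```python
# Baseline PCA normal estimation for a point cloud, a common starting point
# that LiDAR-aware methods refine; k and the sensor origin are assumptions.
import numpy as np
from scipy.spatial import cKDTree

def estimate_normals(points, k=16, sensor_origin=np.zeros(3)):
    """points: (N, 3) array. Returns (N, 3) unit normals oriented towards the
    sensor position, which is natural for terrestrial LiDAR scans."""
    tree = cKDTree(points)
    _, nbr_idx = tree.query(points, k=k)        # k nearest neighbours per point
    normals = np.empty_like(points)
    for i, idx in enumerate(nbr_idx):
        nbrs = points[idx] - points[idx].mean(axis=0)
        # Normal = direction of least variance of the local neighbourhood.
        _, _, vt = np.linalg.svd(nbrs, full_matrices=False)
        n = vt[-1]
        # Flip towards the sensor so orientation is consistent across the scan.
        if np.dot(n, sensor_origin - points[i]) < 0:
            n = -n
        normals[i] = n
    return normals
```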

    25th International Congress of the European Association for Endoscopic Surgery (EAES) Frankfurt, Germany, 14-17 June 2017 : Oral Presentations

    Introduction: Ouyang has recently proposed hiatal surface area (HSA) calculation by multiplanar multislice computed tomography (MDCT) scan as a useful tool for planning treatment of hiatus defects with hiatal hernia (HH), with or without gastroesophageal reflux (MRGE). Preoperative upper endoscopy or barium swallow cannot predict the HSA and pillar conditions. Aim: to assess the efficacy of MDCT-based calculation of HSA for planning the best approach for hiatal defect treatment. Methods: We retrospectively analyzed 25 patients, candidates for laparoscopic antireflux surgery as primary surgery or for hiatus repair concomitant with or after bariatric surgery. Patients were analyzed preoperatively and after one-year follow-up by MDCT scan measurement of the esophageal hiatus surface. Five normal patients were enrolled as a control group. The intraoperative calculation of the HSA was performed after complete dissection of the area, which was considered a triangle. Postoperative CT scan was done after 12 months or any time reflux symptoms appeared. Results: (1) Mean HSA in control patients with no HH and no MRGE was cm2, and similar in non-complicated patients with previous LSG and cruroplasty. (2) Mean HSA in patients who were candidates for cruroplasty was 7.40 cm2. (3) Mean HSA in patients who were candidates for redo cruroplasty for recurrence was 10.11 cm2. Discussion: MDCT scan offers the possibility to obtain an objective measurement of the HSA and its correlation with endoscopic findings and symptoms. The preoperative information allows discussing the proper technique with patients when an HSA > 5 cm2 is detected. During follow-up, a correlation between symptoms and failure of cruroplasty can be assessed. Conclusions: MDCT scan seems to be an effective non-invasive method to plan hiatal defect treatment and to check for potential recurrence during follow-up. Future research should correlate imaging data with intraoperative findings in larger series.
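    Since the intraoperative HSA is obtained by treating the dissected hiatal area as a triangle, the measurement reduces to the area of a triangle given three landmark points. The sketch below shows this with the cross-product formula; the landmark coordinates are hypothetical and not taken from the study.

```python
# Area of a triangle from three 3D landmark points, as used when the hiatal
# surface is approximated by a triangle; the coordinates are made-up examples.
import numpy as np

def triangle_area_cm2(a, b, c):
    """a, b, c: 3D points in cm. Area = 0.5 * |(b - a) x (c - a)|."""
    a, b, c = map(np.asarray, (a, b, c))
    return 0.5 * np.linalg.norm(np.cross(b - a, c - a))

# Hypothetical landmark coordinates (cm): hiatal apex and the two crural points.
print(triangle_area_cm2([0.0, 0.0, 0.0], [3.5, 0.0, 0.0], [1.5, 4.2, 0.0]))  # ~7.35 cm^2
```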

    Towards Image-Guided Pediatric Atrial Septal Defect Repair

    Congenital heart disease occurs in 107.6 out of 10,000 live births, with Atrial Septal Defects (ASD) accounting for 10% of these conditions. Historically, ASDs were treated with open heart surgery using cardiopulmonary bypass, allowing a patch to be sewn over the defect. In 1976, King et al. demonstrated use of a transcatheter occlusion procedure, thus reducing the invasiveness of ASD repair. Localization during these catheter-based procedures has traditionally relied on bi-plane fluoroscopy; more recently, trans-esophageal echocardiography (TEE) and intra-cardiac echocardiography (ICE) have been used to navigate these procedures. Although there is a high success rate using the transcatheter occlusion procedure, fluoroscopy poses a radiation dose risk to both patient and clinician. The impact of this dose on patients is important, as many of those undergoing this procedure are children, who have an increased risk associated with radiation exposure. Their longer life expectancy than adults provides a larger window of opportunity for expressing the damaging effects of ionizing radiation. In addition, epidemiologic studies of exposed populations have demonstrated that children are considerably more sensitive to the carcinogenic effects of radiation. Image-guided surgery (IGS) uses pre-operative and intra-operative images to guide surgery or an interventional procedure. Central to every IGS system is a software application capable of processing and displaying patient images, registering between multiple coordinate systems, and interfacing with a tool tracking system. We have developed a novel image-guided surgery framework called Kit for Navigation by Image Focused Exploration (KNIFE). This software system serves as the core technology by which a system for reduction of radiation exposure to pediatric patients was developed. The bulk of the initial work in this research endeavour was the development of KNIFE, which itself went through countless iterations before arriving at its current state as per the established feature requirements. Secondly, since this work involved the use of captured medical images in an IGS software suite, a brief analysis of the physics behind the images was conducted. Through this aspect of the work, the intrinsic parameters (principal point and focal point) of the fluoroscope were quantified using a 3D grid calibration phantom. A second grid phantom was traversed through the fluoroscopic imaging volume of image intensifier (II) and flat-panel based systems at 2 cm intervals, building a scatter field of the volume to demonstrate pincushion and 'S' distortion in the images. The effect of projection distortion on the images was assessed by measuring the fiducial registration error (FRE) of each point used in two different registration techniques, where both methods utilized ordinary Procrustes analysis but the second used a projection matrix built from the fluoroscope's calculated intrinsic parameters. A case study was performed to test whether the projection registration outperforms the rigid transform alone. Using the knowledge generated, we were able to successfully design and complete mock clinical procedures using cardiac phantom models. These mock trials at the beginning of this work used a single point to represent catheter location, but this was eventually replaced with a full shape model that offered numerous advantages. At the conclusion of this work, a novel protocol for conducting image-guided ASD procedures was developed.
    Future work would involve the construction of novel EM-tracked tools, phantom models for other vascular diseases, and finally clinical integration and use.
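    Two computations underpin the registration analysis described above: a rigid (ordinary Procrustes) alignment between corresponding fiducials and the fiducial registration error of the result. The sketch below shows a common SVD-based formulation of both; it is a generic illustration with made-up fiducial coordinates, not the KNIFE implementation.

```python
# Rigid point-set registration (ordinary Procrustes / Kabsch) and fiducial
# registration error (FRE); a generic sketch, not the KNIFE implementation.
import numpy as np

def rigid_procrustes(src, dst):
    """Least-squares rotation R and translation t mapping src -> dst.
    src, dst: (N, 3) arrays of corresponding fiducial positions."""
    src_c, dst_c = src.mean(0), dst.mean(0)
    H = (src - src_c).T @ (dst - dst_c)                  # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    t = dst_c - R @ src_c
    return R, t

def fre(src, dst, R, t):
    """Root-mean-square distance between transformed fiducials and their targets."""
    residuals = (R @ src.T).T + t - dst
    return np.sqrt((residuals ** 2).sum(axis=1).mean())

# Hypothetical fiducials: tracker space vs. image space (mm), with small noise.
src = np.random.default_rng(0).uniform(-50, 50, (6, 3))
t_true = np.array([5.0, -2.0, 10.0])
dst = src + t_true + np.random.default_rng(1).normal(0, 0.3, src.shape)
R, t = rigid_procrustes(src, dst)
print(f"FRE = {fre(src, dst, R, t):.2f} mm")
```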

    Data stream processing meets the Advanced Metering Infrastructure: possibilities, challenges and applications

    The distribution of electricity is changing. Energy production is increasingly distributed, weather dependent, and located in the distribution network, close to consumers. Energy consumption is increasing throughout society, and the electrification of transportation is driving distribution networks closer to their limits. Operating the networks closer to their limits also increases the risk of faults. Continuous monitoring of the distribution network closest to the customers is needed in order to mitigate this risk. The Advanced Metering Infrastructure (AMI) introduced smart meters throughout the distribution network. Data stream processing is a computing paradigm that offers low-latency results from analysis of large volumes of data. This thesis investigates the possibilities and challenges for continuous monitoring that are created when the Advanced Metering Infrastructure and data stream processing meet. The challenges addressed in the thesis are efficient processing of unordered (also called out-of-order) data and efficient usage of the computational resources present in the Advanced Metering Infrastructure. Contributions towards more efficient processing of out-of-order data are made with eChIDNA and TinTiN. Both are systems that utilize knowledge about smart meter data to directly produce results where possible, storing only the data needed to produce updated results when late data arrives. eChIDNA is integrated in the streaming query itself, while TinTiN is a streaming middleware that can be applied to streaming queries in order to make them resilient against out-of-order data. Eventual determinism is defined in order to formally investigate the deterministic properties of the output produced by such systems. Contributions towards efficient usage of the computational resources of the Advanced Metering Infrastructure are made with the application LoCoVolt. LoCoVolt implements a monitoring algorithm that can run on equipment located in the communication infrastructure of the Advanced Metering Infrastructure and can take advantage of the overlap between the communication and distribution networks. All contributions are evaluated on hardware that is available in current AMI systems, using large-scale data obtained from a real production AMI.
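    As a much simplified illustration of the out-of-order problem the thesis addresses, the sketch below keeps per-window state for a smart-meter aggregate and re-emits an updated result whenever a late reading arrives, instead of dropping it. The window length and data layout are assumptions; this is not how eChIDNA or TinTiN are designed.

```python
# Toy windowed aggregation that tolerates out-of-order smart-meter readings by
# keeping per-window state and re-emitting updated results when late data
# arrives. Illustrative only; not the eChIDNA/TinTiN design.
from collections import defaultdict

WINDOW = 3600  # seconds per aggregation window (assumption)

class LateTolerantSum:
    def __init__(self):
        self.window_sum = defaultdict(float)   # window start -> running sum (kWh)

    def on_reading(self, timestamp, kwh):
        """Process one (possibly late) reading and return the updated result
        for the window it belongs to."""
        window_start = timestamp - (timestamp % WINDOW)
        self.window_sum[window_start] += kwh
        # Every arrival, early or late, yields an updated (eventually correct)
        # result for its window instead of being dropped.
        return window_start, self.window_sum[window_start]

op = LateTolerantSum()
for ts, kwh in [(10, 0.5), (3700, 0.7), (20, 0.4)]:   # the last reading is late
    print(op.on_reading(ts, kwh))
```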

    Methods for 3D Geometry Processing in the Cultural Heritage Domain

    This thesis presents methods for 3D geometry processing under the aspects of cultural heritage applications. After a short overview of the relevant basics of 3D geometry processing, the thesis investigates the digital acquisition of 3D models. A particular challenge in this context is posed, on the one hand, by difficult surface or material properties of the model to be captured. On the other hand, the fully automatic reconstruction of models, even of those with suitable surface properties that can be captured with laser range scanners, is not yet completely solved. This thesis presents two approaches to tackle these challenges. One exploits a thorough capture of the object's appearance and a coarse reconstruction to obtain a concise and realistic object representation even for objects with problematic surface properties like reflectivity and transparency. The other method concentrates on digitisation via laser range scanners and exploits the 2D colour images that are typically recorded together with the range images for a fully automatic registration technique. After reconstruction, the captured models are often still incomplete and exhibit holes and/or regions of insufficient sampling. In addition, holes are often deliberately introduced into a registered model to remove undesired or defective surface parts. In order to produce a visually appealing model, for instance for visualisation purposes or for prototype or replica production, these holes have to be detected and filled. Although completion is a well-established research field in 2D image processing and many approaches exist for image completion, surface completion in 3D is a fairly new field of research. This thesis presents a hierarchical completion approach that employs and extends successful exemplar-based 2D image processing approaches to 3D and fills detail-equipped surface patches into missing surface regions. In order to identify and construct suitable surface patches, self-similarity and coherence properties of the surface context of the hole are exploited. In addition to reconstruction and repair, the thesis also investigates methods for modifying captured models via interactive modelling. In this context, modelling is regarded as a creative process, for instance for animation purposes. On the other hand, it is also demonstrated how this creative process can be used to introduce human expertise into the otherwise automatic completion process. This way, reconstructions are feasible even of objects where the data source itself, the object, is incomplete due to corrosion, demolition, or decay.
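    Before missing regions can be filled, the holes themselves must be detected; in a triangle mesh this is commonly done by collecting edges that belong to exactly one triangle and chaining them into closed loops. The sketch below shows that standard preprocessing step only; it is not the thesis' exemplar-based completion algorithm.

```python
# Detect hole boundaries in a triangle mesh: edges used by exactly one triangle
# form the boundary loops. A standard preprocessing step; the exemplar-based
# filling itself is not shown here.
from collections import Counter, defaultdict

def hole_loops(triangles):
    """triangles: list of (i, j, k) vertex-index triples.
    Returns a list of boundary loops, each a list of vertex indices."""
    edge_count = Counter()
    for a, b, c in triangles:
        for u, v in ((a, b), (b, c), (c, a)):
            edge_count[frozenset((u, v))] += 1
    boundary = [tuple(e) for e, n in edge_count.items() if n == 1]

    # Chain boundary edges into loops by walking unvisited adjacent edges.
    nxt = defaultdict(list)
    for u, v in boundary:
        nxt[u].append(v); nxt[v].append(u)
    loops, visited = [], set()
    for u, v in boundary:
        if frozenset((u, v)) in visited:
            continue
        loop, cur = [u], v
        visited.add(frozenset((u, v)))
        while cur != u:
            loop.append(cur)
            step = [w for w in nxt[cur] if frozenset((cur, w)) not in visited]
            if not step:
                break                      # open chain; stop gracefully
            visited.add(frozenset((cur, step[0])))
            cur = step[0]
        loops.append(loop)
    return loops

# Tiny example: an open quad made of two triangles has one boundary loop.
print(hole_loops([(0, 1, 2), (0, 2, 3)]))   # one loop over vertices 0..3
```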

    Pedal 4 Purification

    The lack of access to clean drinking water remains one of the largest issues still facing humanity. Pedal 4 Purification is a product that addresses this need by utilizing pre-existing bicycle infrastructure and local freshwater sources to allow people to purify their own drinking water on a daily basis. Attachable to any standard bicycle, the Pedal 4 Purification product consists of pump, purification, cart and adjustable kickstand subsystems that allow the operator to pump, purify and transport 40 L of potable water. Pedaling at a reasonable rate of 60 rpm provides the optimal flow rate of 1.54 L/min through the filter. At this rate, it would take the operator 26 minutes to provide enough drinking water for 20 people for the day. As a result of the drastic reduction in the time and effort it takes for individuals to obtain drinking water, target communities will have more free time, allowing them to focus on other pressing issues. Use of the Pedal 4 Purification product results in a 266% increase in user time savings as well as 54 times the amount of water collected per minute of user exertion compared to traditional methods. The Pedal 4 Purification team travelled to San Andres Itzapa, Guatemala, to manufacture the entire device at Maya Pedal, an NGO that creates ‘bicimaquinas’ to help provide its local community members with basic human resource infrastructure. After the manufacturing process was complete, the team and Maya Pedal workers drove to the highland community of Patzun to implement the design and instruct the local people on how to use it. User feedback was noted after the successful implementation in the developing community of Patzun.
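    The throughput figures quoted above follow from simple arithmetic, reproduced below as a quick check; the 2 L per person per day drinking-water figure is an assumption used only to recover the 20-person claim.

```python
# Quick arithmetic check of the quoted throughput figures; the 2 L/person/day
# drinking-water requirement is an assumption used to recover the "20 people".
flow_l_per_min = 1.54          # optimal filter flow rate at 60 rpm
tank_l = 40.0                  # water pumped and purified per session
per_person_l_per_day = 2.0     # assumed daily drinking-water need

minutes_to_fill = tank_l / flow_l_per_min
people_served = tank_l / per_person_l_per_day
print(f"{minutes_to_fill:.0f} minutes to purify {tank_l:.0f} L "
      f"(~{people_served:.0f} people at {per_person_l_per_day} L/day)")
```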

    Vertex classification for non-uniform geometry reduction.

    Complex models created from isosurface extraction or CAD, and highly accurate 3D models produced by high-resolution scanners, are useful, for example, for medical simulation, Virtual Reality and entertainment. Models in general often require some manual editing before they can be incorporated in a walkthrough, simulation, computer game or movie. The visualization challenges of a 3D editing tool may be regarded as similar to those of other applications that include an element of visualization, such as Virtual Reality. However, the rendering and interaction requirements of each of these applications vary according to their purpose. For rendering photo-realistic images in movies, computer farms can render uninterrupted for weeks; a 3D editing tool, in contrast, requires fast access to a model's fine detail. In Virtual Reality, rendering acceleration techniques such as level of detail (LoD) can temporarily render parts of a scene with alternative lower-complexity versions in order to meet a frame rate tolerable for the user. These alternative versions can be dynamic increments of complexity or static models that were uniformly simplified across the model by minimizing some cost function. Scanners typically have a fixed sampling rate for the entire model being scanned, and therefore may generate large amounts of data in areas that are not of much interest or that contribute little to the application at hand. It is therefore desirable to simplify such models non-uniformly. Features such as very high curvature areas or borders can be detected automatically and simplified differently from other areas without any interaction or visualization. However, a problem arises when one wishes to manually select features of interest in the original model to preserve and to create stand-alone, non-uniformly reduced versions of large models, for example for medical simulation. To inspect and view such models, the memory requirements of LoD representations can be prohibitive and prevent storage of a model in main memory. Furthermore, although asynchronous rendering of a base simplified model ensures a frame rate tolerable to the user whilst detail is paged, no guarantees can be made that what the user is selecting is at the original resolution of the model or at an appropriate LoD, owing to disk lag or the complexity of a particular view selected by the user. This thesis presents an interactive method, in the context of a 3D editing application, for feature selection from any model that fits in main memory. We present a new compression/decompression technique for triangle normals and colours which does not require dedicated hardware, allows for an 87.4% memory reduction with at most 1.3/2.5 degrees of error on triangle normals, and lets larger models fit in main memory and be viewed interactively. To address scale and available hardware resources, we reference a hierarchy of volumes of different sizes. The distances of the volumes at each level of the hierarchy to the intersection point of the line of sight with the model are calculated and sorted. At startup, an appropriate level of the tree is automatically chosen by separating the time required for rendering from that required for sorting and constraining the latter according to the resources available.
A clustered navigation skin and depth buffer strategy allows for the interactive visualisation of models of any size, ensuring that triangles from the closest volumes are rendered over the navigation skin even when the clustered skin may be closer to the viewer than the original model. We show results with scanned models, CAD, textured models and an isosurface. This thesis addresses numerical issues arising from the optimisation of cost functions in LoD algorithms and presents a semi-automatic solution for selection of the threshold on the condition number of the matrix to be inverted for optimal placement of the new vertex created by an edge collapse. We show that the units in which a model is expressed may inadvertently affect the condition of these matrices, hence affecting the evaluation of different LoD methods with different solvers. We use the same solver with an automatically calibrated threshold to evaluate different uniform geometry reduction techniques. We then present a framework for non-uniform reduction of regular scanned models that can be used in conjunction with a variety of LoD algorithms. The benefits of non-uniform reduction are presented in the context of an animation system. (Abstract shortened by UMI.)
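    The condition-number threshold discussed above guards the linear solve used to place the new vertex after an edge collapse: when the matrix is nearly singular, the "optimal" position is unreliable and a fallback is used instead. The sketch below shows a generic guard of this kind in the spirit of quadric error metrics; the threshold value is illustrative, not the thesis' calibrated one. As the thesis observes, rescaling the model (changing its units) changes the conditioning of this matrix, which is why the threshold needs calibration rather than a fixed value.

```python
# Optimal vertex placement for an edge collapse with a condition-number guard,
# in the spirit of quadric error metrics; the threshold value is illustrative,
# not the thesis' calibrated one.
import numpy as np

def place_vertex(Q, v0, v1, cond_threshold=1e7):
    """Q: 4x4 summed quadric of the two endpoints. v0, v1: endpoint positions.
    Returns the new vertex position for the collapsed edge."""
    A = Q[:3, :3]
    b = -Q[:3, 3]
    # If A is ill-conditioned, the 'optimal' solve amplifies noise; fall back
    # to the best of the endpoints and the midpoint instead.
    if np.linalg.cond(A) < cond_threshold:
        return np.linalg.solve(A, b)
    candidates = [v0, v1, 0.5 * (v0 + v1)]

    def cost(v):
        vh = np.append(v, 1.0)            # homogeneous coordinates
        return vh @ Q @ vh                # quadric error at v
    return min(candidates, key=cost)
```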