
    Conditional Adversarial Networks for Multimodal Photo-Realistic Point Cloud Rendering

    We investigate whether conditional generative adversarial networks (C-GANs) are suitable for point cloud rendering. For this purpose, we created a dataset containing approximately 150,000 image pairs, each consisting of a point cloud rendering and the corresponding camera image. The dataset was recorded using our mobile mapping system, with capture dates spread across one year. Our model learns to predict realistic-looking images from point cloud data alone. We show that this approach can be used to colourize point clouds without using any camera images. Additionally, we show that by parameterizing the recording date, we are even able to predict realistic-looking views for different seasons from identical input point clouds.
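    The "parameterizing the recording date" idea can be pictured as feeding the generator an extra conditioning channel alongside the point cloud rendering. The abstract does not specify the architecture, so the encoding below (a constant-valued season channel concatenated to the input) is purely a hypothetical sketch:

```python
import numpy as np

def add_season_channel(rendering, season, n_seasons=4):
    """Concatenate a constant conditioning channel encoding the season to an
    H x W x C point-cloud rendering (hypothetical encoding scheme)."""
    h, w, _ = rendering.shape
    # Normalize the season index into [0, 1] and broadcast it over the image.
    cond = np.full((h, w, 1), season / (n_seasons - 1), dtype=rendering.dtype)
    return np.concatenate([rendering, cond], axis=-1)

rendering = np.zeros((4, 4, 3), dtype=np.float32)  # dummy depth/intensity render
conditioned = add_season_channel(rendering, season=2)
```

    At inference time the same point cloud rendering could then be paired with different season values to produce different seasonal appearances.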

    A Review of Panoptic Segmentation for Mobile Mapping Point Clouds

    3D point cloud panoptic segmentation is the combined task to (i) assign each point to a semantic class and (ii) separate the points in each class into object instances. Recently, there has been increased interest in such comprehensive 3D scene understanding, building on the rapid advances in semantic segmentation brought about by deep 3D neural networks. Yet, to date there is very little work on panoptic segmentation of outdoor mobile-mapping data, and no systematic comparisons. The present paper tries to close that gap. It reviews the building blocks needed to assemble a panoptic segmentation pipeline and the related literature. Moreover, a modular pipeline is set up to perform comprehensive, systematic experiments to assess the state of panoptic segmentation in the context of street mapping. As a byproduct, we also provide the first public dataset for that task, by extending the NPM3D dataset to include instance labels. That dataset and our source code are publicly available. We discuss which adaptations are needed to apply current panoptic segmentation methods to outdoor scenes and large objects. Our study finds that for mobile mapping data, KPConv performs best but is slower, while PointNet++ is fastest but performs significantly worse; sparse CNNs lie in between. Regardless of the backbone, instance segmentation by clustering embedding features is better than using shifted coordinates.

    BiasBed -- Rigorous Texture Bias Evaluation

    The well-documented presence of texture bias in modern convolutional neural networks has led to a plethora of algorithms that promote an emphasis on shape cues, often to support generalization to new domains. Yet, common datasets, benchmarks and general model selection strategies are missing, and there is no agreed, rigorous evaluation protocol. In this paper, we investigate difficulties and limitations when training networks with reduced texture bias. In particular, we also show that proper evaluation and meaningful comparisons between methods are not trivial. We introduce BiasBed, a testbed for texture- and style-biased training, including multiple datasets and a range of existing algorithms. It comes with an extensive evaluation protocol that includes rigorous hypothesis testing to gauge the significance of the results, despite the considerable training instability of some style bias methods. Our extensive experiments shed new light on the need for careful, statistically founded evaluation protocols for style bias (and beyond). For example, we find that some algorithms proposed in the literature do not significantly mitigate the impact of style bias at all. With the release of BiasBed, we hope to foster a common understanding of consistent and meaningful comparisons, and consequently faster progress towards learning methods free of texture bias. Code is available at https://github.com/D1noFuzi/BiasBe
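    The kind of hypothesis testing the abstract calls for can be illustrated with a simple two-sided sign test over paired runs of two algorithms. This stdlib-only sketch is not BiasBed's actual protocol, which is more elaborate; the scores are invented for illustration:

```python
from math import comb

def sign_test_p(scores_a, scores_b):
    """Two-sided sign test: probability of an imbalance of pairwise wins at
    least this extreme if A and B were equally good (ties are dropped)."""
    wins_a = sum(a > b for a, b in zip(scores_a, scores_b))
    wins_b = sum(b > a for a, b in zip(scores_a, scores_b))
    n = wins_a + wins_b
    k = max(wins_a, wins_b)
    # One-sided binomial tail under the null p = 0.5, then doubled.
    tail = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# Five paired runs of two hypothetical debiasing methods (made-up accuracies).
p = sign_test_p([0.71, 0.69, 0.72, 0.70, 0.73],
                [0.65, 0.66, 0.64, 0.67, 0.66])
```

    Even when one method wins every one of five paired runs, the sign test only yields p = 0.0625, which illustrates why a handful of runs rarely suffices for significance claims.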

    Towards accurate instance segmentation in large-scale LiDAR point clouds

    Panoptic segmentation is the combination of semantic and instance segmentation: assign the points in a 3D point cloud to semantic categories and partition them into distinct object instances. It has many obvious applications for outdoor scene understanding, from city mapping to forest management. Existing methods struggle to segment nearby instances of the same semantic category, like adjacent pieces of street furniture or neighbouring trees, which limits their usability for inventory- or management-type applications that rely on object instances. This study explores the steps of the panoptic segmentation pipeline concerned with clustering points into object instances, with the goal of alleviating that bottleneck. We find that a carefully designed clustering strategy, which leverages multiple types of learned point embeddings, significantly improves instance segmentation. Experiments on the NPM3D urban mobile mapping dataset and the FOR-instance forest dataset demonstrate the effectiveness and versatility of the proposed strategy.
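    The clustering step described above maps each point to a learned embedding vector and groups points whose embeddings are close. As a toy stand-in for that idea (the paper's actual strategy combines several embedding types), a greedy threshold-based grouping might look like this:

```python
import numpy as np

def cluster_embeddings(emb, threshold=0.5):
    """Greedy clustering of per-point embeddings: a point joins the first
    cluster whose seed embedding is within `threshold`, else it seeds a new
    cluster. Illustrative only; not the method used in the paper."""
    labels = np.full(len(emb), -1, dtype=int)
    seeds = []
    for i, e in enumerate(emb):
        for c, seed in enumerate(seeds):
            if np.linalg.norm(e - seed) < threshold:
                labels[i] = c
                break
        else:
            seeds.append(e.copy())
            labels[i] = len(seeds) - 1
    return labels

# Two tight groups of 2D embeddings should yield two instances.
emb = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
labels = cluster_embeddings(emb)
```

    The hard part in practice is exactly the failure mode the abstract names: adjacent objects of the same class whose embeddings overlap, which a single fixed threshold cannot separate.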

    Innovation activities of firms in Germany - results of the German CIS 2012 and 2014: background report on the surveys of the Mannheim Innovation Panel conducted in the years 2013 to 2016

    Innovation is regarded as a key driver of productivity and market growth and thus has a great potential for increasing wealth. Surveying innovation activities of firms is an important contribution to a better understanding of the process of innovation and how policy may intervene to maximise the social returns of private investment into innovation. Over the past three decades, research has developed a detailed methodology to collect and analyse innovation activities at the firm level. The Oslo Manual, published by OECD and Eurostat (2005), is one important outcome of these efforts. In 1993, both organisations started a joint initiative, known as the Community Innovation Survey (CIS), to collect firm-level data on innovation across countries in a coordinated way. The German contribution to this activity is the so-called Mannheim Innovation Panel (MIP), an annual survey implemented with the first CIS wave in 1993. The MIP fully applies the methodological recommendations laid down in the Oslo Manual. It is designed as a panel survey, i.e. the same gross sample of firms is surveyed each year, with a biannual refreshment of the sample. The MIP is commissioned by the German Federal Ministry of Education and Research (BMBF) and conducted by the Centre for European Economic Research (ZEW) in cooperation with the Fraunhofer Institute for Systems and Innovation Research (ISI) and the Institute for Applied Social Science (infas). This publication reports main results of the MIP surveys conducted in the years 2013, 2014, 2015 and 2016. The surveys of the years 2013 and 2015 were the German contribution to the CIS for the reference years 2012 and 2014. The purpose of this report is to present descriptive results on various innovation indicators for the German enterprise sector.

    Improving deep learning based semantic segmentation with multi-view outlier correction

    The goal of this paper is to use transfer learning for semi-supervised semantic segmentation in 2D images: given a pretrained deep convolutional network (DCNN), our aim is to adapt it to a new camera-sensor system by enforcing predictions to be consistent for the same object in space. This is enabled by projecting 3D object points into multi-view 2D images. Since every 3D object point is usually mapped to a number of 2D images, each of which undergoes a pixelwise classification using the pretrained DCNN, we obtain a number of predictions (labels) for the same object point. This makes it possible to detect and correct outlier predictions. Ultimately, we retrain the DCNN on the corrected dataset in order to adapt the network to the new input data. We demonstrate the effectiveness of our approach on a mobile mapping dataset containing over 10'000 images and more than 1 billion 3D points. Moreover, we manually annotated a subset of the mobile mapping images and show that we were able to raise the mean intersection over union (mIoU) by approximately 10% with Deeplabv3+, using our approach. © 2020 International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences - ISPRS Archives
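    Because each 3D point collects one predicted label per 2D view it projects into, a natural way to detect and correct outlier predictions is a per-point majority vote. The sketch below illustrates that idea with invented point ids and class names; the paper's actual correction scheme may differ:

```python
from collections import Counter

def correct_outliers(per_view_labels):
    """For each 3D point id, take the majority vote over the labels predicted
    in all 2D views it projects into; minority predictions are treated as
    outliers and overwritten. Illustrative sketch only."""
    return {pid: Counter(votes).most_common(1)[0][0]
            for pid, votes in per_view_labels.items()}

# Point 0 was mislabelled "car" in one of four views; the vote corrects it.
votes = {0: ["road", "road", "car", "road"], 1: ["tree", "tree"]}
corrected = correct_outliers(votes)
```

    The corrected per-point labels can then be projected back into the images to build the cleaned training set on which the DCNN is retrained.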

    Halothane hepatitis with renal failure treated with hemodialysis and exchange transfusion

    A 38-year-old white female, hepatitis B antigen negative, developed fulminant hepatic failure associated with oliguria and severe azotemia after two halothane anesthetics, without exposure to other hepatotoxic drugs or blood transfusions. She was treated with multiple hemodialyses and exchange blood transfusions. The combined treatment corrected the uremic abnormalities and improved her level of consciousness. Liver and kidney function gradually improved, and she made a complete recovery, the first recorded in a patient with hepatic and renal failure under these post-anesthetic conditions. Further evaluation of the combined treatment used for this patient is warranted. © 1974 The Japan Surgical Society

    Innovation in Germany - results of the German CIS 2006 to 2010

    Innovation is regarded as a key driver of productivity and market growth and thus has a great potential for increasing wealth. Surveying innovation activities of firms is an important contribution to a better understanding of the process of innovation and how policy may intervene to maximise the social returns of private investment into innovation. Over the past three decades, research has developed a detailed methodology to collect and analyse innovation activities at the firm level. The Oslo Manual, published by OECD and Eurostat (2005), is one important outcome of these efforts. In 1993, both organisations started a joint initiative, known as the Community Innovation Survey (CIS), to collect firm-level data on innovation across countries in a coordinated way. The German contribution to this activity is the so-called Mannheim Innovation Panel (MIP), an annual survey implemented with the first CIS wave in 1993. The MIP fully applies the methodological recommendations laid down in the Oslo Manual. It is designed as a panel survey, i.e. the same gross sample of firms is surveyed each year, with a biannual refreshment of the sample. The MIP is commissioned by the German Federal Ministry of Education and Research (BMBF) and conducted by the Centre for European Economic Research (ZEW) in cooperation with the Fraunhofer Institute for Systems and Innovation Research (ISI) and the Institute for Applied Social Science (infas).