466 research outputs found

    Local dynamics for fibered holomorphic transformations

    Fibered holomorphic dynamics are skew-product transformations over an irrational rotation whose fibers are holomorphic functions. In this paper we study such dynamics on a neighborhood of an invariant curve and obtain results analogous to those in the non-fibered case.
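
    The skew-product structure described in the abstract can be written out explicitly; the notation below is a standard convention assumed for illustration, not taken from the paper:

    ```latex
    % Fibered (skew-product) holomorphic dynamics over an irrational rotation:
    % the base coordinate theta is rotated by a fixed irrational angle alpha,
    % while the fiber coordinate z is mapped by a holomorphic map depending on theta.
    F \colon S^1 \times \mathbb{C} \to S^1 \times \mathbb{C}, \qquad
    F(\theta, z) = \bigl(\theta + \alpha,\; f_\theta(z)\bigr), \quad \alpha \in \mathbb{R} \setminus \mathbb{Q}
    ```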

    Spatial information retrieval and geographical ontologies: an overview of the SPIRIT project

    A large proportion of the resources available on the world-wide web refer to information that may be regarded as geographically located. Most activities and enterprises take place in one or more places on the Earth's surface, and there is a wealth of survey data, images, maps and reports that relate to specific places or regions. Despite the prevalence of geographical context, existing web search facilities are poorly adapted to help people find information that relates to a particular location. When the name of a place is typed into a typical search engine, web pages that include that name in their text will be retrieved, but many other resources associated with the place are likely to be missed: resources relating to places inside the specified place may not be found, nor may those relating to nearby places or to places that are equivalent but referred to by another name. Specifying geographical context frequently requires spatial relationships concerning, for example, distance or containment, yet such terminology cannot be understood by existing search engines. Here we provide a brief survey of existing facilities for geographical information retrieval on the web, before describing a set of tools and techniques being developed in the SPIRIT project (Spatially-Aware Information Retrieval on the Internet, funded by European Commission Framework V, Project IST-2001-35047).
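
    The spatial relations mentioned above ("near", "inside") can be made concrete with a small sketch. This is a hypothetical illustration of the kind of query SPIRIT targets, not the project's actual API; all names and coordinates are illustrative:

    ```python
    import math

    # Hedged sketch: resources carry coordinates, and a query combines a keyword
    # with a spatial relation ("near" a point, or "inside" a bounding box).

    def haversine_km(lat1, lon1, lat2, lon2):
        """Great-circle distance between two points in kilometres."""
        r = 6371.0  # mean Earth radius in km
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp = math.radians(lat2 - lat1)
        dl = math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    def near(resource, centre, radius_km):
        """Spatial relation 'near': within radius_km of a centre point."""
        return haversine_km(resource["lat"], resource["lon"], *centre) <= radius_km

    def inside(resource, bbox):
        """Spatial relation 'inside': bbox = (min_lat, min_lon, max_lat, max_lon)."""
        return (bbox[0] <= resource["lat"] <= bbox[2]
                and bbox[1] <= resource["lon"] <= bbox[3])

    resources = [
        {"name": "castle guide", "lat": 52.37, "lon": 9.73},   # Hannover
        {"name": "harbour map", "lat": 53.55, "lon": 9.99},    # Hamburg
    ]
    # "near Hannover within 50 km" keeps only the first resource
    hits = [r["name"] for r in resources if near(r, (52.37, 9.73), 50.0)]
    ```

    A keyword-only engine would treat both resources alike; the spatial predicate is what excludes the Hamburg resource here.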

    Whole-Blood Flow-Cytometric Analysis of Antigen-Specific CD4 T-Cell Cytokine Profiles Distinguishes Active Tuberculosis from Non-Active States

    T-cell-based IFN-γ release assays do not permit distinction of active tuberculosis (TB) from successfully treated disease or latent M. tuberculosis infection. We postulated that IFN-γ and IL-2 cytokine profiles of antigen-specific T cells measured by flow cytometry ex vivo might correlate with TB disease activity in vivo. Tuberculin (PPD), ESAT-6 and CFP-10 were used as stimuli to determine antigen-specific cytokine profiles in CD4 T cells from 24 patients with active TB and 28 patients with successfully treated TB using flow cytometry. Moreover, 25 individuals with immunity consistent with latent M. tuberculosis infection and BCG vaccination, respectively, were recruited. Although the frequency of cytokine-secreting PPD-reactive CD4 T cells was higher in patients with active TB than in patients with treated TB (median 0.81% vs. 0.39% of CD4 T cells, p = 0.02), the overlap in frequencies precluded distinction between the groups on an individual basis. When assessing cytokine profiles, PPD-specific CD4 T cells secreting both IFN-γ and IL-2 predominated in treated TB, latent infection and BCG vaccination, whilst in active TB the cytokine profile was shifted towards cells secreting IFN-γ only (p < 0.0001). Cytokine profiles of ESAT-6- or CFP-10-reactive CD4 T cells did not differ between the groups. Receiver operating characteristic (ROC) analysis revealed that frequencies of PPD-specific IFN-γ/IL-2 dual-positive T cells below 56% were an accurate marker for active TB (specificity 100%, sensitivity 70%), enabling effective discrimination from non-active states. In conclusion, a frequency of IFN-γ/IL-2 dual-positive PPD-specific circulating CD4 T cells below 56% is strongly indicative of active TB.
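
    The decision rule reported at the end of the abstract is simple enough to sketch directly: flag active TB when fewer than 56% of the PPD-specific cytokine-secreting CD4 T cells are IFN-γ/IL-2 dual positive. Function and variable names are illustrative, not from the study:

    ```python
    # Sketch of the reported cutoff; cell counts below are synthetic examples.
    ACTIVE_TB_CUTOFF = 0.56  # dual-positive fraction below which TB is flagged

    def flag_active_tb(dual_positive, single_ifng, single_il2):
        """True if the IFN-g/IL-2 dual-positive fraction falls below the cutoff."""
        total = dual_positive + single_ifng + single_il2
        if total == 0:
            raise ValueError("no cytokine-secreting cells measured")
        return dual_positive / total < ACTIVE_TB_CUTOFF

    # active disease: profile shifted towards IFN-gamma single positives
    active = flag_active_tb(dual_positive=30, single_ifng=65, single_il2=5)   # True
    # treated/latent: dual positives predominate
    treated = flag_active_tb(dual_positive=80, single_ifng=15, single_il2=5)  # False
    ```

    The abstract reports this cutoff with 100% specificity and 70% sensitivity, i.e. some active cases fall above the threshold but no non-active ones fall below it.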

    Exploring ALS and DIM data for semantic segmentation using CNNs

    Over the past years, the algorithms for dense image matching (DIM) to obtain point clouds from aerial images have improved significantly. Consequently, DIM point clouds are now a good alternative to the established Airborne Laser Scanning (ALS) point clouds for remote sensing applications. In order to derive high-level products such as digital terrain models or city models, each point within a point cloud must be assigned a class label. Usually, ALS and DIM point clouds are labelled with different classifiers due to their varying characteristics. In this work, we explore both point cloud types in a fully convolutional encoder-decoder network, which learns to classify ALS as well as DIM point clouds. As input, we project the point clouds onto a 2D image raster plane and calculate the minimal, average and maximal height value for each raster cell. The network then differentiates between the classes ground, non-ground, building and no data. We test our network in six training setups using only one point cloud type, both point clouds, as well as several transfer-learning approaches. We quantitatively and qualitatively compare all results and discuss the advantages and disadvantages of each setup. The best network achieves an overall accuracy of 96% on an ALS and 83% on a DIM test set.
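
    The input representation described above (min/average/max height per raster cell) can be sketched in a few lines. This is a minimal illustration under assumed grid parameters, not the paper's actual preprocessing code:

    ```python
    import numpy as np

    def rasterize(points, cell_size=1.0, no_data=0.0):
        """Project (N, 3) x/y/z points onto a 2D grid; return an (H, W, 3)
        raster holding per-cell (min, mean, max) height, a 3-channel image
        a CNN can consume. Empty cells keep the no_data value."""
        xy = points[:, :2]
        origin = xy.min(axis=0)                       # anchor grid at the data
        idx = np.floor((xy - origin) / cell_size).astype(int)
        w, h = idx[:, 0].max() + 1, idx[:, 1].max() + 1
        cells = {}                                    # (row, col) -> list of z
        for (cx, cy), z in zip(map(tuple, idx), points[:, 2]):
            cells.setdefault((cy, cx), []).append(z)
        raster = np.full((h, w, 3), no_data)
        for (cy, cx), zs in cells.items():
            raster[cy, cx] = (min(zs), sum(zs) / len(zs), max(zs))
        return raster

    pts = np.array([[0.2, 0.3, 5.0],
                    [0.8, 0.4, 7.0],
                    [1.5, 0.5, 2.0]])
    raster = rasterize(pts, cell_size=1.0)
    # first cell collects the first two points: heights (min 5, mean 6, max 7)
    ```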

    Joint classification of ALS and DIM point clouds

    National mapping agencies (NMAs) have to acquire nation-wide Digital Terrain Models on a regular basis as part of their obligation to provide up-to-date data. Point clouds from Airborne Laser Scanning (ALS) are an important data source for this task; recently, NMAs have also started deriving Dense Image Matching (DIM) point clouds from aerial images. As a result, NMAs have both point cloud data sources available, which they can exploit for their purposes. In this study, we investigate the potential of transfer learning from ALS to DIM data, so that the time-consuming step of data labelling can be reduced. Due to their specific measurement techniques, the two point cloud types have various distinct properties such as RGB or intensity values, which are often exploited for the classification of either ALS or DIM point clouds. However, these features also hinder transfer learning between the two point cloud types, since they do not exist in the other type. As the mere 3D point coordinates are available in both point cloud types, we focus on transfer learning from an ALS to a DIM point cloud using exclusively the point coordinates. We tackle the issue of different point densities by rasterizing the point cloud into a 2D grid and taking important height features as input for classification. We train an encoder-decoder convolutional neural network with labelled ALS data as a baseline and then fine-tune this baseline with an increasing amount of labelled DIM data. We also train the same network exclusively on all available DIM data as a reference against which to compare our results. We show that using only 10% of the labelled DIM data already improves the classification results notably, which is especially relevant for practical applications.
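
    The transfer-learning schedule (pre-train on labelled ALS data, fine-tune on a small fraction of labelled DIM data) can be sketched with a toy stand-in model. A tiny logistic regression replaces the encoder-decoder CNN, and all data is synthetic; only the training schedule mirrors the setup described above:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def add_bias(X):
        """Append a constant bias feature so the threshold is learnable."""
        return np.hstack([X, np.ones((len(X), 1))])

    def fit(w, X, y, lr=0.1, epochs=300):
        """Full-batch gradient descent on the logistic loss."""
        for _ in range(epochs):
            p = 1.0 / (1.0 + np.exp(-X @ w))      # sigmoid predictions
            w = w - lr * X.T @ (p - y) / len(y)   # gradient step
        return w

    # synthetic "ALS" and "DIM" samples of one related task, slightly shifted
    X_als = add_bias(rng.normal(size=(200, 2)))
    y_als = (X_als[:, 0] + X_als[:, 1] > 0.0).astype(float)
    X_dim = add_bias(rng.normal(size=(200, 2)) + 0.5)
    y_dim = (X_dim[:, 0] + X_dim[:, 1] > 1.0).astype(float)

    w = fit(np.zeros(3), X_als, y_als)            # baseline: pre-train on ALS
    k = len(X_dim) // 10                          # only 10% of DIM labels
    w = fit(w, X_dim[:k], y_dim[:k])              # fine-tune on the small subset

    acc = ((1.0 / (1.0 + np.exp(-X_dim @ w)) > 0.5) == y_dim).mean()
    ```

    The point of the sketch is the schedule, not the model: the fine-tuning step reuses the pre-trained weights instead of starting from scratch.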

    Improving 3D pedestrian detection for wearable sensor data with 2D human pose

    Collisions and safety are important concepts when dealing with urban designs like shared spaces. As pedestrians (especially the elderly and disabled people) are more vulnerable to accidents, realising an intelligent mobility aid that avoids collisions by means of a wearable device is a direction of research that could improve safety. Also, with the improvements in technologies for visualisation and their capabilities to render 3D virtual content, AR devices could be used to realise virtual infrastructure and virtual traffic systems. Such devices (e.g., the HoloLens) scan the environment using stereo and ToF (Time-of-Flight) sensors, which in principle can be used to detect surrounding objects, including dynamic agents such as pedestrians. This can serve as a basis for predicting collisions. To envision an AR device as a safety aid and demonstrate its 3D object detection capability (in particular, pedestrian detection), we propose an improvement to the 3D object detection framework Frustum PointNet using human pose and apply it to data from an AR device. Using the data from such a device in an indoor setting, we conducted a comparative study to investigate how high-level 2D human pose features in our approach can help to improve the detection performance for oriented 3D pedestrian instances over Frustum PointNet.
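
    The frustum idea that gives Frustum PointNet its name can be illustrated in a few lines: a 2D detection in the image (which could come from a pose estimator) selects the 3D points whose projection falls inside the box. The camera intrinsics and coordinates below are illustrative, not those of the HoloLens:

    ```python
    import numpy as np

    # Assumed pinhole intrinsics: fx, fy = 500; principal point (320, 240).
    K = np.array([[500.0,   0.0, 320.0],
                  [  0.0, 500.0, 240.0],
                  [  0.0,   0.0,   1.0]])

    def frustum_points(points, box):
        """points: (N, 3) in camera coordinates (z forward);
        box: (u0, v0, u1, v1) in pixels. Returns points inside the frustum."""
        in_front = points[:, 2] > 0           # discard points behind the camera
        uvw = points @ K.T                    # pinhole projection
        uv = uvw[:, :2] / uvw[:, 2:3]         # perspective divide
        u0, v0, u1, v1 = box
        inside = ((uv[:, 0] >= u0) & (uv[:, 0] <= u1) &
                  (uv[:, 1] >= v0) & (uv[:, 1] <= v1))
        return points[in_front & inside]

    pts = np.array([[0.0, 0.0, 5.0],   # projects to the image centre (320, 240)
                    [3.0, 0.0, 5.0],   # projects to (620, 240), outside the box
                    [0.1, 0.2, 4.0]])  # projects to (332.5, 265), inside
    box = (300, 220, 340, 270)         # hypothetical pedestrian detection
    sel = frustum_points(pts, box)     # keeps 2 of the 3 points
    ```

    The selected points are what the downstream 3D network operates on; the contribution described in the abstract adds 2D pose features on top of this pipeline.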

    Multi-scale building maps from aerial imagery

    Nowadays, the extraction of buildings from aerial imagery is mainly done with deep convolutional neural networks (DCNNs). Buildings are predicted as binary pixel masks and then regularized to polygons. Restricted by nearby occlusions (such as trees), building eaves, and sometimes imperfect imagery data, these results can hardly be used to generate detailed building footprints comparable to authoritative data. Therefore, most products can only be used for mapping at smaller map scales. The level of detail that should be retained is normally determined by the scale parameter of the regularization algorithm. However, this scale information is already defined in cartography. From existing maps of different scales, a neural network can learn such scale information implicitly. The network can then perform generalization directly on the mask output and generate multi-scale building maps at once. In this work, a pipeline method is proposed that can generate multi-scale building maps from aerial imagery directly. We used a land cover classification model to provide the building blobs. With models pre-trained for cartographic building generalization, the blobs were generalized to three target map scales: 1:10,000, 1:15,000, and 1:25,000. After post-processing with vectorization and regularization, multi-scale building maps were generated and then compared with existing authoritative building data qualitatively and quantitatively. In addition, change detection was performed, and suggestions for unmapped buildings could be provided at a desired map scale. © 2020 International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences - ISPRS Archives.
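
    One classical, hand-crafted example of the scale-dependent behaviour the network learns implicitly is a minimum-area rule: buildings too small to be legible at a given scale are dropped. The threshold below follows the common "smallest visible symbol" convention (about 0.25 mm² on the printed map); it is an assumed illustration, not the paper's learned generalization:

    ```python
    MIN_MAP_AREA_MM2 = 0.25  # assumed smallest legible symbol area on the map

    def min_ground_area_m2(scale_denominator):
        """Ground area corresponding to MIN_MAP_AREA_MM2 at a given map scale.
        At 1:10,000 one map millimetre covers 10 m on the ground."""
        mm_to_m = scale_denominator / 1000.0
        return MIN_MAP_AREA_MM2 * mm_to_m ** 2

    def generalize(building_areas_m2, scale_denominator):
        """Keep only buildings large enough to be shown at this scale."""
        threshold = min_ground_area_m2(scale_denominator)
        return [a for a in building_areas_m2 if a >= threshold]

    areas = [12.0, 40.0, 90.0, 400.0]      # building footprints in m^2
    for scale in (10000, 15000, 25000):    # the three target scales above
        kept = generalize(areas, scale)    # fewer buildings survive each step
    ```

    At 1:10,000 the threshold is 25 m², at 1:25,000 it is about 156 m², so the same input yields progressively sparser maps as the scale gets smaller.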

    Trajectory extraction for analysis of unsafe driving behaviour

    The environment of a vehicle can significantly influence the driving situation. Which conditions lead to unsafe driving behaviour is not always clear, not even to a human driver, as the causes might be unconscious and thus cannot be revealed by expert interviews. Therefore, it is important to investigate how such situations can be reliably detected, and then to search for their triggers. It is conceivable that such insecure situations (e.g. near-accidents, U-turns, avoiding obstacles) are reflected, for example, as anomalies in the movement trajectories of road users. Collecting real-world traffic data in driving studies is very time-consuming and expensive. However, many roads and public areas are already monitored with video cameras, and nowadays more and more of such video data is made publicly available over the internet, so the amount of freely available video data is increasing. This research exploits the use of this kind of opportunistic VGI. In this paper, the first step of an automatic analysis is presented: a real-time processing pipeline to extract road user trajectories from surveillance video data.
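
    The trajectory-extraction step can be sketched as per-frame detection followed by track association. The greedy nearest-neighbour linker below is a minimal stand-in for a real tracker (which would add motion models and appearance features); detector output here is synthetic:

    ```python
    import math

    def link(trajectories, detections, max_dist=50.0):
        """Greedily append each detection (an x/y centroid in pixels) to the
        nearest open trajectory, or start a new trajectory if none is close."""
        for det in detections:
            best, best_d = None, max_dist
            for traj in trajectories:
                d = math.dist(traj[-1], det)   # distance to the track's last point
                if d < best_d:
                    best, best_d = traj, d
            if best is not None:
                best.append(det)
            else:
                trajectories.append([det])     # start a new track
        return trajectories

    frames = [                                  # synthetic per-frame detections
        [(100, 200), (400, 220)],               # two road users enter
        [(110, 205), (390, 225)],               # both move slightly
        [(120, 210), (380, 230)],
    ]
    tracks = []
    for dets in frames:
        tracks = link(tracks, dets)             # two tracks, three points each
    ```

    Downstream, anomaly detection would then operate on these trajectories (e.g. flagging U-turn-like shapes), which is the analysis the paper motivates.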