
    Retrieving Landmark Salience Based on Wikipedia: An Integrated Ranking Model

    Landmarks are important for assisting in wayfinding and navigation and for enriching the user experience. Although many user-generated geotagged sources exist, landmark entities are still mostly retrieved from authoritative geographic sources. Wikipedia, the world’s largest free encyclopedia, stores geotagged information on many geospatial entities, including a very large and well-founded volume of landmark information. However, not all Wikipedia geotagged landmark entities can be considered valuable and instructive. This research introduces an integrated ranking model for mining landmarks from Wikipedia, predicated on estimating and weighting their salience. Beyond location, the model is based on the entries’ category and attribute data. A preliminary ranking is formulated on the basis of three spatial descriptors associated with landmark salience, namely permanence, visibility, and uniqueness. This ranking is integrated with a score derived from a set of numerical attributes associated with public interest in the Wikipedia page, including the number of redirects and the date of the latest edit. The methodology is comparatively evaluated for various areas in different cities. Results show that the developed integrated ranking model is robust in identifying landmark salience, paving the way for the incorporation of Wikipedia’s content into navigation systems.
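
    As a rough illustration of how such an integrated salience score could be assembled, the sketch below combines the three spatial descriptors with a public-interest score derived from redirect counts and edit recency. The weights, normalization constants, and field names are illustrative assumptions, not the paper’s actual model.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical illustration of combining a spatial-descriptor score with a
# Wikipedia "public interest" score into one landmark-salience ranking.
# Weights and field names are assumptions, not the paper's parameters.

@dataclass
class WikiLandmark:
    title: str
    permanence: float    # each spatial descriptor normalized to [0, 1]
    visibility: float
    uniqueness: float
    num_redirects: int
    latest_edit: date

def salience(entry: WikiLandmark,
             w_spatial: float = 0.6,
             w_interest: float = 0.4,
             max_redirects: int = 50) -> float:
    # Preliminary ranking: average of the three spatial descriptors.
    spatial = (entry.permanence + entry.visibility + entry.uniqueness) / 3.0
    # Interest score: more redirects and a more recent edit -> higher score.
    redirect_score = min(entry.num_redirects, max_redirects) / max_redirects
    staleness_days = (date.today() - entry.latest_edit).days
    recency_score = 1.0 / (1.0 + staleness_days / 365.0)
    interest = 0.5 * redirect_score + 0.5 * recency_score
    return w_spatial * spatial + w_interest * interest

landmarks = [
    WikiLandmark("City Hall", 0.9, 0.8, 0.6, 35, date(2015, 11, 2)),
    WikiLandmark("Corner Cafe", 0.3, 0.4, 0.2, 2, date(2013, 5, 14)),
]
for lm in sorted(landmarks, key=salience, reverse=True):
    print(f"{lm.title}: {salience(lm):.3f}")
```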

    IMPLEMENTING SIFT AND BI-TRIANGULAR PLANE TRANSFORMATION FOR INTEGRATING DIGITAL TERRAIN MODELS

    Since their inception in the middle of the twentieth century, Digital Terrain Models (DTMs) have played an important role in many fields and applications used by geospatial professionals, ranging from commercial companies to government agencies. Thus, both the scientific community and the industry have introduced many methods and technologies for DTM generation and data handling. These have resulted in a high volume and variety of DTM databases, each having different coverage and data characteristics, such as accuracy, resolution, and level of detail, amongst others. These various factors can cause a dilemma for scientists, mappers, and engineers who now have to choose a DTM to work with, especially when several such representations exist for a given area. Traditionally, researchers tackled this problem by using only one DTM (e.g., the most accurate or detailed one), and only rarely tried to implement data-fusion approaches that combine several DTMs into one cohesive unit. Although this was to some extent successful in reducing errors and improving the overall integrated DTM accuracy, two prominent problems are still scarcely addressed. The first is that the horizontal datum distortions and discrepancies between the DTMs are mostly ignored, with only the height dimension taken into account, even though in most cases these are evident. The second is that most approaches operate on a global scale, and thus do not address the more localized variations and discrepancies present in the different DTMs. Both problems affect the quality of the resulting integrated DTM, which retains these unresolved distortions and discrepancies, resulting in a representation that is to some extent inferior and ambiguous. To tackle this, we propose an image-based fusion approach: using the SIFT algorithm for matching and registration of the different representations, alongside localized morphing. Implementing the proposed approach and algorithms on various DTMs yields promising results, with the capacity to correctly align the DTMs geospatially, reducing the mean height-difference variance between the databases to close to zero, as well as reducing the standard deviation between them by more than 30%.
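
    A minimal sketch of the image-based matching idea is shown below: two overlapping DTM grids are scaled to 8-bit grayscale images, SIFT keypoints are matched between them, and a planar transformation is estimated with RANSAC. This is an assumption-laden illustration using OpenCV; the paper’s bi-triangular plane transformation and localized morphing are not reproduced.

```python
import cv2
import numpy as np

# Sketch of the image-based matching step between two overlapping DTM grids.
# The bi-triangular transformation and localized morphing are omitted.

def dtm_to_image(dtm: np.ndarray) -> np.ndarray:
    """Scale a height grid to an 8-bit image so SIFT can operate on it."""
    norm = (dtm - dtm.min()) / max(dtm.max() - dtm.min(), 1e-9)
    return (norm * 255).astype(np.uint8)

def match_dtms(dtm_a: np.ndarray, dtm_b: np.ndarray) -> np.ndarray:
    img_a, img_b = dtm_to_image(dtm_a), dtm_to_image(dtm_b)
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)

    # Lowe's ratio test keeps only distinctive correspondences.
    matcher = cv2.BFMatcher()
    good = [m for m, n in matcher.knnMatch(des_a, des_b, k=2)
            if m.distance < 0.75 * n.distance]

    src = np.float32([kp_a[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_b[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

    # Robust estimate of the horizontal transformation between the grids.
    transform, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    return transform   # 2x3 matrix mapping dtm_a grid coordinates onto dtm_b
```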

    Therapeutic limitations in tumor-specific CD8+ memory T cell engraftment

    BACKGROUND: Adoptive immunotherapy with cytotoxic T lymphocytes (CTL) represents an alternative approach to treating solid tumors. Ideally, this would confer long-term protection against the tumor. We previously demonstrated that in vitro-generated tumor-specific CTL from the ovalbumin (OVA)-specific OT-I T cell receptor transgenic mouse persisted long after adoptive transfer as memory T cells. When recipient mice were challenged with the OVA-expressing E.G7 thymoma, tumor growth was delayed and sometimes prevented. The reasons for therapeutic failures were not clear. METHODS: OT-I CTL were adoptively transferred to C57BL/6 mice 21–28 days prior to tumor challenge. At this time, the donor cells had the phenotypical and functional characteristics of memory CD8+ T cells. Recipients that developed tumors despite adoptive immunotherapy were analyzed to evaluate the reason(s) for therapeutic failure. RESULTS: Dose-response studies demonstrated that the degree of tumor protection was directly proportional to the number of OT-I CTL adoptively transferred. At a low dose of OT-I CTL, therapeutic failure was attributed to insufficient numbers of OT-I T cells persisting in vivo, rather than to mechanisms that actively suppressed or anergized the OT-I T cells. In recipients of high numbers of OT-I CTL, the E.G7 tumor that developed was shown to be resistant to fresh OT-I CTL when examined ex vivo. Furthermore, these same tumor cells no longer secreted a detectable level of OVA. In this case, resistance to immunotherapy was secondary to selection of E.G7 clones that expressed a lower level of tumor antigen. CONCLUSIONS: Memory engraftment with tumor-specific CTL provides long-term protection against tumor. However, there are several limitations to this immunotherapeutic strategy, especially when targeting a single antigen. This study illustrates the importance of administering large numbers of effectors to engraft sufficiently efficacious immunologic memory. It also demonstrates the importance of targeting several antigens when developing vaccine strategies for cancer.

    High Cooperativity of the SV40 Major Capsid Protein VP1 in Virus Assembly

    SV40 is a small, non-enveloped DNA virus with an icosahedral capsid of 45 nm. The outer shell is composed of pentamers of the major capsid protein, VP1, linked via their flexible carboxy-terminal arms. Its morphogenesis occurs by assembly of capsomers around the viral minichromosome. However, the steps leading to the formation of mature virus are poorly understood. Intermediates of the assembly reaction could not be isolated from cells infected with wild-type SV40. Here we have used recombinant VP1 produced in insect cells for in vitro assembly studies around supercoiled heterologous plasmid DNA carrying a reporter gene. This strategy yields infective nanoparticles, affording a simple quantitative transduction assay. We show that VP1 assembles under physiological conditions into uniform nanoparticles of the same shape, size, and CsCl density as the wild-type virus. The stoichiometry is one DNA molecule per capsid. VP1 deleted in the C-arm, which is unable to assemble but can bind DNA, was inactive, indicating genuine assembly rather than non-specific DNA binding. The reaction requires host enzymatic activities, consistent with the participation of chaperones, as recently shown. Our results demonstrate dramatic cooperativity of VP1, with a Hill coefficient of ∼6. These findings suggest that assembly may be a concerted reaction. We propose that concerted assembly is facilitated by simultaneous binding of multiple capsomers to a single DNA molecule, as we have recently reported, thus increasing their local concentration. Emerging principles of SV40 assembly may help in understanding the assembly of other complex systems. In addition, the SV40-based nanoparticles described here are potential gene therapy vectors that combine efficient gene delivery with safety and flexibility.
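
    A Hill coefficient of this kind is typically estimated by fitting the standard Hill equation to a sigmoidal dose-response curve, here assembly/transduction yield versus VP1 concentration. The sketch below shows such a fit on synthetic placeholder data (generated from the equation itself), not the paper’s measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# Worked sketch of estimating a Hill coefficient from a dose-response curve.
# Data points below are synthetic placeholders, not the paper's measurements.

def hill(c, ymax, k, n):
    """Hill equation: fractional response at concentration c."""
    return ymax * c**n / (k**n + c**n)

conc = np.array([0.2, 0.4, 0.6, 0.8, 1.0, 1.2, 1.5, 2.0])   # arbitrary units
signal = hill(conc, 1.0, 0.9, 6.0) \
    + np.random.default_rng(0).normal(0, 0.02, conc.size)   # toy noisy data

popt, _ = curve_fit(hill, conc, signal, p0=[1.0, 1.0, 4.0])
print(f"fitted Hill coefficient n ~ {popt[2]:.1f}")          # close to 6 here
```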

    OCTREE-BASED SIMD STRATEGY FOR ICP REGISTRATION AND ALIGNMENT OF 3D POINT CLOUDS

    Matching and fusion of 3D point clouds, such as close-range laser scans, is important for creating an integrated 3D model data infrastructure. The Iterative Closest Point (ICP) algorithm for alignment of point clouds is one of the most commonly used algorithms for matching of rigid bodies. Evidently, scans are acquired from different positions and might present different data characteristics and accuracies, introducing complex data-handling issues. The growing demand for near real-time applications also introduces new computational requirements and constraints into such processes. This research proposes a methodology for solving the computational and processing complexities of the ICP algorithm by introducing specific performance enhancements that enable more efficient analysis and processing. An Octree data structure, together with the caching of localized Delaunay triangulation-based surface meshes, is implemented to increase computational efficiency and data handling. Parallelization of the ICP process is carried out using the Single Instruction, Multiple Data (SIMD) processing scheme, based on the divide-and-conquer multi-branched paradigm, enabling multiple processing elements to perform the same operation on multiple data elements independently and simultaneously. When compared to traditional non-parallel list processing, the Octree-based SIMD strategy showed a sharp increase in computational performance and efficiency, together with a reliable and accurate alignment of large 3D point clouds, contributing to a high-quality and efficient application.
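
    For orientation, a minimal vectorized ICP loop is sketched below: a KD-tree stands in for the paper’s Octree index and NumPy’s array-wide operations stand in for explicit SIMD parallelism, while the cached Delaunay surface meshes and the divide-and-conquer partitioning are omitted.

```python
import numpy as np
from scipy.spatial import cKDTree

# Minimal vectorized ICP sketch; not the paper's optimized implementation.

def best_rigid_transform(src: np.ndarray, dst: np.ndarray):
    """Least-squares rotation R and translation t mapping src onto dst (Nx3)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:           # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, c_dst - R @ c_src

def icp(source: np.ndarray, target: np.ndarray, iters: int = 30, tol: float = 1e-6):
    tree = cKDTree(target)             # built once; plays the spatial index's role
    src = source.copy()
    prev_err = np.inf
    for _ in range(iters):
        dist, idx = tree.query(src)    # nearest neighbours for all points at once
        R, t = best_rigid_transform(src, target[idx])
        src = src @ R.T + t
        err = dist.mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return src, err
```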

    MULTI-STAGE APPROACH TO TRAVEL-MODE SEGMENTATION AND CLASSIFICATION OF GPS TRACES

    This paper presents a multi-stage approach toward the robust classification of travel-modes from GPS traces. Because GPS traces are often composed of more than one travel-mode, they are segmented to find sub-traces, each characterized by an individual travel-mode. This is done by identifying stops and thereby delineating individual movement segments. In the first stage of classification, three main travel-mode classes are identified: pedestrian, bicycle, and motorized vehicles; this is achieved on the basis of the identified segments, using speed-, acceleration- and heading-related parameters. Segments are then linked up to form sub-traces of an individual travel-mode. After the first stage, a finer classification of the motorized-vehicle class into cars, buses, trams, and trains is carried out on these sub-traces using the Support Vector Machine (SVM) method. This paper presents a high-quality classification of travel-modes, thus introducing robust and precise capabilities for the problem at hand.
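
    As a rough sketch of the two building blocks described above, the code below splits a trace into movement segments at stops and classifies per-segment speed, acceleration, and heading features with an SVM. The thresholds and the feature set are illustrative assumptions, not the paper’s actual parameters.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def split_at_stops(speeds, stop_speed=0.5, min_stop_len=5):
    """Split a per-fix speed series (m/s) into movement segments at stops."""
    moving = np.asarray(speeds) >= stop_speed
    segments, current, still = [], [], 0
    for i, m in enumerate(moving):
        if m:
            if still >= min_stop_len and current:   # a long stop just ended
                segments.append(current)
                current = []
            still = 0
            current.append(i)
        else:
            still += 1
    if current:
        segments.append(current)
    return segments

def segment_features(speeds, headings, idx):
    """Mean/max speed, speed-change proxy for acceleration, mean turn rate."""
    v = np.asarray(speeds)[idx]
    h = np.unwrap(np.radians(np.asarray(headings)[idx]))
    acc = np.diff(v) if len(v) > 1 else np.zeros(1)
    turn = np.abs(np.diff(h)) if len(h) > 1 else np.zeros(1)
    return [v.mean(), v.max(), np.abs(acc).mean(), turn.mean()]

# X: one feature row per segment; y: labels such as "pedestrian", "bicycle", "motorized".
def train_mode_classifier(X, y):
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
    return clf.fit(X, y)
```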

    WIKIPEDIA ENTRIES AS A SOURCE OF CAR NAVIGATION LANDMARKS

    Car navigation systems today provide an easy and simple solution to the basic task of reaching a destination. Although these systems usually achieve this goal, they still deliver a limited and poor sequence of instructions that does not consider the human tendency to use landmarks during wayfinding. This research paper addresses the concept of enriching navigation route instructions by adding supplementary route information in the form of landmarks. We aim to use a contributed source of landmark information that is easy to access, readily available, frequently updated, and rich in content. For this purpose, Wikipedia was chosen, since it is the world’s largest free encyclopaedia and includes information about many spatial entities. A survey and classification of available landmarks is implemented, coupled with ranking algorithms based on the entries’ categories and attributes. These are aimed at retrieving the most relevant landmark information for enriching a specific navigation route. The paper presents this methodology, together with examples and results, showing the feasibility of this concept and its potential for enriching navigation processes.
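
    As an illustration of how candidate entries might be collected near a route point, the sketch below assumes the publicly documented MediaWiki GeoData geosearch endpoint; the parameter values shown here are illustrative and this is not the paper’s retrieval pipeline.

```python
import requests

# Hypothetical sketch: candidate landmark entries near a route point via the
# MediaWiki geosearch API. Ranking by category/attributes would follow.

WIKI_API = "https://en.wikipedia.org/w/api.php"

def nearby_entries(lat: float, lon: float, radius_m: int = 1000, limit: int = 50):
    params = {
        "action": "query",
        "list": "geosearch",
        "gscoord": f"{lat}|{lon}",
        "gsradius": radius_m,          # metres around the route point
        "gslimit": limit,
        "format": "json",
    }
    resp = requests.get(WIKI_API, params=params, timeout=10)
    resp.raise_for_status()
    return resp.json()["query"]["geosearch"]   # entries with title, lat, lon, dist

# Example: candidate landmarks within 1 km of a manoeuvre point on the route.
for entry in nearby_entries(52.5163, 13.3777, radius_m=1000)[:10]:
    print(entry["title"], round(entry["dist"]), "m")
```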