    Neogeography: The Challenge of Channelling Large and Ill-Behaved Data Streams

    Neogeography is the combination of user-generated data and experiences with mapping technologies. In this article we present a research project to extract valuable structured information with a geographic component from unstructured user-generated text in wikis, forums, or SMS messages. The extracted information is integrated to form collective knowledge about a given domain. This structured information can then be used to help users from the same domain retrieve information through a simple question-answering system. The project intends to help worker communities in developing countries share their knowledge, providing a simple and cheap way to contribute and benefit using the available communication technology.
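
The extraction step the abstract describes can be pictured with a toy gazetteer lookup: matching the tokens of an unstructured message against a list of known places. A minimal sketch, in which the gazetteer entries, coordinates, and function names are illustrative and not the project's actual pipeline:

```python
# Illustrative gazetteer for the sketch; a real system would use a
# large geographic database rather than a hard-coded dict.
GAZETTEER = {
    "nairobi": (-1.286, 36.817),
    "mombasa": (-4.043, 39.668),
}

def geoparse(text):
    """Return (place, (lat, lon)) pairs found in a free-text message."""
    tokens = text.lower().replace(",", " ").split()
    return [(t, GAZETTEER[t]) for t in tokens if t in GAZETTEER]
```

Matching every token against the gazetteer is deliberately naive; the project's actual approach to extracting structured information from wikis, forums, and SMS text is not specified in the abstract.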

    GPHY 489.01: Programming for GIS

    GPHY 491.01: Programming for GIS

    DeePOF: A hybrid approach of deep convolutional neural network and friendship to Point‐of‐Interest (POI) recommendation system in location‐based social networks

    Today, millions of active users spend part of their time on location-based social networks (LBSNs) such as Yelp and Gowalla and share rich information there. They can easily learn about their friends' behaviour, see where they are visiting, and be influenced by their style. Given the challenges of rich content and data sparsity, personalized recommendation and the investigation of meaningful features of users and Points of Interest (POIs) are therefore substantial tasks for accurately recommending POIs to users in LBSNs. This work proposes a novel POI-recommendation pipeline named DeePOF, based on deep learning and a convolutional neural network (CNN). The approach considers only the influence of the most similar friendship pattern rather than the friendships of all users, using mean-shift clustering to detect similarity. The spatial and temporal features of the most similar friends are fed into the deep CNN, whose output layers predict the latitude, longitude, and ID of the next appropriate places; the lowest-distance venues are then chosen using the friendship interval of the similar pattern. This combined method is evaluated on two popular LBSN datasets. Experimental results demonstrate that analyzing similar friendships makes recommendations more accurate, and that the proposed model for recommending a sequence of top-k POIs outperforms state-of-the-art approaches.
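
The similarity-detection step the abstract mentions relies on mean-shift clustering. A minimal pure-Python sketch of naive mean-shift over 2-D points follows; the bandwidth, iteration count, and collapse tolerance are illustrative choices, not the paper's settings:

```python
import math

def mean_shift(points, bandwidth=1.0, iters=20):
    """Naive mean-shift: repeatedly move each point's mode toward the
    mean of the original points within `bandwidth` of it."""
    modes = [list(p) for p in points]
    for _ in range(iters):
        for i, m in enumerate(modes):
            nbrs = [p for p in points if math.dist(p, m) <= bandwidth]
            modes[i] = [sum(c) / len(nbrs) for c in zip(*nbrs)]
    # collapse near-identical modes into cluster labels
    labels, centres = [], []
    for m in modes:
        for j, c in enumerate(centres):
            if math.dist(m, c) < 1e-3:
                labels.append(j)
                break
        else:
            centres.append(m)
            labels.append(len(centres) - 1)
    return labels, centres
```

Unlike k-means, mean-shift does not require the number of clusters in advance, which fits the task of discovering however many friendship patterns exist in the data.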

    Semantic Models as Knowledge Repositories for Data Modellers in the Financial Industry

    Data modellers working in the financial industry are expected to use both technical and business knowledge to transform data into the information required to meet regulatory reporting requirements. This dissertation explores the role that semantic models such as ontologies and concept maps can play in the acquisition of financial and regulatory concepts by data modellers. While semantic models are widely used in the financial industry to specify how information is exchanged between IT systems, their use as knowledge repositories is limited. The objective of this research is to evaluate the use of a semantic-model-based knowledge repository through a combination of interviews, model implementation, and experimental evaluation. A semantic model implementation is undertaken to represent the knowledge required to understand sample banking regulatory reports. An iterative process of semantic modelling and knowledge acquisition is followed to create a representation of technical and business domain knowledge in the repository. The completed repository is made up of three concept maps hyperlinked to an ontology. An experimental evaluation of the usefulness of the repository is made by asking both expert and novice financial data modellers to answer questions that require both banking knowledge and an understanding of the information in regulatory reports. The research suggests that both novice and expert data modellers found the knowledge in the ontology and concept maps to be accessible, effective, and useful, with the combination of model types allowing for variations in individual styles of knowledge acquisition. It also suggests that the trend in the financial industry towards semantic models and ontologies would benefit from knowledge-management and modelling techniques.

    An IR-based Approach Towards Automated Integration of Geo-spatial Datasets in Map-based Software Systems

    Data is arguably the most valuable asset of the modern world. In this era, the success of any data-intensive solution relies on the quality of the data that drives it. Among the vast amounts of data captured, managed, and analyzed every day, geospatial data are one of the most interesting classes: they hold geographic information about real-world phenomena and can be visualized as digital maps. Geospatial data is the source of many enterprise solutions that provide local information and insights. To increase the quality of such solutions, companies continuously aggregate geospatial datasets from various sources. However, the lack of a global standard model for geospatial datasets makes merging and integrating datasets difficult and error-prone. Traditionally, domain experts manually validate the data-integration process by checking new data sources and/or new versions of previous data against conflicts and other requirement violations. This approach is not scalable, however, and hinders rapid releases when dealing with frequently changing big datasets, so more automated approaches with limited interaction with domain experts are required. As a first step towards tackling this problem, in this paper we leverage Information Retrieval (IR) and geospatial search techniques to propose a systematic and automated conflict-identification approach. To evaluate our approach, we conduct a case study in which we measure its accuracy in several real-world scenarios and interview software developers at Localintel Inc. (our industry partner) to get their feedback. Comment: ESEC/FSE 2019 - Industry track
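
The kind of conflict identification the paper describes can be illustrated by combining a simple IR-style name similarity with a geodesic distance check: two records that sit close together but carry similar-yet-different names are flagged as candidate conflicts. A hedged sketch, where the thresholds, record schema, and function names are assumptions rather than the paper's actual method:

```python
import math

def haversine_m(a, b):
    """Great-circle distance in metres between two (lat, lon) pairs."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    h = (math.sin(dlat / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2)
    return 2 * 6371000 * math.asin(math.sqrt(h))

def name_similarity(a, b):
    """Token-level Jaccard similarity between two place names."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

def find_conflicts(existing, incoming, max_dist_m=100, min_sim=0.5):
    """Flag incoming records that sit near an existing record but carry
    a similar-yet-not-identical name: candidate integration conflicts."""
    conflicts = []
    for new in incoming:
        for old in existing:
            close = haversine_m(old["loc"], new["loc"]) <= max_dist_m
            sim = name_similarity(old["name"], new["name"])
            if close and min_sim <= sim < 1.0:
                conflicts.append((old["name"], new["name"]))
    return conflicts
```

A real pipeline would use a spatial index instead of the nested loop and a ranked retrieval model instead of plain Jaccard, but the shape of the check, text similarity gated by spatial proximity, is the same.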

    Comprehensive and Practical Policy Compliance in Data Retrieval Systems

    Data retrieval systems such as online search engines and online social networks process many data items coming from different sources, each subject to its own data-use policy. Ensuring compliance with these policies in a large and fast-evolving system presents a significant technical challenge, since bugs, misconfigurations, or operator errors can cause (accidental) policy violations. To prevent such violations, researchers and practitioners develop policy compliance systems. Existing policy compliance systems, however, are either not comprehensive or not practical. To be comprehensive, a compliance system must be able to enforce users' policies regarding their personal privacy preferences, the service provider's own policies regarding data use such as auditing and personalization, and regulatory policies such as data retention and censorship. To be practical, a compliance system needs to meet stringent requirements: (1) runtime overhead must be low; (2) existing applications must run with few modifications; and (3) bugs, misconfigurations, or actions by unprivileged operators must not cause policy violations. In this thesis, we present the design and implementation of two comprehensive and practical compliance systems: Thoth and Shai. Thoth relies on pure runtime monitoring: it tracks data flows by intercepting processes' I/O and then checks the associated policies to allow only policy-compliant flows at runtime. Shai, on the other hand, combines offline analysis and lightweight runtime monitoring: it pushes as many policy checks as possible to an offline (flow) analysis by predicting the policies that data-handling processes will be subject to at runtime, and then it compiles those policies into a set of fine-grained I/O capabilities that can be enforced directly by the underlying operating system.
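
The allow-only-compliant-flows idea behind Thoth can be caricatured in a few lines: each data item carries its own policy, and a flow is admitted only if every item's policy permits the destination. This is a hypothetical sketch of the concept, not Thoth's actual design, policy language, or API:

```python
def policy_allows(policy, destination):
    # In this sketch a policy is simply the set of destinations
    # it permits; real policies are far richer.
    return destination in policy

def check_flow(items, destination):
    """Admit a flow only when every item's policy permits it."""
    return all(policy_allows(policy, destination)
               for _name, policy in items)

# Illustrative data items, each tagged with its own policy.
flows = [("profile", {"search-index", "audit-log"}),
         ("private-msg", {"audit-log"})]
```

With these items, a flow to the audit log would be admitted while a flow to the search index would be blocked, because the private message's policy does not list it; Thoth enforces this kind of decision by intercepting the process's actual I/O.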

    Service Now: CMDB Research

    The MAPFRE Capstone team was tasked with reviewing the existing CMDB configuration and recommending a roadmap. This paper discusses the team's overall research on ServiceNow CMDB, the client's deliverables, and an introduction to the latest technological innovations. Based on the given objectives and the team's analysis, we recommend key solutions for the client to better understand the IT environment in the areas of business-service impact, asset management, compliance, and configuration management. In addition, our research covers the majority of the technical and functional areas, providing greater visibility and insight into the existing CMDB and IT environment.

    Making tourist guidance systems more intelligent, adaptive and personalised using crowd sourced movement data

    Ambient intelligence (AmI) provides adaptive, personalized, intelligent, ubiquitous and interactive services to a wide range of users. AmI has a variety of applications, including smart shops, health care, smart homes, assisted living, and location-based services. Tourist guidance is one application where AmI can contribute greatly to service quality, as tourists, who may not be very familiar with the site they are visiting, need a location-aware, ubiquitous, personalised and informative service. Such services should be able to understand users' preferences without requiring them to be specified explicitly, predict users' interests, and deliver relevant, tailored services in the most appropriate way, whether audio, visual, or haptic. This paper shows how crowd-sourced trajectory data can be used to detect points of interest and to provide ambient tourist guidance based on the patterns recognised in such data.
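
Detecting points of interest from crowd-sourced trajectories can be approximated by density: cells of a coarse spatial grid that many GPS fixes fall into are candidate POIs. A toy sketch, where the grid size and visit threshold are illustrative and a real system would use proper density clustering rather than a fixed grid:

```python
from collections import Counter

def detect_pois(points, cell=0.001, min_visits=3):
    """Snap (lat, lon) fixes to a coarse grid and keep the cells that
    enough fixes fall into; returns one corner of each dense cell."""
    counts = Counter((int(lat // cell), int(lon // cell))
                     for lat, lon in points)
    return [(gx * cell, gy * cell)
            for (gx, gy), n in counts.items() if n >= min_visits]
```

A cell of 0.001 degrees is roughly 100 m of latitude, so repeated fixes around the same attraction land in the same cell while isolated fixes are discarded as noise.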

    Audubon Data Project Final Report

    The Audubon Data Project was initiated as a Clark University Capstone project. The project's client, Mass Audubon's Shaping the Future of Your Community program, had identified a need to improve its data-management methods and make better use of its data. The Capstone team, composed of Clark University graduate students, met with the client regularly to review the current state of the data and potential improvements to be made. The process began with a data review, during which we worked with the client to explicitly define the purposes and requirements of the data, the current process for updating and using the data, and the ways that different types of records were related to one another. After the review, we were able to identify the issues in the current system that we would seek to resolve. These included data-integrity issues, such as ensuring crucial items (such as a town name) were always included when entering data, and data-structure issues, such as providing a relatively user-friendly way to express relationships and update records that were part of a relationship.