    Harmony: An Architecture for Network-Centric Heterogeneous Terrain Database Re-generation

    Homogeneous proprietary online terrain databases are prolific, as is the one-directional generation and update process for these terrain databases. Architectures and common ontologies that enable consistent and harmonious outcomes between distributed, multi-directional, heterogeneous terrain databases are lacking. Further, because technological change empowers end-users, expectations for immediate terrain database update are constantly increasing. As an example, a variety of incompatible synthetic environmental representations are used for military Modeling and Simulation applications. Regeneration and near-real-time update of compiled synthetic environments in a distributed, heterogeneous run-time environment is an issue relevant to the correlation of geospecific representations optimized for live, virtual, constructive and distributed simulation applications. Military systems of systems such as the Future Combat Systems are emblematic of the regeneration challenge. The battlefields of the future will need constant updates of diverse synthetic representations of the real-world environment. These updates will be driven by near-real-time data from the battlefield as well as other constantly evolving intelligence and remote sensing sources. Since the Future Combat Systems will use embedded training, it will need to maintain a representation correlated with the actual battlefield as well as with many other systems. To achieve this correlation, constant updates to the heterogeneous synthetic environment representations in the Future Combat Systems platforms will be required. One approach to overcoming the implicit bandwidth and communication limitations is to limit updates to changes only. Today's traditional military Terrain Database (TDB) generation systems convert standard geographical source data products into many different target formats using what is referred to as a pipeline flow paradigm.
In the pipeline paradigm, TDBs are generated centrally upstream and flow downstream out to numerous divergent and distributed formats, and updates are centrally managed and distributed. This paradigm does not account for updates occurring on target formats, so such updates are not reflected upstream in the source data from which the TDB was originally generated. Since target-format changes are not incorporated into the upstream geographical source data, adjacent streams of dependent target formats derived from the same source data may not receive the changes either. The outcome of change in the pipeline TDB generation paradigm is correlation and interoperability errors between target formats as well as with the original upstream data source. An alternative paradigm is needed that addresses data synchronization of geographical source data and target formats while accommodating bandwidth limitations. This dissertation proposes a "partial bi-directional TDB regeneration" paradigm, which envisions network-based TDB updates between reliable partners. Partial bi-directional TDB regeneration is attractive because it reduces the volume of changes by updating only the affected target-format data elements. This research presents an implementation of distributed, partial and bi-directional TDB regeneration through agent theory and ontologies over a network. Agent theory and ontologies are used to interpret data changes on external target formats and to implement the necessary transformations on the internal TDB generation system data elements, achieving consistency between all correlated representations. In this approach, a variety of agents exist whose behavior and knowledge are customized based on ontologies that describe the target format.
It is expected that such a system will provide a TDB generation paradigm that can address the implicit issues of distribution, time, expertise, cost, labor constraints, and update frequency, while addressing the explicit issue of correlation between the external target formats over time and reducing the bandwidth requirements associated with traditional TDB generation systems.
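The contrast between the one-directional pipeline and partial bi-directional regeneration can be sketched as a toy in Python. This is an illustrative sketch only, not the dissertation's agent- and ontology-based implementation: the element keys, the encode/decode mappings, and the `push_upstream` method are all invented for illustration.

```python
# Toy sketch: one upstream source, several derived target formats.
# "Partial" means only changed element ids are re-encoded;
# "bi-directional" means a target-side edit is decoded back upstream.

from dataclasses import dataclass, field

@dataclass
class TerrainSource:
    """Upstream geographical source data, keyed by element id."""
    elements: dict = field(default_factory=dict)

class TargetFormat:
    """A derived target-format TDB with its own element encoding."""
    def __init__(self, name, encode, decode):
        self.name = name
        self.encode = encode      # source element -> target element
        self.decode = decode      # target element -> source element
        self.elements = {}

    def regenerate_partial(self, source, changed_ids):
        # Downstream flow: update only the affected elements,
        # never a full rebuild of the target TDB.
        for eid in changed_ids:
            self.elements[eid] = self.encode(source.elements[eid])

    def push_upstream(self, source, eid, new_value):
        # Upstream flow: a local edit is decoded back into source
        # form, so sibling target formats can receive it too.
        self.elements[eid] = new_value
        source.elements[eid] = self.decode(new_value)
        return [eid]

# Usage: one source, two divergent target formats (units differ).
source = TerrainSource({"road_1": {"width_m": 4.0}})
fmt_a = TargetFormat("A", lambda e: dict(e), lambda e: dict(e))
fmt_b = TargetFormat("B",
                     lambda e: {"width_ft": e["width_m"] * 3.281},
                     lambda e: {"width_m": e["width_ft"] / 3.281})
fmt_a.regenerate_partial(source, ["road_1"])
fmt_b.regenerate_partial(source, ["road_1"])

# An edit made on target A is pushed upstream, then propagated to B
# for the changed ids only, keeping all representations correlated.
changed = fmt_a.push_upstream(source, "road_1", {"width_m": 6.0})
fmt_b.regenerate_partial(source, changed)
```

Bandwidth is saved because only the changed id list plus the changed elements cross the network, rather than a regenerated TDB.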

    MSUO Information Technology and Geographical Information Systems: Common Protocols & Procedures. Report to the Marine Safety Umbrella Operation

    The Marine Safety Umbrella Operation (MSUO) facilitates cooperation between Interreg-funded Marine Safety Projects and maritime stakeholders. The main aim of MSUO is to permit efficient operation of new projects through Project Cooperation Initiatives, which include the review of common protocols and procedures for Information Technology (IT) and Geographical Information Systems (GIS). This study, carried out by CSA Group and the National Centre for Geocomputation (NCG), reviews current spatial information standards in Europe and the data management methodologies associated with different marine safety projects. International best practice was reviewed based on the combined experience of spatial data research at NCG and initiatives in the US, Canada and the UK relating to marine security service information and the acquisition and integration of large marine datasets for ocean management purposes. This report identifies the most appropriate international data management practices that could be adopted for future MSUO projects.

    Large-Scale Distributed Internet-based Discovery Mechanism for Dynamic Spectrum Allocation

    Scarcity of frequencies and the demand for more bandwidth are likely to increase the need for devices that utilize the available frequencies more efficiently. Radios must be able to dynamically find other users of the frequency bands and adapt so that those users are not interfered with, even if they use different radio protocols. As transmitters far away may cause as much interference as a transmitter located nearby, this mechanism cannot be based on location alone. Central databases can be used for this purpose, but require expensive infrastructure and planning to scale. In this paper, we propose a decentralized protocol and architecture for discovering radio devices over the Internet. The protocol has low resource requirements, making it suitable for implementation on limited platforms. We evaluate the protocol through simulation in network topologies with up to 2.3 million nodes, including topologies generated from population patterns in Norway. The protocol has also been implemented as a proof of concept in real Wi-Fi routers. Comment: Accepted for publication at IEEE DySPAN 201
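The decentralized discovery idea can be illustrated with a toy overlay in which each node keeps a few peers and a query for users of a frequency band is flooded with a hop limit, so no central database is required. This is an assumed, simplified mechanism for illustration only, not the protocol the paper evaluates; the node ids, band representation, and TTL flooding are invented.

```python
# Toy overlay: discovery keyed by frequency-band overlap rather
# than by physical location, matching the observation that distant
# transmitters can interfere as much as nearby ones.

class Node:
    def __init__(self, node_id, band):
        self.node_id = node_id
        self.band = band          # (low_mhz, high_mhz) in use
        self.peers = []           # small neighbor set in the overlay

    def overlaps(self, band):
        """True if this node's band intersects the queried band."""
        return self.band[0] < band[1] and band[0] < self.band[1]

    def discover(self, band, ttl=3, seen=None):
        """Flood a query; return ids of reachable overlapping users."""
        seen = set() if seen is None else seen
        if self.node_id in seen or ttl < 0:
            return set()
        seen.add(self.node_id)
        found = {self.node_id} if self.overlaps(band) else set()
        for peer in self.peers:
            found |= peer.discover(band, ttl - 1, seen)
        return found

# Usage: node a reaches c through b, even though b itself does not
# use the queried band.
a = Node("a", (2400, 2420))
b = Node("b", (5180, 5200))
c = Node("c", (2410, 2430))
a.peers, b.peers = [b], [c]
```

A real protocol would bound state and traffic far more carefully (the paper targets limited platforms and millions of nodes); the sketch only shows why band-based, location-independent lookup is the core operation.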

    Using geographical information systems for management of back-pain data

    This is the post-print version of the article. The official published version can be accessed from the link below - Copyright @ 2002 MCB UP Ltd. In the medical world, statistical visualisation has largely been confined to the realm of relatively simple geographical applications. This remains the case, even though hospitals have been collecting spatial data relating to patients. In particular, hospitals have a wealth of back-pain information, which includes pain drawings, usually detailing the spatial distribution and type of pain suffered by back-pain patients. This paper proposes several technological solutions that permit data within back-pain datasets to be digitally linked to the pain drawings, in order to provide methods of computer-based data management and analysis. In particular, it proposes the use of geographical information systems (GIS), until now a tool used mainly in the geographic and cartographic domains, to provide novel and powerful ways of visualising and managing back-pain data. A comparative evaluation of the proposed solutions shows that, although adding complexity and cost, the GIS-based solution is the most appropriate for visualisation and analysis of back-pain datasets.
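The core GIS idea, treating the pain drawing as a map layer and aggregating pain marks per anatomical region the way a choropleth aggregates points per polygon, can be sketched as follows. This is a hedged illustration only: the region names, rectangular coordinates, and mark format are invented, and a real GIS would use proper polygons and spatial predicates rather than bounding boxes.

```python
# Toy "spatial join" between pain-drawing marks and body regions.
# Regions are axis-aligned rectangles purely for illustration.

BODY_REGIONS = {
    "lumbar":   (40, 60, 50, 70),   # (x_min, x_max, y_min, y_max)
    "thoracic": (40, 60, 70, 90),
}

def region_of(x, y):
    """Point-in-region lookup: the core of a GIS spatial join."""
    for name, (x0, x1, y0, y1) in BODY_REGIONS.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return None

def pain_density(marks):
    """Aggregate pain marks per region, as a choropleth layer
    aggregates point events per polygon."""
    counts = {}
    for x, y, _pain_type in marks:   # mark: (x, y, pain type)
        region = region_of(x, y)
        if region:
            counts[region] = counts.get(region, 0) + 1
    return counts
```

Linking each mark back to the patient's record in the dataset then makes region-level queries ("all patients with burning pain in the lumbar region") straightforward.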

    Supporting UK-wide e-clinical trials and studies

    As clinical trials and epidemiological studies become increasingly large, covering wider (national) geographical areas and involving ever broader populations, the need to provide an information management infrastructure that can support such endeavours is essential. A wealth of clinical data now exists at varying levels of care (primary care, secondary care, etc.). Simple, secure access to such data would greatly benefit the key processes involved in clinical trials and epidemiological studies: patient recruitment, data collection and study management. The Grid paradigm provides one model for seamless access to such data and support of these processes. The VOTES project (Virtual Organisations for Trials and Epidemiological Studies) is a collaboration between several UK institutions to implement a generic framework that effectively leverages the available health-care information across the UK to support more efficient gathering and processing of trial information. The structure of the information available in the UK health-care domain varies broadly in line with the national boundaries of the constituent states (England, Scotland, Wales and Northern Ireland). Technologies must address these political boundaries and their impact in terms of, for example, information governance, policies, and the large-scale heterogeneous distribution of the data sets themselves. This paper outlines the methodology for implementing the framework across three specific data sources that serve as useful case studies: Scottish data from the Scottish Care Information (SCI) Store data repository, data on the General Practice Research Database (GPRD) diabetes trial at Imperial College London, and benign prostatic hyperplasia (BPH) data from the University of Nottingham. The design, implementation and wider research issues are discussed, along with the technological challenges encountered in applying Grid technologies in the project.
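The federation pattern described, heterogeneous site databases wrapped behind one common query interface so a trial can recruit across sites, can be sketched in Python. This is an illustrative sketch under stated assumptions, not the VOTES framework itself: the site names are taken from the abstract, but the record fields, the common schema, and the adapter design are invented.

```python
# Toy data federation: each site's native schema is mapped to a
# common schema by an adapter, and one query fans out to all sites.

class SiteAdapter:
    """Wraps one site's data source behind the common schema."""
    def __init__(self, name, records, to_common):
        self.name = name
        self.records = records
        self.to_common = to_common   # native record -> common dict

    def query(self, predicate):
        out = []
        for rec in self.records:
            common = self.to_common(rec)
            if predicate(common):
                common["site"] = self.name   # provenance tag
                out.append(common)
        return out

def federated_query(sites, predicate):
    """Fan a recruitment query out to every site and merge results."""
    results = []
    for site in sites:
        results.extend(site.query(predicate))
    return results

# Usage: two sites with different native schemas, one common query.
sci = SiteAdapter("SCI-Store",
                  [{"AGE": 67, "DX": "diabetes"}],
                  lambda r: {"age": r["AGE"], "diagnosis": r["DX"]})
gprd = SiteAdapter("GPRD",
                   [{"years": 54, "condition": "diabetes"}],
                   lambda r: {"age": r["years"],
                              "diagnosis": r["condition"]})

eligible = federated_query(
    [sci, gprd],
    lambda p: p["diagnosis"] == "diabetes" and p["age"] >= 60)
```

In a real Grid deployment each adapter would also enforce the site's information-governance policy before releasing any record, which is where the political-boundary issues the abstract mentions surface in practice.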