    Integrated Urban Sensing: A Geo-sensor Network for Public Health Monitoring and Beyond

    Pervasive environmental monitoring implies a wide range of technical, but also socio-political, challenges, and this applies especially to the sensitive context of the city. In this paper, we elucidate issues in building up pervasive urban sensor networks and associated concerns relating to fine-grained information provision. We present the Common Scents project, which is based on the Live Geography approach, and show how it can overcome these challenges. As opposed to existing sensing networks, which are mostly built up as monolithic and closed systems, the Common Scents approach aims to establish an open, standards-based and modular infrastructure. This ensures interoperability, portability and flexibility, which are crucial prerequisites for pervasive urban sensing. The implementation – a real-time data integration and analysis system for air quality assessment – has been realised on top of the CitySense sensor network in the City of Cambridge, MA, USA, together with the city's Public Health Department, responding to concrete needs of the city and its inhabitants. A second pilot, using mobile sensors mounted on bicycles, has been deployed in Copenhagen, Denmark. Preliminary results show highly fine-grained variability of pollutant dispersion in urban environments. Funding: Singapore-MIT Alliance, Center for Environmental Sensing and Monitoring; Singapore-MIT Alliance for Research and Technology Center; European Commission (FP7 GENESIS project); Bundesministerium für Wissenschaft und Forschung; Research Studio iSPAC.
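    As a hedged illustration of the kind of real-time aggregation step such a system performs, the sketch below bins geo-referenced readings into coarse grid cells and averages them per cell; the input schema (lat, lon, pm25) and the values are invented for illustration, not the project's actual data model.

```python
# Sketch: aggregate geo-referenced air-quality readings into per-cell means,
# illustrating the fine-grained spatial variability the pilots observed.
# The schema and values below are hypothetical.
import pandas as pd

def grid_cell(lat: float, lon: float, cell_deg: float = 0.001) -> tuple:
    """Snap a coordinate to a coarse grid cell (~100 m at mid-latitudes)."""
    return (round(lat / cell_deg) * cell_deg, round(lon / cell_deg) * cell_deg)

readings = pd.DataFrame({
    "lat":  [55.6761, 55.6762, 55.6790],
    "lon":  [12.5683, 12.5684, 12.5710],
    "pm25": [14.2, 15.1, 42.7],   # µg/m³, illustrative values
})

readings["cell"] = [grid_cell(la, lo) for la, lo in zip(readings.lat, readings.lon)]
per_cell = readings.groupby("cell")["pm25"].agg(["mean", "count"])
print(per_cell)  # nearby cells can differ sharply, as the pilots found
```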

    Semantic technologies: from niche to the mainstream of Web 3? A comprehensive framework for web Information modelling and semantic annotation

    Context: Web information technologies developed and applied in the last decade have considerably changed the way web applications operate and have revolutionised information management and knowledge discovery. Social technologies, user-generated classification schemes and formal semantics have a far-reaching sphere of influence. They promote collective intelligence, support interoperability, enhance sustainability and instigate innovation. Contribution: The research carried out and the consequent publications follow the various paradigms of semantic technologies, assess each approach, evaluate its efficiency, identify the challenges involved and propose a comprehensive framework for web information modelling and semantic annotation, which is the thesis's original contribution to knowledge. The proposed framework assists web information modelling, facilitates semantic annotation and information retrieval, enables system interoperability and enhances information quality. Implications: Semantic technologies coupled with social media and end-user involvement can instigate innovation with wide organisational implications that can benefit a considerable range of industries. The scalable and sustainable business models of social computing and the collective intelligence of organisational social media can be resourcefully paired with internal research and knowledge from interoperable information repositories, back-end databases and legacy systems. Semantified information assets can free human resources so that they can be used to better serve business development, support innovation and increase productivity.
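    The framework itself is the thesis's contribution; purely as a generic illustration of what machine-readable semantic annotation looks like in practice, the sketch below attaches typed metadata to a web resource with rdflib. The URIs and the custom vocabulary are illustrative assumptions, not the thesis's actual schema.

```python
# Sketch: annotating a web resource with RDF triples using rdflib.
# The vocabulary mix (Dublin Core plus a custom namespace) is illustrative only.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DCTERMS, RDF

EX = Namespace("http://example.org/schema#")  # hypothetical vocabulary
g = Graph()
g.bind("dcterms", DCTERMS)
g.bind("ex", EX)

doc = URIRef("http://example.org/articles/42")  # hypothetical resource
g.add((doc, RDF.type, EX.Article))
g.add((doc, DCTERMS.title, Literal("Semantic technologies: from niche to mainstream")))
g.add((doc, DCTERMS.subject, Literal("semantic annotation")))

print(g.serialize(format="turtle"))  # machine-readable annotation, human-readable syntax
```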

    Advanced Knowledge Technologies at the Midterm: Tools and Methods for the Semantic Web

    The University of Edinburgh and research sponsors are authorised to reproduce and distribute reprints and on-line copies for their purposes notwithstanding any copyright annotation hereon. The views and conclusions contained herein are the author's and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of other parties.

    In a celebrated essay on the new electronic media, Marshall McLuhan wrote in 1962: "Our private senses are not closed systems but are endlessly translated into each other in that experience which we call consciousness. Our extended senses, tools, technologies, through the ages, have been closed systems incapable of interplay or collective awareness. Now, in the electric age, the very instantaneous nature of co-existence among our technological instruments has created a crisis quite new in human history. Our extended faculties and senses now constitute a single field of experience which demands that they become collectively conscious. Our technologies, like our private senses, now demand an interplay and ratio that makes rational co-existence possible. As long as our technologies were as slow as the wheel or the alphabet or money, the fact that they were separate, closed systems was socially and psychically supportable. This is not true now when sight and sound and movement are simultaneous and global in extent." (McLuhan 1962, p.5, emphasis in original)

    Over forty years later, the seamless interplay that McLuhan demanded between our technologies is still barely visible. McLuhan's predictions of the spread, and increased importance, of electronic media have of course been borne out, and the worlds of business, science and knowledge storage and transfer have been revolutionised. Yet the integration of electronic systems as open systems remains in its infancy.

    Advanced Knowledge Technologies (AKT) aims to address this problem: to create a view of knowledge and its management across its lifecycle, and to research and create the services and technologies that such unification will require. Half way through its six-year span, the results are beginning to come through, and this paper will explore some of the services, technologies and methodologies that have been developed. We hope to give a sense in this paper of the potential for the next three years, to discuss the insights and lessons learnt in the first phase of the project, and to articulate the challenges and issues that remain.

    The WWW provided the original context that made the AKT approach to knowledge management (KM) possible. AKT was initially proposed in 1999; it brought together an interdisciplinary consortium with the technological breadth and complementarity to create the conditions for a unified approach to knowledge across its lifecycle. The combination of this expertise, and the time and space afforded the consortium by the IRC structure, suggested the opportunity for a concerted effort to develop an approach to advanced knowledge technologies, based on the WWW as a basic infrastructure.

    The technological context of AKT altered for the better in the short period between the development of the proposal and the beginning of the project itself, with the development of the Semantic Web (SW), which foresaw much more intelligent manipulation and querying of knowledge.
    The opportunities that the SW provided for, e.g., more intelligent retrieval put AKT in the centre of information technology innovation and knowledge management services; the AKT skill set would clearly be central for the exploitation of those opportunities.

    The SW, as an extension of the WWW, provides an interesting set of constraints on the knowledge management services AKT tries to provide. As a medium for the semantically informed coordination of information, it has suggested a number of ways in which the objectives of AKT can be achieved, most obviously through the provision of knowledge management services delivered over the web, as opposed to the creation and provision of technologies to manage knowledge.

    AKT is working on the assumption that many web services will be developed and provided for users. The KM problem in the near future will be one of deciding which services are needed and of coordinating them. Many of these services will be largely or entirely legacies of the WWW, and so the capabilities of the services will vary. As well as providing useful KM services in their own right, AKT will be aiming to exploit this opportunity by reasoning over services, brokering between them, and providing essential meta-services for SW knowledge service management.

    Ontologies will be a crucial tool for the SW. The AKT consortium brings a lot of expertise on ontologies together, and ontologies were always going to be a key part of the strategy. All kinds of knowledge sharing and transfer activities will be mediated by ontologies, and ontology management will be an important enabling task. Different applications will need to cope with inconsistent ontologies, or with the problems that will follow the automatic creation of ontologies (e.g. merging of pre-existing ontologies to create a third). Ontology mapping, and the elimination of conflicts of reference, will be important tasks. All of these issues are discussed along with our proposed technologies.

    Similarly, specifications of tasks will be used for the deployment of knowledge services over the SW, but in general it cannot be expected that in the medium term there will be standards for task (or service) specifications. The brokering meta-services that are envisaged will have to deal with this heterogeneity.

    The emerging picture of the SW is one of great opportunity, but it will not be a well-ordered, certain or consistent environment. It will comprise many repositories of legacy data, outdated and inconsistent stores, and requirements for common understandings across divergent formalisms. There is clearly a role for standards to play in bringing much of this context together, and AKT is playing a significant role in these efforts. But standards take time to emerge, they take political power to enforce, and they have been known to stifle innovation (in the short term). AKT is keen to understand the balance between principled inference and statistical processing of web content. Logical inference on the Web is tough: complex queries using traditional AI inference methods bring most distributed computer systems to their knees. Do we set up semantically well-behaved areas of the Web? Is any part of the Web in which semantic hygiene prevails interesting enough to reason in? These and many other questions need to be addressed if we are to provide effective knowledge technologies for our content on the web.
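    To make the ontology-merging problem concrete, the sketch below unions two toy RDF graphs and queries the result with SPARQL. The owl:sameAs link stands in for the outcome of a mapping step that resolves a conflict of reference; the ontologies are invented for illustration and this is not AKT's actual tooling.

```python
# Sketch: merging two small RDF graphs and querying the result with SPARQL.
# Real ontology mapping must also decide which URIs co-refer; the owl:sameAs
# triple below stands in for the outcome of such a mapping step.
from rdflib import Graph, Namespace
from rdflib.namespace import OWL, RDF, RDFS

A = Namespace("http://example.org/ontoA#")  # hypothetical ontologies
B = Namespace("http://example.org/ontoB#")

g1, g2 = Graph(), Graph()
g1.add((A.Person, RDF.type, RDFS.Class))
g2.add((B.Human, RDF.type, RDFS.Class))

merged = g1 + g2                              # graph union
merged.add((A.Person, OWL.sameAs, B.Human))   # asserted mapping

for row in merged.query(
    "SELECT ?c WHERE { ?c a rdfs:Class }",
    initNs={"rdfs": RDFS},
):
    print(row.c)  # both classes survive the merge; sameAs records their identity
```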

    Open source GIS for HIV/AIDS management

    Background: Reliable access to basic services can improve a community's resilience to HIV/AIDS. Accordingly, work is being done to upgrade the physical infrastructure in affected areas, often employing a strategy of decentralised service provision. Spatial characteristics are one of the major determinants in implementing services, even in the smaller municipal areas, and good quality spatial information is needed to inform decision-making processes. However, limited funds, technical infrastructure and human resource capacity result in little or no access to spatial information for crucial infrastructure development decisions at local level. This research investigated whether it would be possible to develop a GIS for basic infrastructure planning and management at local level. Given the resource constraints of the local government context, particularly in small municipalities, it was decided that open source software should be used for the prototype system.

    Results: The design and development of a prototype system illustrated that it is possible to develop an open source GIS that can be used within the context of local information management. Usability tests show a high degree of usability for the system, which is important considering the heavy workload and high staff turnover that characterise local government in South Africa. Local infrastructure management stakeholders interviewed in a case study of a South African municipality see the potential for the use of GIS as a communication tool and are generally positive about the use of GIS for these purposes. They note security issues that may arise through the sharing of information, lack of skills and resource constraints as the major barriers to adoption.

    Conclusion: The case study shows that spatial information is an identified need at local level. Open source GIS software can be used to develop a system to provide local-level stakeholders with spatial information. However, the suitability of the technology is only a part of the system – there are wider information and management issues which need to be addressed before the implementation of a local-level GIS for infrastructure management can be successful.
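    The article describes the prototype at the architectural level only; as a hedged sketch of the kind of analysis an open source GIS stack supports, the following uses geopandas to count service points per administrative area. The file names and attribute columns are hypothetical, not the study's data.

```python
# Sketch: counting service points per administrative area with open-source
# GIS libraries (geopandas/shapely). File names and columns are hypothetical.
import geopandas as gpd

wards = gpd.read_file("wards.shp")        # municipal ward polygons
clinics = gpd.read_file("clinics.shp")    # clinic locations as points

# Spatial join: attach the containing ward to each clinic, then count.
joined = gpd.sjoin(clinics, wards, how="inner", predicate="within")
counts = joined.groupby("ward_name").size().rename("clinic_count")
print(counts.sort_values(ascending=False))
```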

    A Data-driven, High-performance and Intelligent CyberInfrastructure to Advance Spatial Sciences

    In the field of Geographic Information Science (GIScience), we have witnessed an unprecedented data deluge brought about by the rapid advancement of high-resolution data-observing technologies. For example, with the advancement of Earth Observation (EO) technologies, a massive amount of EO data, including remote sensing data and other sensor observation data about earthquakes, climate, oceans, hydrology, volcanoes, glaciers, etc., is being collected on a daily basis by a wide range of organizations. In addition to the observation data, human-generated data including microblogs, photos, consumption records, evaluations, unstructured webpages and other Volunteered Geographical Information (VGI) are incessantly generated and shared on the Internet. Meanwhile, the emerging cyberinfrastructure rapidly increases our capacity for handling such massive data with regard to data collection and management, data integration and interoperability, data transmission and visualization, high-performance computing, etc. Cyberinfrastructure (CI) consists of computing systems, data storage systems, advanced instruments and data repositories, visualization environments, and people, all linked together by software and high-performance networks to improve research productivity and enable breakthroughs that are not otherwise possible. The Geospatial CI (GCI, or CyberGIS), as the synthesis of CI and GIScience, has inherent advantages in enabling computationally intensive spatial analysis and modeling (SAM) and collaborative geospatial problem solving and decision making.

    This dissertation is dedicated to addressing several critical issues and improving the performance of existing methodologies and systems in the field of CyberGIS, and comprises three parts. The first part develops methodologies to help public researchers find appropriate open geospatial datasets, efficiently and effectively, from millions of records provided by thousands of organizations scattered around the world; machine learning and semantic search methods are utilized in this research. The second part develops an interoperable and replicable geoprocessing service by synthesizing the high-performance computing (HPC) environment, the core spatial statistics/analysis algorithms from the widely adopted open source Python package, the Python Spatial Analysis Library (PySAL), and rich datasets acquired from the first part. The third part studies optimization strategies for feature data transmission and visualization, intended to solve the performance issues of transmitting large feature data over the Internet and visualizing them on the client (browser) side. Taken together, the three parts constitute an endeavor towards the methodological improvement and implementation practice of a data-driven, high-performance and intelligent CI to advance the spatial sciences.
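    Since the dissertation names PySAL as the analytical core of its geoprocessing service, a minimal example of the kind of spatial statistic the PySAL stack provides is global Moran's I, a measure of spatial autocorrelation. The shapefile and attribute below are placeholders, not the dissertation's data.

```python
# Sketch: global spatial autocorrelation (Moran's I) with the PySAL stack
# (libpysal for spatial weights, esda for the statistic).
# Shapefile and attribute names are placeholders.
import geopandas as gpd
from libpysal.weights import Queen
from esda.moran import Moran

gdf = gpd.read_file("counties.shp")
w = Queen.from_dataframe(gdf)   # contiguity-based spatial weights
w.transform = "r"               # row-standardise the weights

mi = Moran(gdf["income"].values, w)
print(f"Moran's I = {mi.I:.3f}, pseudo p-value = {mi.p_sim:.4f}")
```

    Wrapping exactly this kind of routine behind an HPC-backed service interface is, in broad terms, what the second part of the dissertation describes.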

    Mapping and the Citizen Sensor

    Maps are a fundamental resource in a diverse array of applications ranging from everyday activities, such as route planning through the legal demarcation of space to scientific studies, such as those seeking to understand biodiversity and inform the design of nature reserves for species conservation. For a map to have value, it should provide an accurate and timely representation of the phenomenon depicted and this can be a challenge in a dynamic world. Fortunately, mapping activities have benefitted greatly from recent advances in geoinformation technologies. Satellite remote sensing, for example, now offers unparalleled data acquisition and authoritative mapping agencies have developed systems for the routine production of maps in accordance with strict standards. Until recently, much mapping activity was in the exclusive realm of authoritative agencies but technological development has also allowed the rise of the amateur mapping community. The proliferation of inexpensive and highly mobile and location aware devices together with Web 2.0 technology have fostered the emergence of the citizen as a source of data. Mapping presently benefits from vast amounts of spatial data as well as people able to provide observations of geographic phenomena, which can inform map production, revision and evaluation. The great potential of these developments is, however, often limited by concerns. The latter span issues from the nature of the citizens through the way data are collected and shared to the quality and trustworthiness of the data. This book reports on some of the key issues connected with the use of citizen sensors in mapping. It arises from a European Co-operation in Science and Technology (COST) Action, which explored issues linked to topics ranging from citizen motivation, data acquisition, data quality and the use of citizen derived data in the production of maps that rival, and sometimes surpass, maps arising from authoritative agencies
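    One recurring data-quality question in this area is how far citizen-contributed positions deviate from an authoritative reference. A minimal sketch of that check follows, with invented projected coordinates rather than data from the book.

```python
# Sketch: positional accuracy of citizen-contributed points against an
# authoritative reference location. Coordinates are invented and assumed to
# be in a projected CRS (metres); for raw lat/lon use a geodesic distance.
from math import hypot

reference = (534200.0, 180450.0)  # authoritative position
contributed = [                    # citizen-contributed observations
    (534205.5, 180448.0),
    (534190.0, 180460.0),
    (534202.1, 180451.3),
]

errors = [hypot(x - reference[0], y - reference[1]) for x, y in contributed]
print(f"mean positional error: {sum(errors) / len(errors):.1f} m")
```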

    A formal architecture-centric and model driven approach for the engineering of science gateways

    From n-tier client/server applications to more complex academic Grids, or even the most recent and promising industrial Clouds, the last decade has witnessed significant developments in distributed computing. In spite of this conceptual heterogeneity, Service-Oriented Architecture (SOA) seems to have emerged as the common underlying abstraction paradigm, even though different standards and technologies are applied across application domains. Suitable access to data and algorithms resident in SOAs via so-called 'Science Gateways' has thus become a pressing need in order to realize the benefits of distributed computing infrastructures.

    In an attempt to inform service-oriented systems design and development in Grid-based biomedical research infrastructures, the applicant has consolidated work from three complementary experiences in European projects, which have developed and deployed large-scale production-quality infrastructures and, more recently, Science Gateways to support research in breast cancer, pediatric diseases and neurodegenerative pathologies respectively. In analyzing the requirements from these biomedical applications, the applicant was able to elaborate on commonly faced issues in Grid development and deployment, while proposing an adapted and extensible engineering framework. Grids implement a number of protocols, applications and standards, and attempt to virtualize and harmonize access to them. Most Grid implementations are therefore instantiated as superposed software layers, often resulting in a low quality of service and quality of applications, making design and development increasingly complex and rendering classical software engineering approaches unsuitable for Grid developments.

    The applicant proposes the application of a formal Model-Driven Engineering (MDE) approach to service-oriented developments, making it possible to define Grid-based architectures and Science Gateways that satisfy quality of service requirements, execution platform and distribution criteria at design time. A novel investigation is thus presented on the applicability of the resulting grid MDE (gMDE) to specific examples, and conclusions are drawn on the benefits of this approach and its possible application to other areas, in particular that of Distributed Computing Infrastructures (DCI) interoperability, Science Gateways and Cloud architecture developments.
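    gMDE itself is a formal, tool-supported framework; the deliberately toy sketch below only illustrates the underlying model-driven idea of declaring a platform-independent service model once and generating platform-specific artefacts from it. All names are invented and real MDE uses formal metamodels and transformation languages.

```python
# Sketch: the model-driven idea in miniature. A platform-independent service
# model is declared once; a trivial "transformation" emits a platform-specific
# artefact (here, a REST route table). Everything here is hypothetical.
from dataclasses import dataclass

@dataclass
class Operation:
    name: str
    input_type: str
    output_type: str

@dataclass
class ServiceModel:
    name: str
    operations: list

def to_rest_routes(model: ServiceModel) -> str:
    """Platform-specific transformation: render the model as REST routes."""
    return "\n".join(
        f"POST /{model.name.lower()}/{op.name}  ({op.input_type} -> {op.output_type})"
        for op in model.operations
    )

gateway = ServiceModel("ImageAnalysis", [
    Operation("segment", "ScanVolume", "Mask"),
    Operation("classify", "Mask", "Diagnosis"),
])
print(to_rest_routes(gateway))
```

    The point of the pattern is that a second transformation (e.g. to a Grid job description) could be added without touching the model, which is how design-time criteria stay separate from execution platforms.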

    Towards a National 3D Mapping Product for Great Britain

    Knowing where something happens and where people are located can be critically important in understanding issues ranging from climate change to road accidents, crime, schooling, transport and much more. To analyse these spatial problems, two-dimensional representations of the world, such as paper or digital maps, have traditionally been used. Geographic information systems (GIS) are the tools that enable the capture, modelling, storage, retrieval, sharing, manipulation, analysis and presentation of geographically referenced data. Three-dimensional geographic information (3D GI) is data that can represent real-world features as objects in 3D space. 3D GI offers additional functionality not possible in 2D, including analysing and querying volume, visibility, surface and sub-surface, and shadowing. This thesis contributes to the understanding of user requirements and other data-related considerations in the production of 3D geographic information at a national level. The study supports Ordnance Survey's efforts in developing a 3D geographic product through: (1) identifying potential applications; (2) analysing existing 3D city modelling approaches; (3) eliciting and formalising user requirements; (4) developing metrics to describe the usefulness of 3D data; and (5) evaluating the commerciality of 3D GI. A review of current applications of 3D showed that visualisation dominated as the main use, allowing for better communication and supporting decision-making processes. Reflecting this, an examination of existing 3D city models showed that, despite the varying modelling approaches, there was a general focus on accurate and realistic geometric representation of the urban environment. Web-based questionnaires and semi-structured interviews revealed that while some applications (e.g. subsurface, photovoltaics, air and noise quality) lead the field with a high adoption of 3D, others lag behind due to organisational inertia (e.g. insurance, facilities management). Individuals expressed positive views on the use of 3D, but still struggled to justify the value and business case. Simple building geometry coupled with non-building thematic classes was perceived to be most useful by users. Several metrics were developed to quantify and compare the characteristics of thirty-three 3D datasets; results showed that geometry-based metrics such as minimum feature length or the Euler characteristic can provide additional information as part of fitness-for-purpose evaluations, and can also contribute to quality control during data production. An investigation into the commercial opportunities explored the economic value of 3D and the market size for 3D data in Great Britain, and proposed a number of opportunities within the wider business context of Ordnance Survey.
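    Of the metrics mentioned, the Euler characteristic is easy to state concretely: for a polyhedral surface it is chi = V - E + F, and chi = 2 for a mesh that is topologically a sphere, i.e. a watertight shell without holes. The sketch below computes it for a triangulated mesh, using a unit cube as invented test data; it illustrates the metric itself, not the thesis's implementation.

```python
# Sketch: Euler characteristic (V - E + F) of a triangulated mesh as a quick
# topological sanity check; chi == 2 for a closed, genus-0 building shell.
def euler_characteristic(faces: list) -> int:
    vertices = {v for face in faces for v in face}
    edges = {frozenset((face[i], face[(i + 1) % 3]))
             for face in faces for i in range(3)}
    return len(vertices) - len(edges) + len(faces)

# A cube triangulated into 12 faces (vertex indices 0-3 bottom, 4-7 top).
cube = [
    (0, 1, 2), (0, 2, 3),  # bottom
    (4, 6, 5), (4, 7, 6),  # top
    (0, 5, 1), (0, 4, 5),  # front
    (1, 6, 2), (1, 5, 6),  # right
    (2, 7, 3), (2, 6, 7),  # back
    (3, 4, 0), (3, 7, 4),  # left
]
print(euler_characteristic(cube))  # 8 - 18 + 12 = 2: topologically a sphere
```

    A value other than 2 for a supposedly closed building shell flags missing or duplicated faces, which is why such a metric is useful in quality control during data production.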