
    Uncertainty and risk: politics and analysis

    In environmental and sustainable development policy, as in infrastructural megaprojects and innovative medical technologies, public authorities face emergent complexity, high value diversity, difficult-to-structure problems, high decision stakes, high uncertainty, and thus risk. In practice, it is believed, this often leads to crises, controversies, deadlocks, and policy fiascoes. Decision-makers are said to face a crisis in coping with uncertainty. Both the cognitive structure of uncertainty and the political structure of risk decisions have been studied, but so far these scientific literatures exist side by side, with few apparent efforts to theoretically conceptualize and empirically test the links between the two. This exploratory and conceptual paper therefore takes up the challenge: How should we conceptualize the cognitive structure of uncertainty? How should we conceptualize the political structure of risk? How can we conceptualize the link(s) between the two? Is there any empirical support for a conceptualization that bridges the analytical and political aspects of risk? And what are the implications for guidelines for risk analysis and assessment?

    FLIAT, an object-relational GIS tool for flood impact assessment in Flanders, Belgium

    Floods can cause damage to transportation and energy infrastructure, disrupt the delivery of services, and take a toll on public health, sometimes even causing significant loss of life. Although scientists widely stress the compelling need for resilience against extreme events under a changing climate, tools for dealing with expected hazards lag behind. Impact assessment must consider not only the socio-economic, ecological, and cultural impact of floods but also the potential disruption of society, with a view to priority adaptation guidelines, measures, and policy recommendations. The main shortcoming of current impact assessment tools is their raster approach, which cannot effectively handle the rich metadata of vital infrastructure, crucial buildings, and vulnerable land use (among other challenges). We have developed FLIAT, a powerful cross-platform flood impact assessment tool that links a vector approach to a relational database, is built with open-source programming languages, and can perform parallel computation. As a result, FLIAT can manage multiple detailed datasets without loss of geometrical information. This paper describes the development of FLIAT and the performance of this tool.
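
    The vector overlay the abstract argues for can be sketched in a few lines. The sketch below is hypothetical and is not FLIAT's actual implementation or API; it assumes the geopandas library, and the file names and the building_id field are invented:

        # Hypothetical illustration of a vector-based flood impact overlay;
        # this is NOT FLIAT's actual code or API.
        import geopandas as gpd

        # Flood extent polygons and building footprints (invented file names).
        flood = gpd.read_file("flood_extent.gpkg")
        buildings = gpd.read_file("buildings.gpkg")

        # A vector intersection keeps each building's attributes (metadata)
        # intact, unlike a raster overlay that would rasterize them away.
        hits = gpd.overlay(buildings, flood, how="intersection")

        # Flooded area per building, in the layer's CRS units.
        hits["flooded_m2"] = hits.geometry.area
        print(hits[["building_id", "flooded_m2"]].head())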

    Structuring the unstructured data: the use of content analysis

    Content analysis is a research technique used to organise large amounts of textual data into standardised formats which allows arriving at suggestions/conclusions. Content analysis can be carried out quantitatively by counting the words or qualitatively by coding. The former approach refers to counting the frequency of the keywords and the later refers to identifying similar themes or concepts from the data set. This paper discusses the use of conceptual content analysis by using computerised software to analyse data gathered from semi-structured interviews. The context of the research within which content analysis is used is to identify the influence of performance measurement towards construction research activities. The paper first explains the research methodology pertaining to this study by reasoning out the selection of case study research approach coupled with semi-structured interviews. The paper then discusses how the information gathered from semi-structured interviews is fed into the computerised software to identify and generate main concepts of the study
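
    As a rough illustration of the quantitative (word-counting) route the abstract describes, a few lines of Python suffice. The transcript snippet and keyword list below are invented, and the paper's own computerised software is not named here:

        # Minimal sketch of the quantitative (word-counting) approach to
        # content analysis; the transcript text and keywords are invented.
        import re
        from collections import Counter

        transcript = """Performance measurement shapes how research
        activities are planned, funded, and reported."""

        keywords = {"performance", "measurement", "research"}

        tokens = re.findall(r"[a-z]+", transcript.lower())
        counts = Counter(t for t in tokens if t in keywords)
        print(counts.most_common())  # keyword frequencies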

    Impliance: A Next Generation Information Management Appliance

    "While the database industry has been remarkably successful in building a large market and adapting to the changes of the last three decades, its impact on the broader market of information management is surprisingly limited. If we were to design an information management system from scratch, based upon today's requirements and hardware capabilities, would it look anything like today's database systems?" In this paper, we introduce Impliance, a next-generation information management system consisting of hardware and software components integrated to form an easy-to-administer appliance that can store, retrieve, and analyze all types of structured, semi-structured, and unstructured information. We first summarize the trends that will shape information management for the foreseeable future. Those trends imply three major requirements for Impliance: (1) to be able to store, manage, and uniformly query all data, not just structured records; (2) to be able to scale out as the volume of this data grows; and (3) to be simple and robust in operation. We then describe four key ideas that are uniquely combined in Impliance to address these requirements: (a) integrating software and off-the-shelf hardware into a generic information appliance; (b) automatically discovering, organizing, and managing all data, unstructured as well as structured, in a uniform way; (c) achieving scale-out by exploiting simple, massively parallel processing; and (d) virtualizing compute and storage resources to unify, simplify, and streamline the management of Impliance. Impliance is an ambitious, long-term effort to define simpler, more robust, and more scalable information systems for tomorrow's enterprises. Comment: This article is published under a Creative Commons License Agreement (http://creativecommons.org/licenses/by/2.5/). You may copy, distribute, display, and perform the work, make derivative works and make commercial use of the work, but you must attribute the work to the author and CIDR 2007, 3rd Biennial Conference on Innovative Data Systems Research (CIDR), January 7-10, 2007, Asilomar, California, USA.
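
    Requirement (1), uniformly querying structured and unstructured data, can be made concrete with a toy sketch. This is not Impliance's design; the Item type and query helper below are invented for illustration:

        # Toy illustration of "uniformly querying" structured and
        # unstructured records (requirement 1); not Impliance's design.
        from dataclasses import dataclass, field

        @dataclass
        class Item:
            fields: dict = field(default_factory=dict)  # structured part
            text: str = ""                              # unstructured part

        store = [
            Item(fields={"type": "invoice", "amount": 120}, text="paid in full"),
            Item(fields={"type": "email"}, text="the invoice amount is disputed"),
        ]

        def query(items, predicates=None, terms=()):
            """Match structured predicates AND full-text terms uniformly."""
            predicates = predicates or {}
            for it in items:
                ok_fields = all(it.fields.get(k) == v for k, v in predicates.items())
                ok_text = all(t in it.text for t in terms)
                if ok_fields and ok_text:
                    yield it

        for hit in query(store, terms=("invoice",)):
            print(hit.fields, "|", hit.text)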

    Open semantic service networks

    Online service marketplaces will soon be part of the economy, scaling the provision of specialized multi-party services through automation and standardization. Current research, such as the *-USDL service description language family, is already defining the basic building blocks to model the next generation of business services. Nonetheless, the developments being made do not aim to interconnect services via service relationships. Without the concept of relationship, marketplaces will be seen as mere functional silos containing service descriptions. Yet, in real economies, all services are related and connected. To address this gap, we therefore introduce the concept of the open semantic service network (OSSN), concerned with the establishment of rich relationships between services. These networks will provide valuable knowledge on the global service economy, which can be exploited for many socio-economic and scientific purposes such as service network analysis, management, and control.
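
    A service network with typed relationships is naturally a labeled directed graph. The sketch below is one possible rendering of that idea, not the OSSN model itself; it assumes the networkx library and uses invented service names:

        # Minimal sketch of a service network as a directed graph with
        # typed relationships; networkx is assumed, service names invented.
        import networkx as nx

        ossn = nx.DiGraph()
        ossn.add_node("PaymentService", provider="AcmePay")
        ossn.add_node("ShippingService", provider="FastShip")
        ossn.add_node("StorefrontService", provider="ShopCo")

        # Edges carry the relationship type between services.
        ossn.add_edge("StorefrontService", "PaymentService", relation="depends_on")
        ossn.add_edge("StorefrontService", "ShippingService", relation="composes")

        # Simple network analysis: which services does the storefront rely on?
        for _, dst, rel in ossn.out_edges("StorefrontService", data="relation"):
            print(f"StorefrontService --{rel}--> {dst}")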