
    Management of collaborative BIM data by the Federation of Distributed Models

    The architecture, engineering, and construction (AEC) sector is currently undergoing a significant period of change and modernization. In the United Kingdom in particular, this is driven by the government’s objective of reducing the cost of construction projects. This is to be achieved by requiring all publicly funded projects to utilize fully collaborative building information modeling by 2016. A common goal in increasing building information model (BIM) adoption by the industry is the movement toward the realization of a BIM as either a single data model or a series of tightly coupled federated models. However, there are key obstacles to be overcome, including uncertainty over data ownership, concerns relating to the security/privacy of data, and reluctance to “outsource” data storage. This paper proposes a framework that provides a solution for managing collaboration in the AEC sector: an overlay that automatically federates and governs distributed BIM data. The use of this overlay provides an integrated BIM model that is physically distributed across the stakeholders in a construction project. The key research question addressed by this paper is whether such an overlay can, by providing dynamic federation and governance of BIM data, overcome some key obstacles to BIM adoption, including questions over data ownership, the security/privacy of data, and reluctance to share data. More specifically, this paper provides the following contributions: (1) presentation of a vision for the implementation and governance of a federated distributed BIM data model; (2) description of the BIM process and governance model that underpins the approach; (3) provision of a validation case study using real construction data from a U.K. highways project, demonstrating that both the federated BIM overlay and the process and governance model are fit for purpose.
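
    The overlay described above can be pictured as a thin resolution layer over stakeholder-hosted partial models. The sketch below is a minimal Python illustration of that idea, assuming a simple object-lookup interface and per-owner access lists; all class, method, and object names are hypothetical, not the paper's actual API.

```python
# Minimal sketch of a federation overlay: each stakeholder keeps its own
# partial model, and the overlay resolves lookups across all partials
# while enforcing per-owner governance rules. Names are illustrative.

class PartialModel:
    """A stakeholder-hosted fragment of the federated BIM."""
    def __init__(self, owner, objects):
        self.owner = owner        # stakeholder that retains ownership
        self.objects = objects    # {object_id: attributes}

class FederationOverlay:
    def __init__(self):
        self.partials = []        # registered distributed models
        self.policies = {}        # owner -> set of permitted requesters

    def register(self, partial, permitted):
        self.partials.append(partial)
        self.policies[partial.owner] = set(permitted)

    def get(self, object_id, requester):
        """Resolve an object across the federation, honouring governance."""
        for partial in self.partials:
            if object_id in partial.objects:
                if requester not in self.policies[partial.owner]:
                    raise PermissionError(
                        f"{requester} may not read {partial.owner}'s data")
                return partial.objects[object_id]
        raise KeyError(object_id)

# Usage: two stakeholders expose fragments; the overlay presents one model
# while each fragment physically stays with its owner.
overlay = FederationOverlay()
overlay.register(PartialModel("architect", {"wall-01": {"material": "brick"}}),
                 permitted=["architect", "engineer"])
overlay.register(PartialModel("engineer", {"beam-07": {"span_m": 4.2}}),
                 permitted=["architect", "engineer"])
print(overlay.get("wall-01", requester="engineer"))  # {'material': 'brick'}
```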

    Report on the Detailed Evaluation of StratusLab Products

    This document provides a detailed internal evaluation of the StratusLab v1.0 cloud distribution, offering feedback and informing the roadmap for the second year of the project. The evaluation covers three areas: 1) use cases defined in the continuous integration system, 2) requirements and recommendations identified from user and system administrator surveys conducted at the beginning of the project, and 3) scenarios and requirements from the EGI User Virtualization Workshop. The document identifies areas in which to concentrate efforts in the future. Notably, it reinforces the focus of the work plan for the coming year on issues related to the federation of cloud infrastructures.

    Doctor of Philosophy

    Public health surveillance systems are crucial for the timely detection of and response to public health threats. Since the terrorist attacks of September 11, 2001, and the release of anthrax in the following month, there has been a heightened interest in public health surveillance. The years immediately following these attacks were met with increased awareness and funding from the federal government, which has significantly strengthened the surveillance capabilities of the United States; however, despite these improvements, today's public health surveillance systems face substantial challenges. Problems with the current surveillance systems include: a) failure to leverage unstructured public health data for surveillance purposes; and b) lack of information integration and of the ability to leverage resources, applications, or other surveillance efforts, because systems are built on a centralized model. This research addresses these problems by focusing on the development and evaluation of new informatics methods to improve public health surveillance. To address the problems above, we first identified a current public health surveillance workflow that is affected by the problems described and offers an opportunity for enhancement through current informatics techniques. The 122 Mortality Surveillance for Pneumonia and Influenza was chosen as the primary use case for this dissertation work. The second step involved demonstrating the feasibility of using unstructured public health data, in this case death certificates. For this we created and evaluated a pipeline, composed of a detection rule and a natural language processor, for the coding of death certificates and the identification of pneumonia and influenza cases. The second problem was addressed by presenting the rationale for creating a federated model by leveraging grid technology concepts and tools for the sharing and epidemiological analysis of public health data. As a case study of this approach, a secured virtual organization was created in which users are able to access two grid data services, using death certificates from the Utah Department of Health, and two analytical grid services, MetaMap and R. A scientific workflow was created using the published services to replicate the mortality surveillance workflow. To validate these approaches, and to provide proofs of concept, a series of real-world scenarios was conducted.
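
    As a rough illustration of the detection step, the sketch below swaps the dissertation's MetaMap-based natural language processing for a plain keyword rule over free-text cause-of-death fields; the field names and the rule itself are illustrative assumptions, not the dissertation's actual pipeline.

```python
import re

# Illustrative rule: flag a death certificate as a pneumonia-and-influenza
# (P&I) case when any free-text cause-of-death field mentions either
# condition. A real pipeline would use NLP (e.g., MetaMap concept coding)
# rather than a regular expression.
P_AND_I = re.compile(r"\b(pneumonia|influenza|flu)\b", re.IGNORECASE)

def is_p_and_i_case(certificate):
    """certificate: dict with free-text cause-of-death lines (hypothetical keys)."""
    text = " ".join(certificate.get(field, "")
                    for field in ("cause_a", "cause_b", "cause_c", "contributing"))
    return bool(P_AND_I.search(text))

# Usage: count P&I cases in a (made-up) weekly batch of certificates.
certs = [
    {"cause_a": "Acute respiratory failure", "cause_b": "Influenza A"},
    {"cause_a": "Myocardial infarction"},
]
weekly_count = sum(is_p_and_i_case(c) for c in certs)
print(weekly_count)  # 1
```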

    Sophisticated Batteryless Sensing

    Wireless embedded sensing systems have revolutionized scientific, industrial, and consumer applications. Sensors have become a fixture in our daily lives, as well as in the scientific and industrial communities, by allowing continuous monitoring of people, wildlife, plants, buildings, roads and highways, pipelines, and countless other objects. Recently, a new vision for sensing has emerged---known as the Internet of Things (IoT)---where trillions of devices invisibly sense, coordinate, and communicate to support our lives and well-being. However, the sheer scale of the IoT has presented serious problems for current sensing technologies---mainly, the unsustainable maintenance, ecological, and economic costs of recycling or disposing of trillions of batteries. This energy storage bottleneck has prevented massive deployments of tiny sensing devices at the edge of the IoT. This dissertation explores an alternative: leave the batteries behind, and harvest the energy required for sensing tasks from the environment the device is embedded in. These sensors can be made cheaper and smaller, and will last decades longer than their battery-powered counterparts, making them a perfect fit for the requirements of the IoT. They can be deployed where battery-powered sensors cannot---embedded in concrete, shot into space, or even implanted in animals and people. However, these batteryless sensors may lose power at any point, with no warning, for unpredictable lengths of time. Programming, profiling, debugging, and building applications with these devices pose significant challenges. First, batteryless devices operate in unpredictable environments, where voltages vary and power failures can occur at any time---often devices are without power for hours. Second, a device's behavior affects the amount of energy it can harvest---meaning small changes in tasks can drastically change harvester efficiency. Third, the programming interfaces of batteryless devices are ill-defined and non-intuitive; most developers have trouble anticipating the problems inherent in an intermittent power supply. Finally, the lack of a community and of a standard, usable hardware platform has reduced the resources and prototyping ability of the developer. In this dissertation we present solutions to these challenges in the form of a tool for repeatable and realistic experimentation called Ekho, a reconfigurable hardware platform named Flicker, and a language and runtime for the timely execution of intermittent programs called Mayfly.
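
    To make the intermittency problem concrete, the sketch below simulates in Python the checkpointing pattern that intermittent runtimes rely on: progress is committed to stand-in non-volatile storage after each task, so a power failure resumes the program from the last completed task rather than restarting it. This is a generic illustration under assumed names, not Ekho, Flicker, or Mayfly.

```python
import json, os, random

STATE_FILE = "nv_state.json"   # stands in for FRAM/flash on a real device

def load_state():
    """On boot (or reboot after a power failure), restore saved progress."""
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE) as f:
            return json.load(f)
    return {"next_task": 0, "log": []}

def checkpoint(state):
    # Write-then-rename gives an atomic commit, mirroring the double-buffered
    # checkpoints intermittent runtimes use to survive mid-write failures.
    tmp = STATE_FILE + ".tmp"
    with open(tmp, "w") as f:
        json.dump(state, f)
    os.replace(tmp, STATE_FILE)

TASKS = ["sample_sensor", "filter", "compress", "transmit"]  # made-up pipeline

state = load_state()
while state["next_task"] < len(TASKS):
    task = TASKS[state["next_task"]]
    state["log"].append(task)       # placeholder for the task's real work
    state["next_task"] += 1
    checkpoint(state)               # progress survives a failure after this line
    if random.random() < 0.3:       # simulated brown-out: just stop
        print("power failed after", task)
        break
# Re-running the script resumes from the first uncompleted task.
```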

    Scientific Workflows: Moving Across Paradigms

    Modern scientific collaborations have opened up the opportunity to solve complex problems that require both multidisciplinary expertise and large-scale computational experiments. These experiments typically consist of a sequence of processing steps that need to be executed on selected computing platforms. Execution poses a challenge, however, due to (1) the complexity and diversity of applications, (2) the diversity of analysis goals, (3) the heterogeneity of computing platforms, and (4) the volume and distribution of data. A common strategy to make these in silico experiments more manageable is to model them as workflows and to use a workflow management system to organize their execution. This article looks at the overall challenge posed by a new order of scientific experiments and the systems they need to be run on, and examines how this challenge can be addressed by workflows and workflow management systems. It proposes a taxonomy of workflow management system (WMS) characteristics, including aspects previously overlooked. This frames a review of prevalent WMSs used by the scientific community, elucidates their evolution to handle the challenges arising with the emergence of the “fourth paradigm,” and identifies research needed to maintain progress in this area.
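
    The core abstraction a WMS builds on can be shown in a few lines: processing steps plus their dependencies form a directed acyclic graph, and execution is a topological traversal that runs each step once its inputs are ready. The sketch below is a minimal Python illustration (3.9+, for the standard-library graphlib module) with made-up step names.

```python
from graphlib import TopologicalSorter

# Steps are functions of their upstream results; deps maps each step to
# the set of steps it depends on. Both are illustrative placeholders.
steps = {
    "fetch":   lambda inputs: "raw data",
    "clean":   lambda inputs: inputs["fetch"] + " -> cleaned",
    "analyze": lambda inputs: inputs["clean"] + " -> stats",
    "plot":    lambda inputs: inputs["analyze"] + " -> figure",
}
deps = {"fetch": set(), "clean": {"fetch"},
        "analyze": {"clean"}, "plot": {"analyze"}}

# Execute in dependency order, feeding each step its predecessors' outputs.
results = {}
for step in TopologicalSorter(deps).static_order():
    results[step] = steps[step]({d: results[d] for d in deps[step]})
print(results["plot"])  # raw data -> cleaned -> stats -> figure
```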

    Open Workflows: Context-Dependent Construction and Execution in Mobile Wireless Settings

    Existing workflow middleware executes tasks orchestrated by rules defined in a carefully handcrafted static graph. Workflow management systems have proved effective for service-oriented business automation in stable, wired infrastructures. We introduce a radically new paradigm for workflow construction and execution called open workflow to support goal-directed coordination among physically mobile people and devices that form a transient community over an ad hoc wireless network. The quintessential feature of the open workflow paradigm is dynamic construction and execution of custom, context-specific workflows in response to unpredictable and evolving circumstances by exploiting the knowledge and services available within a given spatiotemporal context. This work introduces the open workflow approach, surveys open research challenges in this promising new field, and presents algorithmic, architectural, and evaluation results for the first practical realization of an open workflow management system.
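
    One minimal way to picture dynamic, goal-directed construction is backward chaining over whatever services the transient community currently advertises. The Python sketch below is an illustrative planner under that assumption, not the paper's algorithm; all service names and the service-description format are hypothetical.

```python
# Each advertised service maps a set of required inputs to the single
# output it produces. In an open workflow setting this table changes as
# devices join and leave the ad hoc network.
services = {
    "camera":    (set(),               "photo"),
    "geotagger": ({"photo"},           "geotagged_photo"),
    "uploader":  ({"geotagged_photo"}, "report"),
}

def plan(goal, available, have=frozenset()):
    """Return an ordered list of services that produces `goal`, or None."""
    if goal in have:
        return []
    for name, (needs, makes) in available.items():
        if makes != goal:
            continue
        workflow = []
        for need in needs:
            sub = plan(need, available, have)
            if sub is None:
                break                      # this provider can't be satisfied
            workflow += sub
            have = have | {need}
        else:
            return workflow + [name]       # all inputs satisfiable
    return None

# Usage: the workflow is constructed on the fly from the current context.
print(plan("report", services))  # ['camera', 'geotagger', 'uploader']
```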

    Developing a Coherent Cyberinfrastructure from Local Campus to National Facilities: Challenges and Strategies

    A fundamental goal of cyberinfrastructure (CI) is the integration of computing hardware, software, and network technology, along with data, information management, and human resources to advance scholarship and research. Such integration creates opportunities for researchers, educators, and learners to share ideas, expertise, tools, and facilities in new and powerful ways that cannot be realized if each of these components is applied independently. Bridging the gap between the reality of CI today and its potential in the immediate future is critical to building a balanced CI ecosystem that can support future scholarship and research. This report summarizes the observations and recommendations from a workshop in July 2008 sponsored by the EDUCAUSE Net@EDU Campus Cyberinfrastructure Working Group (CCI) and the Coalition for Academic Scientific Computation (CASC). The invitational workshop was hosted at the University Place Conference Center on the IUPUI campus in Indianapolis. Over 50 individuals representing a cross-section of faculty, senior campus information technology leaders, national lab directors, and other CI experts attended. The workshop focused on the challenges that must be addressed to build a coherent CI from the local to the national level, and the potential opportunities that would result. Both the organizing committee and the workshop participants hope that some of the ideas, suggestions, and recommendations in this report will take hold and be implemented in the community. The goal is to create a better, more supportive, more usable CI environment in the future to advance both scholarship and research.

    Building the Future Internet through FIRE

    The Internet as we know it today is the result of continuous activity aimed at improving network communications, end-user services, computational processes, and information technology infrastructures. The Internet has become a critical infrastructure for human beings by offering complex networking services and end-user applications that together have transformed all aspects of our lives, especially economic ones. Recently, with the advent of new paradigms, progress in wireless technology, sensor networks, and information systems, and the inexorable shift toward an everything-connected paradigm, first known as the Internet of Things and more recently envisioned as the Internet of Everything, a data-driven society has been created. In a data-driven society, productivity, knowledge, and experience are dependent on increasingly open, dynamic, interdependent, and complex Internet services. The challenge in designing the Future Internet is to build robust enabling technologies, to implement and deploy adaptive systems, and to create business opportunities in the face of increasing uncertainty and emergent systemic behaviors, where humans and machines cooperate seamlessly.