13,081 research outputs found

    The interaction of lean and building information modeling in construction

    Lean construction and Building Information Modeling are quite different initiatives, but both are having profound impacts on the construction industry. A rigorous analysis of the myriad specific interactions between them indicates that a synergy exists which, if properly understood in theoretical terms, can be exploited to improve construction processes beyond the degree to which they might be improved by applying either paradigm independently. Using a matrix that juxtaposes BIM functionalities with prescriptive lean construction principles, fifty-six interactions have been identified, all but four of which represent constructive interaction. Although evidence for the majority of these has been found, the matrix is not considered complete, but rather a framework for research to explore the degree of validity of the interactions. Construction executives, managers, designers and developers of IT systems for construction can also benefit from the framework as an aid to recognizing the potential synergies when planning their lean and BIM adoption strategies.
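    As a rough illustration of the framework described above, the sketch below encodes a functionality-by-principle interaction matrix as a nested mapping and filters the constructive cells. The functionality names, principle names, and marked cells are placeholders, not the fifty-six interactions the paper actually identifies.

```python
# Placeholder labels only; the paper's own functionalities and principles differ.
BIM_FUNCTIONALITIES = ["visualisation of form", "clash detection", "4D scheduling"]
LEAN_PRINCIPLES = ["reduce variability", "reduce cycle time", "increase transparency"]

# interaction[f][p] holds a mark: "+" constructive, "-" negative, "" no interaction.
interaction = {f: {p: "" for p in LEAN_PRINCIPLES} for f in BIM_FUNCTIONALITIES}
interaction["clash detection"]["reduce variability"] = "+"    # placeholder cell
interaction["4D scheduling"]["increase transparency"] = "+"   # placeholder cell

def constructive_pairs(matrix):
    """Yield (functionality, principle) pairs marked as constructive."""
    for functionality, row in matrix.items():
        for principle, mark in row.items():
            if mark == "+":
                yield functionality, principle

for pair in constructive_pairs(interaction):
    print(pair)
```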

    Proceedings of Abstracts Engineering and Computer Science Research Conference 2019

    © 2019 The Author(s). This is an open-access work distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. For further details please see https://creativecommons.org/licenses/by/4.0/. Note: the keynote "Fluorescence visualisation to evaluate effectiveness of personal protective equipment for infection control" is © 2019 Crown copyright and is therefore licensed under the Open Government Licence v3.0. Under this licence users are permitted to copy, publish, distribute and transmit the Information; adapt the Information; and exploit the Information commercially and non-commercially, for example by combining it with other Information or by including it in their own product or application. Where you do any of the above you must acknowledge the source of the Information in your product or application by including or linking to any attribution statement specified by the Information Provider(s) and, where possible, provide a link to this licence: http://www.nationalarchives.gov.uk/doc/open-government-licence/version/3/

    This book is the record of abstracts submitted and accepted for presentation at the Inaugural Engineering and Computer Science Research Conference held on 17th April 2019 at the University of Hertfordshire, Hatfield, UK. The conference is a local event that aims to bring together research students, staff and eminent external guests to celebrate Engineering and Computer Science research at the University of Hertfordshire. The ECS Research Conference aims to showcase the broad landscape of research taking place in the School of Engineering and Computer Science. The 2019 conference was articulated around three topical cross-disciplinary themes: Make and Preserve the Future; Connect the People and Cities; and Protect and Care.

    From the conception to the definition of a new service: the case of the European GeoPKDD project

    This thesis addresses the process that starts from the generation of a new technology-push service and leads to its definition, through an analysis of the work carried out for WIND Telecomunicazioni s.p.a. within the European GeoPKDD project. After a theoretical overview of new service development methodologies and of the peculiarities of technology-push development compared with the market-pull case, the work focuses on the process that, starting from the generation of new ideas based on GeoPKDD technology, concluded with the definition of the final specifications to be implemented in the final service.

    Energy efficient scheduling and allocation of tasks in sensor cloud

    Wireless Sensor Networks (WSNs) are a class of ad hoc networks capable of self-organization, in-network data processing, and unattended environment monitoring. A Sensor-Cloud is a cloud of heterogeneous WSNs; it is attractive because it can change the computation paradigm of wireless sensor networks. In a Sensor-Cloud, multiple WSN owners collaborate to provide a cloud service and thereby gain profit from underutilized WSNs. Sensor-Cloud users simply rent sensing services, which eliminates the cost of ownership and makes the use of large-scale sensor networks affordable. The nature of the Sensor-Cloud enables resource sharing and allows virtual sensors to be scaled up or down, and it abstracts different platforms, giving the impression of a homogeneous network. Furthermore, in a multi-application environment, users of different applications may require data based on different needs, so a scheduling scheme is required in WSNs that serves the maximum number of users across applications. We propose a scheduling scheme suitable for multiple applications in a Sensor-Cloud. The scheme is based on TDMA and considers fine-grained tasks. The performance evaluation shows better response time, throughput, and overall energy consumption compared with the base case we developed. To minimize energy consumption in the WSN, we also design an allocation scheme. In a Sensor-Cloud we consider sparsely and densely deployed WSNs working together, and within a single WSN there may be sparsely and densely deployed zones. Based on spatial correlation and with the help of Voronoi diagrams, we turn on a minimum number of sensors, increasing WSN lifetime while covering almost 100 percent of the area. The performance evaluation of the allocation scheme shows energy efficiency by selecting fewer nodes in comparison to other work --Abstract, page iv
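    The allocation idea in the abstract, activating only a minimal subset of deployed sensors while keeping near-complete coverage, can be illustrated with a simple greedy heuristic. The sketch below only illustrates that concept under assumed disk-shaped sensing ranges; it does not reproduce the thesis's spatial-correlation or Voronoi-based scheme, and all names and values are hypothetical.

```python
import itertools
import math

def covers(sensor, point, radius):
    """True if `point` lies within the sensing radius of `sensor`."""
    return math.dist(sensor, point) <= radius

def minimal_active_set(sensors, area_points, radius):
    """Greedily activate sensors until (almost) every sample point is covered."""
    uncovered = set(area_points)
    active = []
    while uncovered:
        # Pick the sensor that covers the most still-uncovered points.
        best = max(sensors, key=lambda s: sum(covers(s, p, radius) for p in uncovered))
        gained = {p for p in uncovered if covers(best, p, radius)}
        if not gained:  # remaining points are outside every sensor's range
            break
        active.append(best)
        uncovered -= gained
    return active

# Toy example: a 10x10 field sampled on a unit grid, five deployed sensors.
field = list(itertools.product(range(10), range(10)))
deployed = [(2, 2), (2, 7), (7, 2), (7, 7), (5, 5)]
print(minimal_active_set(deployed, field, radius=4.0))
```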

    Business Intelligence Through Personalised Location-Aware Service Delivery


    A Survey on Array Storage, Query Languages, and Systems

    Since scientific investigation is one of the most important providers of massive amounts of ordered data, there is a renewed interest in array data processing in the context of Big Data. To the best of our knowledge, a unified resource that summarizes and analyzes array processing research over its long existence is currently missing. In this survey, we provide a guide for past, present, and future research in array processing. The survey is organized along three main topics. Array storage discusses all the aspects related to array partitioning into chunks. The identification of a reduced set of array operators to form the foundation for an array query language is analyzed across multiple such proposals. Lastly, we survey real systems for array processing. The result is a thorough survey on array data storage and processing that should be consulted by anyone interested in this research topic, independent of experience level. The survey is not complete, though; we greatly appreciate pointers towards any work we might have forgotten to mention. Comment: 44 pages
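    To make the storage topic concrete, the sketch below shows the basic idea of regular (aligned) chunking: the array is partitioned into fixed-shape tiles, and the chunk holding a cell is found by integer-dividing its coordinates by the chunk shape. The chunk shape, helper names, and example values are illustrative and not taken from any particular system covered by the survey.

```python
import numpy as np

def chunk_of(coords, chunk_shape):
    """Return the index of the chunk that holds the cell at `coords`."""
    return tuple(c // s for c, s in zip(coords, chunk_shape))

def split_into_chunks(array, chunk_shape):
    """Yield (chunk_index, chunk_view) pairs for a regular, aligned chunking."""
    starts_per_dim = [range(0, dim, step) for dim, step in zip(array.shape, chunk_shape)]
    for chunk_index in np.ndindex(*[len(r) for r in starts_per_dim]):
        starts = [r[i] for r, i in zip(starts_per_dim, chunk_index)]
        region = tuple(slice(s, s + step) for s, step in zip(starts, chunk_shape))
        yield chunk_index, array[region]

data = np.arange(16).reshape(4, 4)              # a tiny 4x4 array
for index, chunk in split_into_chunks(data, (2, 2)):
    print(index, chunk.ravel())
print(chunk_of((3, 1), (2, 2)))                 # cell (3, 1) lives in chunk (1, 0)
```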

    Many-Task Computing and Blue Waters

    This report discusses many-task computing (MTC) generically and in the context of the proposed Blue Waters system, which is planned to be the largest NSF-funded supercomputer when it begins production use in 2012. The aim of this report is to inform the BW project about MTC, including understanding aspects of MTC applications that can be used to characterize the domain and understanding the implications of these aspects for middleware and policies. Many MTC applications do not neatly fit the stereotypes of high-performance computing (HPC) or high-throughput computing (HTC) applications. Like HTC applications, MTC applications are by definition structured as graphs of discrete tasks, with explicit input and output dependencies forming the graph edges. However, MTC applications have significant features that distinguish them from typical HTC applications; in particular, different engineering constraints for hardware and software must be met in order to support these applications. HTC applications have traditionally run on platforms such as grids and clusters, through either workflow systems or parallel programming systems. MTC applications, in contrast, will often demand a short time to solution, may be communication intensive or data intensive, and may comprise very short tasks. Therefore, hardware and software for MTC must be engineered to support the additional communication and I/O and must minimize task dispatch overheads. The hardware of large-scale HPC systems, with its high degree of parallelism and support for intensive communication, is well suited for MTC applications. However, HPC systems often lack a dynamic resource-provisioning feature, are not ideal for task communication via the file system, and have an I/O system that is not optimized for MTC-style applications. Hence, additional software support is likely to be required to gain full benefit from the HPC hardware.
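    The structure the report attributes to MTC applications, discrete tasks whose explicit input and output dependencies form the edges of a graph, can be sketched with a small dependency-ordered dispatcher. The task names below are hypothetical, and the sequential loop stands in for the dispatch machinery that real MTC middleware would provide.

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

def dispatch(task_name):
    """Stand-in for handing a task to an execution resource."""
    print(f"dispatching {task_name}")

# Each task maps to the set of tasks whose outputs it consumes (hypothetical names).
dependencies = {
    "preprocess": set(),
    "simulate_a": {"preprocess"},
    "simulate_b": {"preprocess"},
    "aggregate":  {"simulate_a", "simulate_b"},
}

# Dispatch tasks in an order that respects every input/output dependency edge.
for task in TopologicalSorter(dependencies).static_order():
    dispatch(task)
```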