8,209 research outputs found

    Technology adoption in the BIM implementation for lean architectural practice

    Justification for Research: Construction companies face barriers and challenges in BIM adoption, as there is no clear guidance or best-practice study from which they can learn and build up their capacity for BIM use in order to increase productivity, efficiency, and quality, attain competitive advantage in the global market, and achieve environmental sustainability targets. Purpose: This paper presents a comprehensive and systematic evaluation of the relevant BIM technologies as part of BIM adoption and implementation, and demonstrates how efficiency gains have been achieved towards a lean architectural practice. Design/Methodology/Approach: The research was undertaken through a Knowledge Transfer Partnership (KTP) project between the University of Salford and John McCall Architects (JMA), a small and medium-sized enterprise (SME) based in Liverpool. The overall aim of the KTP was to develop lean design practice through BIM adoption and implementation. The implementation approach took a socio-technical view: it considered not only the implementation of the technology but also the socio-cultural environment that provides the context for its implementation. The technology adoption methodology within this approach was action-research-oriented qualitative and quantitative research for discovery, comparison, and experimentation, as the KTP project with JMA provided an environment for "learning by doing". Findings: The research showed that BIM technology adoption should be undertaken bottom-up rather than top-down for successful change management and for dealing with resistance to change. As a result of the BIM technology adoption, efficiency gains were achieved in the pilot projects, and the design process was improved through the elimination of waste and the generation of value. Originality/Value: Successful BIM adoption needs an implementation strategy, and at the operational level professional guidelines are required as part of that strategy. This paper introduces a systematic approach to BIM technology adoption based on a case-study implementation, and it provides an operational-level guideline for other SME architectural practices.

    The Design of a System Architecture for Mobile Multimedia Computers

    This chapter discusses the system architecture of a portable computer, called the Mobile Digital Companion, which provides energy-efficient support for handling multimedia applications. Because battery life is limited and battery weight is an important factor in the size and weight of the Mobile Digital Companion, energy management plays a crucial role in the architecture. As the Companion must remain usable in a variety of environments, it has to be flexible and adaptable to various operating conditions. The Mobile Digital Companion has an unconventional architecture that saves energy by using system decomposition at different levels of the architecture and exploits locality of reference with dedicated, optimised modules. The approach is based on dedicated functionality and the extensive use of energy-reduction techniques at all levels of system design. The system has an architecture with a general-purpose processor accompanied by a set of heterogeneous autonomous programmable modules, each providing an energy-efficient implementation of dedicated tasks. A reconfigurable internal communication network switch exploits locality of reference and eliminates wasteful data copies.
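    A small energy-accounting model can illustrate the architectural idea of dedicated modules connected by a switch that avoids wasteful data copies. The following minimal Python sketch uses invented module names and per-operation costs; the Companion's actual hardware and figures are not given in the abstract:

```python
# Hypothetical sketch, not the Companion's actual software: autonomous modules
# with per-task energy costs, plus a switch that moves data directly between
# them instead of copying it through a central processor.
from dataclasses import dataclass

@dataclass
class Module:
    name: str
    energy_per_op_nj: float      # assumed per-operation cost in nanojoules
    consumed_nj: float = 0.0

    def run(self, ops: int) -> None:
        self.consumed_nj += ops * self.energy_per_op_nj

@dataclass
class Switch:
    """Routes data between modules; only the switch pays for the move."""
    energy_per_byte_nj: float
    consumed_nj: float = 0.0

    def transfer(self, src: Module, dst: Module, nbytes: int) -> None:
        # No intermediate CPU copy: the cost is the switch traversal alone.
        self.consumed_nj += nbytes * self.energy_per_byte_nj

# Example: a dedicated video module decodes a frame locally and ships the
# result straight to the display module.
video = Module("video-decoder", energy_per_op_nj=0.5)
display = Module("display", energy_per_op_nj=0.2)
net = Switch(energy_per_byte_nj=0.05)

video.run(ops=100_000)
net.transfer(video, display, nbytes=64_000)
display.run(ops=10_000)

total = video.consumed_nj + display.consumed_nj + net.consumed_nj
print(f"total energy: {total / 1e6:.3f} mJ")
```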

    Parallel processing and expert systems

    Whether it be monitoring the thermal subsystem of Space Station Freedom or controlling the navigation of the autonomous rover on Mars, NASA missions in the 1990s cannot enjoy an increased level of autonomy without the efficient implementation of expert systems. Merely increasing the computational speed of uniprocessors may not guarantee that real-time demands are met for larger systems. Speedup via parallel processing must be pursued alongside the optimization of sequential implementations. Prototypes of parallel expert systems have been built at universities and industrial laboratories in the U.S. and Japan. The state-of-the-art research in progress related to parallel execution of expert systems is surveyed. The survey discusses multiprocessors for expert systems, parallel languages for symbolic computations, and mapping expert systems to multiprocessors. Results to date indicate that the parallelism achieved for these systems is small. The main reasons are that (1) the body of knowledge applicable in any given situation and the amount of computation executed by each rule firing are small, (2) dividing the problem-solving process into relatively independent partitions is difficult, and (3) implementation decisions that enable expert systems to be incrementally refined hamper compile-time optimization. To obtain greater speedups, data parallelism and application parallelism must be exploited.
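    Data parallelism, one of the routes to greater speedup named above, is easy to illustrate: partition the working memory across workers and match all rules against each partition independently. A toy Python sketch with made-up rules and facts, not drawn from any of the surveyed systems:

```python
# Illustrative only: data-parallel rule matching, where each worker checks a
# partition of the working memory against every rule.
from concurrent.futures import ProcessPoolExecutor
from itertools import chain

# Two toy rules: a name plus a predicate over a fact (a dict).
RULES = [
    ("overheat",  lambda f: f["sensor"] == "thermal" and f["value"] > 90),
    ("low-power", lambda f: f["sensor"] == "battery" and f["value"] < 10),
]

def match_partition(facts):
    """Fire every rule whose condition holds for a fact in this partition."""
    return [(name, fact) for fact in facts for name, cond in RULES if cond(fact)]

if __name__ == "__main__":
    facts = [{"sensor": "thermal", "value": 95},
             {"sensor": "battery", "value": 40},
             {"sensor": "battery", "value": 5}]
    # Split working memory across workers; matching proceeds independently.
    parts = [facts[i::2] for i in range(2)]
    with ProcessPoolExecutor(max_workers=2) as pool:
        fired = list(chain.from_iterable(pool.map(match_partition, parts)))
    print(fired)  # [('overheat', ...), ('low-power', ...)]
```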

    FFT-Based Deep Learning Deployment in Embedded Systems

    Deep learning has demonstrated its power in many application domains, especially in image and speech recognition. As the backbone of deep learning, deep neural networks (DNNs) consist of multiple layers of various types with hundreds to thousands of neurons. Embedded platforms are now becoming essential for deep learning deployment due to their portability, versatility, and energy efficiency. The large model size of DNNs, while providing excellent accuracy, also burdens embedded platforms with intensive computation and storage. Researchers have investigated reducing DNN model size with negligible accuracy loss. This work proposes a Fast Fourier Transform (FFT)-based DNN training and inference model suitable for embedded platforms, with reduced asymptotic complexity of both computation and storage, which distinguishes our approach from existing ones. We develop the training and inference algorithms with FFT as the computing kernel and deploy the FFT-based inference model on embedded platforms, achieving extraordinary processing speed. Comment: Design, Automation, and Test in Europe (DATE). For source code, please contact Mahdi Nazemi at [email protected].
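    The abstract does not spell out the layer construction. One common way to obtain FFT-based layers with reduced asymptotic complexity is to constrain weight matrices to be circulant, so a matrix-vector product becomes a circular convolution computable in O(n log n) time with O(n) storage. A small NumPy sketch of that idea, offered as an assumption rather than the authors' actual method:

```python
# Sketch of the circulant-weight trick: only the first column of the weight
# matrix is stored (n values instead of n^2), and the product is computed via
# FFT as a circular convolution.
import numpy as np

n = 8
c = np.random.randn(n)   # first column: the only stored weights
x = np.random.randn(n)   # layer input

# Dense reference: explicit circulant matrix, C[i, j] = c[(i - j) % n].
C = np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)])
y_dense = C @ x          # O(n^2) multiply

# FFT path: same result in O(n log n) via the convolution theorem.
y_fft = np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))

assert np.allclose(y_dense, y_fft)
print(y_fft)
```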

    Mechanical characterization of a new architectural concrete with glass-recycled aggregate

    Concrete is a material which is widely used in architecture, not only for structural purposes but also for architectural elements, thanks to its versatility and excellent performance. However, the manufacturing of this material as a mixture of water, cement, and fine and coarse aggregate comes at a high environmental cost, such as gas emissions, among other things. This is the reason why different alternatives are being proposed to replace coarse aggregates with other recycled materials, as they are one of the least sustainable components of the mixture in terms of extraction. One of these alternatives is recycled glass from drinking bottles, crushed into small grains and mixed in the same proportions as regular aggregates. This study proposes the mechanical characterization of a new architectural concrete mixture using white Lafarge cement and glass-recycled aggregates. The proposed concrete is intended for architectural elements such as façade panels rather than structural elements. The mechanical evaluation of this new material is carried out through a set of experimental tests under compression and bending, comparing three different ratios of glass aggregate in the mixture. Peer Reviewed. Postprint (published version).
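    For orientation, the two test types mentioned reduce to standard strength formulas: compressive strength is peak load over loaded area, and three-point flexural strength is f = 3PL / (2bd^2). A worked Python example with invented loads and specimen dimensions, not the study's measurements:

```python
# Hypothetical worked example of the formulas behind the two tests.

# Compression on a 150 mm cube: strength = peak load / loaded area.
P_compression_N = 540_000                     # assumed peak load, N
side_mm = 150
f_c = P_compression_N / (side_mm * side_mm)   # MPa (N/mm^2) -> 24.0

# Three-point bending on a prism: f = 3 P L / (2 b d^2).
P_bend_N = 9_000                              # assumed peak load, N
L_mm, b_mm, d_mm = 300, 100, 100              # span, width, depth
f_flex = 3 * P_bend_N * L_mm / (2 * b_mm * d_mm**2)  # MPa -> 4.05

print(f"compressive strength: {f_c:.1f} MPa")
print(f"flexural strength:    {f_flex:.2f} MPa")
```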

    Massively Parallel Sort-Merge Joins in Main Memory Multi-Core Database Systems

    Two emerging hardware trends will dominate database system technology in the near future: increasing main memory capacities of several TB per server and massively parallel multi-core processing. Many algorithmic and control techniques in current database technology were devised for disk-based systems where I/O dominated performance. In this work we take a new look at the well-known sort-merge join which, so far, has not been in the focus of research on scalable, massively parallel multi-core data processing, as it was deemed inferior to hash joins. We devise a suite of new massively parallel sort-merge (MPSM) join algorithms that are based on partial partition-based sorting. Contrary to classical sort-merge joins, our MPSM algorithms do not rely on a hard-to-parallelize final merge step to create one complete sort order. Rather, they work on the independently created runs in parallel. This way our MPSM algorithms are NUMA-affine, as all the sorting is carried out on local memory partitions. An extensive experimental evaluation on a modern 32-core machine with one TB of main memory demonstrates the competitive performance of MPSM on large main-memory databases with billions of objects. It scales (almost) linearly in the number of employed cores and clearly outperforms competing hash join proposals; in particular, it outperforms the "cutting-edge" Vectorwise parallel query engine by a factor of four. Comment: VLDB201
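    The key idea is that sorted runs are never globally merged; instead, each local run of R is merge-joined against every run of S. A sequential toy version in Python (the paper's NUMA-aware, parallel implementation is far more involved):

```python
# Toy sketch of the MPSM idea: sort runs independently, skip the global
# merge, and merge-join each run of R against every run of S.

def merge_join(run_r, run_s):
    """Merge-join two individually sorted runs on their key (first field)."""
    out, i, j = [], 0, 0
    while i < len(run_r) and j < len(run_s):
        kr, ks = run_r[i][0], run_s[j][0]
        if kr < ks:
            i += 1
        elif kr > ks:
            j += 1
        else:
            # Emit all S tuples sharing this key, then advance R.
            j2 = j
            while j2 < len(run_s) and run_s[j2][0] == kr:
                out.append((run_r[i], run_s[j2]))
                j2 += 1
            i += 1
    return out

R = [(5, "r1"), (1, "r2"), (7, "r3"), (1, "r4")]
S = [(1, "s1"), (7, "s2"), (3, "s3"), (5, "s4")]

# Each worker sorts only its own chunk; the runs stay separate.
runs_r = [sorted(R[:2]), sorted(R[2:])]
runs_s = [sorted(S[:2]), sorted(S[2:])]

# Every R run is joined against every S run (in MPSM this happens in
# parallel, with the sorting done on NUMA-local partitions).
result = [p for rr in runs_r for rs in runs_s for p in merge_join(rr, rs)]
print(result)
```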

    Predicting Project Success in Residential Building Projects (RBPs) using Artificial Neural Networks (ANNs)

    Due to the urban population's growth and the increasing demand for the renewal of old houses, the successful completion of Residential Building Projects (RBPs) has great socioeconomic importance. This study aims to propose a framework to predict the success of RBPs in the construction phase. A three-step method was applied: (1) identifying and ranking the Critical Success Factors (CSFs) involved in RBPs using the Delphi method, (2) identifying and selecting success criteria and defining the Project Success Index (PSI), and (3) developing an ANN model to predict the success of RBPs according to the status of the CSFs during the construction phase. The model was trained and tested using data extracted from 121 RBPs in Tehran. The main findings of this study are a prioritized list of the most influential success criteria and an efficient ANN model serving as a Decision Support System (DSS) in RBPs, allowing projects to be monitored in advance and necessary corrective actions to be taken. Compared with previous studies on the success assessment of projects, this study focuses on providing an applicable method for predicting the success of RBPs. DOI: 10.28991/cej-2020-03091612
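    As a rough illustration of step (3), a small feed-forward network can map CSF scores to a success index. The sketch below uses scikit-learn with entirely synthetic data; the study's actual CSFs, network architecture, and the data from the 121 Tehran projects are not reproduced here:

```python
# Schematic sketch with invented data: an MLP mapping CSF scores to a
# Project Success Index, in the spirit of the DSS the study describes.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_projects, n_csfs = 121, 10                        # assumed: 10 CSF scores
X = rng.uniform(1, 5, size=(n_projects, n_csfs))    # e.g. 1-5 Likert ratings
# Toy PSI in [0, 1]: driven by the mean CSF score plus a little noise.
psi = X.mean(axis=1) / 5 + rng.normal(0, 0.02, n_projects)

X_tr, X_te, y_tr, y_te = train_test_split(X, psi, test_size=0.2,
                                          random_state=0)
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
model.fit(X_tr, y_tr)
print(f"R^2 on held-out projects: {model.score(X_te, y_te):.2f}")
```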

    A high temperature fatigue and structures testing facility

    As man strives for higher levels of sophistication in air and space transportation, awareness of the need for accurate life and material behavior predictions for advanced propulsion system components is heightened. Such sophistication will require complex operating conditions and advanced materials to meet goals in performance, thrust-to-weight ratio, and fuel efficiency. Accomplishing these goals will require that components be designed to use a high percentage of the material's ultimate capabilities, which only complicates life and material behavior prediction. An essential component of material behavior model development is the underlying experimentation that must occur to identify phenomena. To support this experimentation, the NASA Lewis Research Center's High Temperature Fatigue and Structures Laboratory has been expanded significantly. Several new materials testing systems have been added, as well as an extensive computer system. The intent of this paper is to present an overview of the laboratory and to discuss specific aspects of the test systems. A limited discussion of computer capabilities is also presented.