An integrated product and process information modelling system for on-site construction
The inadequate infrastructure that exists for seamless project-team communication has its roots in the problems arising from fragmentation and the lack of effective co-ordination between stages of the construction process. The use of disparate computer-aided engineering (CAE) systems by most disciplines is one of the enduring legacies of this problem, and it makes information exchange between construction team members difficult and, in some cases, impossible. The construction industry recognises the importance of integrating modelling techniques to create an integrated product and process model applicable to all stages of a construction project's life cycle. However, improved methods are still needed to assist developers in defining information model structures, and current modelling methods and standards provide only limited assistance at various stages of the information modelling process.
This research investigates the role of system integration by reviewing product and process information models, current modelling practices, and modelling standards in the construction industry, and compares them with similar practices in other industries, in terms of both product and process representation and model content. It further reviews application development tools and information system requirements that could support a suitable integrated information structure for an integrated product and process model for design and construction, based on concurrent engineering principles. The functional and information perspectives of the integrated model, which were represented using IDEF0 and the Unified Modelling Language (UML), provided the basis for developing a prototype hyper-integrated product and process information modelling system (HIPPY). Details of the integrated conceptual model's implementation, the practical application of the prototype system, using house-building as an example, and an evaluation by industry practitioners are also presented. It is concluded that the effective integration of product and process information models is a key component of implementing concurrent engineering in construction, and a vital step towards providing richer information representation, better efficiency, and the flexibility to support life-cycle information management during the construction stage of small to medium-sized building projects
Mobile collaborative working environment of product design
In response to the arrival of new Web/Internet environments, one of the most attractive challenges in current research is to exploit wireless computing technologies in collaborative product design, and hence to build a ubiquitous mobile information system that enables collaborative product design within a mobile environment. The literature review reveals, however, that although progress in mobile technologies and wireless networks has largely changed the way people access the Internet, little has been achieved in mobile computing for collaborative product design. The reason is that, owing to the distinct features of mobile devices and wireless networks (small display screens, limited bandwidth, unreliable wireless connections, etc.), the methodologies and technologies used in stationary networks are not always applicable to mobile systems. The aim of this research is to establish a wireless Internet-based collaborative working environment for product design through the combination of multiple technologies: Web services, parametric design, the Semantic Web, and agent and Flex technologies. Web services are used to create, deploy, and manage the distributed resources, implementing design-resource integration in a platform-independent manner. In addition, Semantic Web technology is used to create a general knowledge base. This approach has two components: (1) an ontology represents abstract views of product data, and (2) added semantic rules represent relationships among product data. An ontology-based description model is thus proposed to facilitate the expression and organisation of product information, in order to manage and deploy the distributed design resources
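The two components of the knowledge base can be illustrated in a few lines. The sketch below is ours, not the thesis's actual schema: plain Python classes stand in for ontology concepts, and a simple predicate stands in for a richer semantic (e.g. SWRL-style) rule; all names and the rule itself are illustrative.

```python
# Minimal sketch of an ontology-style product description model. Classes play
# the role of ontology concepts; the predicate below plays the role of an
# added semantic rule that derives a relationship among product data.

from dataclasses import dataclass, field

@dataclass
class Feature:
    name: str
    params: dict  # parametric design variables, e.g. {"radius": 5.0}

@dataclass
class Part:
    name: str
    features: list = field(default_factory=list)

# A "semantic rule": two parts are mating candidates if they share a feature
# with the same name and parameter values (a stand-in for richer rules).
def mating_candidates(a: Part, b: Part):
    shared = []
    for fa in a.features:
        for fb in b.features:
            if fa.name == fb.name and fa.params == fb.params:
                shared.append(fa.name)
    return shared

shaft = Part("shaft", [Feature("cylinder", {"radius": 5.0})])
hub = Part("hub", [Feature("cylinder", {"radius": 5.0})])
print(mating_candidates(shaft, hub))  # ['cylinder']
```

In a full system the rule would live in the knowledge base alongside the ontology, so distributed design clients could query derived relationships rather than recompute them.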
Business rules based legacy system evolution towards service-oriented architecture
Enterprises can be empowered to live up to the potential of becoming dynamic, agile and real-time. Service orientation is emerging from the amalgamation of a number of key business, technology and cultural developments. Three essential trends in particular are coming together to create a new revolutionary breed of enterprise, the service-oriented enterprise (SOE): (1) the continuous performance management of the enterprise; (2) the emergence of business process management; and (3) advances in the standards-based service-oriented infrastructures.
This thesis focuses on this emerging three-layered architecture: a service-oriented architecture framework at its base, a process layer that brings technology and business together, and a corporate performance layer that continually monitors and improves the performance indicators of global enterprises. This architecture provides a novel business context in which to apply the important technical idea of service orientation, moving it from an interesting tool for engineers to a vehicle for business managers to fundamentally improve their businesses
Software lock elision for x86 machine code
More than a decade after becoming a topic of intense research, there is still no transactional memory hardware, nor any examples of software transactional memory use outside the research community. Using software transactional memory in large pieces of software requires copious source code annotations and often means that standard compilers and debuggers can no longer be used. At the same time, the overheads associated with software transactional memory fail to motivate programmers to expend the effort needed to use it. The only way around these overheads, in the case of general unmanaged code, is the anticipated availability of hardware support. On the other hand, architects are unwilling to devote power and area budgets in mainstream microprocessors to hardware transactional memory, pointing to transactional memory being a "niche" programming construct. A deadlock has thus ensued that is blocking transactional memory use and experimentation in the mainstream.
This dissertation covers the design and construction of a software transactional memory runtime system called SLE_x86 that can potentially break this deadlock by decoupling transactional memory from the programs using it. Unlike most other STM designs, its core design principle is transparency rather than performance. SLE_x86 operates at the level of x86 machine code, making it immediately applicable to binaries for the popular x86 architecture. The only requirement is that the binary synchronise using known locking constructs or calls, such as those in the Pthreads or OpenMP libraries. SLE_x86 provides speculative lock elision (SLE) entirely in software, executing critical sections in the binary using transactional memory. Optionally, the critical sections can also be executed without transactions by acquiring the protecting lock.
The dissertation makes a careful analysis of the impact on performance of the demands of the x86 memory consistency model and the need to transparently instrument x86 machine code. It shows that both of these problems can be overcome to reach a reasonable level of performance, where transparent software transactional memory can perform better than a lock. SLE_x86 can ensure that programs are ready for transactional memory in any form, without being explicitly written for it
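The control flow of speculative lock elision can be caricatured briefly. The following is a toy Python sketch of the idea only, under our own simplifications: writes go to a buffer instead of taking the lock, and the buffered writes commit only if the lock stayed free. It illustrates the elide-or-fall-back decision, not the x86-level instrumentation or memory-model machinery SLE_x86 actually implements.

```python
# Toy sketch of speculative lock elision: attempt the critical section
# against a write buffer; commit only if the protecting lock stayed free,
# otherwise fall back to conventional locking.

import threading

shared = {"counter": 0}
lock = threading.Lock()

def elide(critical_section):
    if not lock.locked():                 # elide only if the lock looks free
        buffer = {}
        view = dict(shared)               # speculative read snapshot
        critical_section(view, buffer)
        if not lock.locked():             # "validate": lock is still free
            shared.update(buffer)         # commit buffered writes
            return "elided"
    with lock:                            # fallback: acquire the real lock
        buffer = {}
        critical_section(dict(shared), buffer)
        shared.update(buffer)
    return "locked"

def increment(view, buffer):
    buffer["counter"] = view["counter"] + 1

print(elide(increment), shared["counter"])  # elided 1 (when uncontended)
```

A real implementation validates the whole read set against concurrent writers and must respect the x86 memory consistency model; the lock check above is only a placeholder for that validation step.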
Granularity in Large-Scale Parallel Functional Programming
This thesis demonstrates how to reduce the runtime of large non-strict functional programs using parallel evaluation. The parallelisation of several programs shows the importance of granularity, i.e. the computation costs of program expressions. Granularity is studied both at a practical level, by presenting and measuring runtime granularity-improvement mechanisms, and at a more formal level, by devising a static granularity analysis. By parallelising several large functional programs, this thesis demonstrates for the first time the advantages of combining lazy and parallel evaluation on a large scale: laziness aids modularity, while parallelism reduces runtime. One of the parallel programs is the Lolita system which, with more than 47,000 lines of code, is the largest existing parallel non-strict functional program. A new mechanism for parallel programming, evaluation strategies, to which this thesis contributes, is shown to be useful in this parallelisation. Evaluation strategies simplify parallel programming by separating algorithmic code from code specifying dynamic behaviour. For large programs the abstraction provided by functions is maintained by using a data-oriented style of parallelism, which defines parallelism over intermediate data structures rather than inside the functions.
A highly parameterised simulator, GRANSIM, has been constructed collaboratively and is discussed in detail in this thesis. GRANSIM is a tool for architecture-independent parallelisation and a testbed for implementing runtime-system features of the parallel graph reduction model. By providing an idealised as well as an accurate model of the underlying parallel machine, GRANSIM has proven to be an essential part of an integrated parallel software engineering environment. Several parallel runtime-system features, such as granularity-improvement mechanisms, have been tested via GRANSIM. It is publicly available and in active use at several universities worldwide.
In order to provide granularity information this thesis presents an inference-based static granularity analysis. This analysis combines two existing analyses, one for cost and one for size information. It determines an upper bound for the computation costs of evaluating an expression in a simple strict higher-order language. By exposing recurrences during cost reconstruction and using a library of recurrences and their closed forms, it is possible to infer the costs for some recursive functions. The possible performance improvements are assessed by measuring the parallel performance of a hand-analysed and annotated program
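The closed-form step of such a cost analysis can be illustrated with a small sketch. This is our own illustration, not the thesis's analysis: the "library of recurrences" is reduced to one entry, and the constants and the list-length example are invented for the demonstration.

```python
# Sketch of the closed-form step in a static cost analysis: a linear
# recurrence recognised during cost reconstruction is replaced by its
# closed form, looked up in a (here, one-entry) recurrence library.

RECURRENCE_LIBRARY = {
    # T(0) = base;  T(n) = step + T(n-1)   =>   T(n) = base + step * n
    "linear": lambda base, step, n: base + step * n,
}

def cost_of_length(n, base=1, step=2):
    """Upper bound on the cost of a list-length style linear recursion."""
    return RECURRENCE_LIBRARY["linear"](base, step, n)

# Cross-check the inferred bound against direct evaluation of the recurrence.
def direct(n, base=1, step=2):
    return base if n == 0 else step + direct(n - 1, base, step)

print(cost_of_length(10), direct(10))  # 21 21
```

The point of the closed form is exactly this check: the analysis can report a cost bound for a recursive function without unrolling the recursion at analysis time.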
Development of Reaction Injection Moulded Polyurethane Foam Including Assessment of Densification and Reinforcement for use as a Structural Core in Rotationally Moulded Products
To improve the performance of specific rotomoulded products being developed at a local company, reinforcement of the hollow core of the products with reaction injection moulded polyurethane (RIM PU) foam was investigated. Improvement of the foam mechanical properties was also investigated, with density variation and the addition of short glass fibre reinforcement.
Testing showed the foam's mechanical properties were not directly proportional to density. When foam density was doubled from 300 to 600 kg/m3, the tensile strength increased by a factor of 2.7 and the modulus by a factor of 2.5. For ME1020 (fibre type) 6 mm chopped-fibre-reinforced foam, these increases were larger, at factors of 3.0 and 2.6 for strength and modulus, respectively. For 300 kg/m3 foam, fibre made a negligible difference to the tensile strength, but the ME1020-reinforced foam was found to have a 29% higher modulus than the neat foam at the same density (for 5 wt% fibre composites). The 101C (fibre type) reinforced foam performed poorly, even showing a decrease in strength compared to the neat foam at 600 kg/m3 (for 5 wt% fibre composites). The bending creep properties of reinforced foam were found to be better than those of the neat foam in most cases, with ME1020 fibre composite foam performing better than 101C fibre-reinforced composites in all cases. 5 wt% ME1020 fibre-reinforced foam was found to have impact strengths over twice those of neat foams at the same density. Impact strength improvements were also seen for 101C fibre-reinforced foam, but to a lesser extent, for both foam densities tested.
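The reported factors can be read as power-law density scaling, sigma ~ rho^n; the power-law form is our assumption for illustration, not a claim made in the thesis. Under that assumption, the exponents implied by doubling the density follow directly:

```python
# Exponent n of an assumed power law (property ~ density**n), computed from
# the ratios reported for the neat foam when density doubles (300 -> 600 kg/m3).

import math

def scaling_exponent(property_ratio, density_ratio=2.0):
    return math.log(property_ratio) / math.log(density_ratio)

print(round(scaling_exponent(2.7), 2))  # strength: n ~ 1.43
print(round(scaling_exponent(2.5), 2))  # modulus:  n ~ 1.32
```

Exponents above 1 are the quantitative form of the observation that the properties are not directly proportional to density: doubling density more than doubles both strength and modulus.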
Morphological analysis of foam tensile fracture surfaces was undertaken. Features such as cell elongation and fibre alignment with the foam flow direction were consistent with the foam literature, but some unique features were also observed, including a localised 'string' cell-packing trend and microscopic areas of localised plastic deformation in cell walls, visible as wrinkled surfaces on the foam cell walls.
Modification of the (rotomoulded) skin-to-foam interface was investigated, as this parameter will likely affect the service performance of the whole product. Various methods of increasing the skin/foam interfacial shear strength were trialled, and large improvements were attained with the methods developed. These included adding particles to the rotomoulding charge; the particles become embedded in the inner skin of the moulded part, protrude from the inner surface, and 'key' into the foam that fills the product's hollow core. Concepts for equipment that could further improve interfacial shear strength were also proposed. One is an innovative modification to currently available plasma-treatment equipment, which could be used to treat the inner surface of hollow products to improve the bonding between the inner rotomoulded surface and the foam. Another may oxidise the inner surface of the rotomoulded part, but only at the very end of the rotomoulding cycle, so that the bulk polymer is not degraded. The purpose of this deliberate oxidation is to achieve results similar to those attained by the plasma or flame treatments currently used by industry to improve the wettability of PE products
Generating mock skeletons for lightweight Web service testing : a thesis presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Computer Science at Massey University, Manawatū, New Zealand
Modern application development allows applications to be composed from lightweight HTTP services. Testing such an application requires the availability of the services it makes requests to. However, continued access to dependent services during testing may be restrained, making adequate testing a significant and non-trivial engineering challenge. The concept of service virtualisation is gaining popularity for testing such applications in isolation. It is the practice of simulating the behaviour of dependent services by synthesising responses using semantic models inferred from recorded traffic. Replacing services with their respective mocks is therefore useful for addressing their absence and allowing application testing to proceed.
In reality, however, it is unlikely that fully automated service virtualisation solutions can produce highly accurate proxies. We therefore recommend using service virtualisation to infer some attributes of HTTP service responses, and we acknowledge that engineers often want to fine-tune the result. This requires algorithms that produce readily interpretable and customisable output. We assume that if service virtualisation is based on simple logical rules, engineers will be able to understand and customise those rules. In this regard, symbolic machine learning approaches are worth investigating because of the traceability and interpretability of their results.
Accordingly, this thesis examines the appropriateness of symbolic machine learning algorithms to automatically synthesise HTTP services' mock skeletons from network traffic recordings. We consider four commonly used symbolic techniques: the C4.5 decision tree algorithm, the RIPPER and PART rule learners, and the OCEL description logic learning algorithm. The experiments are performed employing network traffic datasets extracted from a few different successful, large-scale HTTP services. The experimental design further focuses on the generation of reproducible results.
The chosen algorithms demonstrate that highly accurate, human-readable semantic models can be trained to predict the key aspects of HTTP service responses, such as the status and response headers. Human-readable logic makes the response properties simpler to interpret. These mock skeletons can then be easily customised to create mocks that generate service responses suitable for testing
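The shape of such a mock skeleton can be sketched with a deliberately simple rule learner. This sketch is ours, in the spirit of PART/RIPPER-style rule induction but far cruder: the traffic sample, the single request feature, and the majority-vote rule are all invented for illustration.

```python
# Sketch: induce a human-readable rule for one response attribute (the HTTP
# status) from recorded traffic, then use the rule table as a mock skeleton.

from collections import Counter, defaultdict

traffic = [
    {"method": "GET",    "has_auth": True,  "status": 200},
    {"method": "GET",    "has_auth": False, "status": 401},
    {"method": "DELETE", "has_auth": True,  "status": 204},
    {"method": "GET",    "has_auth": True,  "status": 200},
    {"method": "DELETE", "has_auth": False, "status": 401},
]

def induce_rules(records, feature):
    """Map each value of `feature` to the majority response status."""
    by_value = defaultdict(Counter)
    for r in records:
        by_value[r[feature]][r["status"]] += 1
    return {v: counts.most_common(1)[0][0] for v, counts in by_value.items()}

rules = induce_rules(traffic, "has_auth")
for value, status in sorted(rules.items()):
    print(f"IF has_auth == {value} THEN status = {status}")

# The rule table is the "mock skeleton": readable, and easy to edit by hand
# before it is used to synthesise responses in tests.
def mock_response(request):
    return {"status": rules[request["has_auth"]]}
```

Because the learned artefact is a small table of IF/THEN rules rather than an opaque model, an engineer can inspect it, correct a wrong rule, or add a case the recorded traffic never exercised, which is precisely the customisability argument made above.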
Expert system prototype for hydraulic system design focusing on concurrent engineering aspects
Doctoral thesis, Universidade Federal de Santa Catarina, Centro Tecnológico