A Systematic Performance Study of Object Database Management Systems
Many previous performance benchmarks for Object Database Management Systems (ODBMSs) have typically used arbitrary sets of tests based on what their designers felt were the characteristics of engineering applications. Increasingly, however, ODBMSs are being used in non-engineering domains, such as financial trading, clinical healthcare, telecommunications network management, etc. Part of the reason for this is that the technology has matured over the past few years and has become a less risky choice for organisations looking for better ways to manage complex data. However, the development of suitable application- or industry-specific benchmarks, based on actual performance studies, has not paralleled this growth.
The research reported here approaches performance evaluation of ODBMSs pragmatically. It uses a combination of case studies and benchmark experiments to investigate the performance characteristics of ODBMSs for particular applications, following the successful use of this approach by Youssef [Youss93] for studying the performance of On- Line Transaction Processing (OLTP) applications for Relational Database Management Systems (RDBMSs).
Six case studies at five organisations show that organisations consider a wide range of factors when undertaking their own performance studies or benchmarks. Furthermore, none of the studied organisations considered using any public benchmarks. Six current and derived benchmarks also highlight statistically significant performance differences between three major commercial products: Objectivity/DB, ObjectStore and UniSQL. These benchmarks indicate the suitability of the products tested for particular application domains.
The research could not find any evidence at this time to support the concept of a generic or canonical performance workload for ODBMSs. This is demonstrated by the case studies and supported by the benchmark experiments. However, the research shows that performance benchmarks serve a very useful role in ODBMS evaluations and can help identify architectural and quality problems with products that would not otherwise be observed until significant application or system development was already in progress.
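The style of comparison the study describes, timing the same workload against several products and summarising the latency distributions, can be sketched as follows. The workloads and product names here are hypothetical stand-ins for illustration, not the benchmarks or systems used in the research.

```python
import statistics
import time

def benchmark(operation, runs=30):
    """Time one workload operation repeatedly; return per-run latencies in ms."""
    latencies = []
    for _ in range(runs):
        start = time.perf_counter()
        operation()
        latencies.append((time.perf_counter() - start) * 1000.0)
    return latencies

def summarise(name, latencies):
    """Reduce raw latencies to the summary statistics a comparison would use."""
    return {
        "product": name,
        "mean_ms": statistics.mean(latencies),
        "stdev_ms": statistics.stdev(latencies),
    }

# Hypothetical stand-ins for a real ODBMS traversal workload.
def traverse_product_a():
    sum(range(1000))

def traverse_product_b():
    sum(range(5000))

results = [
    summarise("ProductA", benchmark(traverse_product_a)),
    summarise("ProductB", benchmark(traverse_product_b)),
]
for r in results:
    print(f"{r['product']}: mean={r['mean_ms']:.3f} ms, stdev={r['stdev_ms']:.3f} ms")
```

A real study would follow the summary with a significance test (e.g. a t-test over the two latency samples) before claiming one product outperforms another.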
Decision-focussed resource modelling for design decision support
Resource management, including resource allocation, levelling, configuration and monitoring, has been recognised as critical to design decision making and has received increasing research interest in recent years. Different definitions, models and systems have been developed and published in the literature. One common issue with existing research is that resource modelling has focussed on the information view of resources. A few studies have acknowledged the importance of resource capability to design management, but none has addressed the evaluation of resource fitness to effectively support design decisions. This paper proposes a decision-focussed resource model framework that combines resource evaluation with resource information from multiple perspectives. A resource management system constructed on the resource model framework can provide functions for design engineers to efficiently search for and retrieve the best-fit resources (based on the evaluation results) to meet decision requirements. Thus, the system has the potential to provide improved decision-making performance compared with existing resource management systems.
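The "best fit" retrieval the paper describes, scoring candidate resources against decision requirements and returning the highest-ranked, can be sketched as below. The attribute names, weighting scheme and scoring rule are illustrative assumptions, not the paper's model.

```python
# Illustrative sketch: rank resources by fitness to a decision requirement.
# Attribute names and weights are assumptions for illustration only.

def fitness(resource, requirement, weights):
    """Weighted score of how well a resource's capabilities meet a requirement."""
    score = 0.0
    for attr, weight in weights.items():
        have = resource.get(attr, 0.0)
        need = requirement.get(attr, 0.0)
        # Full credit when capability meets or exceeds the need;
        # proportional credit otherwise.
        score += weight * (1.0 if have >= need else have / need if need else 1.0)
    return score

def best_fit(resources, requirement, weights):
    """Return the resource with the highest fitness score."""
    return max(resources, key=lambda r: fitness(r, requirement, weights))

resources = [
    {"name": "CAD workstation A", "capacity": 0.6, "availability": 0.9},
    {"name": "CAD workstation B", "capacity": 0.9, "availability": 0.8},
]
requirement = {"capacity": 0.8, "availability": 0.7}
weights = {"capacity": 0.6, "availability": 0.4}

print(best_fit(resources, requirement, weights)["name"])  # → CAD workstation B
```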
Technology adoption in the BIM implementation for lean architectural practice
Justification for Research: Construction companies face barriers and challenges in BIM adoption, as there is no clear guidance or body of best-practice studies from which they can learn and build up their capacity for BIM use in order to increase productivity, efficiency and quality, attain competitive advantage in the global market, and achieve environmental sustainability targets.
Purpose: This paper aims to provide a comprehensive and systematic evaluation and assessment of the relevant BIM technologies as part of BIM adoption and implementation, and to demonstrate how efficiency gains have been achieved towards a lean architectural practice.
Design/Methodology/Approach: The research is undertaken through a KTP (Knowledge Transfer Partnership) project between the University of Salford and John McCall Architects (JMA), an SME (Small to Medium Enterprise) based in Liverpool. The overall aim of the KTP is to develop lean design practice through BIM adoption and implementation. The overall BIM implementation approach takes a socio-technical view, in that it considers not only the implementation of technology but also the socio-cultural environment that provides the context for its implementation. The technology adoption methodology within the BIM implementation approach is action-research-oriented qualitative and quantitative research for discovery, comparison and experimentation, as the KTP project with JMA provides an environment for "learning by doing".
Findings: The research demonstrated that BIM technology adoption should be undertaken with a bottom-up rather than a top-down approach for successful change management and for dealing with resistance to change. As a result of the BIM technology adoption, efficiency gains were achieved through the pilot projects, and the design process was improved through the elimination of waste and the generation of value.
Originality/Value: Successful BIM adoption needs an implementation strategy. At the operational level, however, professional guidelines are required as part of that strategy. This paper introduces a systematic approach for BIM technology adoption based on a case study implementation, and it demonstrates an operational-level guideline for other SME architectural practices.
Towards Exascale Scientific Metadata Management
Advances in technology and computing hardware are enabling scientists from
all areas of science to produce massive amounts of data using large-scale
simulations or observational facilities. In this era of data deluge, effective
coordination between the data production and the analysis phases hinges on the
availability of metadata that describe the scientific datasets. Existing
workflow engines have been capturing a limited form of metadata to provide
provenance information about the identity and lineage of the data. However,
much of the data produced by simulations, experiments, and analyses still need
to be annotated manually in an ad hoc manner by domain scientists. Systematic
and transparent acquisition of rich metadata becomes a crucial prerequisite to
sustain and accelerate the pace of scientific innovation. Yet, ubiquitous and
domain-agnostic metadata management infrastructure that can meet the demands of
extreme-scale science is notable by its absence.
To address this gap in scientific data management research and practice, we
present our vision for an integrated approach that (1) automatically captures
and manipulates information-rich metadata while the data is being produced or
analyzed and (2) stores metadata within each dataset to permeate
metadata-oblivious processes and to query metadata through established and
standardized data access interfaces. We motivate the need for the proposed
integrated approach using applications from plasma physics, climate modeling
and neuroscience, and then discuss research challenges and possible solutions.
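The vision of storing metadata within each dataset, so that even metadata-oblivious processes carry it along, can be sketched with a simple self-describing file layout. The length-prefixed JSON header used here is an illustrative assumption, not the authors' infrastructure.

```python
import json
import os
import struct
import tempfile

# Illustrative self-describing format: a length-prefixed JSON metadata
# header followed by the raw data payload. Any tool that copies the file
# whole carries the metadata with it, even if it never inspects it.

def write_dataset(path, payload, metadata):
    """Write metadata (as a JSON header) and the data payload into one file."""
    header = json.dumps(metadata).encode("utf-8")
    with open(path, "wb") as f:
        f.write(struct.pack(">I", len(header)))  # 4-byte big-endian header length
        f.write(header)
        f.write(payload)

def read_metadata(path):
    """Query the embedded metadata without touching the payload."""
    with open(path, "rb") as f:
        (length,) = struct.unpack(">I", f.read(4))
        return json.loads(f.read(length).decode("utf-8"))

path = os.path.join(tempfile.gettempdir(), "run42.dat")
write_dataset(
    path,
    b"\x00" * 16,  # stand-in for simulation output
    {"experiment": "plasma-run-42", "code": "hypothetical-sim", "dt": 0.01},
)
print(read_metadata(path)["experiment"])  # → plasma-run-42
```

Production formats such as HDF5 or NetCDF realise the same idea through attributes attached to datasets, queryable through their standard access interfaces.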
Performance Testing of Distributed Component Architectures
Performance characteristics, such as response time, throughput and scalability, are key quality attributes of distributed applications. Current practice, however, rarely applies systematic techniques to evaluate performance characteristics. We argue that evaluation of performance is particularly crucial in early development stages, when important architectural choices are made. At first glance, this contradicts the use of testing techniques, which are usually applied towards the end of a project. In this chapter, we assume that many distributed systems are built with middleware technologies, such as the Java 2 Enterprise Edition (J2EE) or the Common Object Request Broker Architecture (CORBA). These provide services and facilities whose implementations are available when architectures are defined. We also note that it is the middleware functionality, such as transaction and persistence services, remote communication primitives and threading policy primitives, that dominates distributed system performance. Drawing on these observations, this chapter presents a novel approach to performance testing of distributed applications. We propose to derive application-specific test cases from architecture designs so that the performance of a distributed application can be tested based on the middleware software at early stages of a development process. We report empirical results that support the viability of the approach.
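The chapter's central idea, expressing an architecture-level use case as a sequence of middleware operations and timing it against the middleware before the application exists, can be sketched as follows. The operation names and sleep-based stubs are illustrative assumptions standing in for real transaction, communication and persistence services, not the chapter's test harness.

```python
import time

# Stubs standing in for middleware services whose cost dominates
# distributed-system performance. The costs are illustrative assumptions.

def begin_transaction():
    time.sleep(0.001)   # stand-in for transaction-service overhead

def remote_invoke():
    time.sleep(0.002)   # stand-in for remote-communication overhead

def commit():
    time.sleep(0.001)   # stand-in for persistence/commit overhead

# A test case derived from the architecture design: one transactional
# interaction involving two remote invocations.
use_case = [begin_transaction, remote_invoke, remote_invoke, commit]

def measure(steps, runs=5):
    """Run the use case several times; return the best-case latency in seconds."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        for step in steps:
            step()
        samples.append(time.perf_counter() - start)
    return min(samples)

latency = measure(use_case)
print(f"use-case latency: {latency * 1000:.1f} ms")
```

Because the middleware stubs here each sleep for their nominal cost, the measured latency can never fall below their sum, mirroring how middleware overhead bounds application performance from below.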
Model-driven performance evaluation for service engineering
Service engineering and service-oriented architecture as an integration and platform technology are a recent approach to software systems integration. Software quality aspects such as performance are of central importance for the integration of heterogeneous, distributed service-based systems. Empirical performance evaluation is a process of measuring and calculating performance metrics of the implemented software. We present an approach for the empirical, model-based performance evaluation of services and service compositions in the context of model-driven service engineering. Temporal databases theory is utilised for the empirical performance evaluation of model-driven developed service systems.
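One simple way to realise a temporal store of empirical performance measurements, in the spirit of the temporal-database approach the abstract mentions, is a timestamped table queried over an interval. The schema, service name and sample values below are illustrative assumptions, not the paper's model.

```python
import sqlite3

# Illustrative sketch: store timestamped response-time observations for a
# service and compute the average over a validity interval.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE metric (service TEXT, observed_at REAL, response_ms REAL)"
)
samples = [
    ("orderService", 10.0, 120.0),
    ("orderService", 20.0, 80.0),
    ("orderService", 30.0, 100.0),
]
conn.executemany("INSERT INTO metric VALUES (?, ?, ?)", samples)

def avg_response(conn, service, t_start, t_end):
    """Average response time of a service over a time interval."""
    row = conn.execute(
        "SELECT AVG(response_ms) FROM metric "
        "WHERE service = ? AND observed_at BETWEEN ? AND ?",
        (service, t_start, t_end),
    ).fetchone()
    return row[0]

print(avg_response(conn, "orderService", 0.0, 25.0))  # → 100.0
```

Restricting the query interval is what makes the store temporal: the same service yields different metric values depending on which validity period is asked about.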
A systematic approach for monitoring and evaluating the construction project progress
A persistent problem in construction is documenting changes which occur in the field and preparing the as-built schedule. In current practice, deviations from planned performance can only be reported after significant time has elapsed, and manual monitoring of construction activities is costly and error-prone. The availability of advanced portable computing, multimedia and wireless communication allows, and even encourages, fundamental changes in many jobsite processes. However, a recent investigation indicated that there is a lack of systematic and automated evaluation and monitoring in construction projects. The aim of this study is to identify techniques that can be used in the construction industry for monitoring and evaluating physical progress, and to establish how current computer technology can be utilised for monitoring actual physical progress at the construction site. This study discusses the results of a questionnaire survey conducted within the Malaysian construction industry and suggests a prototype system, namely Digitalising Construction Monitoring (DCM). The DCM prototype system integrates information from construction drawings, digital images of construction site progress and the planned schedule of work. Using emerging technologies and information systems, DCM re-engineers the traditional practice for monitoring project progress. The system can automatically interpret CAD drawings of buildings, extract data on their structural components and store it in a database. It can also extract engineering information from digital images; when these two databases are compared, the percentage of progress can be calculated and viewed in Microsoft Project automatically. The application of the DCM system for monitoring project progress enables project management teams to better track and control the productivity and quality of construction projects. The use of DCM can help resident engineers, construction managers and site engineers in monitoring and evaluating project performance. This model will improve the decision-making process and provide a better mechanism for advanced project management.
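The core calculation a DCM-style system performs, comparing the components planned in CAD drawings against those observed as built in site images, can be sketched as a simple percentage over the two databases. The component identifiers below are hypothetical.

```python
# Illustrative sketch of DCM-style progress calculation: components
# extracted from CAD drawings (planned) versus components detected in
# site images (built). Identifiers are hypothetical stand-ins.

planned = {"col-01", "col-02", "col-03", "beam-01", "beam-02"}
built = {"col-01", "col-02", "beam-01"}

def progress_percent(planned, built):
    """Share of planned structural components observed as built."""
    if not planned:
        return 100.0
    return 100.0 * len(planned & built) / len(planned)

print(f"{progress_percent(planned, built):.0f}% complete")  # → 60% complete
```

A figure computed this way per activity could then be written back into the scheduling tool (Microsoft Project, in the abstract's setup) as each activity's percent-complete value.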