Metadata And Data Management In High Performance File And Storage Systems
With the advent of emerging e-Science applications, today's scientific research increasingly relies on petascale-and-beyond computing over large data sets of the same magnitude. While the computational power of supercomputers has recently entered the era of petascale, the performance of their storage systems lags behind by many orders of magnitude. This places an imperative demand on revolutionizing their underlying I/O systems, in which the management of both metadata and data is deemed to have significant performance implications. Prefetching/caching and data locality awareness optimizations, as conventional and effective management techniques for metadata and data I/O performance enhancement, still play crucial roles in current parallel and distributed file systems. In this study, we examine the limitations of existing prefetching/caching techniques and explore the untapped potential of data locality optimization techniques in the new era of petascale computing. For metadata I/O access, we propose a novel weighted-graph-based prefetching technique, built on both direct and indirect successor relationships, to reap performance benefits from prefetching specifically for clustered metadata servers, an arrangement envisioned as necessary for petabyte-scale distributed storage systems. For data I/O access, we design and implement Segment-structured On-disk data Grouping and Prefetching (SOGP), a combined prefetching and data placement technique to boost local data read performance for parallel file systems, especially for applications with partially overlapped access patterns. A high-performance local I/O software package from the SOGP work for the Parallel Virtual File System, comprising about 2,000 lines of C, was released to Argonne National Laboratory in 2007 for potential integration into production
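The weighted-graph prefetching idea described above can be sketched in a few lines; this is a minimal illustration under assumed conventions (the class name, the two-hop decay factor, and the top-k selection are illustrative choices, not the paper's implementation):

```python
from collections import defaultdict

class SuccessorGraphPrefetcher:
    """Weighted successor graph: edge weights count how often one
    metadata object is accessed immediately after another. Predictions
    rank direct successors plus two-hop (indirect) successors, the
    latter scaled down by a decay factor."""

    def __init__(self, decay=0.5):
        self.weights = defaultdict(lambda: defaultdict(int))
        self.prev = None
        self.decay = decay  # weight applied to indirect successors

    def record(self, obj):
        """Observe one metadata access and update edge weights."""
        if self.prev is not None:
            self.weights[self.prev][obj] += 1
        self.prev = obj

    def predict(self, obj, k=2):
        """Return up to k objects to prefetch after accessing obj."""
        scores = defaultdict(float)
        for succ, w in self.weights[obj].items():
            scores[succ] += w  # direct successor relationship
            for succ2, w2 in self.weights[succ].items():
                scores[succ2] += self.decay * w * w2  # indirect successor
        scores.pop(obj, None)
        return [o for o, _ in sorted(scores.items(),
                                     key=lambda kv: -kv[1])[:k]]
```

On a trace such as `a b c a b c a b d`, predicting after `a` surfaces both the direct successor `b` and the indirect successor `c`, which is the benefit of keeping two-hop edges in the graph.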
Coordination in the Decentralized Assembly System with Dual Supply Modes
This paper investigates a decentralized assembly system that consists of one assembler and two independent suppliers: one supplier is perfectly reliable in production, while the other is subject to yield uncertainty. Facing random market demand, the assembler has to order components from one supplier in advance, while requiring the other supplier to deliver components under the VMI mode. We construct a Nash game between the supplier and the assembler to derive their equilibrium procurement/production strategies. The results show that the channel's performance is severely undermined both by the decentralization between players and by the combination of the two supply modes. Benchmarking against the centralized system, we propose an advance payment contract that perfectly coordinates supply chain performance. Numerical examples provide managerial insights on the supply mode comparison and sensitivity analysis
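The interaction of random demand with random yield that drives this model can be illustrated with a tiny Monte Carlo sketch. This is purely illustrative: the paper's analysis is analytical, and the uniform demand and yield distributions below are assumptions, not taken from the paper:

```python
import random

def expected_sales(order_qty, n_trials=50_000, seed=0):
    """Illustrative Monte Carlo sketch (assumed distributions, not the
    paper's model): assembled units are limited by the reliable
    supplier's delivery, the risky supplier's realized yield, and
    market demand. Demand ~ Uniform(50, 150); yield rate ~ Uniform(0.6, 1.0)."""
    random.seed(seed)
    total = 0.0
    for _ in range(n_trials):
        demand = random.uniform(50, 150)
        yield_rate = random.uniform(0.6, 1.0)
        # component matching: assembly is capped by the scarcer component
        assembled = min(order_qty, order_qty * yield_rate)
        total += min(assembled, demand)
    return total / n_trials
```

Even this crude sketch shows why the assembler's expected sales fall short of the demand mean: the risky supplier's yield caps assembly, which is the loss that coordination contracts such as the advance payment scheme aim to recover.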
Driving automation: Learning from aviation about design philosophies
Full vehicle automation is predicted to be on British roads by 2030 (Walker et al., 2001). However, experience in aviation gives some cause for concern about the 'drive-by-wire' car (Stanton and Marsden, 1996). Two different philosophies have emerged in aviation for dealing with the human factor: hard vs. soft automation, depending on whether the computer or the pilot has ultimate authority (Hughes and Dornheim, 1995). This paper speculates on whether hard or soft automation provides the better solution for road vehicles, and considers an alternative design philosophy for vehicles of the future based on coordination and cooperation
Development of a knowledge-based system for the repair and maintenance of concrete structures
PhD Thesis
Information Technology (IT) can exploit strategic opportunities for new ways of
facilitating information and data exchange and the exchange of expert and specialist
opinions in any field of engineering. Knowledge-Based Systems are sophisticated
computer programs that store expert knowledge on a specific subject and are applied to a
broad range of engineering problems. Integrated Database applications have facilitated
the essential capability of storing data to overcome an increasing information malaise.
Integrating these areas of Information Technology (IT) can be used to bring a group of
experts in any field of engineering closer together by allowing them to communicate and
exchange information and opinions.
The central feature of this research study is the integration of these hitherto separate areas
of Information Technology (IT). In this thesis an adaptable Graphic User Interface
Centred application comprising a Knowledge-Based Expert System (DEMAREC-EXPERT),
a Database Management System (REPCON) and an Evaluation program
(ECON) alongside visualisation technologies is developed to produce an innovative
platform which will facilitate and encourage the development of knowledge in concrete
repair. Diagnosis, Evaluation, MAintenance and REpair of Concrete structures
(DEMAREC) is a flexible application which can be used in four modes: Education,
Diagnostic, Evaluation and Evolution. In the educational mode an inexperienced user can
develop a better understanding of concrete repair technology by navigating through
a database of textual and pictorial data.
In the diagnostic mode, pictures and descriptive information taken from the database and
performance of the expert system (DEMAREC-EXPERT) are used in a way that makes
problem solving and decision making easier. The DEMAREC-EXPERT system is
coupled to REPCON (as an independent database) in order to provide the user with
recommendations on the best course of action for maintenance and on the selection
of materials and methods for the repair of concrete.
In the evaluation mode the conditions observed are described in unambiguous terms
that enable the user to take engineering and management actions for the
repair and maintenance of the structure.
In the evolution mode of the application, the nature of distress, repair and maintenance of
concrete structures within the extent of the database management system has been
assessed. The new methodology of data/user evaluation could have wider implications in
many knowledge rich areas of expertise. The benefit of using REPCON lies in the
enhanced levels of confidence which can be attributed to the data and to the
contribution of that data. Effectively, REPCON is designed to model a true
evolution of a field of expertise but allows that expertise to move on in a
faster and more structured manner.
This research has wider implications than within the realm of concrete repair. The
methodology described in this thesis is developed to provide technology transfer of
information from experts and specialists to other practitioners and vice versa, and it
provides a common forum for communication and exchange of information between them. Indeed,
one of the strengths of the system is the way in which it allows the promotion and
relegation of knowledge according to the opinion of users of different levels of ability
from expert to novice. It creates a flexible environment in which an inexperienced user
can develop his knowledge of the maintenance and repair of concrete structures. It is
explained how experts and specialists can contribute their experience and knowledge
towards improving and evolving the problem-solving capability of the application
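The promotion and relegation of knowledge according to user ability, mentioned above, can be sketched as ability-weighted voting. This is an assumed mechanism for illustration only; the thesis does not specify this scheme, and the ability levels and weights below are hypothetical:

```python
def review_entry(score, votes, weights=None):
    """Illustrative sketch (assumed mechanism, not from the thesis) of
    ability-weighted promotion/relegation: each vote is a pair
    (ability, direction) with direction +1 to promote or -1 to relegate,
    and higher-ability users move an entry's confidence score more."""
    if weights is None:
        weights = {"novice": 1, "practitioner": 2, "specialist": 4, "expert": 8}
    for ability, direction in votes:
        score += weights[ability] * direction
    return score
```

Under this sketch, an expert's promotion outweighs a novice's relegation, so well-supported repair knowledge rises while weakly supported entries sink, which matches the evolution behaviour the thesis describes.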
Distribution network optimisation for an active network management system
The connection of Distributed Generators (DGs) to a distribution network raises technical concerns for Distribution Network Operators (DNOs), including power flow management, increased losses and voltage management problems. An Active Network Management System can provide monitoring and control of the distribution network as well as providing the infrastructure and technology for full integration of DGs into the distribution network. The Optimal Power Flow (OPF) method is a valuable tool for providing optimal control solutions for active network management system applications.
The research presented here has concentrated on the development of a multi-objective OPF to provide power flow management, voltage control solutions and network optimisation strategies. The OPF has been shown to provide accurate solutions for a variety of network topologies. Time-series of load and generation data can be applied to the OPF in a loop, generating optimal network solutions that maintain the network within thermal and voltage limits. The OPF incorporates not only DG real power output maximisation, but also network loss minimisation and minimisation of the dispatch of DG reactive power. This investigation uses a direct Interior Point (IP) method as the solution methodology, which is fast and converges in polynomial time. Each objective function is assigned a weighting factor, making it possible to favour one objective function and ignore the others. Contributions to enhance the performance of the IP OPF algorithm include a new generic barrier parameter formulation and a new swing bus formulation to model energy export/import in the main optimisation routine.
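The weighted-sum scalarisation of the three objectives can be sketched as follows; the function name, sign conventions and default weights are illustrative assumptions rather than the thesis's formulation:

```python
def weighted_objective(dg_output_mw, losses_mw, dg_reactive_mvar,
                       w_gen=1.0, w_loss=1.0, w_q=1.0):
    """Sketch of a weighted-sum multi-objective OPF cost (assumed form):
    maximise DG real power output while minimising network losses and
    the magnitude of DG reactive power dispatch. Setting a weight to
    zero removes that objective, which is how one objective can be
    favoured and the others ignored."""
    return (-w_gen * dg_output_mw            # maximisation enters with a minus sign
            + w_loss * losses_mw             # minimise real power losses
            + w_q * abs(dg_reactive_mvar))   # minimise reactive dispatch
```

An IP solver would minimise this scalar over the network's equality (power balance) and inequality (thermal, voltage) constraints; the sketch only shows how the weighting factors trade the three objectives against each other.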
A Terminal Voltage Regulator Mode (TVRM) and a Power Factor Regulation Mode (PFRM) for DG were incorporated in the main optimisation routine. The main motivation is to compare these two decentralised DG control methods in terms of achieving the maximum DG real power generation. TVRM and PFRM operation are compared, in terms of the DG capacity achieved, with the optimisation results obtained from centralised dispatch, which produces the optimum overall network solution. Suitable values of the droop and of the local voltage regulator dead-bands were determined for particular DGs. Furthermore, the effect of these decentralised DG control methods on distribution network losses is considered as a measure of the financial implications from a DNO's perspective
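The droop-with-dead-band behaviour underlying a TVRM-style local voltage regulator can be sketched as below; all parameter values (reference voltage, dead-band width, droop gain, reactive limit) are illustrative assumptions, not the values determined in the thesis:

```python
def droop_q_setpoint(v_pu, v_ref=1.0, deadband=0.01, droop=0.04, q_max=1.0):
    """Illustrative droop voltage regulator with dead-band (assumed
    parameters). Within the dead-band no reactive power is dispatched;
    outside it, Q varies linearly with the voltage error, clamped to
    +/- q_max (all quantities in per-unit)."""
    err = v_pu - v_ref
    if abs(err) <= deadband:
        return 0.0  # inside the dead-band: no corrective action
    # measure the error from the dead-band edge, then apply the droop gain
    signed = err - deadband if err > 0 else err + deadband
    q = -signed / droop  # high voltage -> absorb reactive power (negative Q)
    return max(-q_max, min(q_max, q))
```

Widening the dead-band reduces regulator activity (and interaction with other controllers) at the cost of looser voltage control, which is the trade-off behind tuning the droop and dead-band values for particular DGs.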
Ethics and taxation: a cross-national comparison of UK and Turkish firms
This paper investigates responses to tax-related ethical issues facing business