Complexity in Designing Energy Efficient Buildings: Towards Understanding Decision Networks in Design
The most important decisions for designing energy efficient buildings are made in the early stages of design. Designing is a complex interdisciplinary task, and energy efficiency requirements push its boundaries even further. This study analyzes the level of complexity in energy efficient building design and possible remedies for managing or reducing that complexity. Methodologically, we used the design structure matrix to map the current design tasks, and a hierarchical decomposition of lifecycle analysis to visualize the interdependency of design tasks and design disciplines and how changes propagate throughout the system. The current design of energy efficient buildings is a linear, one-shot approach with no iterations planned into the process, and the management techniques in use do not help to reduce the complexity.
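The change-propagation idea behind a design structure matrix (DSM) can be sketched as a reachability computation: a minimal illustration, where the task names and couplings are hypothetical, not taken from the study.

```python
import numpy as np

# Hypothetical DSM for four design tasks; entry [i][j] = 1 means task i
# depends on an output of task j, so a change in j propagates to i.
tasks = ["envelope", "HVAC", "lighting", "lifecycle analysis"]
dsm = np.array([
    [0, 1, 0, 0],   # envelope depends on HVAC sizing
    [1, 0, 0, 0],   # HVAC depends on envelope loads
    [1, 0, 0, 0],   # lighting depends on envelope (daylighting)
    [1, 1, 1, 0],   # lifecycle analysis depends on all upstream tasks
])

def affected_tasks(dsm, changed):
    """Return the task indices reached by a change, via transitive closure."""
    n = len(dsm)
    reached = {changed}
    frontier = {changed}
    while frontier:
        nxt = {i for i in range(n)
               for j in frontier if dsm[i][j] and i not in reached}
        reached |= nxt
        frontier = nxt
    return reached - {changed}

# A change to the envelope ripples to every downstream discipline.
print(sorted(tasks[i] for i in affected_tasks(dsm, tasks.index("envelope"))))
```

Visualizing which rows light up for a given change is essentially what the paper's DSM-based mapping does at the scale of a real building project.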
A batch scheduler with high level components
In this article we present the design choices and the evaluation of a batch scheduler for large clusters, named OAR. This batch scheduler is based upon an original design that emphasizes low software complexity by using high-level tools. The global architecture is built upon the scripting language Perl and the relational database engine MySQL. The goal of the OAR project is to prove that it is possible today to build a complex resource management system using such tools without sacrificing efficiency and scalability. Currently, our system offers most of the important features implemented by other batch schedulers, such as priority scheduling (by queues), reservations, backfilling and some global computing support. Despite the use of high-level tools, our experiments show that our system performs close to other systems. Furthermore, OAR is currently used to manage 700 nodes (a metropolitan GRID) and has shown good efficiency and robustness.
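The combination of priority queues and backfilling mentioned above can be sketched in a few lines: a greedy pass in priority order that lets small jobs slip into gaps left by jobs too large to start. This is an illustrative toy policy in the spirit of such schedulers, not OAR's actual algorithm (which also handles reservations and walltimes).

```python
# Minimal sketch of priority scheduling with a simple backfill pass.
# Job fields and the policy are illustrative assumptions.

def schedule(jobs, total_nodes):
    """jobs: dicts with 'name', 'priority' (lower = higher), 'nodes'.
    Returns names launched now, backfilling around jobs that don't fit."""
    order = sorted(jobs, key=lambda j: j["priority"])
    free = total_nodes
    launched = []
    for job in order:
        if job["nodes"] <= free:      # backfill: skip jobs too big to start
            free -= job["nodes"]
            launched.append(job["name"])
    return launched

jobs = [
    {"name": "big",   "priority": 0, "nodes": 6},
    {"name": "wide",  "priority": 1, "nodes": 5},  # blocked: only 2 nodes left
    {"name": "small", "priority": 2, "nodes": 2},  # backfilled into the gap
]
print(schedule(jobs, 8))  # ['big', 'small']
```

A production scheduler would additionally check that a backfilled job cannot delay the blocked higher-priority job, which is what walltime estimates are for.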
Estimation of Defect Proneness Using Design Complexity Measurements in Object-Oriented Software
Software engineering continuously faces the challenges of the growing complexity of software packages and the increasing volume of data on defects and drawbacks from the software production process. This calls for inventions and methods that can enable more reusable, reliable, easily maintainable and high-quality software systems, with deeper control over the software generation process. Quality and productivity are indeed the two most important parameters for controlling any industrial process, and implementing a successful control system requires some means of measurement. Software metrics play an important role in the management aspects of the software development process, such as better planning, assessment of improvements, resource allocation and reduction of unpredictability. Early detection of potential problems, productivity evaluation and the evaluation of external quality factors such as reusability, maintainability, defect proneness and complexity are of utmost importance. Here we discuss the application of CK metrics and an estimation model to predict external quality parameters, optimizing the design and production processes for desired levels of quality. Defect proneness of an object-oriented system is estimated at design level using a novel methodology that models the relationship between CK metrics and a defect-proneness index. A multifunctional estimation approach captures the correlation between CK metrics and the defect-proneness level of software modules. Comment: 5 pages, 1 figure
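A model of the kind the abstract describes, mapping CK metric values to a defect-proneness index, is often a logistic regression. The sketch below is a hedged illustration: the coefficients are invented placeholders, not the paper's fitted values.

```python
import math

# Illustrative logistic model from CK metrics to a defect-proneness
# index. COEFFS and INTERCEPT are hypothetical, for demonstration only.
COEFFS = {"WMC": 0.05, "DIT": 0.10, "NOC": -0.02,
          "CBO": 0.08, "RFC": 0.03, "LCOM": 0.01}
INTERCEPT = -3.0

def defect_proneness(metrics):
    """Return a probability-like index in (0, 1) from CK metric values."""
    z = INTERCEPT + sum(COEFFS[m] * v for m, v in metrics.items())
    return 1.0 / (1.0 + math.exp(-z))

simple_class  = {"WMC": 5,  "DIT": 1, "NOC": 0, "CBO": 2,  "RFC": 10, "LCOM": 3}
complex_class = {"WMC": 40, "DIT": 4, "NOC": 2, "CBO": 15, "RFC": 60, "LCOM": 30}
print(defect_proneness(simple_class), defect_proneness(complex_class))
```

In practice the coefficients would be fitted against historical defect data; the point of the sketch is only the shape of the mapping.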
Prognostics: Design, Implementation, and Challenges
Prognostics is an essential part of condition-based maintenance (CBM), described as predicting the remaining useful life
(RUL) of a system. It is also a key technology for an integrated vehicle health management (IVHM) system that leads
to improved safety and reliability. A vast amount of research has been presented in the literature to develop prognostics
models that are able to predict a system’s RUL. These models can be broadly categorised into experience-based models,
data-driven models and physics-based models, and careful consideration needs to be given to selecting which prognostics model to take forward for each real application. Currently, developing reliable prognostics models in real life is challenging for various reasons, such as the design complexity of a system, the high uncertainty and its propagation through the degradation process, system-level prognostics, the evaluation framework and a lack of prognostics standards. This paper aims to bring forth the challenges and opportunities in developing prognostics models for complex systems and to make researchers aware of them.
A Method for Visualizing the Structural Complexity of Organizational Architectures
To achieve a high level of performance and efficiency, contemporary aerospace systems must become increasingly complex. While complexity management traditionally focuses on a product’s components and their interconnectedness, organizational representation in complexity analysis is just as essential. This thesis addresses the organizational aspect of complexity through an Organizational Complexity Metric (OCM) to aid complexity management. The OCM extends Sinha’s structural complexity metric for product architectures into a metric that can be applied to organizations. Using nested numerical design structure matrices (DSMs), a compact visual representation of organizational complexity was developed. Within the nested numerical DSM are existing organizational datasets used to quantify the complexity of both organizational system components and their interfaces. The OCM was applied to a hypothetical system example, as well as an existing aerospace organizational architecture. In developing the OCM, this thesis assumed that each dataset was collected in a statistically sufficient manner and correlates reasonably with system complexity. The thesis recognizes the lack of complete human representation and aims to provide a platform for expansion: before a true organizational complexity metric can be applied to real systems, additional human factors should be incorporated. These limitations differ from organization to organization and should be taken into account before implementation in a working system. The visualization of organizational complexity uses a color gradient to show the relative complexity density of different parts of the organization.
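Sinha's structural complexity metric, which the OCM builds on, combines component complexity, interface complexity and a topology term based on the graph energy of the binary DSM. A minimal sketch, with illustrative values for the per-component and per-interface complexities:

```python
import numpy as np

# Sketch of Sinha's structural complexity metric: C = C1 + C2 * C3,
# where C1 sums component complexities (alpha), C2 sums interface
# complexities (beta) over the binary DSM A, and C3 = E(A)/n is the
# graph energy (sum of singular values) per component.
def structural_complexity(alpha, beta, A):
    n = len(alpha)
    C1 = alpha.sum()                                    # component term
    C2 = (beta * A).sum()                               # interface term
    C3 = np.linalg.svd(A, compute_uv=False).sum() / n   # topology term
    return C1 + C2 * C3

A = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]], dtype=float)      # binary DSM of interfaces
alpha = np.array([1.0, 1.0, 1.0])           # per-component complexity
beta  = np.full((3, 3), 0.5)                # per-interface complexity
print(round(structural_complexity(alpha, beta, A), 2))
```

The thesis's nested numerical DSMs effectively supply the alpha and beta values from organizational datasets instead of engineering judgment.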
From MARTE to Reconfigurable NoCs: A model driven design methodology
Due to the continuous exponential rise in SoC design complexity, there is a critical need for new seamless methodologies and tools to handle the SoC co-design aspects. We address this issue and propose a novel SoC co-design methodology based on Model Driven Engineering and the MARTE (Modeling and Analysis of Real-Time and Embedded Systems) standard proposed by the Object Management Group, in order to raise the design abstraction levels. Extensions of this standard have enabled us to move from high-level specifications to execution platforms such as reconfigurable FPGAs. In this paper, we present a high-level modeling approach that targets modern Network-on-Chip systems. The overall objective is to perform system modeling at a high abstraction level expressed in the Unified Modeling Language (UML), and afterwards to transform these high-level models into detailed, enriched lower-level models in order to automatically generate the necessary code for final FPGA synthesis.
Modelling and Co-simulation of hybrid vehicles: A thermal management perspective
Thermal management plays a vital role in modern vehicle design and delivery. It enables the thermal analysis and optimisation of energy distribution to improve performance, increase efficiency and reduce emissions. Due to the complexity of the overall vehicle system, it is necessary to use a combination of simulation tools, so co-simulation is at the centre of the design and analysis of electric and hybrid vehicles. For a holistic vehicle simulation to be realised, the simulation environment must support many physical domains. In this paper, a wide variety of system designs for modelling vehicle thermal performance are reviewed, providing an overview of the considerations needed to develop a cost-effective tool to evaluate fuel consumption and emissions across dynamic drive-cycles and under a range of weather conditions. The virtual models reviewed in this paper provide tools for component-level, system-level and control design, analysis and optimisation. This paper covers the latest techniques for overall vehicle model development and software integration of multi-domain subsystems from a thermal management view, and discusses the challenges presented for future studies.
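The essence of co-simulation is that separate solvers advance in lock-step and exchange boundary values at each communication step. A toy illustration with two lumped thermal models standing in for separate tools; all parameters are invented for the sketch.

```python
# Hedged sketch of a co-simulation loop: an "engine" model and a
# "coolant loop" model advance together, exchanging a heat flux each
# step, as separate simulation tools would. Coefficients are illustrative.
def cosimulate(steps, dt=1.0):
    T_engine, T_coolant, T_ambient = 90.0, 20.0, 20.0   # degrees C
    for _ in range(steps):
        # Solver 1: engine heated by combustion, cooled by coolant flux.
        q_out = 0.5 * (T_engine - T_coolant)            # exchanged value
        T_engine += dt * (2.0 - 0.1 * q_out)
        # Solver 2: coolant absorbs that flux, rejects heat to ambient.
        T_coolant += dt * (0.05 * q_out - 0.02 * (T_coolant - T_ambient))
    return T_engine, T_coolant

T_e, T_c = cosimulate(2000)
print(round(T_e, 1), round(T_c, 1))   # settles at the analytic equilibrium
```

Real co-simulation frameworks (e.g. FMI-based couplings) add variable communication intervals and iteration between solvers to control the coupling error that this explicit exchange introduces.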
A graph-based aspect interference detection approach for UML-based aspect-oriented models
Aspect Oriented Modeling (AOM) techniques facilitate separate modeling of concerns and allow for a more flexible composition of these than traditional modeling techniques. While this improves the understandability of each submodel, automated tool support is required in order to reason about the behavior of the composed system and to detect conflicts among submodels. Current techniques for conflict detection among aspects generally have at least one of the following weaknesses: they require the abstract semantics to be modeled manually for each system, or they derive the system semantics from code, assuming one specific aspect-oriented language. Defining an extra semantics model for verification bears the risk of inconsistencies between the actual and the verified design; verifying only at implementation level hinders fixing errors in earlier phases. We propose a technique for fully automatic detection of conflicts between aspects at the model level; more specifically, our approach works on UML models with an extension for modeling pointcuts and advice. As back-end we use a graph-based model checker, for which we have defined an operational semantics of UML diagrams, pointcuts and advice. In order to simulate the system, we automatically derive a graph model from the diagrams. The result is another graph, which represents all possible program executions and which can be verified against a declarative specification of invariants.
To demonstrate our approach, we discuss a UML-based AOM model of the "Crisis Management System" and a possible design and evolution scenario. The complexity of the system makes conflicts among composed aspects hard to detect: already in the case of two simulated aspects, the state space contains 623 different states and 9 different execution paths. Nevertheless, when the right pruning methods are used, the state space grows only linearly with the number of aspects; therefore, the automatic analysis scales.
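The core of such a model checker is exhaustive state-space exploration with pruning, i.e. never revisiting a state already seen. A toy sketch with an invented transition function standing in for the UML/advice semantics:

```python
from collections import deque

# Hedged sketch of state-space exploration as a graph-based model
# checker performs it: breadth-first search over states, with a visited
# set as the pruning that keeps the explored space small.
def explore(initial, transitions):
    """Return all states reachable from `initial` under `transitions`."""
    seen, queue = {initial}, deque([initial])
    while queue:
        state = queue.popleft()
        for nxt in transitions(state):
            if nxt not in seen:            # pruning: never revisit a state
                seen.add(nxt)
                queue.append(nxt)
    return seen

# Toy system: each of three "aspects" toggles one bit of the state.
def transitions(state):
    return [tuple(b ^ (i == k) for i, b in enumerate(state)) for k in range(3)]

print(len(explore((0, 0, 0), transitions)))  # 8 reachable states
```

Invariants would then be checked against every state in the returned set; the pruning is what keeps growth manageable as aspects are added.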
Adaptive Performance and Power Management in Distributed Computing Systems
The complexity of distributed computing systems has raised two unprecedented challenges for system management. First, various customers need to be assured by meeting their required service-level agreements, such as response time and throughput. Second, system power consumption must be controlled in order to avoid system failures caused by power capacity overload, or system overheating due to increasingly high server density. However, most existing work either relies on open-loop estimations based on off-line profiled system models, or proceeds in a more ad hoc fashion that requires exhaustive iterations of tuning and testing, or oversimplifies the problem by ignoring the coupling between different system characteristics (i.e., response time and throughput, or the power consumption of different servers). As a result, the majority of previous work lacks rigorous guarantees on the performance and power consumption of computing systems, and may degrade overall system performance. In this thesis, we extensively study adaptive performance/power management and power-efficient performance management for distributed computing systems such as information dissemination systems, power grid management systems and data centers, by proposing Multiple-Input-Multiple-Output (MIMO) control and hierarchical designs based on feedback control theory. For adaptive performance management, we design an integrated solution that controls both the average response time and CPU utilization in an example information dissemination system, to achieve bounded response time for high-priority information and maximized system throughput. In addition, we design a hierarchical control solution that guarantees the deadlines of real-time tasks in power grid computing by grouping them based on their characteristics.
For adaptive power management, we design MIMO optimal control solutions for power control at the cluster and server level and a hierarchical solution for large-scale data centers. Our MIMO control design can capture the coupling among different system characteristics, while our hierarchical design can coordinate controllers at different levels. For power-efficient performance management, we discuss a two-layer coordinated management solution for virtualized data centers. Experimental results in both physical testbeds and simulations demonstrate that all the solutions outperform state-of-the-art management schemes by significantly improving overall system performance.
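The feedback-control idea underlying this kind of management can be sketched with a single-loop proportional-integral (PI) controller steering response time toward a setpoint. The gains, the "CPU budget" actuator and the toy plant model below are illustrative assumptions, not the thesis's MIMO design.

```python
# Hedged sketch of closed-loop performance management: a PI controller
# adjusts an abstract CPU budget so that measured response time tracks a
# service-level setpoint. All numbers are illustrative.
def make_pi_controller(setpoint, kp, ki):
    integral = 0.0
    def control(measured):
        nonlocal integral
        error = measured - setpoint          # positive when too slow
        integral += error
        return kp * error + ki * integral    # budget adjustment requested
    return control

# Toy plant: response time shrinks as the CPU budget grows.
budget = 1.0
controller = make_pi_controller(setpoint=100.0, kp=0.01, ki=0.002)
for _ in range(50):
    response_time = 400.0 / budget           # ms, illustrative model
    budget = max(0.1, budget + controller(response_time))
print(round(400.0 / budget))                 # settles near the 100 ms setpoint
```

The MIMO designs in the thesis generalize this loop: multiple measurements (response time, throughput, power) and multiple actuators are coupled through one controller instead of independent single loops, which is what lets them capture cross-metric coupling.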