Creating Responsive Information Systems with the Help of SSADM
In this paper, a research programme is outlined. First, the concept of responsive information systems is defined, and the notions of capacity planning and software performance engineering are clarified. Second, the purpose of the proposed capacity-planning methodology, its interface to information systems analysis and development methodologies (SSADM), and the advantages of a knowledge-based approach are discussed. The interfaces to CASE tools, more precisely to data dictionaries or repositories (IRDS), are examined in the context of a particular systems analysis and design methodology (e.g. SSADM).
Resilience markers for safer systems and organisations
If computer systems are to be designed to foster resilient performance, it is important to be able to identify contributors to resilience. The emerging practice of Resilience Engineering has identified that people are still a primary source of resilience, and that the design of distributed systems should provide ways of helping people and organisations to cope with complexity. Although resilience has been identified as a desired property, researchers and practitioners do not have a clear understanding of what manifestations of resilience look like. This paper discusses some examples of strategies that people can adopt that improve the resilience of a system. Critically, analysis reveals that the generation of these strategies is only possible if the system facilitates them. As an example, this paper discusses practices, such as reflection, that are known to encourage resilient behaviour in people. Reflection allows systems to better prepare for oncoming demands. We show that contributors to the practice of reflection manifest themselves at different levels of abstraction: from individual strategies to practices in, for example, control room environments. The analysis of interaction at these levels enables resilient properties of a system to be "seen", so that systems can be designed to explicitly support them. We then present an analysis of resilience at an organisational level within the nuclear domain. This highlights some of the challenges facing the Resilience Engineering approach and the need for a collective language to articulate knowledge of resilient practices across domains.
Tuning the Level of Concurrency in Software Transactional Memory: An Overview of Recent Analytical, Machine Learning and Mixed Approaches
Synchronization transparency offered by Software Transactional Memory (STM) must not come at the expense of run-time efficiency, thus demanding from the STM designer the inclusion of mechanisms properly oriented to performance and other quality indexes. In particular, one core issue to cope with in STM is exploiting parallelism while avoiding thrashing phenomena due to excessive transaction rollbacks, caused by excessively high levels of contention on logical resources, namely concurrently accessed data portions. One means to address run-time efficiency consists in dynamically determining the best-suited level of concurrency (number of threads) to be employed for running the application (or specific application phases) on top of the STM layer. For too low levels of concurrency, parallelism can be hampered. Conversely, over-dimensioning the concurrency level may give rise to the aforementioned thrashing phenomena caused by excessive data contention, an aspect that also reduces energy efficiency. In this chapter we overview a set of recent techniques aimed at building "application-specific" performance models that can be exploited to dynamically tune the level of concurrency to the best-suited value. Although they share some base concepts in modeling system performance vs. the degree of concurrency, these techniques rely on disparate methods, such as machine learning or analytic methods (or combinations of the two), and achieve different tradeoffs between the precision of the performance model and the latency of model instantiation. Implications of the different tradeoffs in real-life scenarios are also discussed.
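The tuning scheme the abstract describes can be sketched as follows. This is a minimal illustration, not the surveyed techniques themselves: the function names and the synthetic throughput model (gains from added threads offset by rollback-induced wasted work) are assumptions for demonstration, standing in for an application-specific model learned or derived at run time.

```python
def predicted_throughput(threads, rollback_factor=0.12):
    """Toy performance model (an assumption, not from the chapter):
    useful work per unit time at a given concurrency level. Each extra
    thread adds parallelism but raises contention, so the probability
    that a transaction commits without rollback decays with the
    number of concurrent threads."""
    commit_prob = (1.0 - rollback_factor) ** (threads - 1)
    return threads * commit_prob

def tune_concurrency(model, max_threads=32):
    """Select the concurrency level the model predicts is best-suited,
    scanning candidate thread counts and keeping the argmax."""
    return max(range(1, max_threads + 1), key=model)

best = tune_concurrency(predicted_throughput)
print(best)
```

In a real STM runtime the model would be instantiated per application (or per phase) from profiling data, and the scan replaced by whatever analytic or learned predictor is in use; the tradeoff the chapter discusses is precisely between that predictor's accuracy and how quickly it can be instantiated.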
Strategic Human Capital Management: NRC Could Better Manage the Size and Composition of Its Workforce by Further Incorporating Leading Practices
[Excerpt] After the passage of the Energy Policy Act of 2005, which included tax incentives for nuclear energy, NRC significantly expanded its workforce to meet the demands of an anticipated increase in workload that ultimately did not occur. More recently, a forecast for reduced growth in the nuclear industry prompted NRC to develop plans for changing its structure and workforce to better respond to changes in the nuclear industry. Strategic human capital planning is one of several actions the agency is taking.
The explanatory statement accompanying the Consolidated Appropriations Act for fiscal year 2016 included a provision for GAO to report on NRC's workforce management. GAO examined NRC's strategic human capital management efforts and the extent to which these efforts incorporate leading practices.
GAO reviewed NRC's strategic workforce plan and other related documents and interviewed knowledgeable NRC officials.
- …