Mapping General System Characteristics to Non-Functional Requirements
The function point analysis (FPA) method is the preferred estimation scheme for project managers to determine size, effort, schedule, resource loading, and other such parameters. The FPA method by the International Function Point Users Group (IFPUG) captures the critical implementation features of an application through fourteen general system characteristics. However, non-functional requirements (NFRs) such as functionality, reliability, efficiency, usability, maintainability, and portability have not been included in the FPA estimation method. This paper discusses some of the NFRs and tries to determine a degree of influence for each of them. An attempt to factor the NFRs into estimation has been made. This approach needs to be validated with data collection and analysis. (Comment: 5 pages)
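For context, the standard IFPUG adjustment already works by summing degrees of influence: each of the fourteen GSCs is rated 0 to 5, and the value adjustment factor is VAF = 0.65 + 0.01 × ΣDI. A minimal sketch of that computation follows, with a hypothetical extra additive term for NFR degrees of influence; the extension is an illustration of the paper's idea, not its actual scheme:

```python
# Sketch of the IFPUG Value Adjustment Factor (VAF) computation, plus a
# HYPOTHETICAL extension for NFR degrees of influence (not the paper's
# actual formulation).

def value_adjustment_factor(gsc_degrees):
    """IFPUG VAF from the 14 GSC degrees of influence (each rated 0-5)."""
    assert len(gsc_degrees) == 14
    assert all(0 <= d <= 5 for d in gsc_degrees)
    return 0.65 + 0.01 * sum(gsc_degrees)

def adjusted_function_points(ufp, gsc_degrees, nfr_degrees=()):
    """Adjusted FP = UFP * VAF; the NFR degrees (0-5 each) are an assumed
    extra additive influence, mirroring how GSCs are factored in."""
    vaf = value_adjustment_factor(gsc_degrees) + 0.01 * sum(nfr_degrees)
    return ufp * vaf

# Example: 14 GSCs all rated 3 -> VAF = 0.65 + 0.42 = 1.07
fp = adjusted_function_points(100, [3] * 14)
print(round(fp, 2))  # 107.0 (before any NFR influence)
```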
Organisational learning model for utility asset management using knowledge engineering approach.
In an evolving environment, a utility company is required to improve the operation and maintenance of its physical assets, usually in the form of an asset management program. This paper proposes an organisational learning model for utility companies with respect to asset management activities. CommonKADS is utilised as a tool to capture the knowledge associated with managing the assets from the learning processes of the utility company. A case study of the Bangpakong power plant in Thailand is presented. The results show that, by applying the proposed methodologies, the learning processes within utility companies can be categorised and explained by five major learning steps: breakdown, corrective, preventive, predictive, and proactive maintenance.
Dynamic adjustment of age distribution in Human Resource Management by genetic algorithms.
Adjustment of a given age distribution to a desired age distribution within a required time frame is performed dynamically for the purpose of Human Resource (HR) planning in Human Resource Management (HRM). The adjustment process is carried out by adding adjustment magnitudes to the existing number of employees in the selected age groups on a yearly basis. A model of a discrete dynamical system is employed to emulate the evolution of the age distribution under the adjustment process. Genetic Algorithms (GAs) are applied to determine the adjustment magnitudes that influence the dynamics of the system. An interesting aspect of the problem lies in the high number of constraints; though the constraints are fundamental, they are considerably higher in number than in many other optimization problems. An adaptive penalty scheme is proposed for handling the constraints. Numerical examples show that a GA with the adaptive penalty scheme provides a potential means for HR planning in HRM.
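The adjustment-by-GA idea can be sketched in a few dozen lines. Everything concrete below, including the aging dynamics, the four age groups, the hiring cap, and a penalty weight that grows linearly with the generation count, is an assumption made for illustration, not the paper's actual model:

```python
# Minimal GA sketch for age-distribution adjustment with an ADAPTIVE penalty
# (weight grows with the generation). All numbers, the aging model, and the
# constraint set are invented for this sketch.
import random

GROUPS, YEARS = 4, 3            # age groups and planning horizon
current = [40, 30, 20, 10]      # current headcount per age group
target  = [30, 30, 25, 15]      # desired distribution after YEARS
MAX_HIRE_PER_YEAR = 25          # constraint: yearly additions are capped

def evolve(dist, hires_by_year):
    """Discrete dynamical system: each year the population shifts one age
    group up (the oldest group retires) and hires are added per group."""
    d = list(dist)
    for hires in hires_by_year:
        d = [0] + d[:-1]                      # everyone ages one group
        d = [max(0, x + h) for x, h in zip(d, hires)]
    return d

def fitness(ind, generation):
    plan = [ind[y * GROUPS:(y + 1) * GROUPS] for y in range(YEARS)]
    final = evolve(current, plan)
    error = sum(abs(f - t) for f, t in zip(final, target))
    # constraint violation: yearly hiring totals above the cap
    violation = sum(max(0, sum(y) - MAX_HIRE_PER_YEAR) for y in plan)
    # adaptive penalty: weight increases as the search progresses
    return error + (1 + generation) * violation

def run_ga(pop_size=40, generations=60, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 15) for _ in range(GROUPS * YEARS)]
           for _ in range(pop_size)]
    for g in range(generations):
        pop.sort(key=lambda ind: fitness(ind, g))
        parents = pop[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, len(a))
            child = a[:cut] + b[cut:]           # one-point crossover
            if rng.random() < 0.3:              # mutation
                child[rng.randrange(len(child))] = rng.randint(0, 15)
            children.append(child)
        pop = parents + children
    best = min(pop, key=lambda ind: fitness(ind, generations))
    return best, fitness(best, generations)

best, score = run_ga()
print("best plan:", best, "score:", score)
```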
Pressure Ulcer Risk and Prevention: Examining the Inter-Rater Reliability of the National Database of Nursing Quality Indicators® (NDNQI)
Measuring and reporting performance have become the norm. The purpose of this descriptive multi-site (N = 36 NDNQI-participating hospitals) study was to examine the reliability of the National Database of Nursing Quality Indicators® (NDNQI®) pressure ulcer (PrU) risk and prevention measures. This is the first known study to examine the inter-rater reliability of these measures. Data for Part 1 of this two-part study were extracted from 1,637 patient records by 120 raters. One rater at each hospital was considered the "expert". Agreement between the expert and non-expert raters was calculated for the risk measures. Among the patients, 530 were "at risk" for PrU and were included in the calculations of agreement for the prevention measures. In Part 2, raters completed an online survey about the methods they use to collect these data. Cohen's kappa values varied widely within and across hospitals. Because most patients were assessed for PrU risk, and those at risk received prevention, the prevalence of a "Yes" response was high, suggesting that prevalence-adjusted kappa (PAK) may be a better estimate of inter-rater reliability than Cohen's kappa. PAK values indicated substantial to near-perfect agreement for skin assessment, PAK = .977, 95% CI [.966-.989]; risk assessment, PAK = .978, 95% CI [.964-.993]; time since last risk assessment, PAK = .790, 95% CI [.729-.852]; risk assessment scale, PAK = .997, 95% CI [.991-1.0]; risk status, PAK = .877, 95% CI [.838-.917]; any prevention, PAK = .856, 95% CI [.769-.943]; skin assessment documented, PAK = .956, 95% CI [.904-1.0]; and pressure-redistribution surface use, PAK = .839, 95% CI [.763-.916]. PAK values indicated moderate agreement for routine repositioning, PAK = .577, 95% CI [.494-.661]; nutritional support, PAK = .500, 95% CI [.418-.581]; and moisture management, PAK = .556, 95% CI [.469-.643].
Results provide support for the reliability of all five PrU risk measures and three of the six prevention measures. Areas of disagreement between the expert and non-expert raters should direct education to improve reliability. Results of the online survey suggest raters need further training on the NDNQI guidelines for PrU data collection.
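The prevalence adjustment can be illustrated with a small computation. The agreement counts below are invented, and the formula used is the prevalence- and bias-adjusted kappa, PABAK = 2·Po − 1 for two raters and two categories, taken here as a stand-in for the study's PAK:

```python
# Hedged sketch contrasting Cohen's kappa with the prevalence- and
# bias-adjusted kappa (PABAK = 2*Po - 1 for a 2x2 table). The counts are
# made up and are NOT the study's data.

def cohens_kappa(a, b, c, d):
    """2x2 agreement table: a = both Yes, d = both No, b/c = disagreements."""
    n = a + b + c + d
    po = (a + d) / n                                       # observed agreement
    pe = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2  # chance agreement
    return (po - pe) / (1 - pe)

def pabak(a, b, c, d):
    po = (a + d) / (a + b + c + d)
    return 2 * po - 1

# A high prevalence of "Yes" (most patients assessed) depresses Cohen's
# kappa even when raw agreement is high:
print(round(cohens_kappa(90, 4, 4, 2), 3))  # about 0.291
print(round(pabak(90, 4, 4, 2), 3))         # 2*0.92 - 1 = 0.84
```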
Test data generation from UML state machine diagrams using GAs
Automatic test data generation helps testers validate software against user requirements more easily. Test data can be generated from many sources; for example, the experience of testers, the source program, or the software specification. Selecting a proper test data set is a decision-making task: testers have to decide what test data they should use, and a heuristic technique is needed to solve this problem automatically. In this paper, we propose a framework for generating test data from software specifications. The selected specification is the Unified Modeling Language (UML) state machine diagram. A UML state machine diagram describes a system in terms of states, which change when an action occurs in the system. The generated test data is a sequence of these actions, and these sequences of actions help testers know how they should test the system. The quality of generated test data is measured by the number of transitions fired using the test data: the more transitions the test data can fire, the better its quality. The number of covered transitions is also used as feedback for a heuristic search for a better test set. Genetic algorithms (GAs) are selected for searching for the best test data. Our experimental results show that the proposed GA-based approach works well for generating test data for some types of UML state machine diagrams.
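The coverage-driven fitness described above is easy to sketch: an individual is an event sequence, and its fitness is the number of distinct transitions it fires on the state machine. The toy door machine and all GA parameters below are invented for illustration:

```python
# Illustrative GA for transition coverage. An individual is an event
# sequence; fitness counts the distinct transitions it fires. The state
# machine and all parameters are assumptions for this sketch.
import random

# transitions[(state, event)] -> next state
TRANSITIONS = {
    ("closed", "open"): "opened",
    ("opened", "close"): "closed",
    ("closed", "lock"): "locked",
    ("locked", "unlock"): "closed",
}
EVENTS = ["open", "close", "lock", "unlock"]
INITIAL = "closed"

def fired_transitions(events):
    """Run an event sequence; undefined (state, event) pairs are ignored."""
    state, fired = INITIAL, set()
    for e in events:
        nxt = TRANSITIONS.get((state, e))
        if nxt is not None:
            fired.add((state, e))
            state = nxt
    return fired

def ga_best_sequence(length=8, pop_size=30, generations=40, seed=7):
    rng = random.Random(seed)
    pop = [[rng.choice(EVENTS) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda s: -len(fired_transitions(s)))  # best first
        survivors = pop[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, length)
            child = a[:cut] + b[cut:]           # one-point crossover
            if rng.random() < 0.2:              # mutation
                child[rng.randrange(length)] = rng.choice(EVENTS)
            children.append(child)
        pop = survivors + children
    return max(pop, key=lambda s: len(fired_transitions(s)))

best = ga_best_sequence()
print(best, len(fired_transitions(best)))  # coverage out of 4 transitions
```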
An automatic test data generation from UML state diagram using genetic algorithm.
Software testing is a part of the software development process. However, it is the first part to be skipped by software developers when there is limited time to complete the project. Software developers often finish their software construction close to the delivery time, so they usually don't have enough time to create effective test cases for testing their programs. Creating test cases manually is a huge amount of work for software developers under time pressure. A tool which automatically generates test cases and test data can help software developers create test cases from software designs/models at an early stage of software development (before coding). Heuristic techniques can be applied to create quality test data. In this paper, a GA-based test data generation technique is proposed to generate test data from a UML state diagram, so that test data can be generated before coding. The paper details the GA implementation to generate sequences of triggers for a UML state diagram as test cases. The proposed algorithm has been demonstrated manually on an example of a vending machine.
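A vending-machine demonstration in this style can be imitated with a toy model; the states, triggers, and transition table below are invented, not the paper's actual diagram. Replaying a generated trigger sequence then acts as an executable test case:

```python
# Toy vending-machine state diagram (invented) showing how a sequence of
# triggers replays as an executable test case with an expected state trace.
VENDING = {
    ("idle", "insert_coin"): "has_credit",
    ("has_credit", "insert_coin"): "has_credit",
    ("has_credit", "select_item"): "dispensing",
    ("has_credit", "refund"): "idle",
    ("dispensing", "take_item"): "idle",
}

def replay(triggers, initial="idle"):
    """Replay a trigger sequence; an undefined trigger is a test failure."""
    state, visited = initial, [initial]
    for t in triggers:
        key = (state, t)
        if key not in VENDING:
            raise AssertionError(f"no transition for {key}")
        state = VENDING[key]
        visited.append(state)
    return visited

# A generated test case: trigger sequence plus the expected state trace.
trace = replay(["insert_coin", "select_item", "take_item"])
print(trace)  # ['idle', 'has_credit', 'dispensing', 'idle']
```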
Heavy Quarkonium Melting in Large N Thermal QCD
Large N QCD is mostly governed by planar diagrams and should show linear
confinement when these diagrams are suitably summed. The linear confinement of
quarks in a class of these theories using gravity duals that capture the
logarithmic runnings of the coupling constants in the IR and strongly coupled
asymptotic conformal behavior in the UV was studied in our previous work. We
also extended the theories to high temperatures and argued the possibilities of
meltings and suppressions of heavy quarkonium states. In this paper we give a
formal proof of melting using very generic choices of UV completions, and point
out some subtleties associated with meltings in generic large N theories. Our
proof requires only the existence of well-defined UV behaviors that are devoid of Landau poles and UV divergences of the Wilson loops, allowing degrees of freedom to increase monotonically with energy scale. We determine the melting temperatures of heavy quarkonium states, which could suggest the presence of deconfinement phase transitions in these theories. (Comment: 15 pages, LaTeX file, 6 eps figures; v2: typos corrected and references added; v3: some additional typos corrected, and the draft slightly enlarged. Final version to appear in Physics Letters)
Enhancing university research activities with knowledge management.
In the new economy, innovation is regarded as one of the solutions for almost every organisation to survive in the new business era. Universities, especially in terms of research activities, are no different, since they strive for novelties which potentially lead to innovation. An experienced researcher in a university has continually created tacit knowledge in a specific domain, but typically finds it difficult to share this tacit knowledge with other researchers for problem-solving purposes. To overcome this problem, and to better stimulate knowledge-sharing activities among university researchers, Knowledge Management and Knowledge Engineering, particularly KADS, are utilised in this paper to assist a group of researchers from different domains in putting their experiences together. In this way, each researcher can make his or her tacit knowledge explicit in KADS task, inference, and domain knowledge models. The structured knowledge models captured from different researchers can then be merged together. In this paper, research in Knowledge Management is selected as a case study, and the results show that the relevant tacit knowledge has been made explicit from one researcher, allowing other researchers to share that knowledge as well as to add their own. Hence, their common research theme is effectively created and maintained by a group of researchers.
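The merging of structured knowledge models can be caricatured as a per-concept union. The model shape below (concept mapped to a set of statements) is a drastic simplification of KADS task/inference/domain models, assumed purely for illustration:

```python
# Rough sketch of merging researchers' explicit domain-knowledge models;
# the model shape is an assumption, far simpler than real KADS models.
def merge_models(models):
    """Union per-concept statements contributed by different researchers."""
    merged = {}
    for model in models:
        for concept, statements in model.items():
            merged.setdefault(concept, set()).update(statements)
    return merged

# Hypothetical contributions from two researchers:
alice = {"knowledge sharing": {"tacit knowledge is hard to articulate"}}
bob = {"knowledge sharing": {"communities of practice aid transfer"},
       "ontology": {"shared vocabulary enables merging"}}

shared = merge_models([alice, bob])
print(sorted(shared))  # ['knowledge sharing', 'ontology']
```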
Argyres-Douglas Loci, Singularity Structures and Wall-Crossings in Pure N=2 Gauge Theories with Classical Gauge Groups
N=2 Seiberg-Witten theories allow an interesting interplay between the
Argyres-Douglas loci, singularity structures and wall-crossing formulae. In
this paper we investigate this connection by first studying the singularity
structures of hyper-elliptic Seiberg-Witten curves for pure N=2 gauge theories
with SU(r+1) and Sp(2r) gauge groups, and propose new methods to locate the
Argyres-Douglas loci in the moduli space, where multiple mutually non-local BPS
states become massless. In a region of the moduli space, we compute dyon
charges for all 2r+2 and 2r+1 massless dyons for SU(r+1) and Sp(2r) gauge
groups respectively for rank r>1. From here we elucidate the connection to the
wall-crossing phenomena for pure Sp(4) Seiberg-Witten theory near the
Argyres-Douglas loci, despite our emphasis being only on the massless sector of the BPS spectra. We also present 2r-1 candidates for the maximal Argyres-Douglas points for pure SO(2r+1) Seiberg-Witten theory. (Comment: 81 pages, 41 figures, LaTeX; v2: Minor cosmetic changes and correction of a typographical error in acknowledgement. Final version to appear in JHE)