REI: An integrated measure for software reusability
To capitalize upon the benefits of software reuse, an efficient selection among candidate reusable assets should be performed in terms of functional fitness and adaptability. The reusability of assets is usually measured through reusability indices. However, these do not capture all facets of reusability, such as structural characteristics, external quality attributes, and documentation. In this paper, we propose a reusability index (REI) as a synthesis of various software metrics and evaluate its ability to quantify reuse, based on the IEEE Standard on Software Metrics Validity. The proposed index is compared with existing ones through a case study on 80 reusable open-source assets. To illustrate the applicability of the proposed index, we performed a pilot study in which real-world reuse decisions were compared with decisions imposed by the use of metrics (including REI). The results of the study suggest that the proposed index presents the highest predictive and discriminative power; it is the most consistent in ranking reusable assets and the most strongly correlated with their levels of reuse. The findings of the paper are discussed to identify the most important aspects in reusability assessment (interpretation of results), and interesting implications for research and practice are provided.
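The abstract does not spell out REI's formula. As a purely illustrative sketch of how a composite index can synthesize metrics spanning structure, quality, and documentation, the snippet below normalizes each raw metric into [0, 1] and takes a weighted sum; all metric names, bounds, and weights are hypothetical placeholders, not the paper's actual definition.

```python
# Illustrative sketch only: REI's real formula is defined in the paper;
# the metrics, bounds, and weights below are hypothetical.

def normalize(value, lo, hi):
    """Scale a raw metric into [0, 1], clamping out-of-range values."""
    if hi == lo:
        return 0.0
    return max(0.0, min(1.0, (value - lo) / (hi - lo)))

def composite_index(metrics, weights, bounds):
    """Weighted average of normalized metric values.

    Direction handling (e.g. lower complexity being better) is omitted
    to keep the sketch short.
    """
    total = sum(weights.values())
    return sum(
        weights[name] * normalize(metrics[name], *bounds[name])
        for name in weights
    ) / total

# Hypothetical asset with one structural, one coupling, and one
# documentation metric.
metrics = {"complexity": 12.0, "coupling": 0.3, "doc_coverage": 0.8}
bounds  = {"complexity": (0, 50), "coupling": (0, 1), "doc_coverage": (0, 1)}
weights = {"complexity": 1.0, "coupling": 1.0, "doc_coverage": 1.0}

score = composite_index(metrics, weights, bounds)
```

A real index of this kind would additionally invert "lower is better" metrics and calibrate the weights against observed reuse, which is the validation the paper performs.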
Spin-Mapping Methods for Simulating Ultrafast Nonadiabatic Dynamics
Many chemical reactions exhibit nonadiabatic effects as a consequence of coupling between electronic states and/or interaction with light. While a fully quantum description of nonadiabatic reactions is unfeasible for most realistic molecules, a more computationally tractable approach is to combine a classical description of the nuclei with a quantum description of the electronic states. Combining the formalisms of quantum and classical dynamics is, however, a difficult problem for which standard methods (such as Ehrenfest dynamics and surface hopping) may be insufficient. In this article, we review a new trajectory-based approach developed in our group that is able to describe nonadiabatic dynamics with higher accuracy than previous approaches at a similar level of computational effort. This method treats the electronic states with a phase-space representation for discrete-level systems, which in the two-level case is analogous to a spin-½. We point out the key features of the method and demonstrate its use in a variety of applications, including ultrafast transfer through conical intersections, damped coherent excitation under coupling to a strong light field, and nonlinear spectroscopy of light-harvesting complexes.
Demystifying Data Science Projects: A Look on the People and Process of Data Science Today
An empirical cognitive model of the development of shared understanding of requirements
It is well documented that customers and software development teams need to share and refine their understanding of the requirements throughout the software development lifecycle. The development of this shared understanding is, however, complex and error-prone. Techniques and tools to support the development of a shared understanding of requirements (SUR) should be based on a clear conceptualization of the phenomenon, grounded in relevant theory and in analysis of observed practice. This study contributes a detailed conceptualization of SUR development as a sequence of group-level state transitions, based on specializing the Team Mental Model construct. Furthermore, it proposes a novel group-level cognitive model as the main result of an analysis of data collected from the observation of an Agile software development team over a period of several months. An initial high-level application of the model shows that it has promise for providing new insights into supporting SUR development.
Risk of death by suicide following self-harm presentations to healthcare: development and validation of a multivariable clinical prediction rule (OxSATS)
Background Assessment of suicide risk in individuals who have self-harmed is common in emergency departments, but is often based on tools developed for other purposes.
Objective We developed and validated a predictive model for suicide following self-harm.
Methods We used data from Swedish population-based registers. A cohort of 53 172 individuals aged 10+ years, with healthcare episodes of self-harm, was split into development (37 523 individuals, of whom 391 died by suicide within 12 months) and validation (15 649 individuals, 178 suicides within 12 months) samples. We fitted a multivariable accelerated failure time model for the association between risk factors and time to suicide. The final model contains 11 factors: age, sex, and variables related to substance misuse, mental health and treatment, and history of self-harm. The Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) guidelines were followed for the design and reporting of this work.
Findings An 11-item risk model to predict suicide was developed using sociodemographic and clinical risk factors, and showed good discrimination (c-index 0.77, 95% CI 0.75 to 0.78) and calibration in external validation. For risk of suicide within 12 months, using a 1% cut-off, sensitivity was 82% (75% to 87%) and specificity was 54% (53% to 55%). A web-based risk calculator is available (Oxford Suicide Assessment Tool for Self-harm or OxSATS).
Conclusions OxSATS accurately predicts 12-month risk of suicide. Further validations and linkage to effective interventions are required to examine clinical utility.
Clinical implications Using a clinical prediction score may assist clinical decision-making and resource allocation.
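The sensitivity and specificity figures above follow from classifying each individual as high-risk when the model's predicted 12-month risk meets a 1% cut-off. The sketch below shows that computation on made-up data; it is illustrative only and uses none of the study's register data or the OxSATS model itself.

```python
# Illustrative only: how sensitivity and specificity at a risk cut-off
# (e.g. the 1% threshold reported for OxSATS) are computed.
# The risks and outcomes below are invented for the example.

def confusion_at_cutoff(risks, outcomes, cutoff):
    """Flag an individual as high-risk if predicted risk >= cutoff,
    then tally the confusion matrix against observed outcomes."""
    tp = fp = tn = fn = 0
    for risk, died in zip(risks, outcomes):
        flagged = risk >= cutoff
        if flagged and died:
            tp += 1
        elif flagged and not died:
            fp += 1
        elif not flagged and died:
            fn += 1
        else:
            tn += 1
    sensitivity = tp / (tp + fn)   # proportion of cases correctly flagged
    specificity = tn / (tn + fp)   # proportion of non-cases not flagged
    return sensitivity, specificity

risks    = [0.002, 0.015, 0.030, 0.004, 0.020, 0.001]  # predicted 12-month risk
outcomes = [False, True,  True,  False, False, False]  # suicide within 12 months
sens, spec = confusion_at_cutoff(risks, outcomes, cutoff=0.01)
```

Lowering the cut-off trades specificity for sensitivity, which is why the paper reports both at the chosen 1% threshold.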
Degrees of tenant isolation for cloud-hosted software services: a cross-case analysis
A challenge, when implementing multi-tenancy in a cloud-hosted software service, is how to ensure that the performance and resource consumption of one tenant do not adversely affect other tenants. Software designers and architects must achieve an optimal degree of tenant isolation for their chosen application requirements. The objective of this research is to reveal the trade-offs, commonalities, and differences to be considered when implementing the required degree of tenant isolation. This research uses a cross-case analysis of selected open-source cloud-hosted software engineering tools to empirically evaluate varying degrees of isolation between tenants. Our research reveals five commonalities across the case studies: disk space reduction, use of locking, low cloud resource consumption, customization and use of a plug-in architecture, and choice of multi-tenancy pattern. Two of these common factors compromise tenant isolation: the degree of isolation is reduced when there is no strategy to reduce disk space and when customization and a plug-in architecture are not adopted. In contrast, the degree of isolation improves when careful consideration is given to handling high workloads, when locking of data and processes is used to prevent clashes between multiple tenants, and when an appropriate multi-tenancy pattern is selected. The research also revealed five case-study differences: size of generated data, cloud resource consumption, sensitivity to workload changes, client latency and bandwidth, and type of software process. In our results, the degree of isolation is impaired by a large size of generated data, high resource consumption by certain software processes, high or fluctuating workload, low client latency, and bandwidth when transferring multiple files between repositories. Additionally, this research provides a novel explanatory framework for (i) mapping tenant isolation to different software development processes, cloud resources, and layers of the cloud stack; and (ii) explaining the different trade-offs affecting tenant isolation (i.e., resource sharing, the number of users/requests, customizability, the size of generated data, the scope of control of the cloud application stack, and business constraints) to consider when implementing multi-tenant cloud-hosted software services. This research suggests that software architects have to pay attention to the trade-offs, commonalities, and differences we identify to achieve their tenant isolation requirements.
Metrics to evaluate research performance in academic institutions: A critique of ERA 2010 as applied in forestry and the indirect H2 index as a possible alternative
Excellence in Research for Australia (ERA) is an attempt by the Australian Research Council to rate Australian universities on a 5-point scale within 180 Fields of Research using metrics and peer evaluation by an evaluation committee. Some of the bibliometric data contributing to this ranking suffer from statistical issues associated with skewed distributions. Other data are standardised year by year, placing undue emphasis on the most recent publications, which may not yet have reliable citation patterns. The bibliometric data offered to the evaluation committees are extensive but lack effective syntheses such as the h-index and its variants. The indirect H2 index is objective, can be computed automatically and efficiently, is resistant to manipulation, and is a good indicator of impact, able to assist the ERA evaluation committees and similar evaluations internationally.
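The h-index mentioned above has a simple definition: the largest h such that at least h of an author's papers have h or more citations each. A minimal sketch of that computation follows; the indirect H2 index the paper advocates is a variant built on the same ranking idea and is not implemented here.

```python
def h_index(citations):
    """Largest h such that at least h papers have >= h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        # Once sorted descending, h is the last rank where the paper at
        # that rank still has at least `rank` citations.
        if cites >= rank:
            h = rank
        else:
            break
    return h

# e.g. five papers cited 10, 8, 5, 4, and 3 times give h = 4
print(h_index([10, 8, 5, 4, 3]))
```

Because h depends only on the rank-ordered citation counts, it is insensitive to the skewed tails that the abstract notes distort other ERA bibliometrics.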
Aging affects attunement in perceiving length by dynamic touch
Earlier studies have revealed age-dependent differences in perception by dynamic touch. In the present study, we examined whether the capacity to learn deteriorates with aging. Adopting an ecological approach to learning, the authors examined the process of attunement, that is, the changes in which informational variable is exploited. Young and elderly adults were trained to perceive the lengths of unseen, handheld rods. It was found that the capacity to attune declines with aging: contrary to the young adults, the elderly proved unsuccessful in learning to detect the specifying informational variables. The fact that aging affects the capacity to attune opens a new line of research in the study of the perception and perceptual-motor skills of the elderly. The authors discuss the implications of their findings for the ongoing discussions on the ecological approach to learning.
Effects of Test-Driven Development: A Comparative Analysis of Empirical Studies
Test-driven development is a software development practice in which small sections of test code are used to direct the development of program units. Writing test code prior to the production code promises several positive effects on the development process itself, as well as on associated products and processes. However, there are few comparative studies on the effects of test-driven development, so it is difficult to assess the potential process and product effects of applying it. To get an overview of the observed effects of test-driven development, an in-depth review of existing empirical studies was carried out. The results for ten different internal and external quality attributes indicate that test-driven development can reduce the number of introduced defects and lead to more maintainable code. Parts of the implemented code may also be somewhat smaller in size and complexity. While maintenance of test-driven code can take less time, initial development may last longer. Besides the comparative analysis, this article sketches related work and gives an outlook on future research.
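The test-first cycle described above can be sketched in a few lines: a failing test is written first, and the production code is then implemented (and refactored) until it passes. The function name and behavior below are invented for the illustration.

```python
# Minimal sketch of the test-first cycle: the test case is authored
# before the production code it drives. slugify() is a made-up example.
import unittest

def slugify(title):
    """Production code written only after the test below existed."""
    return "-".join(title.lower().split())

class TestSlugify(unittest.TestCase):
    # Step 1 of the cycle: this test is written first and initially fails.
    def test_lowercases_and_hyphenates(self):
        self.assertEqual(slugify("Test Driven Development"),
                         "test-driven-development")

# Steps 2-3: implement, then run the suite until it is green.
suite = unittest.TestLoader().loadTestsFromTestCase(TestSlugify)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The accumulated test suite is what the reviewed studies credit for fewer introduced defects and more maintainable code, at the price of longer initial development.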