Assessing evaluation procedures for individual researchers: the case of the Italian National Scientific Qualification
The Italian National Scientific Qualification (ASN) was introduced as a
prerequisite for applying for tenured associate or full professor positions at
state-recognized universities. The ASN is meant to attest that an individual
has reached a suitable level of scientific maturity to apply for professorship
positions. A five-member panel, appointed for each scientific discipline, is in
charge of evaluating applicants by means of quantitative indicators of impact
and productivity, and through an assessment of their research profile. Many
concerns were raised about the appropriateness of the evaluation criteria, and
in particular about the use of bibliometrics for the evaluation of individual
researchers. Additional concerns were related to the perceived poor quality of
the final evaluation reports. In this paper we assess the ASN in terms of
appropriateness of the applied methodology, and the quality of the feedback
provided to the applicants. We argue that the ASN is not fully compliant with
the best practices for the use of bibliometric indicators for the evaluation of
individual researchers; moreover, the quality of final reports varies
considerably across the panels, suggesting that measures should be put in place
to prevent sloppy practices in future ASN rounds.
Quantitative Analysis of the Italian National Scientific Qualification
The Italian National Scientific Qualification (ASN) was introduced in 2010 as
part of a major reform of the national university system. Under the new
regulation, the scientific qualification for a specific role (associate or full
professor) and field of study is required to apply to a permanent professor
position. The ASN is peculiar since it makes use of bibliometric indicators
with associated thresholds as one of the parameters used to assess applicants.
Overall, more than 59000 applications were submitted, and the results have been
made publicly available for a short period of time, including the values of the
quantitative indicators for each applicant. The availability of this wealth of
information provides an opportunity to draw a fairly detailed picture of a
nation-wide evaluation exercise, and to study the impact of the bibliometric
indicators on the qualification results. In this paper we provide a first
account of the Italian ASN from a quantitative point of view. We show that
significant differences exist among scientific disciplines, in particular with
respect to the fraction of qualified applicants, which cannot be easily
explained. Furthermore, we describe some issues related to the definition and
use of the bibliometric indicators and thresholds. Our analysis aims at drawing
attention to potential problems that should be addressed by decision-makers in
future ASN rounds.
Comment: ISSN 1751-157
A Framework for QoS-aware Execution of Workflows over the Cloud
The Cloud Computing paradigm is providing system architects with a new
powerful tool for building scalable applications. Clouds allow allocation of
resources on a "pay-as-you-go" model, so that additional resources can be
requested during peak loads and released afterwards. However, this flexibility
calls for appropriate dynamic reconfiguration strategies. In this paper we
describe SAVER (qoS-Aware workflows oVER the Cloud), a QoS-aware algorithm for
executing workflows involving Web Services hosted in a Cloud environment. SAVER
allows execution of arbitrary workflows subject to response time constraints.
SAVER uses a passive monitor to identify workload fluctuations based on the
observed system response time. The information collected by the monitor is used
by a planner component to identify the minimum number of instances of each Web
Service which should be allocated in order to satisfy the response time
constraint. SAVER uses a simple Queueing Network (QN) model to identify the
optimal resource allocation. Specifically, the QN model is used to identify
bottlenecks, and predict the system performance as Cloud resources are
allocated or released. The parameters used to evaluate the model are those
collected by the monitor, which means that SAVER does not require any
particular knowledge of the Web Services and workflows being executed. Our
approach has been validated through numerical simulations, whose results are
reported in this paper.
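The planning step described in the SAVER abstract can be illustrated with a minimal sketch. The following is not the authors' code: it assumes, for illustration, that each Web Service is modeled as c parallel M/M/1 queues with the arrival stream split evenly among instances, and it greedily adds instances to the bottleneck service until the predicted end-to-end response time meets the constraint.

```python
# Hypothetical sketch of a QN-based planner in the spirit of SAVER:
# find the minimum number of instances per Web Service so that the
# predicted end-to-end response time stays within the constraint.

def min_instances(arrival, service, r_max):
    """arrival[i]: request rate at service i; service[i]: rate of one instance;
    r_max: end-to-end response time constraint (sum over services)."""
    # Start from the minimum allocation that keeps every queue stable
    # (c > arrival/service, i.e. utilization below 1).
    c = [int(a / s) + 1 for a, s in zip(arrival, service)]

    def resp(i):
        # M/M/1 response time of one instance receiving 1/c of the load.
        return 1.0 / (service[i] - arrival[i] / c[i])

    while sum(resp(i) for i in range(len(c))) > r_max:
        # Add one instance to the bottleneck (largest response time).
        b = max(range(len(c)), key=resp)
        c[b] += 1
    return c
```

For example, `min_instances([8.0, 4.0], [10.0, 5.0], 1.0)` returns `[1, 2]`: the second service is the bottleneck and receives an extra instance.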
Parallel Sort-Based Matching for Data Distribution Management on Shared-Memory Multiprocessors
In this paper we consider the problem of identifying intersections between
two sets of d-dimensional axis-parallel rectangles. This is a common problem
that arises in many agent-based simulation studies, and is of central
importance in the context of High Level Architecture (HLA), where it is at the
core of the Data Distribution Management (DDM) service. Several realizations of
the DDM service have been proposed; however, many of them are either
inefficient or inherently sequential. These are serious limitations since
multicore processors are now ubiquitous, and DDM algorithms -- being
CPU-intensive -- could benefit from additional computing power. We propose a
parallel version of the Sort-Based Matching algorithm for shared-memory
multiprocessors. Sort-Based Matching is one of the most efficient serial
algorithms for the DDM problem, but is quite difficult to parallelize due to
data dependencies. We describe the algorithm and compute its asymptotic running
time; we complete the analysis by assessing its performance and scalability
through extensive experiments on two commodity multicore systems based on a
dual socket Intel Xeon processor, and a single socket Intel Core i7 processor.
Comment: Proceedings of the 21st ACM/IEEE International Symposium on
Distributed Simulation and Real Time Applications (DS-RT 2017). Best Paper
Award @DS-RT 2017
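The serial algorithm that the paper parallelizes can be sketched for the one-dimensional case. This is an illustrative reconstruction, not the paper's code: it sorts all interval endpoints and sweeps them, maintaining the sets of currently open subscription and update intervals.

```python
# Illustrative sketch of serial Sort-Based Matching in one dimension:
# report every (subscription, update) pair of overlapping intervals.

def sort_based_matching(subscriptions, updates):
    """Intervals as (lo, hi) tuples; returns the set of intersecting
    (subscription_index, update_index) pairs."""
    events = []
    for i, (lo, hi) in enumerate(subscriptions):
        events.append((lo, 0, 'S', i))  # open event
        events.append((hi, 1, 'S', i))  # close event
    for j, (lo, hi) in enumerate(updates):
        events.append((lo, 0, 'U', j))
        events.append((hi, 1, 'U', j))
    # Opens (0) sort before closes (1) at equal coordinates, so
    # touching intervals count as intersecting.
    events.sort()
    open_s, open_u, matches = set(), set(), set()
    for _, kind, side, idx in events:
        if kind == 0:
            # A newly opened interval matches every open interval
            # of the other set.
            if side == 'S':
                matches.update((idx, u) for u in open_u)
                open_s.add(idx)
            else:
                matches.update((s, idx) for s in open_s)
                open_u.add(idx)
        else:
            (open_s if side == 'S' else open_u).discard(idx)
    return matches
```

Sorting dominates, giving O(n log n) time; the sweep itself is linear in the number of events plus the number of reported matches, which is what makes the sweep phase the interesting part to parallelize.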
Parallel Discrete Event Simulation with Erlang
Discrete Event Simulation (DES) is a widely used technique in which the state
of the simulator is updated by events happening at discrete points in time
(hence the name). DES is used to model and analyze many kinds of systems,
including computer architectures, communication networks, street traffic, and
others. Parallel and Distributed Simulation (PADS) aims at improving the
efficiency of DES by partitioning the simulation model across multiple
processing elements, enabling larger and/or more detailed studies
to be carried out. Interest in PADS has been growing with the widespread
availability of multicore processors and affordable high performance computing
clusters. However, designing parallel simulation models requires considerable
expertise, the result being that PADS techniques are not as widespread as they
could be. In this paper we describe ErlangTW, a parallel simulation middleware
based on the Time Warp synchronization protocol. ErlangTW is entirely written
in Erlang, a concurrent, functional programming language specifically targeted
at building distributed systems. We argue that writing parallel simulation
models in Erlang is considerably easier than using conventional programming
languages. Moreover, ErlangTW allows simulation models to be executed on
single-core, multicore, and distributed computing architectures. We describe the
design and prototype implementation of ErlangTW, and report some preliminary
performance results on multicore and distributed architectures using the well
known PHOLD benchmark.
Comment: Proceedings of the ACM SIGPLAN Workshop on Functional High-Performance
Computing (FHPC 2012) in conjunction with ICFP 2012. ISBN: 978-1-4503-1577-
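The core rule of the Time Warp protocol that ErlangTW implements can be sketched as follows. ErlangTW itself is written in Erlang; this Python sketch is only an illustration of the optimistic-execution idea, and it omits anti-messages and global virtual time computation.

```python
# Minimal sketch of a Time Warp logical process: events are executed
# optimistically in timestamp order, and a "straggler" message with a
# timestamp in the past triggers a rollback (anti-messages omitted).

import heapq

class LogicalProcess:
    def __init__(self):
        self.lvt = 0          # local virtual time
        self.queue = []       # pending events (min-heap by timestamp)
        self.processed = []   # history kept so rollback is possible

    def receive(self, ts, event):
        if ts < self.lvt:
            # Straggler: undo every event processed at a later virtual
            # time by re-enqueueing it for re-execution.
            while self.processed and self.processed[-1][0] > ts:
                heapq.heappush(self.queue, self.processed.pop())
            self.lvt = self.processed[-1][0] if self.processed else 0
        heapq.heappush(self.queue, (ts, event))

    def step(self):
        # Optimistically execute the earliest pending event.
        ts, event = heapq.heappop(self.queue)
        self.lvt = ts
        self.processed.append((ts, event))
        return ts, event
```

For instance, processing an event at time 5 and then receiving one stamped 3 rolls the process back, after which both events are re-executed in order (3, then 5).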
Fault Tolerant Adaptive Parallel and Distributed Simulation through Functional Replication
This paper presents FT-GAIA, a software-based fault-tolerant parallel and
distributed simulation middleware. FT-GAIA has been designed to reliably
handle Parallel And Distributed Simulation (PADS) models, which are needed to
properly simulate and analyze complex systems arising in many scientific
and engineering fields. PADS takes advantage of multiple execution units
running on multicore processors, clusters of workstations, or HPC systems. However, large
computing systems, such as HPC systems that include hundreds of thousands of
computing nodes, have to handle frequent failures of some components. To cope
with this issue, FT-GAIA transparently replicates simulation entities and
distributes them on multiple execution nodes. This allows the simulation to
tolerate crash-failures of computing nodes. Moreover, FT-GAIA offers some
protection against Byzantine failures, since interaction messages among the
simulated entities are replicated as well, so that the receiving entity can
identify and discard corrupted messages. Results from an analytical model and
from an experimental evaluation show that FT-GAIA provides a high degree of
fault tolerance, at the cost of a moderate increase in the computational load
of the execution units.
Comment: arXiv admin note: substantial text overlap with arXiv:1606.0731
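The Byzantine-protection mechanism the abstract describes, where the receiver discards corrupted copies of a replicated message, amounts to majority voting. A minimal sketch, not FT-GAIA's actual code, under the assumption that a message is accepted only if a strict majority of the sender's replicas report the same payload:

```python
# Illustrative majority-vote delivery for replicated interaction
# messages: keep the payload reported by a strict majority of the
# sender's replicas, discard everything else as potentially corrupted.

from collections import Counter

def deliver(copies, n_replicas):
    """copies: payloads received from the sender's replicas.
    Returns the majority payload, or None when no payload reaches a
    strict majority (e.g. too many crashed or Byzantine replicas)."""
    if not copies:
        return None
    payload, votes = Counter(copies).most_common(1)[0]
    return payload if votes > n_replicas // 2 else None
```

With 3 replicas, `deliver(['x', 'x', 'y'], 3)` accepts `'x'`, while only two surviving copies that disagree yield no delivery, which is consistent with tolerating one faulty replica out of three.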