A Model-Driven Approach for Business Process Management
Business Process Management is a common practice recommended by a large number of standards for the management of companies and organizations. In software companies this practice is increasingly accepted, and companies must adopt it if they want to remain competitive. However, the effective definition of these processes, and especially their maintenance and execution, are not always easy tasks. This paper presents an approach based on the Model-Driven paradigm for Business Process Management in software companies. This solution offers a suitable mechanism that was implemented successfully in different companies with a CASE tool named NDTQ-Framework. Funding: Ministerio de Educación y Ciencia TIN2010-20057-C03-02; Junta de Andalucía TIC-578.
Horizontal Integration of Warfighter Intelligence Data: A Shared Semantic Resource for the Intelligence Community
We describe a strategy that is being used for the horizontal integration of warfighter intelligence data within the framework of the US Army’s Distributed Common Ground System Standard Cloud (DSC) initiative. The strategy rests on the development of a set of ontologies that are being incrementally applied to bring about what we call the ‘semantic enhancement’ of data models used within each intelligence discipline. We show how the strategy can help to overcome the familiar tendency toward stovepiping of intelligence data, and describe how it can be applied in an agile fashion to new data resources in ways that address the immediate needs of intelligence analysts.
The evolving landscape of learning technology
This paper provides an overview of the current and emerging issues in learning technology research, concentrating on structural issues such as infrastructure, policy and organizational context. It updates the vision of technology outlined in Squires’ (1999) concept of peripatetic electronic teachers (PETs), in which Information and Communication Technologies (ICT) provide an enabling medium that allows teachers to act as freelance agents in a virtual world, and reflects on the extent to which this vision has been realized. The paper begins with a survey of some of the key areas of ICT development and provides a contextualizing framework for the area in terms of external agendas and policy drivers. It then focuses upon learning technology developments which have occurred in the last five years in the UK and offers a number of alternative taxonomies to describe them. The paper concludes with a discussion of the issues which arise from this work.
Predicting regression test failures using genetic algorithm-selected dynamic performance analysis metrics
A novel framework for predicting regression test failures is proposed. The basic principle embodied in the framework is to use performance analysis tools to capture the runtime behaviour of a program as it executes each test in a regression suite. The performance information is then used to build a predictive model of test outcomes. Our framework is evaluated using a genetic algorithm for dynamic metric selection in combination with state-of-the-art machine learning classifiers. We show that if a program is modified and some tests subsequently fail, then it is possible to predict with considerable accuracy which of the remaining tests will also fail, which can be used to help prioritise tests in time-constrained testing environments.
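The pipeline this abstract describes (genetic selection of dynamic performance metrics feeding a test-outcome classifier) can be sketched in miniature. The sketch below is illustrative only: it uses synthetic metric data, a leave-one-out 1-nearest-neighbour scorer as a stand-in for the paper's machine learning classifiers, and a basic bit-mask genetic algorithm; every name and parameter is a placeholder, not the authors' implementation.

```python
import random

random.seed(0)

# Synthetic "performance metrics" for 40 test executions.
# Metrics 0 and 1 correlate with failure; metrics 2-5 are pure noise.
def make_dataset(n=40, n_metrics=6):
    data, labels = [], []
    for _ in range(n):
        fail = random.random() < 0.5
        row = [(1.0 if fail else 0.0) + random.gauss(0, 0.3) if m < 2
               else random.gauss(0, 1.0)
               for m in range(n_metrics)]
        data.append(row)
        labels.append(fail)
    return data, labels

def knn_loo_accuracy(data, labels, mask):
    # Fitness: leave-one-out 1-NN accuracy using only the masked metrics.
    sel = [i for i, bit in enumerate(mask) if bit]
    if not sel:
        return 0.0
    correct = 0
    for i in range(len(data)):
        best_j, best_d = None, float("inf")
        for j in range(len(data)):
            if i == j:
                continue
            d = sum((data[i][k] - data[j][k]) ** 2 for k in sel)
            if d < best_d:
                best_j, best_d = j, d
        correct += labels[best_j] == labels[i]
    return correct / len(data)

def evolve(data, labels, n_metrics=6, pop_size=20, generations=15):
    # Each individual is a bit mask selecting a subset of metrics.
    pop = [[random.randint(0, 1) for _ in range(n_metrics)]
           for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=lambda m: knn_loo_accuracy(data, labels, m),
                        reverse=True)
        elite = scored[: pop_size // 2]          # elitism keeps the best masks
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, n_metrics)  # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:             # bit-flip mutation
                child[random.randrange(n_metrics)] ^= 1
            children.append(child)
        pop = elite + children
    return max(pop, key=lambda m: knn_loo_accuracy(data, labels, m))

data, labels = make_dataset()
best = evolve(data, labels)
acc = knn_loo_accuracy(data, labels, best)
print("selected metrics:", [i for i, b in enumerate(best) if b])
print("LOO accuracy: %.2f" % acc)
```

Because the GA's fitness is the classifier's accuracy on the selected subset, metric selection and outcome prediction are optimised jointly, which is the core idea the abstract relies on.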
MLPerf Inference Benchmark
Machine-learning (ML) hardware and software system demand is burgeoning.
Driven by ML applications, the number of different ML inference systems has
exploded. Over 100 organizations are building ML inference chips, and the
systems that incorporate existing models span at least three orders of
magnitude in power consumption and five orders of magnitude in performance;
they range from embedded devices to data-center solutions. Fueling the hardware
are a dozen or more software frameworks and libraries. The myriad combinations
of ML hardware and ML software make assessing ML-system performance in an
architecture-neutral, representative, and reproducible manner challenging.
There is a clear need for industry-wide standard ML benchmarking and evaluation
criteria. MLPerf Inference answers that call. In this paper, we present our
benchmarking method for evaluating ML inference systems. Driven by more than 30
organizations as well as more than 200 ML engineers and practitioners, MLPerf
prescribes a set of rules and best practices to ensure comparability across
systems with wildly differing architectures. The first call for submissions
garnered more than 600 reproducible inference-performance measurements from 14
organizations, representing over 30 systems that showcase a wide range of
capabilities. The submissions attest to the benchmark's flexibility and
adaptability.
Comment: ISCA 202
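While the paper defines the official methodology, the core measurement loop of a latency benchmark in the single-stream style can be illustrated in a few lines. This is a toy sketch, not MLPerf's LoadGen: the `fake_model`, query count, and percentile choice below are all placeholder assumptions.

```python
import time
import statistics

def fake_model(x):
    # Stand-in for a real inference call: a tiny fixed computation.
    return sum(i * i for i in range(100 + x % 7))

def single_stream_latencies(model, n_queries=200):
    """Issue queries back-to-back, one in flight at a time, and record
    per-query latency in seconds (loosely mimicking a single-stream scenario)."""
    latencies = []
    for q in range(n_queries):
        t0 = time.perf_counter()
        model(q)
        latencies.append(time.perf_counter() - t0)
    return latencies

lat = single_stream_latencies(fake_model)
lat.sort()
p90 = lat[int(0.9 * len(lat))]  # tail latency, not just the mean
print(f"queries: {len(lat)}  median: {statistics.median(lat):.2e}s  p90: {p90:.2e}s")
```

Reporting a tail percentile rather than a mean is what makes results comparable across systems whose latency distributions differ wildly, which is one of the comparability problems the benchmark addresses.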
DeepMasterPrints: Generating MasterPrints for Dictionary Attacks via Latent Variable Evolution
Recent research has demonstrated the vulnerability of fingerprint recognition
systems to dictionary attacks based on MasterPrints. MasterPrints are real or
synthetic fingerprints that can fortuitously match with a large number of
fingerprints thereby undermining the security afforded by fingerprint systems.
Previous work by Roy et al. generated synthetic MasterPrints at the
feature-level. In this work we generate complete image-level MasterPrints known
as DeepMasterPrints, whose attack accuracy is found to be far superior to
that of previous methods. The proposed method, referred to as Latent Variable
Evolution, is based on training a Generative Adversarial Network on a set of
real fingerprint images. Stochastic search in the form of the Covariance Matrix
Adaptation Evolution Strategy is then used to search for latent input variables
to the generator network that can maximize the number of impostor matches as
assessed by a fingerprint recognizer. Experiments convey the efficacy of the
proposed method in generating DeepMasterPrints. The underlying method is likely
to have broad applications in fingerprint security as well as fingerprint
synthesis.
Comment: 8 pages; added new verification systems and diagrams. Accepted to the
conference Biometrics: Theory, Applications, and Systems 201
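The Latent Variable Evolution loop (search the generator's latent space for inputs that maximize a matcher's score) can be sketched with toy stand-ins. Assumptions to note: `generator` and `match_score` below are placeholders for a trained GAN and a real fingerprint matcher, and a simple (1+λ) hill-climbing evolution strategy with step-size decay substitutes for the paper's CMA-ES.

```python
import random

random.seed(1)
DIM = 8  # toy latent dimensionality

def generator(z):
    # Toy stand-in for a trained GAN generator: maps a latent vector
    # to a "fingerprint template" (here just a transformed vector).
    return [2.0 * v - 1.0 for v in z]

def match_score(template):
    # Toy stand-in for the matcher's aggregate impostor-match objective:
    # higher (closer to 0) when the template is near a fixed target pattern.
    target = [0.5] * DIM
    return -sum((t - g) ** 2 for t, g in zip(target, template))

def es_search(iterations=200, sigma=0.3, lam=8):
    """(1+lambda) evolution strategy over latent space: a simplified
    stand-in for CMA-ES as used in Latent Variable Evolution."""
    parent = [random.gauss(0, 1) for _ in range(DIM)]
    best_fit = match_score(generator(parent))
    for _ in range(iterations):
        improved = False
        for _ in range(lam):
            child = [v + random.gauss(0, sigma) for v in parent]
            fit = match_score(generator(child))
            if fit > best_fit:           # greedy: keep only improvements
                parent, best_fit = child, fit
                improved = True
        if not improved:
            sigma *= 0.85                # shrink step size when a generation stalls
    return parent, best_fit

z, fit = es_search()
print("best fitness:", round(fit, 4))
```

The key point the sketch preserves is that the search never touches pixels directly: it optimises the generator's *input*, so every candidate remains a plausible sample from the learned fingerprint distribution.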
Report on the Standardization Project ``Formal Methods in Conformance Testing''
This paper presents the latest developments in the “Formal Methods in Conformance
Testing” (FMCT) project of ISO and ITU-T. The project has been initiated to study
the role of formal description techniques in the conformance testing process. The goal
is to develop a standard that defines the meaning of conformance in the context of formal
description techniques. We give an account of the current status of FMCT in the
standardization process as well as an overview of the technical status of the proposed
standard. Moreover, we indicate some of its strong and weak points, and we give some
directions for future work on FMCT.
Training telescope operators and support astronomers at Paranal
The operations model of the Paranal Observatory relies on the work of
efficient staff to carry out all the daytime and nighttime tasks. This is
highly dependent on adequate training. The Paranal Science Operations
department (PSO) has a training group that devises a well-defined and
continuously evolving training plan for new staff, in addition to broadening
and reinforcing courses for the whole department. This paper presents the
training activities for and by PSO, including recent astronomical and quality
control training for operators, as well as adaptive optics and interferometry
training of all staff. We also present some future plans.
Comment: Paper 9910-123 presented at SPIE 201