Inferring the photometric and size evolution of galaxies from image simulations
Current constraints on models of galaxy evolution rely on morphometric
catalogs extracted from multi-band photometric surveys. However, these catalogs
are altered by selection effects that are difficult to model, that correlate in
non-trivial ways, and that can lead to contradictory predictions if not taken
into account carefully. To address this issue, we have developed a new approach
combining parametric Bayesian indirect likelihood (pBIL) techniques and
empirical modeling with realistic image simulations that reproduce a large
fraction of these selection effects. This allows us to perform a direct
comparison between observed and simulated images and to infer robust
constraints on model parameters. We use a semi-empirical forward model to
generate a distribution of mock galaxies from a set of physical parameters.
These galaxies are passed through an image simulator reproducing the
instrumental characteristics of any survey and are then extracted in the same
way as the observed data. The discrepancy between the simulated and observed
data is quantified and minimized with a custom sampling process based on
adaptive Markov chain Monte Carlo (MCMC) methods. Using synthetic data matching most
of the properties of a CFHTLS Deep field, we demonstrate the robustness and
internal consistency of our approach by inferring the parameters governing the
size and luminosity functions and their evolutions for different realistic
populations of galaxies. We also compare the results of our approach with those
obtained from the classical spectral energy distribution fitting and
photometric redshift approach. Our pipeline efficiently infers the luminosity
and size distribution and evolution parameters from a very limited number of
observables (3 photometric bands). When compared to SED fitting based on the
same set of observables, our method yields results that are more accurate and
free from systematic biases.
Comment: 24 pages, 12 figures, accepted for publication in A&
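The simulate-compare-accept loop behind the pBIL approach above can be illustrated with a minimal sketch. This is a purely illustrative toy: a one-parameter log-normal "size" model stands in for the full image-simulation pipeline, and the distance function and acceptance rule are our own simplifications, not the paper's actual discrepancy measure or adaptive sampler.

```python
import math
import random

random.seed(0)

def forward_model(theta, n=200):
    """Toy stand-in for the image-simulation pipeline: draw mock
    'galaxy sizes' from a log-normal governed by the parameter theta.
    (Hypothetical model; the real pipeline renders full survey images.)"""
    return [random.lognormvariate(theta, 0.5) for _ in range(n)]

def distance(sim, obs):
    """Discrepancy between simulated and observed summary statistics,
    here just the difference of log-means."""
    mean = lambda xs: sum(xs) / len(xs)
    return abs(math.log(mean(sim)) - math.log(mean(obs)))

def abc_mcmc(obs, theta0=0.0, steps=500, eps=0.05, scale=0.2):
    """Minimal Metropolis-style sampler: propose theta, simulate a mock
    catalog, and accept when the discrepancy improves or falls below eps."""
    theta, chain = theta0, []
    d_cur = distance(forward_model(theta), obs)
    for _ in range(steps):
        prop = theta + random.gauss(0.0, scale)
        d_prop = distance(forward_model(prop), obs)
        if d_prop < max(d_cur, eps):  # crude acceptance rule
            theta, d_cur = prop, d_prop
        chain.append(theta)
    return chain

obs = forward_model(1.0)  # synthetic "observed" catalog, true theta = 1.0
chain = abc_mcmc(obs)
estimate = sum(chain[len(chain) // 2 :]) / (len(chain) // 2)
```

Because the forward model is stochastic, the sampler only ever compares simulated and "observed" summaries, never the likelihood itself, which is the key property shared with the pBIL method described above.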
Evolving models in Model-Driven Engineering: State-of-the-art and future challenges
The artefacts used in Model-Driven Engineering (MDE) evolve as a matter of course: models are modified and updated as part of the engineering process; metamodels change as a result of domain analysis and standardisation efforts; and the operations applied to models change as engineering requirements change. MDE artefacts are inter-related, and simultaneously constrain each other, making evolution a challenge to manage. We discuss some of the key problems of evolution in MDE, summarise the state of the art, and look forward to new research challenges in this area.
Handling High-Level Model Changes Using Search Based Software Engineering
Model-Driven Engineering (MDE) considers models as first-class artifacts during the software
lifecycle. The number of available tools, techniques, and approaches for MDE is increasing as its
use gains traction in driving quality and controlling cost in the evolution of large software systems.
Software models, defined as code abstractions, are iteratively refined, restructured, and evolved.
This is due to many reasons such as fixing defects in design, reflecting changes in requirements,
and modifying a design to enhance existing features.
In this work, we focus on four main problems related to the evolution of software models: 1) detecting
applied model changes, 2) merging parallel-evolved models, 3) detecting design
defects in the merged model, and 4) recommending new changes to fix defects in software
models.
For the first contribution, an a-posteriori multi-objective change detection approach is
proposed for evolved models. The changes are expressed in terms of atomic and composite
refactoring operations. The majority of existing approaches detect atomic changes but do not
adequately address composite changes, which mask atomic operations in intermediate models.
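The distinction between atomic and composite changes can be sketched as follows. This is a hypothetical, heavily simplified illustration: the operation names, the trace format, and the idea of matching a composite refactoring by its atomic "footprint" are our own stand-ins for the dissertation's multi-objective search.

```python
# Hypothetical atomic-change trace between two model versions; the real
# approach searches over operation sequences rather than matching fixed sets.
atomic_trace = [
    ("addClass", "Vehicle"),
    ("pullUpAttribute", "wheels", "Car", "Vehicle"),
    ("pullUpAttribute", "wheels", "Truck", "Vehicle"),
    ("renameClass", "Car", "Automobile"),
]

# Composite refactorings described by the kinds of atomic operations
# they are built from (illustrative footprint, not a standard catalog).
COMPOSITES = {
    "ExtractSuperclass": {"addClass", "pullUpAttribute"},
}

def detect_composites(trace):
    """Report composite refactorings whose atomic footprint is fully
    present in the trace; remaining atomics are reported as-is."""
    kinds = {op[0] for op in trace}
    found = [name for name, footprint in COMPOSITES.items()
             if footprint <= kinds]
    leftovers = [op for op in trace
                 if not any(op[0] in COMPOSITES[n] for n in found)]
    return found, leftovers

found, leftovers = detect_composites(atomic_trace)
```

The point the abstract makes is visible even in this toy: the three atomic operations that realize the extract-superclass refactoring only make sense as a group, and a detector that reports them individually hides the designer's intent.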
For the second contribution, several approaches exist to construct a merged model by
incorporating all non-conflicting operations of the evolved models. Conflicts arise when the
application of one operation disables the applicability of another. The essence of the problem
is to identify and prioritize conflicting operations based on importance and context, a gap in
existing approaches. This work proposes a multi-objective formulation of model merging that
aims to maximize the number of successfully applied merge operations.
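The objective of applying as many non-conflicting operations as possible can be made concrete with a small sketch. The operation names and the pairwise conflict relation here are hypothetical, and brute-force enumeration stands in for the dissertation's multi-objective search, which would use a metaheuristic on realistically sized models.

```python
from itertools import combinations

# Hypothetical edit operations from two parallel-evolved models, with a
# conflict relation: applying one operation disables the other.
ops = ["renameClass", "extractSuperclass", "moveMethod",
       "deleteClass", "inlineMethod"]
conflicts = {("extractSuperclass", "deleteClass"),
             ("moveMethod", "inlineMethod")}

def conflicting(a, b):
    return (a, b) in conflicts or (b, a) in conflicts

def best_merge(ops, conflicts):
    """Pick the largest conflict-free subset of operations by exhaustive
    search (a stand-in for the multi-objective optimization; real model
    merging needs heuristic search rather than brute force)."""
    for r in range(len(ops), 0, -1):
        for subset in combinations(ops, r):
            if all(not conflicting(a, b)
                   for a, b in combinations(subset, 2)):
                return list(subset)
    return []

merged = best_merge(ops, conflicts)
```

With two independent conflict pairs, at most three of the five operations can survive into the merged model, which is exactly the trade-off the multi-objective formulation has to navigate when operations also differ in importance and context.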
For the third and fourth contributions, the majority of existing works focus on refactoring at
the source-code level and do not exploit the benefits of software design optimization at the model
level. However, refactoring at the model level is inherently more challenging due to the difficulty of
assessing the potential impact on structural and behavioral features of the software system. This requires analysis of class and activity diagrams to appraise the overall system quality, feasibility,
and inter-diagram consistency. This work focuses on designing, implementing, and evaluating a
multi-objective refactoring framework for detecting and fixing design defects in software models.
Ph.D. dissertation, Information Systems Engineering, College of Engineering and Computer Science, University of Michigan-Dearborn. http://deepblue.lib.umich.edu/bitstream/2027.42/136077/1/Usman Mansoor Final.pdf
Learning Fast and Slow: PROPEDEUTICA for Real-time Malware Detection
In this paper, we introduce and evaluate PROPEDEUTICA, a novel methodology
and framework for efficient and effective real-time malware detection,
leveraging the best of conventional machine learning (ML) and deep learning
(DL) algorithms. In PROPEDEUTICA, all software processes in the system start
execution subjected to a conventional ML detector for fast classification. If a
piece of software receives a borderline classification, it is subjected to
further analysis by more computationally expensive and more accurate DL methods,
via our newly proposed DL algorithm DEEPMALWARE. Further, we introduce delays
to the execution of software subjected to deep learning analysis as a way to
"buy time" for DL analysis and to rate-limit the impact of possible malware in
the system. We evaluated PROPEDEUTICA with a set of 9,115 malware samples and
877 commonly used benign software samples from various categories for the
Windows OS. Our results show that the false positive rate for conventional ML
methods can reach 20%, and for modern DL methods it is usually below 6%.
However, the classification time for DL can be 100x longer than for conventional
ML methods. PROPEDEUTICA improved the detection F1-score from 77.54% (conventional
ML method) to 90.25%, and reduced the detection time by 54.86%. Further, the
percentage of software subjected to DL analysis was approximately 40% on
average. Moreover, the application of delays to software subjected to DL analysis
reduced the detection time by approximately 10%. Finally, we found and discussed a
discrepancy between the detection accuracy offline (analysis after all traces
are collected) and on-the-fly (analysis in tandem with trace collection). Our
insights show that conventional ML and modern DL-based malware detectors in
isolation cannot meet the needs of efficient and effective malware detection:
high accuracy, low false positive rate, and short classification time.
Comment: 17 pages, 7 figures
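The fast/slow cascade that PROPEDEUTICA describes, with a cheap detector deciding clear-cut cases and escalating borderline ones, can be sketched as follows. Everything here is a stand-in: the linear score, the thresholds, and the toy feature vectors are hypothetical, not the paper's trained ML model or the DEEPMALWARE network.

```python
def fast_ml_score(features):
    """Stand-in for the conventional-ML detector: a cheap linear score.
    (Hypothetical weights; the paper trains real ML models on traces.)"""
    weights = [0.6, -0.4, 0.8]
    return sum(w * f for w, f in zip(weights, features))

def slow_dl_score(features):
    """Stand-in for the expensive, more accurate DL detector
    (DEEPMALWARE in the paper); here just a threshold on feature mass."""
    return 1.0 if sum(features) > 1.5 else 0.0

def cascade_classify(features, low=0.2, high=0.8):
    """Clear-cut fast-ML scores are decided immediately; borderline
    scores (between low and high) are escalated to the slow detector."""
    s = fast_ml_score(features)
    if s <= low:
        return "benign", "fast"
    if s >= high:
        return "malware", "fast"
    label = "malware" if slow_dl_score(features) >= 0.5 else "benign"
    return label, "slow"
```

The design goal mirrors the abstract: most processes never pay the DL cost (the "fast" path), while the slow path buys accuracy only for the ambiguous fraction, roughly 40% in the paper's evaluation.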