Quantifying software architecture attributes
Software architecture holds the promise of advancing the state of the art in software engineering. The architecture is emerging as the focal point of many modern reuse/evolutionary paradigms, such as Product Line Engineering, Component-Based Software Engineering, and COTS-based software development.
The author focuses his research on characterizing properties of a software architecture, using software metrics to represent the error propagation, change propagation, and requirements change propagation probabilities of an architecture. Error propagation probability reflects the probability that an error arising in one component of the architecture will propagate to other components at run-time. Change propagation probability reflects, for a given pair of components A and B, the probability that if A is changed in a corrective/perfective maintenance operation, B has to be changed to maintain the overall function of the system. Requirements change propagation probability reflects the likelihood that a requirements change arising in one component of the architecture propagates to other components. For each case, the author presents analytical formulas based mainly on statistical theory and empirical studies, and then studies the correlations between the analytical and empirical results.
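To make the change-propagation notion concrete, here is a minimal sketch that estimates the empirical probability P(B must change | A changed) by counting co-changes in a maintenance history. The log format and component names are invented for illustration; the thesis derives analytical formulas, which this simple counting does not reproduce.

```python
# Hypothetical sketch: empirically estimating change propagation probabilities
# between components from co-change data (invented example, not the author's method).
from collections import Counter
from itertools import permutations

def change_propagation(commits):
    """For each ordered pair (A, B), estimate P(B changed | A changed)
    as (co-change count of A and B) / (change count of A)."""
    changed = Counter()       # how often each component changed
    co_changed = Counter()    # how often each ordered pair changed together
    for components in commits:
        for c in set(components):
            changed[c] += 1
        for a, b in permutations(set(components), 2):
            co_changed[(a, b)] += 1
    return {pair: count / changed[pair[0]] for pair, count in co_changed.items()}

# Toy maintenance history: each entry lists the components touched by one change.
history = [["parser", "ast"], ["parser", "ast", "codegen"], ["parser"], ["codegen"]]
probs = change_propagation(history)
# e.g. probs[("parser", "ast")] == 2/3: two of parser's three changes also touched ast
```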
The author also uses several metrics to quantify the properties of a Product Line Architecture, such as scoping, variability, commonality, and applicability. He presents his proposed means of measuring these properties and the results of the case studies.
Enhancing Exploration and Safety in Deep Reinforcement Learning
A Deep Reinforcement Learning (DRL) agent tries to learn a policy maximizing a long-term objective by trial and error in large state spaces. However, this learning paradigm requires a non-trivial amount of interactions in the environment to achieve good performance. Moreover, critical applications, such as robotics, typically involve safety criteria to consider while designing novel DRL solutions. Hence, devising safe learning approaches with efficient exploration is crucial to avoid getting stuck in local optima, failing to learn properly, or causing damage to the surrounding environment. This thesis focuses on developing Deep Reinforcement Learning algorithms to foster efficient exploration and safer behaviors in simulated and real domains of interest, ranging from robotics to multi-agent systems. To this end, we rely both on standard benchmarks, such as SafetyGym, and robotic tasks widely adopted in the literature (e.g., manipulation, navigation). This variety of problems is crucial to assess the statistical significance of our empirical studies and the generalization skills of our approaches. We initially benchmark the sample efficiency versus performance trade-off between value-based and policy-gradient algorithms. This part highlights the benefits of using non-standard simulation environments (i.e., Unity), which also facilitate the development of further optimizations for DRL. We also discuss the limitations of standard evaluation metrics (e.g., return) in characterizing the actual behaviors of a policy, proposing the use of Formal Verification (FV) as a practical methodology to evaluate behaviors over desired specifications. The second part introduces Evolutionary Algorithms (EAs) as a complementary, gradient-free optimization strategy. In detail, we combine population-based and gradient-based DRL to diversify exploration and improve performance in both single- and multi-agent applications.
For the latter, we discuss how prior Multi-Agent (Deep) Reinforcement Learning (MARL) approaches hinder exploration, and propose an architecture that favors cooperation without affecting exploration.
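The combination of population-based and gradient-based search described above can be sketched on a toy problem. Everything below (the objective, mutation scheme, and hyperparameters) is invented for illustration; real hybrids of this kind operate on policy-network parameters and environment returns, not a scalar function.

```python
# Minimal sketch of hybrid evolutionary + gradient optimization (toy example,
# not the thesis algorithm): a population is refined by gradient steps, then
# the best half survives and spawns mutated copies.
import random

def f(x):                  # toy objective standing in for (negative) return
    return (x - 3.0) ** 2

def grad_f(x):             # analytic gradient of the toy objective
    return 2.0 * (x - 3.0)

random.seed(0)
population = [random.uniform(-10, 10) for _ in range(20)]
for generation in range(50):
    # gradient phase: refine every individual with one gradient step
    population = [x - 0.1 * grad_f(x) for x in population]
    # evolutionary phase: keep the best half, refill with mutated copies
    population.sort(key=f)
    elites = population[:10]
    population = elites + [x + random.gauss(0, 0.5) for x in elites]

best = min(population, key=f)
# best converges near the optimum x = 3
```

The mutations keep exploration diverse while the gradient steps exploit local structure, which is the intuition behind blending the two families of methods.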
Forecasting inflation with thick models and neural networks
This paper applies linear and neural network-based “thick” models for forecasting inflation based on Phillips-curve formulations in the USA, Japan and the euro area. Thick models represent “trimmed mean” forecasts from several neural network models. They outperform the best performing linear models for “real-time” and “bootstrap” forecasts for service indices for the euro area, and do well, sometimes better, for the more general consumer and producer price indices across a variety of countries. JEL Classification: C12, E31. Keywords: bootstrap, Neural Networks, Phillips Curves, real-time forecasting, Thick Models.
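A “thick” forecast, as described above, is just a trimmed mean over the forecasts of several models. The sketch below illustrates the mechanic; the individual forecasts and the 20% trim fraction are invented for the example.

```python
# Illustrative "thick model" combination: trim the extremes, average the rest.
def trimmed_mean(values, trim_fraction=0.2):
    """Drop the lowest and highest trim_fraction of values, average the rest."""
    ordered = sorted(values)
    k = int(len(ordered) * trim_fraction)
    kept = ordered[k:len(ordered) - k] if k else ordered
    return sum(kept) / len(kept)

# Toy inflation forecasts (% year-on-year) from ten hypothetical neural networks
forecasts = [1.8, 2.0, 2.1, 2.1, 2.2, 2.3, 2.3, 2.4, 2.6, 4.0]
thick_forecast = trimmed_mean(forecasts)   # the outliers (1.8, 4.0) are trimmed away
```

Trimming makes the combined forecast robust to individual networks that diverge badly, which is why it can beat the single best linear model.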
A Dose Relationship Between Brain Functional Connectivity and Cumulative Head Impact Exposure in Collegiate Water Polo Players.
A growing body of evidence suggests that chronic, sport-related head impact exposure can impair the brain's functional integration as well as its structure and function. Evidence of a robust inverse relationship between the frequency and magnitude of repeated head impacts and disturbed brain network function is needed to strengthen an argument for causality. In pursuing such a relationship, we used cap-worn inertial sensors to measure the frequency and magnitude of head impacts sustained by eighteen intercollegiate water polo athletes monitored over a single season of play. Participants were evaluated before and after the season using computerized cognitive tests of inhibitory control and resting electroencephalography. Greater head impact exposure was associated with increased phase synchrony [r (16) > 0.626, p < 0.03 corrected], global efficiency [r (16) > 0.601, p < 0.04 corrected], and mean clustering coefficient [r (16) > 0.625, p < 0.03 corrected] in the functional networks formed by slow-wave (delta, theta) oscillations. Head impact exposure was not associated with changes in performance on the inhibitory control tasks. However, those with the greatest impact exposure showed an association between changes in resting-state connectivity and a dissociation between performance on the tasks after the season [r (16) = 0.481, p = 0.043] that could also be attributed to increased slow-wave synchrony [F (4, 135) = 113.546, p < 0.001]. Collectively, our results suggest that athletes sustaining the greatest head impact exposure exhibited changes in whole-brain functional connectivity that were associated with altered information processing and inhibitory control.
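One of the network measures referenced above, the mean clustering coefficient, can be computed directly from an adjacency structure. The toy graph below is invented; real analyses of this kind derive the graph from phase synchrony between EEG channels.

```python
# Hypothetical illustration of the mean clustering coefficient on a toy
# undirected graph (adjacency invented, not the study's data).
def clustering_coefficient(adj, node):
    """Fraction of a node's neighbour pairs that are themselves connected."""
    neighbours = list(adj[node])
    k = len(neighbours)
    if k < 2:
        return 0.0
    links = sum(1 for i in range(k) for j in range(i + 1, k)
                if neighbours[j] in adj[neighbours[i]])
    return 2.0 * links / (k * (k - 1))

def mean_clustering(adj):
    """Average the local clustering coefficient over all nodes."""
    return sum(clustering_coefficient(adj, n) for n in adj) / len(adj)

# Toy functional network: a triangle (A, B, C) plus a pendant node D
graph = {"A": {"B", "C"}, "B": {"A", "C"}, "C": {"A", "B", "D"}, "D": {"C"}}
# A and B have coefficient 1.0, C has 1/3, D has 0.0 -> mean = 7/12
```

Higher values indicate more tightly interconnected local neighbourhoods, which is the sense in which increased clustering reflects altered network integration.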
Horseshoe-based Bayesian nonparametric estimation of effective population size trajectories
Phylodynamics is an area of population genetics that uses genetic sequence data to estimate past population dynamics. Modern state-of-the-art Bayesian nonparametric methods for recovering population size trajectories of unknown form use either change-point models or Gaussian process priors. Change-point models suffer from computational issues when the number of change-points is unknown and needs to be estimated. Gaussian process-based methods lack local adaptivity and cannot accurately recover trajectories that exhibit features such as abrupt changes in trend or varying levels of smoothness. We propose a novel, locally adaptive approach to Bayesian nonparametric phylodynamic inference that has the flexibility to accommodate a large class of functional behaviors. Local adaptivity results from modeling the log-transformed effective population size a priori as a horseshoe Markov random field, a recently proposed statistical model that blends together the best properties of the change-point and Gaussian process modeling paradigms. We use simulated data to assess model performance, and find that our proposed method results in reduced bias and increased precision when compared to contemporary methods. We also use our models to reconstruct past changes in genetic diversity of human hepatitis C virus in Egypt and to estimate population size changes of ancient and modern steppe bison. These analyses show that our new method captures features of the population size trajectories that were missed by the state-of-the-art methods.
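The horseshoe Markov random field mentioned above can be sketched as a scale-mixture prior on increments of the log effective population size. The first-order form below is a generic version written for illustration; the notation is assumed, not taken from the paper.

```latex
% Generic first-order horseshoe Markov random field on the log effective
% population size \theta_1, \dots, \theta_n (notation assumed for illustration):
\theta_{k+1} - \theta_k \mid \lambda_k, \gamma
    \;\sim\; \mathcal{N}\!\left(0,\; \gamma^2 \lambda_k^2\right),
\qquad
\lambda_k \;\sim\; \mathcal{C}^{+}(0, 1),
\qquad k = 1, \dots, n-1.
```

The half-Cauchy local scales $\lambda_k$ are heavy-tailed, so occasional increments can be large (abrupt changes in trend), while the sharp peak at zero shrinks most increments toward zero (local smoothness), combining change-point-like jumps with Gaussian-process-like regularity.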
An overview of Mirjam and WeaveC
In this chapter, we elaborate on the design of an industrial-strength aspect-oriented programming language and weaver for large-scale software development. First, we present an analysis of the requirements of a general-purpose aspect-oriented language that can handle crosscutting concerns in ASML software. We also outline a strategy for working with aspects in large-scale software development processes. In our design, we both re-use existing aspect-oriented language abstractions and propose new ones to address the issues identified in our analysis. The code quality ensured by the realized language and weaver has a positive impact on both maintenance effort and lead time in the first-line software development process. As evidence, we present a short evaluation of the language and weaver as applied today in the software development process of ASML.
AN INTEGRATED SYSTEMS ENGINEERING METHODOLOGY FOR DESIGN OF VEHICLE HANDLING DYNAMICS
The primary objective of this research is to develop an integrated system engineering methodology for the conceptual design of vehicle handling dynamics early on in the product development process. A systems engineering-based simulation framework is developed that connects subjective, customer-relevant handling expectations and manufacturers' brand attributes to higher-level objective vehicle engineering targets and consequently breaks these targets down into subsystem-level requirements and component-level design specifications. Such an integrated systems engineering approach will guide the engineering development process and provide insight into the compromises involved in the vehicle-handling layout, ultimately saving product development time and costs and helping to achieve a higher level of product maturity early on in the design phase. The proposed simulation-based design methodology for the conceptual design of vehicle handling characteristics is implemented using decomposition-based Analytical Target Cascading (ATC) techniques and evolutionary, multi-objective optimization algorithms coupled within the systems engineering framework. The framework is utilized in a two-layer optimization schedule. The first layer is used to derive subsystem-level requirements from overall vehicle-level targets. These subsystem-level requirements are passed on as targets to the second layer of optimization, and the second layer derives component-level specifications from the subsystem-level requirements obtained from the first step. The second layer optimization utilizes component-level design variables and analysis models to minimize the difference between the targets transferred from the vehicle level and responses generated from the component-level analysis. An iterative loop is set up with an objective to minimize the target/response consistency constraints (i.e., the targets at the vehicle level are constantly rebalanced to achieve a consistent and feasible solution).
Genetic Algorithms (GAs) are used at each layer of the framework. This work has contributed toward the development of a unique approach to integrating market research into the vehicle handling design process. The framework developed for this dissertation uses the Original Equipment Manufacturer's (OEM's) brand essence information derived from market research for the derivation and balancing of vehicle-level targets, and guides the chassis design direction using relative brand attribute weights. Other contributions from this research include the development of empirical relationships between key customer-relevant vehicle handling attributes selected from market surveys and the various scenarios and objective metrics of vehicle handling, a goal programming-based approach for selecting the best solution from a set of Pareto-optimal solutions obtained from genetic algorithms, and the development of Vehicle Handling Bandwidth Diagrams.
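The two-layer cascading loop described above can be sketched on a toy problem: a vehicle-level target is split into subsystem targets, and a second layer adjusts component variables until each subsystem response matches its target. The response model, targets, and update rule below are invented stand-ins; the dissertation uses genetic algorithms and real vehicle-dynamics models rather than this gradient-style update.

```python
# Highly simplified, hypothetical sketch of two-layer target cascading
# (toy models only, not the dissertation's ATC/GA implementation).
def subsystem_response(design):
    """Toy component-level analysis model: maps (stiffness, damping) to a
    scalar response standing in for e.g. an understeer gradient."""
    stiffness, damping = design
    return stiffness + 0.5 * damping

def optimize_components(target, design, lr=0.05, steps=200):
    """Second layer: adjust component variables to meet a subsystem target."""
    for _ in range(steps):
        error = subsystem_response(design) - target
        design = [design[0] - lr * error, design[1] - lr * 0.5 * error]
    return design

# First layer: a vehicle-level target split across two subsystems (toy 60/40 split)
vehicle_target = 10.0
subsystem_targets = [vehicle_target * 0.6, vehicle_target * 0.4]

designs = [optimize_components(t, [1.0, 1.0]) for t in subsystem_targets]
responses = [subsystem_response(d) for d in designs]
inconsistency = abs(vehicle_target - sum(responses))   # consistency constraint
```

In a full ATC loop the first layer would rebalance the subsystem targets whenever the inconsistency stays large, iterating until targets and responses agree across layers.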