648 research outputs found

    Neutron-powered precursors of kilonovae

    The merger of binary neutron stars (NSs) ejects a small quantity of neutron-rich matter, the radioactive decay of which powers a day- to week-long thermal transient known as a kilonova. Most of the ejecta remains sufficiently dense during its expansion that all neutrons are captured into nuclei during the r-process. However, recent general relativistic merger simulations by Bauswein and collaborators show that a small fraction of the ejected mass (a few per cent, or ~1e-4 Msun) expands sufficiently rapidly for most neutrons to avoid capture. This matter originates from the shock-heated interface between the merging NSs. Here we show that the beta-decay of these free neutrons in the outermost ejecta powers a `precursor' to the main kilonova emission, which peaks on a timescale of a few hours following merger at U-band magnitude ~22 (for an assumed distance of 200 Mpc). The high luminosity and blue colors of the neutron precursor render it a potentially important counterpart to the gravitational wave source, one that may encode valuable information on the properties of the merging binary (e.g. NS-NS versus NS-black hole) and the NS equation of state. Future work is necessary to assess the robustness of the fast-moving ejecta and the survival of free neutrons in the face of neutrino absorptions, although the precursor properties are robust to a moderate amount of leptonization. Our results provide additional motivation for short-latency gravitational wave triggers and rapid follow-up searches with sensitive ground-based telescopes. Comment: 6 pages, 5 figures, accepted to MNRAS main journal
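
    To make the peak timescale concrete, a minimal sketch of the specific heating rate from free-neutron decay (the standard radioactive-decay form; the lifetime and thermalized energy below are textbook values, not numbers quoted in the abstract):

        \dot{q}_n(t) \approx \frac{X_n \, \epsilon_e}{m_n \, \tau_n} \, e^{-t/\tau_n},
        \qquad \tau_n \approx 880\ \mathrm{s}, \quad \epsilon_e \approx 0.3\ \mathrm{MeV},

    where X_n is the initial free-neutron mass fraction, m_n the neutron mass, and \epsilon_e the mean electron energy per decay that thermalizes (the antineutrino escapes). The ~15-minute e-folding of this heating, combined with the short photon diffusion time through the fast outermost layers, is what pushes the precursor peak to hours rather than days.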

    Self-Learning Cloud Controllers: Fuzzy Q-Learning for Knowledge Evolution

    Cloud controllers aim to respond to application demands by automatically scaling compute resources at runtime, meeting performance guarantees while minimizing resource costs. Existing cloud controllers often resort to scaling strategies that are codified as a set of adaptation rules. However, for a cloud provider, applications running on top of the cloud infrastructure are more or less black boxes, making it difficult at design time to define optimal or pre-emptive adaptation rules. Thus, the burden of taking adaptation decisions is often delegated to the cloud application. Yet, in most cases, application developers in turn have limited knowledge of the cloud infrastructure. In this paper, we propose learning adaptation rules at runtime. To this end, we introduce FQL4KE, a self-learning fuzzy cloud controller. In particular, FQL4KE learns and modifies fuzzy rules at runtime. The benefit is that, for designing cloud controllers, we do not have to rely solely on precise design-time knowledge, which may be difficult to acquire. FQL4KE empowers users to specify cloud controllers by simply adjusting weights representing priorities among system goals instead of specifying complex adaptation rules. The applicability of FQL4KE has been experimentally assessed as part of the cloud application framework ElasticBench. The experimental results indicate that FQL4KE outperforms our previously developed fuzzy controller without learning mechanisms as well as the native Azure auto-scaling.
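
    As a rough illustration of the learning core, the sketch below shows a plain tabular Q-learning update for a scaling controller with a weighted reward; FQL4KE layers fuzzy rule activation on top of such an update, and all names, weights, and constants here are illustrative assumptions, not the paper's design.

        # Minimal tabular Q-learning sketch for an auto-scaling controller.
        # FQL4KE combines this kind of update with fuzzy rules; only the plain
        # Q-learning core is shown here. All names and numbers are illustrative.
        import random
        from collections import defaultdict

        ACTIONS = [-1, 0, +1]          # remove a VM, keep, add a VM
        ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

        Q = defaultdict(float)         # Q[(state, action)] -> estimated value

        def reward(response_time, n_vms, w_perf=0.7, w_cost=0.3):
            """Weighted reward: the weights stand in for user-set goal priorities."""
            return -(w_perf * response_time + w_cost * n_vms)

        def choose_action(state):
            if random.random() < EPSILON:                      # explore
                return random.choice(ACTIONS)
            return max(ACTIONS, key=lambda a: Q[(state, a)])   # exploit

        def update(state, action, r, next_state):
            best_next = max(Q[(next_state, a)] for a in ACTIONS)
            Q[(state, action)] += ALPHA * (r + GAMMA * best_next - Q[(state, action)])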

    Variance of ML-based software fault predictors: are we really improving fault prediction?

    Software quality assurance activities become increasingly difficult as software systems become more complex and continuously grow in size. Moreover, testing becomes even more expensive when dealing with large-scale systems. Thus, to effectively allocate quality assurance resources, researchers have proposed fault prediction (FP), which utilizes machine learning (ML) to predict fault-prone code areas. However, ML algorithms typically make use of stochastic elements to increase the prediction models' generalizability and the efficiency of the training process. These stochastic elements, also known as nondeterminism-introducing (NI) factors, lead to variance in the training process and, as a result, to variance in prediction accuracy and training time. This variance poses a challenge for reproducibility in research. More importantly, while fault prediction models may have shown good performance in the lab (e.g., often involving multiple runs and averaging outcomes), high variance of results poses the risk that these models show low performance when applied in practice. In this work, we experimentally analyze the variance of a state-of-the-art fault prediction approach. Our experimental results indicate that NI factors can indeed cause considerable variance in the fault prediction models' accuracy. We observed a maximum variance of 10.10% in terms of the per-class accuracy metric. We thus also discuss how to deal with such variance.
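
    The kind of variance the paper measures can be illustrated with a small experiment: repeat training while varying one nondeterminism-introducing factor (the random seed) and record the spread of per-class accuracy. The classifier and synthetic data below are placeholders, not the approach studied in the paper.

        # Quantify run-to-run variance of a fault predictor caused by an
        # NI factor (here: the random seed). Model and data are placeholders.
        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.metrics import recall_score
        from sklearn.model_selection import train_test_split

        X, y = make_classification(n_samples=2000, weights=[0.8, 0.2], random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

        per_class_acc = []
        for seed in range(30):                                  # 30 repeated runs
            clf = RandomForestClassifier(random_state=seed).fit(X_tr, y_tr)
            # per-class recall, used here as the per-class accuracy metric
            per_class_acc.append(recall_score(y_te, clf.predict(X_te), average=None))

        per_class_acc = np.array(per_class_acc)
        print("per-class accuracy spread across runs:",
              per_class_acc.max(axis=0) - per_class_acc.min(axis=0))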

    An AI Chatbot for Explaining Deep Reinforcement Learning Decisions of Service-oriented Systems

    Deep Reinforcement Learning (Deep RL) is increasingly used to cope with the open-world assumption in service-oriented systems. Deep RL has been successfully applied to problems such as dynamic service composition, job scheduling, offloading, and service adaptation. While Deep RL offers many benefits, understanding its decision-making is challenging because the learned decision-making policy essentially appears as a black box. Yet, understanding the decision-making of Deep RL is key to helping service developers perform debugging, supporting service providers in complying with relevant legal frameworks, and facilitating service users in building trust. We introduce Chat4XAI to facilitate the understanding of the decision-making of Deep RL by providing natural-language explanations. Compared with visual explanations, the reported benefits of natural-language explanations include better understandability for non-technical users, increased user acceptance and trust, and more efficient explanations. Chat4XAI leverages modern AI chatbot technology and dedicated prompt engineering. Compared to earlier work on natural-language explanations using classical software-based dialogue systems, using an AI chatbot eliminates the need for eliciting and defining potential questions and answers up front. We prototypically realize Chat4XAI using OpenAI's ChatGPT API and evaluate the fidelity and stability of its explanations using an adaptive service exemplar. Comment: To be published at 21st Int'l Conference on Service-Oriented Computing (ICSOC 2023), Rome, Italy, November 28-December 1, 2023, ser. LNCS, F. Monti, S. Rinderle-Ma, A. Ruiz Cortes, Z. Zheng, M. Mecella, Eds., Springer, 202
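
    The core idea can be sketched in a few lines: hand a logged Deep RL decision to an AI chatbot and ask for a natural-language explanation. The prompt wording and the decision record below are invented placeholders rather than Chat4XAI's actual prompts, and the sketch assumes the OpenAI Python client (openai>=1.0).

        # Ask an AI chatbot to explain a logged Deep RL decision in natural language.
        # Prompt text and decision record are invented placeholders.
        from openai import OpenAI

        client = OpenAI()  # expects OPENAI_API_KEY in the environment

        decision_record = {
            "state": {"request_rate": 420, "avg_latency_ms": 180, "replicas": 3},
            "action": "scale_out",
            "q_values": {"scale_out": 0.82, "no_op": 0.41, "scale_in": 0.12},
        }

        response = client.chat.completions.create(
            model="gpt-4",
            messages=[
                {"role": "system",
                 "content": "You explain reinforcement learning decisions of a "
                            "service-oriented system to non-technical users."},
                {"role": "user",
                 "content": f"Explain why the agent chose this action: {decision_record}"},
            ],
        )
        print(response.choices[0].message.content)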

    On contact between curves and rigid surfaces – from verification of the Euler-Eytelwein problem to knots

    A general theory for curve-to-curve contact is applied to develop a special contact algorithm between curves and rigid surfaces. In this case, contact kinematics are formulated in the local coordinate system attached to the curve; however, contact is defined at integration points of the curve line (mortar-type contact). The corresponding Closest Point Projection (CPP) procedure is then used to determine the shortest distance between an integration point on the curve and the rigid surface. For some simple approximations of the rigid surface, closed-form solutions are possible. Within the finite element implementation, the isogeometric approach is used to model curvilinear cables, and the rigid surfaces can in general be defined via NURBS surfaces. Verification of the finite element algorithm is given using the well-known analytical solution of the Euler-Eytelwein problem – a rope on a cylindrical surface. The original 2D formula is generalized to the 3D case by considering an additional parameter, the pitch H of the helix. Finally, applications to knot mechanics are shown.
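
    For reference, the verification benchmark is the classical 2D Euler-Eytelwein (capstan) relation for a rope on the verge of slipping over a cylinder,

        T_2 = T_1 \, e^{\mu \alpha},

    where T_1 and T_2 are the tensions on the two ends, \mu is the friction coefficient, and \alpha is the wrap angle; the 3D generalization in the paper additionally accounts for the pitch H of the helix.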

    Third International Workshop on Variability Modelling of Software-intensive Systems. Proceedings

    This ICB Research Report constitutes the proceedings of the Third International Workshop on Variability Modelling of Software-intensive Systems (VaMoS'09), which was held January 28-30, 2009, at the University of Sevilla, Spain.

    Towards the decentralized coordination of multiple self-adaptive systems

    When multiple self-adaptive systems share the same environment and have common goals, they may need to coordinate their adaptations at runtime to avoid conflicts and to satisfy their goals. There are two approaches to coordination. (1) Logically centralized, where a supervisor has complete control over the individual self-adaptive systems. Such an approach is infeasible when the systems have different owners or administrative domains. (2) Logically decentralized, where coordination is achieved through direct interactions. Because the individual systems retain control over the information they share, decentralized coordination accommodates multiple administrative domains. However, existing techniques do not simultaneously account for both local concerns, e.g., preferences, and shared concerns, e.g., conflicts, which may lead to goals not being achieved as expected. Our idea to address this shortcoming is to express both types of concerns within the same constraint optimization problem. We propose CoADAPT, a decentralized coordination technique introducing two types of constraints: preference constraints, expressing local concerns, and consistency constraints, expressing shared concerns. At runtime, the problem is solved in a decentralized way using distributed constraint optimization algorithms implemented by each self-adaptive system. As a first step in realizing CoADAPT, we focus in this work on the coordination of adaptation planning strategies, traditionally addressed only with centralized techniques. We show the feasibility of CoADAPT in an exemplar from cloud computing and experimentally analyze its scalability.
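
    As a toy illustration of the two constraint types (not the paper's formulation), the sketch below gives each self-adaptive system a local preference cost over planning strategies and adds a shared consistency penalty for conflicting choices; CoADAPT solves such problems with distributed constraint optimization algorithms, whereas this sketch simply enumerates the assignments centrally.

        # Toy constraint optimization with preference and consistency constraints.
        # Values are invented; CoADAPT solves this in a decentralized way (DCOP).
        from itertools import product

        STRATEGIES = ["reactive", "proactive"]

        # Preference constraints: local cost of each planning strategy per system.
        preference = {
            "sysA": {"reactive": 1.0, "proactive": 3.0},
            "sysB": {"reactive": 2.0, "proactive": 1.0},
        }

        # Consistency constraint: penalty when the two systems choose
        # conflicting strategies for the shared environment.
        def consistency(a, b):
            return 5.0 if a != b else 0.0

        best = min(
            product(STRATEGIES, repeat=2),
            key=lambda ab: preference["sysA"][ab[0]] + preference["sysB"][ab[1]]
                           + consistency(*ab),
        )
        print("chosen strategies:", dict(zip(["sysA", "sysB"], best)))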

    Workshop on Service Monitoring, Adaptation and Beyond: A workshop held at the ServiceWave 2008 conference - Proceedings

    This ICB Research Report constitutes the proceedings of the first workshop on Monitoring, Adaptation and Beyond (MONA+ 2008), which was held on December 13, 2008 in Madrid, Spain. MONA+ was co-located with the ServiceWave 2008 conference.

    A User Study on Explainable Online Reinforcement Learning for Adaptive Systems

    Online reinforcement learning (RL) is increasingly used for realizing adaptive systems in the presence of design-time uncertainty. Online RL facilitates learning from actual operational data and thereby leverages feedback only available at runtime. However, online RL requires the definition of an effective and correct reward function, which quantifies the feedback to the RL algorithm and thereby guides learning. With Deep RL gaining interest, the learned knowledge is no longer represented explicitly but is encoded in a neural network. For a human, it becomes practically impossible to relate the parametrization of the neural network to concrete RL decisions. Deep RL thus essentially appears as a black box, which severely limits the debugging of adaptive systems. We previously introduced the explainable RL technique XRL-DINE, which provides visual insights into why certain decisions were made at important time points. Here, we present an empirical user study involving 54 software engineers from academia and industry to assess (1) the performance of software engineers when performing different tasks using XRL-DINE and (2) the perceived usefulness and ease of use of XRL-DINE. Comment: arXiv admin note: substantial text overlap with arXiv:2210.0593
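
    To make the role of the reward function concrete, a minimal illustrative example for a self-adaptive service is shown below; the latency goal and cost weight are invented and are not the exemplar used in the study.

        # Illustrative reward function for an online-RL-based adaptive system.
        # Numbers and trade-offs are invented placeholders.
        def reward(avg_latency_ms: float, cost_per_hour: float,
                   latency_goal_ms: float = 200.0, w_cost: float = 0.01) -> float:
            """Reward meeting the latency goal, penalized by operating cost."""
            goal_bonus = 1.0 if avg_latency_ms <= latency_goal_ms else -1.0
            return goal_bonus - w_cost * cost_per_hour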

    Second International Workshop on Variability Modelling of Software-intensive Systems. Proceedings

    This ICB Research Report constitutes the proceedings of the Second International Workshop on Variability Modelling of Software-intensive Systems (VaMoS'08), which was held January 16-18, 2008, at the University of Duisburg-Essen.