
    Dissecting magnetar variability with Bayesian hierarchical models

    Neutron stars are a prime laboratory for testing physical processes under conditions of strong gravity, high density, and extreme magnetic fields. Among the zoo of neutron star phenomena, magnetars stand out for their bursting behaviour, ranging from extremely bright, rare giant flares to numerous, less energetic recurrent bursts. The exact trigger and emission mechanisms for these bursts are not known; favoured models involve either a crust fracture and subsequent energy release into the magnetosphere, or explosive reconnection of magnetic field lines. In the absence of a predictive model, understanding the physical processes responsible for magnetar burst variability is difficult. Here, we develop an empirical model that decomposes magnetar bursts into a superposition of small spike-like features with a simple functional form, where the number of model components is itself part of the inference problem. The cascades of spikes that we model might be formed by avalanches of reconnection, or crust rupture aftershocks. Using Markov Chain Monte Carlo (MCMC) sampling augmented with reversible jumps between models with different numbers of parameters, we characterise the posterior distributions of the model parameters and the number of components per burst. We relate these model parameters to physical quantities in the system, and show for the first time that the variability within a burst does not conform to predictions from ideas of self-organised criticality. We also examine how well the properties of the spikes fit the predictions of simplified cascade models for the different trigger mechanisms.
    Comment: accepted for publication in The Astrophysical Journal; code available at https://bitbucket.org/dhuppenkothen/magnetron, data products at http://figshare.com/articles/SGR_J1550_5418_magnetron_data/129242
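
    The additive decomposition described above lends itself to a compact numerical sketch. The snippet below only illustrates the general idea: a burst light curve modelled as a constant background plus a superposition of simple spikes with a fast rise and slower decay. The spike shape, parameter names, and values are assumptions for illustration and are not the functional form or priors used in magnetron.

```python
import numpy as np

def spike(t, t0, amplitude, rise, decay):
    """Toy spike shape: exponential rise up to t0, exponential decay afterwards."""
    t = np.asarray(t, dtype=float)
    return amplitude * np.where(t < t0,
                                np.exp((t - t0) / rise),
                                np.exp(-(t - t0) / decay))

def burst_model(t, background, spike_params):
    """Burst light curve as a constant background plus a superposition of spikes."""
    counts = np.full(np.asarray(t, dtype=float).shape, float(background))
    for t0, amplitude, rise, decay in spike_params:
        counts += spike(t, t0, amplitude, rise, decay)
    return counts

# Three overlapping spikes on a flat background (times in seconds, illustrative values).
t = np.linspace(0.0, 0.2, 2000)
spikes = [(0.05, 300.0, 0.002, 0.010),
          (0.08, 150.0, 0.001, 0.005),
          (0.12, 500.0, 0.003, 0.020)]
model = burst_model(t, background=20.0, spike_params=spikes)
```

    In the approach described above, the number of spikes and their parameters would be inferred (e.g. with reversible-jump MCMC) rather than fixed by hand as in this toy example.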

    Software Product Line

    The Software Product Line (SPL) is an emerging methodology for developing software products. Currently, there are two hot issues in SPL research: the modelling and the analysis of the SPL. Variability modelling techniques have been developed to assist engineers in dealing with the complications of variability management. The principal goal of variability modelling techniques is to configure a successful software product by managing variability in domain engineering. In other words, a good method for modelling variability is a prerequisite for a successful SPL. On the other hand, analysis of the SPL aids the extraction of useful information from the SPL and provides a control and planning strategy mechanism for engineers or experts. In addition, analysis of the SPL provides a clear view for users. Moreover, it helps ensure the accuracy of the SPL. This book presents new techniques for SPL modelling and new methods for SPL analysis.

    Configuration Analysis for Large Scale Feature Models: Towards Speculative-Based Solutions

    High-variability systems are software systems in which variability management is a central activity. Current examples of high-variability systems are the Drupal web content management system, the Linux kernel, and the Debian Linux distributions. Configuration in high-variability systems is the selection of configuration options according to their configuration constraints and the user requirements. Feature models are a de facto standard for modelling the common and variable functionality of high-variability systems. However, the large number of components and configurations that a feature model may contain makes the manual analysis of these models a costly and error-prone task. This gave rise to the automated analysis of feature models: computer-aided mechanisms and tools for extracting information from these models. Traditional solutions for the automated analysis of feature models follow a sequential computing approach, using a single central processing unit and memory. Such solutions are adequate for small-scale systems, but they incur high computational costs when applied to large-scale, highly variable systems. Although computing resources exist that can improve the performance of these solutions, any solution based on sequential computing must be adapted to use those resources efficiently and to optimise its computational performance. Examples of such resources are multi-core technology for parallel computing and network technology for distributed computing. This thesis explores the adaptation and scalability of solutions for the automated analysis of large-scale feature models. First, we present the use of speculative programming to parallelise existing solutions. We also look at a configuration problem from a different perspective in order to solve it by adapting and applying a non-traditional solution. We then validate the scalability and computational performance improvements of these solutions for the automated analysis of large-scale feature models. Concretely, the main contributions of this thesis are:
    • Speculative programming for preferred minimal conflict detection. Minimal conflict detection algorithms determine the minimal set of conflicting constraints responsible for the defective behaviour of the model under analysis. We propose a solution that uses speculative programming to execute in parallel the computationally expensive operations that determine the control flow of preferred minimal conflict detection in large-scale feature models, and thereby reduce its running time (a sequential sketch of this operation is given below).
    • Speculative programming for preferred minimal diagnosis. Minimal diagnosis algorithms determine a minimal set of constraints which, by suitably adapting their state, yield a consistent, conflict-free model. This work presents a solution for preferred minimal diagnosis in large-scale feature models through the speculative, parallel execution of the computationally expensive operations that determine the control flow, thereby reducing the running time of the solution.
    • Minimal and preferred completion of a model configuration by diagnosis. Solutions for completing a partial configuration determine a set of options, not necessarily minimal or preferred, with which to obtain a complete configuration. This thesis solves the minimal and preferred completion of a model configuration by means of techniques previously used in the context of feature model diagnosis.
    This thesis shows that all of our solutions preserve the expected output values and also improve the performance of the automated analysis of feature models on large-scale models for the operations described.
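
    As a rough illustration of the minimal-conflict operation that the thesis sets out to accelerate, the following sketch implements a sequential QuickXplain-style search over a toy constraint set. The constraint names, domains, and brute-force consistency check are invented for illustration, and the thesis contribution itself (speculative, parallel execution of the expensive consistency checks) is not shown here.

```python
from itertools import product

def is_consistent(constraints, domains):
    """Brute-force satisfiability: is there any assignment meeting all constraints?"""
    names = sorted(domains)
    for values in product(*(domains[n] for n in names)):
        assignment = dict(zip(names, values))
        if all(pred(assignment) for _, pred in constraints):
            return True
    return False

def quickxplain(background, constraints, domains):
    """Return one minimal conflict among `constraints` (sequential QuickXplain)."""
    if is_consistent(background + constraints, domains):
        return None  # no conflict to explain
    if not constraints:
        return []

    def qx(base, delta, cs):
        if delta and not is_consistent(base, domains):
            return []
        if len(cs) == 1:
            return list(cs)
        half = len(cs) // 2
        left, right = cs[:half], cs[half:]
        d2 = qx(base + left, left, right)
        d1 = qx(base + d2, d2, left)
        return d1 + d2

    return qx(background, background, constraints)

# Toy example: four feature-model-like constraints over three boolean options.
domains = {"A": [0, 1], "B": [0, 1], "C": [0, 1]}
constraints = [
    ("A is mandatory",  lambda a: a["A"] == 1),
    ("A requires B",    lambda a: a["A"] <= a["B"]),
    ("B excludes C",    lambda a: a["B"] + a["C"] <= 1),
    ("user selects C",  lambda a: a["C"] == 1),
]
conflict = quickxplain([], constraints, domains)
print([name for name, _ in conflict])  # all four constraints form a minimal conflict
```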

    Using Constraint Programming to Verify DOPLER Variability Models

    Software product lines are typically developed using model-based approaches. Models are used to guide and automate key activities such as the derivation of products. The verification of product line models is thus essential to ensure the consistency of the derived products. While many authors have proposed approaches for verifying feature models, there is so far no such approach for decision models. We discuss challenges of analyzing and verifying decision-oriented DOPLER variability models. The manual verification of these models is an error-prone, tedious, and sometimes infeasible task. We present a preliminary approach that converts DOPLER variability models into constraint programs to support their verification. We assess the feasibility of our approach by identifying defects in two existing variability models.
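
    A minimal sketch of the conversion idea, assuming a toy decision model: decisions become variables with finite domains, rules become constraints, and verification amounts to asking a solver (here a brute-force enumerator) whether valid configurations exist and whether any decision value can never be selected. The decision names, domains, and rules are hypothetical and do not reflect the DOPLER notation or the authors' tooling.

```python
from itertools import product

# Hypothetical miniature decision model: each decision takes one value from its
# domain, and the rules below must hold in every valid configuration.
decisions = {
    "db":         ["mysql", "none"],
    "backup":     ["on", "off"],
    "encryption": ["aes", "none"],
}
rules = [
    lambda c: c["encryption"] != "aes" or c["backup"] == "on",  # encryption only applies to backups
    lambda c: c["backup"] != "on" or c["db"] == "mysql",        # backups need the database
    lambda c: c["db"] != "mysql" or c["encryption"] == "none",  # the mysql option ships unencrypted
]

def solutions(decisions, rules):
    """Enumerate every configuration allowed by the constraint program."""
    names = list(decisions)
    for values in product(*(decisions[n] for n in names)):
        config = dict(zip(names, values))
        if all(rule(config) for rule in rules):
            yield config

configs = list(solutions(decisions, rules))
print("model is consistent:", bool(configs))

# A decision value that no valid configuration can ever select hints at a defect.
for name, domain in decisions.items():
    for value in domain:
        if not any(c[name] == value for c in configs):
            print("value never selectable:", name, "=", value)
```

    On this toy model the checker reports that the model is consistent but that the value "aes" can never be selected, which is exactly the kind of defect a constraint-based verification is meant to surface.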

    Defects in Product Line Models and How to Identify Them

    This chapter is about generic (language-independent) verification criteria for product line models: their identification, formalisation, and categorisation, their implementation with constraint programming techniques, and their evaluation on several industrial and academic product line models represented in several languages.
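
    As an illustration of the kind of language-independent criteria meant here, the sketch below checks two classic defects, dead features and false-optional features, on a tiny propositional encoding of a feature model. The model, the brute-force enumeration, and the feature names are assumptions for illustration rather than the chapter's actual formalisation or constraint-programming implementation.

```python
from itertools import product

# Toy feature model: boolean features (1 = selected) plus tree and cross-tree constraints.
features = ["root", "gui", "cli", "touch", "driver"]
constraints = [
    lambda f: f["root"] == 1,              # the root feature is always present
    lambda f: f["gui"] + f["cli"] == 1,    # gui and cli form an alternative (xor) group
    lambda f: f["touch"] <= f["gui"],      # touch is an optional child of gui
    lambda f: f["driver"] <= f["root"],    # driver is an optional child of root
    lambda f: f["gui"] + f["touch"] <= 1,  # cross-tree defect source: gui excludes touch
    lambda f: f["gui"] <= f["driver"],     # gui requires driver
    lambda f: f["cli"] <= f["driver"],     # cli requires driver
]
optional = {"touch", "driver"}             # features declared optional in the diagram

# Enumerate all valid products of the model.
products = []
for bits in product([0, 1], repeat=len(features)):
    config = dict(zip(features, bits))
    if all(c(config) for c in constraints):
        products.append(config)

# Dead feature: never selected in any product.
# False-optional feature: declared optional but present in every product.
for name in features:
    if not any(p[name] for p in products):
        print("dead feature:", name)
    elif name in optional and all(p[name] for p in products):
        print("false-optional feature:", name)
```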

    Interacting Supernovae: Types IIn and Ibn

    Supernovae (SNe) that show evidence of strong shock interaction between their ejecta and pre-existing, slower circumstellar material (CSM) constitute an interesting, diverse, and still poorly understood category of explosive transients. The chief reason that they are extremely interesting is because they tell us that in a subset of stellar deaths, the progenitor star may become wildly unstable in the years, decades, or centuries before explosion. This is something that has not been included in standard stellar evolution models, but may significantly change the end product and yield of that evolution, and complicates our attempts to map SNe to their progenitors. Another reason they are interesting is because CSM interaction is an efficient engine for making bright transients, allowing super-luminous transients to arise from normal SN explosion energies, and allowing transients of normal SN luminosities to arise from sub-energetic explosions or low radioactivity yield. CSM interaction shrouds the fast ejecta in bright shock emission, obscuring our normal view of the underlying explosion, and the radiation hydrodynamics of the interaction is challenging to model. The CSM interaction may also be highly non-spherical, perhaps linked to binary interaction in the progenitor system. In some cases, these complications make it difficult to definitively tell the difference between a core-collapse or thermonuclear explosion, or to discern between a non-terminal eruption, failed SN, or weak SN. Efforts to uncover the physical parameters of individual events and connections to possible progenitor stars make this a rapidly evolving topic that continues to challenge paradigms of stellar evolution.
    Comment: Final draft of a chapter in the "SN Handbook". Accepted. 25 pages, 3 fig

    Automated analysis of feature models: Quo vadis?

    Feature models have been used since the 1990s to describe software product lines as a way of reusing common parts in a family of software systems. In 2010, a systematic literature review was published summarizing the advances and laying the foundations of the area of Automated Analysis of Feature Models (AAFM). Since then, different studies have applied the AAFM in different domains. In this paper, we provide an overview of the evolution of this field since 2010 by performing a systematic mapping study considering 423 primary sources. We found six variability facets where the AAFM is being applied that define the current tendencies: product configuration and derivation; testing and evolution; reverse engineering; multi-model variability analysis; variability modelling; and variability-intensive systems. We also confirmed that there is a lack of industrial evidence in most cases. Finally, we present where and when the papers have been published and which authors and institutions are contributing to the field. We observed that the maturity of the field is shown by the increasing number of journal publications over the years as well as the diversity of conferences and workshops where papers are published. We also suggest some synergies with other areas, such as cloud or mobile computing, that can motivate further research in the future.
    Ministerio de Economía y Competitividad TIN2015-70560-R; Junta de Andalucía TIC-186

    A Tidal Flare Candidate in Abell 1795

    As part of our ongoing archival X-ray survey of galaxy clusters for tidal flares, we present evidence of an X-ray transient source within 1 arcmin of the core of Abell 1795. The extreme variability (a factor of nearly 50), luminosity (> 2 x 10^42 erg s^{-1}), long duration (> 5 years) and supersoft X-ray spectrum (< 0.1 keV) are characteristic signatures of a stellar tidal disruption event according to theoretical predictions and to existing X-ray observations, implying a massive (>~10^5 M_sun) black hole at the centre of the host galaxy. The large number of X-ray source counts (~700) and long temporal baseline (~12 years with Chandra and XMM-Newton) make this one of the best-sampled examples of any tidal flare candidate to date. The transient may be the same EUV source originally found contaminating the diffuse ICM observations of Bowyer et al. (1999), which would make it the only tidal flare candidate with reported EUV observations and would imply an early source luminosity 1-2 orders of magnitude greater. If the host galaxy is a cluster member then it must be a dwarf galaxy, an order of magnitude less massive than the quiescent galaxy Henize 2-10, which hosts a massive black hole that is difficult to reconcile with its low mass. The unusual faintness of the host galaxy may be explained by tidal stripping in the cluster core.
    Comment: Accepted by MNRAS 2013 July 23. 27 pages, 10 figure

    User-centric product derivation in software product lines

    Software Product Line (SPL) engineering aims at achieving efficient development of software products in a specific domain. New products are obtained via a process which entails creating a new configuration specifying the desired product’s features. This configuration must conform to a variability model, which describes the scope of the SPL, or else it is not viable. To ensure this, configuration tools are used that do not allow invalid configurations to be expressed. A different concern, however, is making sure that a product addresses the stakeholders’ needs as well as possible. The stakeholders may not be experts on the domain, so they may have unrealistic expectations. Also, the scope of the SPL is determined not only by the domain but also by limitations of the development platforms. It is therefore possible that the desired set of features goes beyond what can currently be created with the SPL. This means that configuration tools should provide support not only for creating valid products, but also for improving the satisfaction of user concerns. We address this goal by providing a user-centric configuration process that offers suggestions during configuration, based on the use of soft constraints, and that identifies and explains potential conflicts as they arise. Suggestions help mitigate stakeholder uncertainty and poor domain knowledge by helping stakeholders address well-known and desirable domain-related concerns. On the other hand, automated conflict identification and explanation helps the stakeholders understand the trade-offs required for realizing their vision, allowing informed resolution of conflicts. Additionally, we propose a prototype-based approach to configuration that addresses order-dependency issues by allowing the complete (or partial) specification of the features in a single step. A subsequent resolution process then identifies possible repairs, or trade-offs, that may be required to make the configuration viable.
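
    A minimal sketch of how soft constraints can drive configuration suggestions in the spirit described above: hard constraints delimit the valid configurations, while weighted soft constraints rank them so a tool can suggest the best completion. The feature names, weights, and brute-force search are invented for illustration and are not the paper's actual model or process.

```python
from itertools import product

features = ["video", "hd", "subtitles", "offline"]
hard = [
    lambda f: f["hd"] <= f["video"],        # hd requires video
    lambda f: f["offline"] + f["hd"] <= 1,  # offline mode excludes hd streaming
]
# Soft constraints encode desirable, domain-related concerns; violating them is
# allowed but lowers the score of a configuration.
soft = [
    (2, lambda f: f["video"] == 1),         # most stakeholders want video
    (1, lambda f: f["subtitles"] == 1),     # accessibility is encouraged
    (1, lambda f: f["hd"] == 1),            # prefer hd when possible
]

def score(config):
    """Sum the weights of the soft constraints satisfied by a configuration."""
    return sum(w for w, c in soft if c(config))

# Enumerate the valid configurations (those satisfying every hard constraint).
valid = []
for bits in product([0, 1], repeat=len(features)):
    config = dict(zip(features, bits))
    if all(c(config) for c in hard):
        valid.append(config)

# Rank valid configurations by soft-constraint satisfaction and suggest the best.
best = max(valid, key=score)
print("suggested configuration:", best, "score:", score(best))
```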

    FLAME: a Formal Framework for the Automated Analysis of Software Product Lines Validated by Automated Specification Testing

    Article published online on 14/12/2015. In a literature review on the last 20 years of automated analysis of feature models, the formalization of analysis operations was identified as the most relevant challenge in the field. This formalization could provide very valuable assets for tool developers, such as a precise definition of the analysis operations and, what is more, a reference implementation, i.e. a trustworthy, not necessarily efficient implementation against which to compare the outputs of different tools. In this article, we present the FLAME framework as the result of facing this challenge. FLAME is a formal framework that can be used to formally specify not only feature models but other variability modeling languages (VMLs) as well. This reusability is achieved by its two-layered architecture. The abstract foundation layer is the bottom layer, in which all VML-independent analysis operations and concepts are specified. On top of the foundation layer, a family of characteristic model layers (one for each VML to be formally specified) can be developed by redefining some abstract types and relations. The verification and validation of FLAME have followed a process in which formal verification was performed traditionally, by manual theorem proving, while validation was performed by drawing on our experience in metamorphic testing of variability analysis tools, which has proven to be much more effective than manually designed test cases. To follow this automated, test-based validation approach, the specification of FLAME, written in Z, was translated into Prolog, and 20,000 random tests were automatically generated and executed. Test results helped to discover inconsistencies not only in the formal specification, but also in the previous informal definitions of the analysis operations and in current analysis tools. After this process, the Prolog implementation of FLAME is being used as a reference implementation by some tool developers, some analysis operations have been formally specified for the first time with more generic semantics, and more VMLs are being formally specified using FLAME.
    Junta de Andalucía P12-TIC-1867; Ministerio de Economía y Competitividad TIN2012-32273; Junta de Andalucía TIC-5906; Ministerio de Economía y Competitividad IPT-2012-0890-
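
    To make the metamorphic-testing idea concrete, the sketch below checks two simple metamorphic relations against a brute-force product counter standing in for the analysis operation under test: adding an optional leaf feature should double the number of products, while adding a mandatory leaf should leave the count unchanged. The toy model, relations, and counter are illustrative assumptions and are not FLAME's Z specification, its Prolog reference implementation, or the authors' test generator.

```python
from itertools import product

def count_products(features, constraints):
    """Brute-force product counter standing in for the analysis operation under test."""
    total = 0
    for bits in product([0, 1], repeat=len(features)):
        config = dict(zip(features, bits))
        if all(c(config) for c in constraints):
            total += 1
    return total

# Seed model: mandatory root, an alternative group {a, b}, and an optional feature c.
features = ["root", "a", "b", "c"]
constraints = [
    lambda f: f["root"] == 1,
    lambda f: f["a"] + f["b"] == 1,
    lambda f: f["c"] <= f["root"],
]
n = count_products(features, constraints)

# Metamorphic relation 1: adding an optional leaf under the root doubles the count.
features_ext = features + ["new_leaf"]
constraints_opt = constraints + [lambda f: f["new_leaf"] <= f["root"]]
assert count_products(features_ext, constraints_opt) == 2 * n

# Metamorphic relation 2: adding a mandatory leaf leaves the count unchanged.
constraints_mand = constraints + [lambda f: f["new_leaf"] == f["root"]]
assert count_products(features_ext, constraints_mand) == n
print("metamorphic relations hold for the analysis under test")
```

    Because the relations state how the output must change under a known model transformation, random seed models can be generated and checked automatically, which is what makes this style of validation scale to thousands of tests.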