
    Requirements and Tools for Variability Management

    Explicit, software-supported Business Process Management has become core infrastructure for any medium or large organization that needs to be efficient and effective. The number of processes in a single organization can be very high; furthermore, they may be very similar to one another, require momentary change, or evolve frequently. Although ad-hoc adaptation and customization of processes is currently the dominant approach, it is clearly not the best one. In fact, providing tools that support the explicit management of variation in processes (due to customization or evolution needs) has a profound impact on the overall life-cycle of processes in organizations. Moreover, with the increasing adoption of Service-Oriented Architectures, the infrastructure needed to support automatic reconfiguration and adaptation of business processes is now solid. In this paper, after defining variability in business process management, we identify the requirements for explicit variation handling in (service-based) business process systems. eGovernment serves as an illustrative example of reuse: in this case study, all local municipalities must implement the same general legal process while adapting it to local business practices and IT-infrastructure needs. Finally, we evaluate existing tools for explicit variability management against the identified requirements.

    A Systematic Review of Tracing Solutions in Software Product Lines

    Software Product Lines are large-scale, multi-unit systems that enable mass customized production. They consist of a base of reusable artifacts and points of variation that give the system the flexibility to generate customized products. However, maintaining a system of such complexity and flexibility can be error-prone and time-consuming: any modification (addition, deletion or update) at the level of a product or an artifact may impact other elements. It would therefore be worthwhile to adopt an efficient, organized traceability solution to maintain the Software Product Line. Still, traceability is not systematically implemented; it is usually set up to satisfy specific constraints (e.g. certification requirements) but abandoned in other situations. In order to draw a picture of the actual state of traceability solutions in the Software Product Line context, we conducted a systematic literature review. The review and its findings are detailed in the present article.
    Comment: 22 pages, 9 figures, 7 tables

    Probabilistic prediction of rupture length, slip and seismic ground motions for an ongoing rupture: implications for early warning for large earthquakes

    Earthquake Early Warning (EEW) predicts future ground shaking based on presently available data. Long ruptures present the best opportunities for EEW, since many heavily shaken areas are distant from the earthquake epicentre and may receive long warning times. Predicting the shaking from large earthquakes, however, requires some estimate of the likely future evolution of an ongoing rupture. An EEW system that anticipates future rupture using the present magnitude (or rupture length) together with Gutenberg-Richter frequency-size statistics will likely never predict a large earthquake, because of the rare occurrence of ‘extreme events’. However, it seems reasonable to assume that large slip amplitudes increase the probability of evolving into a large earthquake. To investigate the relationship between the slip and the eventual size of an ongoing rupture, we simulate suites of 1-D rupture series from stochastic models of spatially heterogeneous slip. We find that while large slip amplitudes increase the probability that a rupture continues and possibly evolves into a ‘Big One’, the recognition that rupture is occurring on a spatially smooth fault has an even stronger effect. We conclude that an EEW system for large earthquakes needs some mechanism for the rapid recognition of the causative fault (e.g., from real-time GPS measurements) and consideration of its ‘smoothness’. An EEW system for large earthquakes on smooth faults, such as the San Andreas Fault, could be implemented in two ways. First, the system could issue a warning whenever slip on the fault exceeds a few metres, because the probability of a large earthquake is then high and strong shaking is expected over large areas around the fault. Second, a more sophisticated EEW system could use the present slip on the fault to estimate the future slip evolution and final rupture dimensions and, using this information, provide probabilistic predictions of seismic ground motions along the evolving rupture. The decision on whether an EEW system should be realized in the first way, in the second, or in a combination of both is user-specific.
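    The simpler of the two warning strategies described in this abstract amounts to a threshold rule on observed slip. A minimal sketch, in which the function name, the 3 m threshold and the boolean smoothness flag are all illustrative assumptions rather than details from the paper:

```python
# Hypothetical sketch of a threshold-based EEW decision rule: warn once
# observed slip on a known smooth fault exceeds "a few metres".
# All names and the threshold value are illustrative assumptions.

SLIP_THRESHOLD_M = 3.0  # assumed stand-in for "a few metres"

def should_warn(slip_observations_m, fault_is_smooth):
    """Return True once any observed slip exceeds the threshold on a
    smooth fault (smoothness e.g. inferred from real-time GPS)."""
    if not fault_is_smooth:
        return False
    return any(slip > SLIP_THRESHOLD_M for slip in slip_observations_m)

print(should_warn([0.5, 1.2, 3.4], fault_is_smooth=True))   # True
print(should_warn([0.5, 1.2, 3.4], fault_is_smooth=False))  # False
```

    The second, more sophisticated strategy would replace this boolean rule with a probabilistic forecast of the remaining rupture, which is beyond a one-line threshold check.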

    Self-adaptive exploration in evolutionary search

    We address a primary question of computational as well as biological research on evolution: how can an exploration strategy adapt so as to exploit the information gained about the problem at hand? We first introduce an integrated formalism of evolutionary search which provides a unified view of different specific approaches. On this basis we discuss the implications of indirect modeling (via a ``genotype-phenotype mapping'') for the exploration strategy. Notions such as modularity, pleiotropy and functional phenotypic complexes are discussed as implications. Then, rigorously reflecting the notion of self-adaptability, we introduce a new definition that captures the self-adaptability of exploration: different genotypes that map to the same phenotype may represent (also topologically) different exploration strategies; self-adaptability requires a variation of exploration strategies along such a ``neutral space''. By this definition, the concept of neutrality becomes a central concern of this paper. Finally, we present examples of these concepts: for a specific grammar-type encoding, we observe a large variability of exploration strategies for a fixed phenotype, and a self-adaptive drift towards short representations with a highly structured exploration strategy that matches the ``problem's structure''.
    Comment: 24 pages, 5 figures
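    The core idea of the abstract, that genotypes mapping to the same phenotype can nonetheless encode different exploration strategies, can be illustrated with a toy example. This is not the paper's formalism: the two-gene encoding below, where a redundant gene sets the mutation step size, is an assumed minimal construction.

```python
# Illustrative sketch: two "neutral" genotypes share one phenotype but
# induce very different offspring distributions, i.e. different
# exploration strategies. The encoding is a hypothetical toy example.
import random

def phenotype(genotype):
    # Only the first gene is expressed; the second gene is neutral.
    return genotype[0]

def mutate(genotype, rng):
    value, step = genotype
    # The neutral gene 'step' shapes the exploration distribution.
    return (value + rng.gauss(0.0, step), step)

def spread(values):
    return max(values) - min(values)

rng = random.Random(0)
g_cautious = (1.0, 0.01)  # same phenotype...
g_explorer = (1.0, 1.00)  # ...different exploration strategy
assert phenotype(g_cautious) == phenotype(g_explorer)

kids_cautious = [mutate(g_cautious, rng)[0] for _ in range(1000)]
kids_explorer = [mutate(g_explorer, rng)[0] for _ in range(1000)]
# Offspring of the explorer genotype spread far more widely:
print(spread(kids_cautious) < spread(kids_explorer))  # True
```

    Selection acting on such a neutral gene is exactly the self-adaptability of exploration the abstract defines: evolution can drift along the neutral space towards better-matched search distributions without changing the phenotype.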

    Accretion and ejection in black-hole X-ray transients

    Aims: We summarize the current observational picture of the outbursts of black-hole X-ray transients (BHTs), based on the evolution traced in a hardness-luminosity diagram (HLD), and we offer a physical interpretation. Methods: The basic ingredient in our interpretation is the Poynting-Robertson Cosmic Battery (PRCB; Contopoulos & Kazanas 1998), which provides locally the poloidal magnetic field needed for the ejection of the jet. In addition, we make two easily justifiable assumptions. The first is that the mass-accretion rate onto the black hole in a BHT outburst has a generic bell-shaped form; this is guaranteed by the observational fact that all BHTs both start and end their outburst at the quiescent state. The second is that at low accretion rates the accretion flow is geometrically thick and ADAF-like, while at high accretion rates it is geometrically thin. Results: Both at the beginning and at the end of an outburst, the PRCB establishes a strong poloidal magnetic field in the ADAF-like part of the accretion flow, which naturally explains why a jet is always present in the right part of the HLD. In the left part of the HLD, the accretion flow takes the form of a thin disk, and such a disk cannot sustain a strong poloidal magnetic field; thus, no jet is expected in this part of the HLD. The counterclockwise traversal of the HLD is explained as follows: the poloidal magnetic field in the ADAF forces the flow to remain an ADAF and the source to move upwards in the HLD rather than to turn left. Thus, the history of the system determines the counterclockwise traversal of the HLD, and no BHT is expected to ever traverse the entire HLD curve in the clockwise direction. Conclusions: We offer a physical interpretation of accretion and ejection in BHTs with only one parameter, the mass-transfer rate.
    Comment: Accepted for publication in A&A

    Level and length of cyclic solar activity during the Maunder minimum as deduced from the active day statistics

    The Maunder minimum (MM) of greatly reduced solar activity took place in 1645-1715, but the exact level of sunspot activity is uncertain, being based to a large extent on historical generic statements about the absence of spots on the Sun. Here we aim, using a conservative approach, to assess the level and length of the solar cycle during the Maunder minimum on the basis of direct historical records by astronomers of that time. A database of active and inactive days (days with and without recorded sunspots on the solar disc, respectively) is constructed for three models of different levels of conservatism regarding generic no-spot records: loose (ML), optimum (MO) and strict (MS). We have used the active day fraction to estimate the group sunspot number during the MM. A clear cyclic variability is found throughout the MM, with peaks at around 1655-1657, 1675, 1684 and 1705, and possibly 1666, with the active day fraction not exceeding 0.2, 0.3 or 0.4 during the core MM for the three models, respectively. The estimated sunspot numbers are very low, in accordance with a grand minimum of solar activity. For the core MM (1650-1700) we have found that: (1) a large fraction of the no-spot records, corresponding to solar meridian observations, may be unreliable in the conventional database; (2) the active day fraction remained low (below 0.3-0.4) throughout the MM, indicating a low level of sunspot activity; (3) the solar cycle appears clearly during the core MM; (4) the length of the solar cycle during the core MM appears to be 9±1 years, though with some uncertainty; (5) the magnitude of the sunspot cycle during the MM is assessed to be below 5-10 in sunspot numbers. A hypothesis of high solar cycles during the MM is not confirmed.
    Comment: Accepted to Astron. Astrophys.
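    The active day fraction used in this abstract is simply the share of observed days on which sunspots were recorded. A minimal sketch of that bookkeeping, where the function name and the sample record are illustrative assumptions, not the paper's data or calibration:

```python
# Hedged sketch: computing an active day fraction (ADF) from a daily
# observation record. True = sunspots recorded that day, False = a
# no-spot record; unobserved days are simply absent from the record.
# The sample year below is invented for illustration.

def active_day_fraction(daily_records):
    """Fraction of observed days that were 'active' (spots recorded)."""
    records = list(daily_records)
    if not records:
        return 0.0  # no observations, no estimate
    return sum(records) / len(records)

year_sample = [True] * 45 + [False] * 155  # 45 active of 200 observed days
print(round(active_day_fraction(year_sample), 3))  # 0.225
```

    In the study, such fractions (computed under the ML, MO and MS models of which no-spot records to trust) are then calibrated against group sunspot numbers; that calibration step is not shown here.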

    From physics to biology by extending criticality and symmetry breakings

    Symmetries play a major role in physics, in particular since the work by E. Noether and H. Weyl in the first half of the last century. Herein, we briefly review their role by recalling how symmetry changes allow one to move conceptually from classical to relativistic and quantum physics. We then introduce our ongoing theoretical analysis in biology and show that symmetries play a radically different role in this discipline compared to their role in current physics. By this comparison, we stress that symmetries must be understood in relation to the conservation and stability properties represented in the theories. We posit that the dynamics of biological organisms, at their various levels of organization, are not just processes but permanent (extended, in our terminology) critical transitions and, thus, symmetry changes. Within the limits of a relative structural stability (or interval of viability), variability is at the core of these transitions.