8,166 research outputs found

    On the computational complexity of dynamic slicing problems for program schemas

    This is the preprint version of the article. Copyright © 2011 Cambridge University Press. Given a program, a quotient can be obtained from it by deleting zero or more statements. The field of program slicing is concerned with computing a quotient of a program that preserves part of the behaviour of the original program. All program slicing algorithms take account of the structural properties of a program, such as control dependence and data dependence, rather than the semantics of its functions and predicates, and thus work, in effect, with program schemas. The dynamic slicing criterion of Korel and Laski requires only that program behaviour is preserved in cases where the original program follows a particular path, and that the slice/quotient follows this path. In this paper we formalise Korel and Laski's definition of a dynamic slice as applied to linear schemas, and also formulate a less restrictive definition in which the path through the original program need not be preserved by the slice. The less restrictive definition has the benefit of leading to smaller slices. For both definitions, we compute complexity bounds for the problems of establishing whether a given slice of a linear schema is a dynamic slice and whether a linear schema has a non-trivial dynamic slice, and prove that the latter problem is NP-hard in both cases. We also give an example to prove that minimal dynamic slices (whether or not they preserve the original path) need not be unique. This work was partly supported by the Engineering and Physical Sciences Research Council, UK, under grant EP/E002919/1.
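The notion of a quotient obtained by deleting statements can be made concrete with a small sketch. The following is a minimal illustration of a backward slice over data dependence in a straight-line program; it is not the paper's schema formalism (which also handles control dependence and path-sensitive dynamic criteria), and the program encoding is hypothetical.

```python
# Hypothetical sketch: a "quotient" of a straight-line program is obtained by
# deleting statements; a backward closure over data dependence keeps only the
# statements that can affect the value of the slicing criterion variable.

def backward_slice(stmts, criterion_var):
    """stmts: list of (defined_var, used_vars) in program order.
    Returns the indices of the statements kept in the slice."""
    needed = {criterion_var}
    kept = []
    for i in range(len(stmts) - 1, -1, -1):   # walk backwards
        defined, used = stmts[i]
        if defined in needed:                 # this statement matters
            kept.append(i)
            needed.discard(defined)
            needed.update(used)               # its inputs now matter too
    return sorted(kept)

# x = f(); y = g(x); z = h(); w = k(y)  -- slice on w deletes the z statement
program = [("x", set()), ("y", {"x"}), ("z", set()), ("w", {"y"})]
print(backward_slice(program, "w"))  # prints [0, 1, 3]
```

The statement defining `z` is deleted because nothing on the way to `w` uses it, which is exactly the "deleting zero or more statements" view of a quotient.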

    Equivalent block transmissivity in an irregular 2D polygonal grid for one-phase flow: a sensitivity analysis

    Upscaling is needed to transform the representation of non-additive, space-dependent variables, such as permeability, from the fine grid of geostatistical simulations (used to simulate small-scale spatial variability) to the coarser, generally irregular grids of hydrodynamic transport codes. A new renormalisation method is proposed, based on the geometric properties of a Voronoï grid. It is compared to other classic methods by a sensitivity analysis (grid, range and sill of the variogram, random realisation of a simulation); the criterion is the flux of a tracer at the outlet. The effect of the upscaling technique on the results appears to be of second order compared to the spatial discretisation, the choice of variogram, and the realisation.
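To make the upscaling problem concrete, the classic power-average baselines against which renormalisation methods are typically compared can be sketched as follows. The permeability values are illustrative, and this is not the paper's Voronoï-based method.

```python
import math

# Hypothetical sketch: classic power averages used as upscaling baselines.
# The equivalent block permeability lies between the harmonic mean (flow
# across layers in series) and the arithmetic mean (flow along layers in
# parallel); the geometric mean is a common estimate for isotropic 2D media.

def harmonic_mean(ks):
    return len(ks) / sum(1.0 / k for k in ks)

def arithmetic_mean(ks):
    return sum(ks) / len(ks)

def geometric_mean(ks):
    return math.exp(sum(math.log(k) for k in ks) / len(ks))

# illustrative fine-grid permeabilities falling inside one coarse block
fine_cells = [1e-4, 5e-4, 2e-3, 8e-4]
kh = harmonic_mean(fine_cells)
kg = geometric_mean(fine_cells)
ka = arithmetic_mean(fine_cells)
assert kh <= kg <= ka  # the three estimates are always ordered this way
```

Renormalisation schemes, such as the one proposed in the paper, aim to land between these bounds in a way that respects the block geometry rather than just the cell values.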

    Group-finding with photometric redshifts: The Photo-z Probability Peaks algorithm

    We present a galaxy group-finding algorithm, the Photo-z Probability Peaks (P3) algorithm, optimized for locating small galaxy groups using photometric redshift data by searching for peaks in the signal-to-noise of the local overdensity of galaxies on a three-dimensional grid. This method improves on similar two-dimensional matched-filter methods by using redshift information to reduce background contamination, allowing it to detect groups accurately at lower richness. We present the results of tests of our algorithm on galaxy catalogues from the Millennium Simulation. Using a minimum S/N of 3 for detected groups, a group aperture size of 0.25 Mpc/h, and assuming a photometric redshift accuracy of sigma_z = 0.05, it attains a purity of 84% and detects ~295 groups/deg^2 with an average group richness of 8.6 members. Assuming a photometric redshift accuracy of sigma_z = 0.02, it attains a purity of 97% and detects ~143 groups/deg^2 with an average group richness of 12.5 members. We also test our algorithm on data available for the COSMOS field and the presently available fields from the CFHTLS-Wide survey, presenting preliminary results of this analysis.
    Comment: Accepted for publication by MNRAS; 16 pages, 11 colour figures.
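The peak-finding step can be illustrated with a minimal sketch, assuming a Poissonian background estimate; the grid construction, aperture handling, and richness assignment of the actual P3 algorithm are not reproduced here.

```python
import numpy as np

# Hypothetical sketch of the peak-finding idea: bin galaxies on a 3D
# (RA, Dec, photo-z) grid, form a signal-to-noise of the local overdensity
# against an expected background, and keep cells that are local maxima
# above a minimum S/N (the abstract quotes S/N >= 3).

def detect_peaks(counts, background, min_sn=3.0):
    """counts, background: 3D arrays of observed and expected galaxy counts.
    Returns grid indices of local S/N maxima above min_sn."""
    sn = (counts - background) / np.sqrt(np.maximum(background, 1e-9))
    peaks = []
    nx, ny, nz = counts.shape
    for i in range(1, nx - 1):
        for j in range(1, ny - 1):
            for k in range(1, nz - 1):
                local = sn[i-1:i+2, j-1:j+2, k-1:k+2]  # 3x3x3 neighbourhood
                if sn[i, j, k] >= min_sn and sn[i, j, k] == local.max():
                    peaks.append((i, j, k))
    return peaks

counts = np.ones((5, 5, 5))
counts[2, 2, 2] = 12.0              # one strongly overdense cell
background = np.ones((5, 5, 5))
print(detect_peaks(counts, background))  # prints [(2, 2, 2)]
```

Working in three dimensions is what lets the redshift axis suppress foreground and background galaxies that a purely angular search would count as members.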

    Developing Legacy System Migration Methods and Tools for Technology Transfer

    This paper presents the research results of an ongoing technology transfer project carried out in cooperation between the University of Salerno and a small software company. The project is aimed at developing and transferring migration technology to the industrial partner, enabling it to migrate monolithic multi-user COBOL legacy systems to a multi-tier Web-based architecture. The assessment of the partner company's legacy systems revealed that these systems had a very low level of decomposability, with spaghetti-like code and control flow and database accesses embedded within the user interface descriptions. For this reason, it was decided to adopt an incremental migration strategy based on reengineering the user interface using Web technology, transforming interactive legacy programs into batch programs, and wrapping the legacy programs. A middleware framework links the new Web-based user interface with the wrapped legacy system. An Eclipse plug-in, named MELIS (Migration Environment for Legacy Information Systems), was also developed to support the migration process. Both the migration strategy and the tool have been applied to two essential subsystems of the partner company's most business-critical legacy system.

    Identifying Cloned Navigational Patterns in Web Applications

    Web applications are subject to continuous and rapid evolution. Programmers often duplicate Web pages indiscriminately, without following systematic development and maintenance methods. This practice creates code clones that make Web applications hard to maintain and reuse. We present an approach to identify duplicated functionality in Web applications through cloned navigational pattern analysis. Cloned patterns can be generalized in a reengineering process, thereby simplifying the structure and future maintenance of the Web application. The proposed method first identifies pairs of cloned pages by analysing their similarity in terms of structure, content, and scripting code: two pages are considered clones if their similarity exceeds a given threshold. Cloned pages are then grouped into clusters, and the links connecting the pages of two clusters are grouped as well. An interconnection metric defined on the links between two clusters expresses the effort required to reengineer them and is used to select the patterns of interest. To further reduce the comprehension effort, we filter out links and nodes of the clustered navigational schema that do not contribute to the identification of cloned navigational patterns. A tool supporting the proposed approach has been developed and validated in a case study.
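The pairwise clone detection and clustering steps can be sketched as follows. The similarity measure here is a plain text ratio (Python's difflib), standing in for the paper's combined structure/content/script similarity, and the threshold value is illustrative.

```python
from difflib import SequenceMatcher

# Hypothetical sketch: two pages are clones when their similarity exceeds a
# threshold; clone pairs are then merged into clusters via a tiny union-find.

def similarity(a, b):
    """Text similarity in [0, 1]; a stand-in for a real page comparator."""
    return SequenceMatcher(None, a, b).ratio()

def cluster_clones(pages, threshold=0.8):
    parent = list(range(len(pages)))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    def union(x, y):
        parent[find(x)] = find(y)
    for i in range(len(pages)):
        for j in range(i + 1, len(pages)):
            if similarity(pages[i], pages[j]) > threshold:
                union(i, j)                # i and j are a clone pair
    clusters = {}
    for i in range(len(pages)):
        clusters.setdefault(find(i), []).append(i)
    return list(clusters.values())

pages = ["<ul><li>home</li></ul>", "<ul><li>home!</li></ul>", "<table></table>"]
print(cluster_clones(pages))  # the two list pages cluster together
```

Grouping the inter-cluster links, and scoring them with the interconnection metric, would then operate on these clusters rather than on individual pages.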

    Improving Multi-Objective Test Case Selection by Injecting Diversity in Genetic Algorithms

    A way to reduce the cost of regression testing is to select or prioritize subsets of test cases from a test suite according to some criteria. Besides greedy algorithms, cost-cognizant additional greedy algorithms, multi-objective optimization algorithms, and Multi-Objective Genetic Algorithms (MOGAs) have also been proposed to tackle this problem. However, previous studies have shown that there is no clear winner between greedy algorithms and MOGAs, and that their combination does not necessarily produce better results. In this paper we show that the optimality of MOGAs can be significantly improved by diversifying the solutions (subsets of the test suite) generated during the search process. Specifically, we introduce a new MOGA, named DIV-GA (DIversity-based Genetic Algorithm), based on the mechanisms of orthogonal design and orthogonal evolution, which increase diversity by injecting new orthogonal individuals during the search. Results of an empirical study conducted on eleven programs show that DIV-GA outperforms both the greedy algorithms and the traditional MOGAs in terms of optimality. Moreover, the solutions (subsets of the test suite) provided by DIV-GA detect more faults than those of the other algorithms, while keeping the same test execution cost.
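The bi-objective view of test-case selection that underlies this family of algorithms can be sketched as follows: each candidate subset of the suite is scored by execution cost and faults covered, and only Pareto-optimal subsets are kept. DIV-GA's actual search operators (orthogonal design and orthogonal evolution) are not reproduced here; the costs, fault sets, and candidate subsets are illustrative.

```python
# Hypothetical sketch: Pareto filtering of candidate test-suite subsets under
# two objectives, (execution cost, -faults covered), both minimized.

def dominates(a, b):
    """True if objective vector a is at least as good as b everywhere
    (smaller is better) and strictly better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and a != b

def pareto_front(scored):
    """scored: list of (objectives, subset); keep the non-dominated entries."""
    return [s for s in scored if not any(dominates(t[0], s[0]) for t in scored)]

costs = [4, 1, 3, 2]                    # execution cost of each test case
faults = [{0, 1}, {1}, {2, 3}, {0, 3}]  # faults detected by each test case
candidates = [(1,), (0, 2), (1, 3), (0, 1, 2, 3)]  # example subsets

scored = []
for subset in candidates:
    cost = sum(costs[i] for i in subset)
    covered = len(set().union(*(faults[i] for i in subset)))
    scored.append(((cost, -covered), subset))

front = pareto_front(scored)
# (0, 1, 2, 3) drops out: (0, 2) covers the same four faults at lower cost
```

A MOGA searches this space instead of enumerating it; diversity injection, as in DIV-GA, is meant to keep the population from collapsing onto a few regions of the front.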

    Tracing the earthquake of 20 February 1743 in the damaged municipalities of Salento (southern Apulia, Italy)

    The Salento area (southern Apulia) is considered the stable foreland of the Apennine chain (Cinque et al., 1993). Instrumental seismicity, recorded from the 1970s to the present, is scarce and of low energy, mostly concentrated west of the Salento peninsula and in the Otranto Channel, where the largest recorded event was that of 20 October 1974, Mw = 5.0 (CPTI11, 2011) (Fig. 1). The strongest historical earthquakes of the last 1,000 years reported in the available catalogues are those of 10 September 1087 near Bari (Imax = 6-7) (CPTI11, 2011), of 20 February 1743 in the lower Ionian Sea (Imax = IX) (CFTIMED04, 2007; CPTI11, 2011), and of 26 October 1826 near Manduria (Imax = 6-7, CPTI11, 2011). Among these, the most energetic was the 1743 earthquake, which struck Apulia and the western coasts of Greece but was also felt across southern Italy, in some localities of central and northern Italy as far as Trento and Udine, and even on the island of Malta. It was a complex seismic event, perceived as a sequence of three violent shocks, probably produced by the activation of several fault segments (CFTIMED04, 2007). Two hypotheses have been formulated for the location of this event: according to the first, the epicentre lies offshore, east of S. Maria di Leuca, a hypothesis also supported by the distribution of tsunami deposits attributed to this earthquake along the southern Adriatic coasts of Salento (Torre Sasso and Torre S. Emiliano) (Mastronuzzi et al., 2007) as far as Brindisi; according to the second, as revised in the CFTIMED04 (2007) catalogue, the epicentre lies onshore, between Nardò and Galatina. In Italy the greatest damage was recorded in Salento, in the towns of Nardò, in the province of Lecce, and Francavilla Fontana, in the province of Brindisi; in Greece, at Levkas and in the Ionian Islands. There were about 180 deaths, 150 in Nardò alone.
    The event is described in several hundred historical documents, from which it emerges that more than 86 localities were affected. The study of the effects produced has made it possible to assign the event a maximum intensity of Imax = 9 (for Nardò and Levkas) and Me = 6.9 (CFTIMED04, 2007). Although there was considerable damage throughout Salento, the reference seismic hazard map for the national territory (MPS04 - Ordinanza PCM 3519/2006) assigns low hazard values to the Salento area and high values to the offshore area in the Otranto Channel. This work sets out to rediscover the architectural evidence destroyed and rebuilt after the event, with the aim of creating a geotouristic itinerary along the "traces" of this earthquake in the urban fabric of the Salento towns involved.

    COVID-19 infection and rheumatoid arthritis: Faraway, so close!

    The outbreak of the new coronavirus infection COVID-19 in December 2019 in China has quickly become a global health emergency. Given the lack of specific anti-viral therapies, the current management of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection is mainly supportive, even though several compounds are now under investigation for the treatment of this life-threatening disease. The COVID-19 pandemic is certainly conditioning the treatment strategy of a complex disorder such as rheumatoid arthritis (RA), whose infectious risk is increased compared to the general population because of an overall impairment of the immune system typical of autoimmune diseases, combined with the iatrogenic effect of corticosteroids and immunosuppressive drugs. However, increasing knowledge of the pathophysiology of SARS-CoV-2 infection is leading some anti-rheumatic drugs to be considered as potential treatment options for the management of COVID-19. In this review we critically analyse the evidence on the positive or negative effects of drugs commonly used to treat RA in this particular scenario, in order to optimize the current approach to RA patients.