5,072 research outputs found

    Evolutionary algorithm-based analysis of gravitational microlensing lightcurves

    Full text link
    A new algorithm developed to perform autonomous fitting of gravitational microlensing lightcurves is presented. The new algorithm is conceptually simple, versatile and robust, and parallelises trivially; it combines features of extant evolutionary algorithms with some novel ones, and fares well on the problem of fitting binary-lens microlensing lightcurves, as well as on a number of other difficult optimisation problems. Success rates in excess of 90% are achieved when fitting synthetic though noisy binary-lens lightcurves, allowing no more than 20 minutes per fit on a desktop computer; this success rate is shown to compare very favourably with that of both a conventional (iterated simplex) algorithm and a more state-of-the-art, artificial neural network-based approach. As such, this work provides proof of concept for the use of an evolutionary algorithm as the basis for real-time, autonomous modelling of microlensing events. Further work is required to investigate how the algorithm will fare when faced with more complex and realistic microlensing modelling problems; it is, however, argued here that the use of parallel computing platforms, such as inexpensive graphics processing units, should allow fitting times to be constrained to under an hour, even when dealing with complicated microlensing models. In any event, it is hoped that this work might stimulate some interest in evolutionary algorithms, and that the algorithm described here might prove useful for solving microlensing and/or more general model-fitting problems. Comment: 14 pages, 3 figures; accepted for publication in MNRAS
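The core loop of such an evolutionary fitter can be sketched briefly. The sketch below is a generic elitist, mutation-only variant, not the authors' algorithm (whose specific operators are described in the paper), and it fits a toy linear model rather than a microlensing lightcurve:

```python
import random

random.seed(1)

def evolve(fitness, bounds, pop_size=30, generations=200, mutation=0.02):
    """Minimal elitist evolutionary loop: random initialisation,
    truncation selection (keep the fitter half), Gaussian mutation
    of each gene, clipped to the search bounds."""
    pop = [[random.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)                    # lower fitness = better
        parents = pop[: pop_size // 2]           # elitism: parents survive
        children = [[min(max(g + random.gauss(0, mutation * (hi - lo)), lo), hi)
                     for g, (lo, hi) in zip(p, bounds)]
                    for p in parents]
        pop = parents + children
    return min(pop, key=fitness)

# Toy objective: recover slope and intercept of y = 2x + 1 from samples.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]
sse = lambda p: sum((p[0] * x + p[1] - y) ** 2 for x, y in zip(xs, ys))
best = evolve(sse, bounds=[(-10.0, 10.0), (-10.0, 10.0)])
```

Crossover, adaptive mutation, and the trivial parallelisation mentioned in the abstract (evaluating the population across cores or GPU threads) would be layered on top of this loop.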

    How Time-Fault Ratio helps in Test Case Prioritization for Regression Testing

    Get PDF
    Regression testing analyzes whether maintenance of the software has adversely affected its normal functioning. Regression testing is generally performed under strict time constraints; due to the limited time budget, it is not possible to test the software with all available test cases. Thus, reordering the test cases on the basis of their effectiveness is always needed. A test prioritization technique that prioritizes test cases on the basis of their Time-Fault Ratio (TFR) is proposed in this paper. The technique tends to maximize fault detection, as faults are exposed in ascending order of their detection times. The proposed technique may be used at any stage of software development.
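As a concrete illustration, assuming TFR is defined as a test's execution time divided by the number of faults it exposes (the paper's exact definition may differ), the prioritization reduces to a single sort; the test data below are hypothetical:

```python
def prioritize_by_tfr(tests):
    """Order test cases by their Time-Fault Ratio (TFR), assumed here to
    be execution time divided by the number of faults the test exposes:
    tests that reveal faults the fastest run first."""
    def tfr(t):
        # Tests that detect no faults sort last.
        return t["time"] / t["faults"] if t["faults"] else float("inf")
    return sorted(tests, key=tfr)

# Hypothetical suite: T2 finds two faults in 4s (TFR = 2.0), T1 finds one
# fault in 9s (TFR = 9.0), T3 finds none (runs last).
suite = [
    {"name": "T1", "time": 9.0, "faults": 1},
    {"name": "T2", "time": 4.0, "faults": 2},
    {"name": "T3", "time": 5.0, "faults": 0},
]
order = [t["name"] for t in prioritize_by_tfr(suite)]
```

Running the reordered suite exposes both of T2's faults within the first 4 seconds, which is the "ascending order of detection times" property the abstract describes.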

    Correlation of Field and Laboratory Electrical Resistivity with Strength Properties of Soil

    Get PDF
    This Final Year Project involves the correlation of field and laboratory electrical resistivity with strength properties of soil. In general, this report covers the use of electrical resistivity (ER) and geotechnical laboratory soil testing methods to obtain soil resistivity and soil strength properties. The objective of this study is to find correlations between field and laboratory electrical resistivity and strength properties of soil such as cohesion, internal angle of friction, moisture content, unit weight, and plasticity index. A field electrical resistivity survey was conducted at Universiti Teknologi PETRONAS (UTP) in the vicinity of Block 13. From the same location, two boreholes were drilled and soil samples were extracted using percussion gouges with a gasoline-driven hammer. Laboratory electrical resistivity and geotechnical laboratory tests were then carried out on the soil samples. The results obtained were compared and correlated with the soil strength properties obtained from the two boreholes. Results from both the boreholes and the electrical resistivity surveys at the borehole locations indicated consistency in the correlation between resistivity and soil strength properties. The results indicated some correlation of moisture content, internal angle of friction, and plasticity index with electrical resistivity (field and laboratory), but a lack of correlation of unit weight and cohesion with electrical resistivity. This final year project report covers the background study, literature review, methodology and tools, results and correlations, and conclusions and recommendations for this study.

    An empirical comparison of four Java-based regression test selection techniques

    Get PDF
    Fall 2020. Includes bibliographical references. Regression testing is crucial to ensure that previously tested functionality is not broken by additions, modifications, and deletions to the program code. Since regression testing is an expensive process, researchers have developed regression test selection (RTS) techniques, which select and execute only those test cases that are impacted by the code changes. In general, an RTS technique has two main activities: (1) determining dependencies between the source code and test cases, and (2) identifying the code changes. Different approaches exist in the research literature to compute dependencies statically or dynamically at different levels of granularity. Likewise, code changes can be identified at different levels of granularity using different techniques. As a result, RTS techniques differ in the amount of reduction in test suite size, the time to select and run the test cases, test selection accuracy, and the fault detection ability of the selected subset of test cases. Researchers have empirically evaluated RTS techniques, but the evaluations were generally conducted using different experimental settings. This thesis compares four recent Java-based RTS techniques, Ekstazi, HyRTS, OpenClover, and STARTS, with respect to the above-mentioned characteristics using multiple revisions from five open source projects. It investigates the relationship between four program features and the performance of RTS techniques: total (program and test suite) size in KLOC, total number of classes, percentage of test classes over the total number of classes, and percentage of classes that changed between revisions. The results show that STARTS, a static RTS technique, over-estimates dependencies between test cases and program code, and thus selects more test cases than the dynamic RTS techniques Ekstazi and HyRTS, even though all three identify code changes in the same way. 
OpenClover identifies code changes differently from Ekstazi, HyRTS, and STARTS, and selects more test cases. STARTS achieved the lowest safety violation with respect to Ekstazi, and HyRTS achieved the lowest precision violation with respect to both STARTS and Ekstazi. Overall, the average fault detection ability of the RTS techniques was 8.75% lower than that of the original test suite. STARTS, Ekstazi, and HyRTS achieved higher test suite size reduction on the projects with over 100 KLOC than on those with less than 100 KLOC. OpenClover achieved a higher test suite size reduction on the subjects that had a smaller total number of classes. The time reduction of OpenClover is affected by the combination of the number of source classes and the number of test cases in the subjects: the higher the number of test cases and source classes, the lower the time reduction.

    Natural and Technological Hazards in Urban Areas

    Get PDF
    Natural hazard events and technological accidents are separate causes of environmental impacts. Natural hazards are physical phenomena that have been active throughout geological time, whereas technological hazards result from actions or facilities created by humans. Nowadays, combined natural and man-made hazards have also emerged. Overpopulation and urban development in areas prone to natural hazards increase the impact of natural disasters worldwide. Additionally, urban areas are frequently characterized by intense industrial activity and rapid, poorly planned growth that threatens the environment and degrades the quality of life. Therefore, proper urban planning is crucial to minimize fatalities and reduce the environmental and economic impacts that accompany both natural and technological hazardous events.

    Machine Learning Architectures for Modelling International Roughness in Cold Region Pavements

    Get PDF
    One of the most commonly used pavement performance indicators is the International Roughness Index (IRI). Currently used IRI models are often developed using regression analysis, with little emphasis on climate. Recent studies have started using Machine Learning (ML) for IRI model development; however, the scope of these studies is limited and often restricted to algorithms such as neural networks. Additionally, a systematic comparison between different ML algorithms in modelling IRI cannot be found in the literature. This study develops and systematically compares IRI models built using regression analysis and ML methods. The economic and environmental implications of using site-specific models over general models are also examined, as are the impacts of climate change on pavement roughness for pavements with different subgrade soil types. This study's results support the use of ML, especially gradient-boosted ensemble algorithms, in developing IRI models, as they have superior predictive capabilities and can provide much more value than traditional methods such as regression analysis. The results also show that ML was able to produce meaningful results when regression analysis failed to do so.

    Modelling of a generalized thermal conductivity for granular multiphase geomaterial design purposes

    Get PDF
    Soil thermal conductivity plays an important role in geo-energy applications such as high-voltage buried power cables, oil and gas pipelines, shallow geo-energy storage systems and heat transfer modelling. Hence, improving the thermal conductivity of geomaterials is important in many engineering applications. In this thesis, an extensive experimental investigation was performed to enhance the thermal conductivity of geomaterials by modifying the particle size distribution towards a Fuller-curve gradation, and by adding fine particles in an appropriate ratio as fillers. A significant improvement in thermal conductivity was achieved with the newly developed geomaterials. An adaptive model based on artificial neural networks (ANNs) was developed to generalize across different conditions and soil types for estimating the thermal conductivity of geomaterials. After a training phase based on the experimental data, the ANN model was able to predict the thermal conductivity of independent experimental data very well. Going forward, the model can be supplemented with data for further soil types and conditions, so that a comprehensive representation of the saturation-dependent thermal conductivity of arbitrary materials can be prepared. The numerical 'black box' model developed in this way can generalize the relationships between different materials as further data and soil types are added. In addition to the model development, a detailed validation was carried out using different geomaterials and boundary conditions to reinforce the applicability and superiority of the prediction models.

    Regression test selection: theory and practice

    Get PDF
    Software affects every aspect of our lives, and software developers write tests to check software correctness. Software also rapidly evolves due to never-ending requirement changes, and software developers practice regression testing – running tests against the latest project revision to check that project changes did not break any functionality. While regression testing is important, it is also time-consuming due to the number of both tests and revisions. Regression test selection (RTS) speeds up regression testing by selecting to run only tests that are affected by project changes. RTS is efficient if the time to select tests is smaller than the time to run unselected tests; RTS is safe if it guarantees that unselected tests cannot be affected by the changes; and RTS is precise if tests that are not affected are also unselected. Although many RTS techniques have been proposed in research, these techniques have not been adopted in practice because they do not provide efficiency and safety at once. This dissertation presents three main bodies of research to motivate, introduce, and improve a novel, efficient, and safe RTS technique, called Ekstazi. Ekstazi is the first RTS technique to be adopted by popular open-source projects. First, this dissertation reports on the first field study of test selection. The study of logs, recorded in real time from a diverse group of developers, finds that almost all developers perform manual RTS, i.e., manually select to run a subset of tests at each revision, and that they select these tests in mostly ad hoc ways. Specifically, the study finds that manual RTS is not safe 74% of the time and not precise 73% of the time. These findings showed the urgent need for better automated RTS techniques that could be adopted in practice. Second, this dissertation introduces Ekstazi, a novel RTS technique that is efficient and safe. 
Ekstazi tracks dynamic dependencies of tests on files, and unlike most prior RTS techniques, Ekstazi requires no integration with version-control systems. Ekstazi computes for each test what files it depends on; the files can be either executable code or external resources. A test need not be run in the new project revision if none of its dependent files changed. This dissertation also describes an implementation of Ekstazi for the Java programming language and the JUnit testing framework, and presents an extensive evaluation of Ekstazi on 615 revisions of 32 open-source projects (totaling almost 5M lines of code) with shorter- and longer-running test suites. The results show that Ekstazi reduced the testing time by 32% on average (and by 54% for longer-running test suites) compared to executing all tests. Ekstazi also yields lower testing time than the existing RTS techniques, despite the fact that Ekstazi may select more tests. Ekstazi is the first RTS tool adopted by several popular open-source projects, including Apache Camel, Apache Commons Math, and Apache CXF. Third, this dissertation presents a novel approach that improves precision of any RTS technique for projects with distributed software histories. The approach considers multiple old revisions, unlike all prior RTS techniques that reasoned about changes between two revisions – an old revision and a new revision – when selecting tests, effectively assuming a development process where changes occur in a linear sequence (as was common for CVS and SVN). However, most projects nowadays follow a development process that uses distributed version-control systems (such as Git). Software histories are generally modeled as directed graphs; in addition to changes occurring linearly, multiple revisions can be related by other commands such as branch, merge, rebase, cherry-pick, revert, etc. 
The novel approach reasons about the commands that create each revision and selects tests for a new revision by considering multiple old revisions. This dissertation also proves the safety of the approach and presents an evaluation on several open-source projects. The results show that the approach can reduce the number of selected tests by over an order of magnitude for merge revisions.
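The selection rule at the heart of this dissertation (re-run a test only if one of its dependent files changed) can be illustrated with a small checksum-based sketch. The real Ekstazi collects dependencies dynamically by instrumenting the JVM and integrates with build systems; the data structures and names below are simplified assumptions for illustration only:

```python
import hashlib

def select_tests(dependencies, old_checksums, files):
    """Illustrative file-level RTS in the spirit of Ekstazi: a test is
    selected only if some file it depends on has a checksum different
    from the one recorded at the previous revision.
    dependencies: test name -> set of files the test was observed to use
    old_checksums: file name -> checksum recorded at the old revision
    files: file name -> current contents (bytes)"""
    new_checksums = {f: hashlib.sha256(c).hexdigest() for f, c in files.items()}
    # New or modified files count as changed (missing old checksum != hash).
    changed = {f for f, h in new_checksums.items() if old_checksums.get(f) != h}
    return sorted(t for t, deps in dependencies.items() if deps & changed)

# Hypothetical example: only Util.class changes between revisions.
rev1 = {"A.class": b"v1", "B.class": b"v1", "Util.class": b"v1"}
old = {f: hashlib.sha256(c).hexdigest() for f, c in rev1.items()}
rev2 = dict(rev1, **{"Util.class": b"v2"})
deps = {"TestA": {"A.class"}, "TestB": {"B.class", "Util.class"}}
selected = select_tests(deps, old, rev2)
```

Because TestA depends only on the unchanged A.class, it is safely skipped, while TestB is selected through its dependency on Util.class; this is the safety property the dissertation formalizes.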