76 research outputs found

    Investigation of methane adsorption mechanism on Longmaxi shale by combining the micropore filling and monolayer coverage theories

    Understanding the methane adsorption mechanism is critical for studying shale gas storage and transport in shale nanopores. In this work, we conducted low-pressure nitrogen adsorption (LPNA), scanning electron microscopy (SEM), and high-pressure methane adsorption experiments on seven shale samples from the Longmaxi formation in the Sichuan basin. LPNA and SEM results show that pores in the shale samples are mainly nanometer-sized and have a broad size distribution. Based on two hypotheses, we also show that methane is adsorbed not only in micropores (< 2 nm) but also in mesopores (2-50 nm). We therefore established a novel DA-LF model, combining the micropore filling and monolayer coverage theories, to describe the methane adsorption process in shale. The new model fits the high-pressure isotherms well, with a fitting error slightly smaller than that of the commonly used D-A and L-F models. The absolute adsorption isotherms and capacities can be calculated separately for micropores and mesopores using this model, showing that 77% to 97% of methane molecules are adsorbed in micropores. We therefore conclude that the methane adsorption mechanism in shale is as follows: the majority of methane molecules fill micropores, and the remainder are monolayer-adsorbed in mesopores. We anticipate that our results provide a more accurate explanation of the gas adsorption mechanism in shale formations.
    Cited as: Zhou, S., Ning, Y., Wang, H., Liu, H., Xue, H. Investigation of methane adsorption mechanism on Longmaxi shale by combining the micropore filling and monolayer coverage theories. Advances in Geo-Energy Research, 2018, 2(3): 269-281, doi: 10.26804/ager.2018.03.0
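    The combination described in the abstract can be sketched with the standard isotherm forms (a sketch based on the well-known Dubinin-Astakhov and Langmuir-Freundlich equations; the symbols below are the conventional ones and the paper's exact parameterization may differ):

```latex
% Micropore filling term (Dubinin-Astakhov), with limiting micropore volume V_0,
% characteristic energy E, and heterogeneity exponent n:
V_{\mathrm{mic}} = V_{0}\,\exp\!\left[-\left(\frac{RT\,\ln(P_{0}/P)}{E}\right)^{n}\right]

% Monolayer coverage term in mesopores (Langmuir-Freundlich), with monolayer
% capacity V_L, affinity constant b, and exponent m:
V_{\mathrm{mes}} = V_{L}\,\frac{(bP)^{m}}{1+(bP)^{m}}

% Combined DA-LF isotherm: total adsorbed amount is the sum of both contributions.
V_{\mathrm{DA\text{-}LF}} = V_{\mathrm{mic}} + V_{\mathrm{mes}}
```

    Under this decomposition, the micropore fraction reported in the abstract corresponds to the ratio V_mic / (V_mic + V_mes) evaluated from the fitted parameters.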

    On the Impact of Flaky Tests in Automated Program Repair

    The literature of Automated Program Repair is largely dominated by approaches that leverage test suites not only to expose bugs but also to validate the generated patches. Unfortunately, beyond the widely-discussed concern that test suites are an imperfect oracle because they can be incomplete, they can also include tests that are flaky. A flaky test is one that a program can pass or fail in a non-deterministic way. Such tests are generally carefully removed from repair benchmarks. In practice, however, flaky tests are present in the test suites of software repositories. To the best of our knowledge, no study has discussed this threat to the validity of program repair evaluations. In this work, we highlight this threat and further investigate the impact of flaky tests by reverting their removal from the Defects4J benchmark. Our study aims to characterize the impact of flaky tests on bug localization and their eventual influence on repair performance. Among other insights, we find that (1) although flaky tests represent only a small fraction (≈0.3%) of all tests, they affect experiments related to a large proportion (98.9%) of Defects4J real-world faults; (2) most flaky tests (98%) actually provide deterministic results under specific environment configurations (with the JDK version influencing the results); (3) flaky tests drastically hinder the effectiveness of spectrum-based fault localization (e.g., the rankings of 90 bugs worsen, while no bug obtains better localization results compared with the results achieved without flaky tests); and (4) the repairability of APR tools is greatly affected by the presence of flaky tests (e.g., 10 state-of-the-art APR tools fix significantly fewer bugs than when the benchmark is manually curated to remove flaky tests). Given that the detection of flaky tests is still nascent, we call for the program repair community to relax the artificial assumption that the test suite is free from flaky tests.
One direction that we propose is to develop strategies where patches that partially fix bugs are considered worthwhile: a patch may make the program pass some test cases but fail others (which may actually be the flaky ones).
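    The effect described in finding (3) can be illustrated with a minimal spectrum-based fault localization sketch using the standard Ochiai formula; the coverage matrix, test names, and statements below are a hypothetical toy example, not data from the study:

```python
import math

def ochiai(ef, failed_total, ep):
    # Ochiai suspiciousness: ef / sqrt(total_failing * (ef + ep)),
    # where ef/ep = failing/passing tests covering the statement.
    denom = math.sqrt(failed_total * (ef + ep))
    return ef / denom if denom else 0.0

# Which tests execute each statement (s1 is the "buggy" statement).
coverage = {
    "s1": {"t1", "t2"},
    "s2": {"t1", "t2", "t3"},
    "s3": {"t3"},
}
all_tests = {"t1", "t2", "t3"}

def rank(failing):
    # Rank statements by suspiciousness, most suspicious first.
    passing = all_tests - failing
    scores = {}
    for stmt, tests in coverage.items():
        ef = len(tests & failing)
        ep = len(tests & passing)
        scores[stmt] = ochiai(ef, len(failing), ep)
    return sorted(scores, key=scores.get, reverse=True)

print(rank({"t1"}))          # deterministic failure: ['s1', 's2', 's3']
print(rank({"t1", "t3"}))    # a flaky failure of t3: ['s2', 's3', 's1']
```

    A single spurious failure of t3 pushes the buggy statement s1 from the top of the ranking to the bottom, which is exactly the kind of ranking degradation the study quantifies.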

    On the Efficiency of Test Suite based Program Repair: A Systematic Assessment of 16 Automated Repair Systems for Java Programs

    Test-based automated program repair has been a prolific field of research in software engineering over the last decade. Many approaches have indeed been proposed, which leverage test suites as a weak, but affordable, approximation to program specifications. Although the literature regularly sets new records on the number of benchmark bugs that can be fixed, several studies increasingly raise concerns about the limitations and biases of state-of-the-art approaches. For example, the correctness of generated patches has been questioned in a number of studies, while other researchers have pointed out that evaluation schemes may be misleading with respect to the processing of fault localization results. Nevertheless, there is little work addressing the efficiency of patch generation with regard to the practicality of program repair. In this paper, we fill this gap in the literature by providing an extensive review of the efficiency of test suite based program repair. Our objective is to assess the number of generated patch candidates, since this information is correlated with (1) the strategy to traverse the search space efficiently in order to select sensible repair attempts, (2) the strategy to minimize the test effort for identifying a plausible patch, and (3) the strategy to prioritize the generation of a correct patch. To that end, we perform a large-scale empirical study of the efficiency, in terms of the quantity of generated patch candidates, of 16 open-source repair tools for Java programs. The experiments are carefully conducted under the same fault localization configurations to limit biases.
Eventually, among other findings, we note that: (1) many irrelevant patch candidates are generated by changing wrong code locations; (2) however, if the search space is carefully triaged, fault localization noise has little impact on patch generation efficiency; and (3) current template-based repair systems, which are known to be most effective in fixing a large number of bugs, are actually the least efficient, as they tend to generate mostly irrelevant patch candidates.
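    The efficiency measure discussed above, i.e., how many patch candidates a generate-and-validate tool produces before reaching a plausible patch, can be sketched with a toy repair loop; the "program" under repair, its test suite, and the candidate patches below are all hypothetical:

```python
def run_tests(program):
    # Toy test suite: the program is expected to compute abs(x).
    return program(-3) == 3 and program(4) == 4

# Hypothetical search space of candidate patches, in the order the tool tries them.
candidates = [
    lambda x: x,                     # irrelevant mutation (wrong location)
    lambda x: x + 1,                 # irrelevant mutation
    lambda x: -x,                    # wrong fix, passes only one test
    lambda x: x if x >= 0 else -x,   # plausible (here also correct) patch
]

attempts = 0
for patch in candidates:
    attempts += 1
    if run_tests(patch):  # validate each candidate against the full suite
        break

print(f"plausible patch found after {attempts} candidates")  # 4 candidates
```

    Counting `attempts` across many bugs is the kind of metric the study uses: a tool that ranks the correct template early in its search space needs far fewer validation runs.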

    PEELER: Learning to Effectively Predict Flakiness without Running Tests

    Regression testing is a widely adopted approach to expose change-induced bugs as well as to verify the correctness/robustness of code in modern software development settings. Unfortunately, the occurrence of flaky tests leads to a significant increase in the cost of regression testing and eventually reduces the productivity of developers (i.e., their ability to find and fix real problems). State-of-the-art approaches leverage dynamic test information obtained through expensive re-execution of test cases to effectively identify flaky tests. To account for scalability constraints, some recent approaches have built on static test case features, but they fall short on effectiveness. In this paper, we introduce PEELER, a new fully static approach for predicting flaky tests based on a representation of test cases built from data dependency relations. The predictor is trained as a neural-network-based model, which simultaneously achieves scalability (because it does not require any test execution), effectiveness (because it exploits relevant test dependency features), and practicality (because it can be applied in the wild to find new flaky tests). Experimental validation on 17,532 test cases from 21 Java projects shows that PEELER outperforms the state-of-the-art FlakeFlagger by around 20 percentage points: we catch 22% more flaky tests while yielding 51% fewer false positives. Finally, in a live study with projects in the wild, we reported 21 flakiness cases to developers, among which 12 have already been confirmed by developers as being indeed flaky.
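    As a rough illustration of the kind of static data-dependency information such a representation might capture (PEELER's actual feature extraction over Java tests is more elaborate; the test body and helper names below are hypothetical), one can walk a test's AST and collect def-use pairs between variables and the calls that produce them:

```python
import ast

test_source = """
def test_cache():
    cache = make_cache()
    key = random_key()
    cache.put(key, 1)
    assert cache.get(key) == 1
"""

def def_use_pairs(source):
    # For each assignment, pair the defined variable with every name read
    # on the right-hand side: a crude data-dependency relation.
    pairs = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Assign):
            defs = [t.id for t in node.targets if isinstance(t, ast.Name)]
            uses = [n.id for n in ast.walk(node.value) if isinstance(n, ast.Name)]
            pairs.extend((d, u) for d in defs for u in uses)
    return pairs

print(def_use_pairs(test_source))
# [('cache', 'make_cache'), ('key', 'random_key')]
```

    A dependency like `key -> random_key` is the kind of signal a flakiness predictor can exploit statically: the test's outcome flows from a non-deterministic producer, without ever running the test.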

    Natural Language to Code: How Far Are We?

    A longstanding dream in software engineering research is to devise effective approaches for automating development tasks based on developers' informally-specified intentions. Such intentions are generally in the form of natural language descriptions. In recent literature, a number of approaches have been proposed to automate tasks such as code search and even code generation based on natural language inputs. While these approaches vary in terms of technical design, their objective is the same: transforming a developer's intention into source code. The literature, however, lacks a comprehensive understanding of the effectiveness of existing techniques as well as their complementarity to each other. We propose to fill this gap through a large-scale empirical study where we systematically evaluate natural language to code techniques. Specifically, we consider six state-of-the-art techniques targeting code search, and four targeting code generation. Through extensive evaluations on a dataset of 22K+ natural language queries, our study reveals the following major findings: (1) code search techniques based on model pre-training are so far the most effective, while code generation techniques can also provide promising results; (2) complementarity widely exists among the existing techniques; and (3) combining the ten techniques together can enhance performance by 35% compared with the most effective standalone technique. Finally, we propose a post-processing strategy to automatically integrate different techniques based on their generated code. Experimental results show that our devised strategy is both effective and extensible.
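    One standard way to exploit the complementarity observed in finding (2) is rank fusion over the candidates returned by each technique. The sketch below uses reciprocal rank fusion, a common baseline and not necessarily the paper's own post-processing strategy; the technique outputs and snippet names are hypothetical:

```python
from collections import defaultdict

def reciprocal_rank_fusion(rankings, k=60):
    # Fuse several ranked candidate lists (one per technique) into one list:
    # each candidate scores 1/(k + rank) per list, summed across lists.
    scores = defaultdict(float)
    for ranking in rankings:
        for rank, candidate in enumerate(ranking, start=1):
            scores[candidate] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical top-3 results from two code search techniques and one generator.
search_a = ["snippet_1", "snippet_2", "snippet_3"]
search_b = ["snippet_2", "snippet_1", "snippet_4"]
gen_c    = ["snippet_2", "snippet_5", "snippet_1"]

fused = reciprocal_rank_fusion([search_a, search_b, gen_c])
print(fused[0])  # snippet_2 — ranked highly by all three techniques
```

    Candidates endorsed by several techniques rise to the top even when no single technique ranks them first everywhere, which is how an ensemble can beat the best standalone technique.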

    High-Level Expression of Notch1 Increased the Risk of Metastasis in T1 Stage Clear Cell Renal Cell Carcinoma

    Background: Although metastasis of clear cell renal cell carcinoma (ccRCC) is typically observed in late-stage tumors, T1 stage metastasis of ccRCC can also occur with no definite molecular cause, resulting in inappropriate selection of the surgical method and poor prognosis. Notch signaling is a conserved, widely expressed signaling pathway that mediates various cellular processes in normal development and tumorigenesis. This study aims to explore the potential role and mechanism of Notch signaling in the metastasis of T1 stage ccRCC. Methodology/Principal Findings: The expression of Notch1 and Jagged1 was analyzed in tumor tissues and matched normal adjacent tissues obtained from 51 ccRCC patients. Compared to non-tumor tissues, Notch1 and Jagged1 expression was significantly elevated at both the mRNA and protein levels in tumors. Tissue samples of localized and metastatic tumors were divided into three groups based on their tumor stages, and the relative mRNA expression of Notch1 and Jagged1 was analyzed. Compared to localized tumors, Notch1 expression was significantly elevated in metastatic tumors at T1 stage, while Jagged1 expression was not statistically different between localized and metastatic tumors at any stage. The average size of metastatic tumors was significantly larger than that of localized tumors in T1 stage ccRCC, and the elevated expression of Notch1 was significantly positively correlated with tumor diameter. The functional significance of Notch signaling was studied by transfection of 786-O, Caki-1, and HKC cell lines with full-length expression plasmids of Notch1 and Jagged1.