
    Perfect is the enemy of test oracle

    Automation of test oracles is one of the most challenging facets of software testing, yet it remains less addressed than automated test input generation. A test oracle relies on a ground truth that can distinguish correct from buggy behavior in order to determine whether a test fails (detects a bug) or passes. What makes the oracle problem challenging and undecidable is the assumption that the ground truth must know the exact expected, correct, or buggy behavior. However, we argue that one can still build an accurate oracle without knowing the exact correct or buggy behavior, only how the two might differ. This paper presents SEER, a learning-based approach that, in the absence of test assertions or other types of oracle, can determine whether a unit test passes or fails on a given method under test (MUT). To build the ground truth, SEER jointly embeds unit tests and the implementations of MUTs into a unified vector space, such that the neural representations of tests are similar to those of the MUTs they pass on, but dissimilar to those of the MUTs they fail on. A classifier built on top of this vector representation serves as the oracle, generating a "fail" label when test inputs detect a bug in the MUT and a "pass" label otherwise. Our extensive experiments applying SEER to more than 5K unit tests from a diverse set of open-source Java projects show that the produced oracle is (1) effective in predicting fail or pass labels, achieving an overall accuracy, precision, recall, and F1 measure of 93%, 86%, 94%, and 90%; (2) generalizable, predicting the labels for unit tests of projects that were in neither the training nor the validation set with negligible performance drop; and (3) efficient, detecting the existence of bugs in only 6.5 milliseconds on average. Comment: Published in ESEC/FSE 202
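
    The joint-embedding mechanism described in the abstract can be pictured with a few lines of PyTorch. This is a minimal sketch of the idea, not SEER's actual architecture: the encoder, tokenisation, loss, and threshold below are hypothetical stand-ins.

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class JointEmbedder(nn.Module):
            """Embeds token-id sequences (unit tests or MUTs) into one vector space."""
            def __init__(self, vocab_size: int, dim: int = 128):
                super().__init__()
                self.embed = nn.Embedding(vocab_size, dim)
                self.encoder = nn.GRU(dim, dim, batch_first=True)

            def forward(self, tokens: torch.Tensor) -> torch.Tensor:
                _, h = self.encoder(self.embed(tokens))
                return F.normalize(h[-1], dim=-1)  # unit-length embeddings

        def contrastive_loss(test_vec, mut_vec, passes, margin: float = 0.5):
            # passes = 1.0 pulls a test towards a MUT it passes on;
            # passes = 0.0 pushes it away from a MUT it fails on.
            sim = (test_vec * mut_vec).sum(-1)  # cosine similarity of unit vectors
            return (passes * (1.0 - sim) + (1.0 - passes) * F.relu(sim - margin)).mean()

        def oracle(test_vec, mut_vec, threshold: float = 0.5):
            # The learned oracle: predict "pass" when embeddings are similar, else "fail".
            return (test_vec * mut_vec).sum(-1) > threshold

    Training on labelled (test, MUT, pass/fail) triples shapes the space so that similarity alone carries the pass/fail signal; the paper's classifier plays the role of the fixed threshold used here.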

    Towards A Graphene Chip System For Blood Clotting Disease Diagnostics

    Point of care diagnostics (POCD) allows the rapid, accurate measurement of analytes near to a patient. This enables faster clinical decision making and can lead to earlier diagnosis and better patient monitoring and treatment. However, despite many prospective POCD devices being developed for a wide range of diseases, this promised technology is yet to be translated to a clinical setting due to the lack of a cost-effective biosensing platform.

    This thesis focuses on the development of a highly sensitive, low cost, and scalable biosensor platform that combines graphene with semiconductor fabrication techniques to create graphene field-effect transistor biosensors. The key challenges of designing and fabricating a graphene-based biosensor are addressed. This work focuses on a specific platform for blood clotting disease diagnostics, but the platform has the capability of being applied to any disease with a detectable biomarker.

    Multiple sensor designs were tested during this work that maximised sensor efficiency and costs for different applications. The multiplex design enabled different graphene channels on the same chip to be functionalised with unique chemistry. The Inverted MOSFET design was created, which allows back-gated measurements to be performed whilst keeping the graphene channel open for functionalisation. The Shared Source and Matrix design maximises the total number of sensing channels per chip, resulting in the most cost-effective fabrication approach for a graphene-based sensor (decreasing the cost per channel from £9.72 to £4.11).

    The challenge of integrating graphene into a semiconductor fabrication process is also addressed through the development of a novel vacuum transfer methodology that allows photoresist-free transfer. The two main fabrication processes, graphene supplied on the wafer ("Pre-Transfer") and graphene transferred after metallisation ("Post-Transfer"), were compared in terms of graphene channel resistance and final graphene quality (defect density and photoresist residue). The Post-Transfer process produced higher quality graphene (less damage, residue, and doping, confirmed by Raman spectroscopy).

    Following sensor fabrication, the next stages of creating a sensor platform involve the passivation and packaging of the sensor chip. Different dielectric deposition approaches are compared for passivation. Molecular Vapour Deposition (MVD) of Al2O3 was shown to produce graphene channels with lower damage than unprocessed graphene, and also improves graphene doping, bringing the Dirac point of the graphene close to 0 V. The packaging integration of microfluidics is investigated, comparing traditional soft lithography approaches with a newer 3D-printed microfluidic approach. Specific microfluidic packaging for blood separation towards a blood-sampling point of care sensor is examined, identifying the laminar approach for lowering blood cell count as a method of pre-processing the blood sample before sensing.

    To test the sensitivity of the Post-Transfer, MVD-passivated graphene sensor developed in this work, real-time IV measurements were performed to identify thrombin protein binding in real time on the graphene surface. The sensor was functionalised using a thrombin-specific aptamer solution, and real-time IV measurements were performed on the functionalised graphene sensor with a range of biologically relevant protein concentrations. The resulting sensitivity of the graphene sensor was in the 1-100 pg/ml concentration range, producing a resistance change of 0.2% per pg/ml. Specificity was confirmed using a non-thrombin-specific aptamer as the negative control. These results indicate that the graphene sensor platform developed in this thesis has potential as a highly sensitive POCD. The processes developed here can be used to develop graphene sensors for multiple biomarkers in the future.
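
    As a worked example of the reported sensitivity, the toy calibration below inverts the stated linear response (a 0.2% resistance change per pg/ml, within the 1-100 pg/ml range) to estimate concentration from a real-time resistance reading. The function and example values are hypothetical and assume a strictly linear regime.

        def concentration_from_resistance(r_baseline_ohm: float,
                                          r_measured_ohm: float,
                                          sensitivity_per_pg_ml: float = 0.002) -> float:
            """Estimate analyte concentration (pg/ml) from the relative change
            in graphene channel resistance, assuming the linear ~0.2% per pg/ml
            response reported above."""
            relative_change = (r_measured_ohm - r_baseline_ohm) / r_baseline_ohm
            return relative_change / sensitivity_per_pg_ml

        # e.g. a 4% rise over a 1 kOhm baseline corresponds to roughly 20 pg/ml
        print(concentration_from_resistance(1000.0, 1040.0))  # 20.0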

    Foundations for programming and implementing effect handlers

    First-class control operators provide programmers with an expressive and efficient means of manipulating control, by reifying the current control state as a first-class object; this enables programmers to implement their own computational effects and control idioms as shareable libraries. Effect handlers provide a particularly structured approach to programming with first-class control, by naming control-reifying operations and separating them from their handling. This thesis is composed of three strands of work in which I develop operational foundations for programming and implementing effect handlers, and explore the expressive power of effect handlers.

    The first strand develops a fine-grain call-by-value core calculus of a statically typed programming language with a structural notion of effect types, as opposed to the nominal notion of effect types that dominates the literature. With the structural approach, effects need not be declared before use. The usual safety properties of statically typed programming are retained by making crucial use of row polymorphism to build and track effect signatures. The calculus features three forms of handlers: deep, shallow, and parameterised. Each offers a different approach to manipulating the control state of programs. Traditional deep handlers are defined by folds over computation trees, and are the original construct proposed by Plotkin and Pretnar. Shallow handlers are defined by case splits (rather than folds) over computation trees. Parameterised handlers are deep handlers extended with a state value that is threaded through the folds over computation trees. To demonstrate the usefulness of effects and handlers as a practical programming abstraction, I implement the essence of a small UNIX-style operating system, complete with multi-user environment, time-sharing, and file I/O.

    The second strand studies continuation passing style (CPS) and abstract machine semantics, which are foundational techniques that admit a unified basis for implementing deep, shallow, and parameterised effect handlers in the same environment. The CPS translation is obtained through a series of refinements of a basic first-order CPS translation for a fine-grain call-by-value language into an untyped language. Each refinement moves toward a more intensional representation of continuations, eventually arriving at the notion of generalised continuations, which admit simultaneous support for deep, shallow, and parameterised handlers. The initial refinement adds support for deep handlers by representing stacks of continuations and handlers as a curried sequence of arguments. The image of the resulting translation is not properly tail-recursive, meaning some function application terms do not appear in tail position. To rectify this, the CPS translation is refined once more to obtain an uncurried representation of stacks of continuations and handlers. Finally, the translation is made higher-order in order to contract administrative redexes at translation time. The generalised continuation representation is used to construct an abstract machine that provides simultaneous support for all three kinds of effect handlers.

    The third strand explores the expressiveness of effect handlers. First, I show that the deep, shallow, and parameterised notions of handlers are interdefinable by way of typed macro-expressiveness, which provides a syntactic notion of expressiveness that affirms the existence of encodings between handlers, but provides no information about the computational content of the encodings. Second, using a semantic notion of expressiveness, I show that for a class of programs a programming language with first-class control (e.g. effect handlers) admits asymptotically faster implementations than are possible in a language without first-class control.
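
    The distinction between deep and shallow handlers can be illustrated with a small model of computation trees. The Python sketch below is a hypothetical illustration of the concepts, not the thesis's formal calculus: deep handling is a fold that re-handles the continuation, shallow handling is a single case split that hands the continuation back raw, and a parameterised handler would additionally thread a state value through each resumption.

        from dataclasses import dataclass
        from typing import Any, Callable

        @dataclass
        class Value:
            val: Any  # a finished computation

        @dataclass
        class Op:
            name: str                    # effect operation being performed
            payload: Any                 # its argument
            cont: Callable[[Any], Any]   # continuation awaiting the result

        def handle_deep(tree, clauses, ret=lambda v: v):
            """Deep handler: a fold over the computation tree; the handler
            wraps the continuation so every later Op is handled the same way."""
            if isinstance(tree, Value):
                return ret(tree.val)
            resume = lambda x: handle_deep(tree.cont(x), clauses, ret)
            return clauses[tree.name](tree.payload, resume)

        def handle_shallow(tree, clauses):
            """Shallow handler: a single case split; the raw continuation is
            handed to the clause, which decides how the rest is handled."""
            if isinstance(tree, Value):
                return tree.val
            return clauses[tree.name](tree.payload, tree.cont)

        # Nondeterministic choice handled deeply: collect the results of all branches.
        prog = Op("choose", (1, 2),
                  lambda x: Op("choose", (10, 20),
                               lambda y: Value(x + y)))
        sums = handle_deep(prog,
                           {"choose": lambda pair, k: [r for v in pair for r in k(v)]},
                           ret=lambda v: [v])
        assert sums == [11, 21, 12, 22]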

    Unraveling the effect of sex on human genetic architecture

    Sex is arguably the most important differentiating characteristic in most mammalian species, separating populations into different groups with varying behaviors, morphologies, and physiologies based on, amongst other factors, their complement of sex chromosomes. In humans, despite males and females sharing nearly identical genomes, the sexes differ in complex traits and in the risk of a wide array of diseases. Sex provides the genome with a distinct hormonal milieu, differential gene expression, and environmental pressures arising from gendered societal roles. This raises the possibility of gene-by-sex (GxS) interactions that may contribute to some of the phenotypic differences observed between the sexes. In recent years there has been growing evidence of GxS, with common genetic variation presenting different effects on males and females. These studies, however, have been limited in the number of traits studied and/or in statistical power. Understanding sex differences in genetic architecture is of great importance, as it could improve our understanding of potential differences in underlying biological pathways and disease etiology between the sexes, and in turn help inform personalised treatments and precision medicine. In this thesis we provide insights into both the scope and the mechanism of GxS across the genomes of circa 450,000 individuals of European ancestry and 530 complex traits in the UK Biobank. We found small yet widespread differences in genetic architecture across traits through the calculation of sex-specific heritability, genetic correlations, and sex-stratified genome-wide association studies (GWAS). We further investigated whether sex-agnostic (non-stratified) efforts could be missing information of interest, including sex-specific trait-relevant loci and increased phenotype prediction accuracies. Finally, we studied the potential functional role of sex differences in genetic architecture through sex-biased expression quantitative trait loci (eQTL) and gene-level analyses. Overall, this study marks a broad examination of the genetics of sex differences. Our findings parallel previous reports, suggesting the presence of sexual genetic heterogeneity across complex traits of generally modest magnitude. Furthermore, our results suggest the need to consider sex-stratified analyses in future studies in order to shed light on possible sex-specific molecular mechanisms.
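
    One standard way to test for a GxS interaction at a single variant, given sex-stratified GWAS summary statistics, is to compare the male and female effect estimates with a heterogeneity z-test. The sketch below shows that textbook test; it is illustrative, and not necessarily the exact procedure used in the thesis.

        import math
        from scipy.stats import norm

        def gxs_z_test(beta_m: float, se_m: float,
                       beta_f: float, se_f: float, r: float = 0.0):
            """Two-sided test for a sex difference in a variant's effect.
            r is the correlation between the two estimates (0 when the male
            and female samples are independent)."""
            z = (beta_m - beta_f) / math.sqrt(se_m**2 + se_f**2 - 2 * r * se_m * se_f)
            return z, 2 * norm.sf(abs(z))

        # e.g. beta = 0.042 (SE 0.008) in males vs 0.015 (SE 0.009) in females
        z, p = gxs_z_test(0.042, 0.008, 0.015, 0.009)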

    A productive response to legacy system petrification

    Requirements change. The requirements of a legacy information system change, often in unanticipated ways, and at a more rapid pace than the rate at which the information system itself can be evolved to support them. The capabilities of a legacy system progressively fall further and further behind their evolving requirements, in a degrading process termed petrification. As systems petrify, they deliver diminishing business value, hamper business effectiveness, and drain organisational resources. To address legacy systems, the first challenge is to understand how to shed their resistance to tracking requirements change. The second challenge is to ensure that a newly adaptable system never again petrifies into a change-resistant legacy system. This thesis addresses both challenges. The approach outlined herein is underpinned by an agile migration process - termed Productive Migration - that homes in on the specific causes of petrification within each particular legacy system and provides guidance on how to address them. That guidance comes in part from a personalised catalogue of petrifying patterns, which capture recurring themes underlying petrification. These steer us to the problems actually present in a given legacy system and lead us to suitable antidote productive patterns with which we can deal with those problems one by one. To prevent newly adaptable systems from again degrading into legacy systems, we appeal to a follow-on process, termed Productive Evolution, which embraces and keeps pace with change rather than resisting and falling behind it. Productive Evolution teaches us to be vigilant against signs of system petrification and helps us to nip them in the bud. The aim is to nurture systems that remain supportive of the business, that are adaptable in step with ongoing requirements change, and that continue to retain their value as significant business assets.

    Systems methods for analysis of heterogeneous Glioblastoma datasets towards elucidation of inter-tumoural resistance pathways and new therapeutic targets

    This PhD thesis describes an endeavour to compile the literature on key Glioblastoma molecular mechanisms into a directed network following Disease Maps standards, to analyse its topology, and to compare the results with quantitative analyses of multi-omics datasets, in order to investigate Glioblastoma resistance mechanisms. The work also integrated the implementation of good data management practices and procedures.
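
    The kind of computation involved in the topology analysis step can be sketched with networkx on a toy directed mechanism network; the gene names and measures below are illustrative placeholders, not the thesis's actual map or results.

        import networkx as nx

        # A toy directed mechanism network (edge: regulator -> target).
        g = nx.DiGraph()
        g.add_edges_from([("EGFR", "PI3K"), ("PI3K", "AKT"),
                          ("AKT", "MTOR"), ("TP53", "AKT")])

        # Simple topology measures of the kind used to flag influential nodes.
        hubs = sorted(g.out_degree, key=lambda kv: kv[1], reverse=True)
        betweenness = nx.betweenness_centrality(g)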

    Chinese Benteng Women's Participation in Local Development Affairs in Indonesia: Appropriate means for struggle and a pathway to claim citizens' rights?

    More than two decades have passed since the devastating Asian financial crisis of 1997 and Suharto's subsequent fall from the presidential throne he had occupied for more than three decades. The financial turmoil turned into a political disaster and led to massive looting that severely impacted Indonesians of Chinese descent, as well as the still-unresolved atrocious sexual violence against women and covert killings of students and democracy activists in this country. Since then, precisely in the aftermath of May 1998, publicly known as "Reformasi", Indonesia has undergone political reform that eventually corresponded positively with its macroeconomic growth. Twenty years later, in 2018, Indonesia captured worldwide attention by successfully hosting two internationally renowned events: the Asian Games 2018, the most prestigious sporting event in Asia, held in Jakarta and Palembang; and the IMF/World Bank Annual Meeting 2018 in Bali. The IMF/World Bank Annual Meeting in particular significantly elevated Indonesia's credibility and international prestige in the global economic powerplay, as one of the nations with promising growth and openness. However, narratives about poverty and inequality, including rising racial tension, religious conservatism, and sexual violence against women, have been superseded by a climate friendly to foreign investment and, eventually, by excessive glorification of the nation's economic growth. By portraying the image of a promising new economic power, as rhetorically promised by President Joko Widodo during his presidential terms, Indonesia has swept the growing inequality of this highly stratified society, historically compounded with religious and racial tension, under the carpet of the digital economy.

    Developing automated meta-research approaches in the preclinical Alzheimer's disease literature

    Alzheimer's disease is a devastating neurodegenerative disorder for which there is no cure. A crucial part of the drug development pipeline involves testing therapeutic interventions in animal disease models. However, promising findings in preclinical experiments have not translated into clinical trial success. Reproducibility has often been cited as a major issue affecting biomedical research, where experimental results in one laboratory cannot be replicated in another. By using meta-research (research on research) approaches such as systematic reviews, researchers aim to identify and summarise all available evidence relating to a specific research question. By conducting a meta-analysis, researchers can also combine the results from different experiments statistically to understand the overall effect of an intervention and to explore reasons for variation seen across different publications. Systematic reviews of the preclinical Alzheimer's disease literature could inform decision making, encourage research improvement, and identify gaps in the literature to guide future research. However, due to the vast amount of potentially useful evidence from animal models of Alzheimer's disease, it remains difficult to make sense of and utilise these data effectively. Systematic reviews are common practice within evidence-based medicine, yet their application to preclinical research is often limited by the time and resources required.

    In this thesis, I develop, build upon, and implement automated meta-research approaches to collect, curate, and evaluate the preclinical Alzheimer's literature. I searched several biomedical databases to obtain all research relevant to Alzheimer's disease. I developed a novel deduplication tool to automatically identify and remove duplicate publications found across different databases with minimal human effort. I trained a crowd of reviewers to annotate a subset of the publications identified and used this data to train a machine learning algorithm to screen the remaining publications for relevance. I developed text-mining tools to extract model, intervention, and treatment information from publications, and I improved existing automated tools that extract reported measures to reduce the risk of bias. Using these tools, I created a categorised database of research in transgenic Alzheimer's disease animal models and a visual summary of this dataset on an interactive, openly accessible online platform. Using the techniques described, I also identified relevant publications within the categorised dataset to perform systematic reviews of two key outcomes of interest in transgenic Alzheimer's disease models: (1) synaptic plasticity and transmission in hippocampal slices and (2) motor activity in the open field test.

    Over 400,000 publications were identified across biomedical research databases, of which 230,203 were unique. In a performance evaluation across different preclinical datasets, the automated deduplication tool I developed identified over 97% of duplicate citations and had an error rate similar to that of human performance. When evaluated on a test set of publications, the machine learning classifier trained to identify relevant research in transgenic models was highly sensitive (capturing 96.5% of relevant publications) and excluded 87.8% of irrelevant publications. Tools to identify the model(s) and outcome measure(s) within the full text of publications may reduce the burden on reviewers and were found to be more sensitive than searching only the title and abstract of citations. Automated tools to assess risk-of-bias reporting were highly sensitive and have the potential to monitor research improvement over time. The final dataset of categorised Alzheimer's disease research contained 22,375 publications, which were then visualised in the interactive web application. Within the application, users can see how many publications report measures to reduce the risk of bias and how many have been classified as using each transgenic model, testing each intervention, and measuring each outcome. Users can also filter to obtain curated lists of relevant research, allowing them to perform systematic reviews at an accelerated pace, with reduced effort required to search across databases and a reduced number of publications to screen for relevance.

    Both systematic reviews and meta-analyses highlighted failures to report key methodological information within publications. Poor transparency of reporting limited the statistical power available to understand the sources of between-study variation. However, some variables were found to explain a significant proportion of the heterogeneity. The transgenic animal model used had a significant impact on results in both reviews. For certain open field test outcomes, the wall colour of the open field arena and the reporting of measures to reduce the risk of bias were found to impact results. For in vitro electrophysiology experiments measuring synaptic plasticity, several electrophysiology parameters, including the magnesium concentration of the recording solution, were found to explain a significant proportion of the heterogeneity. Automated meta-research approaches and curated web platforms summarising preclinical research have the potential to accelerate the conduct of systematic reviews and maximise the potential of existing evidence to inform translation.
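
    The deduplication step can be pictured with a minimal matching rule: two citation records are duplicates if their DOIs agree or their normalised titles are near-identical. This sketch shows the idea only; the actual tool developed in the thesis is more sophisticated, and the record format and threshold here are hypothetical.

        import re
        from difflib import SequenceMatcher

        def _normalise(text: str) -> str:
            # Lower-case and strip punctuation so formatting drift is ignored.
            return re.sub(r"[^a-z0-9 ]", "", (text or "").lower()).strip()

        def is_duplicate(rec_a: dict, rec_b: dict, threshold: float = 0.95) -> bool:
            """Flag two citation records as duplicates when their DOIs match
            or their normalised titles are near-identical."""
            if rec_a.get("doi") and rec_a.get("doi") == rec_b.get("doi"):
                return True
            ratio = SequenceMatcher(None, _normalise(rec_a.get("title", "")),
                                    _normalise(rec_b.get("title", ""))).ratio()
            return ratio >= threshold

        # e.g. the same record exported by two databases with formatting drift
        a = {"title": "Developing automated meta-research approaches"}
        b = {"title": "Developing Automated Meta-Research Approaches."}
        assert is_duplicate(a, b)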