
    “Big Tales of Indians Ahead:” The Reproduction of Settler Colonial Discourse in the American West

    “Big Tales of Indians Ahead” traces the reproduction of settler colonial discourses (sentiments narrated by a settler society about itself and about the Native American societies that predated it) from the colonial history of the seventeenth century to the present day in the twenty-first century. This study argues that the anti-Indian rhetoric found in early colonial Euro-American writings, particularly Indian captivity narratives, was reproduced by subsequent settler societies throughout the eighteenth and nineteenth centuries in settler narratives from the overland trail migrations and in various forms of popular culture. In the twentieth century these discourses, heavily influenced by past settler discourses, reached wider audiences through new forms of popular culture, particularly Western genre films and mass-produced works of fiction aimed at younger audiences. Finally, this dissertation tracks the ways in which these discourses are still reproduced in contemporary popular culture media and political identities in the American West. From Mary Rowlandson’s Indian captivity narrative of the late seventeenth century to the overland trail settler narratives of the Oregon Trail and the wildly popular Western films of the mid-twentieth century, Native Americans have consistently been reduced, in American collective cultural discourses, to derogatory depictions that tie stereotypes of so-called “Indians” to supposedly inherent racial traits such as savagery, depravity, and violence. This study not only shows that these assertions by a settler population and its descendants have been falsely (and thus unfairly) attributed to racialized notions of “Indianness,” but also provides a clear and consistent historical timeline that tracks these depictions across centuries and across various forms of settler discourse.

    On the real world practice of Behaviour Driven Development

    Surveys of industry practice over the last decade suggest that Behaviour Driven Development is a popular Agile practice. For example, 19% of respondents to the 14th State of Agile annual survey reported using BDD, placing it in the top 13 practices reported. As well as potential benefits, the adoption of BDD necessarily involves an additional cost of writing and maintaining Gherkin features and scenarios and (if used for acceptance testing) the associated step functions. Yet there is a lack of published literature exploring how BDD is used in practice and the challenges experienced by real-world software development efforts. This gap is significant because without understanding current real-world practice, it is hard to identify opportunities to address and mitigate challenges. In order to address this research gap concerning the challenges of using BDD, this thesis reports on a research project which explored: (a) the challenges of applying agile and undertaking requirements engineering in a real-world context; (b) the challenges of applying BDD specifically; and (c) the application of BDD in open-source projects, to understand challenges in this different context. For this purpose, we progressively conducted two case studies, two series of interviews, four iterations of action research, and an empirical study. The first case study was conducted in an avionics company to discover the challenges of using an agile process in a large-scale, safety-critical project environment. Since requirements management was found to be one of the biggest challenges during the case study, we decided to investigate BDD because of its reputation for requirements management. The second case study was conducted in the same company with the aim of discovering the challenges of using BDD in real life. The case study was complemented with an empirical study of the practice of BDD in open-source projects, taking a study sample from the GitHub open-source collaboration site.
As a result of this Ph.D. research, we were able to discover: (i) the challenges of using an agile process in a large-scale, safety-critical organisation, (ii) the current state of BDD in practice, (iii) technical limitations of Gherkin (i.e., the language for writing requirements in BDD), (iv) the challenges of using BDD in a real project, and (v) bad smells in the Gherkin specifications of open-source projects on GitHub. We also presented a brief comparison between the theoretical description of BDD and BDD in practice. This research, therefore, presents lessons learned from BDD in practice and serves as a guide for software practitioners planning on using BDD in their projects.
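The Gherkin scenarios and step functions whose costs the thesis discusses can be illustrated with a minimal sketch. The scenario text, the `Account` class, and the regex-based step registry below are invented for illustration; real projects would use a framework such as behave or Cucumber rather than this hand-rolled matcher.

```python
import re

# A hypothetical Gherkin scenario, as it might appear in a .feature file.
SCENARIO = """\
Scenario: Withdraw cash
  Given the account balance is 100
  When the user withdraws 30
  Then the account balance should be 70
"""

# Minimal step registry mapping regex patterns to step functions,
# mimicking how BDD frameworks bind Gherkin lines to code.
STEPS = []

def step(pattern):
    def register(fn):
        STEPS.append((re.compile(pattern), fn))
        return fn
    return register

class Account:
    def __init__(self):
        self.balance = 0

context = Account()

@step(r"the account balance is (\d+)")
def set_balance(ctx, amount):
    ctx.balance = int(amount)

@step(r"the user withdraws (\d+)")
def withdraw(ctx, amount):
    ctx.balance -= int(amount)

@step(r"the account balance should be (\d+)")
def check_balance(ctx, expected):
    assert ctx.balance == int(expected), ctx.balance

def run(scenario, ctx):
    # Strip the Gherkin keyword, then dispatch to the matching step.
    for line in scenario.splitlines():
        text = re.sub(r"^\s*(Given|When|Then|And)\s+", "", line).strip()
        for pattern, fn in STEPS:
            m = pattern.fullmatch(text)
            if m:
                fn(ctx, *m.groups())

run(SCENARIO, context)
print(context.balance)  # 70
```

The sketch also hints at the maintenance cost the thesis studies: every new phrasing in a scenario requires a matching pattern and step function.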

    Software Design Change Artifacts Generation through Software Architectural Change Detection and Categorisation

    Unlike other engineering projects, which are mostly built by non-expert workers after engineers complete the design, software is designed, implemented, tested, and inspected entirely by experts. Researchers and practitioners have linked software bugs, security holes, problematic integration of changes, hard-to-understand codebases, unwarranted mental pressure, and similar problems in software development and maintenance to inconsistent and complex design, and to a lack of ways to easily understand what is going on, and what to plan, in a software system. The unavailability of the information and insights development teams need to make good decisions makes these challenges worse. Therefore, extracting software design documents and other insightful information is essential to reduce the above-mentioned anomalies. Moreover, extracting architectural design artifacts is required to create developer profiles for many crucial market-facing scenarios. To that end, architectural change detection, categorization, and change description generation are crucial because they are the primary artifacts from which other software artifacts are traced. However, it is not feasible for humans to analyze all the changes in a single release to detect change and impact, because doing so is time-consuming, laborious, costly, and inconsistent. In this thesis, we conduct six studies addressing these challenges to automate architectural change information extraction and document generation, which could potentially assist development and maintenance teams. In particular, (1) we detect architectural changes using lightweight techniques that leverage textual and codebase properties, (2) categorize them from intelligent perspectives, and (3) generate design change documents by exploiting precise contexts of components’ relations and change purposes, which were previously unexplored.
Our experiments using 4,000+ architectural change samples and 200+ design change documents suggest that our proposed approaches are promising in accuracy and scalable enough to deploy frequently. Our proposed change detection approach can detect up to 100% of architectural change instances (and is very scalable). Our proposed change classifier’s F1 score, on the other hand, is 70%, which is promising given the challenges. Finally, our proposed system can produce descriptive design change artifacts with 75% significance. Since most of our studies are foundational, our approaches and prepared datasets can be used as baselines for advancing research in design change information extraction and documentation.
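A lightweight, textual change detector of the kind described above might, in a highly simplified sketch, compare the identifier vocabularies of two versions of a module. The Jaccard-distance heuristic and the 0.5 threshold below are illustrative assumptions, not the dissertation's actual technique.

```python
import re

def identifiers(source):
    """Extract identifier-like tokens from source code text."""
    return set(re.findall(r"[A-Za-z_][A-Za-z0-9_]*", source))

def change_score(old_src, new_src):
    """Jaccard distance between the identifier sets of two versions."""
    old, new = identifiers(old_src), identifiers(new_src)
    union = old | new
    if not union:
        return 0.0
    return 1.0 - len(old & new) / len(union)

def is_architectural_change(old_src, new_src, threshold=0.5):
    # A large vocabulary shift is treated as a possible architectural change;
    # the threshold is an invented tuning parameter.
    return change_score(old_src, new_src) >= threshold

old = "class OrderService: def place_order(self, cart): pass"
new = "class PaymentGateway: def charge(self, invoice): pass"
print(is_architectural_change(old, new))  # True
```

Because it needs only tokenization and set operations, a detector of this shape scales to every commit of a release, which is the kind of frequent deployment the abstract describes.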

    An Epidemiological and Pharmacokinetic-pharmacodynamic Investigation into the Impact of Carbapenem-resistant Enterobacterales

    Background: According to the 2019 CDC Antibiotic Resistance Threats Report, more than 2.8 million antibiotic-resistant infections occur in the United States each year, leading to more than 35,000 deaths. Among the most urgent threats identified by the CDC are carbapenem-resistant Enterobacterales (CRE). Despite efforts to control the spread of these organisms, the number of estimated cases between 2012 and 2017 remained stable. In 2017, an estimated 13,100 hospitalized cases of CRE led to approximately 1,100 deaths and $130 million in attributable healthcare costs. This dissertation seeks to address this issue from both a pharmacokinetic/pharmacodynamic and an epidemiological perspective. Methods: We evaluated the susceptibility of 140 CRE clinical isolates to the novel agents eravacycline and plazomicin using techniques standardized by the Clinical and Laboratory Standards Institute. We performed in vitro static time-kill assays in 8 Verona Integron-encoded metallo-beta-lactamase (VIM)-producing CRE using single and combination exposures of cefepime, meropenem, piperacillin/tazobactam, amikacin, and plazomicin, along with aztreonam and aztreonam/avibactam. Additionally, we performed a 10-year, inverse probability of treatment weighting-adjusted retrospective cohort study comparing the risk of observing a composite outcome of all-cause mortality or discharge to hospice after 14 and 30 days in patients with CRE vs. carbapenem-susceptible Enterobacterales (CSE) infections. In this cohort, we also reported the prevalence of CRE across the decade. Additionally, we compared the organism composition and susceptibilities of isolates cultured in both the CRE and CSE groups. Results: Our CRE isolates showed higher susceptibility to plazomicin than to eravacycline. In time-kill studies, plazomicin was bactericidal against 5/8 isolates as monotherapy.
Meropenem/amikacin and meropenem/plazomicin were bactericidal in all experiments, except for one isolate which regrew against meropenem/plazomicin. Aztreonam/avibactam was bactericidal in all experiments tested. Neither cefepime nor piperacillin/tazobactam improved the activity of plazomicin against our isolates. Cefepime with amikacin showed inconsistent activity. In the retrospective cohort study, the overall incidence of CRE infections was 1.8%. CRE isolates exhibited higher resistance across all routinely tested antimicrobial classes compared to CSE. The CRE population appeared to be largely non-carbapenemase-producing, given the high susceptibility to meropenem and the high prevalence of E. cloacae, a known AmpC-producer. Overall, the risk of the composite outcome appeared to be increased only among patients with a bloodstream infection on the index date, and could only be assessed using an exposure of carbapenem-non-susceptible Enterobacterales (CNSE) due to insufficient sample size. However, the results were inconclusive, as they were not statistically significant. Conclusions: The novel antimicrobial agents plazomicin and aztreonam/avibactam were highly active against a collection of CRE including both Klebsiella pneumoniae carbapenemase (KPC)- and VIM-producing isolates. Aztreonam/avibactam, meropenem/amikacin, and meropenem/plazomicin all exhibited comparable bactericidal activity. Furthermore, at an academic medical center in a non-endemic region for CRE, it appears that CRE infection may have increased the risk of experiencing the composite outcome after both 14 and 30 days, but definitive conclusions may not be drawn given the lack of statistical significance and the imprecision in the estimation of the effect.
The difficulty of drawing definitive conclusions from this study, owing to the limited sample size in the CRE and CNSE groups, stresses the importance of developing novel strategies and performing larger, multicenter studies when investigating highly resistant infections with low prevalence.
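The inverse probability of treatment weighting used in the cohort study can be sketched as follows. The toy exposure indicators and propensity scores are invented for illustration; a real analysis would estimate each patient's propensity from covariates (e.g., via logistic regression) before weighting.

```python
def iptw_weights(treated, propensity, stabilized=True):
    """Inverse probability of treatment weights.

    treated: 0/1 exposure indicators (e.g., CRE vs. CSE infection).
    propensity: estimated P(treated = 1 | covariates) for each patient.
    Stabilized weights multiply by the marginal exposure prevalence,
    which reduces the variance of the weighted estimator.
    """
    p_treat = sum(treated) / len(treated)  # marginal exposure prevalence
    weights = []
    for t, ps in zip(treated, propensity):
        if t == 1:
            w = 1.0 / ps
            if stabilized:
                w *= p_treat
        else:
            w = 1.0 / (1.0 - ps)
            if stabilized:
                w *= (1.0 - p_treat)
        weights.append(w)
    return weights

# Toy example: two exposed and two unexposed patients.
w = iptw_weights([1, 1, 0, 0], [0.8, 0.4, 0.5, 0.2])
print([round(x, 3) for x in w])
```

Each weight up-weights patients who received an exposure that was unlikely given their covariates, so the weighted cohort mimics one in which exposure is independent of the measured confounders.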

    Learning representations for effective and explainable software bug detection and fixing

    Software has an integral role in modern life; hence software bugs, which undermine software quality and reliability, have substantial societal and economic implications. The advent of machine learning and deep learning in software engineering has led to major advances in bug detection and fixing approaches, yet they fall short of the desired precision and recall. This shortfall arises from the absence of a “bridge,” known as learning code representations, that can transform information from source code into a representation suitable for effective processing via machine and deep learning. This dissertation builds such a bridge. Specifically, it presents solutions for effectively learning code representations using four distinct methods (context-based, testing-results-based, tree-based, and graph-based), thus improving bug detection and fixing approaches, as well as providing developers insight into the underlying reasoning. The experimental results demonstrate that learned code representations can significantly enhance explainable bug detection and fixing, showcasing the practicality and meaningfulness of the approaches formulated in this dissertation toward improving software quality and reliability.
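As one deliberately simple illustration of what a tree-based code representation starts from, Python's standard `ast` module can reduce a program to a bag of AST node types. This sketch is not the dissertation's method; the toy `buggy`/`fixed` snippets are invented to show how a structural change surfaces in the representation.

```python
import ast
from collections import Counter

def tree_representation(source):
    """Bag of AST node types: a minimal tree-based code representation."""
    tree = ast.parse(source)
    return Counter(type(node).__name__ for node in ast.walk(tree))

# Hypothetical before/after versions of a function with a divide-by-zero fix.
buggy = "def div(a, b):\n    return a / b"
fixed = "def div(a, b):\n    return a / b if b else 0"

rep_buggy = tree_representation(buggy)
rep_fixed = tree_representation(fixed)

# Node types introduced by the fix hint at the structural change.
print(sorted((rep_fixed - rep_buggy).keys()))  # ['Constant', 'IfExp', 'Load', 'Name']
```

Learned representations go far beyond such counts, but the example shows why trees expose structure (the added `IfExp` guard) that a flat token stream blurs.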

    FlaKat: A Machine Learning-Based Categorization Framework for Flaky Tests

    Flaky tests can pass or fail non-deterministically, without alterations to the software system. Such tests are frequently encountered by developers and undermine the credibility of test suites. Thus, flaky tests have caught the attention of researchers in recent years. Numerous approaches have been published on defining, locating, and categorizing flaky tests, along with auto-repairing strategies for specific types of flakiness. Practitioners have developed several techniques to detect flaky tests automatically. The most traditional approaches adopt repeated execution of test suites accompanied by techniques such as shuffled execution order and random distortion of the environment. State-of-the-art research also incorporates machine learning solutions into flaky test detection and achieves reasonably good accuracy. Moreover, strategies for repairing flaky tests have been published for specific flaky test categories, and the process has been automated as well. However, there is a research gap between flaky test detection and category-specific flakiness repair. To address this gap, this thesis proposes a novel categorization framework, called FlaKat, which uses machine-learning classifiers for fast and accurate categorization of a given flaky test case. FlaKat first parses and converts raw flaky tests into vector embeddings. The dimensionality of the embeddings is reduced and then used for training machine-learning classifiers. Sampling techniques are applied to address the imbalance between flaky test categories in the dataset. The evaluation of FlaKat was conducted to determine its performance under different combinations of configurations using known flaky tests from 108 open-source Java projects.
Notably, Implementation-Dependent and Order-Dependent flaky tests, which represent almost 75% of the total dataset, achieved F1 scores (harmonic mean of precision and recall) of 0.94 and 0.90 respectively, while the overall macro average (no weight difference between categories) is 0.67. This research also proposes a new evaluation metric, called Flakiness Detection Capacity (FDC), for measuring the accuracy of classifiers from the perspective of information theory, and provides a proof of its effectiveness. The final FDC results also align with the F1 score regarding which classifier yields the best flakiness classification.
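The relationship between the per-category F1 scores and the lower macro average quoted above follows directly from the definitions; a short sketch with invented per-category precision and recall numbers:

```python
def f1(precision, recall):
    """Harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Hypothetical per-category scores. The macro average weights every
# category equally, so a rare, hard-to-classify flakiness category
# drags the average down even when the dominant categories score high.
per_category = {
    "Implementation-Dependent": f1(0.95, 0.93),
    "Order-Dependent": f1(0.92, 0.88),
    "Rare-Category": f1(0.30, 0.25),
}
macro_f1 = sum(per_category.values()) / len(per_category)
print(round(macro_f1, 2))  # 0.7
```

This is why a 0.94/0.90 on the two dominant categories can coexist with a 0.67 macro average: the remaining, sparsely represented categories count just as much.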

    LIPIcs, Volume 261, ICALP 2023, Complete Volume

    LIPIcs, Volume 261, ICALP 2023, Complete Volume

    Specificity of the innate immune responses to different classes of non-tuberculous mycobacteria

    Mycobacterium avium is the most common nontuberculous mycobacterium (NTM) species causing infectious disease. Here, we characterized an M. avium infection model in zebrafish larvae and compared it to M. marinum infection, a model of tuberculosis. M. avium bacteria are efficiently phagocytosed and frequently induce granuloma-like structures in zebrafish larvae. Although macrophages can respond to both mycobacterial infections, their migration speed is faster in infections caused by M. marinum. Tlr2 plays a conserved role in most aspects of the defense against both mycobacterial infections. However, Tlr2 affects the migration speed of macrophages and neutrophils to M. marinum infection sites, an effect not observed with M. avium. Using RNAseq analysis, we found distinct transcriptome responses in cytokine-cytokine receptor interaction between M. avium and M. marinum infection. In addition, we found differences in gene expression in metabolic pathways, phagosome formation, matrix remodeling, and apoptosis in response to these mycobacterial infections. In conclusion, we characterized a new M. avium infection model in zebrafish that can be further used in studying the pathological mechanisms of NTM-caused diseases.

    On Making Fiction: Frankenstein and the Life of Stories

    Fiction is generally understood to be a fascinating, yet somehow deficient affair, merely derivative of reality. What if we could, instead, come up with an affirmative approach that takes stories seriously in their capacity to bring forth a substance of their own? Iconic texts such as Mary Shelley's Frankenstein and its numerous adaptations stubbornly resist our attempts to classify them as mere representations of reality. The author shows how these texts insist that we take them seriously as agents and interlocutors in our world- and culture-making activities. Drawing on this analysis, she develops a theory of narrative fiction as a generative practice.

    Intelligent Software Tooling For Improving Software Development

    Software has eaten the world: many of the necessities and quality-of-life services people use require software. Therefore, tools that improve the software development experience, such as those for generating code and test cases, detecting bugs, and question answering, can have a significant impact on the world. The success of Deep Learning (DL) over the past decade has driven huge advances in automation across many domains, including software development processes. One of the main reasons behind this success is the availability of large datasets to train on, such as open-source code available through GitHub, or datasets of mobile Graphical User Interface (GUI) images such as RICO and ReDRAW. Therefore, the central research question my dissertation explores is: in what ways can the software development process be improved by leveraging DL techniques on the vast amounts of unstructured software engineering artifacts? We refer to approaches that leverage DL to automate or augment various software development tasks as Intelligent Software Tools. To guide our research on these intelligent software tools, we performed a systematic literature review to understand the current landscape of research applying DL techniques to software tasks, and any gaps that exist. From this literature review, we found code generation to be one of the most studied tasks, while other tasks and artifacts, such as impact analysis or tasks involving images and videos, are understudied. Therefore, we set out to explore the application of DL to these understudied tasks and artifacts, as well as the limitations of DL models on the well-studied task of code completion, a subfield of code generation. Specifically, we developed a tool for automatically detecting duplicate mobile bug reports from user-submitted videos. We used the popular Convolutional Neural Network (CNN) architecture to learn important features from a large collection of mobile screenshots.
Using this model, we can then compute the similarity between a newly submitted bug report and existing ones to produce a ranked list of duplicate candidates for a developer to review. Next, we explored impact analysis, a critical software maintenance task that identifies the potential adverse effects of a given code change on the larger software system. To this end, we created Athena, a novel approach to impact analysis that integrates knowledge of a software system, through its call graph, with high-level representations of the code inside the system to improve impact analysis performance. Lastly, we explored the task of code completion, which has seen heavy interest from industry and academia. Specifically, we explored various methods that modify the positional encoding scheme of the Transformer architecture, allowing these models to incorporate longer sequences of tokens when predicting completions than were seen during their training, as this can significantly improve training times.
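The baseline that positional-encoding modifications depart from is the standard sinusoidal scheme of the original Transformer. The sketch below shows that textbook scheme, not the specific variants explored in the dissertation; because it is a fixed function of position rather than a learned table, it can be evaluated at positions beyond any sequence length seen during training, which is what motivates studying schemes that extrapolate well.

```python
import math

def sinusoidal_encoding(position, d_model):
    """Sinusoidal positional encoding for a single position.

    Interleaves sin/cos pairs at geometrically spaced frequencies,
    as in the original Transformer paper. d_model is assumed even.
    """
    enc = []
    for i in range(d_model // 2):
        freq = 1.0 / (10000 ** (2 * i / d_model))
        enc.append(math.sin(position * freq))
        enc.append(math.cos(position * freq))
    return enc

# Works at an arbitrary position, e.g. far past typical training lengths.
vec = sinusoidal_encoding(position=5000, d_model=8)
print(len(vec))  # 8
```

Learned absolute position embeddings, by contrast, are simply undefined past the training window, which is why encoding schemes that generalize to longer contexts are an active design question.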