
    Deep generative models for network data synthesis and monitoring

    Measurement and monitoring are fundamental tasks in all networks, enabling downstream management and optimization of the network. Although networks inherently produce abundant monitoring data, accessing and effectively measuring that data is another matter. The challenges are manifold. First, network monitoring data is inaccessible to external users, and it is hard to provide a high-fidelity dataset without leaking commercially sensitive information. Second, effective data collection covering a large-scale network system can be very expensive, given the growing size of networks, e.g., the number of cells in a radio network and the number of flows in an Internet Service Provider (ISP) network. Third, it is difficult to ensure fidelity and efficiency simultaneously in network monitoring, as the resources available in network elements to support the measurement function are too limited to implement sophisticated mechanisms. Finally, understanding and explaining the behavior of the network becomes challenging due to its size and complex structure. Various emerging optimization-based solutions (e.g., compressive sensing) and data-driven solutions (e.g., deep learning) have been proposed for these challenges, but the fidelity and efficiency of existing methods cannot yet meet current network requirements. The contributions made in this thesis significantly advance the state of the art in network measurement and monitoring techniques. Throughout the thesis, we leverage cutting-edge machine learning technology: deep generative modeling. First, we design and realize APPSHOT, an efficient city-scale network traffic sharing system built on a conditional generative model, which requires only open-source contextual data during inference (e.g., land-use information and population distribution). Second, we develop GENDT, an efficient drive-testing system based on a generative model, which combines graph neural networks, conditional generation, and quantified model uncertainty to enhance the efficiency of mobile drive testing. Third, we design and implement DISTILGAN, a high-fidelity, efficient, versatile, and real-time network telemetry system using latent GANs and spectral-temporal networks. Finally, we propose SPOTLIGHT, an accurate, explainable, and efficient anomaly detection system for the Open RAN (Radio Access Network). The lessons learned through this research are summarized, and interesting topics for future work in this domain are discussed. All proposed solutions have been evaluated on real-world datasets and applied to support different applications in real systems.
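The abstract does not detail APPSHOT's architecture; the interface of a conditional generator, which maps open contextual covariates plus latent noise to synthetic traffic samples, can be sketched roughly as follows (all names, contexts, and coefficients are hypothetical, standing in for what a trained model would learn):

```python
import random

def sample_traffic(context, n_samples=1000, seed=0):
    """Toy conditional sampler: traffic volume depends on contextual
    covariates (population, land-use mix), mimicking the interface of a
    conditional generative model. Coefficients are purely illustrative."""
    rng = random.Random(seed)
    # context-dependent mean: a trained generator would learn this mapping
    base = 0.5 * context["population"] + 200.0 * context["commercial_ratio"]
    # Gaussian noise stands in for the generator's latent-noise input
    return [max(0.0, rng.gauss(base, 0.1 * base)) for _ in range(n_samples)]

# hypothetical contexts: dense commercial district vs. sparse rural cell
urban = sample_traffic({"population": 1000, "commercial_ratio": 0.6})
rural = sample_traffic({"population": 100, "commercial_ratio": 0.1})
```

The point of the interface is that only public covariates condition the sampler, so synthetic traces can be shared without exposing the operator's raw measurements.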

    Southern Adventist University Undergraduate Catalog 2023-2024

    Southern Adventist University's undergraduate catalog for the academic year 2023-2024.

    Designing Optimal Behavioral Experiments Using Machine Learning

    Computational models are powerful tools for understanding human cognition and behavior. They let us express our theories clearly and precisely, and offer predictions that can be subtle and often counter-intuitive. However, this same richness and ability to surprise means our scientific intuitions and traditional tools are ill-suited to designing experiments to test and compare these models. To avoid these pitfalls and realize the full potential of computational modeling, we require tools to design experiments that provide clear answers about which models explain human behavior and the auxiliary assumptions those models must make. Bayesian optimal experimental design (BOED) formalizes the search for optimal experimental designs by identifying experiments that are expected to yield informative data. In this work, we provide a tutorial on leveraging recent advances in BOED and machine learning to find optimal experiments for any kind of model that we can simulate data from, and show how by-products of this procedure allow for quick and straightforward evaluation of models and their parameters against real experimental data. As a case study, we consider theories of how people balance exploration and exploitation in multi-armed bandit decision-making tasks. We validate the presented approach using simulations and a real-world experiment. Compared to experimental designs commonly used in the literature, we show that our optimal designs more efficiently determine which of a set of models best accounts for individual human behavior, and more efficiently characterize behavior given a preferred model. At the same time, formalizing a scientific question such that it can be adequately addressed with BOED can be challenging, and we discuss several potential caveats and pitfalls that practitioners should be aware of. We provide code and tutorial notebooks to replicate all analyses.
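The core quantity in BOED is a design's expected information gain (EIG): the mutual information between the unknown model parameter and the data the design would produce. The paper's machinery targets simulator-only models, but for a design space as simple as "how many Bernoulli trials to run," the objective can be computed exactly by enumeration; this toy sketch (not the paper's estimator) illustrates what is being maximized:

```python
import math

def binom_pmf(k, n, p):
    """Probability of k successes in n Bernoulli(p) trials."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def expected_information_gain(n_trials, thetas, prior):
    """EIG (mutual information, in nats) between a discrete parameter
    theta and the observed success count, for a design of n_trials."""
    eig = 0.0
    for k in range(n_trials + 1):
        # marginal likelihood of seeing k successes under the prior
        marg = sum(pr * binom_pmf(k, n_trials, th)
                   for th, pr in zip(thetas, prior))
        for th, pr in zip(thetas, prior):
            lik = binom_pmf(k, n_trials, th)
            if lik > 0 and marg > 0:
                eig += pr * lik * math.log(lik / marg)
    return eig

# more trials are more informative for separating theta=0.2 from theta=0.8,
# and the gain is capped by the prior entropy log(2)
eig_small = expected_information_gain(2, [0.2, 0.8], [0.5, 0.5])
eig_large = expected_information_gain(10, [0.2, 0.8], [0.5, 0.5])
```

An optimal design maximizes this quantity over the design space; the advances the tutorial covers replace the exact enumeration with simulation-based neural estimators so the same search works when only a simulator is available.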

    Choreographing tragedy into the twenty-first century

    What makes a tragedy? In the fifth century BCE this question found an answer through the conjoined forms of song and dance. Since the mid-twentieth century, and the work of the Tanztheater Wuppertal Pina Bausch, tragedy has been variously articulated as form coming apart at the seams. This thesis approaches tragedy through the work of five major choreographers and a director who each, in some way, turn back to Bausch. After exploring the Tanztheater Wuppertal’s techniques for choreographing tragedy in chapter one, I dedicate a chapter each to Dimitris Papaioannou, Akram Khan, Trajal Harrell, Ivo van Hove with Wim Vandekeybus, and Gisèle Vienne. Bringing together work in Queer and Trans* studies, Performance studies, Classics, Dance, and Classical Reception studies, I work towards an understanding of the ways in which these choreographers articulate tragedy through embodiment and relation. I consider how tragedy transforms into the twenty-first century, and how it shapes what it might mean to live and die with(out) one another. This includes tragic acts of mythic construction, attempts to describe a sense of the world as it collapses, colonial claims to ownership over the earth, and decolonial moves to enact new ways of being human. By developing an expanded sense of both choreography and the tragic, one of my main contributions is a re-theorisation of tragedy that brings together two major pre-existing schools, to understand tragedy not as an event, but as a process. Under these conditions, and the shifting conditions of the world around us, I argue that the choreography of tragedy has allowed, and might continue to allow, us to think about, name, and embody ourselves outside of the ongoing catastrophes we face.

    Software Design Change Artifacts Generation through Software Architectural Change Detection and Categorisation

    Software is designed, implemented, tested, and inspected solely by experts, unlike other engineering projects, which are mostly implemented by non-expert workers after being designed by engineers. Researchers and practitioners have linked software bugs, security holes, problematic integration of changes, complex-to-understand codebases, unwarranted mental pressure, and similar problems in software development and maintenance to inconsistent and complex design and a lack of ways to easily understand what is going on and what to plan in a software system. The unavailability of the information and insights that development teams need to make good decisions makes these challenges worse. Therefore, extracting software design documents and other insightful information is essential to reduce the above-mentioned anomalies. Moreover, architectural design artifact extraction is required to create developer profiles for the market in many crucial scenarios. To that end, architectural change detection, categorization, and change description generation are crucial because they are the primary artifacts from which other software artifacts are traced. However, it is not feasible for humans to analyze all the changes in a single release to detect change and impact, because doing so is time-consuming, laborious, costly, and inconsistent. In this thesis, we conduct six studies addressing these challenges to automate architectural change information extraction and document generation that could potentially assist development and maintenance teams. In particular, (1) we detect architectural changes using lightweight techniques leveraging textual and codebase properties, (2) categorize them considering intelligent perspectives, and (3) generate design change documents by exploiting precise contexts of components’ relations and change purposes, which were previously unexplored. Our experiment using 4000+ architectural change samples and 200+ design change documents suggests that our proposed approaches are promising in accuracy and scalable enough to deploy frequently. Our proposed change detection approach can detect up to 100% of the architectural change instances (and is very scalable). Our proposed change classifier’s F1 score is 70%, which is promising given the challenges. Finally, our proposed system can produce descriptive design change artifacts with 75% significance. Since most of our studies are foundational, our approaches and prepared datasets can be used as baselines for advancing research in design change information extraction and documentation.
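The abstract does not spell out the lightweight textual technique. One minimal sketch in that spirit flags an architectural change when the lexical overlap between two versions of a component drops below a threshold; the tokenizer, threshold, and inputs here are illustrative, not the thesis's actual method or values:

```python
import re

def tokens(source: str) -> set:
    """Crude lexical fingerprint of a component: lowercased identifiers."""
    return set(re.findall(r"[A-Za-z_]\w+", source.lower()))

def architectural_change(old: str, new: str, threshold: float = 0.7) -> bool:
    """Flag a change when Jaccard similarity of the two versions'
    identifier sets falls below `threshold` (illustrative value)."""
    a, b = tokens(old), tokens(new)
    union = a | b
    if not union:
        return False
    return len(a & b) / len(union) < threshold

# hypothetical before/after snippets of one component
changed = architectural_change(
    "class PaymentService { void pay() }",
    "class PaymentGateway { void charge(Invoice inv) }",
)
```

A real detector would combine such textual signals with codebase structure (dependencies, package moves), as the thesis's combination of textual and codebase properties suggests.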

    Fictocritical Cyberfeminism: A Paralogical Model for Post-Internet Communication

    Get PDF
    This dissertation positions the understudied and experimental writing practice of fictocriticism as an analog for the convergent and indeterminate nature of “post-Internet” communication as well a cyberfeminist technology for interfering and in-tervening in metanarratives of technoscience and technocapitalism that structure contemporary media. Significant theoretical valences are established between twen-tieth century literary works of fictocriticism and the hybrid and ephemeral modes of writing endemic to emergent, twenty-first century forms of networked communica-tion such as social media. Through a critical theoretical understanding of paralogy, or that countercultural logic of deploying language outside legitimate discourses, in-volving various tactics of multivocity, mimesis and metagraphy, fictocriticism is ex-plored as a self-referencing linguistic machine which exists intentionally to occupy those liminal territories “somewhere in among/between criticism, autobiography and fiction” (Hunter qtd. in Kerr 1996). Additionally, as a writing practice that orig-inated in Canada and yet remains marginal to national and international literary scholarship, this dissertation elevates the origins and ongoing relevance of fictocriti-cism by mapping its shared aims and concerns onto proximal discourses of post-structuralism, cyberfeminism, network ecology, media art, the avant-garde, glitch feminism, and radical self-authorship in online environments. Theorized in such a matrix, I argue that fictocriticism represents a capacious framework for writing and reading media that embodies the self-reflexive politics of second-order cybernetic theory while disrupting the rhetoric of technoscientific and neoliberal economic forc-es with speech acts of calculated incoherence. 
Additionally, through the inclusion of my own fictocritical writing as works of research-creation that interpolate the more traditional chapters and subchapters, I theorize and demonstrate praxis of this dis-tinctively indeterminate form of criticism to empirically and meaningfully juxtapose different modes of knowing and speaking about entangled matters of language, bod-ies, and technologies. In its conclusion, this dissertation contends that the “creative paranoia” engendered by fictocritical cyberfeminism in both print and digital media environments offers a pathway towards a more paralogical media literacy that can transform the terms and expectations of our future media ecology

    Replicability Study: Corpora For Understanding Simulink Models & Projects

    Background: Empirical studies on widely used model-based development tools such as MATLAB/Simulink are limited despite the tools' importance in various industries. Aims: The aim of this paper is to investigate the reproducibility of previous empirical studies that used Simulink model corpora and to evaluate the generalizability of their results to a newer and larger corpus, including a comparison with proprietary models. Method: The study reviews methodologies and data sources employed in prior Simulink model studies and replicates the previous analysis using SLNET. In addition, we propose a heuristic for determining code-generating Simulink models and assess the open-source models' similarity to proprietary models. Results: Our analysis of SLNET both confirms and contradicts earlier findings and highlights its potential as a valuable resource for model-based development research. We found that open-source Simulink models follow good modeling practices and contain models comparable in size and properties to proprietary models. We also collected and distributed 208 git repositories with over 9k commits, facilitating studies on model evolution. Conclusions: The replication study offers actionable insights and lessons learned from the reproduction process, including valuable information on the generalizability of research findings based on earlier open-source corpora to the newer and larger SLNET corpus. The study sheds light on noteworthy attributes of SLNET, which is self-contained and redistributable.
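The paper's heuristic for identifying code-generating models is not given in the abstract. A plausible stand-in scans a text-format model's configuration for code-generation target files such as Embedded Coder's `ert.tlc` (`SystemTargetFile` and the `.tlc` names are real Simulink configuration values, but the paper's actual heuristic may differ):

```python
# Target files that typically indicate a model configured for production
# code generation (ert = Embedded Coder); grt.tlc is the simulation default.
CODEGEN_TARGETS = ("ert.tlc", "autosar.tlc", "realtime.tlc")

def looks_code_generating(model_text: str) -> bool:
    """Heuristically classify a text-format Simulink model as
    code-generating from its SystemTargetFile setting."""
    text = model_text.lower()
    return any(target in text for target in CODEGEN_TARGETS)

# hypothetical configuration fragments as they appear in .mdl files
embedded = looks_code_generating('SystemTargetFile "ert.tlc"')
simulation = looks_code_generating('SystemTargetFile "grt.tlc"')
```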

    Mapping the Focal Points of WordPress: A Software and Critical Code Analysis

    Programming languages or code can be examined through numerous analytical lenses. This project is a critical analysis of WordPress, a prevalent web content management system, applying four modes of inquiry. The project draws on theoretical perspectives and areas of study in media, software, platforms, code, language, and power structures. The applied research is based on Critical Code Studies, an interdisciplinary field of study that holds the potential as a theoretical lens and methodological toolkit to understand computational code beyond its function. The project begins with a critical code analysis of WordPress, examining its origins and source code and mapping selected vulnerabilities. An examination of the influence of digital and computational thinking follows this. The work also explores the intersection of code patching and vulnerability management and how code shapes our sense of control, trust, and empathy, ultimately arguing that a rhetorical-cultural lens can be used to better understand code's controlling influence. Recurring themes throughout these analyses and observations are the connections to power and vulnerability in WordPress' code and how cultural, processual, rhetorical, and ethical implications can be expressed through its code, creating a particular worldview. Code's emergent properties help illustrate how human values and practices (e.g., empathy, aesthetics, language, and trust) become encoded in software design and how people perceive the software through its worldview. These connected analyses reveal cultural, processual, and vulnerability focal points and the influence these entanglements have concerning WordPress as code, software, and platform. WordPress is a complex sociotechnical platform worthy of further study, as is the interdisciplinary merging of theoretical perspectives and disciplines to critically examine code. Ultimately, this project helps further enrich the field by introducing focal points in code, examining sociocultural phenomena within the code, and offering techniques to apply critical code methods.

    Untersuchung von Performanzveränderungen auf Quelltextebene (Investigation of Performance Changes at the Source-Code Level)

    Changes to the source code of a software system may alter its performance. To prevent the occurrence of regressions, and to verify the effect of source-code changes that are expected to improve performance, it is necessary both to measure the performance impact of source-code changes and to gain a deep understanding of the runtime behavior of the source-code constructs involved. Specifying benchmarks or load tests to detect regressions requires immense manual effort, and understanding the changes often requires further experiments. This thesis develops the approach Peass (Performance analysis of software systems). Peass is based on the assumption that performance changes can be detected by measuring the performance of unit tests. Peass consists of (1) a regression test selection method, which determines, based on static source-code analysis and analysis of runtime behavior, between which commits the performance may have changed, (2) a method for transforming unit tests into performance tests and for measuring performance in a statistically reliable and reproducible way, and (3) a method for supporting the diagnosis of the root causes of performance changes. The Peass approach thus makes it possible to automatically examine performance changes that are measurable through the workload of unit tests. The validity of the approach is evaluated by showing that (1) typical performance problems in artificial test cases and (2) real performance changes tagged by developers can be found by Peass. A case study in an ongoing software development project further shows that Peass is able to detect relevant performance changes.
    Table of contents (translated from the German): 1 Introduction (Motivation; Approach; Research Questions; Contributions; Structure of the Thesis). 2 Foundations (Software Performance Engineering; Model-Based Approach: Overview, Performance Antipatterns; Measurement-Based Approach: Measurement Process, Analysis of Measurements; Measurement in Artificial Environments: Benchmarking, Load Tests, Performance Tests; Measurement in Real Environments — Monitoring: Overview, Implementation, Tools). 3 Regression Test Selection (Approach: Basic Idea, Prerequisites, Two-Stage Process; Static Test Selection: Selected Changes, Process, Implementation; Trace Comparison: Selected Changes, Process, Implementation, Combination with Static Analysis; Evaluation: Implementation, Precision, Correctness, Discussion of Validity; Related Work: Functional Regression Test Selection, Regression Test Selection for Performance Tests). 4 Measurement Process (Comparison of Measurement and Analysis Methods: Procedure, Error Analysis, Workload Size of the Artificial Unit-Test Pairs; Measurement Method: Structure of an Iteration, Stopping of Measurements, Garbage Collection per Iteration, Handling of Standard Output, Summary of the Measurement Method; Analysis Method: Choice of the Statistical Test, Outlier Removal, Parallelization; Evaluation: Comparison with JMH, Reproducibility of Results, Conclusion; Related Work: Stopping of Measurements, Change Detection, Anomaly Detection). 5 Root-Cause Analysis (Reducing the Overhead of Measuring Individual Methods: Generation of Example Projects, Measuring Method Execution Durations, Options for Overhead Reduction, Measurement Results, Verification with MooBench; Measurement Configuration of the Root-Cause Analysis: Foundations, Error Analysis, Approach, Measurement Results; Related Work: Monitoring Overhead, Root-Cause Analysis for Performance Changes, Root-Cause Analysis for Performance Problems). 6 Evaluation (Validation through Artificial Performance Problems: Reproduction through Benchmarks, Transformation of the Benchmarks, Checking Problems with Peass; Evaluation through Real Performance Problems: Investigation of Documented Performance Changes in Open Projects, Investigation of the Performance Changes in GeoMap). 7 Summary and Outlook (Summary; Outlook)
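Peass's first step, regression test selection, decides which unit tests can even observe a given change. A simplified sketch of the trace-intersection idea (the data structures and names here are illustrative, not Peass's actual implementation):

```python
def select_performance_tests(test_traces, changed_methods):
    """Re-measure only the unit tests whose recorded call trace touches a
    method changed between the two commits under comparison; all other
    tests cannot observe the change and are skipped."""
    return {test for test, calls in test_traces.items()
            if calls & changed_methods}

# hypothetical traces: test name -> set of methods it executes
traces = {
    "testCheckout": {"Cart.total", "Tax.rate"},
    "testLogin":    {"Auth.hash"},
}
selected = select_performance_tests(traces, changed_methods={"Tax.rate"})
```

Restricting measurement to this selection is what makes the statistically rigorous (and therefore expensive) repeated measurement of step (2) affordable per commit pair.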