
    Management and Service-aware Networking Architectures (MANA) for Future Internet Position Paper: System Functions, Capabilities and Requirements

    Future Internet (FI) research and development threads have recently been gaining momentum all over the world, and the international race to create a new-generation Internet is in full swing: GENI, Asia Future Internet, Future Internet Forum Korea, and the European Union Future Internet Assembly (FIA). This position paper identifies the research orientation, with a time horizon of 10 years, together with the key challenges for the capabilities of the Management and Service-aware Networking Architectures (MANA) part of the Future Internet, allowing for parallel and federated Internet(s).

    A Fortran Kernel Generation Framework for Scientific Legacy Code

    Quality assurance is an essential part of software development. The complexity of a program's modules and structure impedes both testing and further development: for complex, poorly designed scientific software, module developers and testers must spend considerable extra effort monitoring the impact of unrelated modules and testing the constraints of the whole system. In addition, widely used benchmarks do not give programmers an accurate, program-specific evaluation of system performance, whereas extracted compute kernels can provide considerable insight for performance tuning. Therefore, to improve the productivity of scientific software engineering tasks such as performance tuning, debugging, and verification of simulation results, we developed a prototype platform that automatically extracts compute kernels from complex legacy scientific code. Because scientific experiments involve long-running simulations and very large data transfers, we apply message-passing-based parallelization and I/O optimization to substantially improve the performance of the kernel extractor framework, and we use profiling tools to guide the parallel distribution. Abnormal event detection is another important concern in scientific research; with huge observational datasets combined with simulation results, it becomes not only essential but also extremely difficult. To detect both high-frequency and low-frequency events, we reconfigured the framework with an in-situ data transfer infrastructure. By combining a signal-processing preprocessing step (decimation) with a machine learning detection model trained on the streamed data, the framework significantly reduces the amount of data that must be transferred for concurrent analysis between distributed CPU/GPU compute nodes. Finally, the dissertation presents the implementation of the framework and a demonstration case study on the ACME Land Model (ALM). The generated compute kernels can be used, at low cost, for performance tuning experiments and for quality assurance tasks including debugging legacy code, verifying simulation results by tracking single or multiple variables, collaborating with compiler vendors, and generating custom benchmark tests.
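    As a rough illustration of the decimate-then-detect idea described above, the sketch below downsamples a simulated output stream by block averaging before handing it to a trivial threshold detector standing in for a trained model; the decimation factor, threshold, and injected event are made up for the example and are not taken from the dissertation.

```python
import numpy as np

def decimate(stream, factor=8):
    """Reduce data volume by averaging non-overlapping blocks (simple low-pass + downsample)."""
    n = len(stream) - len(stream) % factor
    return stream[:n].reshape(-1, factor).mean(axis=1)

def detect_events(decimated, threshold=4.0):
    """Flag samples that deviate strongly from the mean (stand-in for a trained ML detector)."""
    z = (decimated - decimated.mean()) / (decimated.std() + 1e-12)
    return np.nonzero(np.abs(z) > threshold)[0]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    signal = rng.normal(size=80_000)      # simulated model output stream
    signal[52_000] += 50.0                # injected "abnormal event"
    reduced = decimate(signal, factor=8)  # 8x less data to transfer downstream
    print("candidate event blocks:", detect_events(reduced))
```

    Even this crude reduction cuts the transferred volume by the decimation factor while keeping a large-amplitude event visible to the downstream detector.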

    Compilation Optimizations to Enhance Resilience of Big Data Programs and Quantum Processors

    Modern computers can experience a variety of transient errors caused by the surrounding environment, known as soft faults. Although the frequency of these faults is low enough not to be noticeable on personal computers, they become a considerable concern in large-scale distributed computations or in more vulnerable environments such as satellites. These faults occur as a bit flip of some value in a register, operation, or memory during execution, and they surface as program crashes, hangs, or silent data corruption (SDC), each of which can waste time, money, and resources. Hardware methods such as shielding or error correcting memory (ECM) exist, but they can be difficult to implement, expensive, and limited to protecting against errors in specific locations. Researchers have therefore been exploring software detection and correction methods as an alternative, commonly trading overhead in execution time or memory usage for protection against faults. Quantum computers, a relatively recent advancement in computing technology, experience similar errors on a much more severe scale: the errors are more frequent, more costly, and harder to detect and correct. Error correction algorithms like Shor's code promise to remove errors completely, but they cannot be implemented on current noisy intermediate-scale quantum (NISQ) systems due to the low number of available qubits. Until physical systems become large enough to support error correction, researchers have instead been studying other methods to reduce and compensate for errors. In this work, we present two methods for improving the resilience of classical processes, both single- and multi-threaded. We then introduce quantum computing and compare the nature of its errors and correction methods to the classical case. We further discuss two designs for improving the compilation of quantum circuits. One, focused on quantum neural networks (QNNs), takes advantage of partial compilation to avoid recompiling the entire circuit each time. The other is a new approach to compiling quantum circuits with graph neural networks (GNNs) to improve circuit resilience and increase fidelity. By using GNNs with reinforcement learning, we can train a compiler to provide qubit allocations that increase the success rate of quantum circuits.
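    As a minimal sketch of the software-resilience trade-off described above (execution-time overhead in exchange for protection), the wrapper below runs a computation several times and majority-votes the results, so a single transient corruption of one copy is detected and masked. The wrapper and the checksum workload are illustrative, not the methods developed in the dissertation.

```python
from collections import Counter

def vote(results):
    """Return the majority result; a lone corrupted copy is outvoted."""
    value, count = Counter(results).most_common(1)[0]
    if count < 2:
        raise RuntimeError("no majority: possible multi-copy corruption")
    return value

def resilient_call(fn, *args, replicas=3):
    """Run fn several times and majority-vote the results (time overhead ~ replicas x)."""
    return vote([fn(*args) for _ in range(replicas)])

def checksum(data):
    # stand-in for a long-running kernel whose output we want to protect from SDC
    return sum(data) % (1 << 32)

if __name__ == "__main__":
    print(resilient_call(checksum, range(1_000_000)))
```

    The same idea can instead duplicate only selected instructions or store extra copies of critical data, trading memory rather than time for protection.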

    No Runs, Few Hits and Many Errors: Street Stops, Bias and Proactive Policing

    Equilibrium models of racial discrimination in law enforcement encounters suggest that, in the absence of racial discrimination, the proportion of searches yielding evidence of illegal activity (the hit rate) will be equal across races. Searches that disproportionately target one racial group, resulting in a relatively low hit rate, are inefficient and suggest bias. An unbiased officer seeking to maximize her hit rate would reduce the number of unproductive stops of the group with the lower hit rate, and an unbiased policing regime would generate no differences in hit rates between groups. We use this framework to test for racial discrimination in pedestrian stops with data from the contentious “Stop, Question and Frisk” (SQF) program of the New York City Police Department (NYPD). SQF produced nearly five million citizen stops from 2004 to 2012. The stops are regulated by both Terry (federal) and DeBour (New York) case law on reasonable suspicion, and they are well documented, including a structured format for reporting the indicia of reasonable suspicion that motivated each stop. We exploit these data to examine the Floyd court's claim (Floyd v. City of New York). We decompose stops on the basis of suspicion, as reported by officers at the time of the stop, and conduct five tests to assess whether racial discrimination characterizes SQF stops: the allocation of officers relative to crime and population in specific areas, the decision to sanction conditional on a stop, the decision to arrest or issue a summons conditional on the decision to sanction, the efficiency of stops in seizing contraband including weapons, and officers' updating of their search activity. In each test, we include the reasonable suspicion rationale that officers indicated as the basis of the stop. We find consistent evidence of disparities in police responses to Black, Hispanic, and Black Hispanic civilians, and significant differences by race in the use of specific indicia of reasonable suspicion that motivate stops. The higher error rates for specific indicia of suspicion suggest that, rather than relying on individualized bases of suspicion, officers may be activating stereotypes and archetypes to articulate suspicion and justify street seizures.
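    The hit-rate comparison at the core of this framework is simple to state; the toy computation below (with invented counts, not the SQF data) shows the quantity being compared across groups.

```python
# Toy illustration of the hit-rate (outcome) test described above.
# Counts are invented for illustration; they are not the SQF figures.
stops = {
    "group_a": {"stops": 10_000, "contraband_found": 300},
    "group_b": {"stops": 10_000, "contraband_found": 600},
}

for group, d in stops.items():
    hit_rate = d["contraband_found"] / d["stops"]
    print(f"{group}: hit rate = {hit_rate:.1%}")

# Under the no-discrimination equilibrium, hit rates should be roughly equal.
# A persistently lower hit rate for one group indicates that stops of that group
# rest on a lower evidentiary threshold, which is the pattern the paper tests for.
```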

    Efficiently Manifesting Asynchronous Programming Errors in Android Apps

    Android, the #1 mobile app framework, enforces the single-GUI-thread model, in which a single UI thread manages GUI rendering and event dispatching. Under this model it is vital to avoid blocking the UI thread for responsiveness, and one common practice is to offload long-running tasks onto async threads. To support this, Android provides various async programming constructs and leaves it to developers to obey the rules the model implies. However, as our study reveals, more than 25% of apps violate these rules and introduce hard-to-detect, fail-stop errors, which we term async programming errors (APEs). To this end, this paper introduces APEChecker, a technique to automatically and efficiently manifest APEs. The key idea is to characterize APEs as specific fault patterns and to synergistically combine static analysis and dynamic UI exploration to detect and verify such errors. Among 40 real-world Android apps, APEChecker unveils and processes 61 APEs, of which 51 are confirmed (an 83.6% hit rate). Specifically, APEChecker detects 3X more APEs than state-of-the-art testing tools (Monkey, Sapienz and Stoat) and reduces testing time from half an hour to a few minutes. On a specific type of APE, APEChecker confirms 5X more errors than the data race detection tool EventRacer, with very few false alarms.

    Safe software development for a video-based train detection system in accordance with EN 50128

    This student research paper gives an overview of selected parts of the software development process for safety-relevant applications, using a video-based train detection system as an example. An IP camera and an external image processing computer were equipped with a custom-built, distributed software system. Written in Ada and C, the two parts communicate via a dedicated UDP-based protocol. Both programs were subjected to intensive analysis according to measures laid down in the EN 50128 standard, which specifically targets software for railway control and protection systems. A structure resembling the standard document, with references to the discussed measures preceding each section, allows for easy comparison with the original requirements of EN 50128. In summary, the techniques proved very suitable for practical safe software development in all but a few edge cases. However, because of its highly abstract requirements, the standard in no way relieves the project staff of their individual responsibility, and the specific measures carried out for this project may therefore not be equally applicable elsewhere.

    Contents: 1 Introduction (Motivation; Description of the problem; Real-time constraints; Safety requirements). 2 Implementation details (Camera type and output format; Transfer protocol; Real-world constraints; Train detection algorithm). 3 EN 50128 requirements: 3.1 Software architecture (Defensive programming; Fully defined interface; Structured methodology; Error detecting and correcting codes; Modelling; Alternative optionally required measures); 3.2 Software design and implementation (Structured methodology; Modular approach; Components; Design and coding standards; Strongly typed programming languages; Alternative optionally required measures); 3.3 Unit testing. 4 Outlook. 5 Conclusion.
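    The defensive programming measure listed under section 3.1.1 can be illustrated with a small sketch: every field of an incoming protocol message is checked against its allowed range before the value is used, and anything out of range is rejected rather than propagated. The message layout and field names below are hypothetical and are not taken from the paper's UDP protocol.

```python
import struct

# Hypothetical message layout (not the paper's actual protocol):
# 2-byte sequence number, 2-byte frame width, 2-byte frame height, 1-byte detection flag.
MESSAGE_FORMAT = ">HHHB"
MAX_WIDTH, MAX_HEIGHT = 4096, 4096

def parse_message(datagram: bytes):
    """Defensively parse a datagram: validate the size and every field before use."""
    if len(datagram) != struct.calcsize(MESSAGE_FORMAT):
        raise ValueError("unexpected datagram length")
    seq, width, height, detected = struct.unpack(MESSAGE_FORMAT, datagram)
    if not (0 < width <= MAX_WIDTH and 0 < height <= MAX_HEIGHT):
        raise ValueError("frame dimensions out of range")
    if detected not in (0, 1):
        raise ValueError("detection flag must be 0 or 1")
    return {"seq": seq, "width": width, "height": height, "train_detected": bool(detected)}

if __name__ == "__main__":
    ok = struct.pack(MESSAGE_FORMAT, 1, 1280, 720, 1)
    print(parse_message(ok))        # accepted
    try:
        parse_message(b"\x00" * 3)  # rejected: wrong length
    except ValueError as e:
        print("rejected:", e)
```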

    Characterizing and Detecting Hateful Users on Twitter

    Most current approaches to characterizing and detecting hate speech focus on the content posted in Online Social Networks (OSNs). They struggle to collect and annotate hateful speech because OSN text is incomplete and noisy and hate speech is subjective, limitations that are often addressed with constraints that oversimplify the problem, such as considering only tweets containing hate-related words. In this work we partially address these issues by shifting the focus towards users. We develop and employ a robust methodology to collect and annotate hateful users which does not depend directly on a lexicon and in which users are annotated on the basis of their entire profile. This results in a sample of Twitter's retweet graph containing 100,386 users, of which 4,972 were annotated. We also collect the users who were banned in the three months that followed the data collection. We show that hateful users differ from normal ones in terms of their activity patterns, word usage, and network structure. We obtain similar results when comparing the neighbors of hateful users with the neighbors of normal users, and suspended users with active users, which increases the robustness of our analysis. We observe that hateful users are densely connected, and thus formulate hate speech detection as a task of semi-supervised learning over a graph, exploiting the network of connections on Twitter. We find that a node embedding algorithm, which exploits the graph structure, outperforms content-based approaches for the detection of both hateful users (95% AUC vs. 88% AUC) and suspended users (93% AUC vs. 88% AUC). Altogether, we present a user-centric view of hate speech, paving the way for better detection and understanding of this relevant and challenging issue. Comment: This is an extended version of the homonymous short paper presented at ICWSM-18.
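    The graph-based semi-supervised formulation can be illustrated with a simple label-propagation sketch over a toy retweet graph; this is only a schematic of learning over the connection graph, not the node embedding algorithm evaluated in the paper, and the graph and seed labels are invented.

```python
import numpy as np

# Toy undirected "retweet" graph: adjacency matrix over 6 users.
A = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
], dtype=float)

# Known labels: user 0 is hateful (1.0), user 5 is normal (0.0); the rest are unlabeled.
labels = {0: 1.0, 5: 0.0}

# Iterative label propagation: each node takes the mean score of its neighbors,
# while the labeled seed nodes stay clamped to their known values.
scores = np.full(A.shape[0], 0.5)
for node, y in labels.items():
    scores[node] = y
for _ in range(100):
    new = A @ scores / A.sum(axis=1)
    for node, y in labels.items():
        new[node] = y
    scores = new

print(np.round(scores, 2))  # higher score = closer to the hateful seed in the graph
```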