
    A Domain Analysis to Specify Design Defects and Generate Detection Algorithms

    Quality experts often need to identify design defects in software systems: recurring design problems that hinder development and maintenance. Consequently, several defect detection approaches and tools have been proposed in the literature. However, we are not aware of any approach that defines and reifies the process of generating detection algorithms from the existing textual descriptions of defects. In this paper, we introduce an approach to automate the generation of detection algorithms from specifications written in a domain-specific language. The domain-specific language is defined from a thorough domain analysis. We specify several design defects, automatically generate detection algorithms using templates, and validate the generated algorithms in terms of precision and recall on Xerces v2.7.0, an open-source object-oriented system.
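    The abstract does not reproduce the domain-specific language itself, so the following is only a minimal Python sketch of the underlying idea: compiling a declarative, rule-like defect specification into an executable detection predicate. The rule schema, metric names, thresholds, and helper functions (compile_rule, detect) are illustrative assumptions, not the paper's notation.

```python
# Hypothetical sketch: compile a declarative defect specification into a
# detection function, in the spirit of generating detectors from a DSL.
# The rule schema and metric names below are illustrative, not the paper's DSL.
import operator

OPS = {">": operator.gt, ">=": operator.ge, "<": operator.lt, "==": operator.eq}

def compile_rule(rule):
    """Turn {'metric': ..., 'op': ..., 'threshold': ...} into a predicate."""
    op = OPS[rule["op"]]
    return lambda cls: op(cls.get(rule["metric"], 0), rule["threshold"])

def detect(classes, rules):
    """Flag every class for which all rules of a defect specification hold."""
    predicates = [compile_rule(r) for r in rules]
    return [c["name"] for c in classes if all(p(c) for p in predicates)]

# Toy "large class with many dependencies" specification.
spec = [{"metric": "methods", "op": ">", "threshold": 50},
        {"metric": "coupling", "op": ">", "threshold": 20}]

classes = [{"name": "XMLParser", "methods": 73, "coupling": 31},
           {"name": "Token", "methods": 4, "coupling": 2}]

print(detect(classes, spec))  # ['XMLParser']
```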

    DxNAT - Deep Neural Networks for Explaining Non-Recurring Traffic Congestion

    Non-recurring traffic congestion is caused by temporary disruptions such as accidents, sports games, and adverse weather. We use data on real-time traffic speed, jam factors (a traffic congestion indicator), and events collected over a year from Nashville, TN to train a multi-layered deep neural network. The traffic dataset contains over 900 million data records. The network is then used to classify real-time data and identify anomalous operations. Compared with traditional statistical or machine-learning approaches, our model reaches an accuracy of 98.73 percent when identifying traffic congestion caused by football games. Our approach first encodes the traffic across a region as a scaled image. The image data from different timestamps is then fused with event- and time-related data. A crossover operator is then used as a data-augmentation method to generate training datasets with more balanced classes. Finally, we use receiver operating characteristic (ROC) analysis to tune the sensitivity of the classifier. We present the analysis of the training time and the inference time separately.
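    As a rough illustration of the architecture described above (traffic encoded as an image, then fused with event- and time-related features before classification), here is a minimal PyTorch sketch. The layer sizes, input dimensions, auxiliary-feature count, and class count are assumptions for illustration, not the actual DxNAT configuration.

```python
# Minimal sketch of an image + auxiliary-feature classifier, assuming a
# 1-channel 64x64 "traffic image" and 8 event/time features; all sizes are
# illustrative, not the DxNAT configuration from the paper.
import torch
import torch.nn as nn

class TrafficCongestionNet(nn.Module):
    def __init__(self, aux_features=8, num_classes=2):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                             # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                             # 32x32 -> 16x16
        )
        self.classifier = nn.Sequential(
            nn.Linear(32 * 16 * 16 + aux_features, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, image, aux):
        x = self.conv(image).flatten(1)      # (batch, 32*16*16)
        x = torch.cat([x, aux], dim=1)       # fuse event/time features
        return self.classifier(x)

model = TrafficCongestionNet()
logits = model(torch.randn(4, 1, 64, 64), torch.randn(4, 8))
print(logits.shape)  # torch.Size([4, 2])
```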

    A Library for Pattern-based Sparse Matrix Vector Multiply

    Pattern-based Representation (PBR) is a novel approach to improving the performance of Sparse Matrix-Vector Multiply (SMVM) numerical kernels. Motivated by our observation that many matrices can be divided into blocks that share a small number of distinct patterns, we generate custom multiplication kernels for frequently recurring block patterns. The resulting reduction in index overhead significantly reduces memory bandwidth requirements and improves performance. Unlike existing methods, PBR requires neither detection of dense blocks nor zero filling, making it particularly advantageous for matrices that lack dense nonzero concentrations. SMVM kernels for PBR can benefit from explicit prefetching and vectorization, and are amenable to parallelization. The analysis and format conversion to PBR is implemented as a library, making it suitable for applications that generate matrices dynamically at runtime. We present sequential and parallel performance results for PBR on two current multicore architectures, which show that PBR outperforms available alternatives for the matrices to which it is applicable, and that the analysis and conversion overhead is amortized in realistic application scenarios.
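    To make the pattern-based idea concrete, here is a small Python/NumPy sketch that groups fixed-size blocks of a sparse matrix by their nonzero pattern and multiplies all blocks sharing a pattern with the same index arithmetic. The block size, pattern encoding, and function names are assumptions made for illustration; the actual PBR library is a compiled implementation with generated, specialized kernels.

```python
# Illustrative sketch of the pattern-based idea: blocks that share a nonzero
# pattern are grouped so the pattern (and its index overhead) is stored once.
# Block size and data layout are assumptions, not the PBR library's format.
import numpy as np
from collections import defaultdict

B = 2  # block size (B x B); an assumption for this toy example

def build_pbr(dense):
    """Group nonzero B x B blocks of a matrix by their nonzero-pattern bitmask."""
    groups = defaultdict(list)  # pattern -> list of (row, col, packed values)
    n = dense.shape[0]
    for i in range(0, n, B):
        for j in range(0, n, B):
            block = dense[i:i+B, j:j+B]
            mask = tuple((block != 0).flatten())
            if any(mask):
                groups[mask].append((i, j, block[block != 0]))
    return groups

def pbr_spmv(groups, x, n):
    """y = A @ x, using one 'kernel' (shared index arithmetic) per pattern."""
    y = np.zeros(n)
    for mask, blocks in groups.items():
        rows, cols = np.nonzero(np.array(mask).reshape(B, B))
        for i, j, vals in blocks:  # blocks with the same pattern share indices
            np.add.at(y, i + rows, vals * x[j + cols])
    return y

A = np.array([[4., 0., 1., 0.],
              [0., 3., 0., 0.],
              [0., 0., 2., 2.],
              [1., 0., 0., 5.]])
x = np.arange(4, dtype=float)
print(np.allclose(pbr_spmv(build_pbr(A), x, 4), A @ x))  # True
```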

    Enforcement Guide: Near Shore Artisanal Fisheries

    We need healthy oceans to support our way of life. Unfortunately, fish stocks are under growing pressure, and the need to find innovative and pragmatic resource management strategies is more important than ever. Disregard for fisheries and environmental laws is common, and if we are to succeed in reversing the declining trend, we must draft relevant regulations, design and fund comprehensive enforcement programs, and cultivate a culture of compliance. Historically, marine law enforcement has been the competency of Naval and Coast Guard authorities; however, many fishery and park agencies, which lack training, equipment, and at times controlling legal authority, are tasked with fisheries management and enforcement. Complicating matters, most agencies are understaffed, lack budgetary resources, and possess limited authority (i.e. the power of arrest and the ability to use force). WildAid, in cooperation with The Nature Conservancy, developed this guide to assist managers in designing a cost-effective enforcement strategy for near shore artisanal fisheries. This document is not a recompilation of literature but a practical guide based on our experience in the Eastern and Western Pacific. Generally, an enforcement system is designed to monitor all activities within a given area, ranging from tourism, investigation, and transportation to fisheries; however, this guide focuses primarily on near shore artisanal fisheries. The objectives of this guide are three-fold: 1) examine all factors considered in the design and operation of a marine law enforcement system; 2) illustrate key components of an enforcement system and evaluate surveillance technology and patrol equipment options; and 3) guide managers in the design and implementation of an enforcement system. In summary, it aims to equip managers with the tools needed to strengthen fisheries management and design enforcement systems that are practical, affordable, and feasible to implement in a timely manner. Fisheries enforcement requires a holistic approach that accounts for surveillance, interdiction, systematic training, education and outreach, and, lastly, meaningful sanctions. Although it explores many surveillance technologies and management tools, this guide more importantly provides a blueprint for the capacity building and professionalization of enforcement officers, who truly are the core component of any fisheries enforcement program.

    From a Domain Analysis to the Specification and Detection of Code and Design Smells

    Code and design smells are recurring design problems in software systems that must be identified to avoid their possible negative consequences on development and maintenance. Consequently, several smell detection approaches and tools have been proposed in the literature. However, so far, they allow the detection of predefined smells, while the detection of new smells, or of smells adapted to the context of the analysed systems, is possible only by implementing new detection algorithms manually. Moreover, previous approaches do not explain the transition from specifications of smells to their detection. Finally, the validation of the existing approaches and tools has been limited to a few proprietary systems and a reduced number of smells. In this paper, we introduce an approach to automate the generation of detection algorithms from specifications written in a domain-specific language. This language is defined from a thorough domain analysis. It allows the specification of smells using high-level domain-related abstractions and the adaptation of these specifications to the context of the analysed systems. We specify 10 smells, automatically generate their detection algorithms using templates, and validate the algorithms in terms of precision and recall on Xerces v2.7.0 and GanttProject v1.10.2, two open-source object-oriented systems. We also compare the detection results with those of a previous approach, iPlasma.
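    The specific smell specifications are not given in the abstract, so the sketch below only illustrates, in Python, how simple metric-based and lexical rules might be composed into an adaptable, composite smell specification. The rule combinators, metric names, and thresholds are hypothetical and do not reproduce the paper's domain-specific language.

```python
# Hypothetical sketch of composing simple rules into a composite smell
# specification (metric rules combined with a lexical rule), then checking
# classes against it. Rule names and thresholds are illustrative only.

def metric_rule(name, threshold):
    return lambda cls: cls["metrics"].get(name, 0) > threshold

def lexical_rule(keywords):
    return lambda cls: any(k in cls["name"].lower() for k in keywords)

def all_of(*rules):           # intersection of rules
    return lambda cls: all(r(cls) for r in rules)

def any_of(*rules):           # union of rules
    return lambda cls: any(r(cls) for r in rules)

# Toy "data-class-like" composite: many accessors plus data-oriented naming
# or many public fields; thresholds could be adapted per analysed system.
data_class = all_of(
    metric_rule("accessors", 5),
    any_of(lexical_rule(["data", "record", "info"]),
           metric_rule("public_fields", 10)),
)

classes = [
    {"name": "CustomerData", "metrics": {"accessors": 12, "public_fields": 3}},
    {"name": "Scheduler",    "metrics": {"accessors": 2,  "public_fields": 0}},
]
print([c["name"] for c in classes if data_class(c)])  # ['CustomerData']
```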

    On the Automated Synthesis of Enterprise Integration Patterns to Adapt Choreography-based Distributed Systems

    The Future Internet is becoming a reality, providing a large-scale computing environment where a virtually infinite number of available services can be composed so as to fit users' needs. Modern service-oriented applications will more and more often be built by reusing and assembling distributed services. A key enabler for this vision is the ability to automatically compose and dynamically coordinate software services. Service choreographies are an emergent Service Engineering (SE) approach to composing and coordinating services in a distributed way. When mismatching third-party services are to be composed, obtaining the distributed coordination and adaptation logic required to suitably realize a choreography is a non-trivial and error-prone task; automatic support is therefore needed. In this direction, this paper leverages previous work on the automatic synthesis of choreography-based systems and describes our preliminary steps towards exploiting Enterprise Integration Patterns to deal with a form of choreography adaptation. (Comment: In Proceedings FOCLASA 2015, arXiv:1512.0694)
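    The synthesis procedure itself is not shown in the abstract, but as a small illustration of the kind of adaptation logic an Enterprise Integration Pattern can contribute, here is a Python sketch of a Message Translator placed between two mismatching participants of a choreography. The services, message formats, and field names are invented for the example.

```python
# Sketch of a Message Translator (an Enterprise Integration Pattern) bridging
# two services whose message formats mismatch. Formats and names are invented;
# this is not the synthesis procedure described in the paper.

def booking_service_output():
    # Upstream participant emits dates as a single ISO interval string.
    return {"customer": "c-42", "stay": "2015-07-01/2015-07-05"}

def billing_service_input(msg):
    # Downstream participant expects explicit check-in/check-out fields.
    assert {"customerId", "checkIn", "checkOut"} <= msg.keys()
    return f"billed {msg['customerId']} from {msg['checkIn']} to {msg['checkOut']}"

def translator(msg):
    """Adapt the booking format to the billing format."""
    check_in, check_out = msg["stay"].split("/")
    return {"customerId": msg["customer"],
            "checkIn": check_in,
            "checkOut": check_out}

# The adaptation logic the choreography needs between the two participants:
print(billing_service_input(translator(booking_service_output())))
```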

    SkelCL: enhancing OpenCL for high-level programming of multi-GPU systems

    Application development for modern high-performance systems with Graphics Processing Units (GPUs) currently relies on low-level programming approaches like CUDA and OpenCL, which leads to complex, lengthy and error-prone programs. In this paper, we present SkelCL – a high-level programming approach for systems with multiple GPUs and its implementation as a library on top of OpenCL. SkelCL provides three main enhancements to the OpenCL standard: 1) computations are conveniently expressed using parallel algorithmic patterns (skeletons); 2) memory management is simplified using parallel container data types (vectors and matrices); 3) an automatic data (re)distribution mechanism allows for implicit data movements between GPUs and ensures scalability when using multiple GPUs. We demonstrate how SkelCL is used to implement parallel applications on one- and two-dimensional data. We report experimental results to evaluate our approach in terms of programming effort and performance.
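    SkelCL itself is a C++ library on top of OpenCL; the Python sketch below is only a conceptual illustration, under simplifying assumptions, of the first and third enhancements listed above: an algorithmic skeleton (Map) applied to a container (Vector) whose data is transparently partitioned across (here, simulated) devices. It does not reflect SkelCL's actual API.

```python
# Conceptual sketch (in Python, not SkelCL's C++/OpenCL API) of a skeleton
# applied to a container that hides data distribution; devices are simulated.

class Vector:
    """Container that transparently partitions its data across devices."""
    def __init__(self, data, num_devices=2):
        chunk = max(1, (len(data) + num_devices - 1) // num_devices)
        self.parts = [data[i:i + chunk] for i in range(0, len(data), chunk)]

    def gather(self):
        """Collect the distributed parts back into one host-side list."""
        return [x for part in self.parts for x in part]

class Map:
    """Map skeleton: applies a user function to every element, device by device."""
    def __init__(self, fn):
        self.fn = fn

    def __call__(self, vec):
        out = Vector([])                    # result container, parts filled below
        out.parts = [[self.fn(x) for x in part] for part in vec.parts]
        return out

square = Map(lambda x: x * x)
v = Vector(list(range(8)))                  # split across 2 simulated devices
print(square(v).gather())                   # [0, 1, 4, 9, 16, 25, 36, 49]
```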