
    Population Dynamics P Systems on CUDA

    Population Dynamics P systems (PDP systems, in short) provide a new formal bio-inspired modeling framework, which has been successfully used by ecologists. These models are validated using software tools against actual measurements. The goal is to use P systems simulations to adopt a priori management strategies for real ecosystems. Software for PDP systems is still at an early stage, and the simulation of PDP systems is both computationally and data intensive for large models; the development of efficient simulators is therefore needed in this field. In this paper, we introduce a novel simulator for PDP systems accelerated by the computational power of GPUs. We discuss the implementation of each part of the simulator and show how to achieve up to a 7x speedup on an NVIDIA Tesla C1060 compared to an optimized multicore version on an Intel 4-core i5 Xeon for large systems. Other results and testing methodologies are also included. Funding: Ministerio de Ciencia e Innovación TIN2009-13192; Junta de Andalucía P08-TIC-0420.

    Creating high-quality behavioural designs for software-intensive systems

    In today's industrial practice, behavioral designs of software-intensive systems such as embedded systems are often imprecisely documented as plain text in a natural language such as English, supplemented with ad-hoc diagrams. Lack of quality in behavioral design documents causes poor communication between stakeholders, up to 100 times more costly rework during testing and integration, and hard-to-maintain documentation of behavioral designs. To address these problems, we present a solution that involves the use of (a) data-flow diagrams to document the input-output relation between the actions performed by a software-intensive system, (b) control-flow diagrams to document the possible sequences of actions performed by the system, and (c) Vibes diagrams to document temporal or logical constraints on the possible sequences of actions performed by the system. The key benefit of this solution is to improve the separation of concerns within behavioral design documents, and hence to improve their understandability, maintainability, and evolvability.

    Innovative Applications of Artificial Intelligence Techniques in Software Engineering

    Artificial Intelligence (AI) techniques have been successfully applied in many areas of software engineering, although the complexity of software systems has limited their application in many real-world settings. This talk provides an insight into applications of AI techniques in software engineering and how innovative application of AI can assist in achieving increasingly competitive and firm schedules for software development projects as well as Information Technology (IT) management. The pros and cons of using AI techniques are investigated, and specifically the application of AI in IT management, software application development and software security is considered. Organisations that build software applications do so in an environment characterised by limited resources and increased pressure to reduce cost and development schedules. Organisations demand to build software applications adequately and quickly. One approach to achieve this is to use automated software development tools from the very initial stage of software design up to software testing and installation. Considering software testing as an example, automated software systems can assist in most software testing phases. On the other hand, data security, availability, privacy and integrity are very important issues for the success of a business operation. Data security and privacy policies in business are governed by business requirements and government regulations. AI can also assist in software security, privacy and reliability. Implementing data security using data encryption solutions remains at the forefront of data security, yet many encryption solutions at this level are expensive, disruptive and resource intensive. AI can be used for data classification in organisations: it can assist in identifying and encrypting only the relevant data, thereby saving time and processing power. Without data classification, organisations using encryption would simply encrypt everything and consequently impact users more than necessary. Data classification is essential and can assist organisations with their data security, privacy and accessibility needs. This talk explores the use of AI techniques (such as fuzzy logic) for data classification and suggests a method that can determine requirements for classifying an organisation's data for security and privacy based on organisational needs and government policies. Finally, the application of FCM in IT management is discussed.
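
    As a rough illustration of the kind of fuzzy-logic data classification the talk alludes to, the sketch below scores a record's sensitivity from a few attributes and flags it for encryption only above a threshold. The attribute names, membership functions, weights and threshold are assumptions made for illustration; they are not the method proposed in the talk.

```python
# Toy fuzzy-logic data classification: score how sensitive a record is and
# decide whether it should be encrypted. Attributes, membership functions,
# weights and the threshold are illustrative assumptions only.

def triangular(x, low, peak, high):
    """Triangular fuzzy membership function peaking at `peak`."""
    if x <= low or x >= high:
        return 0.0
    if x <= peak:
        return (x - low) / (peak - low)
    return (high - x) / (high - peak)

def sensitivity_score(record):
    """Combine fuzzy memberships of individual attributes into one score."""
    personal = min(1.0, max(0.0, record["personal_fraction"]))   # already in [0, 1]
    recent = triangular(record["age_days"], -1.0, 0.0, 365.0)    # newer data scores higher
    rarely_used = max(0.0, 1.0 - record["daily_accesses"] / 100.0)
    # Weighted aggregation; the weights express assumed business priorities.
    return 0.5 * personal + 0.3 * recent + 0.2 * rarely_used

def needs_encryption(record, threshold=0.6):
    """Encrypt only records whose aggregate sensitivity exceeds the threshold."""
    return sensitivity_score(record) >= threshold

if __name__ == "__main__":
    rec = {"personal_fraction": 0.8, "age_days": 30, "daily_accesses": 3}
    print(round(sensitivity_score(rec), 2), needs_encryption(rec))
```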

    Performance evaluation of a distributed integrative architecture for robotics

    The field of robotics employs a vast number of coupled sub-systems that need to interact cooperatively and concurrently in order to yield the desired results. Some hybrid algorithms also require intensive cooperative interactions internally. The proposed architecture lends itself to problem domains that require rigorous calculations, which are usually impeded by the capacity of a single machine and by incompatibility issues between software computing elements. Implementations are abstracted away from the physical hardware for ease of development and for competition in simulation leagues. Monolithic developments are complex, so the desire for decoupled architectures arises. Decoupling also lowers the threshold for using distributed and parallel resources. The ability to re-use and re-combine components on demand is therefore essential, while maintaining the necessary degree of interaction. For this reason we propose to build software components on top of a Service Oriented Architecture (SOA) using Web Services. An additional benefit is platform independence regarding both the operating system and the implementation language. The robot soccer platform as well as the associated simulation leagues are the target domain for the development. Furthermore, machine vision and remote process control portions of the architecture are currently in development and testing for industrial environments. We provide numerical data, based on the Python frameworks ZSI and SOAPpy, underlining the suitability of this approach for the field of robotics. Response times of significantly less than 50 ms, even for fully interpreted, dynamic languages, provide hard evidence of the feasibility of Web Services based SOAs even in time-critical robotic applications.
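
    To make the response-time figures concrete, the following minimal sketch shows the kind of round-trip measurement one could take with SOAPpy: a trivial echo service on one side and a timed client loop on the other. The echo operation, endpoint and sample count are assumptions for illustration, not the benchmark harness used in this work.

```python
# Minimal round-trip timing sketch with SOAPpy (a Python 2 era SOAP library).
# The echo service, endpoint and sample count are illustrative assumptions.
import time
from SOAPpy import SOAPProxy, SOAPServer

def serve_echo(host="localhost", port=8080):
    """Expose a trivial echo function as a SOAP web service (run in one process)."""
    def echo(msg):
        return msg
    server = SOAPServer((host, port))
    server.registerFunction(echo)
    server.serve_forever()

def measure_round_trip(url="http://localhost:8080/", samples=100):
    """Call the echo service repeatedly (run in another process) and report
    the mean round-trip time in milliseconds."""
    proxy = SOAPProxy(url)
    start = time.time()
    for _ in range(samples):
        proxy.echo("ping")
    mean_ms = (time.time() - start) / samples * 1000.0
    print("mean round-trip: %.1f ms over %d calls" % (mean_ms, samples))
```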

    Digital Twins Are Not Monozygotic -- Cross-Replicating ADAS Testing in Two Industry-Grade Automotive Simulators

    The increasing levels of software- and data-intensive driving automation call for an evolution of automotive software testing. As a recommended practice of the Verification and Validation (V&V) process of ISO/PAS 21448, a candidate standard for the safety of the intended functionality of road vehicles, simulation-based testing has the potential to reduce both risks and costs. There is a growing body of research on devising test automation techniques using simulators for Advanced Driver-Assistance Systems (ADAS). However, how similar are the results if the same test scenarios are executed in different simulators? We conduct a replication study of applying a Search-Based Software Testing (SBST) solution to a real-world ADAS (PeVi, a pedestrian vision detection system) using two different commercial simulators, namely TASS/Siemens PreScan and ESI Pro-SiVIC. Based on a minimalistic scene, we compare critical test scenarios generated using our SBST solution in these two simulators. We show that SBST can be used to effectively and efficiently generate critical test scenarios in both simulators, and that the test results obtained from the two simulators can reveal several weaknesses of the ADAS under test. However, executing the same test scenarios in the two simulators leads to notable differences in the details of the test outputs, in particular related to (1) safety violations revealed by tests and (2) the dynamics of cars and pedestrians. Based on our findings, we recommend that future V&V plans include multiple simulators to support robust simulation-based testing and base test objectives on measures that are less dependent on the internals of the simulators. Comment: To appear in the Proc. of the IEEE International Conference on Software Testing, Verification and Validation (ICST) 202
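
    For readers unfamiliar with SBST, the sketch below shows the general shape of such a search: a hill climb that mutates scenario parameters and keeps candidates that minimize a safety measure such as the minimum car-pedestrian distance. The parameters, ranges, fitness definition and the run_simulation stub are assumptions for illustration; they do not reflect PeVi, PreScan or Pro-SiVIC.

```python
# Sketch of a search-based test generation loop for an ADAS scenario.
# `run_simulation` is a stand-in for a call into a driving simulator;
# parameters, ranges and the fitness function are illustrative assumptions.
import random

def run_simulation(scenario):
    """Placeholder: execute the scenario in a simulator and return the
    minimum distance (in metres) between the car and the pedestrian."""
    # A real implementation would drive the commercial simulator here.
    return abs(scenario["ped_speed"] * 2.0 - scenario["car_speed"] * 0.5) + 1.0

def random_scenario():
    return {"car_speed": random.uniform(20, 80),   # km/h
            "ped_speed": random.uniform(1, 8),     # km/h
            "ped_offset": random.uniform(-5, 5)}   # metres from lane centre

def mutate(scenario):
    """Perturb one randomly chosen parameter by up to 20 percent."""
    out = dict(scenario)
    key = random.choice(list(out))
    out[key] *= random.uniform(0.8, 1.2)
    return out

def search(iterations=200):
    """Hill-climb toward scenarios with the smallest car-pedestrian distance."""
    best = random_scenario()
    best_fit = run_simulation(best)
    for _ in range(iterations):
        candidate = mutate(best)
        fitness = run_simulation(candidate)
        if fitness < best_fit:       # smaller distance = more critical scenario
            best, best_fit = candidate, fitness
    return best, best_fit

if __name__ == "__main__":
    print(search())
```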

    AI-assisted anomaly detection from log data

    As the production of software continues to increase, the volume of log data being generated is also on the rise. This surge in data has made it impractical for human operators to manually review each log line produced by software systems, which has led to the development of automatic anomaly detection methods. Automatic anomaly detection would allow system operators to respond to incidents more quickly and improve the quality of the software. In the past, anomaly detection from log data relied heavily on predefined rules. However, with the complexity of modern software systems, finding experts for every system component to write these rules has become difficult, and implementing them is very labor-intensive. This has spurred interest in unsupervised anomaly detection methods. The purpose of this thesis is to investigate which kinds of methods can be used for automatic anomaly detection, what is required to use them in a production system, and how well deep learning-based methods work with log data produced by hundreds of embedded devices. The thesis begins with a literature review of the various methods used for anomaly detection from log data, then outlines the infrastructure required for efficient anomaly detection, and concludes by testing the DeepLog deep learning method on real log data from a production system. The key findings suggest that the DeepLog model performs effectively for anomaly detection when trained in an unsupervised fashion. However, it is essential to ensure that anomalous samples do not dominate the training data. This can be achieved either by completely excluding them from the training set or by ensuring that no single anomalous sample overwhelms the entire dataset, which could lead to overfitting. Moreover, the training dataset can be continuously refined by eliminating recognized anomalous sequences and subsequently retraining the model.
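
    The core idea behind DeepLog-style detection is to treat parsed log keys as a language-modeling problem: an LSTM is trained on normal sequences to predict the next key, and an observed key that falls outside the model's top-k predictions is flagged as anomalous. The PyTorch sketch below illustrates that mechanism with assumed dimensions, window handling and top-k value; it is not the implementation evaluated in the thesis.

```python
# DeepLog-style next-log-key prediction with an LSTM (PyTorch).
# Vocabulary size, window length and top-k are illustrative assumptions.
import torch
import torch.nn as nn

class NextKeyLSTM(nn.Module):
    def __init__(self, num_keys, embed_dim=32, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(num_keys, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, num_keys)

    def forward(self, window):                 # window: (batch, seq_len) of key ids
        hidden, _ = self.lstm(self.embed(window))
        return self.out(hidden[:, -1, :])      # logits over the next key

def is_anomalous(model, window, next_key, top_k=5):
    """Flag the event if the observed key is not among the top-k predictions."""
    with torch.no_grad():
        logits = model(window.unsqueeze(0))                    # (1, num_keys)
        top = torch.topk(logits, top_k, dim=-1).indices.squeeze(0)
    return next_key not in top.tolist()

def train_step(model, optimiser, window, next_key):
    """Train on normal sequences only: cross-entropy against the observed key."""
    optimiser.zero_grad()
    loss = nn.functional.cross_entropy(model(window), next_key)
    loss.backward()
    optimiser.step()
    return loss.item()
```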

    Wireless Sensing System for Load Testing and Rating of Highway Bridges

    Structural capacity evaluation of bridges is an increasingly important topic in the effort to deal with deteriorating infrastructure. Most bridges are evaluated through subjective visual inspection and conservative theoretical rating. Diagnostic load testing has been recognized as an effective method to accurately assess the carrying capacity of bridges. Traditional wired sensors and data acquisition (DAQ) systems suffer from the drawbacks of labor-intensive, costly, and time-consuming installation and maintenance. For these reasons, very few load tests have been conducted on bridges. This study aims at developing a low-cost wireless bridge load testing and rating system that can be rapidly deployed on bridges for structural evaluation and load rating. Commercially available wireless hardware is integrated with traditional analogue sensors, and the appropriate rating software is developed. The wireless DAQ system can work with traditional strain gages, accelerometers, and other voltage-producing sensors. A wireless truck position indicator (WVPI) is developed and used for measuring the truck position during load testing. The software is capable of calculating the theoretical rating factors based on the AASHTO Load and Resistance Factor Rating (LRFR) code, and automatically produces the adjustment factor from load-testing data. A simplified finite element model was used to calculate deflection and moment distribution factors in order to reduce the amount of instrumentation used in field tests. The system was used to evaluate the structural capacity of the Evansville Bridge in Preston County, WV. The results show that the wireless bridge load testing and rating system can effectively be implemented to evaluate the real capacity of bridges with remarkable advantages: low cost, fast deployment, and a smaller crew.
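
    For orientation, the rating computation that such software automates can be sketched as follows: a theoretical LRFR rating factor from factored dead and live load effects, then an adjustment based on how the measured response compares to the predicted one. The load factors, example numbers and the simple strain-ratio adjustment are illustrative assumptions, not the exact procedure implemented in this study.

```python
# Simplified LRFR rating factor and load-test adjustment (illustrative only;
# load factors and the adjustment formula are assumptions, not the study's code).

def lrfr_rating_factor(capacity, dc, dw, ll_plus_im,
                       gamma_dc=1.25, gamma_dw=1.50, gamma_ll=1.75):
    """RF = (C - gamma_DC*DC - gamma_DW*DW) / (gamma_LL * (LL + IM)),
    with all effects expressed in the same units (e.g. kip-ft of moment)."""
    return (capacity - gamma_dc * dc - gamma_dw * dw) / (gamma_ll * ll_plus_im)

def test_adjusted_rating(rf_theoretical, strain_theoretical, strain_measured):
    """Scale the theoretical rating by the ratio of predicted to measured
    strain: a measured response smaller than predicted raises the rating."""
    adjustment = strain_theoretical / strain_measured
    return rf_theoretical * adjustment

if __name__ == "__main__":
    # Hypothetical girder moments (kip-ft) and microstrain readings.
    rf = lrfr_rating_factor(capacity=3200, dc=900, dw=150, ll_plus_im=800)
    print("theoretical RF:", round(rf, 2))                       # ~1.32
    print("adjusted RF:", round(test_adjusted_rating(rf, 210, 180), 2))
```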

    Automated metamorphic testing of variability analysis tools

    Variability determines the capability of software applications to be configured and customized. A common need during the development of variability-intensive systems is the automated analysis of their underlying variability models, e.g. detecting contradictory configuration options. The analysis operations that are performed on variability models are often very complex, which hinders the testing of the corresponding analysis tools and makes it difficult, often infeasible, to determine the correctness of their outputs, i.e. the well-known oracle problem in software testing. In this article, we present a generic approach for the automated detection of faults in variability analysis tools that overcomes the oracle problem. Our work enables the generation of random variability models together with the exact set of valid configurations represented by these models. These test data are generated from scratch using step-wise transformations, ensuring that certain constraints (a.k.a. metamorphic relations) hold at each step. To show the feasibility and generalizability of our approach, it has been used to automatically test several analysis tools in three variability domains: feature models, CUDF documents and Boolean formulas. Among other results, we detected 19 real bugs in 7 out of the 15 tools under test. Funding: CICYT TIN2012-32273; CICYT IPT-2012-0890-390000; Junta de Andalucía TIC-5906; Junta de Andalucía P12-TIC-186.
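
    To illustrate the generation idea, the sketch below grows a toy variability model through step-wise transformations while maintaining the exact set of valid configurations alongside it, so the expected result of an analysis (here, counting configurations) is known by construction and no external oracle is needed. The single transformation used (adding an optional child feature) and the checked operation are simplified assumptions, not the paper's generators or the tools it tests.

```python
# Metamorphic generation of a toy variability model together with its exact
# set of valid configurations. Each transformation updates both the model and
# the expected configurations, so a tool's output can be checked without an oracle.
import random

def initial_model():
    """Start from a model with a single mandatory root feature."""
    model = {"root": []}          # feature -> list of optional child features
    configs = [{"root"}]          # exact set of valid configurations
    return model, configs

def add_optional_feature(model, configs, parent, name):
    """Metamorphic step: adding an optional child of `parent` duplicates every
    configuration that contains the parent, with and without the new feature."""
    model.setdefault(parent, []).append(name)
    model[name] = []
    new_configs = []
    for config in configs:
        new_configs.append(set(config))
        if parent in config:
            new_configs.append(set(config) | {name})
    return model, new_configs

def generate(steps=5, seed=0):
    """Apply a sequence of random transformations, tracking the expected configurations."""
    random.seed(seed)
    model, configs = initial_model()
    for i in range(steps):
        parent = random.choice(list(model))
        model, configs = add_optional_feature(model, configs, parent, "f%d" % i)
    return model, configs

if __name__ == "__main__":
    model, configs = generate()
    # A real test would compare len(configs) against the count reported by the
    # analysis tool under test; a mismatch reveals a fault without any oracle.
    print(len(configs), "valid configurations expected")
```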