
    Interval analysis applied to dielectric spectroscopy: a guaranteed parameter estimation

    Dielectric spectra of materials are often difficult to analyze, since the common software algorithms and line-shape functions do not always provide unambiguous values for the fitted parameters. In particular, this article deals with epoxy/ceramic nano-prepolymers studied by dielectric spectroscopy. In this situation, both the system (the prepolymer with nanofillers) and the method (dielectric spectroscopy) are complex. Taking into account the experimental error of each data point in the measured dielectric spectrum, the software, which is based on a global optimization algorithm using interval analysis, provides a confidence interval for every parameter of the dielectric function it implements. The software is therefore able to deliver and guarantee the number of relaxation processes, even if they are partly masked by other phenomena such as conductivity or electrode polarization.
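
    To make the set-inversion idea concrete, here is a minimal Python sketch of SIVIA-style guaranteed estimation under strong simplifying assumptions: a single real-valued Debye relaxation with only two parameters, a hand-rolled interval arithmetic, and synthetic data. It is not the software, dielectric function, or data described in the abstract; it only illustrates how interval evaluation lets whole parameter boxes be kept or discarded with a guarantee.

```python
# Minimal sketch of guaranteed parameter estimation by interval analysis
# (SIVIA-style set inversion). Everything here is an illustrative assumption:
# a single real-valued Debye relaxation with two parameters (d_eps, tau),
# a hand-rolled interval arithmetic, and synthetic data.

def i_add(a, b):      # interval sum
    return (a[0] + b[0], a[1] + b[1])

def i_mul_pos(a, b):  # product of two intervals known to be non-negative
    return (a[0] * b[0], a[1] * b[1])

def i_div_pos(a, b):  # a / b for positive intervals (b bounded away from 0)
    return (a[0] / b[1], a[1] / b[0])

def debye_real(omega, d_eps, tau, eps_inf=2.0):
    """Interval extension of the real part of one Debye relaxation:
    eps'(omega) = eps_inf + d_eps / (1 + (omega * tau)^2)."""
    wt = i_mul_pos((omega, omega), tau)
    denom = i_add((1.0, 1.0), i_mul_pos(wt, wt))
    return i_add((eps_inf, eps_inf), i_div_pos(d_eps, denom))

def consistent(box, data):
    """A box survives only if its interval image intersects every
    measurement interval (value +/- experimental error)."""
    d_eps, tau = box
    for omega, value, err in data:
        lo, hi = debye_real(omega, d_eps, tau)
        if hi < value - err or lo > value + err:
            return False   # provably inconsistent with this data point
    return True

def sivia(box, data, eps_tol=0.05, tau_tol=1e-4):
    """Bisect parameter boxes, discarding only those proven inconsistent.
    The union of the returned boxes encloses every (d_eps, tau) compatible
    with all measurements: a guaranteed (outer) confidence region."""
    stack, kept = [box], []
    while stack:
        d_eps, tau = stack.pop()
        if not consistent((d_eps, tau), data):
            continue
        if d_eps[1] - d_eps[0] < eps_tol and tau[1] - tau[0] < tau_tol:
            kept.append((d_eps, tau))
            continue
        # bisect the dimension that is widest relative to its tolerance
        if (d_eps[1] - d_eps[0]) / eps_tol > (tau[1] - tau[0]) / tau_tol:
            m = 0.5 * (d_eps[0] + d_eps[1])
            stack += [((d_eps[0], m), tau), ((m, d_eps[1]), tau)]
        else:
            m = 0.5 * (tau[0] + tau[1])
            stack += [(d_eps, (tau[0], m)), (d_eps, (m, tau[1]))]
    return kept

# Synthetic spectrum from d_eps = 3, tau = 1e-3 s, with +/- 0.05 error bars.
true_eps = lambda w: 2.0 + 3.0 / (1.0 + (w * 1e-3) ** 2)
data = [(w, true_eps(w), 0.05) for w in (10.0, 100.0, 1000.0, 10000.0)]
boxes = sivia(((0.1, 10.0), (1e-5, 1e-1)), data)
print(len(boxes), "boxes enclose the set of feasible (d_eps, tau) values")
```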

    Critical Fault-Detecting Time Evaluation in Software with Discrete Compound Poisson Models

    Software developers predict their product’s failure rate using reliability growth models that are typically based on nonhomogeneous Poisson (NHP) processes. In this article, we extend that practice to a nonhomogeneous discrete-compound Poisson process that allows for multiple faults of a system at the same time point. Along with traditional reliability metrics such as the average number of failures in a time interval, we propose an alternative reliability index, called the critical fault-detecting time, in order to provide more information for software managers making software quality evaluations and critical market policy decisions. We illustrate the significant potential for improved analysis using wireless failure data as well as simulated data.
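
    As a rough illustration of the modelling idea (not the paper's model), the sketch below simulates a nonhomogeneous compound Poisson process in which each detection event reveals a geometrically distributed batch of faults, and computes a stand-in "critical fault-detecting time" by Monte Carlo. The intensity function, batch distribution, and criterion are all assumptions made for illustration.

```python
# Hedged sketch of a nonhomogeneous discrete-compound Poisson process:
# detection events arrive with a time-varying intensity, and each event
# reveals a whole batch of faults at the same time point. The intensity
# a*b*exp(-b*t), the geometric batch size, and the "critical time" criterion
# below are illustrative assumptions, not the paper's exact model or index.
import math
import random

def simulate(horizon=200.0, a=100.0, b=0.05, p=0.6, rng=random):
    """One sample path: event times drawn by thinning a homogeneous process
    with rate a*b (an upper bound on the intensity a*b*exp(-b*t)); each
    accepted event carries a geometric(p) number of simultaneous faults."""
    lam_max = a * b
    t, faults = 0.0, []
    while True:
        t += rng.expovariate(lam_max)
        if t > horizon:
            return faults
        if rng.random() <= math.exp(-b * t):   # thinning acceptance ratio
            batch = 1
            while rng.random() > p:            # geometric batch size >= 1
                batch += 1
            faults.append((t, batch))

def critical_time(paths, fraction=0.95, horizon=200.0, step=1.0):
    """Illustrative stand-in for the critical fault-detecting time: the
    earliest t at which the Monte-Carlo mean of detected faults reaches
    `fraction` of the mean total detected over the whole horizon."""
    mean_total = sum(sum(b for _, b in path) for path in paths) / len(paths)
    target = fraction * mean_total
    t = step
    while t <= horizon:
        mean_by_t = sum(sum(b for tt, b in path if tt <= t)
                        for path in paths) / len(paths)
        if mean_by_t >= target:
            return t
        t += step
    return horizon

random.seed(1)
paths = [simulate() for _ in range(200)]
print("illustrative critical fault-detecting time:", critical_time(paths))
```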

    Reconciling a component and process view

    In many cases we need to represent, at the same abstraction level, not only system components but also processes within the system; if different frameworks are used for these two representations, the system model becomes hard to read and to understand. We suggest a solution that bridges this gap and reconciles the component and process views of a system representation: a formal framework that provides the advantage of solving design problems for large-scale component systems. Comment: Preprint, 7th International Workshop on Modeling in Software Engineering (MiSE) at ICSE 201

    Learning Fast and Slow: PROPEDEUTICA for Real-time Malware Detection

    In this paper, we introduce and evaluate PROPEDEUTICA, a novel methodology and framework for efficient and effective real-time malware detection, leveraging the best of conventional machine learning (ML) and deep learning (DL) algorithms. In PROPEDEUTICA, all software processes in the system start execution subject to a conventional ML detector for fast classification. If a piece of software receives a borderline classification, it is subjected to further analysis via more computationally expensive and more accurate DL methods, using our newly proposed DL algorithm DEEPMALWARE. Further, we introduce delays to the execution of software subjected to deep learning analysis as a way to "buy time" for DL analysis and to rate-limit the impact of possible malware on the system. We evaluated PROPEDEUTICA with a set of 9,115 malware samples and 877 commonly used benign software samples from various categories for the Windows OS. Our results show that the false positive rate for conventional ML methods can reach 20%, while for modern DL methods it is usually below 6%. However, the classification time for DL can be 100X longer than for conventional ML methods. PROPEDEUTICA improved the detection F1-score from 77.54% (conventional ML method) to 90.25%, and reduced the detection time by 54.86%. The percentage of software subjected to DL analysis was approximately 40% on average, and applying delays to software subjected to ML analysis reduced the detection time by approximately 10%. Finally, we found and discussed a discrepancy between the detection accuracy offline (analysis after all traces are collected) and on-the-fly (analysis in tandem with trace collection). Our insights show that conventional ML and modern DL-based malware detectors in isolation cannot meet the needs of efficient and effective malware detection: high accuracy, low false positive rate, and short classification time. Comment: 17 pages, 7 figures
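
    The fast-then-slow cascade can be sketched in a few lines of Python; the thresholds, stub models, and feature vectors below are invented for illustration and merely stand in for the trained ML detector and the DEEPMALWARE network described in the abstract.

```python
# Hedged sketch of the two-stage "fast then slow" detection idea: a cheap
# classifier screens every process, and only borderline scores are escalated
# to a slower, more accurate model. Thresholds, stub models, and the feature
# representation are illustrative assumptions, not PROPEDEUTICA's actual
# pipeline.
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Cascade:
    fast_model: Callable[[List[float]], float]   # cheap score in [0, 1]
    slow_model: Callable[[List[float]], float]   # expensive score in [0, 1]
    low: float = 0.3                              # below: confidently benign
    high: float = 0.7                             # above: confidently malicious

    def classify(self, features: List[float]) -> Tuple[str, str]:
        score = self.fast_model(features)
        if score < self.low:
            return "benign", "fast"
        if score > self.high:
            return "malware", "fast"
        # Borderline: escalate (in the paper, execution is also delayed
        # to "buy time" while the heavier analysis runs).
        deep = self.slow_model(features)
        return ("malware" if deep >= 0.5 else "benign"), "slow"

# Stub models standing in for a trained ML classifier and a DL network.
fast = lambda f: min(1.0, sum(f) / len(f))        # toy heuristic score
slow = lambda f: 1.0 if f[0] > 0.5 else 0.0       # toy "accurate" verdict

cascade = Cascade(fast, slow)
for sample in ([0.1, 0.2, 0.1], [0.6, 0.5, 0.4], [0.9, 0.95, 0.9]):
    label, path = cascade.classify(sample)
    print(sample, "->", label, "via", path, "model")
```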

    Model-driven performance evaluation for service engineering

    Service engineering and service-oriented architecture, as an integration and platform technology, are a recent approach to software systems integration. Software quality aspects such as performance are of central importance for the integration of heterogeneous, distributed service-based systems. Empirical performance evaluation is the process of measuring and calculating performance metrics of the implemented software. We present an approach for the empirical, model-based performance evaluation of services and service compositions in the context of model-driven service engineering. Temporal database theory is utilised for the empirical performance evaluation of service systems developed in a model-driven way.
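
    A minimal sketch of what empirical performance evaluation over time-stamped measurements might look like is given below; the record layout, metric definitions, and service name are assumptions made for illustration, not the instrumentation used in the paper.

```python
# Hedged sketch of empirical performance evaluation with time-stamped
# measurements: each service invocation is stored with its valid-time
# interval, and metrics are computed over a query window.
from dataclasses import dataclass
from typing import List

@dataclass
class Invocation:
    service: str
    start: float      # seconds since the start of the measurement run
    end: float

class PerformanceLog:
    def __init__(self) -> None:
        self.records: List[Invocation] = []

    def record(self, service: str, start: float, end: float) -> None:
        self.records.append(Invocation(service, start, end))

    def metrics(self, service: str, window_start: float, window_end: float):
        """Average response time and throughput for invocations whose
        valid-time interval lies inside the query window."""
        hits = [r for r in self.records
                if r.service == service
                and r.start >= window_start and r.end <= window_end]
        if not hits:
            return {"count": 0, "avg_response": None, "throughput": 0.0}
        total = sum(r.end - r.start for r in hits)
        span = window_end - window_start
        return {"count": len(hits),
                "avg_response": total / len(hits),
                "throughput": len(hits) / span}

log = PerformanceLog()
for i in range(10):
    log.record("orderService", start=i * 1.0, end=i * 1.0 + 0.25)
print(log.metrics("orderService", 0.0, 10.0))
```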

    Quality-aware model-driven service engineering

    Service engineering and service-oriented architecture, as an integration and platform technology, are a recent approach to software systems integration. Quality aspects ranging from interoperability to maintainability to performance are of central importance for the integration of heterogeneous, distributed service-based systems. Architecture models can substantially influence the quality attributes of the implemented software systems. Besides the benefits of explicit architectures for maintainability and reuse, architectural constraints such as styles, reference architectures and architectural patterns can influence observable software properties such as performance. Empirical performance evaluation is the process of measuring and evaluating the performance of implemented software. We present an approach for addressing the quality of services and service-based systems at the model level in the context of model-driven service engineering. The focus on architecture-level models is a consequence of the black-box character of services.

    A secondary analysis of Bradac et al.'s prototype process-monitoring experiment

    We report on the secondary analyses of some conjectures and empirical evidence presented in Bradac et al.'s prototype process-monitoring experiment, published previously in IEEE Transactions on Software Engineering. We identify 13 conjectures in the original paper, and re-analyse six of these conjectures using the original evidence. Rather than rejecting any of the original conjectures, we identify assumptions underlying those conjectures, identify alternative interpretations of the conjectures, and also propose a number of new conjectures. Bradac et al.'s study focused on reducing the project schedule interval; some of our re-analysis has considered improving software quality. We note that our analyses were only possible because of the quality and quantity of evidence presented in the original paper. Reflecting on our analyses leads us to speculate about the value of descriptive papers that seek to present empirical material (together with an explicit statement of goals, assumptions and constraints) separately from the analyses that proceed from that material. Such descriptive papers could improve the public scrutiny of software engineering research and may respond, in part, to some researchers' criticisms concerning the small amount of software engineering research that is actually evaluated. We also consider opportunities for further research, in particular opportunities for relating individual actions to project outcomes.

    Ensuring cost-effective heat exchanger network design for non-continuous processes

    The variation in stream conditions over time inevitably adds significant complexity to the task of integrating non-continuous processes. The Time Averaging Method (TAM), in which stream conditions are simply averaged across the entire time cycle, leads to unrealistic energy targets for direct heat recovery and consequently to Heat Exchanger Network (HEN) designs that are in fact suboptimal. This realisation led to the development of the Time Slice Method (TSM), which instead considers each time interval separately and can be used to reach accurate targets and to design the appropriate HEN to maximise heat recovery. In practice, however, the resulting HENs often require excessive exchanger surface area, which renders them unfeasible once capital costs are taken into account. An extension of the TSM is proposed that reduces the required overall exchanger surface area and systematically distributes it across the stream matches. The methodology is summarised with the help of a simple case study, and further improvement opportunities are discussed.
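
    The gap between time-averaged and time-sliced targets can be shown with a toy calculation. The stream data below are invented (not taken from the paper's case study) and temperatures are assumed compatible, so that recovery is limited only by coincidence in time.

```python
# Hedged sketch contrasting time-averaged (TAM) and time-sliced (TSM) heat
# recovery targets. Temperatures are assumed compatible, so recovery is
# limited only by whether hot and cold streams are available at the same
# time; the stream data are invented for illustration.

# (name, duty in kW, availability interval in hours within an 8 h cycle)
hot_streams  = [("H1", 400.0, (0.0, 4.0))]
cold_streams = [("C1", 300.0, (3.0, 8.0))]
cycle_hours = 8.0

def energy_kwh(streams, t0, t1):
    """Energy released/required by the streams inside the window [t0, t1]."""
    total = 0.0
    for _, duty, (a, b) in streams:
        overlap = max(0.0, min(b, t1) - max(a, t0))
        total += duty * overlap
    return total

# TAM: average all loads over the whole cycle, then take the recoverable part.
tam_target = min(energy_kwh(hot_streams, 0.0, cycle_hours),
                 energy_kwh(cold_streams, 0.0, cycle_hours))

# TSM: split the cycle at every stream start/stop and target each slice.
edges = sorted({0.0, cycle_hours}
               | {t for _, _, (a, b) in hot_streams + cold_streams for t in (a, b)})
tsm_target = sum(min(energy_kwh(hot_streams, t0, t1),
                     energy_kwh(cold_streams, t0, t1))
                 for t0, t1 in zip(edges, edges[1:]))

print(f"TAM direct-recovery target: {tam_target:.0f} kWh (optimistic)")
print(f"TSM direct-recovery target: {tsm_target:.0f} kWh (achievable)")
```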