    Agent-based simulation of open source evolution

    We present an agent-based simulation model developed to study how size, complexity and effort relate to each other in the development of open source software (OSS). In the model, many developer agents generate, extend, and refactor code modules independently and in parallel, in accordance with empirical observations of OSS development. To our knowledge, this is the first model of OSS evolution that includes the complexity of software modules as a limiting factor in productivity, the fitness of the software to its requirements, and the motivation of developers. The model was validated by comparing the simulated results against four measures of software evolution (system size, proportion of highly complex modules, level of complexity-control work, and distribution of changes) for four large OSS systems. The simulated results resembled the observed data, except for system size: three of the OSS systems showed alternating patterns of super-linear and sub-linear growth, while the simulations produced only super-linear growth. However, the fidelity of the model on the other measures suggests that developer motivation and the limiting effect of complexity on productivity have a significant effect on the development of OSS systems and should be considered in any model of OSS development.
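
    A minimal sketch of such an agent-based loop is shown below. All names, parameters and update rules (complexity-limited productivity, motivation-gated participation, occasional refactoring) are illustrative assumptions for exposition, not the model actually used in the paper.

        import random

        # Hypothetical agent-based OSS evolution loop (a sketch, not the
        # paper's model): each motivated developer picks a module; high
        # complexity lowers the chance that any work gets done at all.

        class Module:
            def __init__(self):
                self.complexity = 1.0

        class Developer:
            def __init__(self, motivation):
                self.motivation = motivation  # probability of working this step

        def step(modules, developers):
            for dev in developers:
                if random.random() > dev.motivation:
                    continue                      # unmotivated: skip this step
                m = random.choice(modules)
                if random.random() < 1.0 / m.complexity:  # complexity limits productivity
                    r = random.random()
                    if r < 0.1:
                        modules.append(Module())  # generate a new module
                    elif r < 0.3:
                        m.complexity = max(1.0, m.complexity * 0.8)  # refactor
                    else:
                        m.complexity += 0.5       # extend, adding complexity

        modules = [Module()]
        developers = [Developer(random.uniform(0.2, 0.9)) for _ in range(50)]
        for _ in range(1000):
            step(modules, developers)
        print("system size:", len(modules))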

    On Model Based Synthesis of Embedded Control Software

    Many embedded systems are in fact Software Based Control Systems (SBCSs), that is, control systems whose controller consists of control software running on a microcontroller device. This motivates investigation of Formal Model Based Design approaches for control software. Given the formal model of a plant as a Discrete Time Linear Hybrid System and the implementation specifications (that is, the number of bits in the Analog-to-Digital (AD) conversion), correct-by-construction control software can be automatically generated from System Level Formal Specifications of the closed-loop system (that is, safety and liveness requirements) by computing a suitable finite abstraction of the plant. With respect to given implementation specifications, the automatically generated code implements a time-optimal control strategy (in terms of set-up time) and has a Worst Case Execution Time linear in the number of AD bits b; unfortunately, its size grows exponentially with respect to b. In many embedded systems there are severe restrictions on the computational resources (such as memory or computational power) available to microcontroller devices. This paper addresses model-based synthesis of control software by trading system-level non-functional requirements (such as optimal set-up time and ripple) against software non-functional requirements (footprint). Our experimental results show the effectiveness of our approach: for the inverted pendulum benchmark, using a quantization schema with 12 bits, the size of the smaller controller is less than 6% of the size of the time-optimal one. Comment: Accepted for publication by EMSOFT 2012. arXiv admin note: substantial text overlap with arXiv:1107.5638, arXiv:1207.409
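
    As a back-of-the-envelope illustration of the trade-off above (illustrative arithmetic only, not the paper's synthesis algorithm): a controller realized as a lookup table over a b-bit quantized state has up to 2^b entries, while taking roughly one decision per AD bit keeps execution time linear in b.

        # Illustrative arithmetic only: a table-based controller footprint
        # grows exponentially in the number of AD bits b, while the
        # per-step execution time stays linear in b.
        for b in (8, 10, 12):
            table_entries = 2 ** b   # worst-case number of table entries
            wcet_steps = b           # roughly one decision per AD bit
            print(f"b={b:2d}: up to {table_entries:5d} entries, ~{wcet_steps} steps")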

    Logarithmic growth dynamics in software networks

    In a recent paper, Krapivsky and Redner (Phys. Rev. E, 71 (2005) 036118) proposed a new growing network model in which new nodes are attached to a randomly selected node, as well as to all ancestors of the target node. The model leads to a sparse graph with an average degree growing logarithmically with the system size. Here we present compelling evidence that software networks are the result of a similar class of growing dynamics. The predicted pattern of network growth, as well as the stationary in- and out-degree distributions, are consistent with the model. Our results confirm the view that large-scale software topology is generated through duplication-rewiring mechanisms. Implications of these findings are outlined. Comment: 7 pages, 3 figures, published in Europhysics Letters (2005)
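
    The growth rule is simple enough to simulate directly. The sketch below (an assumption-laden toy, not the authors' measurement pipeline) reproduces the logarithmic growth of the average degree: each new node links to a uniformly random target and to all of the target's ancestors.

        import math
        import random

        # Toy simulation of the Krapivsky-Redner rule described above:
        # a new node links to a random target plus all its ancestors,
        # so its out-degree equals the target's depth plus one and the
        # average degree grows like ln N.
        random.seed(1)
        links = {0: set()}                    # node -> nodes it links to
        for new in range(1, 20000):
            target = random.randrange(new)
            links[new] = {target} | links[target]

        n = len(links)
        avg_degree = sum(map(len, links.values())) / n
        print(f"N={n}, average out-degree={avg_degree:.2f}, ln N={math.log(n):.2f}")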

    Numerical simulation of the stress-strain state of the dental system

    We present mathematical models, computational algorithms and software which can be used to predict the results of prosthetic treatment. Of particular interest is the biomechanics of the periodontal complex, because any prosthesis carries a risk of overloading the supporting elements. Such risk can be avoided by proper load distribution and by predicting the stresses that occur during the use of dentures. We developed a mathematical model of the periodontal complex and its software implementation. The model is based on linear elasticity theory and allows us to calculate the stress and strain fields in the periodontal ligament and jawbone. The input parameters for the model can be divided into two groups. The first group describes the mechanical properties of the periodontal ligament, teeth and jawbone (for example, the elasticity of the periodontal ligament). The second group characterizes the geometric properties of the objects: the size of the teeth, their spatial coordinates, the size of the periodontal ligament, etc. The mechanical properties are nearly the same for all patients, but entering the geometric data is complicated by individual variation. We therefore developed algorithms and software for processing images obtained by a computed tomography (CT) scanner and for constructing an individual digital model of the tooth-periodontal ligament-jawbone system of the patient. Integrating the described models and algorithms makes it possible to carry out biomechanical analysis on a three-dimensional digital model and to select a prosthesis design. Comment: 19 pages, 9 figures
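
    For reference, the linear elasticity equations underlying such a model take the standard small-strain form (the paper's specific boundary conditions and material constants are not reproduced here):

        % Linear elasticity, small-strain regime.
        % u: displacement, \varepsilon: strain, \sigma: stress,
        % \lambda, \mu: Lame parameters of ligament or bone, f: body force.
        \begin{aligned}
          -\nabla \cdot \sigma &= f && \text{(equilibrium)} \\
          \sigma &= \lambda (\nabla \cdot u)\, I + 2\mu\, \varepsilon(u) && \text{(isotropic Hooke's law)} \\
          \varepsilon(u) &= \tfrac{1}{2}\bigl(\nabla u + (\nabla u)^{\top}\bigr) && \text{(strain--displacement)}
        \end{aligned}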

    Modelling of artefacts in estimations of particle size of needle-like particles from laser diffraction measurements

    Manufacturing of particulate products across many industries relies on accurate measurements of particle size distributions in dispersions or powders. Laser diffraction (or small-angle light scattering) is commonly used, usually off-line, for particle size measurements. The estimation of particle sizes by this method requires the solution of an inverse problem using a suitable scattering model that takes into account the size, shape and optical properties of the particles. However, laser diffraction instruments are usually accompanied by software that employs a default scattering model for spherical particles, which is then used to solve the inverse problem even though a significant number of particulate products occur in strongly non-spherical shapes such as needles. In this work, we demonstrate that using the spherical model to estimate the sizes of needle-like particles can lead to the appearance of artefacts in the form of multimodal populations of particles with size modes much smaller than those actually present in the sample. This effect can result in a significant under-estimation of the mean particle size and in false modes in the estimated particle size distributions. Comment: 28 pages, 8 figures, accepted in Chemical Engineering Science
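
    The inversion referred to above is commonly posed as a linear system I = K w, where column j of the kernel K holds the scattering pattern of size class j and w is the non-negative size distribution. A minimal sketch of the non-negative least-squares step follows (with a synthetic placeholder kernel, not a real Mie/Fraunhofer model); the paper's point is that a kernel built from the spherical model, applied to needle-like particles, yields spurious small-size modes in w.

        import numpy as np
        from scipy.optimize import nnls

        # Sketch of the laser-diffraction inverse problem: measured
        # angular intensities I = K @ w. K here is a synthetic
        # placeholder, NOT a physical scattering kernel.
        rng = np.random.default_rng(0)
        n_angles, n_sizes = 60, 20
        K = np.abs(rng.normal(size=(n_angles, n_sizes)))
        w_true = np.zeros(n_sizes)
        w_true[12] = 1.0                                   # monomodal sample
        I = K @ w_true + 1e-3 * rng.normal(size=n_angles)  # noisy measurement

        w_est, _ = nnls(K, I)                              # enforce w >= 0
        print("recovered modes:", np.nonzero(w_est > 0.05)[0])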

    Towards Guidelines for Preventing Critical Requirements Engineering Problems

    [Context] Problems in Requirements Engineering (RE) can lead to serious consequences during the software development lifecycle. [Goal] The goal of this paper is to propose empirically based guidelines that can be used by different types of organisations according to their size (small, medium or large) and process model (agile or plan-driven) to help them prevent such problems. [Method] We analysed data from a survey on RE problems answered by 228 organisations in 10 different countries. [Results] We identified the most critical RE problems, their causes and mitigation actions, organising this information by clusters of size and process model. Finally, we analysed the causes and mitigation actions of the critical problems of each cluster to gain further insight into how to prevent them. [Conclusions] Based on our results, we suggest preliminary guidelines for preventing critical RE problems in response to the context characteristics of companies. Comment: Proceedings of the 42nd Euromicro Conference on Software Engineering and Advanced Applications, 2016

    HardIDX: Practical and Secure Index with SGX

    Software-based approaches for search over encrypted data are still challenged either by a lack of proper, low-leakage encryption or by slow performance. Existing hardware-based approaches do not scale well due to hardware limitations and software designs that are not specifically tailored to the hardware architecture, and they are rarely well analysed for their security (e.g., the impact of side channels). Additionally, existing hardware-based solutions often have a large code footprint in the trusted environment, which is susceptible to software compromises. In this paper we present HardIDX: a hardware-based approach, leveraging Intel's SGX, for search over encrypted data. It implements only the security-critical core, i.e., the search functionality, in the trusted environment and resorts to untrusted software for the remainder. HardIDX is deployable as a highly performant encrypted database index: search time is logarithmic in the size of the index, and searches are performed within a few milliseconds rather than seconds. We formally model and prove the security of our scheme, showing that its leakage is equivalent to that of the best known searchable encryption schemes. Our implementation has a very small code and memory footprint yet still scales to virtually unlimited search index sizes, i.e., size is limited only by the general (non-secure) hardware resources.
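
    The enclave/untrusted split described above can be pictured with a short sketch; the node layout, crypto library and function names below are assumptions for illustration (Fernet standing in for enclave sealing), not HardIDX's actual design.

        import bisect
        import pickle
        from cryptography.fernet import Fernet

        key = Fernet(Fernet.generate_key())  # key held only inside the "enclave"

        def seal(node):                      # encrypt a node for untrusted storage
            return key.encrypt(pickle.dumps(node))

        # Untrusted side: stores only encrypted index nodes, addressed by id.
        store = {
            "leaf0": seal({"leaf": True, "keys": [1, 4], "values": ["a", "b"]}),
            "leaf1": seal({"leaf": True, "keys": [7, 9], "values": ["c", "d"]}),
            "root":  seal({"leaf": False, "keys": [7], "children": ["leaf0", "leaf1"]}),
        }

        def enclave_search(store, node_id, q):
            # Only this function sees plaintext; it decrypts one node per
            # tree level, so the search stays logarithmic in the index size.
            node = pickle.loads(key.decrypt(store[node_id]))
            if not node["leaf"]:
                child = node["children"][bisect.bisect_right(node["keys"], q)]
                return enclave_search(store, child, q)
            i = bisect.bisect_left(node["keys"], q)
            if i < len(node["keys"]) and node["keys"][i] == q:
                return node["values"][i]
            return None

        print(enclave_search(store, "root", 7))  # -> "c"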