    Attack-Surface Metrics, OSSTMM and Common Criteria Based Approach to “Composable Security” in Complex Systems

    In recent studies on Complex Systems and Systems-of-Systems theory, a huge effort has been devoted to behavioral problems, i.e. the possibility of controlling a desired overall or end-to-end behavior by acting on the individual elements that constitute the system itself. This problem is particularly important in "SMART" environments, where the huge number of devices, their significant computational capabilities and their tight interconnection produce a complex architecture for which it is difficult to predict (and control) a desired behavior; furthermore, if the scenario is allowed to evolve dynamically through modification of both the topology and the subsystem composition, then the control problem becomes a real challenge. In this perspective, the purpose of this paper is to address a specific class of control problems in complex systems, the "composability of security functionalities", recently introduced by European funded research through the pSHIELD and nSHIELD projects (ARTEMIS-JU programme). In a nutshell, the objective of this research is to define a control framework that, given a target security level for a specific application scenario, is able to i) discover the system elements, ii) quantify the security level of each element as well as its contribution to the security of the overall system, and iii) compute the control action to be applied to such elements to reach the security target. The main innovations proposed by the authors are: i) the definition of a comprehensive methodology to quantify the security of a generic system independently of the technology and the environment, and ii) the integration of the derived metrics into a closed-loop scheme that allows real-time control of the system. The solution described in this work builds on the proof-of-concept performed in the early phase of the pSHIELD research and enriches it with an innovative, soundly founded metric, able to potentially cope with any kind of application scenario (railways, automotive, manufacturing, ...).
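
    A minimal sketch, in Python, of the closed-loop idea the abstract describes: measure per-element security levels, aggregate them into a system-level figure, and act on the weakest element until the target is met. The scoring, aggregation, and hardening functions below are hypothetical placeholders, not the pSHIELD/nSHIELD metric itself.

    # Hypothetical sketch of a closed-loop "composability of security" controller.
    # score() and harden() stand in for the paper's attack-surface/OSSTMM-based
    # metric and its control actions; they are illustrative only.

    def score(element):
        """Return a security level in [0, 1] for one system element (placeholder)."""
        return element["security_level"]

    def system_level(elements):
        """Aggregate per-element scores; weakest-link aggregation is assumed here."""
        return min(score(e) for e in elements)

    def harden(element, step=0.1):
        """Apply a control action that raises an element's level (placeholder)."""
        element["security_level"] = min(1.0, element["security_level"] + step)

    def control_loop(elements, target, max_iterations=100):
        """Iterate: measure the overall level, harden the weakest element, repeat."""
        for _ in range(max_iterations):
            if system_level(elements) >= target:
                return True
            harden(min(elements, key=score))
        return False

    elements = [{"name": "sensor", "security_level": 0.4},
                {"name": "gateway", "security_level": 0.7}]
    print(control_loop(elements, target=0.8))  # True once every element reaches 0.8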

    From software APIs to web service ontologies: a semi-automatic extraction method

    Successful employment of semantic web services depends on the availability of high-quality ontologies to describe the domains of these services. As always, building such ontologies is difficult and costly, thus hampering web service deployment. Our hypothesis is that, since the functionality offered by a web service is reflected by the underlying software, domain ontologies could be built by analyzing the documentation of that software. We verify this hypothesis in the domain of RDF ontology storage tools. We implemented and fine-tuned a semi-automatic method to extract domain ontologies from software documentation. The quality of the extracted ontologies was verified against a high-quality hand-built ontology of the same domain. Despite the low linguistic quality of the corpus, our method extracts a considerable amount of information for a domain ontology.
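
    As a rough illustration of the kind of extraction step such a method involves (the paper's actual pipeline is more elaborate), a frequency-based candidate-term extractor over documentation text; the toy corpus, stopword list, and threshold are invented for the example.

    # Toy candidate-term extraction from software documentation: words that
    # recur across the corpus are proposed as ontology concepts. A sketch of
    # the general idea only, not the paper's method.
    import re
    from collections import Counter

    docs = [
        "The Repository stores RDF triples. Each Repository supports SPARQL queries.",
        "A Connection is opened against the Repository before adding RDF statements.",
    ]

    tokens = re.findall(r"[A-Za-z]+", " ".join(docs))
    counts = Counter(t.lower() for t in tokens)

    # Propose terms that recur across the corpus (threshold chosen arbitrarily).
    stopwords = {"the", "a", "is", "each", "before", "against"}
    candidates = [t for t, n in counts.items() if n >= 2 and t not in stopwords]
    print(sorted(candidates))  # ['rdf', 'repository']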

    Rigorously assessing software reliability and safety

    This paper summarises the state of the art in the assessment of software reliability and safety ("dependability") and describes some promising developments. A sound demonstration of very high dependability is still impossible before the software enters operation, but research is finding ways to make rigorous assessment increasingly feasible. While refined mathematical techniques cannot take the place of factual knowledge, they can allow the decision-maker to draw more accurate conclusions from the knowledge that is available.
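
    One standard result in this area (not specific to this paper, but of the kind it surveys) relates failure-free testing to a confidence bound: after n independent failure-free demands, the claim "probability of failure per demand <= p" holds with confidence 1 - (1 - p)^n. A small sketch of what that implies for test budgets:

    # Classical conservative bound: if a program survives n independent demands
    # without failure, we can assert "probability of failure per demand <= p"
    # with confidence 1 - (1 - p)**n. Solving for n gives the required budget.
    import math

    def demands_needed(p, confidence):
        """Failure-free demands required to claim failure probability <= p."""
        return math.ceil(math.log(1 - confidence) / math.log(1 - p))

    # Demonstrating a 1e-4 failure probability at 99% confidence already
    # requires ~46,000 failure-free demands -- which is why demonstrating
    # very high dependability before operation is so hard.
    print(demands_needed(1e-4, 0.99))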

    A Quality Model for Actionable Analytics in Rapid Software Development

    Background: Accessing relevant data on the product, process, and usage perspectives of software, as well as integrating and analyzing such data, is crucial for obtaining reliable and timely actionable insights aimed at continuously managing software quality in Rapid Software Development (RSD). In this context, several software analytics tools have been developed in recent years. However, there is a lack of explainable software analytics that software practitioners trust. Aims: We aimed at creating a quality model (called the Q-Rapids quality model) for actionable analytics in RSD, implementing it, and evaluating its understandability and relevance. Method: We performed workshops at four companies in order to determine relevant metrics as well as product and process factors. We also elicited how these metrics and factors are used and interpreted by practitioners when making decisions in RSD. We specified the Q-Rapids quality model by comparing and integrating the results of the four workshops. Then we implemented the Q-Rapids tool to support the usage of the Q-Rapids quality model as well as the gathering, integration, and analysis of the required data. Afterwards we installed the Q-Rapids tool in the four companies and performed semi-structured interviews with eight product owners to evaluate the understandability and relevance of the Q-Rapids quality model. Results: The participants of the evaluation perceived the metrics as well as the product and process factors of the Q-Rapids quality model as understandable. They also considered the Q-Rapids quality model relevant for identifying product and process deficiencies (e.g., blocking code situations). Conclusions: By means of heterogeneous data sources, the Q-Rapids quality model enables detecting problems that take more time to find manually and adds transparency across the perspectives of system, process, and usage. Comment: This is an Author's Accepted Manuscript of a paper to be published by IEEE in the 44th Euromicro Conference on Software Engineering and Advanced Applications (SEAA) 2018. The final authenticated version will be available online.
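
    A minimal sketch of how a hierarchical quality model of this general shape can be computed: raw metrics are normalised to [0, 1] and aggregated into factors by weighted averages, with a threshold flagging deficiencies. The metric names, weights, and threshold are invented for illustration; they are not the actual Q-Rapids model.

    # Illustrative aggregation of raw metrics into quality factors, as in
    # hierarchical quality models. All names and weights are hypothetical.

    metrics = {"test_coverage": 0.72, "build_success_rate": 0.95,
               "blocker_free_ratio": 0.10}

    factors = {
        # factor: list of (metric, weight); weights sum to 1 per factor
        "code_quality": [("test_coverage", 0.6), ("build_success_rate", 0.4)],
        "blocking_code": [("blocker_free_ratio", 1.0)],
    }

    def assess(metrics, factors, threshold=0.5):
        for name, parts in factors.items():
            value = sum(metrics[m] * w for m, w in parts)
            flag = "OK" if value >= threshold else "DEFICIENT"
            print(f"{name}: {value:.2f} ({flag})")

    assess(metrics, factors)
    # code_quality: 0.81 (OK)
    # blocking_code: 0.10 (DEFICIENT)  <- the "blocking code" case from the paper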

    Localized Mobility Management for SDN-Integrated LTE Backhaul Networks

    Small cells (SCells) and Software Defined Networking (SDN) are two key enablers for meeting the evolutionary requirements of future telecommunication networks, but both are still at an early study stage and face many challenges. In this paper, the problem of mobility management in an SDN-integrated LTE (Long Term Evolution) mobile backhaul network is investigated. An 802.1ad double-tagging scheme is designed for traffic forwarding between the Serving Gateway (S-GW) and the SCell, with QoS (Quality of Service) differentiation support. In addition, a dynamic localized forwarding scheme is proposed for packet delivery of ongoing traffic sessions, to facilitate the mobility of UEs within a dense SCell network. With this proposal, the data packets of an ongoing session can be forwarded from the source SCell to the target SCell instead of switching the whole forwarding path, which drastically reduces the path-switch signalling cost in the SDN network. Numerical results show that, compared with the traditional path-switch policy, more than 50% of the signalling cost can be saved, even accounting for the forwarding-path deletion when a session ceases. The performance of data delivery is also analysed, demonstrating that the extra delivery cost introduced is acceptable and even negligible in the case of a short forwarding chain or large backhaul latency.
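
    A back-of-the-envelope model of the trade-off the abstract describes: switching the whole S-GW-to-SCell path at every handover versus appending one local SCell-to-SCell forwarding hop and only re-anchoring when the chain grows too long. The cost units, path length, and chain limit are invented for illustration, not the paper's numerical model.

    # Toy signalling-cost comparison: full path switch vs. localized forwarding.
    # Costs are abstract units per control message; all values are hypothetical.

    PATH_LEN = 5     # rule updates needed to switch the whole forwarding path
    FORWARD_HOP = 1  # rule updates to add one SCell-to-SCell forwarding hop

    def path_switch_cost(handovers):
        return handovers * PATH_LEN

    def localized_cost(handovers, chain_limit=3):
        # Forward locally until the chain reaches chain_limit, then re-anchor.
        full_switches = handovers // chain_limit
        local_hops = handovers - full_switches
        return full_switches * PATH_LEN + local_hops * FORWARD_HOP

    for h in (3, 6, 12):
        a, b = path_switch_cost(h), localized_cost(h)
        print(f"{h} handovers: path switch {a}, localized {b} "
              f"({100 * (a - b) // a}% saved)")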

    Applications of Temporal Graph Metrics to Real-World Networks

    Real-world networks exhibit rich temporal information: friends are added and removed over time in online social networks; the seasons dictate the predator-prey relationship in food webs; and the propagation of a virus depends on the network of human contacts throughout the day. Recent studies have demonstrated that static network analysis is perhaps unsuitable for studying real-world networks, since static paths ignore time order, which in turn leads static shortest paths to overestimate the available links and underestimate their true lengths. Temporal extensions to centrality and efficiency metrics, based on temporal shortest paths, have also been proposed. We apply these temporal metrics to a range of real-world case studies: firstly, we analyse the roles of key individuals in a corporate network, ranked according to temporal centrality, within the context of a bankruptcy scandal; secondly, we show how such temporal metrics can be used to study the robustness of temporal networks in the presence of random errors and intelligent attacks; thirdly, we study containment schemes for mobile-phone malware that can spread via short-range radio, similarly to biological viruses; finally, we study how the temporal structure of human interactions can be exploited to effectively immunise human populations. Through these applications we demonstrate that temporal metrics provide a more accurate and effective analysis of real-world networks than their static counterparts. Comment: 25 pages.
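
    A minimal example of the temporal-path notion these metrics rest on: earliest-arrival ("foremost") times from a source, computed by a single pass over contacts sorted by time. The directed contact list is an invented toy network.

    # Earliest-arrival times on a temporal network: a contact (u, v, t) can only
    # extend a path that reached u no later than t, so one chronological scan
    # suffices. A static shortest-path view would wrongly treat all edges as
    # simultaneously available, as the abstract points out.

    contacts = [("a", "b", 1), ("b", "c", 2), ("a", "c", 5), ("c", "d", 3)]

    def earliest_arrival(contacts, source):
        arrival = {source: 0}
        for u, v, t in sorted(contacts, key=lambda c: c[2]):
            if arrival.get(u, float("inf")) <= t and t < arrival.get(v, float("inf")):
                arrival[v] = t
        return arrival

    print(earliest_arrival(contacts, "a"))
    # {'a': 0, 'b': 1, 'c': 2, 'd': 3} -- c is reached at t=2 via b, and d at
    # t=3, even though the direct a->c contact only occurs later, at t=5.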

    Architecture, design and source code comparison of ns-2 and ns-3 network simulators

    ns-2 and its successor ns-3 are discrete-event simulators. ns-3 is still under development, but it offers some interesting characteristics for developers, while ns-2 still has a large user base. This paper highlights the current differences between the two tools from a developer's point of view. Leaving performance and resource consumption aside, the technical issues described in this paper may help in choosing one or the other alternative, depending on the simulation and project-management requirements. Ministerio de Educación y Ciencia TIN2006-15617-C03-03; Junta de Andalucía P06-TIC-229

    Observing the clouds : a survey and taxonomy of cloud monitoring

    This research was supported by a Royal Society Industry Fellowship and an Amazon Web Services (AWS) grant. Date of acceptance: 10/12/2014. Monitoring is an important aspect of designing and maintaining large-scale systems. Cloud computing presents a unique set of challenges to monitoring, including on-demand infrastructure, unprecedented scalability, rapid elasticity and performance uncertainty. There is a wide range of monitoring tools originating from cluster and high-performance computing, grid computing and enterprise computing, as well as a series of newer bespoke tools designed exclusively for cloud monitoring. These tools share a number of common elements and designs, which address the demands of cloud monitoring to varying degrees. This paper performs an exhaustive survey of contemporary monitoring tools, from which we derive a taxonomy that examines how effectively existing tools and designs meet the challenges of cloud monitoring. We conclude by examining the socio-technical aspects of monitoring, and investigate the engineering challenges and practices behind implementing monitoring strategies for cloud computing.