
    The Knowledge Application and Utilization Framework Applied to Defense COTS: A Research Synthesis for Outsourced Innovation

Purpose – Militaries of developing nations face increasing budget pressures, a high operations tempo, a rapid pace of technological change, and adversaries that often meet or beat government capabilities using commercial off-the-shelf (COTS) technologies. The adoption of COTS products into defense acquisitions has been proposed to help meet these challenges by essentially outsourcing new product development and innovation. This research synthesizes the extant literature to develop a framework for managing innovation and knowledge flows. Design/Methodology/Approach – A literature review of 62 sources was conducted with the objective of identifying the antecedents (barriers and facilitators) and consequences of COTS adoption. Findings – The DoD COTS literature consists predominantly of industry case studies, and there is a strong need for further academically rigorous study. The existing rigorous research indicates that knowledge management is central to government innovation that relies heavily on commercial suppliers. Research Limitations/Implications – Academically rigorous studies to date tend to depend on measures derived from information systems research, relying on user satisfaction as the outcome. Our findings indicate that user satisfaction has no relationship to COTS success; technically complex governmental purchases may be too distant from users or may have socio-economic goals that supersede user satisfaction. The knowledge acquisition and utilization framework worked well to explain the innovative process in COTS. Practical Implications – Where past research in the commercial context found technological knowledge to outweigh market knowledge in importance, our research found the opposite. Managers in government, and those marketing to government, should be aware of the importance of market knowledge for defense COTS innovation, especially for commercial companies that work as system integrators. Originality/Value – From the literature emerged a framework of COTS product usage and a scale to measure COTS product appropriateness, which should help guide COTS adoption decisions and manage COTS implementations ex post.

    Finding the Optimal Balance between Over and Under Approximation of Models Inferred from Execution Logs

Models inferred from execution traces (logs) may admit more behaviours than those possible in the real system (over-approximation) or may exclude behaviours that can indeed occur in the real system (under-approximation). Both problems negatively affect model-based testing: over-approximation results in infeasible test cases, i.e., test cases that cannot be activated by any input data, while under-approximation results in missing test cases, i.e., system behaviours that are not represented in the model are never tested. In this paper, we balance over- and under-approximation of inferred models by resorting to multi-objective optimization achieved by means of two search-based algorithms: a multi-objective Genetic Algorithm (GA) and NSGA-II. We report results on two open-source web applications and compare the multi-objective optimization to the state-of-the-art KLFA tool. We show that it is possible to identify regions of the Pareto front that contain models which violate fewer application constraints and have a higher bug detection ratio. The Pareto fronts generated by the multi-objective GA contain a region where models violate on average 2% of an application's constraints, compared to 2.8% for NSGA-II and 28.3% for the KLFA models. Similarly, it is possible to identify a region of the Pareto front where the multi-objective GA inferred models have an average bug detection ratio of 110:3 and the NSGA-II inferred models have an average bug detection ratio of 101:6, compared to a bug detection ratio of 310928:13 for the KLFA tool. © 2012 IEEE
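For illustration, the sketch below selects a Pareto front over two minimized objectives, standing in for the over-/under-approximation trade-off the paper optimizes. The model names, scores, and dominance routine are invented for the example and are not taken from the paper's implementation.

```python
# Illustrative sketch, not the paper's implementation: keeping the
# non-dominated (Pareto-optimal) candidates under two minimized
# objectives, here standing in for over- and under-approximation.
from typing import Dict, Tuple

Objectives = Tuple[float, float]  # (over-approx score, under-approx score)

def dominates(a: Objectives, b: Objectives) -> bool:
    """a dominates b if it is no worse on both objectives and strictly
    better on at least one (both objectives are minimized)."""
    return a[0] <= b[0] and a[1] <= b[1] and a != b

def pareto_front(models: Dict[str, Objectives]) -> Dict[str, Objectives]:
    """Keep only the models not dominated by any other candidate."""
    return {
        name: objs
        for name, objs in models.items()
        if not any(dominates(other, objs) for other in models.values())
    }

if __name__ == "__main__":
    candidates = {  # hypothetical inferred models and their scores
        "model_A": (0.10, 0.40),
        "model_B": (0.20, 0.20),
        "model_C": (0.30, 0.10),
        "model_D": (0.25, 0.35),  # dominated by model_B
    }
    print(pareto_front(candidates))  # model_A, model_B, model_C remain
```

A search-based algorithm such as NSGA-II evolves a population of candidates toward such a front rather than enumerating it exhaustively; the regions reported in the paper are then identified on the resulting front.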

    Models, Techniques, and Metrics for Managing Risk in Software Engineering

The field of Software Engineering (SE) is the study of systematic and quantifiable approaches to software development, operation, and maintenance. This thesis presents a set of scalable and easily implemented techniques for quantifying and mitigating risks associated with the SE process. The thesis comprises six papers corresponding to SE knowledge areas such as software requirements, testing, and management. The techniques for risk management are drawn from stochastic modeling and operational research. The first two papers relate to software testing and maintenance. The first paper describes and validates a novel iterative-unfolding technique for filtering a set of execution traces relevant to a specific task. The second paper analyzes and validates the applicability of several entropy measures to the trace classification described in the first paper. The techniques in these two papers can speed up problem determination of defects encountered by customers, leading to improved organizational response, increased customer satisfaction, and eased resource constraints. The third and fourth papers are applicable to maintenance, overall software quality, and SE management. The third paper uses tools from Extreme Value Theory and Queuing Theory to derive and validate metrics based on defect rediscovery data. The metrics can aid the allocation of resources to service and maintenance teams, highlight gaps in quality assurance processes, and help assess the risk of using a given software product. The fourth paper characterizes and validates a technique for automatic selection and prioritization of a minimal set of customers for profiling. The minimal set is obtained using Binary Integer Programming and prioritized using a greedy heuristic. Profiling the resulting customer set leads to enhanced comprehension of user behaviour, which improves test specifications and clarifies quality assurance policies, thereby reducing the risks associated with unsatisfactory product quality. The fifth and sixth papers pertain to software requirements. The fifth paper models the relation between requirements and their underlying assumptions and measures the risk associated with failure of those assumptions, using Boolean networks and stochastic modeling. The sixth paper models the risk associated with injection of requirements late in the development cycle with the help of stochastic processes.
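The abstract does not name the specific entropy measures the thesis validates, so the following is only a minimal sketch using Shannon entropy over event-type frequencies, one common measure of how repetitive or diverse an execution trace is and a plausible feature for trace classification.

```python
# Illustrative sketch only: Shannon entropy over the event-type
# frequencies of an execution trace. The thesis's actual entropy
# measures are not specified in the abstract; this is an assumption.
import math
from collections import Counter
from typing import Sequence

def shannon_entropy(trace: Sequence[str]) -> float:
    """Shannon entropy (in bits) of the event-type distribution.
    Low entropy -> repetitive trace; high entropy -> diverse events."""
    counts = Counter(trace)
    total = len(trace)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

if __name__ == "__main__":
    repetitive = ["open", "read", "read", "read", "close"]
    diverse = ["open", "seek", "read", "write", "close"]
    print(f"repetitive: {shannon_entropy(repetitive):.3f} bits")
    print(f"diverse:    {shannon_entropy(diverse):.3f} bits")
```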

    Design and development of protocol log analyzer for cellular modem

Abstract. Telecommunications protocols and cellular modems are used in devices to enable wireless communication. Cellular modems produce log files that engineers must analyze when issues occur. Performing this analysis manually for large logs can be very time-consuming, so various approaches exist to automate or simplify the process. This thesis presents the design and development of a cellular modem log analysis tool. The tool is designed to account for the peculiarities of telecommunications protocols and cellular modems, especially the 5G New Radio Radio Resource Control (RRC) protocol. A notation for defining the analysis rules used by the tool is also presented. The developed tool is a proof of concept, with the focus on how the tool performs the analysis and how the notation can be used to define the desired analysis rules. The features of the notation include defining the expected content of protocol messages and the order of log message sequences. The tool performs well with artificial modem logs, though some flaws in the notation were identified. In the future, the tool and the notation should be extended to support real cellular modem logs and evaluated in field use cases by cellular modem engineers.
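The thesis's notation is not reproduced in this abstract; the sketch below is a hypothetical rendering of its two stated rule kinds, expected message content and expected message order, with message names loosely modeled on a 5G NR RRC connection setup. None of these names or functions come from the thesis itself.

```python
# Hypothetical sketch of the two rule kinds named in the abstract
# (expected message content, expected message order). The thesis's
# actual notation and tool APIs are assumptions, not reproduced here.
from typing import Dict, List

Log = List[Dict[str, str]]  # one dict per parsed log message

def check_content(log: Log, message: str, expected: Dict[str, str]) -> bool:
    """Every occurrence of `message` must carry the expected field values."""
    return all(
        all(entry.get(k) == v for k, v in expected.items())
        for entry in log
        if entry["msg"] == message
    )

def check_order(log: Log, sequence: List[str]) -> bool:
    """The named messages must appear in the log in this relative order
    (a subsequence check: the iterator is consumed left to right)."""
    it = iter(entry["msg"] for entry in log)
    return all(name in it for name in sequence)

if __name__ == "__main__":
    log = [  # toy messages, loosely modeled on RRC connection setup
        {"msg": "RRCSetupRequest", "cause": "mo-Data"},
        {"msg": "RRCSetup"},
        {"msg": "RRCSetupComplete"},
    ]
    print(check_content(log, "RRCSetupRequest", {"cause": "mo-Data"}))  # True
    print(check_order(log, ["RRCSetupRequest", "RRCSetup", "RRCSetupComplete"]))  # True
```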

    Log-based software monitoring: a systematic mapping study

Modern software development and operations rely on monitoring to understand how systems behave in production. The data provided by application logs and the runtime environment are essential to detect and diagnose undesired behavior and to improve system reliability. However, despite the rich ecosystem of industry-ready log solutions, monitoring complex systems and getting insights from log data remains a challenge. Researchers and practitioners have been actively working to address several challenges related to logs, e.g., how to provide better tooling support for logging decisions to developers, how to effectively process and store log data, and how to extract insights from log data. A holistic view of the research effort on logging practices and automated log analysis is key to providing directions and disseminating the state of the art for technology transfer. In this paper, we study 108 papers (72 research track papers, 24 journal papers, and 12 industry track papers) from different communities (e.g., machine learning, software engineering, and systems) and structure the research field in light of the life-cycle of log data. Our analysis shows that (1) logging is challenging not only in open-source projects but also in industry, (2) machine learning is a promising approach to enable contextual analysis of source code for log recommendation, but further investigation is required to assess the usability of those tools in practice, (3) few studies have approached efficient persistence of log data, and (4) there are open opportunities to analyze application logs and to evaluate state-of-the-art log analysis techniques in a DevOps context.