
    Software reliability and dependability: a roadmap

    This roadmap identifies several priorities: shifting the focus from software reliability to user-centred measures of dependability in complete software-based systems; influencing design practice to facilitate dependability assessment; propagating awareness of dependability issues and of existing, useful methods; injecting rigour into the use of process-related evidence for dependability assessment; and better understanding diversity and variation as drivers of dependability. Bev Littlewood is founder-Director of the Centre for Software Reliability and Professor of Software Engineering at City University, London. Prof Littlewood has worked for many years on problems associated with the modelling and evaluation of the dependability of software-based systems; he has published many papers in international journals and conference proceedings and has edited several books. Much of this work has been carried out in collaborative projects, including the successful EC-funded projects SHIP, PDCS, PDCS2, and DeVa. He has also been employed as a consultant.

    Report from GI-Dagstuhl Seminar 16394: Software Performance Engineering in the DevOps World

    This report documents the program and the outcomes of GI-Dagstuhl Seminar 16394, "Software Performance Engineering in the DevOps World". The seminar addressed the problem of performance-aware DevOps. Both DevOps and performance engineering have been growing trends over the past one to two years, in no small part due to the rising importance of identifying performance anomalies in the operations (Ops) of cloud and big data systems and feeding these back to development (Dev). So far, however, the research community has treated software engineering, performance engineering, and cloud computing mostly as separate research areas. We aimed to identify opportunities for cross-community collaboration and to set the path for long-lasting collaborations towards performance-aware DevOps. The main goal of the seminar was to bring together young researchers (PhD students in a later stage of their PhD, as well as postdocs and junior professors) in the areas of (i) software engineering, (ii) performance engineering, and (iii) cloud computing and big data, to present their current research projects, exchange experience and expertise, discuss research challenges, and develop ideas for future collaborations.

    Cognition-Based Networks: A New Perspective on Network Optimization Using Learning and Distributed Intelligence

    IEEE Access, Volume 3, 2015, Article number 7217798, Pages 1512-1530 (Open Access). Zorzi, M., Zanella, A., Testolin, A., De Filippo De Grazia, M., and Zorzi, M. (Department of Information Engineering and Department of General Psychology, University of Padua, Padua, Italy; IRCCS San Camillo Foundation, Venice-Lido, Italy). In response to the new challenges in the design and operation of communication networks, and taking inspiration from how living beings deal with complexity and scalability, in this paper we introduce an innovative system concept called COgnition-BAsed NETworkS (COBANETS). The proposed approach develops around the systematic application of advanced machine learning techniques and, in particular, of unsupervised deep learning and probabilistic generative models for system-wide learning, modeling, optimization, and data representation. Moreover, in COBANETS we propose to combine this learning architecture with emerging network virtualization paradigms, which make it possible to actuate automatic optimization and reconfiguration strategies at the system level, thus fully unleashing the potential of the learning approach. Compared with past and current research efforts in this area, the technical approach outlined in this paper is deeply interdisciplinary and more comprehensive, calling for the synergic combination of the expertise of computer scientists, communications and networking engineers, and cognitive scientists, with the ultimate aim of breaking new ground through a profound rethinking of how the modern understanding of cognition can be used in the management and optimization of telecommunication networks.
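    To make the core idea concrete, here is a minimal, hedged sketch of the generative-model pattern the abstract describes: fit a probabilistic model to observed network states and use the likelihood of new states to decide when to trigger reconfiguration. The data, the simple Gaussian model standing in for the paper's deep generative models, and the threshold are all illustrative assumptions, not the COBANETS architecture itself.

        # Sketch only: a Gaussian generative model over per-link utilisation
        # vectors stands in for the deep generative models proposed in the paper.
        import numpy as np

        rng = np.random.default_rng(0)

        # Synthetic "normal operation" states: utilisation of 5 links (assumption).
        normal = rng.normal(loc=0.4, scale=0.05, size=(1000, 5))

        # Fit the generative model: mean vector and covariance of normal states.
        mu = normal.mean(axis=0)
        inv_cov = np.linalg.inv(np.cov(normal, rowvar=False))

        def log_score(x):
            # Unnormalised Gaussian log-likelihood of a state vector.
            d = x - mu
            return -0.5 * d @ inv_cov @ d

        # States scoring below the 1st percentile of normal operation would
        # trigger a system-level reconfiguration in a virtualized network.
        threshold = np.quantile([log_score(x) for x in normal], 0.01)
        congested = np.array([0.4, 0.4, 0.95, 0.4, 0.4])  # one overloaded link
        if log_score(congested) < threshold:
            print("anomalous network state -> trigger reconfiguration")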

    Toward a process theory of entrepreneurship: revisiting opportunity identification and entrepreneurial actions

    This dissertation studies the early development of new ventures and small businesses, following the entrepreneurship process from initial ideas to viable ventures. I unpack the micro-foundations of entrepreneurial actions and examine how new ventures communicate quality signals to investors to finance their growth path. The dissertation includes two qualitative papers and one quantitative study. The qualitative papers employ an inductive multiple-case approach, covering seven medical equipment manufacturers (new ventures) in a nascent market context (the mobile health industry) across six U.S. states, together with a secondary data analysis, to understand the emergence of opportunities and the early development of new ventures. The quantitative chapter draws on 770 IPOs in the U.S. manufacturing industries and investigates the legitimation strategies young ventures use to gain resources from targeted resource-holders.

    Creating business value from big data and business analytics : organizational, managerial and human resource implications

    This paper reports on a research project, funded by the EPSRC's NEMODE (New Economic Models in the Digital Economy, Network+) programme, that explores how organizations create value from their increasingly big data and the challenges they face in doing so. Three case studies are reported of large organizations with a formal business analytics group and data volumes that can be considered 'big'. The case organizations are MobCo, a mobile telecoms operator; MediaCo, a television broadcaster; and CityTrans, a provider of transport services to a major city. Analysis of the cases is structured around a framework in which data and value creation are mediated by the organization's business analytics capability. This capability is then studied through a sociotechnical lens of organization/management, process, people, and technology. Twenty key findings are identified. In the area of data and value creation: 1. ensure data quality; 2. build trust and permissions platforms; 3. provide adequate anonymization; 4. share value with data originators; 5. create value through data partnerships; 6. create public as well as private value; 7. monitor and plan for changes in legislation and regulation. In organization and management: 8. build a corporate analytics strategy; 9. plan for organizational and cultural change; 10. build deep domain knowledge; 11. structure the analytics team carefully; 12. partner with academic institutions; 13. create an ethics approval process; 14. make analytics projects agile; 15. explore and exploit in analytics projects. In technology: 16. use visualization as story-telling; 17. be agnostic about technology while the landscape is uncertain (i.e., maintain a focus on value). In people and tools: 18. look for key data scientist attributes (curious, problem-focused); 19. treat the data scientist as a 'bricoleur'; 20. acquire and retain data scientists through challenging work. For organizations seeking to create value from their data, the paper further proposes a model of the analytics eco-system that places the business analytics function in a broad organizational context, and a process model for analytics implementation together with a six-stage maturity model.

    Managing Supplier Involvement in New Product Development: A Multiple-Case Study

    Existing studies of supplier involvement in new product development have mainly focused on project-related short-term processes and success factors. This study validates and extends an existing exploratory framework, which comprises both long-term strategic processes and short-term operational processes related to supplier involvement. The empirical validation is based on a multiple-case study of supplier collaborations at a manufacturer in the copier and printer industry. The analysis of eight cases of supplier involvement reveals that the results of supplier-manufacturer collaborations, and the associated issues and problems, are best explained by the extent to which the manufacturer manages supplier involvement in both the short term and the long term. We find that our initial framework is helpful in understanding why certain collaborations are not effectively managed, yet conclude that the existing analytical distinction between four different management areas does not sufficiently reflect empirical reality. This leads us to reconceptualize and further detail the framework. Instead of four managerial areas, we propose to distinguish between the Strategic Management arena and the Operational Management arena. The Strategic Management arena contains processes that together provide long-term, strategic direction and operational support for project teams adopting supplier involvement. These processes also contribute to building up a supplier base that can meet current and future technology and capability needs. The Operational Management arena contains processes aimed at planning, managing, and evaluating the actual collaborations in a specific development project. The results of this study suggest that success in involving suppliers in product development is reflected in the firm's ability to capture both short-term and long-term benefits. If companies spend most of their time on operational management in development projects, they will fail to use the 'leverage' effect of planning and preparing such involvement through strategic management activities, and they will not be able to capture the long-term technology and learning benefits that may spin off from individual projects. Long-term collaboration benefits can only be captured if a company builds long-term relationships with key suppliers, in which it develops learning routines and ensures that the capability sets of both parties remain aligned and useful for future joint projects.
    Keywords: Purchasing; Innovation; New Product Development; R&D Management; Supplier Relations

    A simplified activity-based costing approach for SMEs : the case study of an Italian small road company

    Purpose: The paper proposes an original conceptual model for designing a simplified Activity-Based Costing (ABC) approach for Small and Medium-sized Enterprises (SMEs), focusing on the transport sector. Design/Methodology/Approach: The model is designed starting from the distinctive characteristics of SMEs' collaborative culture. The approach is then tested in the case of a small Italian road transport company. Findings: The simplified ABC, which was gradually introduced in the SME, allowed the firm to gain confidence with the costing system. Moreover, the discussion of the results led to identifying the main areas to improve. Practical Implications: Costing systems based on collaboration can lead to operational improvements in SMEs operating in dynamic and competitive sectors such as transport. Moreover, advanced technologies may hold a crucial role in their development. Originality/Value: Little research has considered collaboration as a driver for introducing ABC in SMEs. The paper contributes to the literature on simplified managerial approaches, suggesting directions for future research.
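    The mechanics behind any ABC approach, simplified or not, reduce to computing a cost-driver rate per activity and charging cost objects for the driver units they consume. The sketch below illustrates that calculation with invented activities, drivers, and figures; none of them come from the case company.

        # Illustrative activity-based costing: all numbers are assumptions.
        # Each activity has an overhead cost pool and a cost driver.
        activities = {
            # activity: (total activity cost, total driver volume)
            "loading":   (30_000.0, 1_500),   # driver: pallets handled
            "transport": (90_000.0, 60_000),  # driver: km driven
            "invoicing": (12_000.0, 800),     # driver: invoices issued
        }

        # Cost-driver rate = activity cost pool / total driver volume.
        rates = {name: cost / volume for name, (cost, volume) in activities.items()}

        # Driver units consumed by one (hypothetical) customer order.
        order_usage = {"loading": 12, "transport": 450, "invoicing": 2}

        order_cost = 0.0
        for name, qty in order_usage.items():
            charge = rates[name] * qty
            order_cost += charge
            print(f"{name}: {qty} x {rates[name]:.2f} = {charge:.2f}")
        print(f"activity-based cost of the order: {order_cost:.2f}")

    With these figures the order is charged 240.00 for loading, 675.00 for transport, and 30.00 for invoicing, 945.00 in total; a simplified ABC differs from a full one mainly in how few activities and drivers it keeps, not in this arithmetic.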

    Quantify resilience enhancement of UTS through exploiting connect community and internet of everything emerging technologies

    This work aims at investigating and quantifying the enhancement of Urban Transport System (UTS) resilience enabled by the adoption of emerging technologies such as the Internet of Everything (IoE) and the new trend of the Connected Community (CC). A conceptual extension of the Functional Resonance Analysis Method (FRAM) and its formalization are proposed and used to model UTS complexity. The scope is to identify the system functions and their interdependencies, with a particular focus on those that relate to and impact people and communities. Network analysis techniques are applied to the FRAM model to identify and estimate the most critical community-related functions. The notion of Variability Rate (VR) is defined as the amount of output variability generated by an upstream function that can be tolerated or absorbed by a downstream function without significantly increasing its subsequent output variability. A fuzzy-based quantification of the VR from expert judgment has been developed for use when quantitative data are not available. The approach has been applied to a critical scenario (water bomb/flash flooding) considering two cases: with and without CC and IoE implemented. The results show a remarkable VR enhancement when CC and IoE are deployed.
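    As a rough illustration of the fuzzy quantification step, the sketch below maps an expert's linguistic judgment of a downstream function's absorption capacity onto triangular membership functions and defuzzifies the result into a crisp VR estimate. The linguistic terms, their membership parameters, and the weighting are invented for illustration and are not the paper's formalization.

        # Sketch only: crisp Variability Rate from expert judgment via
        # triangular fuzzy sets and centroid defuzzification (all parameters
        # are assumptions).
        import numpy as np

        def triangular(x, a, b, c):
            # Triangular membership function with support [a, c] and peak at b.
            return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

        x = np.linspace(0.0, 1.0, 1001)  # VR scale: 0 = absorbs nothing, 1 = absorbs all

        # Hypothetical linguistic terms for "capacity to absorb upstream variability".
        terms = {
            "low":    (0.0, 0.1, 0.4),
            "medium": (0.2, 0.5, 0.8),
            "high":   (0.6, 0.9, 1.0),
        }

        # Expert judgment: mostly "medium", leaning towards "high".
        weights = {"low": 0.0, "medium": 0.7, "high": 0.3}

        # Aggregate the weighted memberships and take the centroid.
        mu = sum(w * triangular(x, *terms[t]) for t, w in weights.items())
        vr = (mu * x).sum() / mu.sum()  # centroid on a uniform grid
        print(f"crisp VR estimate: {vr:.2f}")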

    Estimating ToE Risk Level using CVSS

    Security management is about calculated risk and requires continuous evaluation to ensure cost, time, and resource effectiveness. Part of this is making future-oriented, cost-benefit investments in security. Security investments must adhere to sound business principles, in which both security and financial aspects play an important role. Information on the current and potential risk level is essential to successfully trade off security and financial aspects. Risk level is the combination of the frequency and impact of a potential unwanted event, often referred to as a security threat or misuse. The paper presents a risk level estimation model that derives risk level as a conditional probability over frequency and impact estimates. The frequency and impact estimates are derived from a set of attributes specified in the Common Vulnerability Scoring System (CVSS). The model works at the level of individual vulnerabilities (as does the CVSS) and is able to compose vulnerabilities into service levels. The service levels define the potential risk levels and are modelled as a Markov process, which is then used to predict the risk level at a particular time.
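    To illustrate the final prediction step, the sketch below propagates a risk-level distribution through a discrete-time Markov chain and reads off the distribution after a number of steps. The three levels and the transition probabilities are invented; in the paper's model they would be derived from the CVSS-based frequency and impact estimates of the composed vulnerabilities.

        # Sketch only: predicting the risk level of a Target of Evaluation (ToE)
        # at time t with a Markov chain (transition probabilities are assumptions).
        import numpy as np

        levels = ["low", "medium", "high"]

        # Row-stochastic transition matrix: P[i, j] = P(next level j | current level i).
        P = np.array([
            [0.90, 0.08, 0.02],
            [0.20, 0.70, 0.10],
            [0.05, 0.25, 0.70],
        ])

        p = np.array([1.0, 0.0, 0.0])  # assume the ToE starts at the "low" level

        for t in range(1, 6):
            p = p @ P  # advance one evaluation period: p(t) = p(t-1) P
            dist = ", ".join(f"{s}={q:.3f}" for s, q in zip(levels, p))
            print(f"t={t}: {dist}")

    The chain propagation itself is the standard p(t) = p(0) P^t computation; the substantive part of the paper's method is composing per-vulnerability CVSS attributes into the service levels and their transition probabilities.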