6,798 research outputs found

    Enterprise Systems Adoption and Firm Performance in Europe: The Role of Innovation

    Despite the ubiquitous proliferation and importance of Enterprise Systems (ES), little research exists on their post-implementation impact on firm performance, especially in Europe. This paper provides representative, large-sample evidence on the differential effects of different ES types on the performance of European enterprises. It also highlights the mediating role of innovation in the process of value creation from ES investments. Empirical data on the adoption of Enterprise Resource Planning (ERP), Supply Chain Management (SCM), Customer Relationship Management (CRM), Knowledge Management System (KMS), and Document Management System (DMS) software is used to investigate the effects on product and process innovation; revenue, productivity, and market share growth; and profitability. The data covers 29 sectors in 29 countries over a 5-year period. The results show that all ES categories significantly increase the likelihood of product and process innovation. Most ES categories affect revenue, productivity, and market share growth positively; in particular, more domain-specific and simpler system types lead to stronger positive effects. ERP systems decrease the likelihood of profitability, whereas the other ES categories show no significant effect. The findings also imply that innovation acts as a full or partial mediator in the process of value creation from ES implementations: the direct effect of enterprise software on firm performance disappears or significantly diminishes when the indirect effects through product and process innovation are explicitly accounted for. The paper highlights future areas of research.
    Keywords: Enterprise Systems; ERP; SCM; CRM; KMS; DMS; IT Adoption; Post-implementation Phase; IT Business Value; Innovation; Firm Performance; Europe
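    The mediation logic the abstract describes (the direct ES effect shrinking once innovation is accounted for) can be sketched with a toy Baron-Kenny-style regression on synthetic data; the coefficients, sample, and simple linear form below are illustrative assumptions, not the paper's actual econometric model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Synthetic illustration (not the paper's data): ES adoption raises
# innovation, and innovation raises performance, so the ES effect on
# performance is (partly) mediated.
adoption = rng.integers(0, 2, n).astype(float)       # did the firm adopt an ES?
innovation = 0.8 * adoption + rng.normal(0, 1, n)    # path a
performance = 0.6 * innovation + 0.1 * adoption + rng.normal(0, 1, n)  # paths b, c'

def ols(y, X):
    """Least-squares coefficients with an intercept column prepended."""
    X = np.column_stack([np.ones(len(y))] + list(X))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

c_total = ols(performance, [adoption])[1]            # total effect of adoption
a_path = ols(innovation, [adoption])[1]              # adoption -> innovation
joint = ols(performance, [adoption, innovation])     # joint model
c_direct, b_path = joint[1], joint[2]

print(f"total effect     c  = {c_total:.2f}")
print(f"direct effect    c' = {c_direct:.2f}")
print(f"indirect effect a*b = {a_path * b_path:.2f}")
# The direct effect c' is much smaller than the total effect c:
# most of the ES effect flows through innovation (partial mediation).
```

    The same comparison pattern (total vs. direct effect once the mediator enters the model) is what lets the paper conclude full or partial mediation per ES category.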

    Web Data Extraction, Applications and Techniques: A Survey

    Web Data Extraction is an important problem that has been studied by means of different scientific tools and in a broad range of applications. Many approaches to extracting data from the Web have been designed to solve specific problems and operate in ad-hoc domains. Other approaches, instead, heavily reuse techniques and algorithms developed in the field of Information Extraction. This survey aims at providing a structured and comprehensive overview of the literature in the field of Web Data Extraction. We provide a simple classification framework in which existing Web Data Extraction applications are grouped into two main classes, namely applications at the Enterprise level and at the Social Web level. At the Enterprise level, Web Data Extraction techniques emerge as a key tool to perform data analysis in Business and Competitive Intelligence systems as well as for business process re-engineering. At the Social Web level, Web Data Extraction techniques make it possible to gather large amounts of structured data continuously generated and disseminated by Web 2.0, Social Media and Online Social Network users, which offers unprecedented opportunities to analyze human behavior at a very large scale. We also discuss the potential for cross-fertilization, i.e., the possibility of re-using Web Data Extraction techniques originally designed to work in a given domain in other domains.
    Comment: Knowledge-Based Systems
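    As a minimal illustration of the wrapper-style extractors such surveys cover, the stdlib sketch below hand-codes an extractor for one assumed page layout; the HTML snippet, class names, and record shape are all invented for the example, not taken from the survey.

```python
from html.parser import HTMLParser

# A toy "wrapper": a small, hand-written extractor that only works for
# one known page layout (all names below are made up for illustration).
PAGE = """
<ul>
  <li><span class="name">Widget</span> <span class="price">9.99</span></li>
  <li><span class="name">Gadget</span> <span class="price">24.50</span></li>
</ul>
"""

class RecordExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.field = None      # class of the <span> we are currently inside
        self.current = {}      # fields of the record being assembled
        self.records = []      # completed (name, price) records

    def handle_starttag(self, tag, attrs):
        if tag == "span":
            self.field = dict(attrs).get("class")

    def handle_data(self, data):
        if self.field in ("name", "price"):
            self.current[self.field] = data.strip()

    def handle_endtag(self, tag):
        if tag == "span":
            self.field = None
        elif tag == "li" and self.current:
            self.records.append((self.current["name"],
                                 float(self.current["price"])))
            self.current = {}

extractor = RecordExtractor()
extractor.feed(PAGE)
print(extractor.records)   # [('Widget', 9.99), ('Gadget', 24.5)]
```

    Wrappers like this are brittle against layout changes, which is exactly why the survey's more robust Information Extraction-based techniques matter.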

    Towards Design Principles for Data-Driven Decision Making: An Action Design Research Project in the Maritime Industry

    Data-driven decision making (DDD) refers to organizational decision-making practices that emphasize the use of data and statistical analysis instead of relying on human judgment only. Various empirical studies provide evidence for the value of DDD, both at the level of the individual decision maker and at the organizational level. Yet, the path from data to value is not always an easy one, and various organizational and psychological factors mediate and moderate the translation of data-driven insights into better decisions and, subsequently, effective business actions. The current body of academic literature on DDD lacks prescriptive knowledge on how to successfully employ DDD in complex organizational settings. Against this background, this paper reports on an action design research study aimed at designing and implementing IT artifacts for DDD at one of the largest ship engine manufacturers in the world. Our main contribution is a set of design principles highlighting, besides decision quality, the importance of model comprehensibility, domain knowledge, and actionability of results.

    Using machine learning for assessing customer suitability in business to business environment

    The industry has gone through multiple revolutions as a result of advancements in technology, and it might now be on the verge of another one because of machine learning. In a business-to-business setting, sales processes include many manual tasks. These tasks consume time that could otherwise be used for deepening relationships with existing customers or for acquiring new ones. Machine learning could be used to support and automate these tasks, and to forecast how a sales process will end. This thesis proposes a new approach for using machine learning in this environment: instead of using sales data only to predict the outcomes of active sales processes, it is used to estimate how good a potential customer would be for a company. With such estimates, a company could screen customers more effectively, potentially making its sales efforts more profitable. The thesis addresses the research questions that arise from this approach: how to provide suitability estimates without saved data, when to start using machine learning for providing the estimates, and how to maintain the performance of the selected machine learning method. An open business-to-business dataset is introduced and analyzed, and a selection of machine learning models is presented, one of which is chosen for the thesis. To answer the open questions of the thesis, code is developed and tested. The code forms a two-mode system for providing suitability estimates; its purpose was to verify the ideas useful for such a system. As results of the thesis, it can be stated that, without saved data, it is possible to offer suitability estimates by defining an optimal customer and comparing all new customers to it. The system can start using machine learning once a classifier fitted to the data gives good enough cross-validation results. Once a machine learning model is in use, its performance can be maintained by retraining the model and finding the optimal parameters each time the dataset changes. Finally, it is stressed that the suitability estimates are not absolute truths: they should not be used blindly, but as decision support when selecting new customers for a company.
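    The two-mode idea can be sketched as follows; the feature vector, the hand-defined optimal customer, and the cross-validation threshold are illustrative assumptions, not values from the thesis.

```python
import numpy as np

# Sketch of the two-mode suitability system. Features are assumed to be
# scaled to [0, 1], e.g. company size, past order volume, payment delay
# (these names are invented for illustration).
OPTIMAL = np.array([0.9, 0.8, 0.1])   # hand-defined "optimal customer"

def cold_start_score(customer: np.ndarray) -> float:
    """Mode 1: no saved sales data yet, so the suitability estimate is
    the similarity of the customer to the hand-defined optimal one."""
    return 1.0 - np.linalg.norm(customer - OPTIMAL) / np.sqrt(len(OPTIMAL))

def choose_mode(cv_score: float, threshold: float = 0.75) -> str:
    """Mode 2 (a classifier fitted to accumulated sales data) takes over
    once its cross-validation score is good enough."""
    return "classifier" if cv_score >= threshold else "cold_start"

good = np.array([0.85, 0.75, 0.15])   # close to the optimal customer
poor = np.array([0.20, 0.10, 0.90])   # far from the optimal customer
print(cold_start_score(good), cold_start_score(poor))
print(choose_mode(0.62))   # cold_start: classifier not yet trustworthy
print(choose_mode(0.81))   # classifier: CV results good enough
```

    Retraining and re-tuning the classifier whenever the dataset changes, as the abstract notes, would simply re-run the mode-2 fit and the `choose_mode` check.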

    Panel: A call for action in tackling environmental sustainability through green information technologies and systems

    In a previous paper, we found empirical evidence supporting a positive relationship between network centrality and success. However, we also found that more successful projects have a lower technical quality. A first, straightforward argument explaining these findings is that more central contributors are also highly skilled developers, well known for their ability to manage the complexity of code while paying less attention to software structure. The consolidated metrics of software quality used by the authors in their previous research represent measures of code structure. This paper provides empirical evidence supporting the idea that the negative impact of success on quality is caused by the careless behaviour of skilled developers, who are also hubs within the social network. The research hypotheses are tested on a sample of 56 OS applications from the SourceForge.net repository, with a total of 378 developers. The sample includes some of the most successful and large OS projects, as well as a cross-section of less famous active projects evenly distributed among SourceForge.net’s project categories.
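    As a minimal illustration of the kind of network-centrality measure such studies rely on, the sketch below computes developer degree centrality from a made-up project-membership table; the names and projects are invented, not from the paper's SourceForge.net sample.

```python
from collections import Counter
from itertools import combinations

# Hypothetical data: developers who work on the same project are linked
# by an edge in the collaboration network.
projects = {
    "proj_a": {"alice", "bob", "carol"},
    "proj_b": {"alice", "dave"},
    "proj_c": {"alice", "bob"},
}

# Build the distinct set of undirected edges (duplicates collapse).
edges = set()
for members in projects.values():
    edges.update(frozenset(pair) for pair in combinations(sorted(members), 2))

# Degree centrality: number of distinct collaborators per developer.
degree = Counter()
for edge in edges:
    for dev in edge:
        degree[dev] += 1

# "Hub" developers, in the paper's sense, are the high-degree nodes.
print(degree.most_common())   # alice first: she collaborates with everyone
```
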

    Agate Outcomes Analysis: Using the Foundation for SATS

    This study explored the policy implications of a specific jointly funded government-industry-academic research and development initiative on future planning. The researcher sought to uncover which trends or patterns of the National Aeronautics and Space Administration (NASA) Advanced General Aviation Transport Experiments (AGATE) had a positive effect on the outcomes of the consortium. By identifying these trends, the research may be able to help foster a more practical transition to follow-on programs. AGATE focused on developing innovative cockpit technologies that emphasized safety, affordability, and ease of use in such areas as flight systems and integrated design and manufacturing. A quantitative analytic methodological approach, encompassing qualitative data analysis software and the policy research construct, was applied to analyze organizational policy trends through the application of lessons learned from the AGATE program, with reference to the current NASA program, the Small Aircraft Transportation System (SATS). Both NASA programs consist of a similar participant pool. By examining the effects of recommendations from previous studies, this analysis illustrated the identified transitional effects. This planning framework illustrated the evolution of program and goal structure and the catalytic effect on the aviation industry and product development through increased interaction.

    Modern Data Mining for Software Engineer, A Machine Learning PaaS Review

    Using data mining methods to produce information from data has been proven to be valuable for individuals and society. The evolution of technology has made it possible to use complicated data mining methods in different applications and systems to achieve these valuable results. However, there are challenges in data-driven projects which can affect people either directly or indirectly. Vast amounts of data are collected and processed frequently to enable the functionality of many modern applications, and cloud-based platforms have been developed to aid in the development and maintenance of data-driven projects. The field of Information Technology (IT) and data-driven projects have become complex, and they require additional attention compared to standard software development. In this thesis, a literature review is conducted to study existing industry methods and practices, to define the terms used, and to describe the relevant data mining process models. We analyze the industry to identify the factors impacting the evolution of tools and platforms, and the roles of project members. Furthermore, a hands-on review of typical machine learning Platforms-as-a-Service (PaaS) is carried out with an example case, and heuristics are created to aid in choosing a machine learning platform. The results of this thesis provide knowledge and understanding for software developers and project managers who are part of data-driven projects without in-depth knowledge of data science. In this study, we found that a valid process model or methodology, precise roles, and versatile tools or platforms are necessary when developing data-driven applications, and that each of these elements affects the others in some way. We noticed that traditional data mining process models are insufficient for modern agile software development; nevertheless, they can provide valuable insights into how to handle data in the correct way. The cloud-based platforms aid in these data-driven projects by enabling the development of complicated machine learning projects without the expertise of either a data scientist or a software developer. The platforms are versatile and easy to use. However, developing functionalities and predictive models that the developer does not understand can be seen as bad practice and can cause harm in the future.

    How do Microservices Evolve? An Empirical Analysis of Changes in Open-Source Microservice Repositories

    Context. Microservice architectures are an emergent service-oriented paradigm widely used in industry to develop and deploy scalable software systems. The underlying idea is to design highly independent services that implement small units of functionality and can interact with each other through lightweight interfaces. Objective. Even though microservices are often used with success, their design and maintenance pose novel challenges to software engineers. In particular, it is questionable whether the intended independence of microservices can actually be achieved in practice. Method. It is thus important to understand how and why microservices evolve during a system’s life-cycle, for instance, to scope refactorings and improvements of a system’s architecture or to develop supporting tools. To provide insights into how microservices evolve, we report a large-scale empirical study on the (co-)evolution of microservices in 11 open-source systems, involving quantitative and qualitative analyses of 7,319 commits. Findings. Our quantitative results show that there are recurring patterns of (co-)evolution across all systems, for instance, “shotgun surgery” commits and microservices that are largely independent, evolve in tuples, or are evolved in almost all changes. We refine our results by analyzing service-evolving commits qualitatively to explore the (in-)dependence of microservices and the causes of their specific evolution. Conclusion. The contributions in this article provide practitioners and researchers with an understanding of how microservices evolve and of how microservice-based systems may be improved.
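    A minimal sketch of the co-change analysis the study describes, assuming (purely for illustration) that a commit's changed file paths identify services by their top-level directory; the commit log and service names below are invented.

```python
from collections import Counter
from itertools import combinations

# Hypothetical commit log: each commit is the list of files it changed.
commits = [
    ["orders/api.py", "orders/model.py"],
    ["orders/api.py", "billing/invoice.py"],
    ["users/auth.py"],
    ["orders/model.py", "billing/invoice.py", "shipping/track.py"],
]

def services(changed_files):
    """Assumed mapping: the top-level directory names the microservice."""
    return {path.split("/")[0] for path in changed_files}

co_change = Counter()
for commit in commits:
    touched = services(commit)
    # A commit touching more than one service is a candidate
    # "shotgun surgery" change; count each co-changed pair.
    for pair in combinations(sorted(touched), 2):
        co_change[pair] += 1

print(co_change.most_common())
# ('billing', 'orders') co-evolve most often: low mutual independence.
```

    Aggregating such pair counts over thousands of commits is one way to surface the recurring (co-)evolution patterns the study reports.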