
    Goals/questions/metrics method and SAP implementation projects

    In recent years, several researchers have studied the critical success factors (CSFs) in ERP implementations. However, no one has yet studied how these CSFs should be put into practice to help organizations achieve success in ERP implementations. This technical research report defines the usage of the Goals/Questions/Metrics (GQM) approach in the definition of a measurement system for ERP implementation projects. The GQM approach is a mechanism for defining and interpreting operational, measurable goals; because of its intuitive nature, it has lately gained widespread appeal. We present an overview of metrics and a description of the GQM approach, and then provide an example of applying GQM to monitor sustained management support in ERP implementations. Sustained management support is the most cited critical success factor in ERP implementation projects.
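    The goal-question-metric hierarchy that GQM builds can be sketched as a small data structure. In the sketch below, the goal template (purpose, issue, object, viewpoint) follows the standard GQM form, but the concrete questions and metric names are hypothetical illustrations, not taken from the report:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Metric:
    name: str
    value: Optional[float] = None  # filled in once the metric is measured

@dataclass
class Question:
    text: str
    metrics: List[Metric] = field(default_factory=list)

@dataclass
class Goal:
    purpose: str      # e.g. "Monitor"
    issue: str        # the quality focus
    obj: str          # the object of measurement
    viewpoint: str    # whose perspective
    questions: List[Question] = field(default_factory=list)

# Hypothetical goal for monitoring sustained management support in an
# ERP implementation project; questions and metric names are invented.
goal = Goal(
    purpose="Monitor",
    issue="sustained management support",
    obj="ERP implementation project",
    viewpoint="project manager",
    questions=[
        Question(
            "Does top management attend project steering meetings?",
            [Metric("steering_meeting_attendance_rate", 0.8)],
        ),
        Question(
            "Are requested resources allocated on time?",
            [Metric("resource_request_lead_time_days", 4.0)],
        ),
    ],
)

def collected(g: Goal) -> List[str]:
    """Names of the metrics that already have a measured value."""
    return [m.name for q in g.questions for m in q.metrics if m.value is not None]
```

    Interpreting the measurements then proceeds top-down: each metric answers its question, and the answers together assess the goal.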

    Organizational and national issues of an ERP implementation in a Portuguese company

    This technical research report describes a case of an Enterprise Resource Planning (ERP) implementation in a Portuguese SME. We focused on identifying the organizational factors that affect the ERP implementation project. We also analyzed the project from a national cultural perspective using Geert Hofstede's dimensions, which were used to explain some of the attitudes and behaviours observed during the project. The findings suggest that some of the problems in ERP implementation projects are not of a technological nature but may be attributed to organizational factors, while other issues relate to national culture.

    A Framework proposal for monitoring and evaluating training in ERP implementation project

    In recent years, several researchers have studied the topic of critical success factors in ERP implementations, among which 'training' is one of the most frequently cited. To date, however, there is not enough research on the management and operationalization of critical success factors within ERP implementation projects.

    Using the partial least squares (PLS) method to establish critical success factor interdependence in ERP implementation projects

    This technical research report proposes the usage of a statistical approach named Partial Least Squares (PLS) to define the relationships between critical success factors for ERP implementation projects. In previous research work, we developed a unified model of critical success factors for ERP implementation projects. Some researchers have evidenced relationships between these critical success factors; however, no one has defined these relationships in a formal way. PLS is one of the techniques of the structural equation modeling approach, so this report presents an overview of that approach. We provide an example of PLS modelling applied to two critical success factors; the project will, however, be extended to all the critical success factors of our unified model. To compute the data, we are going to use PLS-Graph, developed by Wynne Chin.
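    To give a rough flavour of how PLS estimates a path between two latent constructs, the sketch below implements a minimal two-construct PLS algorithm (mode A outer estimation, centroid-style inner scheme) on synthetic data. The data, block structure, and convergence settings are hypothetical illustrations; the actual analyses described in the report use PLS-Graph.

```python
import random
import statistics

def standardize(xs):
    m, s = statistics.mean(xs), statistics.pstdev(xs)
    return [(x - m) / s for x in xs]

def corr(a, b):
    # Pearson correlation of two already-standardized score vectors.
    return statistics.mean(x * y for x, y in zip(a, b))

def pls_two_constructs(block_a, block_b, iters=100, tol=1e-6):
    """Path coefficient between two latent variables, each measured by a
    block of indicators (columns). Mode A outer weights, centroid inner scheme."""
    block_a = [standardize(col) for col in block_a]
    block_b = [standardize(col) for col in block_b]
    wa, wb = [1.0] * len(block_a), [1.0] * len(block_b)
    n = len(block_a[0])

    def score(weights, block):
        return standardize([sum(w * col[i] for w, col in zip(weights, block))
                            for i in range(n)])

    for _ in range(iters):
        sa, sb = score(wa, block_a), score(wb, block_b)
        # Inner approximation: each LV is proxied by its sign-weighted neighbour.
        sign = 1.0 if corr(sa, sb) >= 0 else -1.0
        new_wa = [corr(col, [sign * s for s in sb]) for col in block_a]
        new_wb = [corr(col, [sign * s for s in sa]) for col in block_b]
        converged = max(abs(x - y) for x, y in zip(new_wa + new_wb, wa + wb)) < tol
        wa, wb = new_wa, new_wb
        if converged:
            break
    return corr(score(wa, block_a), score(wb, block_b))

# Synthetic example: two correlated CSFs, three indicators each.
random.seed(1)
n = 300
lv1 = [random.gauss(0, 1) for _ in range(n)]
lv2 = [0.7 * x + random.gauss(0, 0.7) for x in lv1]
block_a = [[x + random.gauss(0, 0.5) for x in lv1] for _ in range(3)]
block_b = [[x + random.gauss(0, 0.5) for x in lv2] for _ in range(3)]
path = pls_two_constructs(block_a, block_b)
```

    With the generating correlation set to roughly 0.7, the estimated path coefficient comes out positive and sizable, which is the kind of structural relationship the unified CSF model is meant to capture.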

    Management of generic and multi-platform workflows for exploiting heterogeneous environments on e-Science

    Scientific Workflows (SWFs) are widely used to model applications in e-Science. In this programming model, scientific applications are described as a set of tasks with dependencies among them. During the last decades, the execution of scientific workflows has been successfully performed on the available computing infrastructures (supercomputers, clusters and grids) using software programs called Workflow Management Systems (WMSs), which orchestrate the workload on top of these computing infrastructures. However, because each computing infrastructure has its own architecture and each scientific application exploits one of these infrastructures most efficiently, it is necessary to organize the way in which they are executed: WMSs need to get the most out of all the available computing and storage resources. Traditionally, scientific workflow applications have been extensively deployed in high-performance computing infrastructures (such as supercomputers and clusters) and grids. In the last years, however, the advent of cloud computing has opened the door to using on-demand infrastructures to complement or even replace local infrastructures. New issues have arisen in turn, such as the integration of hybrid resources and the compromise between infrastructure reutilization and elasticity, all on the basis of cost-efficiency. The main contribution of this thesis is an ad-hoc solution for managing workflows that exploits the capabilities of cloud orchestrators to deploy resources on demand according to the workload and to combine heterogeneous cloud providers (such as on-premise clouds and public clouds) and traditional infrastructures (supercomputers and clusters) to minimize costs and response time. The thesis does not propose yet another WMS, but demonstrates the benefits of integrating cloud orchestration when running complex workflows.
    The thesis reports experiments with several configurations and multiple heterogeneous backends, using a realistic comparative genomics workflow called Orthosearch to migrate memory-intensive workload to public infrastructures while keeping other blocks of the experiment running locally. The running time and cost of the experiments are computed, and best practices are suggested.
    Carrión Collado, AA. (2017). Management of generic and multi-platform workflows for exploiting heterogeneous environments on e-Science [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/86179
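    The core idea of the thesis, a workflow as a task dependency graph whose blocks are placed on either local or on-demand cloud resources, can be sketched in a few lines. The task names, memory figures, and the 16 GB threshold below are hypothetical illustrations, not values taken from the thesis:

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Hypothetical workflow: each task maps to the set of tasks it depends on.
deps = {
    "align": set(),
    "cluster": {"align"},
    "profile": {"align"},
    "merge": {"cluster", "profile"},
}

# Illustrative per-task memory demand in GB.
mem_gb = {"align": 2, "cluster": 64, "profile": 4, "merge": 8}

def place(task, threshold_gb=16):
    """Route memory-intensive tasks to an on-demand cloud backend,
    keeping the rest on the local cluster."""
    return "cloud" if mem_gb[task] >= threshold_gb else "local"

# Dependency-respecting execution order with a backend assigned to each task.
schedule = [(t, place(t)) for t in TopologicalSorter(deps).static_order()]
```

    A real WMS would additionally ask the cloud orchestrator to deploy the "cloud" resources on demand and tear them down afterwards, which is where the cost/response-time trade-off studied in the thesis comes in.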

    Extending the synthesis of update transaction programs to handle existential rules in deductive databases

    We propose a new method for generating consistency-preserving transaction programs for (view) updates in deductive databases. The method augments the deductive database schema with a set of transition and internal events rules, which explicitly define the database's dynamic behaviour in front of a database update. At transaction-design time, a formal procedure can use these rules to automatically generate parameterised transaction programs for base- or view-update transaction requests. This is done in such a way that those transactions will never take the database into an inconsistent state. In this paper we extend a previous version of the method by incorporating existentially defined rules, and within this context we describe the synthesis outputs and processes. The method, implemented in Prolog using meta-programming techniques, draws from our previous work in deductive databases, particularly in view updating and integrity constraint checking.
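    To give the flavour of a consistency-preserving transaction, the toy sketch below (plain Python rather than the Prolog implementation described above; the relations, the constraint, and the repair action are invented for illustration) applies a requested base update together with the compensating action needed so an integrity constraint still holds:

```python
# Hypothetical integrity constraint: every employee's department must exist.
db = {
    "dept": {"sales", "it"},
    "emp": {("ana", "sales")},
}

def ic_holds(state):
    """Check the referential integrity constraint on a database state."""
    return all(d in state["dept"] for (_, d) in state["emp"])

def insert_emp(state, name, dept):
    """A synthesized transaction: the requested insertion plus the repair
    action (generated at transaction-design time) that keeps the
    constraint satisfied, applied atomically on a copy of the state."""
    new = {"dept": set(state["dept"]), "emp": set(state["emp"])}
    new["emp"].add((name, dept))
    if dept not in new["dept"]:
        new["dept"].add(dept)  # compensating update for the constraint
    assert ic_holds(new)       # the transaction never yields an inconsistent state
    return new

db2 = insert_emp(db, "bob", "hr")
```

    In the actual method these repairs are derived automatically from the transition and internal events rules rather than hand-written as here.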

    Success and Threats in the Clustering of the Automotive Industry in Spain: The Role of Public and Private Agents

    [EN] Research Question: This article clarifies the role of clusters in the efficiency of industry agglomeration, as well as the role that public and private agents play in that efficiency. Motivation: The automobile industry in Spain is an exception to the industrial decline suffered by the secondary sector since the economic crisis exploded in Spain. Employment in the vehicle manufacturing industry recovered significantly in 2017, with growth bringing it closer to 2008 levels; the sector accounts for 8.6% of the country's GNP. How can we explain this success? Are there new threats (technology, environmental standards, emerging economies, etc.) menacing the sector? Idea: Based on value chain and cluster theories, we explain the clusters' success and how new threats could be managed. The response lies in analyzing the role of cluster agents in the various cluster dynamics. The research shows how the openness of clusters plays a crucial role in their sustainability. Data: Primary data were collected in two survey and interview campaigns during 2013 and 2017. Furthermore, secondary data from national, regional and sectoral sources were analysed. Tools: The research is based on a series of interviews and visits to the automotive clusters in Spain. Additionally, the authors have analysed abundant secondary information and web contents available on the cluster agents: manufacturers, suppliers, unions, associations, etc. Findings: The paper concludes that regional and national policies are relevant but that consensus between cluster agents is essential for their success. However, will the existing agents be able to withstand the new threats? Contribution: The article contributes to the cluster literature and to the role of cluster agents in the global value chain context. It also sheds light on public policies to support automotive industries. Limitations are linked to resource constraints. This research study has been supported by various national organizations (MINETUR, CDTI, ANFAC, SERNAUTO) as well as cluster associations around Spain. Albors Garrigós, J.; Collado, A. (2019). Success and Threats in the Clustering of the Automotive Industry in Spain: The Role of Public and Private Agents. Management: Journal of Sustainable Business and Management Solutions in Emerging Economies. 24(3):1-20. https://doi.org/10.7595/management.fon.2019.0002

    Children’s restorative experiences and self-reported environmental behaviors

    Positive experiences in nature relate to children's environmental behaviors, but the reasons for this link remain unknown. One possibility is that children behave more ecologically because they obtain benefits from spending time in nature. In the present study, we looked at positive experiences in nature, specifically restoration, as a motivational factor enhancing children's proenvironmental behavior. Children (N = 832) rated their school yards in terms of restoration and reported their frequency of proenvironmental behaviors as well as their environmental attitudes. Perceived restoration predicted 37% of the variance in reported proenvironmental behavior. Moreover, this relationship was completely mediated by environmental attitudes. In addition, fascination, a component of restoration, was the only direct predictor of proenvironmental behavior. This research was carried out with the support of the Spanish Ministry of Science and Innovation (PSI 2009-13422).
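    Full mediation of the restoration-behavior link by attitudes can be illustrated with a Baron-Kenny-style regression sketch on synthetic data. The coefficients, noise levels, and sample below are invented for the illustration; the study's actual analysis and effect sizes are those reported in the paper:

```python
import random
import statistics

def center(xs):
    m = statistics.mean(xs)
    return [x - m for x in xs]

def slope(y, x):
    # Simple OLS slope of centered y on centered x.
    return sum(p * q for p, q in zip(x, y)) / sum(p * p for p in x)

def ols2(y, x1, x2):
    """OLS of centered y on centered x1 and x2: solve the 2x2 normal equations."""
    dot = lambda a, b: sum(p * q for p, q in zip(a, b))
    s11, s22, s12 = dot(x1, x1), dot(x2, x2), dot(x1, x2)
    sy1, sy2 = dot(y, x1), dot(y, x2)
    det = s11 * s22 - s12 * s12
    return (sy1 * s22 - sy2 * s12) / det, (sy2 * s11 - sy1 * s12) / det

# Synthetic full-mediation data: restoration -> attitudes -> behavior,
# with no direct restoration -> behavior path.
random.seed(7)
n = 500
restoration = [random.gauss(0, 1) for _ in range(n)]
attitudes = [0.6 * r + random.gauss(0, 0.8) for r in restoration]
behavior = [0.7 * a + random.gauss(0, 0.7) for a in attitudes]

r, a, b = center(restoration), center(attitudes), center(behavior)
total = slope(b, r)             # behavior ~ restoration (total effect)
a_path = slope(a, r)            # attitudes ~ restoration
direct, b_path = ols2(b, r, a)  # behavior ~ restoration + attitudes
indirect = a_path * b_path      # effect carried through attitudes
```

    Under full mediation the direct coefficient shrinks to about zero once attitudes enter the model, while the total effect decomposes exactly as total = direct + indirect, which is the pattern the study reports.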

    Towards a definition of SCM systems through SCOR

    In recent years, Supply Chain Management (SCM) in general, and its management information systems in particular, have gained interest among researchers. However, judging from a recent analysis of the area and from the many definitions used in the literature, there is no clear understanding of what should be considered an SCM system. Likewise, the minimal functional requirements for a system to be considered an SCM information system are not yet clear. This contrasts with the existence of SCOR, a much-publicised model used as a standard in the SCM domain. Although SCOR does not include a definition of an SCM information system and, in fact, leaves the system implementation to the will of the companies, we think it can be used to provide a better definition. Thus, in the present work we intend to offer a tentative definition of SCM systems based on SCOR.