
    ERP implementation methodologies and frameworks: a literature review

    Enterprise Resource Planning (ERP) implementation is a complex and dynamic process that combines technological and organizational interactions. An ERP implementation project is often the single largest IT project an organization has ever launched, and it requires a mutual fit between system and organization. Moreover, an ERP implementation that supports business processes across many different departments is not a generic, rigid and uniform undertaking; it depends on a variety of factors. As a result, the ERP implementation process has been a major concern in industry, and it receives attention from both practitioners and scholars: the business and academic literature is abundant, but not always conclusive or coherent. However, research on ERP systems has so far focused mainly on diffusion, use and impact issues. Less attention has been given to the methods used during the configuration and implementation of ERP systems; although they are commonly used in practice, they remain largely unexplored and undocumented in Information Systems research. The academic relevance of this research is therefore its contribution to the existing body of scientific knowledge. An annotated brief literature review is carried out in order to evaluate the current state of the academic literature. The purpose is to present a systematic overview of relevant ERP implementation methodologies and frameworks, working towards a better taxonomy of ERP implementation methodologies. This paper is useful to researchers interested in ERP implementation methodologies and frameworks, and the results will serve as input for a classification of the existing methodologies and frameworks. The paper also addresses the professional ERP community involved in the process of ERP implementation by promoting a better understanding of ERP implementation methodologies and frameworks, their variety and their history

    Extended use of SOA and Cloud Computing Security Gateway Protocol for Big Data analytic sessions

    The advent of Cloud computing and Big Data has introduced a paradigm shift in the area of Information Technology. Cloud security is lagging behind the evolution of Cloud computing, and this lag requires further research. The adoption of the Cloud by businesses results in the use of VPNs and SANs. In this new paradigm, computing is conducted in the Cloud rather than onsite. This calls for a security protocol, since the processing of Big Data is simultaneously massive and vulnerable. The utilization of Cloud and Big Data has introduced gaps in terms of standard business processes as well as data security, while the data is being processed using the MapReduce model. The lag in open source security is the problem area dealt with in this doctoral thesis, which provides a security gateway protocol that any organization or entity can tailor according to its environmental constraints. All of the major software solution providers, such as Microsoft, Oracle and SAP, have their own closed source security protocols available for organizations to use. There are also several versions of open source Kerberos available for securing Big Data processing within the Cloud using commodity hardware. Another well-known open source security gateway is the Apache Knox Gateway; however, it only provides a single access point for REST interactions with Hadoop clusters. There has been a need for an open source security protocol that organizations can customize according to their needs and that is not bound to REST interactions alone. The research presented in this thesis provides such an open source solution for the industry. The provided security gateway makes extended use of SOA by adapting the Achievable Service Oriented Architecture (ASOA).
    Since the use of Information Technology has been significantly altered by the emergence of the Cloud, organizations using legacy technology also need to transition from their current business processing to secure processing of Big Data within the Cloud. This thesis builds a security gateway protocol upon SOA, using ASOA as the base methodology. It also introduces a Master Observer Service (MOS), which uses a Messaging Secured Service (MSS) as an added capability to strengthen secure data availability for MapReduce processing of Big Data. The thesis presents an actual implementation using Business Process Re-engineering (BPR). The tailored implementation of the Security Gateway Protocol has been deployed in one of the Fortune 100 financial institutions using the Master Observer Service, allowing the institution to process its Big Data with MapReduce in secured sessions in the Cloud
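Since the abstract's security argument centres on MapReduce sessions, a minimal, self-contained sketch of the MapReduce model itself may help readers unfamiliar with it. This word-count example is purely illustrative and is not the thesis's implementation:

```python
from collections import defaultdict

def map_phase(records, mapper):
    """Apply the mapper to every record, emitting (key, value) pairs."""
    for record in records:
        yield from mapper(record)

def shuffle(pairs):
    """Group intermediate values by key, as the framework does between phases."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups, reducer):
    """Apply the reducer to each key's grouped values."""
    return {key: reducer(key, values) for key, values in groups.items()}

# Classic word count: the mapper emits (word, 1); the reducer sums the counts.
def word_mapper(line):
    for word in line.split():
        yield word.lower(), 1

def count_reducer(key, values):
    return sum(values)

lines = ["big data in the cloud", "big data needs security"]
counts = reduce_phase(shuffle(map_phase(lines, word_mapper)), count_reducer)
print(counts["big"])  # 2
```

In a real cluster, the map and reduce phases run in parallel over distributed splits, which is exactly why the thesis argues the sessions carrying that traffic need a security gateway.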

    Checking global usage of resources handled with local policies

    We present a methodology to reason about resource usage (acquisition, release, revision, and so on) and, in particular, to predict bad usage of resources. Keeping in mind the interplay between local and global information that occurs in application-resource interactions, we model resources as entities with local policies and we study global properties that govern overall interactions. Formally, our model is an extension of the π-calculus with primitives to manage resources. To predict possible bad usage of resources, we develop a Control Flow Analysis that computes a static over-approximation of process behaviour
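As an illustration of the kind of static over-approximation the abstract describes, the following toy analysis (an assumption-laden sketch, not the paper's π-calculus-based Control Flow Analysis) explores every branch of a process description and flags possible violations of a local policy such as "acquire before use, never use after release":

```python
# A process is a list of (action, resource) steps; a ("choice", b1, b2, ...)
# step represents branching continuations. Because a static analysis cannot
# know which branch runs, it over-approximates by exploring all of them.

def analyse(process, held=frozenset(), released=frozenset(), errors=None):
    """Walk every syntactic path, collecting possible policy violations."""
    if errors is None:
        errors = []
    for step in process:
        if step[0] == "choice":
            # Over-approximate: analyse every branch from the current state.
            for branch in step[1:]:
                analyse(branch, held, released, errors)
            return errors
        action, res = step
        if action == "acquire":
            held, released = held | {res}, released - {res}
        elif action == "release":
            if res not in held:
                errors.append(f"release of unacquired {res}")
            held, released = held - {res}, released | {res}
        elif action == "use":
            if res in released:
                errors.append(f"use of {res} after release")
            elif res not in held:
                errors.append(f"use of unacquired {res}")
    return errors

good = [("acquire", "r"), ("use", "r"), ("release", "r")]
bad = [("acquire", "r"), ("release", "r"),
       ("choice", [("use", "r")], [("acquire", "r"), ("use", "r")])]
print(analyse(good))  # []
print(analyse(bad))   # ['use of r after release']
```

Note the characteristic trade-off of over-approximation: every real misuse on some path is reported, but a reported error may lie on a path that never executes.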

    Proactive cybersecurity tailoring through deception techniques

    Dissertation for obtaining the degree of Master in Informatics and Computer Engineering. A proactive approach to cybersecurity can supplement a reactive posture by helping businesses to handle security incidents in the early phases of an attack. Organizations can actively protect against the inherent asymmetry of cyber warfare by using proactive techniques such as cyber deception. Cyber deception is the intentional deployment of misleading artifacts to construct an infrastructure that allows real-time investigation of an attacker's patterns and approaches without compromising the organization's principal network. This method can reveal previously undiscovered vulnerabilities, referred to as zero-day vulnerabilities, without interfering with routine corporate activities. Furthermore, it enables enterprises to collect vital information about the attacker that would otherwise be difficult to access. However, putting such concepts into practice in real-world circumstances involves major problems. This study proposes an architecture for a deceptive system, culminating in an implementation that deploys and dynamically customizes a deception grid using Software-Defined Networking (SDN) and network virtualization techniques. The deception grid is a network of virtual assets with a topology and specifications that are pre-planned to coincide with a deception strategy. The system can trace and evaluate the attacker's activity by continuously monitoring the artifacts within the deception grid. Real-time refinement of the deception plan may necessitate changes to the grid's topology and artifacts, which can be assisted by software-defined networking's dynamic modification capabilities.
    Organizations can maximize their deception capabilities by merging these processes with advanced cyber-attack detection and classification components. The effectiveness of the given solution is assessed using numerous use cases that demonstrate its utility
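To make the idea of real-time grid refinement concrete, here is a hypothetical sketch (the class, method, and decoy names are invented and are not the dissertation's implementation) of a deception grid that records attacker interactions and spawns deeper decoys behind probed ones to prolong engagement:

```python
class DeceptionGrid:
    def __init__(self):
        self.decoys = {}          # decoy id -> service it emulates
        self.links = set()        # undirected links between decoys
        self.observations = []    # attacker interactions, kept for analysis

    def deploy(self, decoy, service):
        """Stand up a virtual decoy emulating the given service."""
        self.decoys[decoy] = service

    def connect(self, a, b):
        """Add a virtual network link between two decoys."""
        self.links.add(frozenset((a, b)))

    def observe(self, decoy, action):
        """Record attacker activity and refine the topology: when a decoy
        is probed, spawn a deeper decoy of the same service behind it."""
        self.observations.append((decoy, action))
        probes = sum(1 for d, _ in self.observations if d == decoy)
        deeper = f"{decoy}-deep{probes}"
        self.deploy(deeper, self.decoys[decoy])
        self.connect(decoy, deeper)
        return deeper

grid = DeceptionGrid()
grid.deploy("web-decoy", "http")
new = grid.observe("web-decoy", "port-scan")
print(new, len(grid.decoys))  # web-decoy-deep1 2
```

In an actual SDN deployment, `deploy` and `connect` would translate into controller calls that instantiate virtual machines and rewrite flow rules, which is what makes this kind of on-the-fly refinement feasible.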

    Partitioning workflow applications over federated clouds to meet non-functional requirements

    PhD Thesis. With cloud computing, users can acquire computer resources when they need them on a pay-as-you-go business model. Because of this, many applications are now being deployed in the cloud, and there are many different cloud providers worldwide. Importantly, all these infrastructure providers offer services with different levels of quality. For example, cloud data centres are governed by the privacy and security policies of the country where the centre is located, while many organisations have created their own internal "private cloud" to meet security needs. With all these varieties and uncertainties, application developers who decide to host their system in the cloud face the issue of which cloud to choose to get the best operational conditions in terms of price, reliability and security. The decision becomes even more complicated if their application consists of a number of distributed components, each with slightly different requirements. Rather than trying to identify the single best cloud for an application, this thesis considers an alternative approach: combining different clouds to meet users' non-functional requirements. Cloud federation offers the ability to distribute a single application across two or more clouds, so that the application can benefit from the advantages of each of them. The key challenge for this approach is how to find the distribution (or deployment) of application components that yields the greatest benefits. In this thesis, we tackle this problem and propose a set of algorithms, and a framework, to partition a workflow-based application over federated clouds in order to exploit the strengths of each cloud. The specific goal is to split a distributed application structured as a workflow such that the security and reliability requirements of each component are met, whilst the overall cost of execution is minimised.
    To achieve this, we propose and evaluate a cloud broker for partitioning a workflow application over federated clouds. The broker integrates with the e-Science Central cloud platform to automatically deploy a workflow over public and private clouds. We developed a deployment planning algorithm to partition a large workflow application across federated clouds so as to meet security requirements and minimise the monetary cost. A more generic framework is then proposed to model, quantify and guide the partitioning and deployment of workflows over federated clouds. This framework considers the situation where changes in cloud availability (including cloud failure) arise during workflow execution
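The partitioning goal described above (meet each component's security requirement while minimising monetary cost) can be pictured with a deliberately simple greedy sketch. The cloud names, security levels, and prices below are hypothetical, and the thesis's actual planning algorithm is more general, handling reliability and availability changes as well:

```python
def partition(components, clouds):
    """components: {name: required_security_level};
    clouds: {name: (security_level, cost_per_component)}.
    Assign each component to the cheapest cloud that meets its security
    requirement; raise if some requirement cannot be met at all."""
    plan, total = {}, 0.0
    for comp, required in components.items():
        feasible = [(cost, cloud) for cloud, (sec, cost) in clouds.items()
                    if sec >= required]
        if not feasible:
            raise ValueError(f"no cloud meets security level {required} for {comp}")
        cost, cloud = min(feasible)   # cheapest feasible cloud
        plan[comp] = cloud
        total += cost
    return plan, total

# Hypothetical federation: higher security level costs more per component.
clouds = {"public-a": (1, 0.10), "public-b": (2, 0.25), "private": (3, 0.80)}
workflow = {"ingest": 1, "anonymise": 3, "analyse": 2}
plan, cost = partition(workflow, clouds)
print(plan["anonymise"], round(cost, 2))  # private 1.15
```

A per-component greedy choice like this ignores inter-component data-transfer costs between clouds, which is one of the complications that makes the real partitioning problem genuinely hard.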

    On the Security of Software Systems and Services

    This work investigates new methods for addressing the security issues and threats arising from the composition of software. This task has been carried out through the formal modelling of both the software composition scenarios and the security properties, i.e., policies, to be guaranteed. Our research moves across three different modalities of software composition which are of main interest for some of the most sensitive aspects of the modern information society: mobile applications, trust-based composition, and service orchestration. Mobile applications are programs designed to be deployable on remote platforms; they are the main channel for the distribution and commercialisation of software for mobile devices, e.g., smart phones and tablets. Here we study the security threats that affect the application providers and the hosting platforms. In particular, we present a programming framework for the development of applications with static and dynamic security support. We also implemented an enforcement mechanism for applying fine-grained security controls on the execution of possibly malicious applications. In addition to security, trust represents a pragmatic and intuitive way of managing the interactions among systems. Currently, trust is one of the main factors that human beings take into account when deciding whether or not to accept a transaction. In our work we investigate the possibility of defining a fully integrated environment for security policies and trust, including a runtime monitor. Finally, Service-Oriented Computing (SOC) is the leading technology for business applications distributed over a network. The security issues related to service networks are many and multi-faceted. We mainly deal with the static verification of secure composition plans of web services. Moreover, we introduce the synthesis of dynamic security checks for protecting the services against illegal invocations
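The fine-grained runtime enforcement mentioned above can be pictured as a security automaton that checks each action an untrusted application attempts and truncates execution on the first violation. The following sketch uses an invented example policy ("no network send after reading private data") and is not the enforcement framework from this work:

```python
class PolicyViolation(Exception):
    """Raised by the monitor when an attempted action violates the policy."""

class Monitor:
    def __init__(self):
        self.read_private = False   # automaton state: has private data been read?

    def check(self, action, target):
        """Update the automaton state; raise if the action is forbidden."""
        if action == "read" and target.startswith("/private/"):
            self.read_private = True
        elif action == "send" and self.read_private:
            raise PolicyViolation("network send after reading private data")

def run(trace):
    """Execute a trace of (action, target) steps under the monitor,
    stopping at the first policy violation."""
    monitor = Monitor()
    executed = []
    for action, target in trace:
        monitor.check(action, target)   # enforce before each step
        executed.append((action, target))
    return executed

ok = run([("read", "/tmp/cache"), ("send", "example.org")])
try:
    run([("read", "/private/keys"), ("send", "example.org")])
except PolicyViolation as e:
    print("blocked:", e)
```

Truncation on violation is the simplest enforcement strategy; richer monitors can also suppress or replace individual actions rather than halt the whole execution.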

    24th International Conference on Information Modelling and Knowledge Bases

    In the last three decades, information modelling and knowledge bases have become essential subjects, not only in academic communities related to information systems and computer science but also in the business area where information technology is applied. The series of European-Japanese Conferences on Information Modelling and Knowledge Bases (EJC) originally started as a co-operation initiative between Japan and Finland in 1982. The practical operations were then organised by Professor Ohsuga in Japan and Professors Hannu Kangassalo and Hannu Jaakkola in Finland (Nordic countries). The geographical scope has since expanded to cover Europe and other countries as well. The workshop character (discussion, ample time for presentations, and a limited number of participants (50) and papers (30)) is typical of the conference. Suggested topics include, but are not limited to: 1. Conceptual modelling: Modelling and specification languages; Domain-specific conceptual modelling; Concepts, concept theories and ontologies; Conceptual modelling of large and heterogeneous systems; Conceptual modelling of spatial, temporal and biological data; Methods for developing, validating and communicating conceptual models. 2. Knowledge and information modelling and discovery: Knowledge discovery, knowledge representation and knowledge management; Advanced data mining and analysis methods; Conceptions of knowledge and information; Modelling information requirements; Intelligent information systems; Information recognition and information modelling. 3. Linguistic modelling: Models of HCI; Information delivery to users; Intelligent informal querying; Linguistic foundation of information and knowledge; Fuzzy linguistic models; Philosophical and linguistic foundations of conceptual models. 4.
    Cross-cultural communication and social computing: Cross-cultural support systems; Integration, evolution and migration of systems; Collaborative societies; Multicultural web-based software systems; Intercultural collaboration and support systems; Social computing, behavioral modeling and prediction. 5. Environmental modelling and engineering: Environmental information systems (architecture); Spatial, temporal and observational information systems; Large-scale environmental systems; Collaborative knowledge base systems; Agent concepts and conceptualisation; Hazard prediction, prevention and steering systems. 6. Multimedia data modelling and systems: Modelling multimedia information and knowledge; Content-based multimedia data management; Content-based multimedia retrieval; Privacy and context enhancing technologies; Semantics and pragmatics of multimedia data; Metadata for multimedia information systems. Overall, we received 56 submissions. After careful evaluation, 16 papers were selected as long papers, 17 as short papers, 5 as position papers, and 3 for presentation of perspective challenges. We thank all colleagues for their support of this issue of the EJC conference, especially the program committee, the organising committee, and the programme coordination team. The long and short papers presented at the conference are revised after the conference and published in the series "Frontiers in Artificial Intelligence" by IOS Press (Amsterdam). The books "Information Modelling and Knowledge Bases" are edited by the Editing Committee of the conference. We believe that the conference will be productive and fruitful in the advance of research and application of information modelling and knowledge bases. Bernhard Thalheim, Hannu Jaakkola, Yasushi Kiyok

    A Computational Architecture Based on RFID Sensors for Traceability in Smart Cities

    Information and Communication Technologies (ICT) are presented as the main element for achieving more efficient and sustainable city resource management, while ensuring that citizens' needs to improve their quality of life are satisfied. A key element will be the creation of new systems that allow the acquisition of context information, automatically and transparently, in order to provide it to decision support systems. In this paper, we present a novel distributed system for obtaining, representing and providing the flow and movement of people in densely populated geographical areas. To accomplish these tasks, we propose the design of a smart sensor network based on RFID communication technologies, reliability patterns and integration techniques. Contrary to other proposals, this system represents a comprehensive solution that permits the acquisition of user information in a transparent and reliable way in a non-controlled and heterogeneous environment. This knowledge will be useful in moving towards the design of smart cities in which decision support on transport strategies, business evaluation or initiatives in the tourism sector will be supported by real, relevant information. As a final result, a case study is presented which allows the validation of the proposal
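As a rough illustration of how raw RFID reads could be turned into the people-flow information the paper targets, the following sketch (tag, reader, and zone names are hypothetical, not the paper's architecture) orders each tag's reads by time and counts zone-to-zone transitions:

```python
from collections import defaultdict

def flows(reads):
    """reads: iterable of (tag_id, zone, timestamp) tuples from RFID readers.
    Returns a dict counting observed zone-to-zone transitions across all tags."""
    by_tag = defaultdict(list)
    for tag, zone, ts in reads:
        by_tag[tag].append((ts, zone))
    transitions = defaultdict(int)
    for visits in by_tag.values():
        visits.sort()                                   # order each tag's reads by time
        for (_, a), (_, b) in zip(visits, visits[1:]):  # consecutive zone pairs
            if a != b:
                transitions[(a, b)] += 1
    return dict(transitions)

# Two tags move from the station to the market; one continues to the museum.
reads = [("tag1", "station", 0), ("tag1", "market", 5),
         ("tag2", "station", 1), ("tag2", "market", 4),
         ("tag1", "museum", 9)]
print(flows(reads)[("station", "market")])  # 2
```

Aggregated transition counts of this kind, rather than individual trajectories, are also what a decision-support system would typically consume, which helps with the transparency and privacy concerns raised in the abstract.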

    Security Enhanced Applications for Information Systems

    Every day, more users access services and electronically transmit information that is usually disseminated over insecure networks and processed by websites and databases which lack proper security protection mechanisms and tools. This may have an impact on both the users' trust and the reputation of the system's stakeholders. Designing and implementing security enhanced systems is therefore of vital importance. This book aims to present a number of innovative security enhanced applications. It is titled "Security Enhanced Applications for Information Systems" and includes 11 chapters. The book is a quality guide for teaching purposes as well as for young researchers, since it presents leading innovative contributions on security enhanced applications for various Information Systems. It covers cases based on standalone, network and Cloud environments