    Grid Data Management: Open Problems and New Issues

    Initially developed for the scientific community, Grid computing is now gaining much interest in important areas such as enterprise information systems. This makes data management critical, since the techniques must scale up while addressing the autonomy, dynamicity, and heterogeneity of the data sources. In this paper, we discuss the main open problems and new issues related to Grid data management. We first recall the main principles behind data management in distributed systems and the basic techniques. Then we specify the requirements for Grid data management. Finally, we introduce the main techniques needed to address these requirements. This implies revisiting distributed database techniques in major ways, in particular by using P2P techniques.

    A differentiated taxonomy proposal for database concurrency control mechanisms in wireless environments

    Master's dissertation - Universidade Federal de Santa Catarina, Centro Tecnológico, Programa de Pós-Graduação em Ciência da Computação. The fundamental goal of concurrency control in databases is to ensure that the concurrent execution of transactions does not compromise database consistency; in other words, transaction isolation must be guaranteed. For mobile databases, the concurrency control mechanisms applied in traditional, or even distributed, databases do not satisfy the constraints imposed by the mobile computing environment, such as the mobility of the units, frequent network disconnections, low bandwidth, and portability. On this basis, this work surveys the main mobile transaction models in the literature, describing their architectures, processing modes, and the types of transactions they use, and comparing how each model supports the ACID properties (Atomicity, Consistency, Isolation, and Durability). With this information, the concurrency control mechanisms used in each model are analyzed. Based on the needs of the transaction models investigated in the literature, the proposed taxonomy is distinguished by a hybrid approach, in which mobile transaction models can achieve better performance by operating in pessimistic mode when connected to the database and in optimistic mode when disconnected.
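
    As a rough illustration of the hybrid approach proposed above, the sketch below (Python; all names are hypothetical, not taken from the dissertation) locks a record pessimistically while the mobile unit is connected and falls back to optimistic version validation while it is disconnected.

        import threading

        class HybridRecord:
            """One shared record; version counts committed writes."""
            def __init__(self, value):
                self.value, self.version = value, 0
                self.lock = threading.Lock()

        class Transaction:
            def __init__(self, record, connected):
                self.record, self.connected = record, connected

            def read(self):
                if self.connected:
                    self.record.lock.acquire()  # pessimistic: hold lock until commit
                self.read_version = self.record.version
                return self.record.value

            def commit(self, new_value):
                if self.connected:
                    self.record.value = new_value
                    self.record.version += 1
                    self.record.lock.release()
                    return True
                # Optimistic: validate at reconnection time; abort on conflict.
                with self.record.lock:
                    if self.record.version != self.read_version:
                        return False  # another transaction committed in between
                    self.record.value = new_value
                    self.record.version += 1
                    return True

        rec = HybridRecord(0)
        t = Transaction(rec, connected=False)
        v = t.read()
        rec.version += 1          # simulate a conflicting commit while offline
        print(t.commit(v + 1))    # -> False: the transaction must restart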

    A Knowledge Development Perspective on Literature Reviews: Validation of a new Typology in the IS Field

    Literature reviews (LRs) play an important role in developing domain knowledge in all fields. Yet, there is insufficient insight into the activities through which LRs actually develop knowledge. To address this important gap, we 1) derive knowledge-building activities from the extant literature on LRs, 2) suggest a knowledge-based LR typology that complements existing typologies, and 3) apply the typology in an empirical study that explores how LRs with different goals and methodologies have contributed to knowledge development. In analyzing 240 LRs published in 40 renowned information systems (IS) journals between 2000 and 2014, we draw a detailed picture of the knowledge development that one of the most important genres in the IS field has achieved. With this work, we help to unify extant LR conceptualizations by clarifying and illustrating how they apply different methodologies in a range of knowledge-building activities to achieve their goals with respect to theory.

    A distributed deadlock detection and resolution algorithm using agents

    Deadlock is an intrinsic bottleneck in Distributed Real-Time Database Systems (DRTDBS). Deadlock detection and resolution algorithms are important because deadlocked transactions in DRTDBS are prone to missing their deadlines. We propose an Agent Deadlock Detection and Resolution algorithm (ADCombine), a novel framework for distributed deadlock handling using stationary agents, to address the high overhead suffered by current agent-based algorithms. We test a combined deadlock detection and resolution algorithm that enables the Multi Agent System to adjust its execution to the changing system load and to select its victim transactions more judiciously. We demonstrate the advantages of ADCombine over existing algorithms that use agents or traditional edge-chasing through simulation experiments that measure overhead and performance under widely varying experimental conditions. Keywords: deadlock, distributed real-time database systems (DRTDBS), algorithms, multi-agent systems.
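
    For readers unfamiliar with the underlying mechanism, the following sketch shows the classic wait-for-graph cycle detection that agent-based and edge-chasing detectors build on (an illustrative stand-in in Python, not the ADCombine algorithm itself; all names are hypothetical).

        def find_deadlock(wait_for):
            """wait_for maps a transaction id to the ids it waits on.
            Returns one deadlocked cycle as a list, or None."""
            WHITE, GREY, BLACK = 0, 1, 2
            color = {t: WHITE for t in wait_for}
            stack = []

            def dfs(t):
                color[t] = GREY
                stack.append(t)
                for u in wait_for.get(t, ()):
                    if color.get(u, WHITE) == GREY:   # back edge: cycle found
                        return stack[stack.index(u):]
                    if color.get(u, WHITE) == WHITE:
                        cycle = dfs(u)
                        if cycle:
                            return cycle
                stack.pop()
                color[t] = BLACK
                return None

            for t in list(wait_for):
                if color[t] == WHITE:
                    cycle = dfs(t)
                    if cycle:
                        return cycle
            return None

        # T1 waits on T2, T2 on T3, T3 on T1 -> a deadlock cycle.
        print(find_deadlock({"T1": {"T2"}, "T2": {"T3"}, "T3": {"T1"}}))

    A resolver would then abort one transaction in the returned cycle as the victim, for instance the one least likely to meet its deadline anyway.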

    Fault-tolerant and transactional mobile agent execution

    Mobile agents constitute a computing paradigm of a more general nature than the widely used client/server computing paradigm. A mobile agent is essentially a computer program that acts autonomously on behalf of a user and travels through a network of heterogeneous machines. However, the greater flexibility of the mobile agent paradigm compared to the client/server computing paradigm comes at additional costs. These costs include, among others, the additional complexity of developing and managing mobile agent-based applications. This additional complexity comprises such issues as reliability. Before mobile agent technology can appear at the core of tomorrow's business applications, reliability mechanisms for mobile agents must be established. In this context, fault tolerance and transaction support are mechanisms of considerable importance. Various approaches to fault tolerance and transaction support exist. They have different strengths and weaknesses, and address different environments. Because of this variety, it is often difficult for the application programmer to choose the approach best suited to an application. This thesis introduces a classification of current approaches to fault-tolerant and transactional mobile agent execution. The classification, which focuses on algorithmic aspects, aims at structuring the field of fault-tolerant and transactional mobile agent execution and facilitates an understanding of the properties and weaknesses of particular approaches. In a distributed system, any software or hardware component may be subject to failures. A single failing component (e.g., agent or machine) may prevent the agent from proceeding with its execution. Worse yet, the current state of the agent and even its code may be lost. We say that the agent execution is blocked. For the agent owner, i.e., the person or application that has configured the agent, the agent does not return. To achieve fault-tolerance, the agent owner can try to detect the failure of the agent, and upon such an event launch a new agent. However, this requires the ability to correctly detect the crash of the agent, i.e., to distinguish between a failed agent and an agent that is delayed by slow processors or slow communication links. Unfortunately, this cannot be achieved in systems such as the Internet. An agent owner who tries to detect the failure of the agent thus cannot prevent the case in which the agent is mistakenly assumed to have crashed. In this case, launching a new agent leads to multiple executions of the agent, i.e., to the violation of the desired exactly-once property of agent execution. Although this may be acceptable for certain applications (e.g., applications whose operations do not have side-effects), others clearly forbid it. In this context, launching a new agent is a form of replication. In general, replication prevents blocking, but may lead to multiple executions of the agent, i.e., to a violation of the exactly-once execution property. This thesis presents an approach that ensures the exactly-once execution property using a simple principle: the mobile agent execution is modeled as a sequence of agreement problems. This model leads to an approach based on two well-known building blocks: consensus and reliable broadcast. We validate this approach with the implementation of FATOMAS, a Java-based FAult-TOlerant Mobile Agent System, and measure its overhead. Transactional mobile agents execute the mobile agent as a transaction. 
Assume, for instance, an agent whose task is to buy an airline ticket, book a hotel room, and rent a car at the flight destination. The agent owner naturally wants all three operations to succeed or none at all. Clearly, the rental car at the destination is of no use if no flight to the destination is available. On the other hand, the airline ticket may be useless if no rental car is available. The mobile agent's operations thus need to execute atomically, i.e., either all of them or none at all. Execution atomicity also needs to be ensured in the event of failures of hardware or software components. The approach presented in this thesis is non-blocking. A non-blocking transactional mobile agent execution has the important advantage that it can make progress despite failures. In a blocking transactional mobile agent execution, by contrast, progress is only possible once the failed component has recovered. Until then, the acquired locks generally cannot be freed. As no other transactional mobile agent can acquire these locks, overall system throughput is dramatically reduced. The present approach reuses the work on fault-tolerant mobile agent execution to prevent blocking. We have implemented the proposed approach and present the evaluation results.
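
    The flight/hotel/car example can be made concrete with a toy all-or-nothing executor (a minimal sketch with hypothetical names; the thesis itself builds atomicity on consensus and reliable broadcast rather than on this simple compensation scheme).

        def run_atomically(steps):
            """steps: list of (do, undo) callables. True only if all succeed."""
            done = []
            try:
                for do, undo in steps:
                    do()
                    done.append(undo)
            except Exception:
                for undo in reversed(done):  # compensate in reverse order
                    undo()
                return False
            return True

        booked = []

        def rent_car():
            raise RuntimeError("no rental car available")  # simulated failure

        steps = [
            (lambda: booked.append("flight"), lambda: booked.remove("flight")),
            (lambda: booked.append("hotel"),  lambda: booked.remove("hotel")),
            (rent_car,                        lambda: None),
        ]

        print(run_atomically(steps), booked)  # -> False [] (all bookings undone)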

    A conceptual framework and a risk management approach for interoperability between geospatial datacubes

    Today, we observe wide use of geospatial databases implemented in many forms (e.g., transactional centralized systems, distributed databases, multidimensional datacubes). Among these, the multidimensional datacube is the most appropriate for supporting interactive analysis and guiding an organization's strategic decisions, especially when different epochs and levels of information granularity are involved. However, one may need to use several geospatial multidimensional datacubes, which may be semantically heterogeneous and have different degrees of appropriateness to the context of use. Overcoming the semantic problems related to semantic heterogeneity and to differences in appropriateness to the context of use, in a manner that is transparent to users, has been the principal aim of interoperability for the last fifteen years. However, in spite of successful initiatives, today's solutions have evolved in a non-systematic way. Moreover, no solution has been found to address the specific semantic problems of interoperability between geospatial datacubes. In this thesis, we suppose that it is possible to define an approach that addresses these semantic problems to support interoperability between geospatial datacubes. To that end, we first describe interoperability between geospatial datacubes. Then, we define and categorize the semantic heterogeneity problems that may occur during the interoperability process of different geospatial datacubes. In order to resolve semantic heterogeneity between geospatial datacubes, we propose a conceptual framework that is essentially based on human communication. In this framework, software agents representing the geospatial datacubes involved in the interoperability process communicate with each other. Such communication aims at exchanging information about the content of the geospatial datacubes. Then, in order to help the agents make appropriate decisions during the interoperability process, we evaluate a set of indicators of the external quality (fitness-for-use) of geospatial datacube schemas and of the production context (e.g., metadata). Finally, we implement the proposed approach to show its feasibility.
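
    As a loose illustration of how fitness-for-use indicators could guide an agent's decisions, the sketch below aggregates a few quality indicators into a weighted score (the indicator names and weights are invented for illustration and are not the thesis's actual set).

        def fitness_for_use(indicators, weights):
            """indicators: name -> score in [0, 1]; weights: name -> importance."""
            total = sum(weights.values())
            return sum(indicators[k] * w for k, w in weights.items()) / total

        datacube = {"schema_completeness": 0.9,
                    "metadata_quality": 0.6,
                    "temporal_coverage": 0.8}
        weights  = {"schema_completeness": 3,
                    "metadata_quality": 1,
                    "temporal_coverage": 2}
        print(round(fitness_for_use(datacube, weights), 2))  # -> 0.82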

    Offshore Outsourcing of the United States Semiconductor Manufacturing: Management Approaches and Strategies

    United States manufacturing employment decreased 33% from 1985 to 2014. Over the same period, United States semiconductor manufacturing, which accounts for 1.7% of the total United States manufacturing workforce, lost 35% of its employees. The decline in semiconductor manufacturing jobs began in 1985, when semiconductor firms began offshoring product manufacturing because of the lower cost of qualified labor and facilities overseas. This qualitative case study explored the analytical approaches and strategies that business leaders of semiconductor firms that offshore manufacturing use to make informed strategic outsourcing and offshoring decisions conducive to sustainable and profitable operations. Location theory provided the conceptual framework for the study. Semistructured interviews were conducted via video conferencing with 5 midlevel managers who had conducted or were currently conducting offshoring of semiconductor manufacturing. Ten themes were identified and analyzed from the interview transcripts: manufacturing cost, onshore manufacturing, offshoring site selection, competitive cost analysis, offshoring advantages, offshoring disadvantages, national manufacturing program, offshoring, reshoring, and social impact. The findings showed that offshoring of semiconductor product manufacturing will continue because of the lower cost of operation. Social change could ensue if the leaders of firms, together with educational institutions and lawmakers, establish a national program to build industry-specific knowledge and skills in the United States.

    Reliable coordination of data services based on active policies

    We propose an approach for adding non-functional properties (exception handling, atomicity, security, persistence) to service coordinations. The approach is based on an Active Policy Model (AP Model) that represents a service coordination with non-functional properties as a collection of types. In our model, a service coordination is represented as a workflow composed of an ordered set of activities, each activity in charge of implementing a call to a service operation. We use the Activity type for representing a workflow and its components (i.e., the workflow's activities and the order among them). A non-functional property is represented as one or several Active Policy types, each policy composed of a set of event-condition-action rules in charge of implementing an aspect of the property. Instances of active policy and activity types are considered in the model as entities that can be executed. We use the Execution Unit type for representing them as entities that go through a series of states at runtime. When an active policy is associated with one or several execution units, its rules verify whether each unit respects the implemented non-functional property by evaluating their conditions over the unit's execution state; when the property is not verified, the rules execute their actions to enforce the property at runtime. We also propose a proof-of-concept Active Policy Execution Engine for executing an active-policy-oriented workflow modelled using our AP Model. The engine implements an execution model that determines how AP, Rule, and Activity instances interact with each other to add non-functional properties (NFPs) to a workflow at execution time. We validated the AP Model and the Active Policy Execution Engine by defining active policy types addressing exception handling, atomicity, state management, persistence, and authentication. These active policy types were used to implement reliable service-oriented applications and mashups that integrate data from services.
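
    A minimal event-condition-action sketch of the idea (Python, with hypothetical names; not the AP Model's actual implementation): a policy's rules watch an execution unit's state and fire actions that enforce a non-functional property.

        class ExecutionUnit:
            def __init__(self, name):
                self.name = name
                self.state = "initial"

        class Rule:
            def __init__(self, event, condition, action):
                self.event, self.condition, self.action = event, condition, action

        class ActivePolicy:
            def __init__(self, rules):
                self.rules = rules

            def notify(self, event, unit):
                # Fire every rule whose event matches and whose condition holds.
                for r in self.rules:
                    if r.event == event and r.condition(unit):
                        r.action(unit)

        # Toy stand-in for an exception-handling policy: retry a failed activity.
        def retry(unit):
            print(f"retrying {unit.name}")
            unit.state = "running"

        policy = ActivePolicy([Rule("activity_failed",
                                    lambda u: u.state == "failed",
                                    retry)])
        unit = ExecutionUnit("call_payment_service")
        unit.state = "failed"
        policy.notify("activity_failed", unit)  # -> retrying call_payment_service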

    Strategies Information Technology Outsourcing Managers Use to Improve Business Performance and Employee Retention

    Voluntary employee turnover can result in adverse business outcomes. Information technology (IT) managers are concerned with voluntary turnover, which is the number one cause of actual turnover. Grounded in transaction cost theory, the purpose of this qualitative multiple case study was to explore the strategies IT managers use to improve business performance and employee retention. The participants comprised four IT managers from two firms in Arizona who effectively used strategies to improve business performance and employee retention. Data sources included semistructured interviews, a review of archival company documents, and field notes. The data were thematically coded and analyzed, and four themes emerged: communication and relationships, labor costs, organizational learning and culture, and vendor management. A key recommendation is for IT managers to build effective communication and relationships with vendors to achieve efficient labor costs in information technology outsourcing (ITO) contracts. The implications for positive social change include the potential for IT managers to create job opportunities, maintain socioeconomic stability for local citizens, provide social amenities and welfare, and promote regional communities' economic development across the globe.