
    A Component-based Framework for Distributed Business Simulations in E-Business Environments

    Simulations preserve the knowledge embodied in complex dynamic systems and transfer the knowledge of the interdependencies among their elements to a specified target group. As progress in information technology lets the dynamic, e-business-driven economy adapt ever faster to business demands, new ways to preserve this growing body of knowledge have to be found. This paper presents an extensible business simulation framework realized as a component-based, distributed Java 2 Enterprise Edition (J2EE) architecture. The framework aims to offer an extensible, domain-independent simulation environment that secures the return on investment: the framework is implemented once and then extended to meet the future requirements of diverse e-business domains. The system architecture supports distributed deployment of its components at a highly standardized level while remaining vendor independent. The architecture itself was developed with software engineering methods conforming to model-driven architecture (MDA), using best-of-breed design patterns composed into a flexible micro-architecture that provides import facilities for simulation entities (business objects) and (business) processes from e-business solutions. Combining the features of the framework, the layered, pattern-driven micro-architecture, and the distributed J2EE architecture, the postulated knowledge transfer from rapid changes in e-business can be realized.
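
    The abstract stays at the architecture level, so the following plain-Java sketch is only an illustration of what a domain-independent, component-based simulation micro-architecture of this kind might look like; every type name here is hypothetical, and in the framework itself such components would be deployed as distributed J2EE artifacts (e.g., EJBs).

        import java.util.ArrayList;
        import java.util.List;

        /** One simulation entity (business object); hypothetical interface. */
        interface SimulationEntity {
            /** Advance this entity's internal state by one simulated time step. */
            void advance(long simulatedTime);
        }

        /** A pluggable domain module bundling entities imported from an e-business solution. */
        interface DomainModule {
            List<SimulationEntity> createEntities();
        }

        /** Minimal vendor-independent engine: implemented once, extended per domain. */
        final class SimulationEngine {
            private final List<SimulationEntity> entities = new ArrayList<>();

            void install(DomainModule module) {
                entities.addAll(module.createEntities());
            }

            void run(long steps) {
                for (long t = 0; t < steps; t++) {
                    for (SimulationEntity e : entities) {
                        e.advance(t);   // one simulated tick for every business object
                    }
                }
            }
        }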

    The Use of Information Systems in Collocated and Distributed Teams: A Test of the 24-Hour Knowledge Factory

    Recent academic and policy studies focus on offshoring as a cost-of-labor-driven activity that has a direct impact on employment opportunities in the countries involved. This paper broadens this perspective by introducing and evaluating the 24-hour knowledge factory as a model of information systems offshoring that leverages strategic factors beyond cost savings. A true 24-hour knowledge factory ensures that progress is made on information systems related tasks at all times of day by utilizing talented information systems professionals around the globe. Many organizations currently implement other variants of offshoring that appear similar but are fundamentally distinct. The typical model is a service provider framework in which an offshore site provides service to the central site, often with two centers and a distinction between a primary and a secondary center. Entire tasks are often outsourced to the lower-cost overseas site and sent back when completed. In contrast, the 24-hour knowledge factory involves continuous, collaborative, round-the-clock knowledge production, achieved by sequentially and progressively distributing the knowledge creation task around the globe, completing one cycle every 24 hours. Thus, the 24-hour knowledge factory creates a virtual distributed team, in contrast to a team that is collocated at one site, either onshore or offshore. By organizing knowledge tasks in this way, the 24-hour knowledge factory has the potential to work faster, to provide cheaper solutions, and to achieve better overall performance. Previous studies have examined individual teams over time and explored various benefits of distributing work to distant teams, but have not directly compared the effect of collocation versus geographic distribution on the use of information systems and on the overall performance over time of two real-world teams working on a similar task under controlled conditions. This paper highlights the concept of the 24-hour knowledge factory and tests the model in a controlled field experiment that directly compares the use of information systems and subsequent performance in collocated and globally distributed software development teams. The central finding is that while collocation versus geographic distribution changes the way teams use information systems and interact at key points during a project, each type of team has the potential to use information systems to leverage its inherent advantages, to overcome its disadvantages, and ultimately, to perform equally well. In other words, one organizational structure is not inherently superior, nor does structure predetermine performance. Geographic distance introduces new challenges, but these can be overcome and even leveraged for strategic advantage. In sum, our findings suggest that firms can apply the 24-hour knowledge factory model to move from a service provider framework, in which offshoring is a short-term and unilateral cost-saving tactic, to a strategic partnership between centers, in which offshoring becomes a core component of a global corporate strategy.
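
    As a toy illustration of the handoff cycle described above (hypothetical site names and fixed eight-hour shifts; the actual model also involves progressive knowledge transfer at each handoff), a few lines of Java suffice to compute which of three centers currently owns a task:

        import java.time.ZoneOffset;
        import java.time.ZonedDateTime;

        /** Three sites roughly eight time zones apart, each owning the task for one shift. */
        final class KnowledgeFactorySchedule {
            private static final String[] SITES = {"Boston", "Bangalore", "Brisbane"};

            /** Site that owns the task at a given UTC instant: 0-7h, 8-15h, 16-23h. */
            static String ownerAt(ZonedDateTime utc) {
                return SITES[(utc.getHour() / 8) % SITES.length];
            }

            public static void main(String[] args) {
                System.out.println("Owned by: " + ownerAt(ZonedDateTime.now(ZoneOffset.UTC)));
            }
        }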

    Event-triggered Consensus Control of Heterogeneous Multi-agent Systems: Model- and Data-based Analysis

    This article deals with model- and data-based consensus control of heterogeneous leader-following multi-agent systems (MASs) under an event-triggering transmission scheme. A dynamic periodic transmission protocol is developed to significantly reduce the transmission frequency and computational burden, while the followers interact locally with each other and approach the dynamics of the leader. Capitalizing on a discrete-time looped functional, a model-based consensus condition for the closed-loop MASs is derived in the form of linear matrix inequalities (LMIs), together with a design method for obtaining the distributed controllers and event-triggering parameters. Upon collecting noise-corrupted state-input measurements during open-loop operation, a data-driven leader-following MAS representation is presented and employed to solve the data-driven consensus control problem without requiring any knowledge of the agents' models. This result is then extended to the case of guaranteeing an $\mathcal{H}_{\infty}$ performance. A simulation example is finally given to corroborate the efficacy of the proposed distributed event-triggering scheme in cutting off data transmissions, and of the data-driven design method.
    Comment: 13 pages, 6 figures. This draft was first submitted to IEEE Open Journal of Control Systems on April 30, 2022, but rejected on June 19, 2022. On July 23, 2022, the paper was submitted to the journal SCIENCE CHINA Information Sciences.
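
    The abstract does not state the exact triggering rule, so the following is only a common form of dynamic periodic event-triggering condition from the literature (in the style of Girard's dynamic triggering); the sampling period $h$ and the parameters $\sigma$, $\theta$, $\lambda$ are assumptions. At every sampling instant $t_k = kh$, agent $i$ transmits its current state only if

        \[
        \|x_i(t_k) - \hat{x}_i(t_k)\|^2 \;>\; \sigma\,\|x_i(t_k)\|^2 + \theta\,\eta_i(t_k),
        \]

    where $\hat{x}_i$ denotes the last transmitted state and the internal dynamic variable evolves as

        \[
        \eta_i(t_{k+1}) = \lambda\,\eta_i(t_k) + \sigma\,\|x_i(t_k)\|^2 - \|x_i(t_k) - \hat{x}_i(t_k)\|^2,
        \qquad \eta_i(0) > 0,\; 0 < \theta \le \lambda < 1,
        \]

    so transmissions become sparser whenever the measurement error stays small relative to the state, which is how such schemes reduce the transmission frequency between periodic checks.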

    Extracting Information from Qubit-Environment Correlations

    Most works on open quantum systems focus on the reduced physical system by tracing out the environment degrees of freedom. Here we show that the correlations established between the qubits and the environment are essential for a thorough analysis, and demonstrate that the way quantum correlations are distributed in a quantum register is constrained by the way in which each subsystem becomes correlated with the environment. For a two-qubit system coupled to a common dissipative environment $\mathcal{E}$, we show how to optimise interqubit correlations and entanglement via a quantification of the qubit-environment information flow, in a process that, perhaps surprisingly, does not rely on knowledge of the state of the environment. To illustrate our findings, we consider an optically driven bipartite interacting qubit system $AB$ under the action of $\mathcal{E}$. By tailoring the light-matter interaction, a relationship between the qubits' early-stage disentanglement and the qubit-environment entanglement distribution is found. We also show that, under suitable initial conditions, the qubits' energy asymmetry allows the identification of physical scenarios in which qubit-qubit entanglement minima coincide with the extrema of the $A\mathcal{E}$ and $B\mathcal{E}$ entanglement oscillations.
    Comment: 9 pages, 4 figures
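
    A well-known constraint of this kind (a standard result, not one introduced by this paper) is the Coffman-Kundu-Wootters monogamy inequality: for qubits $A$, $B$ and an environment $\mathcal{E}$ in a joint pure state, with $\mathcal{E}$ treated as an effective two-level system, the squared concurrences satisfy

        \[
        \mathcal{C}^2_{AB} + \mathcal{C}^2_{A\mathcal{E}} \;\le\; \mathcal{C}^2_{A(B\mathcal{E})},
        \]

    so the more strongly $A$ is correlated with the environment, the less entanglement it can share with $B$; a trade-off of this kind is what makes the qubit-environment entanglement distribution informative about the interqubit entanglement.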

    A Novel Method for Adaptive Control of Manufacturing Equipment in Cloud Environments

    The ability to adaptively control manufacturing equipment, in both local and distributed environments, is becoming increasingly important for many manufacturing companies. One important reason is that manufacturing companies face increasing levels of change, variation and uncertainty, caused by both internal and external factors, which can negatively impact their performance. Frequently changing consumer requirements and market demands usually lead to variations in manufacturing quantities and product design, and to shorter product life-cycles. Variations in manufacturing capability and functionality, such as equipment breakdowns, missing, worn or broken tools, and delays, also contribute to a high level of uncertainty. The result is unpredictable manufacturing system performance, with an increased number of unforeseen events occurring in these systems, events which are difficult for traditional planning and control systems to manage satisfactorily. For manufacturing scenarios such as these, real-time manufacturing information and intelligence are necessary to enable manufacturing activities to be performed according to actual manufacturing conditions and requirements, rather than according to a pre-determined process plan. There is therefore a need for an event-driven control approach that facilitates adaptive decision-making and dynamic control capabilities. Another driver for adaptive control of manufacturing equipment is increasing globalization, which forces the manufacturing industry to focus on more cost-effective manufacturing systems and on collaboration within global supply chains and manufacturing networks. Cloud Manufacturing is evolving as a new manufacturing paradigm to match this trend, enabling the mutually advantageous sharing of resources, knowledge and information between distributed companies and manufacturing units. One of the crucial objectives of Cloud Manufacturing is the coordinated planning, control and execution of discrete manufacturing operations in collaborative and networked environments. Such an event-driven control approach therefore also needs to support the control of distributed manufacturing equipment. The aim of this research study is to define and verify a novel and comprehensive method for adaptive control of manufacturing equipment in cloud environments. The presented research follows the Design Science Research methodology. From a review of the research literature, problems regarding adaptive manufacturing equipment control have been identified. A control approach, building on a structure of event-driven Manufacturing Feature Function Blocks supported by an Information Framework, has been formulated. The Function Block structure is constructed to generate real-time control instructions, triggered by events from the manufacturing environment. The Information Framework uses ontologies and the Semantic Web to describe and match manufacturing resource capabilities and manufacturing task requests in distributed environments, e.g. within Cloud Manufacturing. The suggested control approach has been designed and instantiated, implemented as prototype systems for both local and distributed manufacturing scenarios, in both real and virtual applications. In these systems, event-driven Assembly Feature Function Blocks for adaptive control of robotic assembly tasks have been used to demonstrate the applicability of the control approach. The utility and performance of these prototype systems have been tested, verified and evaluated for different assembly scenarios. The proposed control approach has many promising characteristics for use in both local and distributed environments, such as cloud environments. Its biggest advantage over traditional control is that the required control is created at run-time according to actual manufacturing conditions. The biggest obstacle to applying it to its full extent is manufacturing equipment controlled by proprietary control systems with native control languages. To take full advantage of the IEC Function Block control approach, controllers that can interface, interpret and execute these Function Blocks directly are necessary.
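
    The mechanics of an event-driven Function Block, as described above, can be mimicked in a few lines of plain Java (all names hypothetical; a real implementation would follow the IEC Function Block model the thesis builds on): an input event triggers an internal algorithm that reads the actual conditions and only then generates the control instruction.

        import java.util.function.Consumer;

        /** Sketch of an event-driven assembly-feature block (hypothetical names). */
        final class AssemblyFeatureBlock {
            private final Consumer<String> instructionOutput; // downstream robot controller

            AssemblyFeatureBlock(Consumer<String> instructionOutput) {
                this.instructionOutput = instructionOutput;
            }

            /** Input event: a part was detected; data inputs carry actual conditions. */
            void onPartDetected(double xMm, double yMm, boolean gripperFree) {
                if (!gripperFree) {
                    instructionOutput.accept("WAIT"); // adapt to the actual equipment state
                    return;
                }
                // The instruction is generated at run time, not read from a fixed plan.
                instructionOutput.accept(String.format("MOVE %.1f %.1f; GRIP; ASSEMBLE", xMm, yMm));
            }

            public static void main(String[] args) {
                AssemblyFeatureBlock block = new AssemblyFeatureBlock(System.out::println);
                block.onPartDetected(120.5, 64.0, true); // event from the manufacturing environment
            }
        }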

    A framework for the co-design of business and IT systems

    Paper presented at the Hawaii International Conference on System Sciences (HICSS-41). This study deals with the intersection of knowledge and action: how knowledge is developed, transformed, interpreted and used to change systems of business processes and IT so that stakeholders may make effective decisions and take effective action in their work. The co-design of business and IT systems is a process within which business systems of human activity and IT systems of information processing are mutually constituted. It requires the negotiation of competing technological frames across multiple knowledge domains. Three major challenges hinder effective innovation: (i) a mismatch between goal-driven IS design methods and the need for cross-functional knowledge sharing; (ii) the distributed and partial knowledge possessed by stakeholders from diverse groups; and (iii) the need to maintain interpretive flexibility across cycles of discovery and analysis. This paper develops an analytical framework for integrating knowledge frames across stakeholder groups, providing a common language for the co-design of business and IT systems.

    Analysis of Mobile Business Processes for the Design of Mobile Information Systems

    The adoption of mobile technologies in companies frequently follows a technology-driven approach, without precise knowledge of the potential benefits that may be realised. Especially in larger organisations with complex business processes, a systematic procedure is required if a verifiable economic benefit is to be created by the use of mobile technologies. Therefore, the term “mobile business process”, as well as requirements for information systems applied in such processes, are defined in this paper. Subsequently, we introduce a procedure for the systematic analysis of the distributed structure of a business process model in order to identify mobile sub-processes. For that purpose, the Mobile Process Landscaping method is used to decompose a process model into different levels of detail. The method aims to manage complexity and to limit the process analysis to the potentially mobile sub-processes from the beginning. The result of the analysis can be used, on the one hand, as a foundation for the redesign of the business processes and, on the other hand, for the requirements engineering of mobile information systems. An application of the method is illustrated with business processes from the insurance industry.
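
    To make the decomposition step concrete, here is a minimal Java sketch of a top-down walk over a process landscape; the mobility criterion used (activities spanning more than one location) and all names are our assumptions, not the method's exact definitions.

        import java.util.ArrayList;
        import java.util.HashSet;
        import java.util.List;
        import java.util.Set;

        final class ProcessLandscape {
            record Activity(String name, String location) {}

            record SubProcess(String name, List<Activity> activities, List<SubProcess> children) {
                /** Assumed criterion: activities performed at more than one location. */
                boolean isMobileCandidate() {
                    Set<String> locations = new HashSet<>();
                    activities.forEach(a -> locations.add(a.location()));
                    return locations.size() > 1;
                }
            }

            /** Refine only candidate branches; single-location branches are pruned early. */
            static void collectCandidates(SubProcess p, List<SubProcess> out) {
                if (p.isMobileCandidate()) {
                    out.add(p);
                    p.children().forEach(c -> collectCandidates(c, out));
                }
            }

            public static void main(String[] args) {
                SubProcess claims = new SubProcess("Claims handling", List.of(
                        new Activity("Record claim", "office"),
                        new Activity("Inspect damage", "customer site")), List.of());
                List<SubProcess> candidates = new ArrayList<>();
                collectCandidates(claims, candidates);
                candidates.forEach(c -> System.out.println("Mobile candidate: " + c.name()));
            }
        }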

    Memory Organization for Invariant Object Recognition and Categorization

    Using distributed representations of objects enables artificial systems to be more versatile regarding inter- and intra-category variability, improving appearance-based modeling of visual object understanding. They are built on the hypothesis that object models are structured dynamically using relatively invariant patches of information arranged in visual dictionaries, which can be shared across objects from the same category. However, implementing distributed representations efficiently enough to support the complexity of invariant object recognition and categorization remains a research problem of outstanding significance for the biological, psychological, and computational approaches to understanding visual perception. The present work focuses on solutions driven by top-down object knowledge. It is motivated by the idea that, equipped with sensors and processing mechanisms from the neural pathways serving visual perception, biological systems are able to define efficient measures of similarity between properties observed in objects and to use these relationships to form natural clusters of object parts that share equivalent properties. Based on the comparison of stimulus-response signatures from these object-to-memory mappings, biological systems are able to identify objects and their kinds. The present work combines biologically inspired mathematical models to develop memory frameworks for artificial systems, where these invariant patches are represented with regular-shaped graphs whose nodes are labeled with elementary features that capture texture information from object images. It also applies unsupervised clustering techniques to these graph image features to corroborate the existence of natural clusters within their data distribution and to determine their composition. The properties of such a computational theory include self-organization and intelligent matching of these graph image features based on the similarity and co-occurrence of their captured texture information. The performance of feature-based artificial systems equipped with each of the developed memory frameworks in modeling invariant object recognition and categorization is validated by applying standard methodologies to well-known image libraries from the literature. Additionally, these artificial systems are compared with state-of-the-art alternative solutions. In conclusion, the findings of the present work convey implications for strategies and experimental paradigms to analyze human object memory, as well as technical applications in robotics and computer vision.
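
    As a highly simplified illustration of matching such graph image features by texture similarity (the fixed grid layout and all names are our simplifications, not the thesis's actual models):

        /** Nodes of a regular-shaped graph labeled with texture feature vectors. */
        final class GraphFeatureMatcher {
            /** Cosine similarity between two node feature vectors (e.g., filter responses). */
            static double cosine(double[] a, double[] b) {
                double dot = 0, na = 0, nb = 0;
                for (int i = 0; i < a.length; i++) {
                    dot += a[i] * b[i];
                    na  += a[i] * a[i];
                    nb  += b[i] * b[i];
                }
                return dot / (Math.sqrt(na) * Math.sqrt(nb) + 1e-12);
            }

            /** Graph similarity as mean node-wise similarity over a shared grid layout. */
            static double graphSimilarity(double[][] g1, double[][] g2) {
                double sum = 0;
                for (int n = 0; n < g1.length; n++) sum += cosine(g1[n], g2[n]);
                return sum / g1.length;
            }

            public static void main(String[] args) {
                double[][] stored = {{0.9, 0.1}, {0.2, 0.8}}; // patch from a visual dictionary
                double[][] probe  = {{0.8, 0.2}, {0.1, 0.9}}; // features from a new image
                System.out.printf("similarity = %.3f%n", graphSimilarity(stored, probe));
            }
        }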

    Facilitating High Performance Code Parallelization

    With the surge of social media on one hand, and the ease of obtaining information from cheap sensing devices and open-source APIs on the other, the amount of data to be processed is increasing vastly. In addition, the world of computing has recently been witnessing a growing shift towards massively parallel distributed systems, owing to the increasing importance of transforming data into knowledge in today’s data-driven world. At the core of data analysis for all sorts of applications lies pattern matching. Parallelizing pattern matching algorithms must therefore be made efficient in order to cope with this ever-increasing abundance of data. We propose a method that automatically detects a user’s single-threaded function call that searches for a pattern using Java’s standard regular expression library, and replaces it with our own data-parallel implementation using Java bytecode injection. Our approach facilitates parallel processing on different platforms, spanning shared-memory systems (using multithreading and NVIDIA GPUs) and distributed systems (using MPI and Hadoop). The major contribution of our implementation is reducing execution time while remaining transparent to the user. In the same spirit of facilitating high-performance code parallelization, we also present a tool that automatically generates Spark Java code from minimal user-supplied input. Spark has emerged as the tool of choice for efficient big data analysis. However, users still have to learn the complicated Spark API in order to write even a simple application. Our tool is easy to use, interactive, and offers the performance of Spark’s native Java API. To the best of our knowledge, at the time of this writing no such tool has yet been implemented.
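
    A minimal sketch of the data-parallel idea, under two simplifying assumptions of ours (match length is bounded and only match start offsets are reported); in the actual system an equivalent replacement is injected at the bytecode level rather than called explicitly:

        import java.util.ArrayList;
        import java.util.List;
        import java.util.Set;
        import java.util.TreeSet;
        import java.util.concurrent.ExecutionException;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.Future;
        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        /** Split the input into overlapping chunks and match each chunk in parallel. */
        final class ParallelGrep {
            static Set<Integer> findAll(String text, String regex, int chunks, int maxMatchLen)
                    throws InterruptedException, ExecutionException {
                Pattern pattern = Pattern.compile(regex);
                ExecutorService pool = Executors.newFixedThreadPool(chunks);
                int chunkLen = (text.length() + chunks - 1) / chunks;
                List<Future<Set<Integer>>> futures = new ArrayList<>();
                for (int c = 0; c < chunks; c++) {
                    int start = c * chunkLen;
                    if (start >= text.length()) break;
                    // Overlap by maxMatchLen so matches crossing a boundary are not lost.
                    int end = Math.min(text.length(), start + chunkLen + maxMatchLen);
                    futures.add(pool.submit(() -> {
                        Set<Integer> hits = new TreeSet<>();
                        Matcher m = pattern.matcher(text.substring(start, end));
                        while (m.find()) hits.add(start + m.start());
                        return hits;
                    }));
                }
                Set<Integer> merged = new TreeSet<>(); // duplicates from the overlap collapse here
                for (Future<Set<Integer>> f : futures) merged.addAll(f.get());
                pool.shutdown();
                return merged;
            }

            public static void main(String[] args) throws Exception {
                System.out.println(findAll("abc42 def 7 gh 99", "\\d+", 4, 8)); // [3, 10, 15]
            }
        }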