
    ISURF: RFID Enabled Collaborative Supply Chain Planning Environment

    To cope with the requirements of today's competitive and demanding digital world of business, companies, and especially SMEs, need to be more agile and ready to react to the changing requirements of their sector. This requires a better view and a more comprehensive analysis of the whole marketplace, which can be achieved through a knowledge-oriented collaborative supply chain planning initiative. The parties also need to be able to monitor supply chain visibility in real time, which can be enabled through the use of RFID devices. RFID-enabled collaborative supply chain planning has been achieved by big industry players in well-defined, restricted business circumstances through selected standard message schemes. However, SMEs are still far behind in this process due to their small IT budgets. In the iSURF project we address this problem by providing a set of open-source tools that enable the seamless collection of supply chain visibility data, synchronize it with master data, and exchange visibility and other planning data between partners through a service-oriented supply chain planning environment that also handles the interoperability of the exchanged messages.
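    A minimal sketch of the kind of processing such tooling performs, aggregating raw RFID reads into supply chain visibility events keyed against master data. The field names, data structures, and master-data lookup are illustrative assumptions, not the iSURF message schemas or tools.

```python
from collections import defaultdict
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class RfidRead:
    tag_id: str
    reader_location: str
    timestamp: datetime

# Hypothetical master-data lookup: tag prefix -> product identifier.
MASTER_DATA = {
    "3034F87": "GTIN-05412345678908",
}

def to_visibility_events(reads):
    """Aggregate raw reads into (product, location) visibility events."""
    events = defaultdict(lambda: {"quantity": 0, "last_seen": None})
    for read in reads:
        product = MASTER_DATA.get(read.tag_id[:7], "UNKNOWN")
        key = (product, read.reader_location)
        events[key]["quantity"] += 1
        events[key]["last_seen"] = read.timestamp
    return [
        {"product": product, "location": location, **data}
        for (product, location), data in events.items()
    ]

if __name__ == "__main__":
    reads = [RfidRead("3034F87AA1", "WAREHOUSE-A", datetime.now(timezone.utc))]
    print(to_visibility_events(reads))
```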

    Software Defined Application Delivery Networking

    In this thesis we present the architecture, design, and prototype implementation details of AppFabric. AppFabric is a next-generation application delivery platform for easily creating, managing, and controlling massively distributed and very dynamic application deployments that may span multiple datacenters. Over the last few years, the need for more flexibility, finer control, and automatic management of large (and messy) datacenters has stimulated technologies for virtualizing the infrastructure components and placing them under software-based management and control, generically called Software-Defined Infrastructure (SDI). However, current applications are not designed to leverage the dynamism and flexibility offered by SDI; they mostly depend on a mix of techniques including manual configuration, specialized appliances (middleboxes), and (mostly) proprietary middleware solutions, together with a team of extremely conscientious and talented system engineers, to get deployed and running. AppFabric 1) automates the whole control and management stack of application deployment and delivery; 2) allows application architects to define logical workflows consisting of application servers, message-level middleboxes, packet-level middleboxes, and network services (both local and wide-area) composed over application-level routing policies; and 3) provides the abstraction of an application cloud that allows the application to dynamically (and automatically) expand and shrink its distributed footprint across multiple geographically distributed datacenters operated by different cloud providers. The architecture consists of a hierarchical control plane system called Lighthouse and a fully distributed data plane design (with no special hardware components such as service orchestrators, load balancers, message brokers, etc.) called OpenADN. The current implementation (under active development) consists of roughly 10,000 lines of Python and C code. AppFabric will allow applications to fully leverage the opportunities provided by modern virtualized Software-Defined Infrastructures. It will serve as the platform for deploying massively distributed and extremely dynamic next-generation application use cases, including the following.
    Internet-of-Things/Cyber-Physical Systems: through support for managing the distributed gather-aggregate topologies common to most Internet-of-Things (IoT) and Cyber-Physical Systems (CPS) use cases. By their very nature, IoT and CPS use cases are massively distributed and have different levels of computation and storage requirements at different locations, as well as variable latency requirements across their distributed sites. Some services in an IoT/CPS application workflow, such as device controllers, may need to gather, process, and forward data under near-real-time constraints and hence need to be as close to the device as possible; other services may need more computation to process aggregated data and drive long-term business intelligence functions. AppFabric has been designed to support such very dynamic, highly diversified, and massively distributed application use cases.
    Network Function Virtualization: through support for heterogeneous workflows, application-aware networking, and network-aware application deployments, AppFabric will enable new partnerships between Application Service Providers (ASPs) and Network Service Providers (NSPs). An application workflow in AppFabric may comprise application services, packet- and message-level middleboxes, and network transport services chained together over an application-level routing substrate. The application-level routing substrate allows policy-based service chaining, where the application may specify policies for routing its traffic over different services based on application-level content or context.
    Virtual worlds/multiplayer games: through support for creating, managing, and controlling the dynamic and distributed application clouds needed by these applications. AppFabric allows the application to easily specify policies to dynamically grow and shrink the application's footprint over different geographical sites, on demand.
    Mobile apps: through support for the extremely diversified and very dynamic application contexts typical of such applications. AppFabric also provides support for automatically managing massively distributed service deployments and controlling application traffic based on application-level policies, allowing mobile applications to provide the best Quality of Experience to their users.
    This thesis is the first to handle and provide a complete solution for such a complex and relevant architectural problem, one that is expected to touch each of our lives by enabling exciting new application use cases that are not possible today. AppFabric is also a non-proprietary platform that is expected to spawn many innovations, both in the design of the platform itself and in the features it provides to applications. AppFabric still needs many iterations, in terms of both design and implementation maturity. This thesis is not the end of the journey for AppFabric, but rather just the beginning.
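    As a rough illustration of the policy-based service chaining described above, the sketch below resolves a message to an ordered chain of logical services and dispatches it along that chain. The service names, policy predicates, and registry are assumptions made for illustration; they are not the OpenADN or Lighthouse APIs.

```python
# Policy table: (predicate over message context, ordered chain of services).
WORKFLOW_POLICIES = [
    (lambda msg: msg.get("content_type") == "video",
     ["auth", "transcoder", "edge-cache", "app-server"]),
    (lambda msg: msg.get("region") == "eu",
     ["auth", "gdpr-filter", "app-server"]),
    (lambda msg: True,                     # default chain
     ["auth", "app-server"]),
]

def resolve_chain(message):
    """Pick the first workflow whose policy matches the message context."""
    for predicate, chain in WORKFLOW_POLICIES:
        if predicate(message):
            return chain
    return []

def dispatch(message, service_registry):
    """Send the message through each service instance in the resolved chain."""
    for service in resolve_chain(message):
        handler = service_registry[service]   # e.g. nearest healthy instance
        message = handler(message)
    return message

if __name__ == "__main__":
    # Trivial registry whose handlers pass messages through unchanged.
    registry = {name: (lambda m: m) for name in
                ["auth", "transcoder", "edge-cache", "app-server", "gdpr-filter"]}
    print(resolve_chain({"content_type": "video"}))
    print(dispatch({"region": "eu"}, registry))
```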

    CERN openlab Whitepaper on Future IT Challenges in Scientific Research

    This whitepaper describes the major IT challenges in scientific research at CERN and at several other European and international research laboratories and projects. Each challenge is exemplified through a set of concrete use cases drawn from the requirements of large-scale scientific programs. The paper is based on contributions from many researchers and IT experts of the participating laboratories, as well as input from the existing CERN openlab industrial sponsors. The views expressed in this document are those of the individual contributors and do not necessarily reflect the views of their organisations and/or affiliates.

    High-Performance Near-Time Processing of Bulk Data

    Enterprise systems such as customer-billing systems or financial transaction systems are required to process large volumes of data in a fixed period of time. These systems are increasingly required to also provide near-time processing of data to support new service offerings. Common systems for data processing are optimized either for high maximum throughput or for low latency. This thesis proposes the concept of an adaptive middleware, a new approach to designing systems for bulk data processing. The adaptive middleware is able to shift its processing type fluidly between batch processing and single-event processing. By using message aggregation, message routing, and a closed feedback loop to adjust the data granularity at runtime, the system is able to minimize the end-to-end latency for different load scenarios. The relationship between end-to-end latency and throughput of batch and message-based systems is formally analyzed, and a performance evaluation of both processing types has been conducted. Additionally, the impact of message aggregation on throughput and latency is investigated. The proposed middleware concept has been implemented in a research prototype and evaluated. The results of the evaluation show that the concept is viable and is able to optimize the end-to-end latency of a system. The design, implementation, and operation of an adaptive system for bulk data processing differ from common approaches to implementing enterprise systems. A conceptual framework has therefore been developed to guide the process of building adaptive software for bulk data processing. It defines the needed roles and their skills, the necessary tasks and their relationships, the artifacts that are created and required by different tasks, the tools needed to carry out the tasks, and the processes that describe the order of the tasks.
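    A minimal sketch of the closed feedback-loop idea, assuming a simple halve/double policy that shrinks the aggregation size when measured end-to-end latency exceeds a target and grows it when there is slack. The policy, target, and bounds are illustrative, not the thesis prototype's actual controller.

```python
class AggregationController:
    """Adjust message aggregation size from latency feedback at runtime."""

    def __init__(self, target_latency_ms, min_batch=1, max_batch=10_000):
        self.target = target_latency_ms
        self.min_batch = min_batch
        self.max_batch = max_batch
        self.batch_size = min_batch   # start in single-event mode

    def update(self, observed_latency_ms):
        """Shrink batches when latency is too high, grow them when there
        is slack, moving between single-event and batch processing."""
        if observed_latency_ms > self.target:
            self.batch_size = max(self.min_batch, self.batch_size // 2)
        else:
            self.batch_size = min(self.max_batch, self.batch_size * 2)
        return self.batch_size

if __name__ == "__main__":
    ctrl = AggregationController(target_latency_ms=200.0)
    for latency in [50, 60, 80, 500, 90]:   # simulated measurements
        print(latency, "ms ->", ctrl.update(latency), "messages per batch")
```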

    Speculative Segmented Sum for Sparse Matrix-Vector Multiplication on Heterogeneous Processors

    Sparse matrix-vector multiplication (SpMV) is a central building block for scientific software and graph applications. Recently, heterogeneous processors composed of different types of cores have attracted much attention because of their flexible core configuration and high energy efficiency. In this paper, we propose a compressed sparse row (CSR) format based SpMV algorithm utilizing both types of cores in a CPU-GPU heterogeneous processor. We first speculatively execute segmented sum operations on the GPU part of a heterogeneous processor and generate possibly incorrect results. The CPU part of the same chip is then triggered to re-arrange the predicted partial sums into a correct resulting vector. On three heterogeneous processors from Intel, AMD, and NVIDIA, using 20 sparse matrices as a benchmark suite, the experimental results show that our method obtains significant performance improvement over the best existing CSR-based SpMV algorithms. The source code of this work is available at https://github.com/bhSPARSE/Benchmark_SpMV_using_CSR. Comment: 22 pages, 8 figures, published in Parallel Computing (PARCO).
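    For reference, CSR SpMV expressed as a segmented sum over per-nonzero products, the formulation the paper evaluates speculatively on the GPU before the CPU repairs mispredicted row boundaries, can be sketched sequentially as follows. This NumPy version shows only the arithmetic, not the speculative execution or the CPU-GPU split.

```python
import numpy as np

def csr_spmv_segmented(row_ptr, col_idx, vals, x):
    """CSR SpMV as a segmented sum: one product per nonzero, summed per row."""
    products = vals * x[col_idx]               # per-nonzero products
    y = np.zeros(len(row_ptr) - 1, dtype=products.dtype)
    for row in range(len(y)):                  # segments defined by row_ptr
        start, end = row_ptr[row], row_ptr[row + 1]
        y[row] = products[start:end].sum()
    return y

if __name__ == "__main__":
    # 3x3 matrix [[1, 0, 2], [0, 3, 0], [4, 0, 5]] in CSR form
    row_ptr = np.array([0, 2, 3, 5])
    col_idx = np.array([0, 2, 1, 0, 2])
    vals = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    x = np.array([1.0, 1.0, 1.0])
    print(csr_spmv_segmented(row_ptr, col_idx, vals, x))  # [3. 3. 9.]
```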

    Participant Domain Name Token Profile for security enhancements supporting service oriented architecture

    This research proposes a new secure token profile for improving the existing Web Services security standards. It provides a new authentication mechanism. This additional level of security is important for the Service-Oriented Architecture (SOA), an architectural style that uses a set of principles and design rules to shape interacting applications and maintain interoperability. Currently, the market push is towards SOA, which provides several advantages, for instance integration with heterogeneous systems, service reuse, and standardization of data exchange. Web Services is one of the technologies used to implement SOA, and it can be implemented using the Simple Object Access Protocol (SOAP). A SOAP-based Web Service relies on XML for its message format and on common application-layer protocols for message negotiation and transmission. However, securing a message transmitted over the network, especially over the Internet, is a challenge. The Organization for the Advancement of Structured Information Standards (OASIS) has announced a set of Web Services Security standards that focus on two major areas: "who" can use the Web Service and "what" the permissions are. However, the location or domain of the message sender is not authenticated. Therefore, a new secure token profile called the Participant Domain Name Token Profile (PDNT) is created to tackle this issue. The PDNT provides a new security feature that the existing token profiles do not address: location-based authentication is achieved when adopting the PDNT in Web Services. In the performance evaluation, the PDNT is demonstrated to be significantly faster than other secure token profiles. The processing overhead of using the PDNT together with other secure token profiles is very small given the additional security provided. Therefore, all participants can acquire the benefits of increased security and performance at low cost.
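    A hedged sketch of the underlying idea: attach the sender's domain to the WS-Security header so the receiver can also authenticate the origin domain of a SOAP message. The token element name and structure below are assumptions for illustration, not the normative PDNT schema.

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
WSSE_NS = ("http://docs.oasis-open.org/wss/2004/01/"
           "oasis-200401-wss-wssecurity-secext-1.0.xsd")

def build_envelope_with_pdnt(participant_domain, body_xml):
    """Build a SOAP envelope whose security header carries the sender domain."""
    ET.register_namespace("soap", SOAP_NS)
    ET.register_namespace("wsse", WSSE_NS)
    envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
    header = ET.SubElement(envelope, f"{{{SOAP_NS}}}Header")
    security = ET.SubElement(header, f"{{{WSSE_NS}}}Security")
    # Hypothetical token element carrying the participant's domain name.
    token = ET.SubElement(security, f"{{{WSSE_NS}}}ParticipantDomainNameToken")
    token.text = participant_domain
    body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
    body.append(ET.fromstring(body_xml))
    return ET.tostring(envelope, encoding="unicode")

if __name__ == "__main__":
    print(build_envelope_with_pdnt("supplier.example.com", "<GetOrderStatus/>"))
```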

    Cloud computing for energy management in smart grid - an application survey

    The smart grid is an emerging energy system in which information technology, tools, and techniques are applied to make the grid run more efficiently. It possesses demand-response capacity to help balance electrical consumption with supply. The challenges and opportunities of emerging and future smart grids can be addressed by cloud computing. To address these requirements, we provide an in-depth survey of different cloud computing applications for energy management in the smart grid architecture. In this survey, we present an outline of the current state of research on smart grid development. We also propose a model of cloud-based economic power dispatch for the smart grid.
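    To make the economic power dispatch notion concrete, the toy sketch below meets a demand forecast at minimum cost using a merit-order heuristic over assumed generator data. It illustrates the generic optimization problem only, not the cloud-based dispatch model proposed in the survey.

```python
# Illustrative generator data: (name, marginal cost $/MWh, capacity MW).
GENERATORS = [
    ("hydro", 5.0, 300.0),
    ("coal", 30.0, 500.0),
    ("gas-peaker", 80.0, 200.0),
]

def merit_order_dispatch(demand_mw):
    """Dispatch the cheapest units first until demand is met."""
    schedule, remaining, total_cost = {}, demand_mw, 0.0
    for name, cost, capacity in sorted(GENERATORS, key=lambda g: g[1]):
        output = min(capacity, remaining)
        schedule[name] = output
        total_cost += output * cost
        remaining -= output
        if remaining <= 0:
            break
    if remaining > 0:
        raise ValueError("demand exceeds total capacity")
    return schedule, total_cost

if __name__ == "__main__":
    print(merit_order_dispatch(650.0))   # hydro and coal cover the demand
```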

    Design and Performance Evaluation of a Massively Parallel, Service-Oriented Application Bus

    The Enterprise Service Bus (ESB) is currently the most promising approach for implementing a service-oriented architecture (SOA) by integrating different isolated applications into a centralized platform. Numerous ESB-based integration solutions have been proposed, either open source, such as Mule, Petals, or Fuse, or proprietary, such as Sonic ESB, IBM WebSphere Message Broker, or Oracle ESB. However, to the best of our knowledge, none of them is able to handle both aspects at once: integration and massively parallel processing. Integrating parallelism into message processing is a way to take advantage of multicore/multiprocessor technologies, which can considerably improve ESB performance. However, this integration is a complex undertaking and raises problems at several levels: communication, synchronization, data sharing, etc. In this thesis, we present the study of a new massively parallel ESB architecture that meets this challenge.
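    A minimal sketch of the core idea, assuming independent messages: run ESB mediation steps for different messages in parallel worker threads to exploit multicore hardware. The mediation chain and worker count are illustrative; the thesis architecture additionally addresses routing, synchronization, and shared state, which this toy pipeline ignores.

```python
from concurrent.futures import ThreadPoolExecutor

def mediate(message):
    """Stand-in mediation chain: validate, transform, then route."""
    assert "payload" in message                          # validate
    message = {**message, "payload": message["payload"].upper()}  # transform
    message["route"] = ("billing" if "invoice" in message["payload"].lower()
                        else "default")                  # content-based routing
    return message

def run_bus(messages, workers=4):
    """Process independent messages concurrently on a worker pool."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(mediate, messages))

if __name__ == "__main__":
    inbox = [{"payload": "invoice 42"}, {"payload": "order 7"}]
    print(run_bus(inbox))
```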

    Monitoring and Information Alignment in Pursuit of an IoT-Enabled Self-Sustainable Interoperability

    To remain competitive with big corporations, small and medium-sized enterprises (SMEs) often need to be more dynamic, adapt to new business situations, and react faster in order to survive in today's global economy. To do so, SMEs normally seek to create consortiums, thus gaining access to new and more opportunities. However, this strategy may also lead to complications. Due to the different sources of enterprise models and semantics, organizations experience difficulties in seamlessly exchanging vital information via electronic means. In their attempts to address this issue, most seek to achieve interoperability by establishing peer-to-peer mappings with different business partners, or by using neutral data standards to regulate communications in optimized networks. Moreover, systems are increasingly dynamic, changing frequently to answer new customer requirements, which causes new interoperability problems and a reduction in efficiency. The devices used in enterprises are another constantly changing element: like the Enterprise Information Systems themselves, devices are used to register internal data and to monitor several aspects of operations, and they change continuously, following the evolution and growth of the market. It is therefore important to monitor these devices and to maintain a model representation of them. This dissertation proposes a self-sustainable interoperability framework that monitors existing enterprise information systems and their devices, watches the device/enterprise network for changes, and automatically detects model changes. With this, network harmonization disruptions are detected in a timely way, and possible solutions are suggested to regain the interoperable status, thus enhancing robustness and the long-term sustainability of business networks.
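    A hedged sketch of the monitoring idea: periodically compare the currently observed device/enterprise model against the last known snapshot and flag differences as potential harmonization disruptions. The model representation and the reporting step are illustrative only, not the dissertation's framework.

```python
def diff_models(known, observed):
    """Compare two {device_id: model_descriptor} snapshots."""
    return {
        "added":   sorted(observed.keys() - known.keys()),
        "removed": sorted(known.keys() - observed.keys()),
        "changed": sorted(d for d in known.keys() & observed.keys()
                          if known[d] != observed[d]),
    }

def check_network(known, observed):
    """Detect model changes and report a harmonization disruption."""
    changes = diff_models(known, observed)
    if any(changes.values()):
        # In the proposed framework this would trigger realignment
        # suggestions; here we only report the disruption.
        print("harmonization disruption detected:", changes)
    return changes

if __name__ == "__main__":
    known = {"rfid-reader-1": {"schema": "v1"}, "scale-2": {"schema": "v3"}}
    observed = {"rfid-reader-1": {"schema": "v2"}, "cam-9": {"schema": "v1"}}
    check_network(known, observed)
```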