87 research outputs found

    An agent approach to improving radio frequency identification enabled Returnable Transport Equipment

    Returnable transport equipment (RTE) such as pallets forms an integral part of the supply chain, and poor management leads to costly losses. Companies often address this by outsourcing the management of RTE to logistics service providers (LSPs). LSPs are tasked with providing logistical expertise to reduce RTE-related waste while differentiating their own services to remain competitive. In the current challenging economic climate, the LSP's role in delivering innovative routes to competitive advantage has never been more important. Radio frequency identification (RFID) applied to RTE is reported to enable LSPs such as DHL to gain competitive advantage and offer clients improvements such as loss reduction, greater process efficiency and effective security. However, the increased visibility and functionality of RFID-enabled RTE requires further investigation with regard to decision-making. The distributed nature of the RTE network favours a decentralised decision-making format. Agents are an effective way to represent objects from the bottom up, capturing their behaviour and enabling localised decision-making. Therefore, an agent-based system is proposed to represent the RTE network and exploit the visibility and data gathered from RFID tags. Two types of agent are developed to represent the trucks and the RTE, each with bespoke rules and algorithms to facilitate negotiations. The aim is to create schedules that integrate RTE pick-ups as the trucks return to the depot. The findings assert that:
    - agent-based modelling provides an autonomous tool that is effective in modelling RFID-enabled RTE in a decentralised manner, utilising the real-time data facility;
    - the RFID-enabled RTE model developed enables autonomous agent interaction, leading to a feasible schedule that integrates both forward and reverse flows for each RTE batch;
    - the RTE agent scheduling algorithm developed promotes the utilisation of RTE by including an automatic return flow for each batch of RTE, whilst considering fleet costs and utilisation rates;
    - the research conducted contributes an agent-based platform that LSPs can use to assess the most appropriate strategies for RTE network improvement for each of their clients
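    The truck/RTE negotiation described above can be illustrated with a minimal sketch. This is not the thesis's actual algorithm (the bespoke rules and algorithms are not given in the abstract); it only shows the general shape of the idea: RTE batch agents offer pick-ups, and truck agents returning to the depot bid for them based on spare capacity, so each schedule combines a forward delivery with a reverse RTE flow. All class and field names are illustrative assumptions.

    ```python
    # Illustrative sketch only: trucks returning to the depot bid for RTE
    # batch pick-ups; each batch goes to the cheapest feasible truck.
    from dataclasses import dataclass, field

    @dataclass
    class RTEBatch:
        batch_id: str
        location: float      # position along the return route (1 = client, 0 = depot)
        quantity: int        # pallets in the batch

    @dataclass
    class Truck:
        truck_id: str
        capacity: int        # spare pallet slots on the return leg
        schedule: list = field(default_factory=list)

        def bid(self, batch: RTEBatch) -> float:
            """Lower bid = cheaper pick-up; infeasible batches bid infinity."""
            if batch.quantity > self.capacity:
                return float("inf")
            return batch.location  # crude proxy for detour cost

    def negotiate(trucks, batches):
        """Greedy auction: award each batch to the cheapest feasible truck."""
        unassigned = []
        for batch in sorted(batches, key=lambda b: b.location, reverse=True):
            best = min(trucks, key=lambda t: t.bid(batch))
            if best.bid(batch) == float("inf"):
                unassigned.append(batch)   # no truck has room; wait for next run
                continue
            best.schedule.append(batch.batch_id)
            best.capacity -= batch.quantity
        return unassigned

    trucks = [Truck("T1", capacity=20), Truck("T2", capacity=10)]
    batches = [RTEBatch("B1", 0.8, 12), RTEBatch("B2", 0.5, 10), RTEBatch("B3", 0.2, 15)]
    left_over = negotiate(trucks, batches)
    # B1 fits only T1, B2 then fits only T2, and B3 exceeds all remaining capacity.
    ```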

    Reference Model for Management of RFID System Implementations

    Radio frequency identification (RFID) technology is adopted in supply chains because it holds high potential for optimization. However, adoption is constrained by management and technological issues in certain domains. The applicability and profitability of the technology, implementation approaches, technology maturity and data integration are a few of the concerns in this regard. Many enterprises therefore remain skeptical about investing in RFID technology. At present, for instance, there are no suitable approaches to managing RFID system implementations that consider the specific concerns of food manufacturing enterprises. This research proposes a reference model for that purpose. The model is the result of extensive literature reviews and practice-oriented research aimed at practical solutions to the problems of the respective domain. The model, which covers the planning, organization and realization of RFID system implementation activities, considers multiple facets of RFID system implementations in order to increase understanding of RFID technology (i.e. knowledge development), ease the decision-making of an RFID implementation (i.e. willingness), and reduce the cost and complexity of RFID system implementations (i.e. effectiveness and efficiency). It is an artifact of design-oriented information systems research and includes a frame of reference, a process model, input and output templates, and tools and techniques. The model has been applied in a real-life context in order to achieve the objectives of the enterprises involved. The model also aims at effectiveness and efficiency in future use, for example by being available free of cost and appropriate for food manufacturing businesses in Saxony-Anhalt. However, adaptation effort (e.g. by instantiation or specialization) may vary depending on the skills of users in individual enterprises.
The reference model provides flexibility in terms of independence from specific vendors, openness through compliance with available standards (e.g. PMBOK), and linkage to RFID system development artifacts during technical realization work

    New Trends in the Use of Artificial Intelligence for the Industry 4.0

    Industry 4.0 is based on the cyber-physical transformation of the processes, systems and methods applied in the manufacturing sector, and on their autonomous and decentralized operation. Industry 4.0 reflects that the industrial world is at the beginning of the so-called Fourth Industrial Revolution, characterized by massive interconnection of assets and the integration of human operators with the manufacturing environment. In this regard, data analytics and, specifically, artificial intelligence is the vehicular technology towards the next generation of smart factories. The chapters in this book cover a diversity of current and new developments in the use of artificial intelligence in the industrial sector, seen from the point of view of the Fourth Industrial Revolution: cyber-physical applications, artificial intelligence technologies and tools, the Industrial Internet of Things, and data analytics. This book contains high-quality chapters presenting original research results and literature reviews of exceptional merit. The aim of the book is thus to contribute to the literature on this topic and to acquaint readers with current and new trends in the use of artificial intelligence for Industry 4.0

    High-Performance Modelling and Simulation for Big Data Applications

    This open access book was prepared as the final publication of the COST Action IC1406 project “High-Performance Modelling and Simulation for Big Data Applications (cHiPSet)”. Long considered important pillars of the scientific method, modelling and simulation have evolved from traditional discrete numerical methods to complex data-intensive continuous analytical optimisations. Resolution, scale and accuracy have become essential to predicting and analysing natural and complex systems in science and engineering. As their level of abstraction rises to allow a better discernment of the domain at hand, their representation becomes increasingly demanding in computational and data resources. High-performance computing, on the other hand, typically entails the effective use of parallel and distributed processing units coupled with efficient storage, communication and visualisation systems to underpin complex data-intensive applications in distinct scientific and technical domains. A seamless interaction of high-performance computing with modelling and simulation is therefore arguably required in order to store, compute, analyse and visualise large data sets in science and engineering. Funded by the European Commission, cHiPSet has provided a dynamic trans-European forum for its members and distinguished guests to openly discuss novel perspectives and topics of interest to these two communities. This cHiPSet compendium presents a set of selected case studies related to healthcare, biological data, computational advertising, multimedia, finance, bioinformatics and telecommunications

    Workplace values in the Japanese public sector: a constraining factor in the drive for continuous improvement


    Advances in Supply Chain Management Decision Support Systems: Potential for Improving Decision Support Catalysed by Semantic Interoperability between Systems

    Globalization has catapulted ‘cycle time’ into a key indicator of operational efficiency [1] in processes such as supply chain management (SCM). Systems automation holds the promise of augmenting the ability of supply chain operations or supply networks to adapt rapidly to change, with minimal human intervention, under ideal conditions. Business communities are emerging as loose federations or organizations of networks that may evolve to act as infomediaries in global SCM. These changes, although sluggish, are likely to impact process knowledge and may in turn be stimulated or inhibited by the availability or lack of process interoperability, respectively. The latter will determine the operational efficiency of supply chains. Currently, “communities of systems” or organizations of networks (aligned by industry or business focus) contribute minimally to SCM decisions because true collaboration remains elusive. The convergence and maturity of multiple advances offer the potential for a paradigm shift in interoperability. It may evolve hand-in-hand with [a] the gradual adoption of the semantic web [2] and the concomitant development of ontological frameworks, [b] increased use of multi-agent systems, and [c] the advent of ubiquitous computing enabling near real-time identification of objects and analytics [4]. This paper examines some of these complex trends and related technologies. Irrespective of the characteristics of the information systems involved, the development of industry-contributed ontologies for the knowledge and decision layers may spur self-organizing networks of business communities and systems to increase their ability to sense and respond, more profitably, through better enterprise and extraprise exchange. To transform this vision into reality, systems automation must be weaned from the syntactic web and integrated with the organic growth of the semantic web.
Understanding of process semantics and the incorporation of intelligent agents with access to a ubiquitous near real-time data “bus” are pillars of the “intelligent” evolution of decision support systems. Software as infrastructure may integrate a plethora of agent colonies through improved architectures (such as service-oriented architecture, SOA), and business communities aligned by industry or service focus may emerge as hubs of such agent empires. However, the path from exciting “pilots” in specific areas towards an informed convergence of systemic real-world implementation remains unclear and fraught with hurdles related to gaps in knowledge transfer from experts in academia to real-world practitioners. The value of interoperability between systems that may catalyse real-time intelligent decision support is further compromised by a lack of clarity about approaches and tools. The latter offers significant opportunities for the development of tools that may segue into innovative solution approaches. A critical mass of such solutions may spawn the systems architecture necessary for intelligent interoperability, essential for sustainable profitability and productivity in an intensely competitive global economy. This paper addresses some of these issues, tools and solutions, which may have broad applicability in several operations including the management of adaptive supply-demand networks [7]
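The semantic-interoperability idea above can be made concrete with a small sketch. The paper does not give an implementation; here a plain mapping table stands in for a shared ontology (which in practice would be an OWL/RDF vocabulary), letting an agent translate messages between two systems that name the same concepts differently, instead of hand-writing point-to-point converters. All system names, field names and message contents below are illustrative assumptions.

```python
# Illustrative sketch: a shared vocabulary (stand-in for an ontology) maps
# each system's local terms to canonical concepts, so an agent can relay a
# message from system A to system B without bespoke A-to-B glue code.

SHARED_ONTOLOGY = {
    # (system, local term)     -> canonical concept
    ("erp_a", "po_number"):    "purchase_order_id",
    ("erp_a", "qty"):          "quantity",
    ("erp_b", "orderRef"):     "purchase_order_id",
    ("erp_b", "unitsOrdered"): "quantity",
}

def to_canonical(system: str, message: dict) -> dict:
    """Lift a system-local message into the shared vocabulary."""
    return {SHARED_ONTOLOGY[(system, k)]: v for k, v in message.items()}

def from_canonical(system: str, message: dict) -> dict:
    """Project a canonical message into a target system's vocabulary."""
    reverse = {v: k for (s, k), v in SHARED_ONTOLOGY.items() if s == system}
    return {reverse[concept]: v for concept, v in message.items()}

# An agent relays an order from system A to system B via the ontology:
msg_a = {"po_number": "PO-7741", "qty": 120}
msg_b = from_canonical("erp_b", to_canonical("erp_a", msg_a))
```

Adding a third system then only requires adding its terms to the shared table, not new pairwise converters; this is the scaling argument usually made for ontology-mediated over point-to-point integration.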