5,810 research outputs found
Cyber-physical manufacturing systems: An architecture for sensor integration, production line simulation and cloud services
The pillars of Industry 4.0 require the integration of a modern smart factory, data storage in the Cloud, access to the Cloud for data analytics, and information sharing at the software level for simulation and hardware-in-the-loop (HIL) capabilities. The resulting cyber-physical system (CPS) is often termed a cyber-physical manufacturing system, and it has become crucial for coping with the increased system complexity and attaining the desired performance. However, since a great number of old production systems are based on monolithic architectures with limited external communication ports and reduced local computational capabilities, it is difficult to ensure such production lines comply with the Industry 4.0 pillars. A wireless sensor network is one solution for the smart connection of a production line to a CPS that elaborates data through cloud computing. The scope of this research work lies in developing a modular software architecture, based on the Open Service Gateway initiative (OSGi) framework, that can seamlessly integrate both hardware and software wireless sensors, send data into the Cloud for further analysis, and enable both HIL and cloud computing capabilities. The CPS architecture was initially tested using HIL tools before being deployed within a real manufacturing line for data collection and analysis over a period of two months.
Prist, Mariorosario; Monteriu', Andrea; Pallotta, Emanuele; Cicconi, Paolo; Freddi, Alessandro; Giuggioloni, Federico; Caizer, Eduard; Verdini, Carlo; Longhi, Sauro
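The sensor-to-cloud flow the abstract describes can be conveyed with a minimal sketch (in Python rather than the authors' OSGi/Java setting; the module names, fields and values are hypothetical):

```python
import json
import time

class SensorModule:
    """A pluggable sensor module; the name and read function are illustrative."""
    def __init__(self, name, read_fn):
        self.name = name
        self.read_fn = read_fn

    def sample(self):
        # One timestamped reading from this sensor.
        return {"sensor": self.name, "value": self.read_fn(), "ts": time.time()}

class Gateway:
    """Collects readings from registered modules and serialises them
    into a JSON payload suitable for upload to a cloud endpoint."""
    def __init__(self):
        self.modules = []

    def register(self, module):
        # New sensors plug in without changing the gateway code.
        self.modules.append(module)

    def collect(self):
        return json.dumps([m.sample() for m in self.modules])

gw = Gateway()
gw.register(SensorModule("vibration", lambda: 0.42))
gw.register(SensorModule("temperature", lambda: 71.5))
payload = gw.collect()
```

The point of the sketch is the modularity: sensors register against a common interface, and the gateway only ever sees that interface, mirroring the plug-in style of an OSGi bundle.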
Service-oriented architecture for device lifecycle support in industrial automation
Dissertation submitted for the degree of Doctor in Electrical and Computer Engineering, Specialisation: Robotics and Integrated Manufacturing.
This thesis addresses device lifecycle support in the scope of the service-oriented industrial automation domain. This domain is known for its plethora of heterogeneous equipment encompassing distinct functions, form factors, network interfaces, and I/O specifications, supported by dissimilar software and hardware platforms. There is therefore an evident and growing need to take every device into account and improve agility during the setup, control, management, monitoring and diagnosis phases.
The service-oriented architecture (SOA) paradigm is currently a widely endorsed approach for both business and enterprise systems integration. SOA concepts and technology are continuously spreading across the layers of the enterprise organisation, envisioning a unified interoperability solution. SOA promotes discoverability, loose coupling, abstraction, autonomy and composition of services relying on open web standards: features that can provide an important contribution to the industrial automation domain.
The present work seized industrial automation device-level requirements, constraints and needs to determine how and where SOA can be employed to solve some of the existing difficulties. Supported by these outcomes, a reference architecture shaped by distributed, adaptive and composable modules is proposed. This architecture will assist and ease the role of systems integrators during reengineering-related interventions throughout the system lifecycle. In a converging direction, the present work also proposes a service-oriented device model to support the previous architecture's vision and goals by including embedded added value in terms of service-oriented peer-to-peer discovery and identification, configuration, management, as well as agile customisation of device resources.
In this context, the implementation and validation work proved not simply the feasibility and fitness of the proposed solution on two distinct test benches, but also its relevance to the expanding domain of SOA applications supporting device lifecycle in industrial automation.
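The discoverability and loose coupling the thesis borrows from SOA can be sketched as a toy service registry (a Python illustration with hypothetical device names and endpoints, not the thesis's actual device model):

```python
class DeviceService:
    """A service published by a device: what it does and where to reach it."""
    def __init__(self, device_id, service_type, endpoint):
        self.device_id = device_id
        self.service_type = service_type
        self.endpoint = endpoint

class ServiceRegistry:
    """Registry supporting discovery of device services by type,
    loosely modelled on SOA discoverability; names are illustrative."""
    def __init__(self):
        self._services = []

    def publish(self, service):
        self._services.append(service)

    def discover(self, service_type):
        # Clients ask for a capability, not a specific device:
        # this is the loose coupling SOA promotes.
        return [s for s in self._services if s.service_type == service_type]

reg = ServiceRegistry()
reg.publish(DeviceService("plc-01", "io-control", "http://plc-01/io"))
reg.publish(DeviceService("cam-02", "inspection", "http://cam-02/img"))
matches = reg.discover("io-control")
```

Because integrators query by service type rather than by device identity, a device can be swapped or reconfigured during a reengineering intervention without touching its consumers.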
Developing a Digital Twin at Building and City Levels: A Case Study of West Cambridge Campus
A digital twin (DT) is a digital replica of physical assets, processes, and systems. DTs integrate artificial intelligence, machine learning, and data analytics to create living digital simulation models that learn and update from multiple sources, and that represent and predict the current and future conditions of their physical counterparts. However, current DT activities are still at an early stage with respect to buildings and other infrastructure assets from an architectural and engineering/construction point of view. Less attention has been paid to the operation and maintenance (O&M) phase, the longest time span in the asset life cycle. A systematic and clear architecture, verified with practical use cases, would be the foremost step towards constructing a DT for effective operation and maintenance of buildings and cities. Drawing on current research on multitier architectures, this paper presents a system architecture for DTs that is specifically designed at both the building and city levels. Based on this architecture, a DT demonstrator of the West Cambridge site of the University of Cambridge in the UK was developed that integrates heterogeneous data sources, supports effective data querying and analysis, supports decision-making processes in O&M management, and further bridges the gap between people and their buildings/cities. This paper walks through the whole process of developing DTs at the building and city levels from a technical perspective and shares the lessons learned and challenges involved in developing DTs in real practice. Through this DT demonstrator, the results provide a clear roadmap and present particular DT research efforts for asset management practitioners, policymakers, and researchers to promote the implementation and development of DTs at the building and city levels.
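The data-integration tier of such an architecture can be caricatured in a few lines; a hedged sketch with invented source and field names ("bms", "bim", "condition"), not the demonstrator's actual schema:

```python
class DigitalTwin:
    """Toy data layer of a building-level digital twin: merges
    heterogeneous per-asset records and supports a simple O&M query.
    All field names here are illustrative assumptions."""
    def __init__(self):
        self.assets = {}

    def ingest(self, source, records):
        # Records from different systems are keyed on a shared asset id,
        # so partial views merge into one living asset record.
        for rec in records:
            asset = self.assets.setdefault(rec["asset_id"], {"sources": set()})
            asset.update({k: v for k, v in rec.items() if k != "asset_id"})
            asset["sources"].add(source)

    def needing_maintenance(self, threshold):
        # An O&M-style query over the merged view.
        return [aid for aid, a in self.assets.items()
                if a.get("condition", 1.0) < threshold]

dt = DigitalTwin()
dt.ingest("bms", [{"asset_id": "pump-1", "condition": 0.35}])
dt.ingest("bim", [{"asset_id": "pump-1", "location": "plant room B"}])
flagged = dt.needing_maintenance(0.5)
```

The merged record carries both the live condition reading and the static location, which is exactly the kind of cross-source join that makes a DT more useful for O&M than either source alone.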
Scalable Multi-cloud Platform to Support Industry and Scientific Applications
Cloud computing offers resources on demand and without large capital investments. As such, it is attractive to many industry and scientific application areas that require large computation and storage facilities. Although Infrastructure as a Service (IaaS) clouds provide elasticity and on-demand resource access, the challenges represented by multi-cloud capabilities and application-level scalability are still largely unsolved. The CloudSME Simulation Platform (CSSP), extended with the Microservices-based Cloud Application-level Dynamic Orchestrator (MiCADO), addresses these issues. CSSP is a generic multi-cloud access platform for the development and execution of large-scale industry and scientific simulations on heterogeneous cloud resources. MiCADO provides application-level scalability to optimise execution time and costs. This paper outlines how these technologies have been developed in various European research projects, and showcases several application case studies from manufacturing, engineering and the life sciences where these tools have been successfully utilised to execute large-scale simulations in an optimised way on heterogeneous cloud infrastructures.
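An application-level scaling rule of the kind MiCADO automates can be sketched as follows (the policy and its parameters are illustrative assumptions, not MiCADO's actual algorithm):

```python
import math

def scale_decision(queue_len, target_per_worker=10,
                   min_workers=1, max_workers=20):
    """Pick a worker count so each worker handles roughly
    target_per_worker queued jobs, clamped to a cost-bounded range.
    All names and defaults are illustrative assumptions."""
    if queue_len <= 0:
        return min_workers
    desired = math.ceil(queue_len / target_per_worker)
    # max_workers acts as the cost ceiling, min_workers as availability floor.
    return max(min_workers, min(max_workers, desired))
```

With these defaults, a backlog of 95 jobs yields 10 workers, an empty queue falls back to the minimum of 1, and a burst of 1,000 jobs is capped at the cost ceiling of 20; trading execution time against cost is exactly the optimisation the orchestrator performs continuously.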
A Process Modelling Framework Based on Point Interval Temporal Logic with an Application to Modelling Patient Flows
This thesis considers an application of a temporal theory to describe and model the patient journey in the hospital accident and emergency (A&E) department. The aim is to introduce a generic but dynamic method applicable to any setting, including healthcare. Constructing a consistent process model can be instrumental in streamlining healthcare processes. Current process modelling techniques used in healthcare, such as flowcharts, the unified modelling language activity diagram (UML AD), and business process modelling notation (BPMN), are intuitive but imprecise. They cannot fully capture the complexities of the types of activities and the full extent of temporal constraints to an extent where one could reason about the flows. Formal approaches such as Petri nets have also been reviewed to investigate their applicability to modelling processes in the healthcare domain.
Additionally, current modelling standards do not offer any formal mechanism for scheduling patient flows, so healthcare relies on the critical path method (CPM) and the program evaluation and review technique (PERT), which also have limitations, e.g. the finish-to-start barrier. It is imperative to specify the temporal constraints between the start and/or end of a process, e.g., the beginning of a process A precedes the start (or end) of a process B. However, these approaches fail to provide a mechanism for handling such temporal situations. A formal representation, if provided, can assist in effective knowledge representation and quality enhancement concerning a process. It would also help in uncovering the complexities of a system and assist in modelling it in a consistent way, which is not possible with the existing modelling techniques.
The above issues are addressed in this thesis by proposing a framework that provides a knowledge base for modelling patient flows accurately, based on point interval temporal logic (PITL), which treats points and intervals as primitives. These objects constitute the knowledge base for the formal description of a system. With the aid of the inference mechanism of the temporal theory presented here, exhaustive temporal constraints derived from the components of the proposed axiomatic system serve as a knowledge base.
The proposed methodological framework adopts a model-theoretic approach in which a theory is developed and considered as a model, while the corresponding instance is considered as its application. This approach assists in identifying the core components of the system and their precise operation, representing a real-life domain deemed suitable to the process modelling issues specified in this thesis. Thus, I have evaluated the modelling standards for their most-used terminologies and constructs to identify their key components. This also assists in the generalisation of the critical terms of the process modelling standards based on their ontology. The proposed set of generalised terms serves as an enumeration of the theory and subsumes the core modelling elements of the process modelling standards. The catalogue presents a knowledge base for the business and healthcare domains, and its components are formally defined (semantics). Furthermore, a resolution theorem proof is used to show the structural features of the theory (model), establishing that it is sound and complete.
After establishing that the theory is sound and complete, the next step is to provide an instantiation of the theory. This is achieved by mapping the core components of the theory to their corresponding instances. Additionally, a formal graphical tool termed the point graph (PG) is used to visualise the cases of the proposed axiomatic system. PG facilitates the modelling and scheduling of patient flows, and enables the analysis of existing models for possible inaccuracies and inconsistencies, supported by a reasoning mechanism based on PITL. Following that, a transformation is developed to map the core modelling components of the standards into the extended PG (PG*), based on the semantics presented by the axiomatic system.
A real-life case (the trauma patient pathway from King's College Hospital's accident and emergency (A&E) department) is considered to validate the framework. It is divided into three patient flows depicting the journey of a patient with significant trauma: arriving at A&E, undergoing a procedure, and subsequently being discharged. The department's staff relied upon UML AD and BPMN to model the patient flows. An evaluation of their representation is presented to show the shortfalls of these modelling standards in modelling patient flows. The last step is to model the patient flows using the developed approach, which is supported by enhanced reasoning and scheduling.
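The flavour of point-based temporal constraints is easy to convey in a few lines; a deliberately simplified sketch (not the thesis's PITL axiomatisation) with hypothetical A&E process timings:

```python
class Interval:
    """An interval defined by two time points, echoing PITL's treatment
    of points and intervals as primitives (a simplification)."""
    def __init__(self, start, end):
        if not start < end:
            raise ValueError("an interval's start point must precede its end")
        self.start, self.end = start, end

def starts_before(a, b):
    """Point-level constraint: the start of process a precedes the start of b."""
    return a.start < b.start

def finishes_before(a, b):
    """Finish-to-start constraint, the only kind CPM/PERT natively express."""
    return a.end <= b.start

# Hypothetical timings, in minutes after a trauma patient's arrival.
triage = Interval(0, 15)
procedure = Interval(20, 60)
```

Because the constraints are stated over individual start and end points rather than whole activities, relations like "A starts before B starts, though A may still be running" become expressible, which is precisely what the finish-to-start barrier rules out.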
Methodology for profiling literature in healthcare simulation
The publications that relate to the application of simulation to healthcare have steadily increased over the years. These publications are scattered amongst various journals that belong to several subject categories, including Operational Research, Health Economics and Pharmacokinetics. The simulation techniques applied to the study of healthcare problems are also varied. The aim of this study is to present a methodology for profiling the literature in healthcare simulation. In our methodology, we have considered papers on healthcare published between 1970 and 2007 in journals with impact factors, belonging to various subject categories and reporting on the application of four simulation techniques, namely Monte Carlo Simulation, Discrete-Event Simulation, System Dynamics and Agent-Based Simulation. The methodology has the following objectives: (a) to categorise the papers under the different simulation techniques and identify the healthcare problems that each technique is employed to investigate; (b) to profile, within our dataset, variables such as authors, article citations, etc.; (c) to identify turning-point (strategically important) papers and authors through co-citation analysis of the references cited by the papers in our dataset. The focus of the paper is on the literature profiling methodology, and not on the results derived through its application. The authors hope that the methodology presented here will be used to conduct similar work not only in healthcare but also in other research domains.
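Objective (c), co-citation analysis, reduces to counting how often pairs of references appear together in the same reference list; a minimal sketch over an invented toy dataset (the reference keys are hypothetical, not from the actual study):

```python
from collections import Counter
from itertools import combinations

def cocitation_counts(papers):
    """Count how often each pair of references is cited together;
    highly co-cited pairs flag candidate turning-point papers.
    `papers` maps a paper id to its list of cited references."""
    counts = Counter()
    for refs in papers.values():
        # Sort so each unordered pair gets one canonical key.
        for a, b in combinations(sorted(set(refs)), 2):
            counts[(a, b)] += 1
    return counts

dataset = {
    "p1": ["law1991", "pidd2004", "banks2000"],
    "p2": ["law1991", "pidd2004"],
    "p3": ["law1991", "banks2000"],
}
pairs = cocitation_counts(dataset)
```

In a real dataset the pair counts would then be thresholded or clustered; pairs that co-occur far more often than chance mark the strategically important papers the methodology seeks.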