
    Multimodal Shared-Control Interaction for Mobile Robots in AAL Environments

    This dissertation investigates the design, development and implementation of cognitively adequate, safe, robust, spatially-related, multimodal interaction between human operators and mobile robots in Ambient Assisted Living (AAL) environments, from both theoretical and practical perspectives. By focusing on different aspects of interaction, the essential contribution of this dissertation is divided into three main research packages: Formal Interaction, Spatial Interaction, and Multimodal Interaction in AAL. In the principal package, Formal Interaction, research effort is dedicated to developing a formal-language-based interaction modelling and management solution and a unified dialogue modelling approach. This package aims to enable robust, flexible, and context-sensitive, yet formally controllable and tractable interaction. Such interaction can support the interaction management of any complex interactive system, including the ones covered in the other two research packages. In the second research package, Spatial Interaction, a general multi-level conceptual model based on qualitative spatial knowledge is developed and proposed. Its goal is to support spatially-related interaction in human-robot collaborative navigation. Using a model-based computational framework, the proposed conceptual model has been implemented and integrated into a practical interactive system, which has been evaluated in empirical studies. It has been tested in particular against a set of high-level, model-based conceptual strategies for resolving frequent spatially-related communication problems in human-robot interaction. Finally, in Multimodal Interaction in AAL, attention turns to the design, development and implementation of multimodal interaction for elderly persons. In this elderly-friendly scenario, ageing-related characteristics are carefully considered to achieve effective and efficient interaction. Moreover, a standard-model-based empirical framework for evaluating multimodal interaction is provided. This framework was applied to evaluate a carefully developed and systematically improved elderly-friendly multimodal interactive system through a series of empirical studies with groups of elderly persons.
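    As a loose illustration of what "formally controllable and tractable" interaction can mean in practice, the sketch below models a toy human-robot dialogue as a finite-state transducer in Python. This is only a minimal sketch of the general idea, not the dissertation's actual formalism; all states, inputs, and responses are invented.

```python
# Minimal sketch: dialogue management as a finite-state transducer.
# Because the whole model is an explicit, finite table, every reachable
# dialogue configuration can be enumerated and checked (tractability).
DIALOGUE_MODEL = {
    # state      -> {user input: (next state, robot response)}
    "idle":       {"go_to": ("confirming", "Should I go to the kitchen?")},
    "confirming": {"yes":   ("navigating", "On my way."),
                   "no":    ("idle",       "Okay, cancelled.")},
    "navigating": {"stop":  ("idle",       "Stopping.")},
}

def step(state: str, user_input: str) -> tuple[str, str]:
    """Advance the dialogue; unrecognised inputs keep the current state."""
    transitions = DIALOGUE_MODEL.get(state, {})
    if user_input in transitions:
        return transitions[user_input]
    return state, "Sorry, I did not understand."

state = "idle"
for utterance in ["go_to", "yes", "stop"]:
    state, response = step(state, utterance)
    print(f"user: {utterance!r} -> robot: {response!r} (state={state})")
```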

    Enhanced Living Environments

    This open access book was prepared as the final publication of the COST Action IC1303 “Algorithms, Architectures and Platforms for Enhanced Living Environments (AAPELE)”. The concept of Enhanced Living Environments (ELE) refers to the area of Ambient Assisted Living (AAL) that is most closely related to Information and Communication Technologies (ICT). Effective ELE solutions require appropriate ICT algorithms, architectures, platforms, and systems, taking into account the advance of science and technology in this area and the development of new and innovative solutions that can improve the quality of life for people in their homes and reduce the financial burden on the budgets of healthcare providers. The aim of this book is to serve as a state-of-the-art reference, discussing progress made as well as prompting future directions on theories, practices, standards, and strategies related to the ELE area. The book contains 12 chapters and can serve as a valuable reference for undergraduate students, post-graduate students, educators, faculty members, researchers, engineers, medical doctors, healthcare organizations, insurance companies, and research strategists working in this area.

    A knowledge-based approach towards human activity recognition in smart environments

    It has long been known that the population of older persons is on the rise. A recent report estimates that, globally, the share of the population aged 65 years or over is expected to increase from 9.3 percent in 2020 to around 16.0 percent in 2050 [1]. This has been one of the main sources of motivation for active research in the domain of human activity recognition (HAR) in smart homes. The ability to perform activities of daily living (ADL) without assistance from other people can be considered a reference for estimating the level of independent living of an older person. Conventionally, this has been assessed by healthcare domain experts via a qualitative evaluation of the ADL. Since this evaluation is qualitative, it can vary with the person being monitored and the caregiver's experience. A significant amount of research work is implicitly or explicitly aimed at augmenting the healthcare domain expert's qualitative evaluation with quantitative data or knowledge obtained from HAR. From a medical perspective, there is a lack of evidence about the technology readiness level of smart-home architectures supporting older persons by recognizing ADL [2]. We hypothesize that this may be due to a lack of effective collaboration between smart-home researchers/developers and healthcare domain experts, especially where HAR is concerned. We foresee an increase in HAR systems being developed in close collaboration with caregivers and geriatricians to support their qualitative evaluation of ADL with explainable quantitative outcomes of the HAR systems. This has been a motivation for the work in this thesis. The recognition of human activities, in particular ADL, need not be limited to supporting the health and well-being of older people; it can be relevant to home users in general. For instance, HAR could enable digital assistants or companion robots to provide contextually relevant and proactive support to home users, whether young or old. This has also been a motivation for the work in this thesis. Given our motivations, namely (i) facilitating iterative development and collaboration between HAR system researchers/developers and healthcare domain experts in ADL, and (ii) robust HAR that can support digital assistants or companion robots, there is a need for a HAR framework that is modular and flexible at its core, so as to facilitate an iterative development process [3], an integral part of collaborative work that involves develop-test-improve phases. At the same time, the framework should be intelligible, for the sake of enriched collaboration with healthcare domain experts. Furthermore, it should be scalable, online, and accurate, enabling robust HAR for many smart-home applications. The goal of this thesis is to design and evaluate such a framework. This thesis contributes to the domain of HAR in smart homes. In particular, the contribution can be divided into three parts. The first contribution is Arianna+, a framework for developing networks of ontologies (for knowledge representation and reasoning) that enables smart homes to perform human activity recognition online. The second contribution is OWLOOP, an API that supports the development of HAR system architectures based on Arianna+; it enables the use of the Web Ontology Language (OWL) by means of Object-Oriented Programming (OOP). The third contribution is the evaluation and exploitation of Arianna+ using the OWLOOP API. This exploitation has resulted in four HAR system implementations, whose evaluations and results emphasize the novelty of Arianna+.
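    As a hedged illustration of the OOP-over-OWL idea (not OWLOOP's actual interface), the following Python sketch uses the owlready2 library to manipulate a toy activity ontology through ordinary objects. The IRI, classes, and property are invented for this example.

```python
# Sketch: treating OWL entities as ordinary objects, in the spirit of
# coupling OWL with OOP. Uses owlready2 as a stand-in for OWLOOP.
from owlready2 import get_ontology, Thing, ObjectProperty

onto = get_ontology("http://example.org/har-demo.owl")  # illustrative IRI

with onto:
    class SensorEvent(Thing): pass        # an observation from the home
    class Activity(Thing): pass           # an ADL to be recognised
    class hasEvidence(ObjectProperty):    # links an activity to its evidence
        domain = [Activity]
        range  = [SensorEvent]

# Individuals are created and linked like plain Python objects.
stove_on = SensorEvent("stove_on")
cooking  = Activity("cooking")
cooking.hasEvidence.append(stove_on)

print(cooking.hasEvidence)  # [har-demo.stove_on]
# A DL reasoner could now classify `cooking` against defined activity
# classes; orchestrating such reasoning online, across a network of
# ontologies, is the role Arianna+ plays.
```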

    Proceedings of the 2012 Workshop on Ambient Intelligence Infrastructures (WAmIi)

    This is a technical report including the papers presented at the Workshop on Ambient Intelligence Infrastructures (WAmIi), which took place in conjunction with the International Joint Conference on Ambient Intelligence (AmI) in Pisa, Italy, on November 13, 2012. The motivation for organizing the workshop was the wish to learn from past experience with Ambient Intelligence systems, and in particular from the lessons learned about their system architecture. A significant number of European projects and other research efforts have been carried out, often with the goal of developing AmI technology to showcase AmI scenarios. We believe that a sound system architecture is essential for AmI to gain further acceptance.

    Central monitoring system for ambient assisted living

    Smart homes for aged care enable the elderly to stay in their own homes longer. Information about the people living in such homes is gathered by means of various types of ambient and wearable sensors, and is then processed to determine activities of daily living (ADL) and provide vital information to carers. Many examples of smart homes for aged care can be found in the literature; however, little or no evidence exists regarding the interoperability of the various sensors and devices and their associated functions. One key element with respect to interoperability is the central monitoring system of a smart home. This thesis analyses and presents the key functions and requirements of a central monitoring system. Its outcomes may benefit developers of smart homes for aged care.

    Smartphone-based human activity recognition

    Cotutela (jointly supervised doctorate) between Universitat Politècnica de Catalunya and Università degli Studi di Genova. Human Activity Recognition (HAR) is a multidisciplinary research field that aims to gather data on people's behavior and their interaction with the environment in order to deliver valuable context-aware information. It has contributed to the development of human-centered areas of study such as Ambient Intelligence and Ambient Assisted Living, which concentrate on improving people's quality of life. The first stage of HAR requires making observations with ambient or wearable sensor technologies; in the latter case, however, the search for pervasive, unobtrusive, low-powered, and low-cost devices has not yet been fully addressed. In this thesis, we explore the use of smartphones as an alternative approach for identifying physical activities. These self-contained devices, widely available on the market, come with embedded sensors, powerful computing capabilities, and wireless communication technologies that make them highly suitable for this application. This work presents a series of contributions regarding the development of HAR systems with smartphones. In the first place, we propose a fully operational system that recognizes six physical activities in real time while also taking into account the effects of the postural transitions that may occur between them. To achieve this, we cover research topics ranging from signal processing and feature selection of inertial data to Machine Learning approaches for classification. We employ two sensors (the accelerometer and the gyroscope) for collecting inertial data. Their raw signals are the input of the system and are conditioned through filtering in order to reduce noise and allow the extraction of informative activity features. We also emphasize the study of Support Vector Machines (SVMs), one of the state-of-the-art Machine Learning techniques for classification, and reformulate several of the standard multiclass linear and non-linear methods to find the best trade-off between recognition performance, computational cost, and energy requirements, which are essential aspects in battery-operated devices such as smartphones. In particular, we propose two multiclass SVMs for activity classification: a linear algorithm which allows control over dimensionality reduction and system accuracy, and a non-linear, hardware-friendly algorithm that uses only fixed-point arithmetic in the prediction phase and enables a reduction in model complexity while maintaining system performance. The efficiency of the proposed system is verified through extensive experimentation on a HAR dataset which we have generated and made publicly available. It is composed of inertial data collected from a group of 30 participants who performed a set of common daily activities while carrying a smartphone as a wearable device. The results achieved in this research show that it is possible to perform HAR in real time with a precision near 97% using smartphones. The proposed methodology can therefore be employed in higher-level applications that require HAR, such as ambulatory monitoring of the disabled and the elderly over periods of more than five days without recharging the battery. Moreover, the proposed algorithms can be adapted to other commercial wearable devices recently introduced to the market (e.g. smartwatches, phablets, and glasses).
    This will open up new opportunities for developing practical and innovative HAR applications.
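    To make the recognition pipeline concrete, the sketch below trains a multiclass linear SVM on synthetic stand-ins for feature vectors extracted from accelerometer/gyroscope windows. scikit-learn's LinearSVC stands in for the thesis's custom linear and fixed-point SVM variants, and the random data stands in for the published 30-participant dataset; only the shapes (561 features, 6 activity classes) echo that dataset.

```python
# Sketch: multiclass linear SVM over windowed inertial features.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

N_WINDOWS, N_FEATURES, N_ACTIVITIES = 600, 561, 6  # shapes as in the UCI HAR dataset

rng = np.random.default_rng(0)
X = rng.normal(size=(N_WINDOWS, N_FEATURES))       # stand-in feature windows
y = rng.integers(0, N_ACTIVITIES, size=N_WINDOWS)  # stand-in activity labels
X += y[:, None] * 0.05                             # inject weak class structure

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
scaler = StandardScaler().fit(X_tr)

# One-vs-rest multiclass linear SVM; cheap to evaluate on a phone.
clf = LinearSVC(C=1.0, max_iter=5000).fit(scaler.transform(X_tr), y_tr)
print(f"held-out accuracy: {clf.score(scaler.transform(X_te), y_te):.2f}")
```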

    Inferring Complex Activities for Context-aware Systems within Smart Environments

    The rising ageing population worldwide and the prevalence of age-related conditions such as physical frailty, mental impairments and chronic diseases have significantly impacted quality of life and caused a shortage of health and care services. Over-stretched healthcare providers are leading to a paradigm shift in public healthcare provisioning. Ambient Assisted Living (AAL) using Smart Home (SH) technologies has therefore been rigorously investigated to help address these problems. Human Activity Recognition (HAR) is a critical component of AAL systems, enabling applications such as just-in-time assistance, behaviour analysis, anomaly detection and emergency notifications. This thesis investigates the challenges of accurately recognising Activities of Daily Living (ADLs) performed by single or multiple inhabitants within smart environments. Specifically, it explores five complementary research challenges in HAR. The first study develops a semantic-enabled data segmentation approach incorporating user preferences. The second study takes the segmented sensor data and recognises human ADLs at two granularities of action: coarse-grained and fine-grained. At the coarse-grained level, semantic relationships between sensors, objects and ADLs are deduced, whereas at the fine-grained level, object usage meeting a satisfactory threshold, with evidence fused from multimodal sensor data, is leveraged to verify the intended actions. To handle the imprecise and vague interpretations of multimodal sensors and the challenges of data fusion, fuzzy set theory and the fuzzy Web Ontology Language (fuzzy-OWL) are employed. The third study focuses on incorporating the uncertainties that arise in HAR from factors such as technological failure, object malfunction, and human error. Uncertainty theories and approaches from existing studies are analysed and, based on the findings, a probabilistic-ontology (PR-OWL) based HAR approach is proposed. The fourth study extends the first three to distinguish activities conducted by more than one inhabitant in a shared smart environment, using discriminative sensor-based techniques and time-series pattern analysis. The final study investigates a suitable system architecture for a real-time smart environment tailored to AAL and proposes a microservices architecture combining off-the-shelf and bespoke sensing methods. The initial semantic-enabled data segmentation study achieved 100% and 97.8% accuracy in segmenting sensor events under single- and mixed-activity scenarios, respectively; however, the average time taken to classify each sensor event was high, at 3971 ms and 62183 ms for the two scenarios. The second study, detecting fine-grained user actions, was evaluated with 30 and 153 fuzzy rules for two fine-grained movements on a dataset pre-collected from the real-time smart environment. Its results indicate good average accuracies of 83.33% and 100%, but with high average durations of 24648 ms and 105318 ms, posing further challenges for the scalability of fusion-rule creation. The third study was evaluated by combining the PR-OWL ontology with ADL ontologies and the Semantic Sensor Network (SSN) ontology to define four types of uncertainty present in a kitchen-based activity. The fourth study illustrated, through a case study, the extension of single-user activity recognition to multi-user activity recognition by combining discriminative sensors (RFID tags and fingerprint sensors) to identify users and associate their actions with the aid of time-series analysis. The last study responds to the computational and performance requirements of the four studies by analysing and proposing a microservices-based system architecture for AAL. Future research towards adopting fog/edge computing paradigms from cloud computing is discussed, aiming at higher availability, reduced network traffic and energy use, lower cost, and a decentralised system. As a result of the five studies, this thesis develops a knowledge-driven framework to estimate and recognise multi-user activities at the level of fine-grained user actions. The framework integrates three complementary ontologies to conceptualise factual, fuzzy and uncertain knowledge about the environment and ADLs, together with time-series analysis and a discriminative sensing environment. Moreover, a distributed software architecture, multimodal sensor-based hardware prototypes, and supporting utility tools such as a simulator and a synthetic ADL data generator were developed to support the evaluation of the proposed approaches. The distributed system is platform-independent and is currently accessed through an Android mobile application and web-browser-based client interfaces for retrieving information such as live sensor events and HAR results.
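    As a rough illustration of the fuzzy evidence-fusion step in the second study, the sketch below grades two sensor readings with trapezoidal membership functions and fuses them with a min t-norm before checking a satisfactory threshold. The membership shapes, sensor names, and threshold are all invented; the thesis expresses such rules in fuzzy-OWL rather than plain Python.

```python
# Sketch: fuzzy verification of a fine-grained action from two sensor modalities.
def trapezoid(x, a, b, c, d):
    """Trapezoidal membership: rises on [a,b], holds 1 on [b,c], falls on [c,d]."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

# Degree to which each (hypothetical) sensor supports the action "pouring water".
mu_cup_contact = trapezoid(0.70, a=0.2, b=0.5, c=0.9, d=1.0)  # contact sensor
mu_tap_flow    = trapezoid(0.85, a=0.1, b=0.4, c=0.8, d=1.0)  # flow sensor

# Conjunctive fusion (min t-norm), then verification against a threshold.
confidence = min(mu_cup_contact, mu_tap_flow)
SATISFACTORY = 0.6
print(f"pouring water: confidence={confidence:.2f}, "
      f"verified={confidence >= SATISFACTORY}")
```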

    Text–to–Video: Image Semantics and NLP

    When aiming at automatically translating an arbitrary text into a visual story, the main challenge consists in finding a semantically close visual representation, whereby the displayed meaning should remain the same as in the given text. Moreover, the appearance of an image itself largely influences how its meaningful information is conveyed to an observer. This thesis demonstrates that investigating both image semantics and the semantic relatedness between visual and textual sources enables us to tackle the challenging semantic gap and to find a semantically close translation from natural language to a corresponding visual representation. Within the last years, social networking has become of high interest, leading to an enormous and still increasing amount of data available online. Photo-sharing sites like Flickr allow users to associate textual information with their uploaded imagery. This thesis exploits this huge knowledge source of user-generated data, which provides initial links between images, words, and other meaningful data. In order to approach visual semantics, this work presents various methods to analyze the visual structure as well as the appearance of images in terms of meaningful similarities, aesthetic appeal, and emotional effect on an observer. In detail, our GPU-based approach efficiently finds visual similarities between images in large datasets across visual domains and identifies various meanings of ambiguous words by exploring similarity in online search results. Further, we investigate the highly subjective aesthetic appeal of images and make use of deep learning to learn aesthetic rankings directly from a broad diversity of user reactions in online social behavior. To gain even deeper insights into the influence of visual appearance on an observer, we explore how simple image processing is capable of actually changing emotional perception, and derive a simple but effective image filter. To identify meaningful connections between written text and visual representations, we employ methods from Natural Language Processing (NLP). Extensive textual processing allows us to create semantically relevant illustrations for simple text elements as well as complete storylines. More precisely, we present an approach that resolves dependencies in textual descriptions in order to arrange 3D models correctly. Further, we develop a method that finds semantically relevant illustrations for texts of different types based on a novel hierarchical querying algorithm. Finally, we present an optimization-based framework that is capable of generating picture stories in different styles that are not only semantically relevant but also visually coherent.
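    As a minimal illustration of the dependency-resolution step (extracting spatial relations from a description so that 3D models can be arranged), the sketch below uses spaCy to pull out prepositional (head, relation, object) triples. spaCy and the example sentence are assumptions for this sketch; the abstract does not name the NLP stack actually used.

```python
# Sketch: extracting spatial-relation triples from a textual scene description.
import spacy

nlp = spacy.load("en_core_web_sm")  # requires: python -m spacy download en_core_web_sm
doc = nlp("The lamp stands on the table next to the sofa.")

# For each preposition, emit (head, preposition, object) as a placement hint.
for token in doc:
    if token.dep_ == "prep":
        for obj in (c for c in token.children if c.dep_ == "pobj"):
            print(f"({token.head.text}) --{token.text}--> ({obj.text})")
# Typical output includes (stands) --on--> (table); the exact triples
# depend on the parse the model produces.
```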

    A Hybrid Context-aware Middleware for Relevant Information Delivery in Multi-Role and Multi-User Monitoring Systems: An Application to the Building Management Domain

    Recent advances in information and communications technology (ICT) have greatly extended the capabilities and functionalities of control and monitoring systems, including Building Management Systems (BMS). Specifically, it is now possible to integrate diverse sets of devices and information systems providing heterogeneous data. This data, in turn, is now available at the higher levels of the system architecture, providing more information on the matter at hand and, in principle, enabling better-informed decisions. Furthermore, the diversity and availability of information have made control and monitoring systems attractive to new user groups, who now have the opportunity to find information that was not available to them before. Thus, modern control and monitoring systems are well-equipped, multi-functional systems, which incorporate a great number and variety of data sources and are used by multiple users, each with their own tasks and information needs.

    In theory, the diversity and availability of new data should lead to better-informed users and better decisions. In practice, it overwhelms the users' capacity to perceive all available information and leads to situations where important data is hidden and lost, complicating the understanding of the ongoing status. There is thus a need for new solutions that reduce the unnecessary information burden on the users of the system, while keeping them well informed with respect to their personal needs and responsibilities.

    This dissertation proposes a middleware for relevant information delivery in multi-role and multi-user BMS, capable of analysing ongoing situations in the environment and delivering information personalized to specific user needs. The middleware implementation is based on a novel hybrid approach, which involves semantic modelling of the contextual information and fusion of this information with runtime device data by means of Complex Event Processing (CEP). The context model is actively used in the configuration stages of the middleware, enabling flexible redirection of information flows, simplified (re)configuration of the solution, and consideration of additional information at runtime. The CEP component utilizes contextual information and provides temporal reasoning in combination with runtime analysis capabilities, processing ongoing data from devices and delivering personalized information flows. In addition, the work proposes principles for classifying and combining ongoing system notifications, which further specialize information flows according to user needs and environment status.

    The middleware and the corresponding principles (e.g. knowledge modelling, classification and combination of ongoing notifications) have been designed with the building management (BM) domain in mind. A set of experiments on real data from a rehabilitation facility has been carried out, demonstrating the applicability of the approach with respect to the delivered information and performance considerations. It is expected that, with minor modifications, the approach could be adopted for control and monitoring systems in the discrete manufacturing domain.
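    A minimal sketch of the hybrid idea, with all names invented: a static context model records who is responsible for a room, while a CEP-style sliding-window rule over the runtime event stream decides when, and whom, to notify.

```python
# Sketch: fusing a semantic context model with runtime events via a
# simple CEP-style temporal rule (sliding window over an event stream).
from collections import deque

CONTEXT = {"room_12": {"role": "nurse", "user": "alice"}}  # simplified context model

WINDOW_S = 60.0
recent: deque = deque()  # sliding window of (timestamp, room, event)

def on_event(ts: float, room: str, event: str) -> None:
    """Rule: two 'door_open' events in the same room within 60 s notify
    the user that the context model holds responsible for that room."""
    recent.append((ts, room, event))
    while recent and ts - recent[0][0] > WINDOW_S:
        recent.popleft()  # expire events outside the window
    opens = [e for e in recent if e[1] == room and e[2] == "door_open"]
    if len(opens) >= 2:
        target = CONTEXT[room]["user"]
        print(f"[{ts:5.1f}s] notify {target}: repeated door activity in {room}")

for ts, room, ev in [(0.0, "room_12", "door_open"),
                     (30.0, "room_12", "door_open"),
                     (200.0, "room_12", "door_open")]:
    on_event(ts, room, ev)
```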