
    Semantic-guided predictive modeling and relational learning within industrial knowledge graphs

    The ubiquitous availability of data in today's manufacturing environments, mainly driven by the widespread use of software and built-in sensing capabilities in automation systems, enables companies to embrace more advanced predictive modeling and analysis in order to optimize processes and equipment usage. While the potential insight gained from such analysis is high, it often remains untapped, since integrating and analyzing data silos from different production domains requires high manual effort and is therefore not economical. Addressing these challenges, digital representations of production equipment, so-called digital twins, have emerged, leading the way to semantic interoperability across systems in different domains. From a data modeling point of view, digital twins can be seen as industrial knowledge graphs, which serve as the semantic backbone of manufacturing software systems and data analytics. Because the prevalent manufacturing software landscape has grown historically and is scattered across numerous proprietary information models, data sources are highly heterogeneous. There is therefore an increasing need for semi-automatic support in data modeling, enabling end-user engineers to model their domain and maintain a unified semantic knowledge graph across the company. Once data modeling and integration are done, further challenges arise, since there has been little research on how knowledge graphs can contribute to simplifying and abstracting statistical analysis and predictive modeling, especially in manufacturing. In this thesis, new approaches for modeling and maintaining industrial knowledge graphs, with a focus on the application of statistical models, are presented. First, concerning data modeling, we discuss requirements from several existing standard information models and analytic use cases in the manufacturing and automation-system domains, and derive a fragment of the OWL 2 language that is expressive enough to cover the required semantics for a broad range of use cases. The prototypical implementation enables domain end-users, i.e., engineers, to extend the base ontology model with intuitive semantics. Furthermore, it supports efficient reasoning and constraint checking via translation to rule-based representations. Based on these models, we propose an architecture for the end-user-facilitated application of statistical models using ontological concepts and ontology-based data access paradigms. In addition, we present an approach for domain-knowledge-driven preparation of predictive models in terms of feature selection, and show how schema-level reasoning in the OWL 2 language can be employed for this task within knowledge graphs of industrial automation systems. A production cycle-time prediction model in an example application scenario serves as a proof of concept and demonstrates that axiomatized domain knowledge about features can yield performance competitive with purely data-driven feature selection. In the case of high-dimensional data with small sample sizes, we show that graph kernels over domain ontologies can provide additional information on the degree of variable dependence. Furthermore, a special application of feature selection to graph-structured data is presented, and we develop a method that incorporates domain constraints, derived from meta-paths in knowledge graphs, into a branch-and-bound pattern enumeration algorithm.
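    To make the knowledge-driven feature-selection idea concrete, here is a minimal sketch. It is not the thesis's implementation: the ontology terms (ex:partOf, ex:hasSensor, ex:Machine1), file names, and column names are invented for illustration. The idea is to keep only those features whose sensors are attached, directly or transitively, to components of the machine whose cycle time is being predicted.

```python
# Minimal sketch: ontology-guided feature selection (illustrative only;
# all ontology terms and file names below are hypothetical).
import pandas as pd
from rdflib import Graph

g = Graph().parse("plant_ontology.ttl")  # hypothetical knowledge graph export

# Sensors belonging (transitively) to components of the target machine.
query = """
PREFIX ex: <http://example.org/plant#>
SELECT ?sensor WHERE {
    ?component ex:partOf+ ex:Machine1 .
    ?component ex:hasSensor ?sensor .
}
"""
relevant = {str(row.sensor).rsplit("#", 1)[-1] for row in g.query(query)}

df = pd.read_csv("process_data.csv")              # hypothetical training data
X = df[[c for c in df.columns if c in relevant]]  # domain-informed feature set
y = df["cycle_time"]                              # prediction target
```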
Lastly, we discuss the maintenance of facts in large-scale industrial knowledge graphs, focusing on latent variable models for the automated population and completion of missing facts. State-of-the-art approaches cannot deal with time-series data in the form of events, which occur naturally in industrial applications. We therefore present an extension of knowledge graph embedding learning that incorporates data in the form of event logs. Finally, we design several use-case scenarios of missing information and evaluate our embedding approach on data from a real-world factory environment. We draw the conclusion that industrial knowledge graphs are a powerful tool that end-users in the manufacturing domain can use for data modeling and model validation. They are especially suitable for facilitating the application of statistical models in conjunction with background domain knowledge, since they provide information about features upfront. Furthermore, relational learning approaches show great potential to semi-automatically infer missing facts and to provide recommendations that help production operators keep stored facts in sync with the real world.
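    For readers unfamiliar with latent variable models over knowledge graphs, the following is a minimal TransE-style scoring sketch. It shows only the generic embedding mechanism (plausibility of a triple as a vector-space distance), not the thesis's event-log extension; all sizes and indices are arbitrary.

```python
# Minimal TransE-style knowledge graph embedding sketch (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
dim, n_entities, n_relations = 32, 100, 8
E = rng.normal(size=(n_entities, dim))   # entity embeddings
R = rng.normal(size=(n_relations, dim))  # relation embeddings

def score(h: int, r: int, t: int) -> float:
    """Lower is more plausible: ||h + r - t|| (the TransE scoring function)."""
    return float(np.linalg.norm(E[h] + R[r] - E[t]))

# Rank candidate tails for a completion query (entity 7, relation 2, ?),
# e.g. "which component does machine 7 have?" in a populated graph.
candidates = np.argsort([score(7, 2, t) for t in range(n_entities)])[:5]
```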

    Semantically defined Analytics for Industrial Equipment Diagnostics

    In this age of digitalization, industries everywhere accumulate massive amounts of data, to the point that data has become the lifeblood of the global economy. This data may come from heterogeneous systems, equipment, components, sensors, and applications in many varieties (diversity of sources), velocities (high rate of change) and volumes (sheer data size). Despite significant advances in the ability to collect, store, manage and filter data, the real value lies in the analytics. Raw data is meaningless unless it is properly processed into actionable (business) insights. Those who know how to harness data effectively have a decisive competitive advantage: they raise performance by making faster and smarter decisions, improve short- and long-term strategic planning, offer more user-centric products and services, and foster innovation. Two distinct paradigms can be discerned in the practice of analytics: semantic-driven (deductive) and data-driven (inductive). The first emphasizes logic as a way of representing domain knowledge encoded in rules or ontologies, which are often carefully curated and maintained. However, these models are often highly complex and require intensive knowledge-processing capabilities. Data-driven analytics employ machine learning (ML) to learn a model directly from the data with minimal human intervention. However, these models are tuned to the training data and context, which makes them difficult to adapt. Industries that want to create value from data today must master these paradigms in combination. There is thus a great need in data analytics to seamlessly combine semantic-driven and data-driven processing techniques in an efficient and scalable architecture that allows actionable insights to be extracted from an extreme variety of data. In this thesis, we address these needs by providing:
    • A unified representation of domain-specific and analytical semantics, in the form of ontology models called the TechOnto Ontology Stack. It is a highly expressive, platform-independent formalism to capture the conceptual semantics of industrial systems, such as technical system hierarchies and component partonomies, as well as their analytical functional semantics.
    • A new ontology language, Semantically defined Analytical Language (SAL), on top of the ontology model, which extends DatalogMTL (a Horn fragment of Metric Temporal Logic) with analytical functions as first-class citizens.
    • A method to generate semantic workflows using our SAL language. It helps in authoring, reusing and maintaining complex analytical tasks and workflows in an abstract fashion.
    • A multi-layer architecture that fuses knowledge-driven and data-driven analytics into a federated and distributed solution.
    To our knowledge, this thesis is one of the first works to introduce and investigate the use of semantically defined analytics in an ontology-based data access setting for industrial analytical applications. We focus our work and evaluation on industrial data because of (i) the adoption of semantic technology by industry in general, and (ii) the common need, in the literature and in practice, to let domain expertise drive data analytics over semantically interoperable sources while still harnessing the power of analytics to enable real-time data insights.
Given the evaluation results of three use-case studies, our approach surpasses state-of-the-art approaches in most application scenarios.
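    The abstract does not show SAL's actual syntax, so the following is a purely illustrative sketch of the kind of rule SAL-like languages express: a DatalogMTL-style rule with an analytical function (a windowed average) emulated over a timestamped event log. The rule syntax in the comment, the predicate names, and the data are all invented for illustration.

```python
# Illustrative only: emulates a hypothetical SAL/DatalogMTL-style rule such as
#   Overheating(eq) <- avg[-5min,0](Temperature(eq, v)) > 90
# over a timestamped event log. Not the thesis's actual syntax or engine.
from datetime import datetime, timedelta

events = [  # (equipment, timestamp, temperature) -- hypothetical sensor log
    ("pump1", datetime(2024, 1, 1, 12, 0), 88.0),
    ("pump1", datetime(2024, 1, 1, 12, 2), 93.0),
    ("pump1", datetime(2024, 1, 1, 12, 4), 95.0),
]

def overheating(eq, now, window=timedelta(minutes=5), threshold=90.0):
    """Derive the fact if the avg temperature in [now-window, now] exceeds threshold."""
    vals = [v for e, t, v in events if e == eq and now - window <= t <= now]
    return bool(vals) and sum(vals) / len(vals) > threshold

print(overheating("pump1", datetime(2024, 1, 1, 12, 4)))  # True: mean = 92.0
```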

    Knowledge Based Systems: A Critical Survey of Major Concepts, Issues, and Techniques

    This Working Paper Series entry presents a detailed survey of knowledge-based systems. After being in a relatively dormant state for many years, Artificial Intelligence (AI) - the branch of computer science that attempts to have machines emulate intelligent behavior - has only recently begun to accomplish practical results. Most of these results can be attributed to the design and use of Knowledge-Based Systems, KBSs (or expert systems) - problem-solving computer programs that can reach a level of performance comparable to that of a human expert in some specialized problem domain. These systems can act as consultants for tasks such as medical diagnosis, military threat analysis, and project risk assessment. They possess knowledge that enables them to make intelligent decisions; they are, however, not meant to replace the human specialists in any particular domain. A critical survey of recent work in interactive KBSs is reported. A case study of a KBS (MYCIN), a list of existing KBSs, and an introduction to the Japanese Fifth Generation Computer Project are provided as appendices. Finally, an extensive set of KBS-related references is provided at the end of the report.
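    To illustrate how a KBS such as MYCIN reasons under uncertainty, here is a minimal sketch of certainty-factor combination, restricted to positive certainty factors (MYCIN's full scheme also handles negative evidence, which is omitted here).

```python
# Minimal sketch of MYCIN-style certainty-factor combination (illustrative;
# covers the positive-evidence case only).

def combine_cf(cf1: float, cf2: float) -> float:
    """Combine two positive certainty factors supporting the same hypothesis."""
    return cf1 + cf2 * (1.0 - cf1)

# Two rules each lend partial support to the same diagnosis:
cf = combine_cf(0.6, 0.4)   # -> 0.76
cf = combine_cf(cf, 0.5)    # a third rule raises confidence to 0.88
print(round(cf, 2))
```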

    Organisational sustainability readiness: a model and assessment tool for manufacturing companies

    Manufacturing plays a major role in the economic and social development of society, yet this often comes at a high environmental cost. Despite great advances in our understanding of sustainability issues and the solutions developed to tackle this challenge, current production and consumption models are still largely unsustainable. Strong industrial action is required to move towards safer and cleaner practices that respect planetary boundaries. This paper puts forward a novel approach for top and middle management in manufacturing companies to build capabilities for sustainable manufacturing by assessing their organisational sustainability readiness. The proposed model and tool for organisational sustainability readiness were developed from themes emerging from empirical data collected via interviews and focus groups in six companies. The resulting themes were consolidated and validated against the relevant literature to create four levels of readiness, displaying a crescendo of shop-floor operations management practices that positively affect sustainability performance. Finally, an industrial application was used to further validate the tool and demonstrate how it can help companies develop a roadmap for a more sustainable manufacturing industry.

    Vision Aided Inertial Navigation System Augmented with a Coded Aperture

    Navigation through an indoor environment is a formidable challenge for an autonomous micro air vehicle. One solution is a vision-aided inertial navigation system that uses depth-from-defocus to determine heading and depth to features in the scene. Depth-from-defocus uses the focal blur pattern to estimate depth. As depth increases, the observable change in focal blur is generally reduced; consequently, as the depth of a feature to be measured increases, the measurement performance decreases. The Fresnel zone plate, used as an aperture, introduces multiple focal planes. Interference between the multiple focal planes produces changes in the blur pattern that extend the depth at which changes in focal blur are observable. This improved depth measurement performance results in improved performance of the vision-aided navigation system as well. This research provides an in-depth study of the Fresnel zone plate used as a coded aperture and of the performance improvement obtained by augmenting a single-camera vision-aided inertial navigation system with it.
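    The claim that depth sensitivity falls off with distance follows from the standard thin-lens blur model; here is a minimal sketch under that textbook model (the thesis's Fresnel-zone-plate aperture is not modeled, and the focal length, aperture, and focus distance below are arbitrary). The blur-circle diameter is c(d) = (A f / (d_f - f)) |d - d_f| / d, so its sensitivity to depth decays roughly as 1/d^2.

```python
# Thin-lens depth-from-defocus sketch (standard model, illustrative only).
# c(d) = (A * f / (d_f - f)) * |d - d_f| / d  for object depth d, focus depth d_f,
# focal length f, aperture diameter A. dc/dd ~ 1/d**2: blur changes less at range.

def blur_diameter(d, d_f=2.0, f=0.05, A=0.02):
    """Blur-circle diameter (m) for an object at depth d (m)."""
    return (A * f / (d_f - f)) * abs(d - d_f) / d

for d in (3.0, 6.0, 12.0):
    step = blur_diameter(d + 0.5) - blur_diameter(d)
    print(f"depth {d:4.1f} m: blur {blur_diameter(d)*1e6:6.1f} um, "
          f"+0.5 m changes it by {step*1e6:5.1f} um")
```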

    Service architecting and dynamic composition in pervasive smart ecosystems for the Internet of things based on sensor network technology

    Why are pervasive awareness and Ambient Intelligence perceived by a large part of academia and industry as an imminent, massive revolution? To the best of our knowledge, a cornerstone of this view is that the ultimate nature of the smart-environment paradigm lies not in the technology itself, but in a people-centered approach. Perhaps it is precisely in this apparently simple conception that the boldness of this promising vision lies, a vision consolidated in recent years by the emerging proliferation of mobile, personal, portable, wearable and sensory computing: to reach everyone and everywhere. On the one hand, it touches our daily lives closely, minimizing the attention required from users and anticipating their needs, with the main intention of redefining our idea of Quality of Experience. On the other hand, this new wave has an impact everywhere, at both global and personal scales, allowing expanded connectivity between devices and smart objects in a dynamic and ubiquitous manner, as a natural extension of the physical world around us. Accordingly, this doctoral dissertation focuses on contributing to the integration of software and networking engineering advances in the field of pervasive smart spaces and environments based on sensor network technology. It is founded on the convergence of several information technology and computer science paradigms, such as service and agent orientation, semantic technologies, and knowledge management, within the framework of pervasive computing and the Internet of Things. To this end, the nSOM (nano Service-Oriented Middleware) and nSOL (nano Semantics-Oriented Language) approaches are presented. First, nSOM defines a service-oriented platform for the implementation, deployment, and exposure to the Internet cloud of agent-based in-network services running on heterogeneous sensor devices. Second, nSOL provides an abstraction that supports ubiquitous service composition based on semantic knowledge management. The integration of both contributions leads to the formal modelling and practical development of adaptive virtual sensor services for pervasive Ambient Intelligence ecosystems, as sketched below. This work also includes a performance characterization of the resulting prototype according to several metrics: code size, volatile memory footprint, CPU overhead, service time delay, and battery lifetime. The main foundations and outcomes presented here are contextualized in the following European research projects: μSWN (FP6 code: IST-034642), DiYSE (ITEA2 code: 08005) and LifeWear (ITEA2 code: 09026).
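    As a purely illustrative sketch of the virtual-sensor idea (the service names, registry shape, and aggregation rule are invented; nSOM/nSOL's actual interfaces are not described in the abstract), a composed service can be as simple as an aggregation over matching in-network sensor services:

```python
# Hypothetical "virtual sensor" composed from registered sensor services.
from statistics import mean
from typing import Callable, Dict

registry: Dict[str, Callable[[], float]] = {   # hypothetical service registry
    "room1/temp_a": lambda: 21.4,
    "room1/temp_b": lambda: 22.1,
}

def virtual_sensor(prefix: str) -> float:
    """Aggregate all registered sensor services whose path matches a prefix."""
    readings = [read() for name, read in registry.items()
                if name.startswith(prefix)]
    return mean(readings)

print(virtual_sensor("room1/temp"))  # composed reading: 21.75
```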

    Adolescent Mothers: The Space Between What They Know and What They Do

    Although the current rate of teen pregnancy in the United States is at a historic low (Martin et al., 2007), a number of risk factors are associated with early parenthood. Adolescent parenthood is often embedded in a larger context of risk, such as poverty, single parenthood, low educational attainment, a history of physical and emotional abuse, and engagement in risky behavior (Hans & Wakschlag, 2000). As parents, adolescent mothers tend to be less knowledgeable about child development, less stimulating in interactions with infants, less tolerant, and more punitive (Brooks-Gunn & Furstenberg, 1986). The children of adolescent mothers are at greater risk for health problems, cognitive deficits, behavior problems, and insecure attachment styles (Broussard, 1995; Hans & Wakschlag, 2000). This study examined the effectiveness of an intervention designed to promote positive parenting skills in a group of homeless adolescent mothers residing in a group home. The intervention lasted eight weeks and included weekly group and individual sessions. Its goals were to increase maternal knowledge of child development, improve maternal beliefs about and expectations of infants, and increase maternal responsiveness. Effectiveness was assessed by examining differences between pre- and post-intervention measures within the targeted group of homeless adolescent mothers. Results are presented in a case-study format. This research adds to the literature on teen parenting and has implications for relationship-based interventions targeting teen mothers. The intervention may become a component of the services offered to teen mothers by a local transitional housing program for adolescent mothers.

    EDMON - Electronic Disease Surveillance and Monitoring Network: A Personalized Health Model-based Digital Infectious Disease Detection Mechanism using Self-Recorded Data from People with Type 1 Diabetes

    Through time, we as a society have been tested by infectious disease outbreaks of different magnitudes, which often pose major public health challenges. To mitigate these challenges, research has focused on early detection mechanisms: identifying potential data sources, modes of data collection and transmission, and methods for case and outbreak detection. Driven by the ubiquitous nature of smartphones and wearables, the current endeavor targets individualizing the surveillance effort through a personalized health model, where case detection is realized by exploiting self-collected physiological data from wearables and smartphones. This dissertation aims to demonstrate the concept of a personalized health model as a case detector for outbreak detection by utilizing self-recorded data from people with type 1 diabetes. The results show that infection onset triggers substantial deviations, i.e., prolonged hyperglycemia despite higher insulin injections and lower carbohydrate intake. Key parameters such as blood glucose level, insulin, carbohydrate intake, and insulin-to-carbohydrate ratio were found to carry high discriminative power. A personalized health model devised as a one-class classifier with unsupervised methods over the selected parameters achieved promising detection performance; models such as the one-class support vector machine, k-nearest neighbors, and k-means performed best. The results also revealed the effect of input parameters, data granularity, and sample size on model performance. These findings have practical significance for understanding the effect of infection episodes among people with type 1 diabetes and the potential of a personalized health model in outbreak detection settings. An added benefit of the personalized health model concept introduced in this dissertation lies in its usefulness beyond surveillance, i.e., in devising decision support tools and learning platforms that help patients manage infection-induced crises.
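    To make the one-class-classifier idea concrete, here is a minimal sketch using scikit-learn's OneClassSVM. It is an illustration of the technique only: the feature values, distributions, and hyperparameters are assumptions, not EDMON's actual pipeline.

```python
# Minimal one-class-classification sketch (illustrative; data is synthetic and
# all parameter values are assumptions, not EDMON's actual pipeline).
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
# Columns: blood glucose (mmol/L), insulin (units), carbs (g), insulin/carb ratio
healthy = rng.normal([6.0, 5.0, 50.0, 0.10], [1.0, 1.5, 15.0, 0.03], size=(500, 4))

# Fit on healthy days only; anything far from that distribution is anomalous.
model = OneClassSVM(nu=0.05, kernel="rbf", gamma="scale").fit(healthy)

# A day with prolonged hyperglycemia despite more insulin and fewer carbs:
infection_day = np.array([[12.5, 9.0, 30.0, 0.30]])
print(model.predict(infection_day))  # typically -1: flagged as possible infection
```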