6 research outputs found

    Simplifying the use of event-based systems with context mediation and declarative descriptions

    Current trends such as the proliferation of sensors and the Internet of Things lead to Cyber-physical Systems (CPSs). In these systems, many different components communicate by exchanging events. While events provide a convenient abstraction for handling the high load these systems generate, CPSs are very complex and require expert computer scientists to handle them correctly. We realized that one of the primary reasons for this inherent complexity is that events do not carry context. We analyzed the context of events and identified two dimensions: context about the data of an event and context about the event itself. Context about the data comprises assumptions, such as the system of measurement units or the structure of the encoded information, that are required to interpret the event correctly. Context about the event itself is data that supplements the information carried by the event. For example, an event might carry positional data; the additional information could then be the identifier of the room belonging to this position. Context about the data helps bridge the heterogeneity inherent in CPSs. Event producers and consumers may have different assumptions about the data and thus interpret events in different ways. To overcome this gap, we developed the ACTrESS middleware. ACTrESS provides a model to encode interpretation assumptions in an interpretation context. Clients can thus make their assumptions explicit and send them to the middleware, which is then able to mediate between different contexts by transforming events. Through analysis of the provided contexts, ACTrESS can generate transformers, which are dynamically loaded into the system. It does not need to rely on costly operations like reflection. To prove this, we conducted a performance study which shows that in a content-based publish/subscribe system, the overhead introduced by ACTrESS’ transformations is too small to be measurable.
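The mediation idea described above can be illustrated with a minimal Python sketch. All names here (`make_transformer`, the `CONVERSIONS` table, the dictionary-based events) are hypothetical simplifications for illustration; ACTrESS itself derives transformers from declarative context descriptions rather than from a hand-written conversion table:

```python
# Minimal sketch of context mediation between event producers and consumers.
# Each client declares its interpretation context (here: just the unit system);
# the middleware derives a transformer that rewrites events between contexts.

# Hypothetical conversion table: (from_unit, to_unit) -> conversion function
CONVERSIONS = {
    ("celsius", "fahrenheit"): lambda v: v * 9 / 5 + 32,
    ("fahrenheit", "celsius"): lambda v: (v - 32) * 5 / 9,
}

def make_transformer(producer_ctx, consumer_ctx):
    """Generate a transformer once, so no per-event reflection is needed."""
    if producer_ctx == consumer_ctx:
        return lambda event: event
    convert = CONVERSIONS[(producer_ctx["unit"], consumer_ctx["unit"])]
    def transform(event):
        out = dict(event)
        out["value"] = convert(event["value"])
        out["unit"] = consumer_ctx["unit"]
        return out
    return transform

transform = make_transformer({"unit": "celsius"}, {"unit": "fahrenheit"})
print(transform({"sensor": "t1", "value": 100.0, "unit": "celsius"}))
# -> {'sensor': 't1', 'value': 212.0, 'unit': 'fahrenheit'}
```

Generating the transformer once per producer/consumer pair, rather than inspecting each event, mirrors the abstract's point that per-event reflection can be avoided.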
Because events do not carry contextual information, expert computer scientists are required to describe situations that are made up of multiple events. The fact that CPSs promise to transform our everyday life (e.g., smart homes) makes this problem even more severe, in that most of the target users cannot use CPSs. In this thesis, we developed a declarative language to easily describe situations and a desired reaction. Furthermore, we provide a mechanism to translate this high-level description to executable code. The key idea is that events are contextualized, i.e., our middleware enriches the event with the missing contextual information based on the situation description. The enriched events are then correlated and combined automatically, to ultimately decide whether the described situation is fulfilled. By generating small computational units, we achieve good parallelization and are able to elegantly scale up and down, which makes our approach particularly suitable for modern cloud architectures. We conducted a usability analysis and a performance study. The usability analysis shows that our approach significantly simplifies the definition of reactive behavior in CPSs. The performance study shows that the achieved automatic distribution and parallelization incur only a small performance cost compared to highly optimized systems like Esper.
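As a rough, hypothetical illustration of contextualized events and declarative situation descriptions (the dictionary-based mini-language, `ROOMS`, `contextualize`, and `fulfilled` below are invented for this sketch and are not the thesis' actual language):

```python
# Toy sketch: a declarative situation description evaluated over events that
# the middleware first enriches ("contextualizes") with missing information.

ROOMS = {(0, 0): "kitchen", (1, 0): "living_room"}  # position -> room id

situation = {  # "someone is in the kitchen and the temperature exceeds 30"
    "all": [
        {"event": "presence", "where": lambda e: e["room"] == "kitchen"},
        {"event": "temperature", "where": lambda e: e["value"] > 30},
    ],
}

def contextualize(event):
    """Enrich an event with the contextual data the situation needs."""
    if "position" in event:
        event = {**event, "room": ROOMS.get(tuple(event["position"]))}
    return event

def fulfilled(situation, events):
    events = [contextualize(e) for e in events]
    return all(
        any(e["type"] == cond["event"] and cond["where"](e) for e in events)
        for cond in situation["all"]
    )

events = [
    {"type": "presence", "position": [0, 0]},
    {"type": "temperature", "value": 31},
]
print(fulfilled(situation, events))  # -> True
```

Note how the presence event never mentions a room: the room identifier is derived from the position during contextualization, which is the enrichment step the abstract describes.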

    Batch study on COD and ammonia nitrogen removal using granular activated carbon and cockle shells

    Landfills generate leachate that contains elevated concentrations of contaminants and is hazardous to human health and the ecosystem. In this study, a mixture of granular activated carbon and cockle shells was investigated for the removal of COD and ammonia from stabilized landfill leachate. All adsorbent media were sieved to a particle size between 2.00 and 3.35 mm. The optimum mixing ratio, shaking speed, shaking time, pH, and dosage were determined. Characterization results show that the leachate had a high concentration of COD (1763 mg/L) and ammonia nitrogen (573 mg/L) and a low BOD5/COD ratio (0.09). The optimum mixing ratio of granular activated carbon to cockle shells was 20:20, the shaking speed 150 rpm, the pH level 6, the shaking time 120 min, and the dosage 32 g. The adsorption isotherm analysis reveals that the Langmuir isotherm yielded a better fit to the experimental data than the Freundlich isotherm. The media produced encouraging results and can serve as an effective and economical adsorbent.
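The Langmuir analysis mentioned above is commonly done with the linearized form Ce/qe = Ce/qmax + 1/(KL·qmax), fitted by least squares. The sketch below uses synthetic data generated from invented parameters (qmax = 50 mg/g, KL = 0.02 L/mg), not the study's measurements:

```python
# Sketch of a Langmuir isotherm fit using the common linearized form
#   Ce/qe = Ce/qmax + 1/(KL * qmax)
# Data are synthetic (generated from qmax = 50 mg/g, KL = 0.02 L/mg).

Ce = [50, 100, 200, 400, 800]                      # equilibrium conc. (mg/L)
qe = [50 * 0.02 * c / (1 + 0.02 * c) for c in Ce]  # uptake (mg/g)

x = Ce
y = [c / q for c, q in zip(Ce, qe)]                # Ce/qe

# Ordinary least-squares line through (x, y)
n = len(x)
mx, my = sum(x) / n, sum(y) / n
slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
intercept = my - slope * mx

qmax = 1 / slope                     # mg/g
KL = slope / intercept               # L/mg
print(round(qmax, 1), round(KL, 3))  # -> 50.0 0.02
```

Because the synthetic points lie exactly on a Langmuir curve, the regression recovers the generating parameters; with real measurements, the quality of this fit (e.g., R²) is what is compared against the Freundlich model.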

    Enhancing RFID data quality and reliability

    This thesis addressed the problems of data quality, reliability, and energy consumption in networked Radio Frequency Identification (RFID) systems supporting decision-making processes in business intelligence applications. The outcome of the research substantially improved the accuracy and reliability of RFID-generated data and reduced energy depletion, thus prolonging RFID system lifetime.

    Dealing with others' physical pain reveals variance in empathic processes: Evidence from event-related potentials.

    The present work consists of a review of 5 event-related potential (i.e., ERP) experiments I conducted, which deal with the multifaceted nature of human empathy for pain (Experiment 1) and with variance in empathic processes as a function of others' race (Experiment 2) and others' perceived trustworthiness, i.e., driven by facial features (Experiments 4-5), addressed through classical and modified versions of the pain decision task. The classical version of the pain decision task requires participants to decide whether presented stimuli (either pictures of individuals or body parts) receive painful or neutral stimulation. Furthermore, prior to investigating trustworthiness as a modulator of the neural empathic response, I adopted in Experiment 3 a different paradigm, namely the change detection task, and a direct neural correlate of the resolution of visual working memory (i.e., VWM) representations, to test whether trustworthiness is automatically extracted from faces, biasing VWM processing. A central issue in neuroscientific research on empathy for pain concerns its multiple aspects. Indeed, neuroscientific research has identified at least two subprocesses constituting empathy: experience sharing and mentalizing. The former encompasses the affective and sensorimotor aspects of inwardly feeling the other's emotional state; the latter allows one to infer or attribute the other's mental state. Experience sharing and mentalizing appear to be at least anatomically dissociated. One important aim of the present thesis is to provide evidence of a possible functional dissociation in the temporal domain. In Experiment 1 I addressed this issue by implementing a new version of the pain decision task.
I presented participants with both sensorimotor information (a picture of a face with either a painful or a neutral expression) and contextual information (a sentence describing either a painful or a neutral context) to highlight the deployment of the electrophysiological reaction to pain related to both subprocesses, and I provided evidence of the selective engagement of experience sharing and mentalizing in two time windows. This is the starting point of the present studies on exploring variance in the neural empathic response. Previous studies suggested that people are more naturally empathic towards own-race individuals than towards other-race individuals (Avenanti et al., 2010; Xu et al., 2009). In Experiment 2 I provided compelling evidence that such preference is confined to experience sharing; indeed, mentalizing is responsive to other-race pain. Although implicitly appraised, the race of a face is processed quickly and automatically, driven by physical facial features. It has recently been demonstrated that the perceived trustworthiness of an individual's face is appraised at first sight (Willis and Todorov, 2006), similarly to race. I hypothesized that trustworthiness, both in computerized faces (Experiment 4) and in real faces (Experiment 5), plays another key role in modulating empathy, even in the absence of previous knowledge of others' personality and social behavior, because it can implicitly and quickly shape our social interactions. In an attempt to determine the efficacy of trustworthiness appraisal, I tested in Experiment 3 whether and how standardized physical facial features of trustworthiness (Oosterhof and Todorov, 2008) bias VWM processing even when task-irrelevant.

    Operational research IO 2021—analytics for a better world. XXI Congress of APDIO, Figueira da Foz, Portugal, November 7–8, 2021

    This book provides the current status of research on the application of OR methods to solve emerging and relevant operations management problems. Each chapter is a selected contribution of the IO2021 - XXI Congress of APDIO, the Portuguese Association of Operational Research, held in Figueira da Foz from 7 to 8 November 2021. Under the theme of analytics for a better world, the book presents interesting results and applications of OR cutting-edge methods and techniques to various real-world problems. Of particular importance are works applying nonlinear, multi-objective optimization, hybrid heuristics, multicriteria decision analysis, data envelopment analysis, simulation, clustering techniques and decision support systems, in different areas such as supply chain management, production planning and scheduling, logistics, energy, telecommunications, finance and health. All chapters were carefully reviewed by the members of the scientific program committee.

    Modeling and exploiting knowledge for collaborative expertise processes

    Expertise processes are nowadays implemented in many fields, particularly in industry, to assess situations, understand problems, or anticipate risks. Applied upstream of complex and ill-defined problems, they support the understanding of those problems and thereby facilitate decision making. These processes have become so widespread that they are the subject of a standard (NF X 50-110) and of a recommendation guide published in 2011 (FDX 50-046). They rely mainly on the formulation of hypotheses, each carrying a certain degree of doubt, by one or more experts. These hypotheses are then progressively validated or invalidated against the available knowledge during the different phases of the process. The certainty attached to the hypotheses thus evolves over these phases, ultimately yielding a degree of certainty about the understanding of a problem based on the hypotheses that remain valid. Although this approach to studying problems has been standardized, it lacks automatic or semi-automatic tools to assist domain experts during the exploratory phases of a problem. Moreover, this largely manual approach lacks appropriate mechanisms for managing the knowledge produced, so that it is both understandable by humans and manipulable by machines. Before proposing solutions to these limitations of current expertise processes, a review of fundamental and applied work in logic, in knowledge representation for expertise and experience, and in collaborative intelligence was carried out to identify the technological building blocks of the proposed solutions.
An analysis of the NF X 50-110 standard was conducted to understand the characteristics of Expertise Processes and how they can be formally represented and reused as lessons learned. A study of past expert reports on aircraft accidents was carried out to determine how they can be represented in a machine-readable, general and extensible format that is domain-independent and shareable between systems. This thesis makes the following contributions to the expertise process: a knowledge formalization and a methodology for collaborative problem solving using hypotheses, illustrated by a case study drawn from a manufacturing problem in which a manufactured product was rejected by customers, the method also describing inference mechanisms compatible with the proposed formal representation; a non-monotonic collaborative reasoning approach based on answer set programming and on uncertainty theory using belief functions; and a semantic, ontology-based representation of expert reports. First, these contributions enable a formal and systematic execution of Expertise Processes, with a human-centered motivation. Second, they promote their use for appropriate processing with respect to essential properties such as traceability, transparency, non-monotonic reasoning and uncertainty, taking into account human doubt and the limited knowledge of experts. Finally, they provide a human- and machine-readable semantic representation of the expertise carried out.
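The belief-function machinery mentioned above can be illustrated with the standard Dempster rule of combination. The mass values below are invented for the example, and this sketch does not reproduce the thesis' specific non-monotonic reasoning mechanism:

```python
# Minimal sketch of Dempster's rule of combination for belief functions.
# A mass function maps focal sets (frozensets of hypotheses) to masses.

def combine(m1, m2):
    """Combine two mass functions over the same frame of discernment."""
    combined = {}
    conflict = 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb  # mass assigned to the empty set
    # Normalize by the non-conflicting mass (Dempster's rule)
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

frame = frozenset({"H1", "H2"})                  # two competing hypotheses
expert1 = {frozenset({"H1"}): 0.6, frame: 0.4}   # partial belief in H1
expert2 = {frozenset({"H1"}): 0.5, frame: 0.5}

m = combine(expert1, expert2)
print(round(m[frozenset({"H1"})], 6))  # -> 0.8
```

Combining the two experts' partial beliefs strengthens the mass on H1 (0.6 and 0.5 combine to 0.8) while leaving residual ignorance on the whole frame, which mirrors how hypothesis certainties evolve as an expertise process accumulates knowledge.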