
    Data consistency: toward a terminological clarification

    The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-319-21413-9_15. Consistency is a ubiquitous term in data engineering. Its relevance to quality is obvious, since consistency is a commonplace dimension of data quality. However, its connotations are vague or ambiguous. In this paper, we address semantic consistency, transaction consistency, replication consistency, eventual consistency and the new notion of partial consistency in databases. We characterize their distinguishing properties, and also address their differences, interactions and interdependencies. Partial consistency is an entry door to living with inconsistency, which is an ineludible necessity in the age of big data. H. Decker and F.D. Muñoz-Escoí were supported by the Spanish MINECO grant TIN 2012-37719-C03-01.
    Decker, H.; Muñoz Escoí, F.D.; Misra, S. (2015). Data consistency: toward a terminological clarification. In: Computational Science and Its Applications -- ICCSA 2015: 15th International Conference, Banff, AB, Canada, June 22-25, 2015, Proceedings, Part V. Springer International Publishing, pp. 206-220. https://doi.org/10.1007/978-3-319-21413-9_15
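    To make two of these notions concrete, here is a minimal illustrative Python sketch (not from the paper; the accounts table, the non-negative-balance constraint and the replica structures are invented): semantic consistency is modeled as satisfaction of an integrity constraint over a single database state, while replication consistency is modeled as agreement among replicas.

```python
# Toy illustration (hypothetical, not from the paper) of two of the
# consistency notions discussed: semantic vs. replication consistency.

def semantically_consistent(accounts):
    """Semantic consistency: a single database state satisfies an integrity
    constraint -- here, the invented constraint that no balance is negative."""
    return all(balance >= 0 for balance in accounts.values())

def replication_consistent(replicas):
    """Replication consistency: all replicas hold identical states."""
    return all(replica == replicas[0] for replica in replicas[1:])

# Replicas may diverge (replication-inconsistent) while each individual state
# is semantically consistent, and vice versa: the notions are independent.
r1 = {"alice": 10, "bob": 5}
r2 = {"alice": 10, "bob": 7}   # diverged replica, e.g. before convergence
print(semantically_consistent(r1), semantically_consistent(r2))  # True True
print(replication_consistent([r1, r2]))                          # False
```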

    Assessment of the Physiological Network in Sleep Apnea

    Objective: Machine Learning models, in particular Artificial Neural Networks, have been shown to be applicable in clinical research, for example for tumor detection and sleep phase classification. Applications in systems medicine and biology, for example in Physiological Networks, could benefit from the ability of these methods to recognize patterns in high-dimensional data, but the decisions of an Artificial Neural Network cannot be interpreted based on the model itself. In a medical context this is an undesirable characteristic, because hidden age, gender or other data biases negatively impact model quality. If insights are based on a biased model, the ability of an independent study to come to similar conclusions is limited, and therefore an essential property of scientific experiments, known as results reproducibility, is violated. Besides results reproducibility, methods reproducibility allows others to reproduce the exact outputs of computational experiments, but requires data, code and runtime environments to be available. These challenges in interpretability and reproducibility are addressed as part of an assessment of the Physiological Network in Obstructive Sleep Apnea. Approach: A research platform is developed that connects medical data, code and environments to enable methods reproducibility. The platform employs a compute cluster or cloud to accelerate the demanding model training. Artificial Neural Networks are trained on the Physiological Network data of a healthy control group for age and gender prediction to verify the influence of these biases. In a subsequent study, an Artificial Neural Network is trained to classify the Physiological Networks of an Obstructive Sleep Apnea group and a healthy control group. The state-of-the-art interpretation method DeepLift is applied to explain model predictions. Results: An existing collaboration platform has been extended for sleep research data, and modern container technologies are used to distribute training environments in compute clusters. Artificial Neural Network models predict the age of healthy subjects with a resolution of one decade and correctly classify gender with 91% accuracy. Due to the verified biases, a matched dataset is created for the classification of Obstructive Sleep Apnea. The classification accuracy reaches 87%, and DeepLift provides biomarkers as significant indicators towards or against the disorder. Analysis of misclassified samples shows potential Obstructive Sleep Apnea phenotypes. Significance: The presented platform is extensible for future use cases and focuses on the reproducibility of computational experiments, a concern across many disciplines. Machine learning approaches solve analysis tasks on high-dimensional data, and novel interpretation techniques provide the required transparency for medical applications.
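    The interpretation step described above could look roughly as follows, assuming a PyTorch classifier and the DeepLift implementation in the Captum library; the network architecture, feature count and inputs below are invented placeholders, not the thesis's actual pipeline.

```python
# Hedged sketch: explaining an OSA-vs-control classifier with DeepLIFT via
# Captum. The architecture, feature count and inputs are invented placeholders.
import torch
import torch.nn as nn
from captum.attr import DeepLift

N_FEATURES = 64  # hypothetical length of a flattened physiological-network vector

model = nn.Sequential(
    nn.Linear(N_FEATURES, 32),
    nn.ReLU(),
    nn.Linear(32, 2),  # two classes: healthy control vs. OSA
)
model.eval()

x = torch.randn(1, N_FEATURES)   # one (fake) subject's feature vector
baseline = torch.zeros_like(x)   # reference input required by DeepLIFT

explainer = DeepLift(model)
attributions = explainer.attribute(x, baselines=baseline, target=1)  # class 1 = OSA

# Large positive attributions push the prediction toward OSA; large negative
# ones push toward the control class -- candidate biomarkers in this setup.
top = attributions.squeeze().abs().topk(5).indices
print("most influential feature indices:", top.tolist())
```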

    SIMURG_CITIES: Meta-Analysis for KPI's of Layer-Based Approach in Sustainability Assessment

    SIMURG_CITIES is a research and development project within the main project SIMURG ("A performance-based and Sustainability-oriented Integration Model Using Relational database architecture to increase Global competitiveness of Turkish construction industry in the industry 5.0 era"), a relational database model currently being developed in a dissertation for the performance-based development and assessment of sustainable and sophisticated solutions for the built environment. This study analyzes the key performance indicators (KPIs) at the «Cities» level of the smart city concept, which the master project refers to as «Layers». KPIs for the smart city concept are determined using the meta-analysis technique: the three most reputable urban journals issued from 2017 through 2020 are reviewed, and models of smart city frameworks, assessment tools and KPIs are reviewed as well. According to this literature review, environment, economy and governance are the dominant themes in urban sustainability. Consequently, efficient and integrated urban management, environmental monitoring and management, and public and social services of urban development and sustainability are found to be the most important dimensions in urban and regional planning. SIMURG_CITIES evaluation models for urban projects can use the findings of this paper.

    Implementation of smart tools in Belgrade's transportation system: lessons from Copenhagen and Madrid

    Big cities confront several transportation issues, including traffic congestion, air pollution and overloaded public transportation. The most important aspect of any smart city initiative should be a smart transportation system, which provides people with high-quality, environmentally friendly transportation based on their needs. Accordingly, the subjects covered in this study are diverse, including transportation models, the integration of information technology in transportation reform, and the development of environmentally friendly transportation modes. The goal of this research is to examine how digital technologies are used in the transportation systems of Copenhagen and Madrid. The research question for the study is: can the use of digital technology in transport assist Belgrade in resolving its transportation issues? At the outset of this research, we present smart transportation concepts from the cities of Copenhagen and Madrid as a starting point for smart transportation development; these cities' experiences could serve as a model for smart transportation systems. We look at a variety of strategy documents and plans that outline the current state of transportation, projected applications of ICT in transportation, and several transportation models that focus on environmental protection and on reducing the use of fossil fuels and air pollution. This data should help Belgrade become a smart city in terms of transportation. Belgrade has started to deploy smart mobility solutions, but there is still a lot to learn from other cities that are forerunners in this field. Finally, we devise a scenario for Belgrade that best serves its people.

    The Impact of Code Ownership of DevOps Artefacts on the Outcome of DevOps CI Builds

    This study focuses on factors that may influence the outcomes of CI builds triggered by commits modifying and/or adding DevOps artefacts to projects, i.e., DevOps-related CI builds. In particular, code ownership of DevOps artefacts is one such factor that could impact DevOps-related CI builds. Prior work suggests two main strategies: (1) all project developers contribute to DevOps artefacts, or (2) a dedicated group of developers authors the DevOps artefacts. To analyze which strategy works best for OSS projects, we conduct an empirical analysis on a dataset of 892,193 CircleCI builds spanning 1,689 Open-Source Software projects. First, we investigate the impact of code ownership of DevOps artefacts on the outcome of a CI build at the build level. Second, we study the impact of the skewness of DevOps contributions on the success rate of CI builds at the project level. Our findings reveal that, in general, larger code ownership and higher skewness values of DevOps contributions are related to more successful build outcomes and higher rates of successful build outcomes, respectively. However, we also find that projects with low skewness values can have high build success rates if the number of developers in the project is relatively small. Thus, our results suggest that while larger software organizations are better off having dedicated DevOps developers, smaller organizations would benefit from having all developers involved in DevOps.
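    As an illustration of the project-level measure, the following hedged Python sketch computes the skewness of DevOps contributions from per-developer commit counts; the data, the choice of scipy.stats.skew as the estimator, and the interpretation comments are assumptions for illustration, not necessarily the paper's exact methodology.

```python
# Hypothetical sketch: skewness of per-developer DevOps contributions.
# High positive skew: a few developers author most DevOps artefacts (the
# "dedicated group" strategy); skew near zero: contributions are spread
# evenly across developers (the "everyone contributes" strategy).
from scipy.stats import skew

# Invented example: number of commits touching DevOps artefacts, per developer.
devops_commits = {"alice": 120, "bob": 8, "carol": 5, "dave": 3}

s = skew(list(devops_commits.values()))
print(f"DevOps contribution skewness: {s:.2f}")

# At the project level, such a value could then be related to the CI success
# rate, e.g. successful DevOps-related builds / all DevOps-related builds.
```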

    Enhancing trustability in MMOGs environments

    Massively Multiplayer Online Games (MMOGs; e.g., World of Warcraft), virtual worlds (VWs; e.g., Second Life) and social networks (e.g., Facebook) strongly demand more autonomous security and trust mechanisms, similar to those humans use in real life. This is a difficult matter, because trust in humans and organizations depends on the perception and experience of each individual, which is difficult to quantify or measure. In fact, these societal environments lack trust mechanisms similar to those involved in human-to-human interactions. Moreover, interactions mediated by computing devices are constantly evolving, requiring trust mechanisms that keep pace with these developments and assess risk situations. In VWs/MMOGs, it is widely recognized that users develop trust relationships from their in-world interactions with others. However, these trust relationships end up not being represented in the data structures (or databases) of such virtual worlds, though they sometimes appear associated with reputation and recommendation systems. In addition, as far as we know, the user is not provided with a personal trust tool to sustain his/her decision making while he/she interacts with other users in the virtual or game world. In order to solve this problem, as well as those mentioned above, we propose herein a formal representation of these personal trust relationships, which is based on avatar-avatar interactions. The leading idea is to provide each avatar-impersonated player with a personal trust tool that follows a distributed trust model, i.e., the trust data is distributed over the societal network of a given VW/MMOG. Representing, manipulating, and inferring trust from the user/player point of view is certainly a grand challenge. When someone meets an unknown individual, the question is "Can I trust him/her or not?". Clearly, this requires the user to have access to a representation of trust about others, but, unless we are using an open-source VW/MMOG, it is difficult, not to say unfeasible, to get access to such data. Even in an open-source system, a number of users may refuse to share information about their friends, acquaintances, or others. By putting together their own data and data gathered from others, the avatar-impersonated player should be able to arrive at a trust result about the current trustee. As the trust assessment method of this thesis, we use subjective logic operators and graph search algorithms to undertake such trust inference about the trustee. The proposed trust inference system has been validated using a number of OpenSimulator (opensimulator.org) scenarios, which showed an accuracy increase in evaluating the trustability of avatars. Summing up, our proposal aims to introduce a trust theory for virtual worlds, with its trust assessment metrics (e.g., subjective logic) and trust discovery methods (e.g., graph search methods), on an individual basis, rather than relying on the usual centralized reputation systems. In particular, and unlike other trust discovery methods, our methods run at interactive rates.
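    To make the inference step concrete, below is a minimal Python sketch of two standard subjective logic operators (Jøsang's discounting and cumulative fusion) applied along trust paths; the opinion values are invented, and the thesis's full graph search over the VW/MMOG network is not reproduced here.

```python
# Minimal sketch of trust inference with subjective logic opinions (b, d, u):
# belief, disbelief, uncertainty, with b + d + u = 1. Values are invented.
from typing import NamedTuple

class Opinion(NamedTuple):
    b: float  # belief
    d: float  # disbelief
    u: float  # uncertainty

def discount(ab: Opinion, bx: Opinion) -> Opinion:
    """A's opinion about X derived via B (Josang's discounting operator):
    A trusts B with opinion `ab`, and B holds opinion `bx` about X."""
    return Opinion(ab.b * bx.b,
                   ab.b * bx.d,
                   ab.d + ab.u + ab.b * bx.u)

def fuse(o1: Opinion, o2: Opinion) -> Opinion:
    """Cumulative fusion (consensus) of two independent opinions about X."""
    k = o1.u + o2.u - o1.u * o2.u
    return Opinion((o1.b * o2.u + o2.b * o1.u) / k,
                   (o1.d * o2.u + o2.d * o1.u) / k,
                   (o1.u * o2.u) / k)

# Two invented trust paths from player A to an unknown trustee X:
path1 = discount(Opinion(0.8, 0.1, 0.1), Opinion(0.7, 0.2, 0.1))  # A->B->X
path2 = discount(Opinion(0.6, 0.2, 0.2), Opinion(0.9, 0.0, 0.1))  # A->C->X
print(fuse(path1, path2))  # A's combined opinion about X
```

    Discounting propagates an opinion along one referral path, and fusion merges independent paths; in the setting described above, a graph search over the distributed trust data would enumerate such paths between the player and the trustee.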

    Emotion and Stress Recognition Related Sensors and Machine Learning Technologies

    This book includes impactful chapters which present scientific concepts, frameworks, architectures and ideas on sensing technologies and machine learning techniques. These are relevant in tackling the following challenges: (i) the field readiness and use of intrusive sensor systems and devices for capturing biosignals, including EEG sensor systems, ECG sensor systems and electrodermal activity sensor systems; (ii) the quality assessment and management of sensor data; (iii) data preprocessing, noise filtering and calibration concepts for biosignals; (iv) the field readiness and use of nonintrusive sensor technologies, including visual sensors, acoustic sensors, vibration sensors and piezoelectric sensors; (v) emotion recognition using mobile phones and smartwatches; (vi) body area sensor networks for emotion and stress studies; (vii) the use of experimental datasets in emotion recognition, including dataset generation principles and concepts, quality assurance and emotion elicitation material and concepts; (viii) machine learning techniques for robust emotion recognition, including graphical models, neural network methods, deep learning methods, statistical learning and multivariate empirical mode decomposition; (ix) subject-independent emotion and stress recognition concepts and systems, including facial expression-based systems, speech-based systems, EEG-based systems, ECG-based systems, electrodermal activity-based systems, multimodal recognition systems and sensor fusion concepts; and (x) emotion and stress estimation and forecasting from a nonlinear dynamical system perspective.