4,269 research outputs found

    A speculative computation approach for conflict styles assessment with incomplete information

    This paper analyses a way to cope with incomplete information, namely information regarding the conflict style used by parties. This analysis is important because it enables us to develop a more accurate and informed conflict style classification method and thus promote better strategies. To develop this proposal, an experiment combining Bayesian Networks with Speculative Computation is described. In this work we first identified and applied a set of methods for classifying conflict styles with incomplete information; secondly, the approach was validated against data collected from a web-based negotiation game. From the experiment outcomes, we can conclude that it is possible to cope with incomplete information by producing valid conflict style default values and, in particular, to anticipate competing postures through the dynamic generation of recommendations for a conflict manager. The findings suggest that this approach is suitable for handling incomplete information in this context and can be applied in a viable and feasible way. This work has been supported by COMPETE: POCI-01-0145-FEDER-007043 and FCT - Fundação para a Ciência e a Tecnologia (Portuguese Foundation for Science and Technology) within the Project Scope UID/CEC/00319/2013.
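
    The abstract gives no implementation details, but the core idea of combining a Bayesian Network with Speculative Computation can be pictured with a minimal sketch. The variables, probability tables, and conflict-style labels below are hypothetical, and the enumeration-based inference is only a stand-in for whatever engine the authors actually used: when an observation is missing, the most probable style serves as a speculative default and is revised once the answer arrives.

```python
# Minimal, illustrative sketch (not the authors' implementation): a tiny
# discrete Bayesian network over hypothetical variables "assertiveness",
# "cooperativeness" and "style", queried by exhaustive enumeration.
from itertools import product

# Hypothetical (made-up) prior and conditional probability tables.
P_assert = {"high": 0.5, "low": 0.5}
P_coop = {"high": 0.6, "low": 0.4}
P_style = {  # P(style | assertiveness, cooperativeness)
    ("high", "high"): {"collaborating": 0.7, "competing": 0.2, "avoiding": 0.1},
    ("high", "low"):  {"collaborating": 0.1, "competing": 0.8, "avoiding": 0.1},
    ("low", "high"):  {"collaborating": 0.3, "competing": 0.1, "avoiding": 0.6},
    ("low", "low"):   {"collaborating": 0.1, "competing": 0.2, "avoiding": 0.7},
}

def style_posterior(evidence):
    """P(style | evidence), enumerating the unobserved parent variables."""
    scores = {}
    for a, c in product(P_assert, P_coop):
        # Skip joint assignments that contradict the available evidence.
        if evidence.get("assertiveness", a) != a or evidence.get("cooperativeness", c) != c:
            continue
        weight = P_assert[a] * P_coop[c]
        for style, p in P_style[(a, c)].items():
            scores[style] = scores.get(style, 0.0) + weight * p
    total = sum(scores.values())
    return {s: p / total for s, p in scores.items()}

# Speculative default: cooperativeness is still unknown, so we act on the most
# probable style now and revise when the missing answer arrives.
default = max(style_posterior({"assertiveness": "high"}).items(), key=lambda kv: kv[1])
revised = max(style_posterior({"assertiveness": "high", "cooperativeness": "low"}).items(),
              key=lambda kv: kv[1])
print("speculative default:", default)
print("after answer arrives:", revised)
```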

    Application of machine learning techniques to the flexible assessment and improvement of requirements quality

    It is already common to compute quantitative metrics of requirements to assess their quality. However, the risk is to build assessment methods and tools that are both arbitrary and rigid in the parameterization and combination of metrics. Specifically, we show that a linear combination of metrics is insufficient to adequately compute a global measure of quality. In this work, we propose a flexible method to assess and improve the quality of requirements that can be adapted to different contexts, projects, organizations, and quality standards, with a high degree of automation. Domain experts contribute an initial set of requirements that they have classified according to their quality, and we extract the requirements' quality metrics. We then use machine learning techniques to emulate the experts' implicit quality function. We also provide a procedure to suggest improvements to bad requirements. We compare the obtained rule-based classifiers with different machine learning algorithms, obtaining effectiveness measurements around 85%. We also show the form of the generated rules and how to interpret them. The method is tailorable to different contexts, different styles of writing requirements, and different quality demands. The whole process of inferring and applying the quality rules adapted to each organization is highly automated. This research has received funding from the CRYSTAL project–Critical System Engineering Acceleration (European Union's Seventh Framework Program FP7/2007-2013, ARTEMIS Joint Undertaking grant agreement no 332830); and from the AMASS project–Architecture-driven, Multi-concern and Seamless Assurance and Certification of Cyber-Physical Systems (H2020-ECSEL grant agreement no 692474; Spain's MINECO ref. PCIN-2015-262)
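
    As a rough illustration of the kind of rule-based quality classifier described above (not the paper's actual tooling, metric set, or data), the following sketch trains a shallow decision tree on invented requirement metrics with scikit-learn and prints the induced rules, which can be read as quality rules over the metrics.

```python
# Illustrative sketch only: learn an interpretable rule-based quality
# classifier from hypothetical requirement metrics using a shallow decision
# tree, then print the induced rules.
from sklearn.tree import DecisionTreeClassifier, export_text

# Made-up metric vectors: [word_count, ambiguous_terms, passive_verbs, has_units]
X = [
    [12, 0, 0, 1], [30, 2, 1, 0], [18, 0, 1, 1], [45, 4, 3, 0],
    [15, 1, 0, 1], [60, 5, 2, 0], [22, 0, 0, 1], [38, 3, 2, 0],
]
y = ["good", "bad", "good", "bad", "good", "bad", "good", "bad"]

clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# The tree doubles as a small set of human-readable quality rules.
print(export_text(clf, feature_names=["word_count", "ambiguous_terms",
                                      "passive_verbs", "has_units"]))

# Classify a new (hypothetical) requirement; if it is "bad", the conditions on
# its decision path indicate which metrics to improve.
print(clf.predict([[40, 3, 1, 0]]))
```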

    Creativity: Can Artistic Perspectives Contribute to Management Questions?

    Today creativity is considered a necessity in all aspects of management. This working paper mirrors the artistic and managerial conceptions of creativity. Although there are shared points in both applications, deep-seated and radically opposed traits account for the divergence between the two fields. This exploratory analysis opens up new research questions and insights into practices. Keywords: Creativity; Management; Art

    Time to rebuild and reaggregate fluctuations: Minsky, complexity and agent-based modelling


    Clinical decision support: Knowledge representation and uncertainty management

    Programa Doutoral em Engenharia Biomédica (Doctoral Programme in Biomedical Engineering). Decision-making in clinical practice is faced with many challenges due to the inherent risks of being a health care professional. From medical error to undesired variations in clinical practice, the mitigation of these issues seems to be tightly connected to adherence to Clinical Practice Guidelines as evidence-based recommendations. The deployment of Clinical Practice Guidelines in computational systems for clinical decision support has the potential to positively impact health care. However, current approaches to Computer-Interpretable Guidelines exhibit a set of issues that leave them wanting. These issues are related to the lack of expressiveness of their underlying models, the complexity of knowledge acquisition with their tools, the absence of support for the clinical decision-making process, and the style of communication of Clinical Decision Support Systems implementing Computer-Interpretable Guidelines. Such issues pose obstacles that prevent these systems from showing properties like modularity, flexibility, adaptability, and interactivity. All these properties reflect the concept of living guidelines. The purpose of this doctoral thesis is, thus, to provide a framework that enables the expression of these properties. The modularity property is conferred by the ontological definition of Computer-Interpretable Guidelines and the assistance in guideline acquisition provided by an editing tool, allowing for the management of multiple knowledge patterns that can be reused. Flexibility is provided by the representation primitives defined in the ontology, meaning that the model is adjustable to guidelines from different categories and specialities. As for adaptability, this property is conferred by mechanisms of Speculative Computation, which allow the Decision Support System not only to reason with incomplete information but also to adapt to changes of state, such as suddenly knowing the missing information. The solution proposed for interactivity consists in embedding Computer-Interpretable Guideline advice directly into the daily life of health care professionals and providing a set of reminders and notifications that help them keep track of their tasks and responsibilities. All these solutions make up the CompGuide framework for the expression of Clinical Decision Support Systems based on Computer-Interpretable Guidelines. The work of the PhD candidate Tiago José Martins Oliveira is supported by a grant from FCT - Fundação para a Ciência e a Tecnologia (Portuguese Foundation for Science and Technology) with the reference SFRH/BD/85291/2012
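
    The Speculative Computation mechanism mentioned above can be pictured with a small, hypothetical sketch (this is not the CompGuide implementation, and the clinical parameter, default, and recommendations are invented): execution proceeds on a default answer for a missing item of information, and the recommendation is revised when the real answer becomes known.

```python
# Illustrative sketch only: the core idea of Speculative Computation, i.e.
# continue guideline execution with a default answer for a missing clinical
# parameter and revise the recommendation when the real answer arrives.

DEFAULTS = {"penicillin_allergy": False}  # hypothetical default assumption

def recommend(answers):
    """Toy guideline step: pick an antibiotic given (possibly missing) answers."""
    allergy = answers.get("penicillin_allergy", DEFAULTS["penicillin_allergy"])
    return "prescribe amoxicillin" if not allergy else "prescribe a macrolide"

# Phase 1: the answer is not yet known, so a speculative recommendation is
# produced from the default assumption.
pending = {}
print("speculative:", recommend(pending))   # prescribe amoxicillin

# Phase 2: the missing information arrives and contradicts the default, so the
# affected computation is revised.
pending["penicillin_allergy"] = True
print("revised:    ", recommend(pending))   # prescribe a macrolide
```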

    Artificial intelligence and architectural design : an introduction

    The aim of this book on artificial intelligence for architects and designers is to guide future designers in general, and architects in particular, to support the social and cultural wellbeing of humanity in a digital and global environment. This objective is essential today, but it is also extremely broad, interdisciplinary, and interartistic, so we offer only a brief introduction to the subject. We will start with the argument put forward by Professor Jonas Langer on his website some years ago, which we have termed "The Langer's Tree".

    Visual Analytics for the Exploratory Analysis and Labeling of Cultural Data

    Cultural data can come in various forms and modalities, such as text traditions, artworks, music, crafted objects, or even intangible heritage such as biographies of people, performing arts, cultural customs, and rites. The assignment of metadata to such cultural heritage objects is an important task that people working in galleries, libraries, archives, and museums (GLAM) perform on a daily basis. These rich metadata collections are used to categorize, structure, and study collections, but can also be used to apply computational methods. Such computational methods are the focus of Computational and Digital Humanities projects and research. For the longest time, the digital humanities community has focused on textual corpora, including text mining and other natural language processing techniques, although some disciplines of the humanities, such as art history and archaeology, have a long history of using visualizations. In recent years, the digital humanities community has started to shift its focus to include other modalities, such as audio-visual data. In turn, methods in machine learning and computer vision have been proposed for the specificities of such corpora. Over the last decade, the visualization community has engaged in several collaborations with the digital humanities, often with a focus on exploratory or comparative analysis of the data at hand. This includes both methods and systems that support classical Close Reading of the material and Distant Reading methods that give an overview of larger collections, as well as methods in between, such as Meso Reading. Furthermore, a wider application of machine learning methods can be observed on cultural heritage collections, but they are rarely applied together with visualizations to allow for further perspectives on the collections in a visual analytics or human-in-the-loop setting. Visual analytics can help in the decision-making process by guiding domain experts through the collection of interest. However, state-of-the-art supervised machine learning methods are often not applicable to the collection of interest due to missing ground truth. One form of ground truth is class labels, e.g., of entities depicted in an image collection, assigned to the individual images. Labeling all objects in a collection is an arduous task when performed manually, because cultural heritage collections contain a wide variety of different objects with plenty of details. A problem that arises with collections curated in different institutions is that a specific standard is not always followed, so the vocabularies used can drift apart from one another, making it difficult to combine the data from these institutions for large-scale analysis. This thesis presents a series of projects that combine machine learning methods with interactive visualizations for the exploratory analysis and labeling of cultural data. First, we define cultural data with regard to heritage and contemporary data; then we survey the state of the art of existing visualization, computer vision, and visual analytics methods and projects focusing on cultural data collections. After this, we present the problems addressed in this thesis and their solutions, starting with a series of visualizations to explore different facets of rap lyrics and rap artists with a focus on text reuse. Next, we engage in a more complex case of text reuse, the collation of medieval vernacular text editions.
    For this, a human-in-the-loop process is presented that applies word embeddings and interactive visualizations to perform textual alignments on under-resourced languages, supported by labeling of the relations between lines and the relations between words. We then switch the focus from textual data to another modality of cultural data by presenting a Virtual Museum that combines interactive visualizations and computer vision in order to explore a collection of artworks. With the lessons learned from the previous projects, we engage in the labeling and analysis of medieval illuminated manuscripts, and so combine some of the machine learning methods and visualizations that were used for textual data with computer vision methods. Finally, we reflect on the interdisciplinary projects and the lessons learned, before discussing existing challenges when working with cultural heritage data from the computer science perspective, in order to outline potential research directions for machine learning and visual analytics of cultural heritage data.
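
    To make the word-embedding-based alignment step more concrete, the following sketch greedily pairs lines of two tiny, invented "editions" by cosine similarity of averaged word vectors; the embedding table, the lines, and the greedy pairing are placeholders rather than the thesis's actual method or data, where the vectors would come from embeddings trained on the editions themselves and a curator would confirm or relabel the proposed pairs.

```python
# Minimal sketch: align lines of two text editions by cosine similarity of
# averaged word embeddings (all data below is invented for illustration).
import math

EMB = {  # hypothetical 3-dimensional word vectors
    "knight": [0.9, 0.1, 0.0], "rider": [0.8, 0.2, 0.1],
    "castle": [0.1, 0.9, 0.2], "fortress": [0.2, 0.8, 0.3],
    "rides": [0.7, 0.0, 0.6], "travels": [0.6, 0.1, 0.7],
}

def line_vector(line):
    """Average the vectors of the known words in a line."""
    vecs = [EMB[w] for w in line.split() if w in EMB]
    return [sum(c) / len(vecs) for c in zip(*vecs)] if vecs else [0.0, 0.0, 0.0]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na, nb = math.sqrt(sum(x * x for x in a)), math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

edition_a = ["knight rides", "castle"]
edition_b = ["fortress", "rider travels"]

# Greedy alignment: each line of edition A is paired with its most similar
# line in edition B.
for line_a in edition_a:
    best = max(edition_b, key=lambda line_b: cosine(line_vector(line_a), line_vector(line_b)))
    print(f"{line_a!r}  <->  {best!r}")
```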

    How Scenarios Became Corporate Strategies: Alternative Futures and Uncertainty in Strategic Management

    How Scenarios Became Corporate Strategies tracks the transformation of scenario planning, a non-calculative technique for imagining alternative futures, from postwar American thermonuclear defence projects to corporate planning efforts beginning in the late 1960s. Drawing on archival research, the dissertation tells a history of how different corporate strategists in the second half of the twentieth century attempted to engage with future uncertainties by drawing heterogeneous and sometimes contradictory rational and intuitive techniques together in their developments of corporate scenario planning. By tracing the heterogeneity of methodologies and intellectual influences in three case studies from corporate scenario planning efforts in the United States and Britain, the dissertation demonstrates how critical and countercultural philosophies that emphasized irrational human capacities like imagination, consciousness, and intuition, often assumed to be antithetical to the rule-bound, quantitative rationalities of corporate planning efforts, became crucial tools, rather than enemies, of corporate strategy under uncertainty after 1960. The central argument of the dissertation is that corporate scenario planning projects were non-calculative, speculative attempts to augment the calculative techniques of traditional mid-century strategic decision-making with diverse human reasoning tools in order to explore and understand future uncertainties. Consequently, these projects were intertwined with an array of sometimes contradictory genealogies, from technical postwar military planning practices to countercultural intellectual resources that questioned the technological imperatives of modern life. Yet, by the mid-1980s, corporate scenario planning efforts transformed from contemplative strategies for exploring uncertainties into a method associated with the capacities of thought leaders. It was through the rising thought leadership industry of the late twentieth century that scenarios gained legitimacy, enabling multinational corporations to rely upon the charismatic authority of scenario practitioners in the face of unknowable futures. In making this argument, the dissertation revises assumptions in the history of postwar science and technology and in science studies that pivot on the importance of impersonal, calculative strategies and technical capacities in uncertain conditions.