3,867 research outputs found

    Credibility: A multidisciplinary framework

    Full text link
    No abstract. Peer reviewed.
    http://deepblue.lib.umich.edu/bitstream/2027.42/61241/1/1440410114_ftp.pd

    A cognitive perspective on learning, decision-making, and technology evaluations in organisations

    Get PDF
    This dissertation examines how firms’ selection of technological and R&D opportunities shapes the performance of their innovation efforts. Managers select R&D investments in complex and uncertain environments where it is difficult to learn from past decisions. I examine this challenge using empirical and agent-based modelling methods and by focusing on three interrelated aspects: managers’ individual learning processes, the adaptation of mental representations in complex environments, and the role of distributed expertise in group evaluations. In the first chapter, I propose an alternative explanation of how managers learn from experience that does not involve feedback and that is thus applicable to contexts where learning from feedback is difficult. I test this novel learning mechanism, termed ‘representation learning’, by analysing a large proprietary dataset of patent evaluations and termination decisions made by managers at a Fortune 500 firm. The second chapter further explores the performance implications of representation learning by means of an agent-based model of representation and policy search on rugged landscapes. This study examines how different representation search strategies affect decision-makers’ adaptation in complex environments. Finally, the third chapter explores the performance of group evaluation processes when evaluators differ in the depth and breadth of their knowledge of the technologies being evaluated. This research contributes to the management literature by shedding light on the cognitive processes underlying learning and decision-making in uncertain and complex environments. These findings also have practical implications for strategy research and practice concerning the management of uncertain R&D and technology investments. Open Access
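The "rugged landscapes" the abstract refers to are commonly modelled as NK fitness landscapes, where N binary decision variables each interact with K others and higher K makes the landscape more rugged. A minimal, generic sketch of such a landscape with a simple local-search agent (not the dissertation's actual model, whose details are not given here) might look like:

```python
import itertools
import random

# Generic NK landscape sketch: each of the n loci contributes a fitness
# value that depends on itself and its k circular neighbours.
def make_nk_landscape(n, k, seed=0):
    rng = random.Random(seed)
    # One lookup table per locus, keyed by the (k+1)-bit neighbourhood state.
    tables = [{bits: rng.random()
               for bits in itertools.product((0, 1), repeat=k + 1)}
              for _ in range(n)]

    def fitness(genotype):
        total = 0.0
        for i in range(n):
            neighbourhood = tuple(genotype[(i + j) % n] for j in range(k + 1))
            total += tables[i][neighbourhood]
        return total / n  # mean of uniform draws, so always in [0, 1]

    return fitness

# One-bit-flip hill climbing: accept any non-worsening local move.
def hill_climb(fitness, n, steps=200, seed=1):
    rng = random.Random(seed)
    current = tuple(rng.randint(0, 1) for _ in range(n))
    best = fitness(current)
    for _ in range(steps):
        i = rng.randrange(n)
        candidate = current[:i] + (1 - current[i],) + current[i + 1:]
        f = fitness(candidate)
        if f >= best:
            current, best = candidate, f
    return current, best

f = make_nk_landscape(n=10, k=3)
solution, value = hill_climb(f, n=10)
```

With larger K, a simple hill climber like this one gets trapped on local peaks more often, which is what makes representation (how an agent coarse-grains the landscape) matter for search performance.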

    Process Information and Creative Mindsets: An Examination of Their Role in the Evaluation of Creativity

    Get PDF
    Evaluating creativity is a key role for any organization interested in innovation, and how that evaluation occurs has been a focal point for researchers. Although creativity scholars have made strides in understanding creativity evaluations, questions remain about the role that process information plays in the evaluation. While most creativity research involves some type of outcome, such as an idea or product, evaluators often have no description of the creator’s work process or any understanding of how the idea or product was created. In this dissertation, I build upon the existing evaluation literature and critically examine how process information may influence the evaluation of an outcome’s creativity. In doing so, I investigate narratives of both iteration and insight process information, both of which are representative of creative work and likely to influence an evaluator’s perception. I validated materials to manipulate the narratives of creative process information and conducted an experimental study to determine how they affected perceptions of creativity. I also considered the role of an evaluator’s growth creative mindset and how evaluators may differentially interpret and perceive the process information and final product depending on their mindset. The results offer some support that an evaluator’s growth creative mindset matters for creativity evaluations, but the findings do not support the hypothesized interaction effect between an evaluator’s growth creative mindset and process information on a product’s perceived creativity. Post-hoc analyses suggest that the effects of growth creative mindset operate predominantly through the perceived utility of the product, without affecting perceived novelty. Post-hoc analyses also found a significant negative effect of iteration process information on a product’s perceived utility.
This dissertation has implications for any creators who need to discuss or describe their work to potential evaluators, such as colleagues or managers, as well as for researchers interested in the multi-faceted nature of creative evaluations. The implications of this work may also grow in relevance as work-from-home policies and organizational norms change in a post-pandemic world where individuals have more autonomy and control over what others see and know about their work process.

    Improving the process of analysis and comparison of results in dependability benchmarks for computer systems

    Full text link
    Dependability benchmarks are designed to assess, by quantifying dependability and performance attributes, the behavior of systems in the presence of faults. In this type of benchmark, where systems are assessed in the presence of perturbations, failing to select the most suitable system may have serious consequences (economic, reputational, or even loss of life). For that reason, dependability benchmarks are expected to meet certain properties, such as non-intrusiveness, representativeness, repeatability, and reproducibility, which guarantee the robustness and accuracy of their processes. However, despite the importance of comparing systems or components, the field of dependability benchmarking has a standing problem with the analysis and comparison of results.
While the main research focus in this field has been on developing and improving experimental procedures to obtain the required measures in the presence of faults, the processes involved in analyzing and comparing results have been largely neglected. As a consequence, many works in this field analyze and compare the results of different systems ambiguously: the analysis rests on argumentation, or is not reported at all. Under these circumstances, benchmark users find it difficult to apply these benchmarks and compare their results with those obtained by others. Extending the application of dependability benchmarks and cross-exploiting results across works is therefore currently impractical.
This thesis develops a methodology to support the developers and users of dependability benchmarks in tackling the existing problems in the analysis and comparison of results. Designed to guarantee the fulfillment of the benchmarks' properties, the methodology seamlessly integrates the analysis of results into the procedural flow of a dependability benchmark. Inspired by procedures from operational research, it gives evaluators the means to make their analysis process explicit and more representative of the context at hand. The results of applying this methodology to several case studies in different application domains show the contributions of this work to improving the analysis and comparison of results in dependability benchmarking for computer systems. (Abstract also provided in Spanish and Valencian in the original record.)
Martínez Raga, M. (2018). Improving the process of analysis and comparison of results in dependability benchmarks for computer systems [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/111945

    A comparison of primary and secondary relevance judgements for real-life topics

    Get PDF
    The notion of relevance is fundamental to the field of Information Retrieval. Within the field, a generally accepted conception of relevance as inherently subjective has emerged, with an individual’s assessment of relevance influenced by numerous contextual factors. In this paper we present a user study that examines in detail the differences between primary and secondary assessors on a set of “real-world” topics gathered specifically for this work. By gathering topics representative of the staff and students at a major university at a particular point in time, we aim to explore differences between primary and secondary relevance judgements for real-life search tasks. Findings suggest that while secondary assessors may find the assessment task challenging in various ways (they generally possess less interest and knowledge in secondary topics and take longer to assess documents), agreement between primary and secondary assessors is high.
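Inter-assessor agreement of the kind the abstract reports is often summarized with a chance-corrected statistic such as Cohen's kappa. As an illustrative sketch (the judgement lists below are invented, not the study's data):

```python
from collections import Counter

# Cohen's kappa: chance-corrected agreement between two assessors'
# relevance judgements over the same documents (1 = relevant, 0 = not).
def cohens_kappa(a, b):
    assert len(a) == len(b) and len(a) > 0
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    # Expected agreement by chance, from each assessor's label frequencies.
    pa, pb = Counter(a), Counter(b)
    expected = sum(pa[label] * pb[label] for label in set(a) | set(b)) / (n * n)
    return (observed - expected) / (1 - expected)  # undefined if expected == 1

# Hypothetical primary and secondary judgements on ten documents.
primary   = [1, 1, 0, 1, 0, 1, 1, 0, 0, 1]
secondary = [1, 0, 0, 1, 0, 1, 1, 0, 1, 1]
print(round(cohens_kappa(primary, secondary), 2))  # → 0.58
```

Raw percent agreement here is 0.8, but kappa discounts the agreement both assessors would reach by chance given how often each says "relevant", which is why it is the more common statistic for this kind of comparison.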

    Older adults’ motivations in game based systems: Heuristic definition and its connection with fun

    Get PDF
    Efforts are currently being made to generate wellbeing in the elderly population in order to achieve a good quality of life through improved health, social interaction, and psychological health. Among other options, this is achieved through the application of game-based systems, with positive results evidenced in several studies. These approaches are applied not only for entertainment and leisure, but also for learning and generating positive feelings, as a means of escape from loneliness and isolation, and for health improvement and support in daily life. Although these experiences are gradually being applied to the older adult population, they have usually been oriented to a young population with different characteristics, needs, and motivations, where technological mastery is taken for granted. This makes older adults feel limited when initially interacting with this type of experience, which prevents them from fully using and enjoying these technological solutions. In this article, different motivational aspects that encourage older adults to use game-based systems (for learning, fun, health, etc.) were identified and characterized in order to increase the use of these technologies, and to improve the design and evaluation of these experiences so that end users obtain greater enjoyment. These aspects were represented by a motivational model and then established as a set of heuristics. The heuristics were evaluated by means of an expert judgment focused on the design of game experiences, with positive results for their use as guides in the design and construction of game-based systems oriented to older adults. This set of heuristics and their application were published on the PL/PX web platform for detailed explanation, access, and use by the academic community.
    MINCIENCIAS of the government of Colombia; FCT – Fundaçao para a Ciencia e a Tecnologia, I.P. [Project UIDB/05105/202

    Advancing Objectives-Oriented Evaluation With Participatory Evaluation Methodology – A Mixed Methods Study

    Get PDF
    The ability to complete program evaluations of educational programming is typically restricted by the availability of resources, such as time, money, and a trained evaluator. A mixed methods study was completed to explore the use of a participatory program evaluation with the program objectives as an advance organizer. Participatory evaluation is purported to increase organizational learning and promote evaluative thinking within an organization (Cousins & Whitmore, 1998). Objectives-oriented evaluation is an easily understood evaluation method that provides a refined focus on program outcomes (Madaus & Stufflebeam, 1989). An explanatory sequential design was employed, using the quantitative findings to guide the collection of qualitative data that further explored the participants’ experiences completing the program evaluation. The findings indicated that this combined evaluation methodology met the criteria posited in Daigneault and Jacob (2009) and Toal (2009) to be considered participatory in its implementation. It also involved participants in ways that gave them experiences which helped develop evaluative thinking, skills, and beliefs.

    Crowdsourcing for Engineering Design: Objective Evaluations and Subjective Preferences

    Full text link
    Crowdsourcing enables designers to reach out to large numbers of people who may not previously have been considered when designing a new product, and to listen to their input by aggregating their preferences and evaluations over potential designs, aiming to improve "good" and catch "bad" design decisions during the early-stage design process. This approach puts human designers (be they industrial designers, engineers, marketers, or executives) at the forefront, with computational crowdsourcing systems on the backend to aggregate subjective preferences (e.g., which next-generation Brand A design best competes stylistically with next-generation Brand B designs?) or objective evaluations (e.g., which military vehicle design has the best situational awareness?). These crowdsourcing aggregation systems are built using probabilistic approaches that account for the irrationality of human behavior (i.e., violations of reflexivity, symmetry, and transitivity), approximated by modern machine learning algorithms and optimization techniques as necessitated by the scale of data (millions of data points, hundreds of thousands of dimensions). This dissertation presents research findings suggesting the unsuitability of current off-the-shelf crowdsourcing aggregation algorithms for real engineering design tasks due to the sparsity of expertise in the crowd, and methods that mitigate this limitation by incorporating appropriate information for expertise prediction. Next, we introduce and interpret a number of new probabilistic models for crowdsourced design that provide large-scale preference prediction and full design space generation, building on statistical and machine learning techniques such as sampling methods, variational inference, and deep representation learning.
Finally, we show how these models and algorithms can advance crowdsourcing systems by abstracting the appropriate yet unwieldy underlying mathematics away behind easier-to-use visual interfaces, practical for engineering design companies and governmental agencies engaged in complex engineering systems design.
    PhD, Design Science. University of Michigan, Horace H. Rackham School of Graduate Studies.
    http://deepblue.lib.umich.edu/bitstream/2027.42/133438/1/aburnap_1.pd
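A classic baseline for the kind of pairwise preference aggregation the abstract describes is the Bradley–Terry model, which fits a latent "merit" score per design from pairwise comparisons. The sketch below is a generic illustration with invented data and design names; the dissertation's models are considerably richer (expertise weighting, variational inference, deep representations):

```python
# Bradley-Terry aggregation of crowd pairwise preferences:
# wins[(i, j)] counts how often design i was preferred over design j.
def bradley_terry(wins, designs, iters=100):
    score = {d: 1.0 for d in designs}
    for _ in range(iters):  # simple minorise-maximise (MM) updates
        new = {}
        for i in designs:
            total_wins = sum(wins.get((i, j), 0) for j in designs if j != i)
            denom = sum((wins.get((i, j), 0) + wins.get((j, i), 0)) /
                        (score[i] + score[j]) for j in designs if j != i)
            new[i] = total_wins / denom if denom else score[i]
        # Normalize so scores sum to len(designs); only ratios matter.
        total = sum(new.values())
        score = {d: s * len(designs) / total for d, s in new.items()}
    return score

# Hypothetical comparison counts for three candidate designs.
designs = ["A", "B", "C"]
wins = {("A", "B"): 7, ("B", "A"): 3,
        ("A", "C"): 6, ("C", "A"): 4,
        ("B", "C"): 5, ("C", "B"): 5}

scores = bradley_terry(wins, designs)
ranking = sorted(designs, key=scores.get, reverse=True)
```

Under this model, the probability that design i is preferred over design j is score[i] / (score[i] + score[j]); the dissertation's point is that such off-the-shelf aggregation breaks down when expertise in the crowd is sparse, motivating the expertise-aware extensions.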