
    Bodies of Seeing: A video ethnography of academic x-ray image interpretation training and professional vision in undergraduate radiology and radiography education

This thesis reports on a UK-based video ethnography of academic x-ray image interpretation training across two undergraduate courses in radiology and radiography. By studying the teaching and learning practices of the classroom, I initially explore the professional vision of x-ray image interpretation and how its relation to normal radiographic anatomy founds the practice of being ‘critical’. This criticality accomplishes a faculty of perceptual norms that is coded and organised, and thereby a specific radiological vision. Professionals’ commitment to the cognitivist rhetoric of ‘looking at’/‘pattern recognition’ builds this critical perception, a perception that deepens in organisation when professionals endorse a ‘systematic approach’ that mediates matter-of-fact thoroughness and offers a helpful critical commentary on the image. In what follows, I explore how x-ray image interpretation is constituted in case presentations. During training, x-ray images are treated with suspicion, as potentially misleading, and are aligned with a commitment to discursive contexts of ‘missed abnormality’, ‘interpretive risk’, and ‘technical error’. The image is subsequently constructed as ambiguous, such that what is shown cannot be taken at face value. This interconnects with reenacting ideals around ‘seeing clearly’ that are explained through the teaching practices and material world of the academic setting and how, if misinterpretation is established, the ambiguity of the image is reduced by embodied gestures and technoscientific knowledge. By making this correction, the ambiguous image is reenacted and the misinterpretation of image content is explained. To conclude, I highlight how the professional vision of academic x-ray image interpretation prepares students for the workplace, shapes the classificatory interpretation of (ab)normal anatomy, manages ambiguity through embodied expectations and bodily norms, and cultivates body-machine relations.

It's the intention that matters: neural representations of learning from intentional harm in social interactions

As a social species, humans are not only driven by the pursuit of necessities such as food and shelter, but also by complex processes such as social interactions. To navigate our everyday life, we use information gathered throughout a lifetime of social interactions in which we learn from others and their actions but also, and no less importantly, about others. To create a complete picture of a social interaction, we assess the individual we interact with, make judgements about them and their actions, and integrate what we know with the consequences of their actions. In this way, we learn the relationship between events (e.g. others’ actions) and environmental stimuli, such as the other individuals who predict those actions. As we encounter more people and go through more interactions, we continuously update information stored in memory from previous experiences. A common task, for example hurrying through a busy corridor at work, involves not only avoiding physical harm from bumping into the sharp corner of the coffee machine, but also avoiding a co-worker we are in a feud with, who we believe knowingly spilled hot coffee on another co-worker the week before. How social information is processed is key to understanding rarer but more impactful events that can have a lifelong effect on an individual’s life. Interpersonal trauma, a type of trauma caused by harm received from another individual, leads to post-traumatic stress disorder (PTSD) more often than non-social trauma such as a car crash (Kleim, Ehlers, & Glucksman, 2007). To understand why a specific social harm affects us negatively, it is crucial to study how the brain integrates social as well as non-social (physical) information during the harmful event. In Studies I, II, and III we investigated how different streams of information (social and physical) are integrated during a social interaction. We were interested in how the intentionality of an action with directly aversive consequences for an individual can change that individual’s judgements of the action and of the person performing it. Using a time-based neuroimaging approach, we investigated how the value of an action is integrated with that of the intention behind it. Study I revealed evidence suggesting that the intentionality of a directly experienced aversive action is represented throughout the cortex in neural activity patterns that form over time. Study II highlighted the importance of timing and sample size in similar paradigms, and showed that neural pattern formation in response to aversive actions, regardless of the intentions behind them, is robustly replicated. In Study III we asked how these learned action outcomes and knowledge about the people performing the harmful action change neural connectivity, and how this translates into changes in perception and memory 24 hours later. We found increased connectivity between the hippocampus and the amygdala, which correlated with generalized memory responses to images associated with shocks from an intentional harm-doer, and increased connectivity between the fusiform face area (FFA) and the insula, as well as between the FFA and the dorsomedial prefrontal cortex (dmPFC), which correlated with facilitated recognition of the intentional harm-doer’s face.

    Concepts, Attention, And The Contents Of Conscious Visual Experience

Ph.D. Thesis, University of Hawaiʻi at Mānoa, 2018.

    Towards an epistemology of medical imaging

Doctoral thesis (co-tutelle), History and Philosophy of Science (Philosophy), Faculdade de Ciências da Universidade de Lisboa and Università degli Studi di Milano, 201

    Healthcare data heterogeneity and its contribution to machine learning performance

Thesis by compendium of publications. Data quality assessment has many dimensions, from those as obvious as completeness and consistency to others less evident, such as correctness or the ability to represent the target population. In general, these dimensions can be classified into those produced by an external effect and those inherent in the data itself. This work focuses on the dimensions inherent to the data, namely temporal and multisource variability, applied to healthcare data repositories. Processes are usually improved over time, and this has a direct impact on the data distribution. Similarly, how a process is executed may vary across sources for many reasons, such as differing interpretations of standard protocols or the differing prior experience of experts. Artificial intelligence has become one of the most widespread technological paradigms in almost all scientific and industrial fields: advances not only in models but also in hardware have led to its use in almost every area of science. Problems solved with this technology, however, often have the drawback of not being interpretable, or at least not as interpretable as classical mathematical or statistical techniques. This motivated the emergence of the concept of "explainable artificial intelligence", which studies methods to quantify and visualize the training process of machine learning models. On the other hand, real systems can often be represented as large networks (graphs), and one of the most relevant features of such networks is their community, or clustering, structure. Since sociological, biological, and clinical situations can usually be modeled with graphs, community detection algorithms are becoming increasingly common in the biomedical field. The present doctoral thesis makes contributions in these three areas. First, temporal and multisource variability assessment methods based on information geometry were used to detect changes in data distribution that may hinder data reuse and, hence, the conclusions that can be extracted from the data. The usefulness of this methodology was demonstrated by a temporal variability analysis that detected anomalies in the electronic health records of a hospital over seven years, and it was further shown that such an analysis can add value when applied before any study. To this end, we first used machine learning techniques to extract the variables that most strongly influenced headache intensity in migraine patients. A principal characteristic of machine learning algorithms is their capability to fit the training set; in datasets with few observations, the model can be biased by the training sample. The variability observed after applying the methodology, treating the registries of migraine patients with different headache intensities as sources, served as evidence for the reliability of the extracted features. Secondly, the same approach was applied to measure variability among the gray-level histograms of digital mammograms. We demonstrated that the observed variability was produced by the acquisition device and, after defining an image preprocessing step, the performance of a deep learning model estimating an image marker of breast cancer risk increased.
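The abstract gives no implementation details, but the temporal-variability idea can be sketched in a few lines: compare each time batch's distribution of a variable against a reference distribution using an information-theoretic distance. The sketch below uses scipy's Jensen-Shannon distance on monthly histograms of a synthetic variable; the column names, bin count, threshold-free summary, and toy data are illustrative assumptions, not taken from the thesis.

```python
# Illustrative sketch only: Jensen-Shannon distance between each month's
# distribution of one variable and the pooled distribution, as a simple proxy
# for the information-geometric temporal-variability assessment described above.
# Column names ("month", "age") and the toy data are hypothetical.
import numpy as np
import pandas as pd
from scipy.spatial.distance import jensenshannon

def monthly_js_distances(df: pd.DataFrame, value_col: str,
                         month_col: str = "month", bins: int = 20) -> pd.DataFrame:
    """Return the JS distance of each month's histogram to the pooled histogram."""
    edges = np.histogram_bin_edges(df[value_col].dropna(), bins=bins)
    pooled, _ = np.histogram(df[value_col].dropna(), bins=edges, density=True)
    rows = []
    for month, grp in df.groupby(month_col):
        hist, _ = np.histogram(grp[value_col].dropna(), bins=edges, density=True)
        rows.append({"month": month, "js_distance": jensenshannon(hist, pooled)})
    return pd.DataFrame(rows)

# Toy usage: a shift in the variable after month 6 shows up as larger distances.
rng = np.random.default_rng(0)
toy = pd.DataFrame({
    "month": np.repeat(np.arange(1, 13), 500),
    "age": np.concatenate([rng.normal(50, 10, 6 * 500),    # months 1-6
                           rng.normal(58, 10, 6 * 500)]),  # months 7-12, shifted
})
print(monthly_js_distances(toy, "age").round(3))
```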
Given a dataset containing the answers to a survey composed of psychometric scales, that is, questionnaires that measure psychological factors such as depression or coping, two deep learning architectures that exploited the structure of the data were defined. First, we designed a deep learning architecture based on the conceptual structure of the psychometric scales; trained to model the participants' degree of happiness, it improved performance compared with classical statistical approaches. A second architecture, designed automatically using community detection in graphs, was not only a contribution in itself through the automation of the design process, but also obtained results comparable to its predecessor.
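For the second architecture, the abstract says the grouping of inputs was derived automatically by community detection on the data's graph structure. As a hedged illustration of that step only, not the thesis's actual algorithm, the sketch below builds a correlation graph over synthetic questionnaire items and lets networkx's greedy modularity method propose item communities that could each feed a separate sub-network; the correlation threshold and the data are invented for the example.

```python
# Hedged sketch: detect communities of correlated questionnaire items, which
# could then define the item groupings of a modular neural architecture.
# The threshold and synthetic data are illustrative, not taken from the thesis.
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

rng = np.random.default_rng(1)
# Synthetic answers: 300 respondents x 9 items driven by two latent factors.
factors = rng.normal(size=(300, 2))
loadings = np.array([[1, 0], [1, 0], [1, 0], [1, 0],
                     [0, 1], [0, 1], [0, 1], [0, 1], [0.5, 0.5]])
answers = factors @ loadings.T + 0.5 * rng.normal(size=(300, 9))

# Build a graph whose edges link items with absolute correlation above a cut-off.
corr = np.corrcoef(answers, rowvar=False)
graph = nx.Graph()
graph.add_nodes_from(range(9))
for i in range(9):
    for j in range(i + 1, 9):
        if abs(corr[i, j]) > 0.3:          # hypothetical threshold
            graph.add_edge(i, j, weight=abs(corr[i, j]))

# Each detected community is a candidate group of inputs for one sub-network.
for k, community in enumerate(greedy_modularity_communities(graph, weight="weight")):
    print(f"community {k}: items {sorted(community)}")
```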
Pérez Benito, FJ. (2020). Healthcare data heterogeneity and its contribution to machine learning performance [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/154414

Data infrastructures and digital labour: the case of teleradiology

In this thesis, I investigate the effects of digitalisation in teleradiology, the practice of outsourcing radiology diagnosis, through an analysis of the role of infrastructures that enable the transfer, storage, and processing of digital medical data. Consisting of standards, code, protocols and hardware, these infrastructures contribute to the making of complex supply chains that intervene in existing labour processes and produce interdependent relations among radiologists, patients, data engineers, and auxiliary workers. My analysis focuses on three key infrastructures that facilitate teleradiology: Picture Archiving and Communication Systems (PACS), the Digital Imaging and Communications in Medicine (DICOM) standard, and the Health Level 7 (HL7) standard. PACS is a system of four interconnected components: imaging hardware, a secure network, viewing stations for reading images, and data storage facilities. All of these components use DICOM, which specifies data formats and network protocols for the transfer of data within PACS. HL7 is a standard that defines data structures for transfer between medical information systems. My research draws on fieldwork in teleradiology companies in Sydney, Australia, and Bangalore, India, which specialise in the international outsourcing of medical imaging diagnostics and provide services for hospitals in Europe, the USA, and Singapore, among others. I argue that PACS, DICOM, and HL7 establish a technopolitical context that erodes boundaries between social institutions of labour management and material infrastructures of data control. This intertwining of bureaucratic and infrastructural modes of regulation gives rise to a variety of strategies deployed by companies for maximising productivity, as well as counter-strategies of workers who leverage mobility and qualifications to their advantage.
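For readers unfamiliar with DICOM, the standard named above, a short sketch shows the kind of structured metadata that travels with every image through a PACS. It uses the open-source pydicom library, which is not mentioned in the thesis, and the file path is a placeholder; the point is simply that the "image" is a tagged data object rather than a bare picture.

```python
# Minimal illustration of DICOM as a data format: every image carries a
# dictionary of standard tags alongside the pixel data. The file path is a
# placeholder; pydicom is an open-source reader, not software from the thesis.
from pydicom import dcmread

ds = dcmread("example_study/image_001.dcm")   # hypothetical local file

# A few of the standard tags that PACS and teleradiology workflows rely on.
print("Modality:         ", ds.Modality)             # e.g. "CR" or "CT"
print("Study UID:        ", ds.StudyInstanceUID)      # globally unique study id
print("Body part:        ", ds.get("BodyPartExamined", "unknown"))
print("Image size:       ", ds.Rows, "x", ds.Columns)

pixels = ds.pixel_array                               # decoded image as a NumPy array
print("Pixel dtype/shape:", pixels.dtype, pixels.shape)
```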

    Advancing efficiency and robustness of neural networks for imaging

Enabling machines to see and analyze the world is a longstanding research objective. Advances in computer vision have the potential to influence many aspects of our lives, as they can enable machines to tackle a variety of tasks. Great progress in computer vision has been made, catalyzed by recent advances in machine learning and especially the breakthroughs achieved by deep artificial neural networks. The goal of this work is to alleviate limitations of deep neural networks that hinder their large-scale adoption for real-world applications. To this end, it investigates methodologies for constructing and training deep neural networks with low computational requirements. Moreover, it explores strategies for achieving robust performance on unseen data. Of particular interest is the application of segmenting volumetric medical scans, because of the technical challenges it imposes as well as its clinical importance. The developed methodologies are generic and of relevance to a broader computer vision and machine learning audience. More specifically, this work introduces an efficient 3D convolutional neural network architecture, which achieves high performance for segmentation of volumetric medical images, an application previously hindered by the high computational requirements of 3D networks. It then investigates the sensitivity of network performance to hyper-parameter configuration, which we interpret as overfitting the model configuration to the data available during development. It is shown that ensembling a set of models with diverse configurations mitigates this and improves generalization. The thesis then explores how to utilize unlabelled data for learning representations that generalize better. It investigates domain adaptation and introduces an architecture for adversarial networks tailored to the adaptation of segmentation networks. Finally, a novel semi-supervised learning method is proposed that introduces a graph in the latent space of a neural network to capture relations between labelled and unlabelled samples. It then regularizes the embedding to form a compact cluster per class, which improves generalization.
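The thesis's architecture is not reproduced here, but a minimal, generic sketch of a 3D convolutional segmentation network in PyTorch illustrates why volumetric models are so much more demanding than their 2D counterparts; the channel widths, depth, and patch size are arbitrary choices for the example.

```python
# Generic sketch of a tiny 3D fully convolutional segmenter in PyTorch.
# This is NOT the thesis's architecture; channel widths, depth, and the
# patch size below are arbitrary choices for illustration only.
import torch
import torch.nn as nn

class Tiny3DSegmenter(nn.Module):
    def __init__(self, in_channels: int = 1, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_channels, 16, kernel_size=3, padding=1),
            nn.BatchNorm3d(16), nn.ReLU(inplace=True),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.BatchNorm3d(32), nn.ReLU(inplace=True),
        )
        # 1x1x1 convolution maps features to per-voxel class scores.
        self.classifier = nn.Conv3d(32, num_classes, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# A single 64^3 patch already holds ~262k voxels, which is why 3D networks
# are so much more memory- and compute-hungry than 2D ones.
model = Tiny3DSegmenter()
patch = torch.randn(1, 1, 64, 64, 64)   # (batch, channel, depth, height, width)
print(model(patch).shape)                # torch.Size([1, 2, 64, 64, 64])
```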

    A situated method for modelling and analysing the efficiency of cognitive activity during the radiology reporting workflow using eye-tracking

The success of modern medical imaging systems has created a data overload problem, in which an ever-increasing number of examinations generate more images per study, all of which need to be evaluated by radiologists or other reporting practitioners. This operational bottleneck has the potential to create fatigue and burnout due to the high mental workload required to keep up with demand. The problem centres on the cognitive complexity of the radiology reporting workflow and the associated workstation interactions involved in diagnostic report generation. There is a significant body of work evaluating the behaviour of radiologists using controlled laboratory-based techniques, but these non-naturalistic studies fail to address the highly context-dependent nature of the radiology reporting workflow. For example, the early eye-tracking work of Carmody et al., the psychometric studies by Krupinski et al., and the workstation interaction evaluations of Moise et al., whilst highly principled, can all be questioned on the grounds of ecological validity and authenticity. This thesis asserts that the only way to truly understand and resolve the radiology data overload problem is to develop a situated method for observing the reporting workflow that can evaluate the behaviours of reporting clinicians in relation to their authentic reporting context. To this end, this study set out to develop a new approach for observing and analysing the cognitive activities of reporters relative to the demands of their genuine working environment, supported by the application of a Critical Realist perspective to naturalistic workplace observations. This goal was achieved through four key project deliverables:
• An in-depth exploratory study of the radiology overload problem, based on an extensive literature review and situated observations of authentic reporting workflows.
• A descriptive hierarchical activity model of the reporting workflow that can be understood by clinicians, application designers and researchers alike.
• A generalised methodology and research protocol for conducting situated observations of the radiology reporting workflow, using an analysis based on the process tracing of sequences of Object Related Actions, captured with eye-tracking and multimodal recordings.
• A set of case studies demonstrating the applicability of the research protocol, involving 5 Radiology Consultants, 2 Radiology Registrars and one Reporting Radiographer at a single NHS hospital in the UK.
The final workflow evaluation of the case studies demonstrated that activities such as error correction and the collection of supporting radiological information from previous studies are complex, time-consuming and cognitively demanding. These activities are characterised by long, low-utility actions that correspond to what Kahneman refers to as "thinking slow". The participants also appeared to be self-optimising their workflow through sparse use of complex functionality and system tools. From these observations, the author recommends that any intervention that reduces the number and duration of the object related actions used to produce radiology reports will reduce cognitive load, increase overall efficiency, and go some way towards alleviating the data overload problem.
This study establishes a new set of situated techniques able to capture and quantify the complex dynamic activities that make up the radiology reporting workflow. It is hoped that the ability to distil useful and impactful insights from users' workstation behaviours can serve as the basis for further work on workflow analysis and redesign, which will ultimately improve the working lives of Radiologists and other Reporting Clinicians. Lastly, the generic nature of these techniques makes them amenable for use within any type of complex sociotechnical human factors study related to the cognitive efficiency of the user.
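As a rough illustration of what process tracing of Object Related Actions enables, the sketch below aggregates counts and durations of actions from a small, invented event log using pandas; the column names and action labels are hypothetical and do not reproduce the thesis's coding scheme.

```python
# Hedged sketch: summarising a coded log of Object Related Actions (ORAs).
# Column names and action labels are invented for illustration and do not
# reproduce the coding scheme used in the thesis.
import pandas as pd

log = pd.DataFrame({
    "report_id": [1, 1, 1, 1, 2, 2, 2],
    "action":    ["open_study", "scroll_images", "check_prior",
                  "dictate", "open_study", "dictate", "correct_error"],
    "start_s":   [0.0, 4.2, 20.1, 55.0, 0.0, 12.3, 60.5],
    "end_s":     [4.2, 20.1, 55.0, 80.2, 12.3, 60.5, 95.0],
})
log["duration_s"] = log["end_s"] - log["start_s"]

# Count and total duration per action type: long, frequent actions are the
# natural targets for the workflow interventions recommended above.
summary = (log.groupby("action")["duration_s"]
              .agg(count="count", total_s="sum", mean_s="mean")
              .sort_values("total_s", ascending=False))
print(summary.round(1))
```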

    Change blindness: eradication of gestalt strategies

Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this, we changed the spatial position of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4) = 2.565, p = 0.185]. This may suggest two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it gives further weight to the argument that objects may be stored in and retrieved from a pre-attentional store during this task.

View on education: I see; therefore, I learn
