
    Design Ltd.: Renovated Myths for the Development of Socially Embedded Technologies

    This paper argues that the traditional and mainstream mythologies that have been continually told within the Information Technology domain among designers and advocates of conceptual modelling since the 1960s, in different fields of computing science, could now be renovated or substituted in the mould of more recent discourses about performativity, complexity and end-user creativity that have been constructed across different fields in the meantime. The paper submits that these discourses could motivate IT professionals to undertake alternative approaches toward the co-construction of socio-technical systems, i.e., social settings where humans cooperate to reach common goals by means of mediating computational tools. The authors advocate further discussion about, and consolidation of, some concepts in design research, design practice and, more generally, Information Technology (IT) development, such as: task-artifact entanglement, universatility (sic) of End-User Development (EUD) environments, the bricolant/bricoleur end-user, the logic of bricolage, maieuta-designers (sic), and the laissez-faire method of socio-technical construction. Points backing these and similar concepts are made to promote further discussion on the need to rethink the main assumptions underlying IT design and development some fifty years after the coming of age of software and modern IT in the organizational domain.
    Comment: This is the peer-unreviewed version of a manuscript that is to appear in D. Randall, K. Schmidt, & V. Wulf (Eds.), Designing Socially Embedded Technologies: A European Challenge (2013, forthcoming) with the title "Building Socially Embedded Technologies: Implications on Design" within an EUSSET editorial initiative (www.eusset.eu/

    Human-Data Interaction in Healthcare

    In this paper, we focus on an emerging strand of IT-oriented research, namely Human-Data Interaction (HDI), and how it can be applied to healthcare. HDI concerns how humans create and use data by means of interactive systems, which can both assist and constrain them, as well as passively collect and proactively generate data. Healthcare provides a challenging arena in which to test the potential of HDI to provide a new, user-centered perspective on how data work should be supported and assessed, especially in light of the fact that data are becoming increasingly big and that many tools are now available for lay people, including doctors and nurses, to interact with health-related data.
    Comment: 10 pages, 4 figures

    Machine Learning in Orthopedics: A Literature Review

    In this paper we present the findings of a systematic literature review covering the articles published in the last two decades in which the authors described the application of a machine learning technique or method to an orthopedic problem or purpose. By searching both the Scopus and Medline databases, we retrieved, screened and analyzed the content of 70 journal articles, and coded these resources following an iterative method within a Grounded Theory approach. We report the survey findings by outlining the articles' content in terms of the main machine learning techniques mentioned therein, the orthopedic application domains, the source data, and the quality of the reported predictive performance.

    Making People Aware of Deviations from Standards in Health Care

    In this paper we consider the role of standards as a means for interoperability among members of different communities. In the healthcare domain in particular, there is an increasing number of efforts to develop explicit and formal representations of medical concepts, so as to provide a common infrastructure for the reuse of clinical information and for the integration and sharing of medical knowledge across the world. A critical issue arises when local customizations of standards are used as standards: if this occurs, standards are no longer able to guarantee their supportive function for interoperability. To overcome this problem we propose a solution aimed at making members of different facilities aware of the changes made locally to a standard. At the architectural level, we propose to build a layer that acts upon the interface of the application by which the articulation of activities across organizational boundaries is mediated (e.g., a handover between different healthcare facilities). At the application level, we provide practitioners with a common visual notation allowing them to enrich the artifacts that mediate inter-articulation by means of a reference to a standard, e.g., a schema of intervention. We claim that this increased awareness can support different people in aligning practices with standards and in making standards effective means for coordination and interoperability. Furthermore, we report a case focusing on such a layer and visual notation, used to enrich the interface of the information system that mediates the handover between an Emergency Service and a hospital emergency department.

    The unbearable (technical) unreliability of automated facial emotion recognition

    Emotion recognition, and in particular facial emotion recognition (FER), is among the most controversial applications of machine learning, not least because of its ethical implications for human subjects. In this article, we address the controversial conjecture that machines can read emotions from our facial expressions by asking whether this task can be performed reliably. Rather than considering the potential harms or scientific soundness of facial emotion recognition systems, we focus on the reliability of the ground truths used to develop them, assessing how well different human observers agree on the emotions they detect in subjects' faces. Additionally, we discuss the extent to which sharing context can help observers agree on the emotions they perceive on subjects' faces. Briefly, we demonstrate that when large and heterogeneous samples of observers are involved, the task of emotion detection from static images crumbles into inconsistency. We thus reveal that any endeavour to understand human behaviour from large sets of labelled patterns is over-ambitious, even if it were technically feasible. We conclude that we cannot speak of actual accuracy for facial emotion recognition systems for any practical purpose.
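    The reliability question here is, at bottom, one of inter-observer agreement. As a purely illustrative sketch (the ratings below are hypothetical, not the article's data), chance-corrected agreement on emotion labels can be quantified with Fleiss' kappa:

        import numpy as np

        def fleiss_kappa(counts):
            # counts[i, j] = number of raters assigning image i to emotion j;
            # every image must be rated by the same number of raters.
            n_items = counts.shape[0]
            n_raters = counts.sum(axis=1)[0]
            p_j = counts.sum(axis=0) / (n_items * n_raters)   # category prevalence
            p_i = ((counts ** 2).sum(axis=1) - n_raters) / (n_raters * (n_raters - 1))
            p_bar = p_i.mean()          # observed agreement across rater pairs
            p_e = (p_j ** 2).sum()      # agreement expected by chance alone
            return (p_bar - p_e) / (1 - p_e)

        # Hypothetical data: 4 static face images, 6 basic-emotion categories,
        # 10 observers each; each row sums to the number of raters.
        ratings = np.array([
            [6, 2, 1, 1, 0, 0],   # fairly consistent labelling
            [3, 3, 2, 1, 1, 0],   # ambiguous expression
            [0, 1, 1, 2, 3, 3],
            [1, 1, 2, 2, 2, 2],   # near-uniform disagreement
        ])
        print(f"Fleiss' kappa: {fleiss_kappa(ratings):.3f}")

    Values near zero indicate agreement no better than chance, which is the kind of inconsistency the article reports when observer samples grow large and heterogeneous.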

    A Proposal For COVID-19 Applications Enabling Extensive Epidemiological Studies

    During the next phase of the COVID-19 outbreak, mobile applications could be the most widely used and proposed technical solution for monitoring and tracking, acquiring data from subgroups of the population. A possible problem is data fragmentation, which could have three harmful effects: i) the data might not cover the minimum percentage of the population needed for effective monitoring, ii) the data could be heavily biased due to differing collection policies, and iii) the apps might not be able to monitor subjects moving across different zones or countries. A common approach could solve these problems by defining requirements for the selection of observed data and technical specifications for full interoperability between different solutions. This work aims to integrate the international framework of requirements in order to mitigate the known issues and to suggest a method for clinical data collection that provides researchers and public health institutions with significant and reliable data. First, we propose to identify which data are relevant for COVID-19 monitoring through a review of the literature and guidelines. We then analyse how the currently available guidelines for COVID-19 monitoring applications drafted by the European Union and the World Health Organization address the issues listed above. Finally, we propose a first draft of an integration of the current guidelines.
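    One way to read the interoperability requirement above is as a shared record format that every compliant application can emit. The sketch below is a minimal assumption of what such a record might look like; the field names are illustrative and are not drawn from the EU or WHO guidelines discussed in the paper:

        # Illustrative only: field names are assumptions, not taken from the
        # EU/WHO guidelines the paper analyses.
        import json
        from dataclasses import dataclass, asdict, field
        from typing import List, Optional

        @dataclass
        class MonitoringRecord:
            subject_id: str       # pseudonymous ID, stable across zones and apps
            timestamp: str        # ISO 8601, so records from different apps align
            zone: str             # region/country code, for cross-zone tracking
            symptoms: List[str] = field(default_factory=list)  # standardized codes
            test_result: Optional[str] = None  # "positive", "negative", or None

        record = MonitoringRecord(
            subject_id="pseudonym-0001",
            timestamp="2020-05-01T12:00:00Z",
            zone="IT-25",
            symptoms=["fever", "cough"],
        )
        print(json.dumps(asdict(record)))  # one JSON shape any compliant app could emit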

    IGV Short Scale to Assess Implicit Value of Visualizations through Explicit Interaction

    This paper reports the assessment of the infographics-value (IGV) short scale, designed to measure the value in the use of infographics. The scale was built to assess the implicit quality dimensions of infographics, as experienced during the execution of tasks in a contextualized scenario. Users were asked to retrieve a piece of information by explicitly interacting with the infographics. After usage, they were asked to rate the quality dimensions of the infographics, namely usefulness, intuitiveness, clarity, informativity, and beauty, together with the overall value perceived from interacting with them. Each dimension, overall value included, was coded as a six-point rating scale item. The proposed IGV short scale model was validated with 650 people. Our analysis confirmed that all dimensions considered in our scale were independently significant and contributed to assessing the implicit value of infographics. The IGV short scale is a lightweight but exhaustive tool to rapidly assess the implicit value of an explicit interaction with infographics in daily tasks, where value in use is crucial to measuring the situated effectiveness of visual tools.
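    Scale validation of this kind typically includes an internal-consistency check across the items. A minimal sketch, using simulated responses rather than the study's actual data and not necessarily the paper's exact analysis, might compute Cronbach's alpha as follows:

        import numpy as np

        def cronbach_alpha(items):
            # items: (n_respondents, n_items) matrix of rating-scale answers.
            k = items.shape[1]
            item_vars = items.var(axis=0, ddof=1).sum()
            total_var = items.sum(axis=1).var(ddof=1)
            return (k / (k - 1)) * (1 - item_vars / total_var)

        # Simulated data: 650 respondents x 6 six-point items (usefulness,
        # intuitiveness, clarity, informativity, beauty, overall value).
        rng = np.random.default_rng(0)
        latent = rng.normal(3.5, 1.0, size=(650, 1))   # shared "value" factor
        responses = np.clip(np.rint(latent + rng.normal(0, 0.7, size=(650, 6))), 1, 6)
        print(f"Cronbach's alpha: {cronbach_alpha(responses):.2f}")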

    Artificial intelligence-based tools to control healthcare associated infections: A systematic review of the literature

    Background: Healthcare-associated infections (HAIs) are the most frequent adverse events in healthcare and a global public health concern. Surveillance is the foundation for effective HAI prevention and control. Manual surveillance is labor-intensive, costly, and lacks standardization. Artificial intelligence (AI) and machine learning (ML) might support the development of HAI surveillance algorithms aimed at understanding HAI risk factors, improving patient risk stratification, identifying transmission pathways, and enabling timely or real-time detection. Scant evidence is available on AI and ML implementation in the field of HAIs, and no clear pattern emerges on their impact. Methods: We conducted a systematic review following the PRISMA guidelines to systematically retrieve, quantitatively pool, and critically appraise the available evidence on the development, implementation, performance and impact of ML-based HAI detection models. Results: Of 3445 identified citations, 27 studies were included in the review, the majority published in the US (n = 15, 55.6%) and on surgical site infections (SSI, n = 8, 29.6%). Only 1 randomized controlled trial was included. Within the included studies, 17 (63%) ML approaches were classified as predictive and 10 (37%) as retrospective. Most of the studies compared ML algorithms' performance with that of non-ML logistic regression statistical algorithms; 18.5% compared the performance of different ML models, 11.1% assessed ML algorithms' performance against clinical diagnosis scores, and 11.1% against standard or automated surveillance models. Overall, there is moderate evidence that ML-based models perform as well as or better than non-ML approaches and that they reach relatively high performance standards. However, heterogeneity among the studies was very high and did not dissipate significantly in subgroup analyses by type of infection or type of outcome. Discussion: The available evidence mainly focuses on the development and testing of HAI detection and prediction models, while their adoption and impact for research, healthcare quality improvement, or national surveillance purposes remain largely unexplored.
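    The typical comparison the review describes, an ML model against a non-ML logistic regression baseline, can be sketched as follows; the data here are synthetic and the model choices are assumptions, not those of any reviewed study:

        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import train_test_split

        # Synthetic stand-in for an infection-risk dataset; HAIs are rare
        # events, hence the class imbalance.
        X, y = make_classification(n_samples=2000, n_features=20,
                                   weights=[0.9, 0.1], random_state=42)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=42)

        models = [("logistic regression (non-ML baseline)", LogisticRegression(max_iter=1000)),
                  ("random forest (ML model)", RandomForestClassifier(random_state=42))]
        for name, model in models:
            model.fit(X_tr, y_tr)
            auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
            print(f"{name}: AUC = {auc:.3f}")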