
    Legal Aspects of the Use of Artificial Intelligence in Telemedicine

    Objective: the rapid expansion of telemedicine in clinical practice and the increasing use of Artificial Intelligence have raised many privacy issues and concerns among legal scholars. Due to the sensitive nature of the data involved, particular attention should be paid to the legal aspects of these systems. This article aims to explore the legal implications of the use of Artificial Intelligence in telemedicine, especially when continuous learning and automated decision-making systems are involved; in fact, providing personalized medicine through continuous learning systems may represent an additional risk. Particular attention is paid to vulnerable groups, such as children, the elderly, and severely ill patients, because of both the digital divide and the difficulty of expressing free consent. Methods: comparative and formal legal methods made it possible to analyze the current regulation of Artificial Intelligence and to establish its correlations with the regulation of telemedicine, the GDPR, and other instruments. Results: the legal implications of the use of Artificial Intelligence in telemedicine, especially where continuous learning and automated decision-making systems are involved, were explored; the author concluded that providing personalized medicine through continuous learning systems may represent an additional risk and proposed ways to minimize it. The author also focused on the informed consent of vulnerable groups (children, the elderly, severely ill patients). Scientific novelty: the existing risks and issues arising from the use of Artificial Intelligence in telemedicine are explored, with particular attention to continuous learning systems. Practical significance: the results of this paper can be used in the lawmaking process concerning the use of Artificial Intelligence in telemedicine and as a basis for future research in this area, and they contribute to the limited literature on the topic.

    Regulating Smart Robots and Artificial Intelligence in the European Union

    Objective: In recent years, the need for regulation of robots and Artificial Intelligence has become apparent in Europe. The European Union needs a standardized regulation that ensures a high level of security in robotic systems to prevent potential breaches. A new regulation should therefore make clear that it is the responsibility of producers to identify the blind spots in these systems and expose their flaws or, when a vulnerability is discovered at a later stage, to update the system even if that model is no longer on the market. This article aims at suggesting some possible revisions of the existing legal provisions in the EU. Methods: The author employed the Kestemont legal methodology, analyzing legal texts, comparing them, and connecting them with technical elements of smart robots, in order to highlight the critical provisions to be updated. Results: This article suggests some revisions to the existing regulatory proposals: according to the author, although the AI Act and the Cyber Resilience Act represent a first step in this direction, their general principles are not sufficiently detailed to guide programmers on how to implement them in practice, and policymakers should carefully assess in which cases lifelong learning models should be allowed on the market. The author suggests that the current proposal regarding mandatory updates should be expanded, as five years is a short time frame that would not cover the risks associated with long-lasting products, such as vehicles. Scientific novelty: The author has examined the existing regulatory framework for AI systems and devices with digital elements, highlighted the risks of the current legal framework, and suggested possible amendments to the existing regulatory proposals. Practical significance: The article can be used to update the existing proposals for the AI Act and the Cyber Resilience Act.

    Action comprehension: deriving spatial and functional relations.

    A perceived action can be understood only when information about the action carried out and the objects used is taken into account. We investigated how spatial and functional information contributes to establishing these relations. Participants observed static frames showing a hand wielding an instrument and a potential target object of the action. The two elements could either match or mismatch, spatially or functionally. Participants were required to judge only one of the two relations while ignoring the other. Both irrelevant spatial and functional mismatches affected judgments of the relevant relation. Moreover, the functional relation provided a context for the judgment of the spatial relation, but not vice versa. The results are discussed with respect to recent accounts of action understanding.

    A new approach to the front-end readout of cryogenic ionization detectors

    We present a novel approach to the readout of ionization detectors. The solution minimizes the number of components and the space occupied close to the detector. In this way, only a minimal impact is added to the radioactive background in experiments where very low signal rates are expected, such as GERDA and MAJORANA. The circuit consists of a JFET transistor and a remote second stage. The DC feedback path is closed using a diode. Only two signal cables are necessary for biasing and readout. Comment: 14 pages, 15 figures and 15 equations.

    MSR32 COVID-19 Beds’ Occupancy and Hospital Complaints: A Predictive Model

    Objectives: The COVID-19 pandemic limited the number of patients who could be promptly and adequately taken into care. The proposed research aims at predicting the number of patients requiring any type of hospitalization, considering not only patients affected by COVID-19 but also those with other severe viral diseases, untreated chronic and frail patients, and oncological patients, in order to estimate potential hospital lawsuits and complaints. Methods: An unsupervised artificial neural network approach called Self-Organizing Maps (SOM), which relies on identifying specific clusters and is useful for predicting hospital behavioral changes, was designed to forecast hospital bed occupancy from pre- and post-COVID-19 time series and to support the prompt prediction of litigation and potential lawsuits, so that hospital managers and public institutions can analyze the impacts and decide whether to invest resources to increase hospital beds and human capacity or to allocate them differently. Data came from the UK National Health Service (NHS) statistics and digital portals and cover a 4-year time horizon, comprising 2 pre- and 2 post-COVID-19 years. Results: The clusters revealed two principal behaviors in resource allocation. When the number of non-COVID hospitalized patients increased, a reduction in the number of complaints (-55%) emerged; a higher number of complaints (+17%) was instead registered against a considerable reduction in the number of beds occupied (-26%). Based on the above, the management of hospital beds is a crucial factor that can influence the trend in complaints. Conclusions: The model could significantly support the management of hospital capacity, helping decision-makers take rational decisions under conditions of uncertainty. In addition, the model is highly replicable for estimating current hospital beds, healthcare professionals, equipment, and other resources that become extremely scarce during emergencies or pandemic crises, and it can be adapted to different local and national settings.
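    The abstract does not spell out the network architecture or the exact feature set, so the following is only a minimal, illustrative Python sketch of the general SOM technique applied to hypothetical monthly feature vectors (e.g., bed occupancy and complaint counts); the grid size, learning schedule, and toy data are assumptions rather than values from the study.

        # Minimal self-organizing map (SOM) sketch in NumPy.
        # Hypothetical input: one feature vector per month, e.g.
        # [beds_occupied_covid, beds_occupied_other, complaints], z-scored.
        # Illustration of the general SOM technique, not the authors' model;
        # grid size, learning rate and iteration count are assumptions.
        import numpy as np

        rng = np.random.default_rng(0)

        def train_som(data, rows=4, cols=4, n_iter=2000, lr0=0.5, sigma0=2.0):
            n_features = data.shape[1]
            weights = rng.random((rows, cols, n_features))
            grid = np.stack(
                np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"),
                axis=-1,
            )
            for t in range(n_iter):
                frac = t / n_iter
                lr = lr0 * (1.0 - frac)              # decaying learning rate
                sigma = sigma0 * (1.0 - frac) + 0.5  # shrinking neighbourhood
                x = data[rng.integers(len(data))]    # random training sample
                # best-matching unit: node whose weight vector is closest to x
                dists = np.linalg.norm(weights - x, axis=-1)
                bmu = np.unravel_index(np.argmin(dists), dists.shape)
                # Gaussian neighbourhood around the BMU on the 2-D grid
                grid_dist = np.linalg.norm(grid - np.array(bmu), axis=-1)
                h = np.exp(-(grid_dist ** 2) / (2 * sigma ** 2))[..., None]
                weights += lr * h * (x - weights)    # pull nodes towards x
            return weights

        def map_to_cluster(data, weights):
            # assign each sample to its best-matching node (its "cluster")
            d = np.linalg.norm(weights[None, ...] - data[:, None, None, :], axis=-1)
            return np.argmin(d.reshape(len(data), -1), axis=1)

        # toy stand-in for 48 months of z-scored occupancy/complaint features
        months = rng.normal(size=(48, 3))
        som = train_som(months)
        clusters = map_to_cluster(months, som)
        print(np.bincount(clusters, minlength=som.shape[0] * som.shape[1]))

    Under the approach described above, each SOM node groups months with similar occupancy-and-complaint profiles, so comparing which nodes the pre- and post-COVID-19 months fall into is one simple way to surface the kind of behavioral clusters the authors report.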

    Shared multisensory experience affects Others' boundary: The enfacement illusion in schizophrenia

    Schizophrenia has been described as a psychiatric condition characterized by deficits in the recognition of one's own and others' faces, as well as by a disturbed sense of body ownership. To date, no study has integrated these two lines of research to investigate Enfacement Illusion (EI) proneness in schizophrenia. To this end, the classic EI protocol was adapted to test the potential plasticity of both Self-Other and Other-Other boundaries. Results showed that the EI induced the expected malleability of the Self-Other boundary in both controls and patients. Interestingly, for the first time, the present study demonstrates that the Other-Other boundary was also influenced by the EI. Furthermore, when the two groups were compared, the malleability of the Other-Other boundary showed an opposite modulation. These results suggest that, rather than greater Self-Other boundary plasticity, a qualitative difference between patients with schizophrenia and controls can be detected in the malleability of the Other-Other boundary. The present study highlights an entirely new aspect of body illusions in schizophrenia, demonstrating that the EI is not confined to the self but also affects the way we discriminate others, which may represent a crucial aspect in the social domain.

    Moving Toward Emotions in the Aesthetic Experience

    In this essay, we comment on our original review published in 2009 in Current Opinion in Neurobiology; as we build a general theoretical framework that encompasses the major empirical work in the field of neuroesthetics since then, we also emphasize the role of the motor system and emotions in building an aesthetic experience. Here we extend our previous view with further empirical evidence, including from clinical and developmental psychology, supporting the idea that our perception is not a mere "visual" copy of what is before our eyes, but the result of a complex construction whose outcome depends on the contribution of our body and its motor potential, our senses and emotions, imagination and memories. While offering some food for thought for future research, we conclude by introducing a fairly recent line of study that explores the role of embodiment in architecture.

    Frontal Functional Connectivity of Electrocorticographic Delta and Theta Rhythms during Action Execution Versus Action Observation in Humans

    We have previously shown that, in seven drug-resistant epilepsy patients, both the reaching-grasping of objects and the mere observation of those actions desynchronized subdural electrocorticographic (ECoG) alpha (8–13 Hz) and beta (14–30 Hz) rhythms, a sign of cortical activation, in primary somatosensory-motor, lateral premotor, and ventral prefrontal areas (Babiloni et al., 2016a). Furthermore, that desynchronization was greater during action execution than during its observation. In the present exploratory study, we reanalyzed those ECoG data to evaluate the proof of concept that lagged linear connectivity (LLC) between primary somatosensory-motor, lateral premotor, and ventral prefrontal areas would be enhanced during action execution compared with mere observation, owing to a greater flow of visual and somatomotor information. Results showed that the delta-theta (<8 Hz) LLC between lateral premotor and ventral prefrontal areas was higher during action execution than during action observation. Furthermore, the phase of these delta-theta rhythms entrained the local event-related connectivity of alpha and beta rhythms. We speculate that a multi-oscillatory functional network exists between higher-order frontal motor areas and that it is more engaged during the actual reaching-grasping of objects than during its mere observation. Future studies in a larger population should cross-validate these preliminary results.
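    The abstract does not specify how the lagged linear connectivity (LLC) estimator was computed, so the following Python sketch illustrates only a generic proxy: magnitude-squared coherence between two simulated frontal ECoG channels, averaged over the delta-theta (<8 Hz) band. The sampling rate, epoch length, and channel pairing are assumptions, not parameters of the study.

        # Hedged sketch: band-limited coupling between two ECoG channels.
        # Ordinary magnitude-squared coherence in the <8 Hz delta-theta range
        # is used as a generic proxy for the paper's lagged linear
        # connectivity (LLC); sampling rate and signals are illustrative.
        import numpy as np
        from scipy.signal import coherence

        fs = 256.0                      # assumed sampling rate in Hz
        t = np.arange(0, 10.0, 1.0 / fs)

        # toy stand-ins for a lateral premotor and a ventral prefrontal
        # channel: a shared 5 Hz (theta) component plus independent noise
        rng = np.random.default_rng(1)
        shared = np.sin(2 * np.pi * 5.0 * t)
        premotor = shared + 0.5 * rng.normal(size=t.size)
        prefrontal = 0.8 * shared + 0.5 * rng.normal(size=t.size)

        # Welch-based coherence spectrum, averaged over the delta-theta band
        f, cxy = coherence(premotor, prefrontal, fs=fs, nperseg=512)
        band = (f > 0) & (f < 8.0)
        print(f"mean delta-theta coherence: {cxy[band].mean():.2f}")

    In a design like the one described, such a band-limited coupling measure would be computed separately for the execution and observation epochs and then contrasted, which is the comparison the LLC analysis addresses.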

    Neural correlates of the processing of co-speech gestures

    In communicative situations, speech is often accompanied by gestures. For example, speakers tend to illustrate certain contents of speech by means of iconic gestures, hand movements that bear a formal relationship to the content of speech. The meaning of an iconic gesture is determined both by its form and by the speech context in which it is performed. Thus, gesture and speech interact in comprehension. Using fMRI, the present study investigated which brain areas are involved in this interaction process. Participants watched videos in which sentences containing an ambiguous word (e.g., "She touched the mouse") were accompanied by either a meaningless grooming movement, a gesture supporting the more frequent dominant meaning (e.g., animal), or a gesture supporting the less frequent subordinate meaning (e.g., computer device). We hypothesized that brain areas involved in the interaction of gesture and speech would show greater activation for gesture-supported sentences than for sentences accompanied by a meaningless grooming movement. The main result is that, when contrasted with grooming, both types of gestures (dominant and subordinate) activated an array of brain regions consisting of the left posterior superior temporal sulcus (STS), the inferior parietal lobule bilaterally, and the ventral precentral sulcus bilaterally. Given the crucial role of the STS in audiovisual integration processes, this activation might reflect the interaction between the meaning of the gesture and the ambiguous sentence. The activations in inferior frontal and inferior parietal regions may reflect a mechanism for determining the goal of co-speech hand movements through an observation-execution matching process.
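    As a purely illustrative companion to the contrast described above (gesture-supported sentences versus grooming), the following Python sketch shows a generic group-level paired comparison of per-subject activation estimates in a single region of interest; the sample size, region, and numbers are hypothetical and do not reproduce the study's GLM pipeline.

        # Hedged illustration of the contrast logic: a group-level paired
        # comparison of per-subject activation estimates (e.g. mean beta
        # values in a region such as the left posterior STS) for
        # gesture-supported versus grooming-accompanied sentences.
        # All values below are synthetic, not study data.
        import numpy as np
        from scipy.stats import ttest_rel

        rng = np.random.default_rng(2)
        n_subjects = 16                              # assumed sample size

        # toy per-subject ROI estimates for the two conditions
        beta_grooming = rng.normal(loc=0.2, scale=0.4, size=n_subjects)
        beta_gesture = beta_grooming + rng.normal(loc=0.3, scale=0.3, size=n_subjects)

        t_stat, p_val = ttest_rel(beta_gesture, beta_grooming)
        print(f"gesture > grooming: t({n_subjects - 1}) = {t_stat:.2f}, p = {p_val:.4f}")

    A whole-brain analysis would repeat such a comparison voxel-wise with appropriate correction for multiple comparisons; the sketch only conveys the logic of the contrast.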