1,437 research outputs found

    Reconstructive nature of temporal memory for movie scenes

    Remembering when events took place is a key component of episodic memory. Using a sensitive behavioral measure, the present study investigates whether spontaneous event segmentation and script-based prior knowledge affect memory for the time of movie scenes. In three experiments, different groups of participants were asked to indicate when short video clips extracted from a previously encoded movie occurred on a horizontal timeline that represented the video duration. When participants encoded the entire movie, they were more precise at judging the temporal occurrence of clips extracted from the beginning and the end of the film compared to its middle part, but also at judging clips that were closer to event boundaries. Removing the final part of the movie from the encoding session resulted in a systematic bias in memory for time. Specifically, participants increasingly underestimated the time of occurrence of the video clips as a function of their proximity to the missing part of the movie. An additional experiment indicated that such an underestimation effect generalizes to different audio-visual material and does not necessarily reflect poor temporal memory. By showing that memories are moved in time to make room for missing information, the present study demonstrates that narrative time can be adapted to fit a standard template regardless of what has been effectively encoded, in line with reconstructive theories of memory
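The timeline measure described above yields a signed placement error per clip, whose mean captures the direction of the bias. A minimal sketch, with purely illustrative positions expressed as fractions of the movie duration (the variable names and values are assumptions, not the study's data):

```python
import numpy as np

# Hypothetical timeline data: actual and judged clip positions, as
# fractions of the movie duration (illustrative values only).
actual = np.array([0.10, 0.35, 0.60, 0.85])
judged = np.array([0.08, 0.30, 0.50, 0.70])

# Signed error: negative values mean a clip was placed earlier on the
# timeline than it actually occurred (an underestimation bias).
signed_error = judged - actual

# The mean signed error summarises the direction of the bias.
mean_bias = signed_error.mean()
```

A systematic negative mean, growing with proximity to the removed part of the movie, is the signature effect the study reports.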

    Evaluation of MU-MIMO Digital Beamforming Algorithms in B5G/6G LEO Satellite Systems

    Satellite Communication (SatCom) systems will be a key component of 5G and 6G networks to achieve the goal of providing unlimited and ubiquitous communications and deploying smart and sustainable networks. To meet the ever-increasing demand for higher throughput in 5G and beyond, aggressive frequency reuse schemes (i.e., full frequency reuse), combined with digital beamforming techniques to cope with the massive co-channel interference, are recognized as a key solution. Aimed at (i) eliminating the joint optimization problem among the beamforming vectors of all users, (ii) splitting it into distinct ones, and (iii) finding a closed-form solution, we propose a beamforming algorithm based on maximizing the users' Signal-to-Leakage-and-Noise Ratio (SLNR) served by a Low Earth Orbit (LEO) satellite. We investigate and assess the performance of several beamforming algorithms, including both those based on Channel State Information (CSI) at the transmitter, i.e., Minimum Mean Square Error (MMSE) and Zero-Forcing (ZF), and those only requiring the users' locations, i.e., Switchable Multi-Beam (MB). Through a detailed numerical analysis, we provide a thorough comparison of the performance in terms of per-user achievable spectral efficiency of the aforementioned beamforming schemes, and we show that the proposed SLNR beamforming technique is able to outperform both MMSE and ZF schemes in the presented SatCom scenario
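The decoupling described above has a well-known closed form for single-antenna users: the SLNR-maximising vector for user k is proportional to (σ²I + Σ_{j≠k} h_jᴴh_j)⁻¹ h_kᴴ, so each user's beamformer is computed independently. A minimal sketch (the function name and channel layout are assumptions for illustration, not the paper's implementation):

```python
import numpy as np

def slnr_beamformers(H, noise_var):
    """Closed-form SLNR beamforming sketch for single-antenna users.

    H: (K, N) complex matrix whose rows h_k are user channels
       (K users, N transmit antennas).
    Returns a (K, N) matrix of unit-norm beamforming vectors.

    With a rank-one desired channel, the SLNR-maximising vector is
    w_k ∝ (σ² I + Σ_{j≠k} h_jᴴ h_j)⁻¹ h_kᴴ, which decouples the
    per-user problems instead of a joint optimization.
    """
    K, N = H.shape
    W = np.zeros((K, N), dtype=complex)
    for k in range(K):
        # Leakage-plus-noise covariance: all other users' channels.
        others = np.delete(H, k, axis=0)
        R = noise_var * np.eye(N) + others.conj().T @ others
        w = np.linalg.solve(R, H[k].conj())
        W[k] = w / np.linalg.norm(w)
    return W
```

Because each w_k involves only one K×N-independent linear solve, the cost scales linearly in the number of users, which is what makes the closed-form split attractive next to a joint design.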


    GLocalX - From Local to Global Explanations of Black Box AI Models

Artificial Intelligence (AI) has come to prominence as a major component of our society, with applications in most aspects of our lives. In this field, complex and highly nonlinear machine learning models such as ensembles, deep neural networks, and support vector machines have consistently shown remarkable accuracy on complex tasks. Although accurate, AI models are often “black boxes” that we are unable to understand. Relying on these models has a multifaceted impact and raises significant concerns about their transparency. Applications in sensitive and critical domains are a strong motivation for trying to understand the behavior of black boxes. We propose to address this issue by providing an interpretable layer on top of black box models, built by aggregating “local” explanations. We present GLOCALX, a “local-first” model-agnostic explanation method. Starting from local explanations expressed in the form of local decision rules, GLOCALX iteratively generalizes them into global explanations by hierarchically aggregating them. Our goal is to learn accurate yet simple interpretable models that emulate the given black box and, if possible, replace it entirely. We validate GLOCALX in a set of experiments in standard and constrained settings, with limited or no access to either data or local explanations. The experiments show that GLOCALX accurately emulates several models with simple and small surrogates, reaching state-of-the-art performance against natively global solutions. Our findings show that it is often possible to achieve high accuracy and comprehensibility of classification models, even in complex domains with high-dimensional data, without necessarily trading one property for the other. 
This is a key requirement for trustworthy AI, necessary for adoption in high-stakes decision-making applications
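The hierarchical aggregation of local decision rules can be pictured with a toy generalisation step: two rules that predict the same class are merged by keeping only the premises they share. The rule format and the `merge` strategy below are assumptions for illustration, not the actual GLOCALX procedure:

```python
# Toy sketch of generalising local decision rules. A rule is a pair
# (premises, label); premises map feature conditions to truth values.

def merge(rule_a, rule_b):
    """Generalise two rules with the same label by keeping only the
    premises they share (a least-general-generalisation step)."""
    premises_a, label_a = rule_a
    premises_b, label_b = rule_b
    if label_a != label_b:
        return None  # only rules predicting the same class merge
    shared = {f: v for f, v in premises_a.items()
              if premises_b.get(f) == v}
    return (shared, label_a) if shared else None

# Two local rules, e.g. extracted around two explained instances.
r1 = ({"age>30": True, "income>50k": True}, "grant")
r2 = ({"age>30": True, "owns_home": True}, "grant")

merged = merge(r1, r2)  # keeps only the shared premise "age>30"
```

Applied pairwise and bottom-up, steps of this kind shrink a large set of instance-specific rules into a small set of broader ones, which is the intuition behind a local-first global explainer.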

    Investigation of Squaramide Catalysts in the Aldol Reaction En Route to Funapide

Funapide is a 3,3’-spirocyclic oxindole with promising analgesic activity. A reported pilot-plant-scale synthesis of this chiral compound involves an asymmetric aldol reaction catalyzed by a common bifunctional thiourea structure. In this work, we show that swapping the thiourea unit of the catalyst for a tailored squaramide group provides an equally active, but rewardingly more selective, catalyst for this aldol reaction (from 70.5% to 85% ee). The reaction was first studied on a model oxindole compound; the optimal conditions were then applied to the target funapide intermediate. The applicability of these conditions seems limited to oxindoles bearing the 3-substituent of funapide. Exemplifying the characteristics of target-focused methodological development, this study highlights how wide-ranging screening of catalysts and reaction conditions can provide non-negligible improvements in an industrially viable asymmetric transformation
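The reported selectivity gain can be read back into enantiomer ratios via the standard definition of enantiomeric excess, ee = (major − minor)/(major + minor). A minimal sketch (the function name is an illustrative assumption):

```python
def enantiomeric_excess(major, minor):
    """Enantiomeric excess (%) from the amounts of the two enantiomers."""
    return 100.0 * (major - minor) / (major + minor)

# 85% ee corresponds to a 92.5 : 7.5 enantiomer ratio,
# versus 85.25 : 14.75 for the original 70.5% ee.
ee = enantiomeric_excess(92.5, 7.5)
```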

Aclees cf. sp. foveatus (Coleoptera: Curculionidae), an exotic pest of Ficus carica in Italy: a sustainable approach to defence based on aluminosilicate minerals as host-plant masking solids

The exceptionally frequent arrival of alien pests is a major source of concern for farmers, who have to protect their crops from unknown insects that often lack natural enemies in the newly invaded areas. A new pest belonging to the subfamily Molytinae (Coleoptera: Curculionidae), tribe Hylobiini, reported as Aclees sp. cf. foveatus Voss, was recently introduced into Italy. The species is responsible for severe damage in many Italian fig nurseries and orchards, particularly in the central-northern regions, i.e. Tuscany, Liguria and Latium. Currently, no active ingredients are registered against this insect on fig crops. An innovative and eco-friendly approach to controlling this exotic weevil was investigated, using montmorillonite-based clays, either in their native state or containing copper(II) species, and clinoptilolite zeolites, in order to test the adult weevils' perception of the different solid materials and, subsequently, to evaluate the capability of these innovative products to act as masking agents with respect to the host plant and/or as contact repellents. The formulations containing copper(II)-exchanged clay and clinoptilolite zeolite showed promising preliminary results in terms of efficacy and environmental sustainability

    Quantifying Model Complexity via Functional Decomposition for Better Post-Hoc Interpretability

    Post-hoc model-agnostic interpretation methods such as partial dependence plots can be employed to interpret complex machine learning models. While these interpretation methods can be applied regardless of model complexity, they can produce misleading and verbose results if the model is too complex, especially w.r.t. feature interactions. To quantify the complexity of arbitrary machine learning models, we propose model-agnostic complexity measures based on functional decomposition: number of features used, interaction strength and main effect complexity. We show that post-hoc interpretation of models that minimize the three measures is more reliable and compact. Furthermore, we demonstrate the application of these measures in a multi-objective optimization approach which simultaneously minimizes loss and complexity
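The first of the three measures, the number of features used, can be sketched with a simple perturbation test: a feature counts as used if perturbing its column changes any prediction. The function below is an illustrative sketch (a deterministic cyclic shift stands in for the paper's exact definition):

```python
import numpy as np

def n_features_used(predict, X):
    """Count features whose perturbation changes any prediction — a
    simple sketch of a 'number of features used' complexity measure.

    `predict` maps an (n, p) array to an (n,) array of predictions.
    Each column is cyclically shifted; a feature counts as used if the
    model output changes on at least one row.
    """
    base = predict(X)
    used = 0
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] = np.roll(Xp[:, j], 1)  # deterministic perturbation
        if not np.allclose(predict(Xp), base):
            used += 1
    return used

# Toy model that ignores its second feature entirely.
model = lambda X: 2.0 * X[:, 0] + X[:, 2] ** 2
X = np.arange(12.0).reshape(4, 3)
```

On this toy model the measure reports two features used out of three; minimising such a count alongside loss is the kind of multi-objective trade-off the abstract describes.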

    Effects of the blending ratio on the design of keratin/poly (Butylene succinate) nanofibers for drug delivery applications

In recent years there has been growing interest in the use of proteins as biocompatible and environmentally friendly biomolecules for the design of wound-healing and drug delivery systems. Keratin is a fascinating protein, obtainable from several keratinous biomasses such as wool, hair or nails, with intrinsic bioactive properties including stimulatory effects on wound repair and excellent carrier capability. In this work, keratin/poly(butylene succinate) (PBS) blend solutions with functional properties tunable by manipulating the polymer blending ratio were prepared using 1,1,1,3,3,3-hexafluoroisopropanol as a common solvent. These solutions, doped with rhodamine B (RhB), were then electrospun into blend mats, and the drug release mechanism and kinetics as a function of blend composition were studied in order to understand the potential of such membranes as drug delivery systems. Electrophoresis analysis of the keratin revealed that the solvent used does not degrade the protein. Moreover, all the blend solutions showed non-Newtonian behavior, among which the keratin/PBS 70/30 and 30/70 solutions showed an enhanced ability of the polymer chains to orient under shear stress. As a result, these nanofibers showed thinner mean diameters and narrower diameter distributions than those from the keratin/PBS 50/50 blend solution. The thermal stability and mechanical properties of the electrospun blend mats improved with increasing PBS content. Finally, the RhB release rate increased with increasing keratin content of the mats, and the drug diffused as a drug-protein complex
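Release kinetics of this kind are often characterised with the semi-empirical Korsmeyer–Peppas model, Mt/M∞ = k·tⁿ, fitted by linear regression in log–log space. The sketch below is an illustration under that assumption (the model choice, function name, and data are not taken from the paper):

```python
import numpy as np

def fit_korsmeyer_peppas(t, release_fraction):
    """Fit Mt/M_inf = k * t**n by least squares in log-log space.

    Valid for the early portion of the release curve (Mt/M_inf < ~0.6).
    Returns (k, n); for thin films, n near 0.5 suggests Fickian diffusion.
    """
    n, log_k = np.polyfit(np.log(t), np.log(release_fraction), 1)
    return np.exp(log_k), n

# Synthetic release data generated with k = 0.1, n = 0.5
# (illustrative values, not measurements from the study).
t = np.array([1.0, 2.0, 4.0, 8.0])          # time points
frac = 0.1 * t ** 0.5                       # cumulative fraction released
k, n = fit_korsmeyer_peppas(t, frac)
```

Comparing the fitted exponent n across blend ratios is a common way to test whether release stays diffusion-controlled as composition changes.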

    Empowerment or Engagement? Digital Health Technologies for Mental Healthcare

    We argue that while digital health technologies (e.g. artificial intelligence, smartphones, and virtual reality) present significant opportunities for improving the delivery of healthcare, key concepts that are used to evaluate and understand their impact can obscure significant ethical issues related to patient engagement and experience. Specifically, we focus on the concept of empowerment and ask whether it is adequate for addressing some significant ethical concerns that relate to digital health technologies for mental healthcare. We frame these concerns using five key ethical principles for AI ethics (i.e. autonomy, beneficence, non-maleficence, justice, and explicability), which have their roots in the bioethical literature, in order to critically evaluate the role that digital health technologies will have in the future of digital healthcare