
    Baroque Optics and the Disappearance of the Observer: From Kepler’s Optics to Descartes’ Doubt

    In the seventeenth century the human observer gradually disappeared from optical treatises. It was a paradoxical process: the naturalization of the eye estranged the mind from its objects. Turned into a material optical instrument, the eye no longer furnished the observer with genuine representations of visible objects. It became a mere screen, on which rested a blurry array of light stains, accidental effects of a purely causal process. It thus befell the intellect to decipher one natural object—a flat image of no inherent epistemic value—as the vague, reversed reflection of another, wholly independent object. In reflecting on and transgressing the boundaries between natural and artificial, orderly and disorderly, this optical paradox was a Baroque intellectual phenomenon; and it was the origin of Descartes’ celebrated doubt: whether we know anything at all.

    A generalized light-driven model of community transitions along coral reef depth gradients

    Aim: Coral reefs shift between distinct communities with depth throughout the world. Yet, despite over half a century of research on coral reef depth gradients, researchers have not addressed the driving force of these patterns. We present a theoretical, process-based model of light’s influence on the shallow-to-mesophotic reef transition as a single quantitative framework. We also share an interactive web application. Moving beyond depth as an ecological proxy will enhance research conducted on deeper coral reefs. Location: Global; subtropical and tropical coral reefs, oligotrophic and turbid coastal waters. Time period: Present day (2020). Major taxa: Scleractinia. Methods: We constructed ordinary differential equations representing the preferred light environments of shallow and mesophotic Scleractinia. We projected these as depth bands using light attenuation coefficients from around the world, and performed a sensitivity analysis. Results: We found that light relationships alone are sufficient to capture major ecological features across coral reef depth gradients. Our model supports the depth limits currently used in coral reef ecology, predicting a global range for the shallow-to-upper-mesophotic boundary at 36.1 ± 5.6 m and the upper-to-lower-mesophotic boundary at 61.9 ± 9.6 m. However, our model allows researchers to move past these fixed depth limits and quantitatively predict the depths of reef zones in locations around the world. Main conclusions: The use of depth as a proxy for changes in coral reef communities offers no guidance for environmental variation between sites. We have shown it is possible to use light to predict the depth boundaries of reef zones as a continuous variable, and to accommodate this variability. Predicting the depths of reef zones in unusual light environments suggests that shallow-water turbid reefs should be considered as mesophotic coral ecosystems. Nonetheless, the current depth-based heuristics are relatively accurate at a global level.
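
    The model itself is built from ordinary differential equations, but the core idea of projecting light relationships onto depth can be illustrated with simple exponential attenuation. Below is a minimal Python sketch assuming Beer-Lambert attenuation; the function name, the 10%/1% light-fraction thresholds and the example attenuation coefficients are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def depth_at_light_fraction(fraction, kd):
    """Depth (m) at which downwelling irradiance falls to `fraction` of its
    surface value, assuming Beer-Lambert attenuation I(z) = I0 * exp(-kd * z)."""
    return -np.log(fraction) / kd

# Illustrative light fractions for zone boundaries (assumed values, not from the paper).
SHALLOW_TO_UPPER_MESOPHOTIC = 0.10   # 10% of surface irradiance
UPPER_TO_LOWER_MESOPHOTIC = 0.01     # 1% of surface irradiance

for kd in (0.05, 0.08, 0.15):        # clear oceanic to turbid coastal water, in m^-1
    z1 = depth_at_light_fraction(SHALLOW_TO_UPPER_MESOPHOTIC, kd)
    z2 = depth_at_light_fraction(UPPER_TO_LOWER_MESOPHOTIC, kd)
    print(f"Kd={kd:.2f} m^-1: shallow/upper-mesophotic ~{z1:.0f} m, "
          f"upper/lower-mesophotic ~{z2:.0f} m")
```

    The sketch makes the paper's point tangible: the same light-fraction boundary sits much deeper in clear oligotrophic water than in turbid coastal water, which is why fixed depth limits alone cannot capture between-site variation.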

    Explainable Multi-View Deep Networks Methodology for Experimental Physics

    Physical experiments often involve multiple imaging representations, such as X-ray scans and microscopic images. Deep learning models have been widely used for supervised analysis in these experiments. Combining different image representations is frequently required to analyze the data and make decisions properly. Consequently, multi-view data has emerged: datasets in which each sample is described by views from different angles, sources, or modalities. These problems are addressed with the concept of multi-view learning. Understanding the decision-making process of deep learning models is essential for reliable and credible analysis. Hence, many explainability methods have been devised recently. Nonetheless, there is a lack of proper explainability in multi-view models, which are challenging to explain due to their architectures. In this paper, we suggest different multi-view architectures for the vision domain, each suited to a different problem, and we also present a methodology for explaining these models. To demonstrate the effectiveness of our methodology, we focus on the domain of High Energy Density Physics (HEDP) experiments, where multiple imaging representations are used to assess the quality of foam samples. We apply our methodology to classify the quality of foam samples using the suggested multi-view architectures. Through experimental results, we showcase the effect of choosing an appropriate architecture, improving accuracy from 78% to 84% and AUC from 83% to 93%, and present a trade-off between performance and explainability. Specifically, we demonstrate that our approach enables the explanation of individual one-view models, providing insights into the decision-making process of each view. This understanding enhances the interpretability of the overall multi-view model. The source code for this work is available at: https://github.com/Scientific-Computing-Lab-NRCN/Multi-View-Explainability
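
    As one concrete reading of the multi-view setup, the sketch below shows a minimal late-fusion multi-view classifier in PyTorch: each imaging view gets its own small CNN encoder, and the concatenated features feed a shared classification head, which leaves every branch open to per-view explanation methods. The two-view setup, layer sizes and dimensions here are illustrative assumptions, not the architectures evaluated in the paper.

```python
import torch
import torch.nn as nn

class ViewEncoder(nn.Module):
    """Small CNN encoder applied independently to one imaging view."""
    def __init__(self, in_channels=1, feat_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim), nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

class LateFusionMultiView(nn.Module):
    """Late-fusion multi-view classifier: one encoder per view, concatenated
    features feed a shared head. Keeping the branches separate makes per-view
    explanations (e.g. running Grad-CAM on each encoder) straightforward."""
    def __init__(self, n_views=2, n_classes=2, feat_dim=64):
        super().__init__()
        self.encoders = nn.ModuleList([ViewEncoder(feat_dim=feat_dim) for _ in range(n_views)])
        self.head = nn.Linear(n_views * feat_dim, n_classes)

    def forward(self, views):
        feats = [enc(v) for enc, v in zip(self.encoders, views)]
        return self.head(torch.cat(feats, dim=1))

# Toy forward pass: two views (e.g. top and profile images) of a batch of 4 samples.
model = LateFusionMultiView(n_views=2, n_classes=2)
views = [torch.randn(4, 1, 128, 128), torch.randn(4, 1, 128, 128)]
print(model(views).shape)  # torch.Size([4, 2])
```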

    Depth electrode neurofeedback with a virtual reality interface

    Invasive brain–computer interfaces (BCI) provide better signal quality in terms of spatial localization, frequencies and signal-to-noise ratio, in addition to giving access to deep brain regions that play important roles in cognitive or affective processes. Despite some anecdotal attempts, little work has explored the possibility of integrating such BCI input into more sophisticated interactive systems like those that can be developed with game engines. In this article, we integrated an amygdala depth electrode recorder with a virtual environment controlling a virtual crowd. Subjects were asked to down-regulate their amygdala activity, using the level of unrest in the virtual room as feedback on how successful they were. We report early results which suggest that users adapt very easily to this paradigm and that the timing and fluctuations of amygdala activity during self-regulation can be matched by crowd animation in the virtual room. This suggests that depth electrodes could also serve as high-performance affective interfaces, notwithstanding their strictly limited availability, which is justified on medical grounds only.
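
    A minimal sketch of how such a feedback loop might map amygdala activity onto the crowd's level of unrest is given below. The baseline normalisation, smoothing factor and clipping are illustrative assumptions; the actual signal processing and game-engine coupling used in the study are not reproduced here.

```python
import numpy as np

class UnrestFeedback:
    """Maps an ongoing amygdala activity estimate onto a crowd 'unrest' level in [0, 1].
    Baseline normalisation, smoothing and the mapping itself are illustrative
    assumptions, not the published pipeline."""
    def __init__(self, baseline_power, alpha=0.1):
        self.baseline = baseline_power   # e.g. mean power from a rest recording
        self.alpha = alpha               # exponential-smoothing factor
        self.smoothed = 1.0              # relative power, starts at baseline level

    def update(self, window):
        """`window`: 1-D array of the latest iEEG samples from the depth electrode."""
        power = np.mean(np.square(window))          # crude broadband power estimate
        rel = power / self.baseline
        self.smoothed = (1 - self.alpha) * self.smoothed + self.alpha * rel
        # Higher-than-baseline activity -> more unrest; down-regulation calms the crowd.
        return float(np.clip(self.smoothed - 0.5, 0.0, 1.0))

# Toy usage: feed 1-second windows of simulated signal into the feedback loop.
rng = np.random.default_rng(0)
fb = UnrestFeedback(baseline_power=1.0)
for _ in range(5):
    unrest = fb.update(rng.normal(scale=1.2, size=1000))
    print(f"crowd unrest level: {unrest:.2f}")
```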

    Transfer learning of deep neural network representations for fMRI decoding

    Background: Deep neural networks have revolutionised machine learning, with unparalleled performance in object classification. However, in brain imaging (e.g., fMRI), the direct application of Convolutional Neural Networks (CNN) to decoding subject states or perception from imaging data seems impractical given the scarcity of available data. New method: In this work we propose a robust method to transfer information from deep learning (DL) features to brain fMRI data with the goal of decoding. By adopting Reduced Rank Regression with Ridge Regularisation, we establish a multivariate link between imaging data and the fully connected layer (fc7) of a CNN. We exploit the reconstructed fc7 features by performing an object image classification task on two datasets: one of the largest fMRI databases, acquired with different scanners from more than two hundred subjects watching different movie clips, and another with fMRI data acquired while subjects viewed static images. Results: The fc7 features could be significantly reconstructed from the imaging data and led to significant decoding performance. Comparison with existing methods: The decoding based on reconstructed fc7 outperformed the decoding based on imaging data alone. Conclusion: In this work we show how to improve fMRI-based decoding by exploiting the mapping between functional data and CNN features. The potential advantage of the proposed method is twofold: the extraction of stimulus representations by means of an automatic (unsupervised) procedure, and the embedding of high-dimensional neuroimaging data onto a space designed for visual object discrimination, leading to a more manageable space from a dimensionality point of view.
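
    The mapping step can be sketched with plain ridge regression; the paper additionally imposes a reduced-rank constraint, which is omitted here for brevity. The scikit-learn snippet below runs on synthetic data and shows the two stages: learn a ridge-regularised multivariate map from fMRI responses to fc7 features, then decode object class from the reconstructed features. All array shapes and hyperparameters are assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Assumed shapes: 500 stimuli, 2000 voxels, 4096-dim fc7 features, 5 object classes.
n, n_voxels, n_fc7, n_classes = 500, 2000, 4096, 5
fmri = rng.standard_normal((n, n_voxels))
fc7 = rng.standard_normal((n, n_fc7))
labels = rng.integers(0, n_classes, size=n)

X_tr, X_te, Y_tr, Y_te, y_tr, y_te = train_test_split(fmri, fc7, labels, random_state=0)

# 1) Ridge-regularised multivariate map from fMRI responses to CNN fc7 features.
mapper = Ridge(alpha=1e3).fit(X_tr, Y_tr)
fc7_hat = mapper.predict(X_te)

# 2) Decode object class from the reconstructed fc7 features rather than raw voxels.
clf = LinearSVC().fit(mapper.predict(X_tr), y_tr)
print("decoding accuracy on reconstructed fc7:", clf.score(fc7_hat, y_te))
```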

    Towards empathic neurofeedback for interactive storytelling

    Interactive Narrative is a form of digital entertainment based on AI techniques that support narrative generation and user interaction. Despite recent progress in the field, there is still a lack of unified models integrating narrative generation, user response and interaction. This paper addresses this issue by revisiting existing Interactive Narrative paradigms, granting explicit status to users’ disposition towards story characters. We introduce a novel Brain-Computer Interface (BCI) design, which attempts to capture empathy for the main character in a way that is compatible with filmic theories of emotion. Results from two experimental studies with a fully implemented system demonstrate the effectiveness of a neurofeedback-based approach, showing that subjects can successfully modulate their emotional support for a character who is confronted with challenging situations. A preliminary fMRI analysis also shows activation during user interaction in regions of the brain associated with emotional control.

    A novel socially assistive robotic platform for cognitive-motor exercises for individuals with Parkinson's Disease: a participatory-design study from conception to feasibility testing with end users

    The potential of socially assistive robots (SAR) to support rehabilitation has been demonstrated in contexts such as stroke and cardiac rehabilitation. Our objective was to design and test a platform that addresses the specific cognitive-motor training needs of individuals with Parkinson’s disease (IwPD). We used a participatory design approach and collected input from a total of 62 stakeholders (IwPD, their family members and clinicians) in interviews, brainstorming sessions and in-lab feasibility testing of the resulting prototypes. The platform we developed includes two custom-made mobile desktop robots, which engage users in concurrent cognitive and motor tasks. IwPD (n = 16) reported high levels of enjoyment when using the platform (median = 5/5) and willingness to use the platform in the long term (median = 4.5/5). We report the specifics of the hardware and software design as well as the detailed input from the stakeholders.

    Spectral Diversity and Regulation of Coral Fluorescence in a Mesophotic Reef Habitat in the Red Sea

    The phenomenon of coral fluorescence, although well described for shallow waters, remains largely unstudied in mesophotic reefs. We found that representatives of many scleractinian species are brightly fluorescent at depths of 50–60 m at the Interuniversity Institute for Marine Sciences (IUI) reef in Eilat, Israel. Some of these fluorescent species have distribution maxima at mesophotic depths (40–100 m). Several individuals from these depths displayed yellow or orange-red fluorescence, the latter being essentially absent in corals from the shallowest parts of this reef. We demonstrate experimentally that in some cases the production of fluorescent pigments is independent of exposure to light, while in others the fluorescence signature is altered or lost when the animals are kept in darkness. Furthermore, we show that green-to-red photoconversion of fluorescent pigments mediated by short-wavelength light can also occur at depths where ultraviolet wavelengths are absent from the underwater light field. Intraspecific polymorphisms in the colour of tissue fluorescence, common among shallow-water corals, were also observed for mesophotic species. Our results suggest that fluorescent pigments in mesophotic reefs fulfil a distinct biological function and offer promising application potential for coral-reef monitoring and biomedical imaging.