
    Structure of super-families

    At present, the study of nuclear interactions induced by cosmic rays is the unique source of information on nuclear interactions in the energy region above 10^15 eV. Phenomena in this energy region are observed by air shower arrays or by emulsion chambers installed at high mountain altitudes. An emulsion chamber is a pile of lead plates and photosensitive layers (nuclear emulsion plates and/or X-ray films) used to detect electron showers. The high spatial resolution of the photographic material used in the emulsion chamber enables detailed observation of these phenomena, and recent experiments with large-area emulsion chambers are being carried out at high mountain altitudes by several groups around the world

    Extremely high energy hadron and gamma-ray families (3). Core structure of the halo of a superfamily

    The study of the core structure seen in the halo of Mini-Andromeda 3 (M.A.3), observed in the Chacaltaya emulsion chamber, is presented. On the assumption that the lateral darkness distribution of a core is of exponential type, i.e., D = D0 exp(-R/r0), core darkness is subtracted from the halo darkness repeatedly until no cores remain. The quantities obtained for the cores in this way are summarized. The analysis is preliminary and will be developed further
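    The iterative subtraction can be sketched in a few lines. The sketch below is illustrative only, not the authors' procedure: it assumes a 2-D darkness map on a regular grid, takes the peak pixel value as D0 rather than fitting it, and uses an arbitrary stopping threshold; the decay length r0 is treated as known.

```python
import numpy as np

def subtract_cores(darkness, r0, threshold, max_iter=50):
    """Iteratively subtract exponential cores D = D0 * exp(-R / r0)
    from a 2-D halo darkness map until no peak exceeds `threshold`.
    Returns the (x, y, D0) parameters of the subtracted cores."""
    D = darkness.astype(float).copy()
    ny, nx = D.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    cores = []
    for _ in range(max_iter):
        iy, ix = np.unravel_index(np.argmax(D), D.shape)
        D0 = D[iy, ix]
        if D0 < threshold:               # no core left above threshold
            break
        R = np.hypot(xx - ix, yy - iy)   # distance from the core centre
        D -= D0 * np.exp(-R / r0)        # remove the exponential core
        np.clip(D, 0.0, None, out=D)     # darkness cannot go negative
        cores.append((ix, iy, D0))
    return cores
```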

    Feature space analysis for human activity recognition in smart environments

    Activity classification from smart-environment data is typically done with ad hoc solutions customised to the particular dataset at hand. In this work we introduce a general-purpose collection of features for recognising human activities across datasets of different type, size and nature. The first experimental test of our feature collection achieves state-of-the-art results on well-known datasets, and we provide a feature importance analysis to compare the potential relevance of features for activity classification in different datasets
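    A feature importance analysis of this kind can be reproduced in spirit with an ensemble classifier. The choice of a random forest, the feature names, and the random placeholder data below are all assumptions for illustration; the paper's actual feature set and ranking method are not given here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical general-purpose features extracted from a window of
# smart-environment sensor events (names are illustrative only).
feature_names = ["window_duration", "n_events", "n_distinct_sensors",
                 "dominant_sensor", "hour_of_day", "prev_activity"]

X = np.random.rand(500, len(feature_names))   # placeholder feature matrix
y = np.random.randint(0, 5, size=500)         # placeholder activity labels

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Rank features by their contribution to activity classification.
for name, imp in sorted(zip(feature_names, clf.feature_importances_),
                        key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")
```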

    Adaptive saccade controller inspired by the primates' cerebellum

    Saccades are fast eye movements that allow humans and robots to bring a visual target to the center of the visual field. Saccades are open loop with respect to the vision system, so their execution requires precise knowledge of the internal model of the oculomotor system. In this work, we modeled saccade control taking inspiration from the recurrent loops between the cerebellum and the brainstem. In this model, the brainstem acts as a fixed inverse model of the oculomotor system, while the cerebellum acts as an adaptive element that learns the internal model of the oculomotor system. The adaptive filter is implemented using a state-of-the-art neural network, called I-SSGPR. The proposed approach, namely the recurrent architecture, was validated through experiments performed both in simulation and on an anthropomorphic robotic head. Moreover, we compared the recurrent architecture with another model of the cerebellum, feedback error learning. The achieved results show that the recurrent architecture outperforms feedback error learning in terms of accuracy and insensitivity to the choice of the feedback controller
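    A drastically simplified, one-dimensional sketch of the recurrent idea is given below: a fixed "brainstem" inverse model in a loop with a linear adaptive "cerebellum" that filters an efference copy of the motor command and is trained on the post-saccadic error. The real system uses I-SSGPR and a robotic eye plant; the linear filter, gains and learning rule here are illustrative assumptions only.

```python
import numpy as np

plant_gain = 1.3     # true oculomotor gain (unknown to the controller)
w = 0.0              # cerebellar weight on the efference copy
lr = 0.002           # learning rate

for trial in range(200):
    target = np.random.uniform(-10.0, 10.0)  # desired saccade amplitude (deg)
    u = target / (1.0 - w)                   # fixed point of the recurrent
                                             # loop: u = target + w * u
    eye = plant_gain * u                     # open-loop saccade execution
    error = target - eye                     # post-saccadic retinal error
    w += lr * error * u                      # adapt on (error, efference copy)

print(f"learned weight w = {w:.3f}")         # -> 1 - plant_gain = -0.300
```

    Because the adaptive element sits inside the loop rather than in a separate feedforward path, the correction converges without any explicit knowledge of the plant gain, which is the property the abstract attributes to the recurrent architecture.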

    Learning the visual–oculomotor transformation: effects on saccade control and space representation

    Active eye movements can be exploited to build a visuomotor representation of the surrounding environment. Maintaining and improving such a representation requires updating the internal model involved in the generation of eye movements. From this perspective, action and perception are tightly coupled and interdependent. In this work, we encoded the internal model for oculomotor control with an adaptive filter inspired by the functionality of the cerebellum. Recurrent loops between a feedback controller and the internal model allow our system to perform accurate binocular saccades and create an implicit representation of the nearby space. Simulation results show that this recurrent architecture outperforms classical feedback error learning in terms of both accuracy and insensitivity to system parameters. The proposed approach was validated by implementing the framework on an anthropomorphic robotic head

    Unsupervised grounding of textual descriptions of object features and actions in video

    We propose a novel method for learning visual concepts and their correspondence to the words of a natural language. The concepts and correspondences are jointly inferred from video clips depicting simple actions involving multiple objects, together with corresponding natural-language commands that would elicit these actions. Individual objects are first detected, together with quantitative measurements of their colour, shape, location and motion. Visual concepts emerge from the co-occurrence of regions within a measurement space and words of the language. The method is evaluated on a set of videos generated automatically using computer graphics from a database of initial and goal configurations of objects. Each video is annotated with multiple commands in natural language obtained from human annotators via crowdsourcing
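    One simple way to let concepts "emerge from co-occurrence", in the spirit of the description above, is to score word/measurement-cluster pairs by pointwise mutual information. The PMI scoring and the toy data below are assumptions for illustration, not the paper's actual inference procedure.

```python
import numpy as np
from collections import Counter

# Each video contributes a set of quantised visual measurements
# (cluster ids from colour/shape/location/motion space) and the
# words of its natural-language command (toy data, illustrative only).
videos = [
    ({"red", "cube", "moving_left"}, "move the red cube left".split()),
    ({"blue", "ball", "moving_left"}, "push the blue ball left".split()),
    ({"red", "ball", "static"}, "leave the red ball alone".split()),
]

word_n, concept_n, pair_n = Counter(), Counter(), Counter()
for concepts, words in videos:
    for w in set(words):
        word_n[w] += 1
        for c in concepts:
            pair_n[(w, c)] += 1
    for c in concepts:
        concept_n[c] += 1

N = len(videos)
def pmi(w, c):
    """Pointwise mutual information between word w and visual concept c."""
    return np.log((pair_n[(w, c)] * N) / (word_n[w] * concept_n[c]))

print(f"PMI('red', 'red')  = {pmi('red', 'red'):.2f}")   # high: systematic
print(f"PMI('red', 'ball') = {pmi('red', 'ball'):.2f}")  # lower: incidental
```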

    Decoding information for grasping from the macaque dorsomedial visual stream

    Neurodecoders have been developed mostly to control neuroprosthetic devices, but also to shed new light on neural functions. In this study, we show that signals representing grip configurations can be reliably decoded from neural data acquired from area V6A of the monkey medial posterior parietal cortex. Two Macaca fascicularis monkeys were trained to perform an instructed-delay reach-to-grasp task, in the dark and in the light, toward objects of different shapes. Population neural activity was extracted at various time intervals: during vision of the object, the delay before movement, and grasp execution. This activity was used to train and validate a Bayes classifier for decoding object and grip type. Recognition rates were well above chance level for all the epochs analyzed in this study. Furthermore, we detected slightly different decoding accuracies depending on the task's visual condition. A generalization analysis, performed by training and testing the system on different time intervals, demonstrated that a change of code occurred during the course of the task. Our classifier was able to discriminate grasp types well in advance of grasp onset, a feature that may be important when signals must be sent to external devices before movement starts. Our results suggest that neural signals from the dorsomedial visual pathway can be a good substrate to feed neural prostheses for prehensile actions
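    As an illustration of this kind of decoding setup, the sketch below trains a Gaussian naive Bayes classifier on synthetic population firing rates and cross-validates it. All shapes and numbers (neurons, trials, classes, noise level) are placeholders; the study's actual Bayes classifier and validation scheme may differ.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

# Placeholder population activity: 100 neurons, 60 trials, 5 grip
# classes, with firing rates taken from a single task epoch
# (e.g. the pre-movement delay).
rng = np.random.default_rng(0)
n_trials, n_neurons, n_classes = 60, 100, 5
y = np.repeat(np.arange(n_classes), n_trials // n_classes)  # grip labels
tuning = rng.normal(size=(n_classes, n_neurons))            # class-specific rates
X = tuning[y] + rng.normal(scale=0.5, size=(n_trials, n_neurons))

clf = GaussianNB()
scores = cross_val_score(clf, X, y, cv=5)    # chance level here is 0.2
print(f"decoding accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```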