A management architecture for active networks
In this paper we present an architecture for network and applications management based on the Active Networks paradigm, which demonstrates the advantages of network programmability. The stimulus to develop this architecture arises from an actual need to manage a cluster of active nodes, where it is often necessary to redeploy network assets and modify node connectivity. In our architecture, a remote front-end of the managing entity allows the operator to design new network topologies, check the status of the nodes, and configure them. Moreover, the proposed framework allows the operator to explore an active network, monitor the active applications, query each node, and install programmable traps. To take advantage of Active Networks technology, we introduce active SNMP-like MIBs and agents, which are dynamic and programmable. The programmable management agents make tracing distributed applications a feasible task. We propose a general framework that can interoperate with any active execution environment. In this framework, both the manager and the monitor front-ends communicate with an active node (the Active Network Access Point) in XML. A gateway service translates the queries from XML into an active packet language and injects the code into the network. We demonstrate the implementation of an active network gateway for PLAN (Packet Language for Active Networks) in a testbed of forty active nodes. Finally, we discuss an application of the active management architecture to detecting the causes of network failures by tracing network events in time.
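The abstract above does not include code; purely to illustrate the gateway idea it describes (a front-end issues an XML query, which the gateway translates into an active-packet program and injects into the network), here is a minimal Python sketch. The XML element names, the template string, and the inject() stub are hypothetical, and the template is a placeholder rather than real PLAN syntax.

```python
# Minimal sketch of the XML-to-active-packet gateway idea described above.
# The element names, the template string, and inject() are hypothetical;
# the template is a placeholder, not actual PLAN syntax.
import xml.etree.ElementTree as ET

PACKET_TEMPLATE = 'getMIB("{oid}") |> deliverTo("{node}")'  # placeholder active-packet program

def translate(xml_query: str) -> str:
    """Translate a front-end XML query into an active-packet program string."""
    root = ET.fromstring(xml_query)   # e.g. <query node="n3"><get oid="ifInOctets"/></query>
    node = root.attrib["node"]
    oid = root.find("get").attrib["oid"]
    return PACKET_TEMPLATE.format(oid=oid, node=node)

def inject(program: str) -> None:
    """Stand-in for handing the program to the Active Network Access Point."""
    print("injecting:", program)

if __name__ == "__main__":
    inject(translate('<query node="n3"><get oid="ifInOctets"/></query>'))
```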
Archiving Software Surrogates on the Web for Future Reference
Software has long been established as an essential aspect of the scientific process in mathematics and other disciplines. However, reliably referencing software in scientific publications is still challenging for various reasons. A crucial factor is that software dynamics, with temporal versions or states, are difficult to capture over time. We propose to archive and reference surrogates instead, which can be found on the Web and reflect the actual software to a remarkable extent. Our study shows that about half of the webpages of software are already archived, with almost all of them including some kind of documentation.
Comment: TPDL 2016, Hannover, Germany
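The paper's own pipeline is not reproduced here, but as a rough sketch of how one might check whether a software webpage already has an archived surrogate, the Internet Archive's public availability API can be queried; the example URLs below are placeholders.

```python
# Sketch: check whether candidate software webpages already have an archived
# snapshot, via the Internet Archive's public availability API.
# The example URLs below are placeholders.
import json
import urllib.parse
import urllib.request
from typing import Optional

def closest_snapshot(url: str) -> Optional[str]:
    """Return the URL of the closest archived snapshot, or None if none exists."""
    api = "https://archive.org/wayback/available?url=" + urllib.parse.quote(url, safe="")
    with urllib.request.urlopen(api, timeout=10) as resp:
        data = json.load(resp)
    snap = data.get("archived_snapshots", {}).get("closest")
    return snap["url"] if snap and snap.get("available") else None

if __name__ == "__main__":
    for page in ["https://www.scipy.org", "https://example.org/some-research-tool"]:
        print(page, "->", closest_snapshot(page))
```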
Binocular eye movements are adapted to the natural environment
Humans and many animals make frequent saccades requiring coordinated movements of the eyes. When landing on the new fixation point, the eyes must converge accurately or double images will be perceived. We asked whether the visual system uses statistical regularities in the natural environment to aid eye alignment at the end of saccades. We measured the distribution of naturally occurring disparities in different parts of the visual field. The central tendency of the distributions was crossed (nearer than fixation) in the lower field and uncrossed (farther) in the upper field in male and female participants. It was uncrossed in the left and right fields. We also measured horizontal vergence after completion of vertical, horizontal, and oblique saccades. When the eyes first landed near the eccentric target, vergence was quite consistent with the natural-disparity distribution. For example, when making an upward saccade, the eyes diverged to be aligned with the most probable uncrossed disparity in that part of the visual field. Likewise, when making a downward saccade, the eyes converged to enable alignment with crossed disparity in that part of the field. Our results show that rapid binocular eye movements are adapted to the statistics of the 3D environment, minimizing the need for large corrective vergence movements at the end of saccades. The results are relevant to the debate about whether eye movements are derived from separate saccadic and vergence neural commands that control both eyes or from separate monocular commands that control the eyes independently.
Constructing living buildings: a review of relevant technologies for a novel application of biohybrid robotics
Biohybrid robotics takes an engineering approach to the expansion and exploitation of biological behaviours for application to automated tasks. Here, we identify the construction of living buildings and infrastructure as a high-potential application domain for biohybrid robotics, and review technological advances relevant to its future development. In recent decades, construction, civil-infrastructure maintenance and building occupancy have accounted for a major portion of economic production, energy consumption and carbon emissions. Integrating biological organisms into automated construction tasks and permanent building components therefore has high potential for impact. Living materials can provide several advantages over standard synthetic construction materials, including self-repair of damage, improvement rather than degradation of structural performance over time, resilience to corrosive environments, support of biodiversity, and mitigation of urban heat islands. Here, we review relevant technologies, which are currently disparate. They span robotics, self-organizing systems, artificial life, construction automation, structural engineering, architecture, bioengineering, biomaterials, and molecular and cellular biology. In these disciplines, developments relevant to biohybrid construction and living buildings are at an early stage and are typically not exchanged between disciplines. We therefore consider this review useful to the future development of biohybrid engineering for this highly interdisciplinary application.
Crossed–uncrossed projections from primate retina are adapted to disparities of natural scenes
In mammals with frontal eyes, optic-nerve fibers from nasal retina project to the contralateral hemisphere of the brain, and fibers from temporal retina project ipsilaterally. The division between crossed and uncrossed projections occurs at or near the vertical meridian. If the division were precise, a problem would arise. Small objects near the midline, but nearer or farther than current fixation, would produce signals that travel to opposite hemispheres, making the binocular disparity of those objects difficult to compute. However, in the species that have been studied, the division is not precise. Rather, there are overlapping crossed and uncrossed projections, such that some fibers from nasal retina project ipsilaterally as well as contralaterally and some from temporal retina project contralaterally as well as ipsilaterally. This increases the probability that signals from an object near the vertical midline travel to the same hemisphere, thereby aiding disparity estimation. We investigated whether there is a deficit in binocular vision near the vertical meridian in humans and found no evidence for one. We also investigated the effectiveness of the observed decussation pattern, quantified from anatomical data in monkeys and humans. We used measurements of naturally occurring disparities in humans to determine disparity distributions across the visual field. We then used those distributions to calculate the probability of natural disparities transmitting to the same hemisphere, thereby aiding disparity computation. We found that the pattern of overlapping projections is quite effective. Thus, crossed and uncrossed projections from the retinas are well designed for aiding disparity estimation and stereopsis.
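As a toy illustration of the kind of probability calculation the abstract describes (not the authors' model or data), the sketch below assumes a bilaterally projecting strip of a given half-width around the vertical meridian and a Gaussian stand-in for the natural-disparity distribution, and estimates how often both eyes' images of a near-midline object can reach a common hemisphere.

```python
# Illustrative sketch only: how an overlapping crossed/uncrossed projection
# raises the chance that both eyes' images of an object near the vertical
# midline reach a common hemisphere.  The overlap widths, midline range and
# disparity spread below are invented numbers.
import random

def reaches_common_hemisphere(az_left: float, az_right: float, overlap_halfwidth: float) -> bool:
    """Images on the same side of the vertical meridian share a hemisphere;
    an image inside the bilaterally projecting strip reaches both hemispheres."""
    same_side = (az_left >= 0.0) == (az_right >= 0.0)
    in_strip = abs(az_left) <= overlap_halfwidth or abs(az_right) <= overlap_halfwidth
    return same_side or in_strip

if __name__ == "__main__":
    random.seed(0)
    samples = 50_000
    for halfwidth_deg in (0.0, 0.25, 1.0):              # precise split vs. increasing overlap
        hits = 0
        for _ in range(samples):
            cyclopean_az = random.uniform(-0.5, 0.5)    # objects near the vertical midline (deg)
            disparity = random.gauss(0.0, 0.5)          # stand-in for natural disparities (deg)
            if reaches_common_hemisphere(cyclopean_az + disparity / 2,
                                         cyclopean_az - disparity / 2,
                                         halfwidth_deg):
                hits += 1
        print(f"overlap half-width {halfwidth_deg:4.2f} deg -> P(common hemisphere) ~ {hits / samples:.3f}")
```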
Evaluation of the Tobii EyeX Eye tracking controller and Matlab toolkit for research
The Tobii EyeX Controller is a new low-cost binocular eye tracker marketed for integration in gaming and consumer applications. The manufacturers claim that the system was conceived for natural eye-gaze interaction, does not require continuous recalibration, and allows moderate head movements. The Controller is provided with an SDK to foster the development of new eye-tracking applications. We review the characteristics of the device for its possible use in scientific research. We develop and evaluate an open-source Matlab Toolkit that can be employed to interface with the EyeX device for gaze recording in behavioral experiments. The Toolkit provides calibration procedures tailored to both binocular and monocular experiments, as well as procedures to evaluate other eye-tracking devices. The observed performance of the EyeX (accuracy < 0.6°, precision < 0.25°, latency < 50 ms, and sampling frequency ≈ 55 Hz) is sufficient for some classes of research application. The device can be successfully employed to measure fixation parameters and saccadic, smooth-pursuit, and vergence eye movements. However, the relatively low sampling rate and moderate precision limit the suitability of the EyeX for monitoring microsaccadic eye movements or for real-time gaze-contingent stimulus control. For these applications, research-grade, high-cost eye-tracking technology may still be necessary. Therefore, despite its limitations with respect to high-end devices, the EyeX has the potential to further the dissemination of eye-tracking technology to a broad audience, and could be a valuable asset in consumer and gaming applications as well as in a subset of basic and clinical research settings.
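The published toolkit is in Matlab; the following Python sketch only illustrates how accuracy (mean offset from a fixation target) and precision (RMS sample-to-sample deviation) figures of the kind quoted above are commonly computed, using synthetic gaze samples already expressed in degrees of visual angle.

```python
# Sketch (not the published Matlab toolkit): typical computation of accuracy
# and precision from gaze samples already converted to degrees of visual angle.
import numpy as np

def accuracy_deg(gaze_deg: np.ndarray, target_deg: np.ndarray) -> float:
    """Mean Euclidean offset of gaze samples (N x 2) from the target position (2,)."""
    return float(np.mean(np.linalg.norm(gaze_deg - target_deg, axis=1)))

def precision_rms_deg(gaze_deg: np.ndarray) -> float:
    """Root-mean-square of successive inter-sample distances."""
    steps = np.diff(gaze_deg, axis=0)
    return float(np.sqrt(np.mean(np.sum(steps**2, axis=1))))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    target = np.array([5.0, 0.0])                               # fixation target at 5 deg right
    gaze = target + rng.normal([0.3, -0.1], 0.1, size=(55, 2))  # ~1 s of synthetic samples at ~55 Hz
    print(f"accuracy  ~ {accuracy_deg(gaze, target):.2f} deg")
    print(f"precision ~ {precision_rms_deg(gaze):.2f} deg RMS")
```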
The blur horopter: Retinal conjugate surface in binocular viewing
From measurements of wavefront aberrations in 16 emmetropic eyes, we calculated where objects in the world create best-focused images across the central 27° (diameter) of the retina. This is the retinal conjugate surface. We calculated how the surface changes as the eye accommodates from near to far and found that it mostly maintains its shape. The conjugate surface is pitched top-back, meaning that the upper visual field is relatively hyperopic compared to the lower field. We extended the measurements of best image quality into the binocular domain by considering how the retinal conjugate surfaces for the two eyes overlap in binocular viewing. We call this binocular extension the blur horopter. We show that in combining the two images, with possibly different sharpness, the visual system creates a larger depth of field of apparently sharp images than occurs with monocular viewing. We examined similarities between the blur horopter and its analog in binocular vision: the binocular horopter. We compared these horopters to the statistics of the natural visual environment. The binocular horopter and scene statistics are strikingly similar. The blur horopter and natural statistics are qualitatively, but not quantitatively, similar. Finally, we used the measurements to refine what is commonly referred to as the zone of clear single binocular vision.
Integrating High Fidelity Eye, Head and World Tracking in a Wearable Device
A challenge in mobile eye tracking is balancing the quality of the data collected with the subject's ability to move freely and naturally through their environment. This challenge is exacerbated when an experiment requires multiple data streams recorded simultaneously and in high fidelity. Given these constraints, previous devices have had limited spatial and temporal resolution, as well as compression artifacts. To address this, we have designed a wearable device capable of recording a subject's body, head, and eye positions, simultaneously with RGB and depth data from the subject's visual environment, measured at high spatial and temporal resolution. The sensors include a binocular eye tracker, an RGB-D scene camera, a high-frame-rate scene camera, and two visual odometry sensors, which we synchronize and record from, with a total incoming data rate of over 700 MB/s. All sensors are operated by a mini-PC optimized for fast data collection and powered by a small battery pack. The headset weighs only 1.4 kg and the remainder just 3.9 kg, which can be comfortably worn by the subject in a small backpack, allowing full mobility.
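The device's recording software is not described in code here; as a generic illustration of one step it implies, the sketch below pairs each frame of a slow sensor stream with the nearest sample of a faster stream, assuming both streams carry timestamps on a common clock. The sensor names and rates are examples.

```python
# Generic sketch (not the device's recording software): align two timestamped
# sensor streams by pairing each sample of the slower stream with the nearest
# sample of the faster one.  Timestamps are assumed to share a common clock.
import bisect
from typing import List, Tuple

def align_nearest(slow_ts: List[float], fast_ts: List[float]) -> List[Tuple[float, float]]:
    """Return (slow_timestamp, nearest_fast_timestamp) pairs; fast_ts must be sorted."""
    pairs = []
    for t in slow_ts:
        i = bisect.bisect_left(fast_ts, t)
        candidates = [c for c in (i - 1, i) if 0 <= c < len(fast_ts)]
        nearest = min(candidates, key=lambda c: abs(fast_ts[c] - t))
        pairs.append((t, fast_ts[nearest]))
    return pairs

if __name__ == "__main__":
    eye_ts = [k / 200.0 for k in range(10)]   # e.g. a 200 Hz eye-tracker clock (s)
    depth_ts = [k / 30.0 for k in range(3)]   # e.g. a 30 Hz depth-camera clock (s)
    for slow, fast in align_nearest(depth_ts, eye_ts):
        print(f"depth frame at {slow:.3f} s <-> eye sample at {fast:.3f} s")
```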
A dataset of stereoscopic images and ground-truth disparity mimicking human fixations in peripersonal space
Binocular stereopsis is the ability of a visual system, whether belonging to a living being or a machine, to interpret the different visual information derived from two eyes/cameras for depth perception. From this perspective, ground-truth information about three-dimensional visual space, which is rarely available, is an ideal tool both for evaluating human performance and for benchmarking machine vision algorithms. In the present work, we implemented a rendering methodology in which the camera pose mimics the realistic eye pose of a fixating observer, thus including convergent eye geometry and cyclotorsion. The virtual environment we developed relies on highly accurate 3D virtual models, and its full controllability allows us to obtain the stereoscopic pairs together with the ground-truth depth and camera pose information. We thus created a stereoscopic dataset: GENUA PESTO (GENoa hUman Active fixation database: PEripersonal space STereoscopic images and grOund truth disparity). The dataset aims to provide a unified framework useful for a number of problems relevant to human and computer vision, from scene exploration and eye-movement studies to 3D scene reconstruction.
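As a small illustration of the convergent camera geometry the dataset relies on (two cameras separated by an interocular baseline, each oriented to fixate a common 3D point), here is a sketch; the baseline and fixation point are arbitrary, cyclotorsion is omitted, and the rendering pipeline itself is not shown.

```python
# Sketch of convergent-eye camera geometry: place left/right cameras at
# +/- half an interocular distance and orient each to fixate a common 3D point.
# Baseline and fixation point are arbitrary; cyclotorsion is omitted.
import numpy as np

def look_at(eye: np.ndarray, target: np.ndarray, up=np.array([0.0, 1.0, 0.0])) -> np.ndarray:
    """Rotation matrix whose rows are the camera's right, up and forward axes."""
    forward = target - eye
    forward = forward / np.linalg.norm(forward)
    right = np.cross(up, forward)
    right = right / np.linalg.norm(right)
    true_up = np.cross(forward, right)
    return np.vstack([right, true_up, forward])

if __name__ == "__main__":
    ipd = 0.065                                  # interocular distance in metres (assumed)
    fixation = np.array([0.1, -0.05, 0.6])       # fixation point in peripersonal space (m)
    eyes = {"left": np.array([-ipd / 2, 0.0, 0.0]), "right": np.array([ipd / 2, 0.0, 0.0])}
    forwards = {}
    for name, eye in eyes.items():
        R = look_at(eye, fixation)
        forwards[name] = R[2]                    # gaze direction of this camera
        print(name, "camera rotation:\n", np.round(R, 3))
    vergence = np.degrees(np.arccos(np.clip(np.dot(forwards["left"], forwards["right"]), -1.0, 1.0)))
    print(f"vergence angle ~ {vergence:.2f} deg")
```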
A Child-Friendly Wearable Device for Quantifying Environmental Risk Factors for Myopia
Purpose: In the past few decades, the prevalence of myopia, in which the eye grows too long, has increased dramatically. The visual environment appears to be critical in regulating eye growth, so it is important to determine the properties of the environment that put children at risk for myopia. Researchers have suggested that the intensity of illumination and the range of distances to which a child's eyes are exposed are important, but this has not been confirmed. Methods: We designed, built, and tested an inexpensive, child-friendly, head-mounted device that can measure the intensity and spectral content of illumination approaching the eyes, as well as the distances to which the central visual field of the eyes is exposed. The device is mounted on a child's bicycle helmet. It includes a camera that measures distances over a substantial range and a six-channel spectral sensor. The sensors are hosted by a lightweight, battery-powered microcomputer. We acquired pilot data from children while they were engaged in various indoor and outdoor activities. Results: The device proved to be comfortable, easy, and safe to wear, and able to collect very useful data on the statistics of illumination and distances. Conclusions: The device is an ideal tool for use in a population of young children, some of whom will later develop myopia and some of whom will not. Translational Relevance: Such data would be critical for determining the properties of the visual environment that put children at risk for becoming myopic.
