Design and Freeform Fabrication of Deployable Structures with Lattice Skins
Frontier environments—such as battlefields, hostile territories, remote locations, or outer
space—drive the need for lightweight, deployable structures that can be stored in a compact
configuration and deployed quickly and easily in the field. We introduce the concept of lattice
skins to enable the design, solid freeform fabrication (SFF), and deployment of customizable
structures with nearly arbitrary surface profile and lightweight multi-functionality. Using
Duraform FLEX® material in a selective laser sintering machine, large deployable structures are
fabricated in a nominal build chamber by either virtually collapsing them into a condensed form
or decomposing them into smaller parts. Before fabrication, lattice sub-skins are added
strategically beneath the surface of the part. The lattices provide elastic energy for folding and
deploying the structure or constrain expansion upon application of internal air pressure. Nearly
arbitrary surface profiles are achievable and internal space is preserved for subsequent usage. In
this paper, we present the results of a set of experimental and computational models that are
designed to provide proof of concept for lattice skins as a deployment mechanism in SFF and to
demonstrate the effect of lattice structure on deployed shape.
Mechanical Engineering
A Neural Model of How the Brain Computes Heading from Optic Flow in Realistic Scenes
Animals avoid obstacles and approach goals in novel cluttered environments using visual information, notably optic flow, to compute heading, or direction of travel, with respect to objects in the environment. We present a neural model of how heading is computed that describes interactions among neurons in several visual areas of the primate magnocellular pathway, from retina through V1, MT+, and MSTd. The model produces outputs which are qualitatively and quantitatively similar to human heading estimation data in response to complex natural scenes. The model estimates heading to within 1.5° in random dot or photo-realistically rendered scenes and within 3° in video streams from driving in real-world environments. Simulated rotations of less than 1 degree per second do not affect model performance, but faster simulated rotation rates deteriorate performance, as in humans. The model is part of a larger navigational system that identifies and tracks objects while navigating in cluttered environments.
National Science Foundation (SBE-0354378, BCS-0235398); Office of Naval Research (N00014-01-1-0624); National Geospatial-Intelligence Agency (NMA201-01-1-2016)
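For purely translational self-motion, the optic-flow field radiates from a single point, the focus of expansion, which coincides with the heading direction. As a minimal illustration of that geometric fact (not of the neural model above), the focus of expansion can be recovered from flow vectors by least squares; the function name and synthetic data below are our own assumptions:

```python
import numpy as np

def focus_of_expansion(points, flows):
    """Least-squares focus-of-expansion (FoE) estimate. For pure observer
    translation, every flow vector v at image point p points radially away
    from the FoE f, so cross(p - f, v) = 0, i.e.
        v_y * f_x - v_x * f_y = v_y * p_x - v_x * p_y,
    one linear equation per point in the unknowns (f_x, f_y)."""
    vx, vy = flows[:, 0], flows[:, 1]
    A = np.stack([vy, -vx], axis=1)
    b = vy * points[:, 0] - vx * points[:, 1]
    foe, *_ = np.linalg.lstsq(A, b, rcond=None)
    return foe

# Synthetic radial flow expanding from a known FoE (no rotation, no noise).
rng = np.random.default_rng(0)
pts = rng.uniform(-1.0, 1.0, size=(200, 2))
true_foe = np.array([0.2, -0.1])
flows = 0.5 * (pts - true_foe)
est = focus_of_expansion(pts, flows)  # recovers (0.2, -0.1)
```

With noise-free radial flow the system is exactly determined and the estimate matches the true FoE; the model in the abstract handles the far harder case of noisy, rotation-contaminated flow in natural scenes.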
Event-based Vision: A Survey
Event cameras are bio-inspired sensors that differ from conventional frame
cameras: Instead of capturing images at a fixed rate, they asynchronously
measure per-pixel brightness changes, and output a stream of events that encode
the time, location and sign of the brightness changes. Event cameras offer
attractive properties compared to traditional cameras: high temporal resolution
(in the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low
power consumption, and high pixel bandwidth (on the order of kHz) resulting in
reduced motion blur. Hence, event cameras have a large potential for robotics
and computer vision in challenging scenarios for traditional cameras, such as
low-latency, high speed, and high dynamic range. However, novel methods are
required to process the unconventional output of these sensors in order to
unlock their potential. This paper provides a comprehensive overview of the
emerging field of event-based vision, with a focus on the applications and the
algorithms developed to unlock the outstanding properties of event cameras. We
present event cameras from their working principle, the actual sensors that are
available and the tasks that they have been used for, from low-level vision
(feature detection and tracking, optic flow, etc.) to high-level vision
(reconstruction, segmentation, recognition). We also discuss the techniques
developed to process events, including learning-based techniques, as well as
specialized processors for these novel sensors, such as spiking neural
networks. Additionally, we highlight the challenges that remain to be tackled
and the opportunities that lie ahead in the search for a more efficient,
bio-inspired way for machines to perceive and interact with the world.
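The per-pixel event encoding described above can be approximated in a few lines. The sketch below is a toy frame-based simulator, not any real sensor's pipeline; the threshold value and function name are illustrative:

```python
import numpy as np

def frames_to_events(frames, timestamps, threshold=0.2, eps=1e-6):
    """Toy frame-based approximation of an event camera: emit an event
    (t, x, y, polarity) whenever a pixel's log brightness has changed by
    at least `threshold` since that pixel last fired. Real sensors do this
    asynchronously in analog circuitry; this coarse discrete version only
    illustrates the encoding."""
    log_ref = np.log(frames[0] + eps)  # log intensity at each pixel's last event
    events = []
    for frame, t in zip(frames[1:], timestamps[1:]):
        log_f = np.log(frame + eps)
        diff = log_f - log_ref
        ys, xs = np.nonzero(np.abs(diff) >= threshold)
        for y, x in zip(ys, xs):
            events.append((t, int(x), int(y), 1 if diff[y, x] > 0 else -1))
            log_ref[y, x] = log_f[y, x]  # reset reference at the fired pixel
    return events

# Two 2x2 frames; only the pixel at (row 0, col 1) brightens enough to fire.
f0 = np.full((2, 2), 0.5)
f1 = f0.copy()
f1[0, 1] = 1.0
events = frames_to_events(np.stack([f0, f1]), [0.0, 0.01])  # -> [(0.01, 1, 0, 1)]
```

The logarithmic comparison is what gives real event pixels their high dynamic range: a fixed log-domain threshold corresponds to a fixed relative contrast, independent of absolute illumination.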
Recommended from our members
Getting the best outcomes from epilepsy surgery.
Neurosurgery is an underutilized treatment that can potentially cure drug-refractory epilepsy. Careful, multidisciplinary presurgical evaluation is vital for selecting patients and ensuring optimal outcomes. Advances in neuroimaging have improved diagnosis and guided surgical intervention. Invasive electroencephalography allows the evaluation of complex patients who would otherwise not be candidates for neurosurgery. We review the current state of the assessment and selection of patients and consider established and novel surgical procedures and associated outcome data. We aim to dispel myths that may inhibit physicians from referring and patients from considering neurosurgical intervention for drug-refractory focal epilepsies. Ann Neurol 2018;83:676-690
Change blindness: eradication of gestalt strategies
Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this we changed the spatial positions of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This suggests two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it lends further weight to the argument that objects may be stored in and retrieved from a pre-attentional store during this task.
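The spoke-shift manipulation can be sketched as a radial displacement of each rectangle's centre away from (or toward) fixation by a fixed visual angle. Every name, the viewing distance, and the degrees-to-centimetres conversion below are our own assumptions, not taken from the paper:

```python
import math

def shift_along_spoke(pos, fixation, shift_deg, viewing_distance_cm=57.0):
    """Move a stimulus radially along the imaginary spoke from fixation
    by `shift_deg` of visual angle (positive = outward). At roughly 57 cm
    viewing distance, 1 degree of visual angle subtends about 1 cm on
    screen (small-angle approximation). Hypothetical reconstruction only."""
    shift_cm = 2.0 * viewing_distance_cm * math.tan(math.radians(shift_deg) / 2.0)
    dx, dy = pos[0] - fixation[0], pos[1] - fixation[1]
    r = math.hypot(dx, dy)           # current eccentricity of the stimulus
    scale = (r + shift_cm) / r       # stretch factor along the spoke
    return (fixation[0] + dx * scale, fixation[1] + dy * scale)

# A rectangle centred 10 cm right of fixation, pushed 1 degree outward.
x, y = shift_along_spoke((10.0, 0.0), (0.0, 0.0), 1.0)
```

Because the shift is purely radial, the angular arrangement of the eight rectangles around fixation is preserved, which is what makes the manipulation a clean test of whether participants rely on the global configuration.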