
    Time-locked perceptual fading induced by visual transients

    Get PDF
    After prolonged fixation, a stationary object placed in the peripheral visual field fades and disappears from our visual awareness, especially at low luminance contrast (the Troxler effect). Here, we report that similar fading can be triggered by visual transients, such as additional visual stimuli flashed near the object, apparent motion, or a brief removal of the object itself (blinking). The fading occurs even without prolonged adaptation and is time-locked to the presentation of the visual transients. Experiments showed that the effect of a flashed object decreased monotonically with its distance from the target object. Consistent with this result, when apparent motion, consisting of a sequence of flashes, was presented between stationary disks, these target disks perceptually disappeared as if erased by the moving object. Blinking the target disk, instead of flashing an additional visual object, turned out to be sufficient to induce the fading. The effect of blinking peaked at a blink duration of around 80 msec. Our findings reveal a unique mechanism that controls the visibility of visual objects in a spatially selective and time-locked manner in response to transient visual inputs. Possible mechanisms underlying this phenomenon are discussed.

    3D Object Discovery and Modeling Using Single RGB-D Images Containing Multiple Object Instances

    Full text link
    Unsupervised object modeling is important in robotics, especially for handling a large set of objects. We present a method for unsupervised 3D object discovery, reconstruction, and localization that exploits multiple instances of an identical object contained in a single RGB-D image. The proposed method does not rely on segmentation, scene knowledge, or user input, and thus is easily scalable. Our method aims to find recurrent patterns in a single RGB-D image by utilizing the appearance and geometry of salient regions. We extract keypoints and match them in pairs based on their descriptors. We then generate triplets of mutually matching keypoints using several geometric criteria to minimize false matches. The relative poses of the matched triplets are computed and clustered to discover sets of triplet pairs with similar relative poses. Triplets belonging to the same set are likely to belong to the same object and are used to construct an initial object model. Detecting the remaining instances with the initial object model using RANSAC allows the model to be further expanded and refined. The automatically generated object models are both compact and descriptive. We show quantitative and qualitative results on RGB-D images with various objects, including some from the Amazon Picking Challenge. We also demonstrate the use of our method in an object-picking scenario with a robotic arm.
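
    As an illustration of the within-image keypoint matching step described above, the sketch below pairs keypoints of a single RGB-D image by descriptor similarity with a ratio test. It is a minimal NumPy sketch, not the authors' implementation; the function name and the ratio threshold are illustrative, and the triplet generation, pose clustering, and RANSAC stages are omitted.

```python
import numpy as np

def match_keypoints_within_image(desc, ratio=0.8):
    """Pair keypoints of a single image whose descriptors are mutually
    similar (Lowe-style ratio test), excluding trivial self-matches.

    desc: (N, D) array of keypoint descriptors from one RGB-D image.
    Returns a sorted list of index pairs (i, j) with i < j.
    """
    # Pairwise Euclidean distances between all descriptors.
    d = np.linalg.norm(desc[:, None, :] - desc[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # ignore self-matches

    pairs = []
    for i in range(len(desc)):
        order = np.argsort(d[i])
        best, second = d[i, order[0]], d[i, order[1]]
        # Keep only distinctive matches; ideally these correspond to the
        # same physical point on two different instances of the object.
        if best < ratio * second:
            j = int(order[0])
            pairs.append((min(i, j), max(i, j)))
    return sorted(set(pairs))
```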

    Motion Priority Optimization Framework towards Automated and Teleoperated Robot Cooperation in Industrial Recovery Scenarios

    Full text link
    In this study, we present an optimization framework for efficient motion priority design between automated and teleoperated robots in an industrial recovery scenario. Although robots have recently become increasingly common in industrial sites, there are still challenges in achieving human-robot collaboration/cooperation (HRC), where human workers and robots are engaged in collaborative and cooperative tasks in a shared workspace. For example, the corresponding factory cell must be suspended for safety when an industrial robot drops an assembly part in the workspace. After that, a human worker is allowed to enter the robot workspace to carry out the robot recovery. This process interrupts manufacturing, which leads to a productivity reduction. Recently, robotic teleoperation technology has emerged as a promising solution that enables people to perform tasks remotely and safely, and it can be used in the recovery process in manufacturing failure scenarios. Our proposition involves the design of an appropriate priority function that aids in collision avoidance between the manufacturing and recovery robots and facilitates continuous processes with minimal production loss within an acceptable risk level. This paper presents a framework, including an HRC simulator and an optimization formulation, for finding optimal parameters of the priority function. Through quantitative and qualitative experiments, we provide a proof of our novel concept and demonstrate its feasibility.
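
    The paper's priority function and HRC simulator are not reproduced here; the sketch below only illustrates the general idea of parameterizing a priority function and grid-searching its parameters against a recorded set of recovery episodes under a risk budget. All names, weights, and the episode format are hypothetical assumptions.

```python
import itertools

def priority(dist, progress, w_dist=1.0, w_prog=1.0):
    """Hypothetical priority score for the automated robot: positive means
    it keeps right-of-way, negative means it yields to the teleoperated
    recovery robot."""
    return w_dist * dist - w_prog * (1.0 - progress)

def evaluate(w_dist, w_prog, episodes):
    """Stand-in for the HRC simulator: accumulates (production_loss, risk)
    over recorded recovery episodes (distance, task progress, margin)."""
    loss = risk = 0.0
    for dist, progress, collision_margin in episodes:
        yields = priority(dist, progress, w_dist, w_prog) < 0.0
        loss += 1.0 if yields else 0.0                    # yielding pauses production
        risk += 0.0 if yields else max(0.0, 0.5 - collision_margin)
    return loss, risk

def grid_search(episodes, risk_budget=1.0):
    """Pick the weights with the least production loss within the risk budget."""
    best = None
    for w_dist, w_prog in itertools.product([0.5, 1.0, 2.0], repeat=2):
        loss, risk = evaluate(w_dist, w_prog, episodes)
        if risk <= risk_budget and (best is None or loss < best[0]):
            best = (loss, w_dist, w_prog)
    return best
```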

    Generic decoding of seen and imagined objects using hierarchical visual features

    Get PDF
    Object recognition is a key function in both human and machine vision. While recent studies have achieved fMRI decoding of seen and imagined contents, the prediction is limited to training examples. We present a decoding approach for arbitrary objects, using the machine vision principle that an object category is represented by a set of features rendered invariant through hierarchical processing. We show that visual features, including those from a convolutional neural network, can be predicted from fMRI patterns, and that greater accuracy is achieved for low-/high-level features with lower-/higher-level visual areas, respectively. Predicted features are used to identify seen/imagined object categories (extending beyond decoder training) from a set of computed features for numerous object images. Furthermore, the decoding of imagined objects reveals progressive recruitment of higher to lower visual representations. Our results demonstrate a homology between human and machine vision and its utility for brain-based information retrieval.
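
    A minimal sketch of the decoding idea, assuming ridge regression as the feature decoder and random arrays in place of real fMRI and CNN data; the actual models, layers, and data shapes used in the study may differ.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Illustrative shapes only; real data come from fMRI and a CNN.
rng = np.random.default_rng(0)
X_train = rng.standard_normal((200, 1000))   # fMRI voxel patterns (trials x voxels)
Y_train = rng.standard_normal((200, 4096))   # CNN-layer features of the seen images
X_test  = rng.standard_normal((10, 1000))    # held-out (or imagery) trials

# One regression decoder per feature layer: voxels -> visual features.
decoder = Ridge(alpha=100.0).fit(X_train, Y_train)
pred = decoder.predict(X_test)

# Identify the object category whose average feature vector correlates best
# with the predicted features (categories may lie outside the training set).
category_feats = rng.standard_normal((50, 4096))   # e.g., image-averaged features per category
corr = np.corrcoef(np.vstack([pred, category_feats]))[:10, 10:]
identified = corr.argmax(axis=1)
```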

    Attention modulates neural representation to render reconstructions according to subjective appearance

    Get PDF
    Stimulus images can be reconstructed from visual cortical activity. However, our perception of stimuli is shaped by both stimulus-induced and top-down processes, and it is unclear whether and how reconstructions reflect top-down aspects of perception. Here, we investigate the effect of attention on reconstructions using fMRI activity measured while subjects attend to one of two superimposed images. A state-of-the-art method is used for image reconstruction, in which brain activity is translated (decoded) into deep neural network (DNN) features of hierarchical layers and then into an image. Reconstructions resemble the attended rather than the unattended images. They can be modeled by superimposed images with biased contrasts, comparable to the appearance during attention. Attentional modulations are found in a broad range of hierarchical visual representations and mirror the brain–DNN correspondence. Our results demonstrate that top-down attention counters stimulus-induced responses, modulating neural representations to render reconstructions in accordance with subjective appearance.
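
    The contrast-bias modeling mentioned above can be illustrated with a small hypothetical sketch: superimpose the attended and unattended images with a weight and pick the weight whose blend best correlates with a reconstruction. The function names and the correlation criterion are assumptions, not the authors' exact procedure.

```python
import numpy as np

def blend(img_a, img_b, w):
    """Superimpose two images with contrast bias w toward the first (attended) one."""
    return w * img_a + (1.0 - w) * img_b

def best_contrast_bias(recon, img_attended, img_unattended):
    """Return the blend weight whose superposition best correlates with the
    reconstruction; w > 0.5 indicates a bias toward the attended image."""
    def corr(a, b):
        return np.corrcoef(a.ravel(), b.ravel())[0, 1]
    grid = np.linspace(0.0, 1.0, 101)
    scores = [corr(recon, blend(img_attended, img_unattended, w)) for w in grid]
    return grid[int(np.argmax(scores))]
```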

    Process Fault Diagnosis using Neural Networks and Fault Tree Analysis Information

    Get PDF
    Neural networks have recently become the focus of much attention, largely because of their ability to handle a wide range of complex and nonlinear problems. This paper presents a new integrated approach that uses neural networks for diagnosing process failures. Fault propagation in the process is modeled by causal relationships derived from the fault tree and its minimal cut sets. The measurement patterns required for training and testing the neural network were obtained from this fault propagation model. The network is able to diagnose faults even when certain sensors malfunction. We demonstrate, using a nitric acid cooler process, how the neural network can learn and successfully diagnose the faults.
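
    A rough sketch of the idea, assuming a small multilayer perceptron (scikit-learn's MLPClassifier) trained on symptom patterns derived from a fault propagation model; the patterns, fault labels, and sensor-masking strategy shown are invented for illustration only.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Hypothetical measurement patterns derived from a fault propagation model:
# each row encodes sensor deviations (+1 high, -1 low, 0 normal), and each
# label is the root fault (minimal cut set) that produces the pattern.
X = np.array([
    [ 1,  1,  0,  0],   # fault 0: e.g., coolant flow low -> temperature high
    [ 1,  0,  1,  0],   # fault 1
    [ 0,  1,  1,  1],   # fault 2
    [ 1,  1,  0,  1],   # fault 0 with an additional symptom
])
y = np.array([0, 1, 2, 0])

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(X, y)

# Diagnosis can still be attempted when a sensor fails by masking its
# reading to the "normal" value before classification.
pattern = np.array([[0, 1, 1, 1]])
pattern[0, 3] = 0            # sensor 4 assumed dead
print(clf.predict(pattern))
```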

    Inter-subject neural code converter for visual image representation.

    Get PDF
    Brain activity patterns differ from person to person, even for an identical stimulus. In functional brain mapping studies, it is important to align brain activity patterns between subjects for group statistical analyses. While anatomical templates are widely used for inter-subject alignment in functional magnetic resonance imaging (fMRI) studies, they are not sufficient to identify the mapping between voxel-level functional responses representing specific mental contents. Recent work has suggested that statistical learning methods could be used to transform individual brain activity patterns into a common space while preserving representational contents. Here, we propose a flexible method for functional alignment, the "neural code converter," which converts one subject's brain activity pattern into another's representing the same content. The neural code converter was designed to learn statistical relationships between the fMRI activity patterns of paired subjects obtained while they saw an identical series of stimuli. It predicts the signal intensity of individual voxels of one subject from a pattern of multiple voxels of the other subject. To test this method, we used fMRI activity patterns measured while subjects observed visual images consisting of random and structured patches. We show that fMRI activity patterns for visual images not used for training the converter could be predicted from those of another subject whose brain activity was recorded for the same stimuli. We further confirm that visual images can be accurately reconstructed from the predicted activity patterns alone. Furthermore, we show that a classifier trained only on predicted fMRI activity patterns could accurately classify measured fMRI activity patterns. These results demonstrate that the neural code converter can translate neural codes between subjects while preserving contents related to visual images. While this method is useful for functional alignment and decoding, it may also provide a basis for brain-to-brain communication, using the converted patterns to design brain stimulation.
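
    A minimal sketch of the converter idea, assuming a linear (ridge) mapping from one subject's voxel pattern to another's and random arrays in place of real fMRI data; the study's actual regression model and voxel selection may differ.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Paired fMRI patterns recorded while two subjects viewed the same stimuli.
rng = np.random.default_rng(1)
A_train = rng.standard_normal((300, 800))   # subject A: trials x voxels
B_train = rng.standard_normal((300, 600))   # subject B: trials x voxels
A_test  = rng.standard_normal((20, 800))    # new stimuli, subject A only

# The converter predicts each of subject B's voxels from subject A's
# multi-voxel pattern for the same stimulus.
converter = Ridge(alpha=10.0).fit(A_train, B_train)
B_pred = converter.predict(A_test)          # subject A's data in B's "neural code"

# A decoder or classifier trained on subject B's own data can then be
# applied to B_pred, e.g., to identify or reconstruct what subject A saw.
```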

    Correlation Power Analysis with Companding Methods

    Get PDF
    Companding methods have been widely applied in signal processing for quantization, and various companding schemes have been proposed to improve the PAPR (Peak-to-Average Power Ratio) of OFDM systems. In this paper, based on an exploration of the features of μ-law functions, we propose Correlation Power Analysis (CPA) with μ-law companding methods. The μ-law expanding function is used to preprocess the power traces collected during AES encryption on an ASIC and an FPGA, respectively. Experiments show that it reduces the number of power traces needed to recover all the key bytes by as much as 25.8% compared with conventional CPA.
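
    For illustration, a compact sketch of CPA with μ-law expansion as a trace-preprocessing step, assuming a Hamming-weight leakage model for one AES key byte. The S-box table is a placeholder and the helper names are illustrative; this is not the paper's code.

```python
import numpy as np

MU = 255.0

def mu_law_expand(traces):
    """Apply the mu-law expanding function sample-wise to power traces
    normalized into [-1, 1] (preprocessing step before CPA)."""
    t = traces / np.max(np.abs(traces))
    return np.sign(t) * ((1.0 + MU) ** np.abs(t) - 1.0) / MU

# Hamming-weight leakage model for one AES key byte. SBOX is a placeholder;
# substitute the real AES S-box lookup table.
SBOX = np.arange(256)
HW = np.array([bin(v).count("1") for v in range(256)])

def cpa_key_byte(traces, plaintext_byte):
    """Return the key-byte guess whose hypothetical leakage has the largest
    absolute Pearson correlation with the expanded traces at any sample."""
    traces = mu_law_expand(traces)                        # companding preprocessing
    t = (traces - traces.mean(0)) / (traces.std(0) + 1e-12)
    best_guess, best_corr = 0, 0.0
    for k in range(256):
        hyp = HW[SBOX[plaintext_byte ^ k]].astype(float)  # leakage per trace
        h = (hyp - hyp.mean()) / (hyp.std() + 1e-12)
        corr = np.abs(h @ t) / len(hyp)                   # correlation per time sample
        if corr.max() > best_corr:
            best_guess, best_corr = k, float(corr.max())
    return best_guess
```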