2,977 research outputs found

    Do lemurs know when they could be wrong? An investigation of information seeking in three species of lemur (Lemur catta, Eulemur rubriventer, and Varecia variegata)

    Sixteen lemurs, including representatives from three species (Lemur catta, Eulemur rubriventer, Varecia variegata), were presented with a food-seeking task in which information about the reward's location, in one of two plastic tubes, was either known or not known. We evaluated whether lemurs would first look into the tube prior to making a choice. This information-seeking task aimed to assess whether subjects would display memory awareness, seeking additional information when they became aware they lacked knowledge of the reward's location. We predicted lemurs would be more likely to look into the tube when they had insufficient knowledge about the reward's position. Lemurs successfully gained the reward on most trials. However, they looked on the majority of trials regardless of whether they had all the necessary information to make a correct choice. The minimal cost of looking may have resulted in checking behaviour both to confirm what they already knew and to gain knowledge they did not have. When the cost of looking increased (the end of the tube was elevated, requiring additional energy expenditure to look inside; Experiment 2), lemurs still looked into tubes on both seen and unseen trials; however, the frequency of looking increased when opaque tubes were used (where they could not see the reward's location after baiting). This could suggest they checked more when they were less sure of their knowledge state.

    A Whitehead product for track groups II

    Two direct relations are exhibited between the Whitehead product for track groups studied in [4] and the generalized Whitehead product in the sense of Arkowitz. The problem of determining the order of the Whitehead square is posed and some computations are given.
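    For orientation, the classical Whitehead product and the Whitehead square referred to in this abstract can be written as below; this is the standard definition for a space X, not the track-group version or the Arkowitz-style generalization that the paper actually studies.

```latex
% Classical Whitehead product: for \alpha \in \pi_p(X) and \beta \in \pi_q(X),
% compose the attaching map w of the top cell of S^p \times S^q with \alpha \vee \beta.
[\,\cdot\,,\,\cdot\,] : \pi_p(X) \times \pi_q(X) \longrightarrow \pi_{p+q-1}(X),
\qquad
[\alpha,\beta] = (\alpha \vee \beta)\circ w, \quad w : S^{p+q-1} \to S^{p} \vee S^{q}.
% The Whitehead square whose order is in question is the self-product of the identity class:
[\iota_n,\iota_n] \in \pi_{2n-1}(S^{n}), \qquad \iota_n = [\mathrm{id}_{S^{n}}] \in \pi_n(S^{n}).
```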

    Bivariant long exact sequences II

    Given a pair of short exact sequences (1) 0 → X → Y → Z → 0, 0 → A → B → C → 0 in an abelian category A, with sufficiently many projectives and injectives, and given an additive bifunctor T, we show that T applied to the pair (1) gives rise to a diagram of a type described by C. T. C. Wall that contains 15 interlocking long exact sequences involving the derived functors of T at (A, X), (A, Y), etc., and also involving the derived functors of T_p and T_q, which are two functors with domain A² that arise through the failure of T to preserve pullbacks and pushouts. In the case of Hom (respectively ⊗) in the category of G-modules for a group G, the derived functors of T_p (respectively T_q) are expressed in terms of group cohomology (respectively homology).
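    As background for the last sentence, the standard identifications of the derived functors of Hom and ⊗ over a group ring with group (co)homology, and the single long exact sequence that the 15-sequence diagram elaborates, are sketched below; the functors T_p and T_q themselves are the authors' constructions and are not reproduced here.

```latex
% Background facts only (not the paper's construction). For a group G, a G-module M,
% and \mathbb{Z} the trivial module:
\operatorname{Ext}^{n}_{\mathbb{Z}G}(\mathbb{Z},M) \;\cong\; H^{n}(G;M),
\qquad
\operatorname{Tor}^{\mathbb{Z}G}_{n}(\mathbb{Z},M) \;\cong\; H_{n}(G;M).
% A single short exact sequence 0 \to A \to B \to C \to 0 already gives, e.g. for Hom(-,X),
% one long exact sequence; the paper's diagram interlocks 15 such sequences for the pair (1):
\cdots \to \operatorname{Ext}^{n}(C,X) \to \operatorname{Ext}^{n}(B,X) \to \operatorname{Ext}^{n}(A,X)
       \to \operatorname{Ext}^{n+1}(C,X) \to \cdots
```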

    Robust Super-resolution by Fusion of Interpolated Frames for Color and Grayscale Images

    Multi-frame super-resolution (SR) processing seeks to overcome undersampling issues that can lead to undesirable aliasing artifacts in imaging systems. A key factor in effective multi-frame SR is accurate subpixel inter-frame registration. Accurate registration is more difficult when frame-to-frame motion does not contain simple global translation and includes locally moving scene objects. SR processing is further complicated when the camera captures full color by using a Bayer color filter array (CFA). Various aspects of these SR challenges have been previously investigated. Fast SR algorithms tend to have difficulty accommodating complex motion and CFA sensors. Furthermore, methods that can tolerate these complexities tend to be iterative in nature and may not be amenable to real-time processing. In this paper, we present a new fast approach for performing SR in the presence of these challenging imaging conditions. We refer to the new approach as Fusion of Interpolated Frames (FIF) SR. The FIF SR method decouples the demosaicing, interpolation, and restoration steps to simplify the algorithm. Frames are first individually demosaiced and interpolated to the desired resolution. Next, FIF uses a novel weighted sum of the interpolated frames to fuse them into an improved resolution estimate. Finally, restoration is applied to mitigate any degrading camera effects. The proposed FIF approach has a lower computational complexity than many iterative methods, making it a candidate for real-time implementation. We provide a detailed description of the FIF SR method and show experimental results using synthetic and real datasets in both constrained and complex imaging scenarios. Experiments include airborne grayscale imagery and Bayer CFA image sets with affine background motion plus local motion.
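    A minimal sketch of the pipeline structure described in this abstract is given below, assuming 2-D single-channel frames. The demosaicing, subpixel registration, fusion weights, and restoration filter are generic placeholders supplied by the caller, not the weighting scheme or restoration used in the paper.

```python
# Sketch of a Fusion-of-Interpolated-Frames style pipeline (illustrative, not the paper's
# algorithm). `demosaic`, `estimate_shift`, and `wiener_filter` are assumed helpers.
import numpy as np
from scipy.ndimage import zoom, shift

def fif_super_resolution(raw_frames, upscale, demosaic, estimate_shift, wiener_filter):
    """Fuse low-resolution 2-D frames into one higher-resolution estimate."""
    # 1) Demosaic (identity for grayscale data) and interpolate each frame independently.
    interpolated = [zoom(demosaic(f), upscale, order=3) for f in raw_frames]

    # 2) Register every frame to the first one (pure subpixel translation, for brevity).
    reference = interpolated[0]
    registered, weights = [], []
    for frame in interpolated:
        dy, dx = estimate_shift(reference, frame)        # subpixel displacement
        aligned = shift(frame, (dy, dx), order=3)
        registered.append(aligned)
        # 3) Weight each frame by registration quality (here: inverse residual error).
        residual = np.mean((aligned - reference) ** 2)
        weights.append(1.0 / (residual + 1e-8))

    # 4) Weighted fusion of the aligned, interpolated frames.
    weights = np.asarray(weights) / np.sum(weights)
    fused = np.tensordot(weights, np.stack(registered), axes=1)

    # 5) Restoration step (e.g. deconvolution with the system PSF).
    return wiener_filter(fused)
```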

    A semi-classical over-barrier model for charge exchange between highly charged ions and one-optical electron atoms

    Absolute total cross sections for electron capture between slow, highly charged ions and alkali targets have recently been measured. These cross sections are found to follow a scaling law with the projectile charge that differs from the one previously proposed based on a classical over-barrier model (OBM) and verified using rare gases and molecules as targets. In this paper we develop a "semi-classical" (i.e. including some quantal features) OBM that attempts to recover the experimental results. The method is then applied to ion-hydrogen collisions and compared with the results of a sophisticated quantum-mechanical calculation. In the former case the agreement is very good, while in the latter the results are less satisfactory. A qualitative explanation for the discrepancies is attempted.
    Comment: RevTeX, uses epsf; 6 pages text + 3 EPS figures. Journal of Physics B (scheduled March 2000). This revision corrects fig.
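    For orientation, the classical OBM scaling that such measurements are usually compared against takes the textbook form below (atomic units); this is the generic construction, not the semi-classical modification developed in the paper.

```latex
% Textbook classical over-barrier model, shown only for orientation. An electron of
% binding energy I_t can transfer to a projectile of charge q once the saddle of the
% combined Coulomb potential drops below its energy, i.e. inside a critical radius
R_c \;\simeq\; \frac{2\sqrt{q} + 1}{I_t},
% giving a geometric capture cross section and its large-q scaling with projectile charge:
\sigma \;\simeq\; \pi R_c^{2} \;\propto\; \frac{q}{I_t^{2}} \quad (q \gg 1).
```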

    Block Matching and Wiener Filtering Approach to Optical Turbulence Mitigation and Its Application to Simulated and Real Imagery with Quantitative Error Analysis

    We present a block-matching and Wiener filtering approach to atmospheric turbulence mitigation for long-range imaging of extended scenes. We evaluate the proposed method, along with some benchmark methods, using simulated and real-image sequences. The simulated data are generated with a simulation tool developed by one of the authors. These data provide objective truth and allow for quantitative error analysis. The proposed turbulence mitigation method takes a sequence of short-exposure frames of a static scene and outputs a single restored image. A block-matching registration algorithm is used to provide geometric correction for each of the individual input frames. The registered frames are then averaged, and the average image is processed with a Wiener filter to provide deconvolution. An important aspect of the proposed method lies in how we model the degradation point spread function (PSF) for the purposes of Wiener filtering. We use a parametric model that takes into account the level of geometric correction achieved during image registration. This is unlike any method we are aware of in the literature. By matching the PSF to the level of registration in this way, the Wiener filter is able to fully exploit the reduced blurring achieved by registration. We also describe a method for estimating the atmospheric coherence diameter (or Fried parameter) from the estimated motion vectors. We provide a detailed performance analysis that illustrates how the key tuning parameters impact system performance. The proposed method is relatively simple computationally, yet it has excellent performance in comparison with state-of-the-art benchmark methods in our study
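    A minimal sketch of the average-then-Wiener-filter stage described here is shown below; block-matching registration is an assumed helper, the PSF is a generic argument (taken to be image-sized and centered) rather than the paper's registration-aware parametric model, and a constant noise-to-signal ratio is assumed.

```python
# Sketch of the register / average / Wiener-deconvolve stages (illustrative only).
# `register_frames` is an assumed helper returning geometrically corrected frames;
# `psf` is assumed to be the same shape as the frames and centered.
import numpy as np

def restore_sequence(frames, register_frames, psf, nsr=1e-2):
    """Register short-exposure frames, average them, then Wiener-deconvolve."""
    # 1) Geometric correction of each frame (block-matching registration in the paper).
    aligned = register_frames(frames)

    # 2) Temporal average of the registered frames reduces residual warping and noise.
    mean_img = np.mean(aligned, axis=0)

    # 3) Wiener deconvolution with the assumed degradation PSF and constant NSR.
    H = np.fft.fft2(np.fft.ifftshift(psf))
    wiener = np.conj(H) / (np.abs(H) ** 2 + nsr)       # nsr = noise-to-signal ratio
    restored = np.real(np.fft.ifft2(wiener * np.fft.fft2(mean_img)))
    return restored
```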

    Hybrid machine learning architecture for automated detection and grading of retinal images for diabetic retinopathy

    Purpose: Diabetic retinopathy is the leading cause of blindness, affecting over 93 million people. An automated clinical retinal screening process would be highly beneficial and provide a valuable second opinion for doctors worldwide. A computer-aided system to detect and grade retinal images would enhance the workflow of endocrinologists. Approach: For this research, we make use of a publicly available dataset comprising 3662 images. We present a hybrid machine learning architecture to detect and grade the level of diabetic retinopathy (DR) severity. We also present and compare simple transfer learning-based approaches using established networks such as AlexNet, VGG16, ResNet, Inception-v3, NASNet, DenseNet, and GoogLeNet for DR detection. For the grading stage (mild, moderate, proliferative, or severe), we present an approach of combining various convolutional neural networks with principal component analysis for dimensionality reduction and a support vector machine classifier. We study the performance of these networks under different preprocessing conditions. Results: We compare these results with various existing state-of-the-art approaches, including single-stage architectures. We demonstrate that this architecture is more robust to limited training data and class imbalance. We achieve an accuracy of 98.4% for DR detection and an accuracy of 96.3% for distinguishing the severity of DR, thereby setting a benchmark for future research efforts using a limited set of training images. Conclusions: Results obtained using the proposed approach serve as a benchmark for future research efforts. We demonstrate as a proof of concept that an automated detection and grading system could be developed with a limited set of images and labels. Such an independent detection-and-grading architecture could be used where the need is greatest, in areas with a scarcity of trained clinicians.
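    A minimal sketch of the grading stage outlined in this abstract (deep features, then PCA, then an SVM) is given below; the CNN feature extractor is an assumed helper, and the specific networks, preprocessing, component count, and hyperparameters are illustrative only, not those of the paper.

```python
# Sketch of a CNN-features -> PCA -> SVM grading classifier (illustrative only).
# `extract_cnn_features` is an assumed helper returning a 1-D feature vector per image,
# e.g. concatenated embeddings from one or more pretrained networks.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def train_grading_classifier(images, grades, extract_cnn_features, n_components=128):
    """Fit a PCA + SVM classifier on deep features of retinal images."""
    # 1) Deep features from pretrained CNN backbone(s).
    X = np.stack([extract_cnn_features(img) for img in images])

    # 2) Standardize, reduce dimensionality with PCA, then classify the DR grade
    #    with a kernel SVM; class_weight="balanced" helps with class imbalance.
    clf = make_pipeline(
        StandardScaler(),
        PCA(n_components=n_components),
        SVC(kernel="rbf", class_weight="balanced"),
    )
    clf.fit(X, grades)
    return clf
```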