
    Adaptive waveform inversion: theory

    Conventional full-waveform seismic inversion attempts to find a model of the subsurface that is able to predict observed seismic waveforms exactly; it proceeds by minimizing the difference between the observed and predicted data directly, iterating in a series of linearized steps from an assumed starting model. If this starting model is too far removed from the true model, then this approach leads to a spurious model in which the predicted data are cycle skipped with respect to the observed data. Adaptive waveform inversion (AWI) provides a new form of full-waveform inversion (FWI) that appears to be immune to the problems otherwise generated by cycle skipping. In this method, least-squares convolutional filters are designed that transform the predicted data into the observed data. The inversion problem is formulated such that the subsurface model is iteratively updated to force these Wiener filters toward zero-lag delta functions. As that is achieved, the predicted data evolve toward the observed data, and the assumed model evolves toward the true model. This new method is able to invert synthetic data successfully, beginning from starting models and under conditions for which conventional FWI fails entirely. AWI has a computational cost per iteration similar to that of conventional FWI, and it appears to converge at a similar rate. The principal advantages of this new method are that it allows waveform inversion to begin from less-accurate starting models, does not require the presence of low frequencies in the field data, and appears to provide a better balance between the influence of refracted and reflected arrivals upon the final velocity model. AWI is also able to invert successfully when the assumed source wavelet is severely in error.
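    As an illustration of the mechanism the abstract describes, the following is a minimal single-trace sketch in NumPy: a damped least-squares filter is designed to map a predicted trace onto an observed one, and a normalized lag-weighted penalty measures how far that filter is from a zero-lag delta. The filter length, damping, and Ricker test wavelet are illustrative assumptions, not the authors' implementation.

        import numpy as np

        def wiener_filter(predicted, observed, nlag, eps=1e-3):
            """Damped least-squares filter w (lags -nlag..+nlag) such that
            predicted convolved with w approximates observed (one trace)."""
            n = len(predicted)
            lags = np.arange(-nlag, nlag + 1)
            P = np.zeros((n, len(lags)))   # columns = lagged copies of predicted
            for j, L in enumerate(lags):
                if L >= 0:
                    P[L:, j] = predicted[:n - L]
                else:
                    P[:n + L, j] = predicted[-L:]
            G = P.T @ P
            w = np.linalg.solve(G + eps * np.abs(G).max() * np.eye(len(lags)),
                                P.T @ observed)
            return lags, w

        def awi_misfit(lags, w):
            """Penalty driven toward zero as w collapses to a zero-lag delta:
            lag-weighted filter energy over total filter energy."""
            return np.sum((lags * w) ** 2) / np.sum(w ** 2)

        # Toy check: the penalty shrinks smoothly as the predicted arrival
        # approaches the observed one, even for shifts of several cycles.
        t = np.linspace(0.0, 1.0, 200)
        ricker = lambda t0: (1 - 2 * (np.pi * 8 * (t - t0)) ** 2) \
                            * np.exp(-(np.pi * 8 * (t - t0)) ** 2)
        for t0 in (0.80, 0.65, 0.50):
            lags, w = wiener_filter(ricker(t0), ricker(0.50), nlag=90)
            print(t0, round(awi_misfit(lags, w), 2))

    Because the penalty depends on where the filter's energy sits rather than on sample-by-sample data differences, it keeps decreasing for time shifts that would cycle-skip a conventional least-squares misfit, which is the property the method exploits.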

    Complexity Analysis Of Next-Generation VVC Encoding and Decoding

    While the next-generation video compression standard, Versatile Video Coding (VVC), provides superior compression efficiency, its computational complexity increases dramatically. This paper thoroughly analyzes this complexity for both the encoder and decoder of VVC Test Model 6, quantifying the complexity breakdown for each coding tool and measuring the complexity and memory requirements of VVC encoding/decoding. These extensive analyses are performed for six video sequences at 720p, 1080p, and 2160p, under Low-Delay (LD), Random-Access (RA), and All-Intra (AI) conditions (a total of 320 encodings/decodings). Results indicate that the VVC encoder and decoder are 5x and 1.5x more complex than HEVC in LD, and 31x and 1.8x in AI, respectively. Detailed analysis of the coding tools reveals that in LD, on average, motion estimation tools (53%), transformation and quantization (22%), and entropy coding (7%) dominate the encoding complexity. In decoding, loop filters (30%), motion compensation (20%), and entropy decoding (16%) are the most complex modules. Moreover, the memory bandwidths required for VVC encoding and decoding are measured through memory profiling and are 30x and 3x those of HEVC, respectively. The reported results and insights are a guide for future research and implementations of energy-efficient VVC encoders/decoders.
    Comment: IEEE ICIP 202
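    As a sketch of the per-tool accounting described above, the snippet below folds flat profiler samples into a tool-level percentage breakdown. The function names, tool taxonomy, and timings are hypothetical, not taken from the paper or the VTM code base.

        from collections import Counter

        # Hypothetical mapping from profiled function names to coding tools;
        # neither these names nor the taxonomy come from the paper.
        TOOL_OF = {
            "xMotionEstimation": "motion estimation",
            "xAffineMotionEstimation": "motion estimation",
            "xTrQuant": "transform/quantization",
            "xEncodeCtu": "entropy coding",
            "xLoopFilterCu": "loop filters",
        }

        def complexity_breakdown(samples):
            """Fold flat profiler output (function name -> CPU seconds) into
            a per-tool percentage breakdown, as in a tool-level analysis."""
            per_tool = Counter()
            for func, seconds in samples.items():
                per_tool[TOOL_OF.get(func, "other")] += seconds
            total = sum(per_tool.values())
            return {tool: 100.0 * sec / total for tool, sec in per_tool.items()}

        # Illustrative timings only, not measurements from the paper.
        print(complexity_breakdown({
            "xMotionEstimation": 53.0, "xTrQuant": 22.0,
            "xEncodeCtu": 7.0, "xLoopFilterCu": 4.0, "misc": 14.0,
        }))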

    An E-ELT Case Study: Colour-Magnitude Diagrams of an Old Galaxy in the Virgo Cluster

    One of the key science goals for a diffraction-limited imager on an Extremely Large Telescope (ELT) is the resolution of individual stars down to faint limits in distant galaxies. The aim of this study is to test the proposed capabilities of a multi-conjugate adaptive optics (MCAO) assisted imager working at the diffraction limit, in $IJHK_s$ filters, on a 42m-diameter ELT to carry out accurate stellar photometry in crowded images of an elliptical-like galaxy at the distance of the Virgo cluster. As the basis for realistic simulations we have used the phase A studies of the European-ELT project, including the MICADO imager (Davies & Genzel 2010) and the MAORY MCAO module (Diolaiti 2010). We convolved a complex resolved stellar population with the telescope and instrument performance expectations to create realistic images. We then tested the ability of the currently available photometric packages STARFINDER and DAOPHOT to handle the simulated images. Our results show that deep colour-magnitude diagrams (photometric error $\pm 0.25$ at $I \ge 27.2$, $H \ge 25$, and $K_s \ge 24.6$) of old stellar populations in galaxies at the distance of Virgo are feasible at a maximum surface brightness of $\mu_V \sim 17$ mag/arcsec$^2$ (down to $M_I > -4$ and $M_H \sim M_K > -6$), and significantly deeper (photometric error $\pm 0.25$ at $I \ge 29.3$, $H \ge 26.6$, and $K_s \ge 26.2$) for $\mu_V \sim 21$ mag/arcsec$^2$ (down to $M_I \ge -2$ and $M_H \sim M_K \ge -4.5$). The photometric errors, and thus also the depth of the photometry, should improve with photometry packages specifically designed to adapt to an ELT MCAO point spread function. We also make a simple comparison between these simulations and what can be expected from a single-conjugate adaptive optics feed to MICADO, and from the James Webb Space Telescope.
    Comment: 17 pages, 22 figures, accepted on A&A
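    A toy version of the image-simulation step described above, assuming a Gaussian PSF as a stand-in for the structured MCAO PSF, with an arbitrary magnitude range and flux zero point:

        import numpy as np
        from scipy.signal import fftconvolve

        def simulate_crowded_field(n_stars=3000, size=512, fwhm_pix=3.0,
                                   sky=50.0, seed=1):
            """Toy crowded-field image: point sources scattered on a grid,
            convolved with a PSF, plus sky background and Poisson noise.
            A Gaussian PSF stands in for the real MCAO PSF."""
            rng = np.random.default_rng(seed)
            img = np.zeros((size, size))
            x = rng.uniform(0, size, n_stars)
            y = rng.uniform(0, size, n_stars)
            mags = rng.uniform(20.0, 27.0, n_stars)
            flux = 10.0 ** (-0.4 * (mags - 30.0))   # arbitrary zero point
            np.add.at(img, (y.astype(int), x.astype(int)), flux)
            sigma = fwhm_pix / 2.355                # FWHM -> Gaussian sigma
            ax = np.arange(-16, 17)
            g = np.exp(-ax ** 2 / (2 * sigma ** 2))
            psf = np.outer(g, g)
            psf /= psf.sum()
            img = fftconvolve(img, psf, mode="same") + sky
            return rng.poisson(np.clip(img, 0.0, None)).astype(float)

    Images produced this way can then be run through crowded-field photometry packages such as DAOPHOT, which is the recovery test the study performs with its far more realistic inputs.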

    Cortical Dynamics of Navigation and Steering in Natural Scenes: Motion-Based Object Segmentation, Heading, and Obstacle Avoidance

    Visually guided navigation through a cluttered natural scene is a challenging problem that animals and humans accomplish with ease. The ViSTARS neural model proposes how primates use motion information to segment objects and determine heading for purposes of goal approach and obstacle avoidance in response to video inputs from real and virtual environments. The model produces trajectories similar to those of human navigators. It does so by predicting how computationally complementary processes in cortical areas MT-/MSTv and MT+/MSTd compute object motion for tracking and self-motion for navigation, respectively. The model retina responds to transients in the input stream. Model V1 generates a local speed and direction estimate. This local motion estimate is ambiguous due to the neural aperture problem. Model MT+ interacts with MSTd via an attentive feedback loop to compute accurate heading estimates in MSTd that quantitatively simulate properties of human heading estimation data. Model MT- interacts with MSTv via an attentive feedback loop to compute accurate estimates of the speed, direction, and position of moving objects. This object information is combined with heading information to produce steering decisions wherein goals behave like attractors and obstacles behave like repellers. These steering decisions lead to navigational trajectories that closely match human performance.
    National Science Foundation (SBE-0354378, BCS-0235398); Office of Naval Research (N00014-01-1-0624); National Geospatial Intelligence Agency (NMA201-01-1-2016)
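    The attractor/repeller steering law lends itself to a compact dynamical sketch of the kind used in behavioral-dynamics models of walking: the goal direction pulls the heading while each obstacle pushes it away with a strength that decays with distance. The gains, decay constant, and scenario below are illustrative assumptions, not the ViSTARS parameters.

        import numpy as np

        def steer_step(heading, pos, goal, obstacles,
                       k_goal=2.0, k_obs=3.0, decay=1.0, speed=1.0, dt=0.05):
            """One integration step of heading dynamics: the goal attracts
            the heading, each obstacle repels it with distance-decaying
            strength. All gains are illustrative."""
            bearing = lambda p: np.arctan2(p[1] - pos[1], p[0] - pos[0])
            dh = -k_goal * np.sin(heading - bearing(goal))        # attractor
            for obs in obstacles:
                dist = np.hypot(obs[0] - pos[0], obs[1] - pos[1])
                dh += (k_obs * np.sin(heading - bearing(obs))
                       * np.exp(-decay * dist))                   # repeller
            heading = heading + dt * dh
            pos = pos + dt * speed * np.array([np.cos(heading),
                                               np.sin(heading)])
            return heading, pos

        # Trace a trajectory that detours around an obstacle en route to a goal.
        heading, pos = 0.0, np.array([0.0, 0.0])
        for _ in range(400):
            heading, pos = steer_step(heading, pos, goal=(10.0, 0.0),
                                      obstacles=[(5.0, 0.2)])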