
    Development of test methodology for dynamic mechanical analysis instrumentation

    Dynamic mechanical analysis instrumentation was used to develop specific test methodology for determining engineering parameters of selected materials, especially plastics and elastomers, over a broad range of temperatures in selected environments. The methodology for routine procedures was established with specific attention given to sample geometry, sample size, and mounting techniques. The basic software of the duPont 1090 thermal analyzer was used for data reduction, which simplified the theoretical interpretation. Clamps were developed which allowed 'relative' damping during the cure cycle to be measured for the fiberglass-supported resin. An attempted correlation of fracture energy 'toughness' (or impact strength) with the low-temperature (glassy) relaxation responses for a 'rubber-modified' epoxy system yielded a negative result, because the low-temperature dispersion mode (-80 C) of the modifier coincided with that of the epoxy matrix, making quantitative comparison unrealistic

    Design and quality standards for custom hybrid microcircuits

    A hybrid microcircuit standard was developed after a thorough review of applicable NASA, military, industry, and technical society specifications and standards, and a compilation of comments from technical reviewers throughout the hybrid industry. The draft of the standard submitted to the technical reviewers, the comments received from the reviewers, and the completed standard are discussed

    Blind deconvolution of medical ultrasound images: parametric inverse filtering approach

    ©2007 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or distribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE. This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
    DOI: 10.1109/TIP.2007.910179
    The problem of reconstruction of ultrasound images by means of blind deconvolution has long been recognized as one of the central problems in medical ultrasound imaging. In this paper, this problem is addressed by proposing a blind deconvolution method which is innovative in several ways. In particular, the method is based on parametric inverse filtering, whose parameters are optimized using two-stage processing. At the first stage, some partial information on the point spread function is recovered. Subsequently, this information is used to explicitly constrain the spectral shape of the inverse filter. From this perspective, the proposed methodology can be viewed as a "hybridization" of two standard strategies in blind deconvolution, which are based on either concurrent or successive estimation of the point spread function and the image of interest. Moreover, evidence is provided that the "hybrid" approach can outperform the standard ones in a number of important practical cases. Additionally, the present study introduces a different approach to parameterizing the inverse filter. Specifically, we propose to model the inverse transfer function as a member of a principal shift-invariant subspace. It is shown that such a parameterization results in considerably more stable reconstructions as compared to standard parameterization methods. Finally, it is shown how the inverse filters designed in this way can be used to deconvolve the images in a nonblind manner so as to further improve their quality. The usefulness and practicability of all the introduced innovations are proven in a series of both in silico and in vivo experiments. Finally, it is shown that the proposed deconvolution algorithms are capable of improving the resolution of ultrasound images by factors of 2.24 or 6.52 (as judged by the autocorrelation criterion), depending on the type of regularization method used
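    The inverse-filtering core of such methods can be illustrated with a minimal non-blind sketch: a Tikhonov-damped inverse filter applied in the frequency domain. This assumes a known PSF and a simple scalar regularizer (`eps`), so it is only a toy stand-in for the paper's two-stage parametric scheme, and every name in it is illustrative:

```python
import numpy as np

def regularized_inverse_filter(image, psf, eps=1e-2):
    """Deconvolve `image` with a known `psf` via a regularized inverse
    filter in the frequency domain (Tikhonov-style damping)."""
    H = np.fft.fft2(psf, s=image.shape)          # PSF transfer function
    G = np.fft.fft2(image)                       # observed image spectrum
    H_inv = np.conj(H) / (np.abs(H) ** 2 + eps)  # damped inverse filter
    return np.real(np.fft.ifft2(G * H_inv))

# Toy example: blur a single spike with a 3x3 box PSF, then recover it.
img = np.zeros((8, 8)); img[4, 4] = 1.0
psf = np.ones((3, 3)) / 9.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf, s=img.shape)))
restored = regularized_inverse_filter(blurred, psf, eps=1e-4)
```

    The `eps` term plays the role of the spectral constraint: without it, frequencies where the PSF response is small would be amplified without bound.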

    Dynamic Denoising of Tracking Sequences

    DOI: 10.1109/TIP.2008.920795
    In this paper, we describe an approach to the problem of simultaneously enhancing image sequences and tracking the objects of interest represented by them. The enhancement part of the algorithm is based on Bayesian wavelet denoising, which has been chosen due to its exceptional ability to incorporate diverse a priori information into the process of image recovery. In particular, we demonstrate that, in dynamic settings, useful statistical priors can come both from some reasonable assumptions on the properties of the image to be enhanced as well as from the images that have already been observed before the current scene. Using such priors forms the main contribution of the present paper, which proposes dynamic denoising as a tool for simultaneously enhancing and tracking image sequences. Within the proposed framework, the previous observations of a dynamic scene are employed to enhance its present observation. The mechanism that allows the fusion of information within successive image frames is Bayesian estimation, while the transfer of useful information between the images is governed by a Kalman filter that is used for both prediction and estimation of the dynamics of tracked objects. Therefore, in this methodology, the processes of target tracking and image enhancement "collaborate" in an interlacing manner, rather than being applied separately. The dynamic denoising is demonstrated on several examples of SAR imagery. The results demonstrated in this paper indicate a number of advantages of the proposed dynamic denoising over "static" approaches, in which the tracking images are enhanced independently of each other
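    The Kalman recursion that transfers information between frames can be sketched in its simplest scalar form. This toy version (a constant-state model with assumed noise levels `q` and `r`; all names are ours) only illustrates the predict/update cycle, not the paper's full tracking-plus-denoising machinery:

```python
import numpy as np

def kalman_1d(measurements, q=1e-3, r=1e-1):
    """Minimal scalar Kalman filter: constant-state model with process
    noise variance q and measurement noise variance r."""
    x, p = measurements[0], 1.0       # initial state and its variance
    estimates = []
    for z in measurements:
        p = p + q                     # predict: variance grows by process noise
        k = p / (p + r)               # Kalman gain
        x = x + k * (z - x)           # update with the innovation z - x
        p = (1 - k) * p               # posterior variance shrinks
        estimates.append(x)
    return np.array(estimates)

# Noisy observations of a constant signal of 5.0
rng = np.random.default_rng(0)
z = 5.0 + 0.3 * rng.standard_normal(200)
est = kalman_1d(z)
```

    The same predict/update structure carries over to the vector case, where the prediction step propagates the object dynamics between frames.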

    Image Segmentation Using Active Contours Driven by the Bhattacharyya Gradient Flow

    DOI: 10.1109/TIP.2007.908073
    This paper addresses the problem of image segmentation by means of active contours, whose evolution is driven by the gradient flow derived from an energy functional that is based on the Bhattacharyya distance. In particular, given the values of a photometric variable (or of a set thereof), which is to be used for classifying the image pixels, the active contours are designed to converge to the shape that results in maximal discrepancy between the empirical distributions of the photometric variable inside and outside of the contours. The above discrepancy is measured by means of the Bhattacharyya distance, which proves to be an extremely useful tool for solving the problem at hand. The proposed methodology can be viewed as a generalization of segmentation methods in which active contours maximize the difference between a finite number of empirical moments of the "inside" and "outside" distributions. Furthermore, it is shown that the proposed methodology is very versatile and flexible in the sense that it allows one to easily accommodate a diversity of the image features based on which the segmentation should be performed. As an additional contribution, a method for automatically adjusting the smoothness properties of the empirical distributions is proposed. Such a procedure is crucial in situations when the number of data samples (supporting a certain segmentation class) varies considerably in the course of the evolution of the active contour. In this case, the smoothness properties of the empirical distributions have to be properly adjusted to avoid either over- or underestimation artifacts. Finally, a number of relevant segmentation results are demonstrated and some further research directions are discussed
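    For reference, the Bhattacharyya distance between two discrete distributions, which the contour evolution maximizes between the "inside" and "outside" histograms, can be computed as follows (an illustrative sketch; the function name and the `eps` guard are ours):

```python
import numpy as np

def bhattacharyya_distance(p, q, eps=1e-12):
    """Bhattacharyya distance D = -ln(sum(sqrt(p*q))) between two
    discrete probability distributions (normalized histograms)."""
    p = np.asarray(p, float); q = np.asarray(q, float)
    p = p / p.sum(); q = q / q.sum()
    bc = np.sum(np.sqrt(p * q))   # Bhattacharyya coefficient, in [0, 1]
    return -np.log(bc + eps)      # eps guards against log(0) for disjoint p, q

# Identical distributions give distance ~0; disjoint ones a large distance.
uniform = np.ones(4) / 4
print(bhattacharyya_distance(uniform, uniform))            # ≈ 0
print(bhattacharyya_distance([1, 0, 0, 0], [0, 0, 0, 1]))  # large
```

    Segmentation then amounts to evolving the contour so that this distance between the two empirical histograms grows.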

    Optical Dipole Trapping beyond Rotating Wave Approximation: The case of Large Detuning

    We show that the inclusion of counter-rotating terms, usually dropped in evaluations of the interaction of an electric dipole of a two-level atom with the electromagnetic field, leads to significant modifications of the trapping potential in the case of large detuning. The results are shown to be in excellent numerical agreement with recent experimental findings for the case of modes with a Laguerre-Gauss spatial profile.
    Comment: 13 pages, 2 figures
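    To see the size of the effect, note that in the standard two-level dipole-potential expression the counter-rotating term adds 1/(ω0 + ω) to the RWA's 1/(ω0 − ω) detuning denominator. The toy calculation below (a textbook-style illustration, not the paper's computation; the frequencies are only representative of a far-detuned trap) shows this is a double-digit-percentage correction:

```python
# Ratio of the full (beyond-RWA) dipole potential to the RWA-only one
# for a two-level atom: the counter-rotating term contributes 1/(w0 + w)
# in addition to the usual 1/(w0 - w) term. Illustrative numbers only.
def potential_ratio(w, w0):
    rwa = 1.0 / (w0 - w)           # RWA detuning term only
    full = rwa + 1.0 / (w0 + w)    # plus the counter-rotating term
    return full / rwa

# Example: a ~384 THz atomic transition trapped with ~282 THz (1064 nm) light
print(potential_ratio(282.0, 384.0))   # ≈ 1.15, i.e. a ~15% correction
```

    Closer to resonance the ratio tends to 1, which is why the counter-rotating term is negligible at small detuning but not in the large-detuning regime the paper addresses.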

    Critical evaluation of the computational methods used in the forced polymer translocation

    In forced polymer translocation, the average translocation time τ scales with pore force f and polymer length N as τ ~ f^{-1} N^β. We demonstrate that an artifact in the Metropolis Monte Carlo method, resulting in breakage of the force scaling at large f, may be responsible for some of the controversies between different computationally obtained results and also between computational and experimental results. Using Langevin dynamics simulations we show that the scaling exponent β ≤ 1 + ν is not universal, but depends on f. Moreover, we show that forced translocation can be described by a relatively simple force-balance argument and that β arises solely from the initial polymer configuration
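    Extracting a scaling exponent such as β from simulation data is typically a log-log fit; a minimal sketch with synthetic (not the paper's) data and illustrative names:

```python
import numpy as np

def fit_scaling_exponent(N, tau):
    """Estimate beta in tau ~ N**beta by least squares in log-log space."""
    slope, _ = np.polyfit(np.log(N), np.log(tau), 1)
    return slope

# Synthetic data following tau = f**-1 * N**beta with beta = 1.5, f = 2
N = np.array([32, 64, 128, 256, 512], float)
tau = (1 / 2.0) * N ** 1.5
print(fit_scaling_exponent(N, tau))   # ≈ 1.5
```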

    Forelimb muscle and joint actions in Archosauria: insights from Crocodylus johnstoni (Pseudosuchia) and Mussaurus patagonicus (Sauropodomorpha)

    Many of the major locomotor transitions during the evolution of Archosauria, the lineage including crocodiles and birds as well as extinct Dinosauria, were shifts from quadrupedalism to bipedalism (and vice versa). These occurred within a continuum between more sprawling and erect modes of locomotion and involved drastic changes of limb anatomy and function in several lineages, including sauropodomorph dinosaurs. We present biomechanical computer models of two locomotor extremes within Archosauria in an analysis of joint ranges of motion and the moment arms of the major forelimb muscles, in order to quantify biomechanical differences between the more sprawling, pseudosuchian (represented by the crocodile Crocodylus johnstoni) and more erect, dinosaurian (represented by the sauropodomorph Mussaurus patagonicus) modes of forelimb function. We compare these two locomotor extremes in terms of the reconstructed musculoskeletal anatomy, ranges of motion of the forelimb joints and the moment arm patterns of muscles across those ranges of joint motion. We reconstructed the three-dimensional paths of 30 muscles acting around the shoulder, elbow and wrist joints. We explicitly evaluate how forelimb joint mobility and muscle actions may have changed with postural and anatomical alterations from basal archosaurs to early sauropodomorphs. We thus evaluate in which ways forelimb posture was correlated with muscle leverage, and how such differences fit into a broader evolutionary context (i.e. the transition from sprawling quadrupedalism to erect bipedalism and then the shift to graviportal quadrupedalism). Our analysis reveals major differences of muscle actions between the more sprawling and erect models at the shoulder joint. These differences are related not only to the articular surfaces but also to the orientation of the scapula, in which extension/flexion movements in Crocodylus (e.g. protraction of the humerus) correspond to elevation/depression in Mussaurus.
    Muscle action is highly influenced by limb posture, more so than by morphology. Habitual quadrupedalism in Mussaurus is not supported by our analysis of joint range of motion, which indicates that glenohumeral protraction was severely restricted. Additionally, some active pronation of the manus may have been possible in Mussaurus, allowing semi-pronation by a rearrangement of the whole antebrachium (not the radius against the ulna, as previously thought) via long-axis rotation at the elbow joint. However, the muscles acting around this joint to actively pronate it may have been too weak to drive or maintain such orientations, as opposed to a neutral position in between pronation and supination. Regardless, the origin of quadrupedalism in Sauropoda is not only linked to manus pronation but also to multiple shifts of forelimb morphology, allowing greater flexion movements of the glenohumeral joint and a more columnar forelimb posture

    A New Waveform Consistency Test for Gravitational Wave Inspiral Searches

    Searches for binary inspiral signals in data collected by interferometric gravitational wave detectors utilize matched filtering techniques. Although matched filtering is optimal in the case of stationary Gaussian noise, data from real detectors often contain "glitches" and episodes of excess noise which cause filter outputs to ring strongly. We review the standard \chi^2 statistic which is used to test whether the filter output has appropriate contributions from several different frequency bands. We then propose a new type of waveform consistency test which is based on the time history of the filter output. We apply one such test to the data from the first LIGO science run and show that it cleanly distinguishes between true inspiral waveforms and large-amplitude false signals which managed to pass the standard \chi^2 test.
    Comment: 10 pages, 6 figures, submitted to Classical and Quantum Gravity for the proceedings of the Eighth Gravitational Wave Data Analysis Workshop (GWDAW-8)
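    The standard \chi^2 statistic reviewed above checks that the SNR accumulates evenly across p frequency bands chosen so that a true signal contributes equally to each. A minimal sketch of that statistic (an illustrative simplification of the Allen \chi^2 veto, not the new time-history test proposed here; names are ours):

```python
import numpy as np

def chisq_band_test(band_snrs):
    """Simplified band chi-squared veto: with p frequency bands chosen so
    a true template signal contributes equal SNR per band, compute
    chi^2 = p * sum((z_i - z/p)^2), where z = sum(z_i). Values near zero
    indicate consistency with the template; glitches, which concentrate
    power in a few bands, give large values."""
    z = np.asarray(band_snrs, float)
    p = len(z)
    total = z.sum()
    return p * np.sum((z - total / p) ** 2)

# A template-like event splits its SNR evenly; a glitch concentrates it.
print(chisq_band_test([2.0, 2.0, 2.0, 2.0]))   # 0.0: consistent with template
print(chisq_band_test([8.0, 0.0, 0.0, 0.0]))   # 192.0: glitch-like
```

    Both toy events have the same total SNR of 8, which is exactly why a consistency test beyond the matched-filter output itself is needed.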