
    Bioinspired low-frequency material characterisation

New coded signals, transmitted by high-sensitivity broadband transducers in the 40–200 kHz range, allow subwavelength material discrimination and thickness determination of polypropylene, polyvinylchloride, and brass samples. Frequency-domain spectra enable simultaneous measurement of material properties, including longitudinal sound velocity and the attenuation constant, as well as thickness. Laboratory test measurements agree well with model results, with sound velocity prediction errors of less than 1% and thickness discrimination of at least wavelength/15. The resolution of these measurements has previously only been matched by methods that utilise higher frequencies. The ability to obtain the same resolution using low frequencies has many advantages, particularly when dealing with highly attenuating materials. This approach differs significantly from past biomimetic approaches, where actual or simulated animal signals have been used, and consequently has the potential for application in a range of fields where both improved penetration and high resolution are required, such as nondestructive testing and evaluation, geophysics, and medical physics.
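The reported ability to recover both sound velocity and thickness from frequency-domain spectra rests on the standard through-thickness resonance relation for a plate, in which successive half-wave resonances are spaced by Δf = c/(2d). The following is a minimal Python sketch of that relation only, using hypothetical resonance-peak frequencies and an assumed nominal longitudinal velocity for polypropylene; it is not the authors' coded-signal processing chain.

```python
import numpy as np

# Hypothetical through-thickness resonance peaks picked from a frequency-domain
# spectrum, in Hz. For a plate, successive half-wave resonances are spaced by
# delta_f = c / (2 * d).
peak_freqs = np.array([55e3, 110e3, 165e3])   # assumed peak positions
c_longitudinal = 2.6e3                        # assumed nominal velocity for polypropylene, m/s

# Estimate the mean resonance spacing from the picked peaks.
delta_f = np.mean(np.diff(peak_freqs))

# Thickness follows from d = c / (2 * delta_f); with d known instead, the same
# relation yields the longitudinal sound velocity.
thickness = c_longitudinal / (2.0 * delta_f)
print(f"estimated thickness: {thickness * 1e3:.2f} mm")
```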

    Balanced initialisation techniques for coupled ocean-atmosphere models

Interactive dynamical ocean and atmosphere models are commonly used for predictions on seasonal timescales, but initialisation of such systems is problematic. In this thesis, idealised coupled models of the El Niño Southern Oscillation phenomenon are used to explore potential new initialisation methods. The basic ENSO model is derived using the two-strip concept for tropical ocean dynamics, together with a simple empirical atmosphere. A hierarchy of models is built, beginning with a basic recharge oscillator type model and culminating in a general n-box model. Each model is treated as a dynamical system. An important step is the 10-box model, in which the seasonal cycle is introduced as an extension of the phase space by two dimensions, paving the way for more complex and occasionally chaotic behaviour. For the simplest 2-box model, analytic approximate solutions are described and used to investigate the parameter dependence of regimes of behaviour. Model space is explored statistically, and parametric instability is found for the 10-box and larger versions: while these are by no means perfect simulations of the real-world phenomenon, some regimes are found which have features similar to those observed. Initialisation is performed on a system from the n-box model (with n = 94), using dimensional reduction via two separate methods: a linear singular value decomposition approach and a nonlinear slow manifold (approximate inertial manifold) type reduction. The influence of the initialisation methods on predictive skill is tested using a perfect model approach. Data from a model integration are treated as observations, which are perturbed randomly on large and small spatial scales and used as initial states for both reduced and full model forecasts. Integration of the reduced models provides a continuous initialisation process, ensuring orbits remain close to the attractor for the duration of the forecasts. From sets of ensemble forecasts, statistical measures of skill are calculated. Results are found to depend on the dimensionality of the reduced models and the type of initial perturbations used; model reduction is found to yield a slight improvement in skill over the full model in each case, as well as a significant increase in the maximum timestep.
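The "basic recharge oscillator type model" at the bottom of the hierarchy is, in its simplest linear form, a two-variable system for an SST anomaly and a thermocline-depth anomaly. The sketch below integrates a generic recharge-oscillator-type system with purely illustrative coefficients; it is a schematic stand-in, not the thesis's two-strip formulation or its parameter values.

```python
import numpy as np

def recharge_oscillator(state, R=-0.1, gamma=0.75, alpha=0.5, r=0.25):
    """Generic recharge-oscillator-type tendencies (illustrative parameters).

    T : equatorial SST anomaly, h : thermocline-depth anomaly.
    dT/dt = R*T + gamma*h,   dh/dt = -r*h - alpha*T
    """
    T, h = state
    return np.array([R * T + gamma * h, -r * h - alpha * T])

def integrate(state, dt=0.05, n_steps=2000):
    """Fixed-step RK4 integration of the 2-variable model."""
    traj = [state]
    for _ in range(n_steps):
        k1 = recharge_oscillator(state)
        k2 = recharge_oscillator(state + 0.5 * dt * k1)
        k3 = recharge_oscillator(state + 0.5 * dt * k2)
        k4 = recharge_oscillator(state + dt * k3)
        state = state + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
        traj.append(state)
    return np.array(traj)

# With these coefficients the linear system has complex eigenvalues with
# negative real part, i.e. a damped ENSO-like oscillation.
traj = integrate(np.array([1.0, 0.0]))
print("final (T, h):", traj[-1])
```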

    Relating software requirements and architectures using problem frames

Problem frames provide a means of analyzing and decomposing problems. They emphasise the world outside of the computer, helping the developer to focus on the problem domain, instead of drifting into inventing solutions. However, even modestly complex problems can force us into detailed consideration of the architecture of the solution. This is counter to the intention of the problem frames approach, which is to delay consideration of the solution space until a good understanding of the problem is gained. We therefore extend problem frames, allowing architectural structures, services and artifacts to be considered as part of the problem domain. Through a case study, we show how this extension enhances the applicability of problem frames in permitting an architecture-based approach to software development. We conclude that, through our extension, the applicability of problem frames is extended to include domains with existing architectural support.

    Helicity Analysis of Semileptonic Hyperon Decays Including Lepton Mass Effects

Using the helicity method we derive complete formulas for the joint angular decay distributions occurring in semileptonic hyperon decays, including lepton mass and polarization effects. Compared to the traditional covariant calculation, the helicity method allows one to organize the calculation of the angular decay distributions in a very compact and efficient way. In the helicity method the angular analysis is of cascade type, i.e. each decay in the decay chain is analyzed in the respective rest system of that particle. Such an approach is ideally suited as input for a Monte Carlo event generation program. As a specific example we take the decay $\Xi^0 \to \Sigma^+ + l^- + \bar{\nu}_l$ ($l^- = e^-, \mu^-$) followed by the nonleptonic decay $\Sigma^+ \to p + \pi^0$, for which we show a few examples of decay distributions generated from a Monte Carlo program based on the formulas presented in this paper. All the results of this paper are also applicable to the semileptonic and nonleptonic decays of ground-state charm and bottom baryons, and to the decays of the top quark.

Comment: Published version. 40 pages, 11 figures included in the text. Typos corrected, comments added, references added and updated.
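As an illustration of the kind of Monte Carlo input the paper has in mind, the sketch below samples a single polar decay angle from the generic distribution W(cos θ) ∝ 1 + α·cos θ that governs a parity-violating two-body hyperon decay such as $\Sigma^+ \to p + \pi^0$, using accept-reject sampling. The effective asymmetry value is illustrative only; the full cascade distributions derived in the paper involve several correlated angles and the lepton-mass terms.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def sample_cos_theta(alpha_eff, n_events):
    """Accept-reject sampling of cos(theta) from W(c) ∝ 1 + alpha_eff * c.

    alpha_eff stands in for the product of the decay asymmetry parameter and
    the parent polarization (|alpha_eff| <= 1); it is an illustrative
    one-angle proxy for the cascade-type distributions derived in the paper.
    """
    samples = []
    w_max = 1.0 + abs(alpha_eff)
    while len(samples) < n_events:
        c = rng.uniform(-1.0, 1.0)
        if rng.uniform(0.0, w_max) < 1.0 + alpha_eff * c:
            samples.append(c)
    return np.array(samples)

cos_theta = sample_cos_theta(alpha_eff=-0.5, n_events=50_000)
# Forward-backward asymmetry of the generated sample; for W ∝ 1 + a*c it
# should approach a/2.
afb = (np.sum(cos_theta > 0) - np.sum(cos_theta < 0)) / len(cos_theta)
print(f"generated A_FB = {afb:.3f}  (expected {-0.5 / 2:.3f})")
```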

    Integral-based filtering of continuous glucose sensor measurements for glycaemic control in critical care

Hyperglycaemia is prevalent in critical illness and increases the risk of further complications and mortality, while tight control can reduce mortality by up to 43%. Adaptive control methods are capable of highly accurate, targeted blood glucose regulation using only the limited number of manual measurements that patient discomfort and labour intensity allow. Therefore, the option to obtain greater data density using emerging continuous glucose sensing devices is attractive. However, the few such systems currently available can have errors in excess of 20-30%. In contrast, typical bedside testing kits have errors of approximately 7-10%. Despite the greater measurement frequency, these larger errors significantly impact the resulting glucose and patient-specific parameter estimates, and thus the control actions determined, creating an important safety and performance issue. This paper models the impact of the Continuous Glucose Monitoring System (CGMS, Medtronic, Northridge, CA) on model-based parameter identification and glucose prediction. An integral-based fitting and filtering method is developed to reduce the effect of these errors. A noise model is developed based on CGMS data reported in the literature, and is slightly conservative, with a mean Clarke Error Grid (CEG) correlation of R=0.81 (range: 0.68-0.88) compared to a reported value of R=0.82 in a critical care study. Using 17 virtual patient profiles developed from retrospective clinical data, this noise model was used to test the methods developed. Monte Carlo simulation for each patient resulted in an average absolute one-hour glucose prediction error of 6.20% (range: 4.97-8.06%) with an average standard deviation per patient of 5.22% (range: 3.26-8.55%). Note that all the methods and results are generalisable to similar applications outside of critical care, such as less acute wards and eventually ambulatory individuals. Clinically, the results show one possible computational method for managing the larger errors encountered in emerging continuous blood glucose sensors, thus enabling their more effective use in clinical glucose regulation studies.
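Integral-based fitting works by integrating the model equations over the measurement window, so that identifying patient-specific parameters becomes a linear least-squares problem in integrals of the (noisy) glucose signal, with the integration itself acting as a low-pass filter. The sketch below applies that idea to a deliberately simplified one-compartment glucose model with a hypothetical insulin-sensitivity term; the model form, parameter names, and data are illustrative and are not the paper's clinically validated system model.

```python
import numpy as np

# Simplified illustrative model:  dG/dt = -p_G * G - S_I * G * I(t) + P(t)
# Integrating from t0 to t gives
#   G(t) - G(t0) = -p_G * int(G) - S_I * int(G * I) + int(P),
# which is linear in the unknown parameters (p_G, S_I).

rng = np.random.default_rng(0)
t = np.linspace(0.0, 180.0, 181)               # minutes
I = 20.0 + 10.0 * np.exp(-t / 60.0)            # illustrative insulin profile
P = 1.5 * np.exp(-t / 90.0)                    # illustrative glucose input

# Generate "true" data with p_G = 0.01, S_I = 0.0005, then add ~5% sensor noise.
G = np.empty_like(t)
G[0] = 8.0
for k in range(len(t) - 1):
    dt = t[k + 1] - t[k]
    dG = -0.01 * G[k] - 0.0005 * G[k] * I[k] + P[k]
    G[k + 1] = G[k] + dt * dG
G_noisy = G * (1.0 + 0.05 * rng.standard_normal(len(t)))

def cumulative_integral(y, x):
    """Trapezoidal cumulative integral, same length as y, starting at zero."""
    return np.concatenate(([0.0], np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(x))))

# Integral-based identification: one linear equation per sample time.
A = np.column_stack([-cumulative_integral(G_noisy, t),
                     -cumulative_integral(G_noisy * I, t)])
b = (G_noisy - G_noisy[0]) - cumulative_integral(P, t)
(p_G_hat, S_I_hat), *_ = np.linalg.lstsq(A, b, rcond=None)
print(f"identified p_G ≈ {p_G_hat:.4f}, S_I ≈ {S_I_hat:.6f}  (true: 0.0100, 0.000500)")
```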

    Perturbative spectrum of Trapped Weakly Interacting Bosons in Two Dimensions

We study a trapped Bose-Einstein condensate under rotation in the limit of weak, translationally and rotationally invariant two-particle interactions. We use the perturbation-theory approach (the large-N expansion) to calculate the ground-state energy and the excitation spectrum in the asymptotic limit where the total number of particles N goes to infinity while the total angular momentum L is kept finite. Calculating the probabilities of different configurations of angular momentum in the exact eigenstates gives us a clear view of the physical content of the excitations. We briefly discuss the case of repulsive contact interaction.

Comment: Revtex, 10 pages, 1 table, to appear in Phys. Rev.
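In the standard leading-order picture for this weak-interaction limit, the low-lying spectrum at fixed total angular momentum L follows from diagonalizing the two-body interaction within the degenerate manifold of lowest-Landau-level-like oscillator states carrying total angular momentum L. The sketch below does that numerically for a small number of bosons with a repulsive contact interaction in a 2D harmonic trap (oscillator units, coupling g = 1); it is a brute-force illustration of that degenerate perturbation step under the stated assumptions, not the paper's large-N analytic expansion. As a check, the single L = 0 state has interaction energy g·N(N−1)/(4π).

```python
import itertools
import math
import numpy as np

def v_elem(m1, m2, m3, m4, g=1.0):
    """Contact-interaction matrix element between 2D LLL orbitals phi_m ∝ z^m e^{-|z|^2/2}:
    <m1 m2| g δ²(r1-r2) |m3 m4> = δ_{m1+m2, m3+m4} * M! / (2^(M+1) π sqrt(m1! m2! m3! m4!)).
    """
    if m1 + m2 != m3 + m4:
        return 0.0
    M = m1 + m2
    return g * math.factorial(M) / (
        2 ** (M + 1) * math.pi
        * math.sqrt(math.factorial(m1) * math.factorial(m2)
                    * math.factorial(m3) * math.factorial(m4)))

def basis(N, L):
    """Bosonic occupations (n_0, ..., n_L) with sum(n) = N and sum(m * n_m) = L."""
    return [occ for occ in itertools.product(range(N + 1), repeat=L + 1)
            if sum(occ) == N and sum(m * n for m, n in enumerate(occ)) == L]

def hamiltonian(N, L, g=1.0):
    """H = (1/2) sum V(m1,m2,m3,m4) a†_{m1} a†_{m2} a_{m4} a_{m3} in the fixed-L Fock basis."""
    states = basis(N, L)
    index = {s: i for i, s in enumerate(states)}
    H = np.zeros((len(states), len(states)))
    for col, occ in enumerate(states):
        for m3 in range(L + 1):
            for m4 in range(L + 1):
                n = list(occ)
                if n[m3] == 0:
                    continue
                amp = math.sqrt(n[m3]); n[m3] -= 1          # a_{m3}
                if n[m4] == 0:
                    continue
                amp *= math.sqrt(n[m4]); n[m4] -= 1         # a_{m4}
                for m1 in range(L + 1):
                    m2 = m3 + m4 - m1
                    if not 0 <= m2 <= L:
                        continue
                    v = v_elem(m1, m2, m3, m4, g)
                    if v == 0.0:
                        continue
                    n2 = list(n)
                    amp2 = amp * math.sqrt(n2[m2] + 1); n2[m2] += 1   # a†_{m2}
                    amp2 *= math.sqrt(n2[m1] + 1); n2[m1] += 1        # a†_{m1}
                    H[index[tuple(n2)], col] += 0.5 * v * amp2
    return H

N = 4
for L in range(N + 1):
    E = np.linalg.eigvalsh(hamiltonian(N, L))
    print(f"L={L}: lowest interaction energies {np.round(E[:3], 4)}")
# Check: for L=0 the single state has energy g*N*(N-1)/(4*pi) ≈ 0.9549 for N=4.
```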

Triple oxygen isotopic composition of the high-3He/4He mantle

Measurements of Xe isotope ratios in ocean island basalts (OIB) suggest that Earth’s mantle accreted heterogeneously, and that compositional remnants of accretion are sampled by modern, high-3He/4He OIB associated with the Icelandic and Samoan plumes. If so, the high-3He/4He source may also have an oxygen isotopic composition distinct from the rest of the mantle. Here, we test whether the major elements of the high-3He/4He source preserve any evidence of heterogeneous accretion using measurements of three oxygen isotopes on olivine from a variety of high-3He/4He OIB locations. To high precision, the Δ17O values of high-3He/4He olivines from Hawaii, Pitcairn, Baffin Island and Samoa are indistinguishable from those of bulk mantle olivine (Δ17O(bulk mantle) − Δ17O(high-3He/4He olivine) = −0.002 ± 0.004‰ (2 × SEM)). Thus, there is no resolvable oxygen isotope evidence for heterogeneous accretion in the high-3He/4He source. Modelling of mixing processes indicates that if an early-forming, oxygen-isotope-distinct mantle did exist, either the anomaly was extremely small, or it was homogenised away by later mantle convection. The δ18O values of the olivines with the highest 3He/4He ratios from a variety of OIB locations have a relatively uniform composition (∼5‰), intermediate between values associated with the depleted MORB mantle and the average mantle. Similarly, δ18O values of olivine from high-3He/4He OIB correlate with radiogenic isotope ratios of He, Sr, and Nd. Combined, this suggests that magmatic oxygen is sourced from the same mantle as other, more incompatible elements and that the intermediate δ18O value is a feature of the high-3He/4He mantle source. The processes responsible for the δ18O signature of the high-3He/4He mantle are not certain, but δ18O–87Sr/86Sr correlations indicate that it may be connected to a predominance of a HIMU-like (high U/Pb) component or other moderate-δ18O components recycled into the high-3He/4He source.
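Δ17O quantifies the deviation of a sample's δ17O from a reference mass-dependent fractionation relation through δ18O. A minimal sketch of that calculation follows, using the common logarithmic (linearised) definition and an illustrative reference slope λ = 0.528; the slope actually adopted for mantle olivine comparisons may differ, and the input values below are made up for demonstration.

```python
import math

def cap_delta_17O(delta17O, delta18O, lam=0.528):
    """Δ'17O (per mil) from measured δ17O and δ18O (per mil).

    Uses the logarithmic definition
        Δ'17O = δ'17O − λ · δ'18O,  with  δ' = 1000 · ln(1 + δ/1000).
    The reference slope λ = 0.528 is an illustrative, commonly quoted value.
    """
    d17_prime = 1000.0 * math.log(1.0 + delta17O / 1000.0)
    d18_prime = 1000.0 * math.log(1.0 + delta18O / 1000.0)
    return d17_prime - lam * d18_prime

# Hypothetical olivine measurement (values are not from the paper).
print(f"Δ'17O = {cap_delta_17O(delta17O=2.70, delta18O=5.20):.4f} ‰")
```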

    Relaxation in homogeneous and non-homogeneous polarized systems. A mesoscopic entropy approach

The dynamics of a degree of freedom associated with an axial vector in contact with a heat bath is described by means of a probability distribution function obeying a Fokker-Planck equation. The equation is derived using mesoscopic non-equilibrium thermodynamics and permits the formulation of a dynamical theory for the axial degree of freedom (orientation, polarization) and its associated order parameter. The theory is used to describe dielectric relaxation in homogeneous and non-homogeneous systems in the presence of strong electric fields. In the homogeneous case, we obtain the dependence of the relaxation time on the external field as observed in experiments. In the non-homogeneous case, our model accounts for the two observed maxima of the dielectric loss, giving a good quantitative description of experimental data at all frequencies, especially for systems with low molecular mass.

Comment: 19 pages, 3 tables
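For orientation, the two maxima of the dielectric loss discussed for the non-homogeneous case can be pictured as the superposition of two Debye-type relaxation processes with well-separated relaxation times. The sketch below evaluates that textbook two-process loss spectrum with illustrative parameters; it is a schematic picture only, not the mesoscopic Fokker-Planck model derived in the paper.

```python
import numpy as np

def debye_loss(omega, delta_eps, tau):
    """Dielectric loss ε''(ω) of a single Debye process."""
    return delta_eps * omega * tau / (1.0 + (omega * tau) ** 2)

# Illustrative parameters: two processes with well-separated relaxation times.
omega = np.logspace(0, 8, 9)                 # rad/s, coarse grid for printing
processes = [(3.0, 1.0e-3), (1.5, 1.0e-6)]   # (Δε_i, τ_i)

loss = sum(debye_loss(omega, d, t) for d, t in processes)
for w, e2 in zip(omega, loss):
    print(f"omega = {w:9.1e} rad/s   eps'' = {e2:6.3f}")
# Each Debye term peaks at ω = 1/τ_i, so the summed spectrum shows two maxima.
```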

    Lines, Circles, Planes and Spheres

Let $S$ be a set of $n$ points in $\mathbb{R}^3$, no three collinear and not all coplanar. If at most $n-k$ are coplanar and $n$ is sufficiently large, the total number of planes determined is at least $1 + k\binom{n-k}{2} - \binom{k}{2}\left(\frac{n-k}{2}\right)$. For similar conditions and sufficiently large $n$ (inspired by the work of P. D. T. A. Elliott in \cite{Ell67}), we also show that the number of spheres determined by $n$ points is at least $1 + \binom{n-1}{3} - t_3^{orchard}(n-1)$, and this bound is best possible under its hypothesis. (By $t_3^{orchard}(n)$ we denote the maximum number of three-point lines attainable by a configuration of $n$ points, no four collinear, in the plane, i.e., the classic Orchard Problem.) New lower bounds are also given for both lines and circles.

Comment: 37 pages
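As a quick numerical illustration, the sketch below evaluates the quoted plane bound $1 + k\binom{n-k}{2} - \binom{k}{2}\left(\frac{n-k}{2}\right)$ for a few arbitrary $(n, k)$ pairs; the chosen values say nothing about how large $n$ must be for the theorem's hypothesis to hold.

```python
from math import comb

def plane_lower_bound(n, k):
    """Evaluate the quoted lower bound 1 + k*C(n-k, 2) - C(k, 2)*((n-k)/2)."""
    return 1 + k * comb(n - k, 2) - comb(k, 2) * (n - k) / 2

# Illustrative values only; the theorem itself requires n sufficiently large.
for n, k in [(100, 2), (100, 5), (1000, 10)]:
    print(f"n={n:5d}, k={k:3d}: at least {plane_lower_bound(n, k):,.0f} planes")
```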