
    Velocity and structural model of the Lower Tagus Basin according to the study of environmental seismic noise

    Throughout its history, the Lower Tagus Valley (LTV) region has been shaken by several earthquakes, some of them produced by large ruptures of offshore structures located southwest of the Portuguese coastline. Among these are the Lisbon earthquake of 1 November 1755 (M∼8.5–8.7) and moderate earthquakes produced by local sources, such as the 1344 (M6.0), 1531 (M7.1) and 1909 (M6.0) events. Previous simulations [1] have shown high velocity amplification in the region. The model used in those simulations was upgraded from low to high resolution using all the newly available geophysical and geotechnical data for the area (seismic reflection, aeromagnetic, gravimetric, deep wells and geological outcrops) [2]. To confirm this model in the areas where it was derived by potential-field methods, we use broadband ambient noise measurements collected at about 200 points along seven profiles across the LTV basin, six perpendicular and one parallel to the basin axis. We applied the horizontal-to-vertical (H/V) spectral ratio method [3] to the seismic noise profiles in order to estimate the distribution of amplification in the basin. The H/V curves obtained reveal the existence of two low-frequency peaks centered on 0.2 and 1 Hz [4]. These peaks are strongly related to the thicknesses of the Cenozoic and alluvial sediments. The velocity model obtained by inversion of the H/V curves is in good agreement with borehole data and with results obtained using seismic reflection and gravimetric methods. However, the aeromagnetic data overestimate the depth of the base of the Cenozoic in the areas where it directly overlies the Paleozoic basement, which we attribute either to the existence of Mesozoic units or to magnetic susceptibilities higher than expected for the Paleozoic.
    References: [1] Bezzeghoud, M., Borges, J.F., Caldeira, M. (2011). Ground motion simulations of the SW Iberia margin: rupture directivity and earth structure effects. Natural Hazards, pages 1–17. doi:10.1007/s11069-011-9925-2. [2] Torres, R.J.G. (2012). Modelo de velocidade da Bacia do Vale do Tejo: uma abordagem baseada no estudo do ruído sísmico ambiental. Master Thesis, Universidade de Évora, 83 pp. [3] Nakamura, Y. (1989). A method for dynamic characteristics estimation of subsurface using microtremor on the ground surface. Quarterly Report of RTRI, v. 30, p. 25–33. [4] Furtado, J.A. (2010). Confirmação do modelo da estrutura 3D do Vale Inferior do Tejo a partir de dados de ruído sísmico ambiente. Master Thesis, Universidade de Évora, 136 pp.
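The H/V method applied above takes the ratio of the horizontal to the vertical amplitude spectra of ambient noise, and a resonance peak appears where soft sediments amplify horizontal motion. A minimal sketch, assuming three-component noise records as NumPy arrays (the function interface and the synthetic 1 Hz resonance are illustrative, not data from the study):

```python
import numpy as np

def hv_spectral_ratio(north, east, vertical, fs, smooth=9):
    """Nakamura-style H/V curve from three-component noise records.

    The horizontal spectrum is the quadratic mean of the N and E
    amplitude spectra; both spectra are lightly smoothed before the
    ratio is taken, as is standard for noisy microtremor data.
    """
    freqs = np.fft.rfftfreq(len(vertical), d=1.0 / fs)
    n = np.abs(np.fft.rfft(north))
    e = np.abs(np.fft.rfft(east))
    v = np.abs(np.fft.rfft(vertical))
    h = np.sqrt(0.5 * (n**2 + e**2))
    kernel = np.ones(smooth) / smooth
    h = np.convolve(h, kernel, mode="same")
    v = np.convolve(v, kernel, mode="same")
    return freqs[1:], h[1:] / v[1:]          # drop the DC bin

# Synthetic check: the horizontals carry a 1 Hz resonance that is weak
# on the vertical, so the H/V curve should peak near 1 Hz.
fs = 50.0
t = np.arange(0, 120, 1.0 / fs)
rng = np.random.default_rng(0)
north = np.sin(2 * np.pi * t) + 0.1 * rng.normal(size=t.size)
east = np.cos(2 * np.pi * t) + 0.1 * rng.normal(size=t.size)
vertical = 0.1 * np.sin(2 * np.pi * t) + 0.1 * rng.normal(size=t.size)
freqs, hv = hv_spectral_ratio(north, east, vertical, fs)
print(f"H/V peak at {freqs[np.argmax(hv)]:.2f} Hz")
```

Inverting such curves for a velocity model, as done in the study, additionally requires a forward model of the layered subsurface; the sketch only recovers the peak frequencies.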

    Order-of-magnitude speedup for steady states and traveling waves via Stokes preconditioning in Channelflow and Openpipeflow

    Steady states and traveling waves play a fundamental role in understanding hydrodynamic problems. Even when unstable, these states provide the bifurcation-theoretic explanation for the origin of the observed states. In turbulent wall-bounded shear flows, these states have been hypothesized to be saddle points organizing the trajectories within a chaotic attractor. Such states must be computed with Newton's method or one of its generalizations, since time integration cannot converge to unstable equilibria. The bottleneck is the solution of linear systems involving the Jacobian of the Navier-Stokes or Boussinesq equations. Originally such computations were carried out by constructing and directly inverting the Jacobian, but this is infeasible for the matrices arising from three-dimensional hydrodynamic configurations in large domains. A popular alternative is to seek states that are invariant under numerical time integration. Surprisingly, equilibria may also be found by seeking flows that are invariant under a single very large Backwards-Euler Forwards-Euler timestep. We show that this method, called Stokes preconditioning, is 10 to 50 times faster at computing steady states in plane Couette flow and traveling waves in pipe flow. Moreover, it can be carried out using Channelflow (by Gibson) and Openpipeflow (by Willis) without any changes to these popular spectral codes. We explain the convergence rate as a function of the integration period and the Reynolds number by computing the full spectra of the operators corresponding to the Jacobians of both methods.
    Comment: in Computational Modelling of Bifurcations and Instabilities in Fluid Dynamics, ed. Alexander Gelfgat (Springer, 2018).
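The Backwards-Euler Forwards-Euler fixed-point idea can be illustrated on a toy system: for du/dt = Lu + N(u), one BEFE step (implicit on the linear term, explicit on the nonlinear one) has exactly the steady states of the ODE as its fixed points for any timestep, and a very large timestep effectively preconditions Newton's method by the inverse of the stiff linear operator. A sketch under those assumptions; the 2x2 matrix L and the nonlinearity N are invented stand-ins for the Navier-Stokes operators, not from the paper:

```python
import numpy as np

# Toy analogue: u_new = (I - dt L)^{-1} (u + dt N(u)) is one BEFE step.
# Its fixed points satisfy L u + N(u) = 0 for ANY dt, so Newton on the
# step residual with a huge dt converges to a true steady state.
L = np.array([[-2.0, 1.0], [0.0, -5.0]])             # stiff linear part
N = lambda u: np.array([u[0] * u[1] + 1.0, -u[0]**2 + 2.0])

def befe_step(u, dt):
    return np.linalg.solve(np.eye(2) - dt * L, u + dt * N(u))

def newton_fixed_point(u, dt, tol=1e-12, maxit=50):
    """Newton's method on r(u) = befe_step(u, dt) - u, with a
    finite-difference Jacobian (affordable in two dimensions)."""
    for _ in range(maxit):
        r = befe_step(u, dt) - u
        if np.linalg.norm(r) < tol:
            break
        J = np.empty((2, 2))
        eps = 1e-7
        for j in range(2):
            du = np.zeros(2)
            du[j] = eps
            J[:, j] = (befe_step(u + du, dt) - (u + du) - r) / eps
        u = u - np.linalg.solve(J, r)
    return u

u_star = newton_fixed_point(np.array([0.5, 0.5]), dt=1e6)
print(u_star, np.linalg.norm(L @ u_star + N(u_star)))
```

In the actual codes the "step" is a full spectral timestep of Channelflow or Openpipeflow and the linear solves are matrix-free, but the fixed-point structure is the same.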

    Accurate masses and radii of normal stars: modern results and applications

    This paper presents and discusses a critical compilation of accurate, fundamental determinations of stellar masses and radii. We have identified 95 detached binary systems containing 190 stars (94 eclipsing systems, and alpha Centauri) that satisfy our criterion that the mass and radius of both stars be known to 3% or better. To these we add interstellar reddening, effective temperature, metal abundance, rotational velocity and apsidal motion determinations when available, and we compute a number of other physical parameters, notably luminosity and distance. We discuss the use of this information for testing models of stellar evolution. The amount and quality of the data also allow us to analyse the tidal evolution of the systems in considerable depth, testing prescriptions of rotational synchronisation and orbital circularisation in greater detail than was possible before. The new data also enable us to derive empirical calibrations of M and R for single (post-) main-sequence stars above 0.6 M(Sun). Simple polynomial functions of T(eff), log g and [Fe/H] yield M and R with errors of 6% and 3%, respectively. Excellent agreement is found with independent determinations for host stars of transiting extrasolar planets, and good agreement with determinations of M and R from stellar models as constrained by trigonometric parallaxes and spectroscopic values of T(eff) and [Fe/H]. Finally, we list a set of 23 interferometric binaries with masses known to better than 3%, but without fundamental radius determinations (except alpha Aur). We discuss the prospects for improving these and other stellar parameters in the near future.
    Comment: 56 pages including figures and tables. To appear in The Astronomy and Astrophysics Review. ASCII versions of the tables will appear in the online version of the article.
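A calibration of the kind described, log M expressed as a low-order polynomial in T(eff), log g and [Fe/H] and fitted by least squares to the binary-star sample, can be sketched as follows. The sample, basis functions and coefficients below are synthetic stand-ins chosen for illustration; they are not the paper's actual calibration:

```python
import numpy as np

# Torres-style calibration sketch: fit log M as a polynomial in
# X = log10(Teff) - 4.1, log g and [Fe/H].  Data are synthetic.
rng = np.random.default_rng(1)
n = 200
teff = rng.uniform(4000, 30000, n)
logg = rng.uniform(3.5, 4.6, n)
feh = rng.uniform(-0.6, 0.4, n)
X = np.log10(teff) - 4.1

def design(X, logg, feh):
    # low-order polynomial basis in the three observables
    return np.column_stack([np.ones_like(X), X, X**2, X**3,
                            logg, logg**2, feh])

true_coeffs = np.array([0.3, 1.5, -0.2, 0.05, -0.1, 0.01, 0.08])
logM = design(X, logg, feh) @ true_coeffs + 0.01 * rng.normal(size=n)

coeffs, *_ = np.linalg.lstsq(design(X, logg, feh), logM, rcond=None)
scatter = np.std(design(X, logg, feh) @ coeffs - logM)
print(f"rms scatter in log M: {scatter:.4f}")
```

The residual scatter of such a fit, converted from dex in log M to a fractional mass error, is what the quoted 6% and 3% accuracies for M and R refer to.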

    Portable light transmission measuring system for preserved corneas

    BACKGROUND: The authors have developed a small portable device for the objective measurement of the transparency of corneas stored in preservative medium, for use by eye banks in evaluation prior to transplantation. METHODS: The optical system consists of a white light, lenses, and pinholes that collimate the white light beams and illuminate the cornea in its preservative medium, and an optical filter (400–700 nm) that selects the wavelength range of interest. A sensor detects the light that passes through the cornea, and the average corneal transparency is displayed. In order to obtain the tissue transparency alone, an electronic circuit was built to record a baseline input of the preservative medium prior to the measurement of corneal transparency. The operation of the system involves three steps: adjusting the "0 %" transmittance of the instrument, determining the "100 %" transmittance of the system, and finally measuring the transparency of the preserved cornea inside the storage medium. RESULTS: Fifty selected corneas were evaluated. Each cornea was submitted to three evaluation methods: subjective classification of transparency through a slit lamp, quantification of the transmittance of light using a previously developed corneal spectrophotometer, and measurement of transparency with the portable device. CONCLUSION: By comparing the three methods and using the expertise of trained eye bank personnel, a table for quantifying corneal transparency with the new device has been developed. The correlation factor between the corneal spectrophotometer and the new device is 0.99813, leading to a system that is able to standardize transparency measurements of preserved corneas, which are currently made subjectively.
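The three-step procedure described above amounts to a linear rescaling of the sensor reading between the 0 % and 100 % references, with the medium-only baseline redefining the 100 % point so that only the tissue's attenuation is reported. A sketch of that arithmetic; the function interface and the voltage values are invented for illustration and are not the instrument's firmware:

```python
def percent_transparency(reading, dark, full, baseline=None):
    """Convert a raw sensor reading to percent transparency.

    dark     -- the "0 %" transmittance reading (step 1)
    full     -- the "100 %" reading through the empty path (step 2)
    baseline -- optional reading of the preservative medium alone;
                when given, it defines 100 % so only the tissue's
                attenuation is reported (step 3).
    """
    span = (baseline if baseline is not None else full) - dark
    return 100.0 * (reading - dark) / span

# Cornea reading 0.52 V, dark level 0.05 V, empty path 0.95 V,
# medium-only baseline 0.90 V:
print(f"{percent_transparency(0.52, 0.05, 0.95, baseline=0.90):.1f} %")
```

Taking the ratio against the medium-only baseline is what lets the device report tissue transparency without a separate measurement of the medium's absorbance.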

    Real-Time Big Data Analytics in Smart Cities from LoRa-Based IoT Networks

    The current burst of Internet of Things (IoT) technologies implies the emergence of new lines of investigation, regarding not only hardware and protocols but also new methods for analysing the produced data that satisfy the constraints of the IoT environment: a real-time and a big-data approach. The real-time restriction stems from the continuous generation of data by the endpoints connected to an IoT network; owing to the connection and scaling capabilities of such a network, the amount of data to process is so large that big-data techniques become essential. In this article, we present a system consisting of two main modules: on the one hand, the infrastructure, a complete LoRa-based network designed, tested and deployed at the Pablo de Olavide University; on the other, the analytics, a big-data streaming system that processes the inputs produced by the network to obtain useful, valid and hidden information.
    Ministerio de Economía y Competitividad TIN2017-88209-C2-1-
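The real-time constraint described here is commonly handled with windowed aggregation over the incoming readings: each new sample updates a per-sensor statistic while expired samples are evicted. A minimal sliding-window sketch in plain Python (class, method and sensor names are illustrative; the deployed system uses a big-data streaming engine rather than this toy):

```python
from collections import deque
import time

class SlidingWindowAverage:
    """Toy stand-in for a streaming analytics stage: keep a per-sensor
    sliding time window over readings arriving from the network and
    expose a rolling mean."""

    def __init__(self, window_seconds=60.0):
        self.window = window_seconds
        self.samples = {}                 # sensor id -> deque of (t, value)

    def push(self, sensor, value, t=None):
        t = time.monotonic() if t is None else t
        q = self.samples.setdefault(sensor, deque())
        q.append((t, value))
        while q and t - q[0][0] > self.window:   # evict expired samples
            q.popleft()

    def mean(self, sensor):
        q = self.samples.get(sensor)
        return sum(v for _, v in q) / len(q) if q else None

win = SlidingWindowAverage(window_seconds=60.0)
for t, v in [(0, 20.0), (30, 22.0), (90, 24.0)]:  # the 0 s reading expires
    win.push("temp-01", v, t=t)
print(win.mean("temp-01"))
```

A production deployment would shard this state across workers and persist window contents, but the eviction-on-arrival pattern is the same.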

    Towards testing of a second-generation bladed receiver

    A bladed receiver design concept is presented which offers a >2% increase in overall receiver efficiency after considering spillage, reflection, emission and convection losses, based on an integrated optical-thermal model, for a design where the working fluid is conventional molten salt operating in the standard 290–565°C temperature range. A novel testing methodology is described, using air and water to test the receiver when molten salt facilities are not available. Techno-economic analysis shows that the receiver could achieve a saving of 4 AUD/MWhe in the levelised cost of energy, but only if the bladed receiver design can be implemented at no additional cost.