
    Enhancement of deep epileptiform activity in the EEG via 3-D adaptive spatial filtering

    The detection of epileptiform discharges (ED’s) in the electroencephalogram (EEG) is an important component in the diagnosis of epilepsy. However, when the epileptogenic source is located deep in the brain, the ED’s at the scalp are often masked by more superficial, higher-amplitude EEG activity. A noninvasive technique using an adaptive “beamformer” spatial filter has been investigated for the enhancement of signals from deep sources in the brain suspected of containing ED’s. A forward three-layer spherical model was used to relate a dipolar source to the recorded signals and to determine the beamformer’s spatial response constraints. The beamformer adapts, using the least-mean-squares (LMS) algorithm, to reduce signals from sources distant from an arbitrarily defined location in the brain. The beamformer produces three outputs: the orthogonal components of the signal estimated to have arisen at or near the assumed location. Simulations were performed by using the same forward model to superimpose realistic ED’s on normal EEG recordings. The simulations show the beamformer’s ability to enhance signals emanating from deep foci, quantified by an enhancement ratio (ER): the improvement in signal-to-noise ratio (SNR) relative to that observed at any of the scalp electrodes. The performance of the beamformer has been evaluated with respect to 1) the number of scalp electrodes, 2) the recording montage, 3) dependence on the background EEG, 4) dependence on the magnitude, depth, and orientation of the epileptogenic focus, and 5) sensitivity to inaccuracies in the estimated location of the focus. Results from the simulations show the beamformer’s performance to be dependent on the number of electrodes and moderately sensitive to variations in the EEG background. Conversely, its performance appears to be largely independent of the amplitude and morphology of the ED.
    The dependence studies indicated that the beamformer’s performance was moderately dependent on eccentricity, with the ER increasing as the dipolar source and the beamformer were moved from the center to the surface of the brain (1.51–2.26 for radial dipoles and 1.17–2.69 for tangential dipoles). The beamformer was also moderately dependent on variations in polar or azimuthal angle for radial and tangential dipoles. Higher ER’s tended to be seen for locations between electrode sites. The beamformer was more sensitive to inaccuracies in the polar and azimuthal location than in the depth of the dipolar source. For polar locations, an ER > 1.0 was achieved when the beamformer was located within 25° of a radial dipole and 35° of a tangential dipole; similarly, angular ranges of 37.5° and 45°, respectively, applied to inaccuracies in azimuthal locations. Preliminary results from real EEG records, comprising 12 definite or questionable epileptiform events from four patients, demonstrated the beamformer’s ability to enhance these events by a mean of 100% (52%–215%) for referential data and a mean of 104% (50%–145%) for bipolar data.
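The constrained LMS adaptation described above can be sketched as a Frost-style linearly constrained update: minimise the beamformer's output power while holding unit gain toward the assumed source location. The sketch below is illustrative only; the lead-field vector is a random stand-in (the paper derives it from the three-layer spherical head model), and the three orthogonal outputs are collapsed to one.

```python
import numpy as np

rng = np.random.default_rng(0)

n_ch = 8       # scalp electrodes
n_samples = 5000
mu = 1e-3      # LMS step size

# Hypothetical lead-field vector for the assumed deep dipole location
# (a random stand-in for the spherical-model forward solution).
c = rng.standard_normal(n_ch)
c /= np.linalg.norm(c)

# Simulated recordings: background EEG noise plus a weak "deep" source
# projected through the lead field.
s = np.sin(2 * np.pi * 5 * np.arange(n_samples) / 250.0)
x = 0.2 * np.outer(c, s) + rng.standard_normal((n_ch, n_samples))

# Frost's linearly constrained LMS beamformer: minimise output power
# subject to unit gain toward the assumed location (c . w = 1).
C = c[:, None]
F = C @ np.linalg.solve(C.T @ C, np.array([1.0]))    # satisfies c . F = 1
P = np.eye(n_ch) - C @ np.linalg.solve(C.T @ C, C.T) # projects onto c . v = 0

w = F.copy()
y = np.empty(n_samples)
for k in range(n_samples):
    y[k] = w @ x[:, k]                        # beamformer output sample
    w = P @ (w - mu * y[k] * x[:, k]) + F     # LMS step, then restore constraint

# The unit-gain constraint holds at every iteration.
assert abs(c @ w - 1.0) < 1e-6
```

The projection matrix P keeps each gradient step inside the constraint subspace, so interfering sources are suppressed without attenuating the signal from the assumed location.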

    Effect of dimples on glancing shock wave turbulent boundary layer interactions

    An experimental study has been conducted to examine the control effectiveness of dimples on the glancing shock wave turbulent boundary layer interaction produced by a series of hemi-cylindrically blunted fins at Mach numbers 0.8 and 1.4, and at angles of sweep 0°, 15°, 30° and 45°. Schlieren photography, oil flow, pressure-sensitive paints, and pressure tappings were employed to examine the characteristics of the induced flow field. The passive control technique used a series of 2 mm diameter, 1 mm deep indents drilled across the hemi-cylindrical leading edge at angles 0°, 45° and 90°. The effects of dimples were highly dependent on their orientation relative to the leading edge apex and on the local boundary layer properties.

    Gravitational Coupling and Dynamical Reduction of The Cosmological Constant

    We introduce a dynamical model to reduce a large cosmological constant to a sufficiently small value. The basic ingredient in this model is a distinction made between the two unit systems used in cosmology and particle physics. We have used a conformally invariant gravitational model to define a particular conformal frame in terms of the large-scale properties of the universe. It is then argued that the contributions of mass scales in particle physics to the vacuum energy density should be considered in a different conformal frame. In this manner, a decaying mechanism is presented in which the conformal factor appears as a dynamical field and plays a key role in relaxing a large effective cosmological constant. Moreover, we argue that this model also provides a possible explanation for the coincidence problem. Comment: To appear in GR

    When the working day is through: The end of work as identity?

    This article seeks to present a counter-case to the ‘end of work thesis’ advocated by writers such as Beck, Sennett and Bauman. It argues that work remains a significant locus of personal identity and that these writers’ depiction of endemic insecurity in the workplace is inaccurate and lacks an empirical basis. The article draws upon case study data to illustrate how, across a range of workplaces, work remains an important source of identity, meaning and social affiliation.

    Bulk Scale Factor at Very Early Universe

    In this paper we propose a higher-dimensional cosmology based on the FRW model and the brane-world scenario. We treat the warp factor of the brane-world scenario as a scale factor in a 5-dimensional generalized FRW metric, which we call the bulk scale factor, and obtain its evolution with space-like and time-like extra dimensions. It is then shown that additional space-like dimensions can produce an exponential bulk scale factor under a repulsive strong gravitational force in the empty universe at a very early stage. Comment: 7 pages, October 201

    The cosmological constant and the coincidence problem in a new cosmological interpretation of the universal constant c

    In a recent paper (Vigoureux et al., Int. J. Theor. Phys. 47:928, 2007) it has been suggested that the velocity of light and the expansion of the universe are two aspects of one single concept connecting space and time in the expanding universe. It has then been shown that solving Friedmann's equations with that interpretation (and keeping c = constant) can explain a number of unnatural features of the standard cosmology (for example: the flatness problem, the observed uniformity of the temperature and density of the cosmological background radiation, the small-scale inhomogeneity problem...) and leads to reconsidering the Hubble diagram of distance moduli and redshifts as obtained from recent observations of type Ia supernovae, without needing an accelerating universe. In the present work we examine the problem of the cosmological constant. We show that our model can exactly generate Λ (equation of state P_φ = −ρ_φ c², with Λ ∝ R⁻²), contrary to the standard model, which cannot generate it exactly. We also show how it can solve the so-called cosmic coincidence problem.

    Interacting New Agegraphic Dark Energy in a Cyclic Universe

    The main goal of this work is the investigation of NADE in the cyclic universe scenario. Since the cyclic universe involves a phantom phase (ω < −1), it is shown that, when there is no interaction between matter and dark energy, ADE and NADE do not produce a phantom phase and therefore cannot describe a cyclic universe. We therefore study interacting models of ADE and NADE in the modified Friedmann equation. We find that, in the high-energy regime, which is a necessary part of the cyclic universe's evolution, only NADE can describe this phantom-phase era for the cyclic universe. The deceleration parameter shows that the universe has a deceleration phase after an acceleration phase, and that NADE is able to produce a cyclic universe. It is also found valuable to study the generalized second law of thermodynamics. Since the loop quantum correction is taken into account in the high-energy regime, it may not be suitable to use the standard treatment of thermodynamics, so we turn our attention to the result of Ref. [29], in which the authors studied thermodynamics in loop quantum gravity, and we show which condition can satisfy the generalized second law of thermodynamics. Comment: 8 pages, 3 figures

    Scalar-Tensor Gravity and Quintessence

    Scalar fields with inverse power-law effective potentials may provide a negative-pressure component to the energy density of the universe today, as required by cosmological observations. In order to be cosmologically relevant today, the scalar field should have a mass m_φ = O(10⁻³³ eV), thus potentially inducing sizable violations of the equivalence principle and space-time variations of the coupling constants. Scalar-tensor theories of gravity provide a framework for accommodating phenomenologically acceptable ultra-light scalar fields. We discuss non-minimally coupled scalar-tensor theories in which the scalar-matter coupling is a dynamical quantity. Two attractor mechanisms are operative at the same time: one towards the tracker solution, which accounts for the accelerated expansion of the Universe, and one towards general relativity, which makes the ultra-light scalar field phenomenologically safe today. As in usual tracker-field models, the late-time behavior is largely independent of the initial conditions. Strong distortions in the cosmic microwave background anisotropy spectra as well as in the matter power spectrum are expected. Comment: 5 pages, 4 figures
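The quoted mass scale can be checked directly: a field that is dynamically relevant on cosmological timescales today must have a mass comparable to the present Hubble rate, and ħH₀ expressed in electron-volts is indeed of order 10⁻³³ eV. A back-of-the-envelope check, taking H₀ ≈ 70 km/s/Mpc as an illustrative value:

```python
# Order-of-magnitude check of the quintessence mass scale m_phi ~ H0.
hbar_eV_s = 6.582e-16         # reduced Planck constant, eV * s
Mpc_m = 3.0857e22             # metres per megaparsec
H0_si = 70e3 / Mpc_m          # H0 = 70 km/s/Mpc, converted to 1/s
m_phi_eV = hbar_eV_s * H0_si  # hbar * H0, in eV

print(f"{m_phi_eV:.2e} eV")   # ~1.5e-33 eV, i.e. O(1e-33 eV)
```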

    On the resolution of cosmic coincidence problem and phantom crossing with triple interacting fluids

    We here investigate a cosmological model in which three fluids interact with each other through certain coupling parameters and energy exchange rates. The motivation for the problem stems from the puzzling 'triple coincidence problem', which naively asks why the cosmic energy densities of matter, radiation and dark energy are almost of the same order of magnitude at the present time. In our model, we determine the conditions under which the triple interacting fluids will cross the phantom divide. Comment: 22 pages, 6 figures, to appear in Eur. Phys. J. C (2009

    Can a matter-dominated model with constant bulk viscosity drive the accelerated expansion of the universe?

    We test a cosmological model whose only component is a pressureless fluid with a constant bulk viscosity, as an explanation for the present accelerated expansion of the universe. We classify all the possible scenarios for the universe predicted by the model according to their past, present and future evolution, and we test its viability by performing a Bayesian statistical analysis using the SCP "Union" data set (307 SNe Ia), imposing the second law of thermodynamics on the dimensionless constant bulk viscous coefficient ζ and comparing the age of the universe predicted by the model with the constraints coming from the oldest globular clusters. The best estimated values found for ζ and the Hubble constant H0 are: ζ = 1.922 ± 0.089 and H0 = 69.62 ± 0.59 km/s/Mpc with χ² = 314. The age of the universe is found to be 14.95 ± 0.42 Gyr. We see that the estimated values of H0 and of χ² are very similar to those obtained from the ΛCDM model using the same SNe Ia data set. The estimated age of the universe is in agreement with the constraints coming from the oldest globular clusters. Moreover, the estimated value of ζ is positive, in agreement with the second law of thermodynamics (SLT). On the other hand, we perform different forms of marginalization over the parameter H0 in order to study the sensitivity of the results to the way H0 is marginalized. We find that the dependence of the best estimated values of the model's free parameters on the way H0 is marginalized is almost negligible. Therefore, this simple model might be a viable candidate to explain the present acceleration of the expansion of the universe. Comment: 31 pages, 12 figures and 2 tables. Accepted for publication in the Journal of Cosmology and Astroparticle Physics. Analysis uses the new SCP "Union" SNe Ia dataset instead of the Gold 2006 and ESSENCE datasets, without changes in the conclusions. Added references.
    Related works: arXiv:0801.1686 and arXiv:0810.030
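Under the standard Eckart treatment of bulk viscosity, a pressureless fluid with constant viscous coefficient ξ has effective pressure P = −3ξH; with a dimensionless ζ proportional to ξ/H0 (an assumption about the abstract's normalisation), the Friedmann equations then admit the closed-form solution E(z) ≡ H/H0 = ζ/3 + (1 − ζ/3)(1 + z)^(3/2), with deceleration parameter q(z) = 1/2 − ζ/(2E). A minimal sketch evaluating this at the abstract's best-fit value:

```python
def E(z, zeta):
    """Dimensionless Hubble rate H(z)/H0 for a pressureless fluid with
    constant bulk viscosity (Eckart form; zeta normalisation assumed)."""
    return zeta / 3.0 + (1.0 - zeta / 3.0) * (1.0 + z) ** 1.5

def q(z, zeta):
    """Deceleration parameter q = -1 - Hdot/H^2 for the same model."""
    return 0.5 - zeta / (2.0 * E(z, zeta))

zeta = 1.922                  # best-fit value quoted in the abstract
print(E(0.0, zeta))           # 1.0: normalised to H0 today
print(q(0.0, zeta))           # (1 - zeta)/2, about -0.46: accelerating now
print(q(2.0, zeta) > 0)       # True: decelerated expansion in the past
```

With ζ ≈ 1.92 the sketch decelerates at high redshift and accelerates today, consistent with the abstract's conclusion that the viscous fluid alone can drive the present acceleration.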
