
    Comparative Distribution of System Losses to Market Participants Using Different Loss Allocation Methods

    A key part of electricity pricing is the fair and equitable allocation of system losses. This paper critically compares several existing loss allocation methods. The methods addressed include existing approaches such as the pro rata method, the proportional sharing method [1], the loss formula [2], and the incremental method [3], in addition to a new method proposed by the authors, which allocates losses from a loop-based representation of system behaviour. The numerical loss allocations for both the IEEE 14-bus network and a modified Nordic 41-bus system are listed for comparison. The similarity between the different loss allocation methods varies considerably, depending upon the system to which the methods are applied. This is primarily a result of the manner in which the different allocation methods address the impact of network structure. Further work is still required to determine which method encourages better system operation.
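
    To make the simplest of these concrete, below is a minimal sketch of the pro rata method: total system losses are divided among participants in proportion to their generated or consumed power. The 50/50 split between generators and loads and all names and numbers are illustrative assumptions, not taken from the paper.

# Minimal sketch of pro rata loss allocation (illustrative assumptions:
# losses are split 50/50 between generators and loads, and participants
# are given as name -> MW dictionaries).

def pro_rata_losses(generation_mw, load_mw, total_losses_mw, gen_share=0.5):
    """Allocate total system losses proportionally to each participant's power."""
    gen_total = sum(generation_mw.values())
    load_total = sum(load_mw.values())
    allocation = {}
    for name, p in generation_mw.items():
        allocation[name] = gen_share * total_losses_mw * p / gen_total
    for name, p in load_mw.items():
        allocation[name] = (allocation.get(name, 0.0)
                            + (1 - gen_share) * total_losses_mw * p / load_total)
    return allocation

# Example on made-up data: 10 MW of losses spread over two generators and two loads.
gens = {"G1": 300.0, "G2": 100.0}
loads = {"L1": 250.0, "L2": 140.0}
print(pro_rata_losses(gens, loads, 10.0))
# G1 gets 3.75 MW, G2 1.25 MW; L1 ~3.21 MW, L2 ~1.79 MW.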

    Comparative effectiveness of loss allocation methods for providing signals to affect market operation

    The distribution of system losses, an integral part of electricity pricing, can play an important role in the operation of electricity markets. To date, despite the existence of many loss allocation methods, no single method is commonly used in established electricity markets. Furthermore, some markets are still considering different methods that would provide more efficient treatment of losses and aid in improving market operations and structures. This paper compares the loss allocation methods used in the existing markets of Eastern Australia and Great Britain with the pro rata and proportional sharing approaches. By implementing the loss allocation methods on the CIGRE Nordic 32-bus system, we examine what behaviour each method encourages. Results suggest that the method used in the Australian market provides the most sophisticated signal to market participants. Similar results, however, can be obtained using the simpler approach taken in Great Britain. This reinforces that the selection of a loss allocation method is a market-dependent problem.
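
    For contrast with the pro rata sketch above, the proportional sharing principle can be illustrated at a single bus: each outflow is assumed to carry the bus's inflows in proportion to their magnitudes, the local rule that tracing-based allocation methods apply recursively over the whole network. The flows below are invented for illustration.

# Minimal sketch of the proportional sharing principle at one bus.
# Each outflow is composed of the inflows in proportion to inflow magnitudes;
# full tracing methods apply this rule recursively over the whole network.

def share_outflow(inflows_mw, outflow_mw):
    """Return how much of a single outflow is attributable to each inflow."""
    total_in = sum(inflows_mw.values())
    return {src: outflow_mw * p / total_in for src, p in inflows_mw.items()}

# A bus fed 60 MW from line A and 40 MW from line B, sending 30 MW down line C:
print(share_outflow({"A": 60.0, "B": 40.0}, 30.0))
# {'A': 18.0, 'B': 12.0} -- line C's flow is traced 60/40 to its sources.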

    Detection-Loophole-Free Test of Quantum Nonlocality, and Applications

    We present a source of entangled photons that violates a Bell inequality free of the "fair-sampling" assumption by over 7 standard deviations. This is the first photonic experiment to close the detection loophole, and we demonstrate enough "efficiency" overhead to eventually perform a fully loophole-free test of local realism. The entanglement quality is verified by maximally violating additional Bell tests, testing the upper limit of quantum correlations. Finally, we use the source to generate secure private quantum random numbers at rates over 4 orders of magnitude beyond previous experiments. Comment: Main text: 5 pages, 2 figures, 1 table. Supplementary Information: 7 pages, 2 figures
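
    To make "the upper limit of quantum correlations" concrete, here is a minimal sketch of the CHSH Bell parameter for a maximally entangled polarization pair, for which the correlation at analyzer angles a and b is E(a, b) = cos 2(a - b); local realism bounds |S| <= 2 while quantum mechanics allows up to 2*sqrt(2). The angle settings below are the standard textbook optimum, not necessarily those used in the experiment.

import math

# Correlation for a maximally entangled polarization pair measured at
# polarizer angles a and b (in radians): E(a, b) = cos(2 * (a - b)).
def corr(a, b):
    return math.cos(2 * (a - b))

# CHSH combination S = E(a,b) - E(a,b') + E(a',b) + E(a',b').
def chsh(a, ap, b, bp):
    return corr(a, b) - corr(a, bp) + corr(ap, b) + corr(ap, bp)

# Standard optimal settings: a = 0, a' = 45 deg, b = 22.5 deg, b' = 67.5 deg.
deg = math.pi / 180
S = chsh(0, 45 * deg, 22.5 * deg, 67.5 * deg)
print(S)  # ~2.828 = 2*sqrt(2), above the local-realist bound of 2.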

    Adaptive and coupled continuum-molecular mechanics simulations of amorphous materials

    A method to reduce the degrees of freedom in molecular mechanics simulations is presented. Although the approach is formulated with amorphous materials in mind, it is equally applicable to crystalline materials. The method can be selectively applied to regions where molecular displacements are expected to be small, while classical molecular mechanics (MM) is used simultaneously for regions undergoing large deformation. The accuracy and computational efficiency of the approach are demonstrated through the simulation of a polymer-like substrate being indented by a rigid hemispherical indenter. The region directly below the indenter is modelled by classical molecular mechanics, while the region further away has its degrees of freedom (DOFs) reduced by a factor of about 50. The results of automatically reverting regions of reduced DOFs back to classical MM also demonstrate the capability of performing adaptive simulations.
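
    A common way to reduce DOFs in this spirit is to slave atom displacements to interpolated values from a much smaller set of representative nodes. The 1D linear-interpolation sketch below is a generic illustration of that idea, not the paper's specific scheme.

import numpy as np

# Sketch: reduce DOFs by slaving atom displacements to a coarse set of nodes.
# Atom displacements u_atom = N @ u_node, where N holds linear interpolation
# weights; atomic forces map back to the nodes via N.T.

n_atoms, n_nodes = 100, 5          # ~20x DOF reduction in this toy 1D chain
x_atoms = np.linspace(0.0, 1.0, n_atoms)
x_nodes = np.linspace(0.0, 1.0, n_nodes)

# Build interpolation matrix N (hat functions on the coarse nodes).
N = np.zeros((n_atoms, n_nodes))
for j in range(n_nodes - 1):
    left, right = x_nodes[j], x_nodes[j + 1]
    mask = (x_atoms >= left) & (x_atoms <= right)
    t = (x_atoms[mask] - left) / (right - left)
    N[mask, j] = 1 - t
    N[mask, j + 1] = t

u_nodes = np.sin(np.pi * x_nodes)   # coarse displacement field
u_atoms = N @ u_nodes               # all atoms follow the interpolation
f_atoms = np.random.default_rng(0).normal(size=n_atoms)  # toy atomic forces
f_nodes = N.T @ f_atoms             # consistent reduced nodal forces
print(u_atoms.shape, f_nodes.shape)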

    An Assessment of Risk of Iodine Deficiency Among Pregnant Women in Sarawak, Malaysia

    Previous findings from a state-wide Iodine Deficiency Disorders (IDD) study among pregnant women (PW) in Sarawak indicated that PW are at risk of IDD and that further assessment is needed. This paper describes the methodology used in conducting this assessment of the risk of iodine deficiency among pregnant women in Sarawak, Malaysia. A total of 30 maternal child health care clinics (MCHCs) were selected using the probability proportional to population size (PPS) sampling technique. The PW sample size was calculated based on a 95% confidence interval (CI), relative precision of 5%, design effect of 2, anticipated IDD prevalence of 65.0% and non-response rate of 20%. Thus, the total sample size required was 750 (25 respondents per selected MCHC). The WHO Expanded Programme on Immunization (EPI) survey approach was used to randomly select the first respondent, and subsequent respondents were chosen until the required number of PW was reached. The required data were obtained through face-to-face interviews (socio-demographic and food frequency questionnaire), clinical assessments (thyroid size and hyper/hypothyroidism) and biochemical analysis (urine and blood serum). A total of 677 PW participated in the study, a response rate of 90.2%. The majority of the PW were in their second gravidity, aged 25-29 years, and of Malay ethnicity. The methodology used in this study was based on international guidelines and may provide state-level estimates. All necessary steps were taken to ensure valid and reliable findings on the current iodine status among PW.
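
    As a sketch of the PPS selection step used for the 30 clinics: clusters are chosen systematically along the cumulative population, so a clinic's chance of selection is proportional to the population it serves. The clinic list and numbers below are invented for illustration.

import random

# Sketch: systematic probability-proportional-to-size (PPS) cluster sampling.
# Clinics with larger catchment populations are more likely to be selected.

def pps_sample(clusters, n_select, seed=42):
    """clusters: list of (name, population). Returns n_select chosen names."""
    total = sum(pop for _, pop in clusters)
    interval = total / n_select
    random.seed(seed)
    start = random.uniform(0, interval)
    targets = [start + i * interval for i in range(n_select)]
    chosen, cum = [], 0.0
    it = iter(clusters)
    name, pop = next(it)
    for t in targets:
        while cum + pop < t:      # walk forward until t falls in this cluster
            cum += pop
            name, pop = next(it)
        chosen.append(name)
    return chosen

clinics = [("MCHC-A", 1200), ("MCHC-B", 4800), ("MCHC-C", 800), ("MCHC-D", 3200)]
print(pps_sample(clinics, 2))  # larger clinics are proportionally favoured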

    Monotonic functions in Bianchi models: Why they exist and how to find them

    All rigorous and detailed dynamical results in Bianchi cosmology rest upon the existence of a hierarchical structure of conserved quantities and monotonic functions. In this paper we uncover the underlying general mechanism and derive this hierarchical structure from the scale-automorphism group for an illustrative example: vacuum and diagonal class A perfect fluid models. First, kinematically, the scale-automorphism group leads to a reduced dynamical system that consists of a hierarchy of scale-automorphism invariant sets. Second, we show that, dynamically, the scale-automorphism group results in scale-automorphism invariant monotonic functions and conserved quantities that restrict the flow of the reduced dynamical system. Comment: 26 pages, replaced to match published version
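
    As a reminder of why such functions matter, a generic form of the monotonicity principle (stated here in textbook dynamical-systems terms, not in the paper's specific variables): a function strictly monotonic along the flow on an invariant set S rules out equilibria, periodic and recurrent orbits in S, and pushes all limit sets to where the monotonicity degenerates.

% Generic sketch of the monotonicity principle for a flow \dot{x} = f(x)
% on an invariant set S, with M a C^1 function satisfying
\frac{dM}{dt} = \nabla M \cdot f > 0 \quad \text{on } S .
% Then S contains no equilibria or periodic orbits, and for every x in S
\alpha(x),\ \omega(x) \subseteq \{\, s \in \partial S : \text{the strict monotonicity of } M \text{ fails at } s \,\} .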

    Parametrization and Classification of 20 Billion LSST Objects: Lessons from SDSS

    The Large Synoptic Survey Telescope (LSST) will be a large, wide-field ground-based system designed to obtain, starting in 2015, multiple images of the sky visible from Cerro Pachon in Northern Chile. About 90% of the observing time will be devoted to a deep-wide-fast survey mode which will observe a 20,000 deg^2 region about 1000 times during the anticipated 10 years of operations (distributed over six bands, ugrizy). Each 30-second visit will deliver a 5-sigma depth of r ~ 24.5 on average for point sources. The co-added map will be about 3 magnitudes deeper and will include 10 billion galaxies and a similar number of stars. We discuss various measurements that will be automatically performed for these 20 billion sources, and how they can be used for classification and determination of source physical and other properties. We provide a few classification examples based on SDSS data, such as color classification of stars, color-spatial proximity search for wide-angle binary stars, orbital-color classification of asteroid families, and the recognition of main Galaxy components based on the distribution of stars in the position-metallicity-kinematics space. Guided by these examples, we anticipate that two grand classification challenges for LSST will be 1) rapid and robust classification of sources detected in difference images, and 2) simultaneous treatment of diverse astrometric and photometric time series measurements for an unprecedentedly large number of objects. Comment: Presented at the "Classification and Discovery in Large Astronomical Surveys" meeting, Ringberg Castle, 14-17 October 2008
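
    As a flavour of the color classification of stars mentioned above, here is a hedged sketch of a simple color-color cut classifier; the cut values are invented for illustration and are not calibrated SDSS/LSST stellar loci.

import numpy as np

# Sketch: toy color-color classification of point sources.
# Sources are separated in (u-g, g-r) color space by straight cuts;
# the cut values below are illustrative, not calibrated survey loci.

def classify(u_g, g_r):
    """Label a source from its u-g and g-r colors."""
    if u_g < 0.7:                     # UV-excess objects sit blueward in u-g
        return "quasar-candidate"
    if g_r > 1.2:                     # very red stars (late spectral types)
        return "red-star"
    return "main-locus-star"

# Use on a made-up catalog of 5 sources:
u_g = np.array([0.3, 1.1, 1.4, 0.9, 2.0])
g_r = np.array([0.1, 0.4, 1.5, 0.6, 1.3])
print([classify(a, b) for a, b in zip(u_g, g_r)])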

    Mimicking diffuse supernova antineutrinos with the Sun as a source

    Measuring the electron antineutrino component of the cosmic diffuse supernova neutrino background (DSNB) is the next ambitious goal for low-energy neutrino astronomy. The largest flux is expected in the lowest accessible energy bin. However, for E < 15 MeV a possible signal can be mimicked by a solar electron antineutrino flux that originates from the usual 8B neutrinos by spin-flavor oscillations. We show that such an interpretation is possible within the allowed range of neutrino electromagnetic transition moments and solar turbulent field strengths and distributions. Therefore, an unambiguous detection of the DSNB requires a significant number of events at E > 15 MeV. Comment: 4 pages, 1 figure

    Herschel imaging of the dust in the Helix Nebula (NGC 7293)

    As part of our series of papers presenting the Herschel imaging of evolved planetary nebulae, we present images of the dust distribution in the Helix nebula (NGC 7293). Images at 70, 160, 250, 350, and 500 micron were obtained with the PACS and SPIRE instruments on board the Herschel satellite. The broadband maps show the dust distribution over the main Helix nebula to be clumpy and predominantly present in the barrel wall. We determined the spectral energy distribution of the main nebula in a consistent way using Herschel, IRAS, and Planck flux values. The emissivity index of 0.99 +/- 0.09, in combination with the carbon-rich molecular chemistry of the nebula, indicates that the dust consists mainly of amorphous carbon. The dust excess emission from the central star disk is detected at 70 micron, and the flux measurement agrees with previous measurements. We present the temperature and dust column density maps. The total dust mass across the Helix nebula (without its halo) is determined to be 0.0035 solar masses at a distance of 216 pc. The temperature map shows dust temperatures between 22 and 42 K, similar to the kinetic temperature of the molecular gas, supporting the conclusion that the dust and gas co-exist in high density clumps. Archived images are used to compare the location of the dust emission in the far infrared (Herschel) with the ionized (GALEX, Hbeta) and molecular hydrogen components. The different emission components are consistent with the Helix consisting of a thick-walled, barrel-like structure inclined to the line of sight. The radiation field decreases rapidly through the barrel wall. Comment: 8 pages, 9 figures, revised version, A&A in press
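
    The quoted emissivity index is the beta of a modified blackbody, F_nu ∝ nu^beta * B_nu(T), the standard model for far-infrared dust SEDs. Below is a minimal sketch evaluating that model at the Herschel bands; the temperature and normalization are illustrative values within the ranges quoted above.

import numpy as np

# Modified blackbody: F_nu ∝ nu^beta * B_nu(T), the standard model behind
# a far-infrared dust emissivity index like beta ≈ 0.99.
H = 6.626e-34   # Planck constant [J s]
C = 2.998e8     # speed of light [m/s]
KB = 1.381e-23  # Boltzmann constant [J/K]

def planck_nu(nu, temp):
    """Planck function B_nu(T) in SI units."""
    return 2 * H * nu**3 / C**2 / np.expm1(H * nu / (KB * temp))

def modified_blackbody(wavelength_um, temp, beta, scale=1.0):
    """Dust SED model: scale * nu^beta * B_nu(T)."""
    nu = C / (wavelength_um * 1e-6)
    return scale * nu**beta * planck_nu(nu, temp)

# Relative fluxes at the Herschel bands for beta = 0.99 and T = 30 K
# (a temperature within the 22-42 K range mapped across the nebula):
bands_um = np.array([70.0, 160.0, 250.0, 350.0, 500.0])
sed = modified_blackbody(bands_um, temp=30.0, beta=0.99)
print(sed / sed.max())  # band fluxes normalized to the brightest band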

    MAX 4 and MAX 5 CMB anisotropy measurement constraints on open and flat-Lambda CDM cosmogonies

    We account for experimental and observational uncertainties in likelihood analyses of cosmic microwave background (CMB) anisotropy data from the MAX 4 and MAX 5 experiments. These analyses use CMB anisotropy spectra predicted in open and spatially-flat Lambda cold dark matter cosmogonies. Amongst the models considered, the combined MAX data set is most consistent with the CMB anisotropy shape in Omega_0 ~ 0.1-0.2 open models, and less so with that in old (t_0 >~ 15-16 Gyr, i.e., low h), high baryon density (Omega_B >~ 0.0175/h^2), low density (Omega_0 ~ 0.2-0.4), flat-Lambda models. The MAX data alone do not rule out any of the models we consider at the 2-sigma level. Model normalizations deduced from the combined MAX data are consistent with those drawn from the UCSB South Pole 1994 data, except for the flat bandpower model, where MAX favours a higher normalization. The combined MAX data normalization for open models with Omega_0 ~ 0.1-0.2 is higher than the upper 2-sigma value of the DMR normalization. The combined MAX data normalization for old (low h), high baryon density, low-density flat-Lambda models is below the lower 2-sigma value of the DMR normalization. Open models with Omega_0 ~ 0.4-0.5 are not far from the shape most favoured by the MAX data, and for these models the MAX and DMR normalizations overlap. The MAX and DMR normalizations also overlap for Omega_0 = 1 and some higher h, lower Omega_B, low-density flat-Lambda models. Comment: LaTeX, 37 pages, uses aasms4 style
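
    The normalization comparisons above rest on one-parameter likelihood fits of a model bandpower shape to measured bandpowers. Below is a minimal sketch of such a fit under Gaussian errors; the bandpower values, model shape, and uncertainties are invented for illustration.

import numpy as np

# Sketch: one-parameter amplitude (normalization) fit of a model CMB
# bandpower shape to measured bandpowers, assuming Gaussian errors.
# For d_i = A * s_i + noise, the maximum-likelihood amplitude is
#   A_hat = sum(d_i s_i / sigma_i^2) / sum(s_i^2 / sigma_i^2),
# with 1-sigma uncertainty 1 / sqrt(sum(s_i^2 / sigma_i^2)).

def fit_amplitude(data, model, sigma):
    w = model / sigma**2
    a_hat = np.sum(data * w) / np.sum(model * w)
    a_err = 1.0 / np.sqrt(np.sum(model * w))
    return a_hat, a_err

# Invented bandpowers (uK^2), relative model shape, and errors:
data = np.array([1900.0, 2300.0, 2100.0])
model = np.array([1.0, 1.1, 1.05])
sigma = np.array([400.0, 450.0, 420.0])
A, dA = fit_amplitude(data, model, sigma)
print(f"A = {A:.0f} +/- {dA:.0f} uK^2")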