
    Quantifying Compliance Costs for Small Businesses in New Zealand

    This paper reports on a small-scale study of the compliance costs of small New Zealand businesses. Participating firms were asked to keep a record of both time spent and expenditure directly incurred over a thirteen-week period. This differs from previous studies, which rely on a firm's recall of how much time has been spent on compliance over the previous year. The results suggest that New Zealand small businesses on average spend less time and money on compliance than previous studies have indicated. However, a number of firms do perceive compliance to be a major issue, and in some cases this perception prevents firms from expanding.

    Anomalous optical surface absorption in nominally pure silicon samples at 1550 nm

    The announcement of the direct detection of gravitational waves (GW) by the LIGO and Virgo collaborations in February 2016 has removed any uncertainty around the possibility of GW astronomy. It has demonstrated that future detectors with sensitivities ten times greater than the Advanced LIGO detectors would see thousands of events per year. Many proposals for such future interferometric GW detectors assume the use of silicon test masses. Silicon has low mechanical loss at low temperatures, which leads to low displacement noise for a suspended interferometer mirror. In addition to low mechanical loss, the test masses are required to have low optical loss. Measurements at 1550 nm have indicated that material with a low enough bulk absorption is available; however, there have been suggestions that this low-absorption material has a surface absorption of > 100 ppm, which could preclude its use in future cryogenic detectors. We show in this paper that this surface loss is not intrinsic but is likely to be a result of particular polishing techniques, and that it can be removed or avoided by the correct polishing procedure. This is an important step towards high gravitational wave detection rates in silicon-based instruments.
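    To see why a surface absorption of order 100 ppm matters, it helps to compare it against the bulk contribution for a representative test mass. A minimal sketch follows; the bulk absorption coefficient and thickness are illustrative assumptions, not values from the paper:

        # Rough single-pass absorption budget for a silicon test mass at 1550 nm.
        # Both input values below are assumed for illustration only.
        alpha_bulk_ppm_per_cm = 5.0    # assumed bulk absorption coefficient (ppm/cm)
        thickness_cm = 20.0            # assumed test-mass thickness (cm)
        surface_ppm_per_face = 100.0   # surface absorption level discussed above

        bulk = alpha_bulk_ppm_per_cm * thickness_cm  # bulk absorption per pass (ppm)
        surface = 2 * surface_ppm_per_face           # two faces traversed per pass (ppm)

        print(f"bulk: {bulk:.0f} ppm, surfaces: {surface:.0f} ppm")
        # With these assumptions the two surfaces absorb twice as much as the
        # entire bulk, which is why a 100 ppm surface layer could dominate.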

    The thermodynamics of prediction

    A system responding to a stochastic driving signal can be interpreted as computing, by means of its dynamics, an implicit model of the environmental variables. The system's state retains information about past environmental fluctuations, and a fraction of this information is predictive of future ones. The remaining nonpredictive information reflects model complexity that does not improve predictive power, and thus represents the ineffectiveness of the model. We expose the fundamental equivalence between this model inefficiency and thermodynamic inefficiency, measured by dissipation. Our results hold arbitrarily far from thermodynamic equilibrium and are applicable to a wide range of systems, including biomolecular machines. They highlight a profound connection between the effective use of information and efficient thermodynamic operation: any system constructed to keep memory about its environment and to operate with maximal energetic efficiency has to be predictive.
    Comment: 5 pages, 1 figure
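    The equivalence claimed here can be written compactly. The following display is a schematic rendering under assumed notation (s_t for the driving signal, x_t for the system state), in the spirit of the abstract rather than the paper's exact statement:

        \[
          \langle W_{\mathrm{diss}} \rangle \;=\; k_B T \left[ I_{\mathrm{mem}} - I_{\mathrm{pred}} \right],
          \qquad
          I_{\mathrm{mem}} = I[x_t ; s_t], \quad
          I_{\mathrm{pred}} = I[x_t ; s_{t+1}],
        \]

    so the average dissipated work is proportional to exactly the nonpredictive part of the retained information: memory that does not help predict the next signal value must be paid for energetically.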

    What Does it Take to Reduce Massachusetts Emissions 50% by 2030? Challenges Meeting Climate Goals Under Current Legislation (S.2500)

    Executive Summary: To do its part in the global fight against climate change, Massachusetts must achieve net zero greenhouse gas emissions by mid-century, and aggressive intermediate goals are essential to ensure that the state is on track for net zero. Senate bill 2500, “An Act setting next generation climate policy,” stipulates that 2030 emissions must “not be less than 50% below the 1990 emissions level.” In 2017, Massachusetts carbon dioxide emissions were 22% below 1990 levels, so the state will need to reduce annual emissions by an additional 28% of 1990 levels by 2030. If enacted, S.2500 would give the state important new tools that would significantly reduce emissions. However, our analysis suggests that additional policies beyond those in S.2500 will likely be necessary to reliably achieve the 2030 goal of cutting emissions in half from 1990 levels. With no new policies enacted (but not accounting for COVID-19), we estimate that 2030 emissions will be roughly 35% below 1990 levels (Figure 1, BAU). We use a range of policy proposals to approximate the key policies in S.2500: the Transportation and Climate Initiative cap-and-invest program, a net zero stretch building code, and a moderate carbon price ($29/MT rising to $48 in 2030, roughly similar to one in a recent legislative proposal) in the residential, commercial, and industrial sectors. We use published modeling results to approximate these policies and estimate that they would reduce emissions by an additional 6% below 1990 levels (~41% total). This leaves an emissions reduction shortfall of ~9% (or 8 million metric tons of CO2, roughly the equivalent of 1.7 million passenger vehicles) in 2030 (see Fig. 1). To reach a 50% reduction by 2030, Massachusetts could implement a higher carbon price (e.g., $58/MT rising to $95 by 2030), which would be possible under S.2500. Some (but not all) models suggest that a higher carbon price alone would be sufficient to reach 50% below 1990 levels by 2030. Another option (not in S.2500) is to enact an ambitious clean electricity standard to reduce electricity emissions. To ensure we reach the 2030 goal, robust policies will be needed in all major sectors of the state's economy, with electricity sector decarbonization particularly important (Fig. 1, Stringent case).
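    The arithmetic behind the shortfall can be made explicit. In the sketch below the 1990 baseline is an assumed round number chosen only to make the percentages concrete; the abstract quotes shares of 1990 levels, not the baseline itself:

        # Back-of-envelope version of the 2030 emissions gap described above.
        baseline_1990 = 90.0   # assumed 1990 CO2 emissions (million metric tons)
        bau_2030 = 0.35        # business-as-usual: ~35% below 1990 by 2030
        s2500_extra = 0.06     # additional ~6% from policies approximating S.2500
        target = 0.50          # statutory goal: 50% below 1990

        achieved = bau_2030 + s2500_extra    # ~41% below 1990
        shortfall_share = target - achieved  # ~9% of 1990 levels
        shortfall_mmt = shortfall_share * baseline_1990

        print(f"shortfall: {shortfall_share:.0%} of 1990 levels, "
              f"~{shortfall_mmt:.0f} MMT CO2 under the assumed baseline")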

    Saturation Effects in a Tunable Coherent Near-Infrared Source

    A saturation effect in a tunable infrared source utilizing four-wave parametric conversion in potassium vapor is reported and is shown to be the result of parasitic oscillations. A hundredfold increase over previously attained power levels has been effected via elimination of these oscillations.

    Reduced representation bisulfite sequencing for comparative high-resolution DNA methylation analysis

    We describe a large-scale random approach termed reduced representation bisulfite sequencing (RRBS) for analyzing and comparing genomic methylation patterns. BglII restriction fragments were size-selected to 500–600 bp, equipped with adapters, treated with bisulfite, PCR amplified, cloned and sequenced. We constructed RRBS libraries from murine ES cells and from ES cells lacking the DNA methyltransferases Dnmt3a and Dnmt3b and with knocked-down (kd) levels of Dnmt1 (Dnmt[1(kd),3a(−/−),3b(−/−)]). Sequencing of 960 RRBS clones from Dnmt[1(kd),3a(−/−),3b(−/−)] cells generated 343 kb of non-redundant bisulfite sequence covering 66,212 cytosines in the genome. All but 38 cytosines had been converted to uracil, indicating a conversion rate of >99.9%. Of the remaining cytosines, 35 were found in CpG and 3 in CpT dinucleotides. Non-CpG methylation was >250-fold reduced compared with wild-type ES cells, consistent with a role for Dnmt3a and/or Dnmt3b in CpA and CpT methylation. Closer inspection revealed neither a consensus sequence around the methylated sites nor evidence for clustering of residual methylation in the genome. Our findings indicate random loss rather than specific maintenance of methylation in Dnmt[1(kd),3a(−/−),3b(−/−)] cells. Near-complete bisulfite conversion and largely unbiased representation of RRBS libraries suggest that random shotgun bisulfite sequencing can be scaled to a genome-wide approach.
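    The quoted conversion rate follows directly from the reported counts; a quick check using only numbers from the abstract:

        # Bisulfite conversion rate implied by the reported counts.
        total_cytosines = 66212   # cytosines covered by the RRBS sequence
        unconverted = 38          # cytosines not converted to uracil
        rate = (total_cytosines - unconverted) / total_cytosines
        print(f"conversion rate: {rate:.4%}")   # ~99.94%, i.e. >99.9% as stated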

    Is graphene on copper doped?

    Angle-resolved photoemission spectroscopy (ARPES) and X-ray photoemission spectroscopy have been used to characterise epitaxially ordered graphene grown on copper foil by low-pressure chemical vapour deposition. A short vacuum anneal to 200 °C allows observation of ordered low-energy electron diffraction patterns. High-quality Dirac cones are measured in ARPES with the Dirac point at the Fermi level (undoped graphene). Annealing above 300 °C produces n-type doping in the graphene, with up to a 350 meV shift in the Fermi level, and opens a band gap of around 100 meV.
    [Graphical abstract: Dirac cone dispersion for graphene on Cu foil after vacuum anneals (left: 200 °C, undoped; right: 500 °C, n-doped); centre: low-energy electron diffraction after a 200 °C anneal. Data from Antares (SOLEIL).]
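    For ideal linear Dirac bands, a Fermi-level shift translates into a carrier density via n = E_F^2 / (pi * hbar^2 * v_F^2), where the prefactor already accounts for graphene's four-fold spin and valley degeneracy. A minimal sketch using the commonly quoted graphene Fermi velocity, an assumption rather than a value measured in the paper:

        import math

        # Carrier density implied by a 350 meV Dirac-point shift, assuming
        # ideal linear bands and the textbook graphene Fermi velocity.
        hbar = 1.0545718e-34   # J*s
        eV = 1.602176634e-19   # J
        v_F = 1.0e6            # m/s, assumed Fermi velocity
        E_F = 0.35 * eV        # 350 meV shift reported above

        k_F = E_F / (hbar * v_F)            # Fermi wavevector (1/m)
        n = k_F**2 / math.pi                # carriers per m^2 (4-fold degeneracy)
        print(f"n ~ {n * 1e-4:.1e} cm^-2")  # ~9e12 cm^-2 for these inputs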

    Testing for a large local void by investigating the Near-Infrared Galaxy Luminosity Function

    Recent cosmological modeling efforts have shown that a local underdensity on scales of a few hundred Mpc (out to z ~ 0.1) could produce the apparent acceleration of the expansion of the universe observed via type Ia supernovae. Several studies of galaxy counts in the near-infrared (NIR) have found that the local universe appears under-dense by ~25-50% compared with regions a few hundred Mpc distant. Galaxy counts at low redshifts sample primarily L ~ L* galaxies. Thus, if the local universe is under-dense, then the normalization of the NIR galaxy luminosity function (LF) at z > 0.1 should be higher than that measured locally (z < 0.1). We use a highly complete (> 90%) spectroscopic sample of 1436 galaxies selected in the H-band to study the normalization of the NIR LF at 0.1 < z < 0.3 and address the question of whether or not we reside in a large local underdensity. We find that for the combination of our six fields, the product phi* L* at 0.1 < z < 0.3 is ~ 30% higher than that measured at lower redshifts. While our statistical errors in this measurement are at the ~10% level, we find the systematics due to cosmic variance may be larger still. We investigate the effects of cosmic variance on our measurement using the COSMOS cone mock catalogs from the Millennium simulation and recent empirical estimates. We find that our survey is subject to systematic uncertainties due to cosmic variance at the 15% level (1 sigma), representing an improvement by a factor of ~ 2 over previous studies in this redshift range. We conclude that observations cannot yet rule out the possibility that the local universe is under-dense at z < 0.1.
    Comment: Accepted for publication in ApJ
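    The normalization phi* L* measured here is, up to a factor of order unity, the luminosity density of a Schechter luminosity function. The following is standard Schechter-function algebra, included for context rather than quoted from the paper:

        \[
          \Phi(L)\,dL = \phi^* \left( \frac{L}{L^*} \right)^{\alpha} e^{-L/L^*} \frac{dL}{L^*},
          \qquad
          j = \int_0^\infty L\,\Phi(L)\,dL = \phi^* L^*\,\Gamma(\alpha + 2),
        \]

    so a ~30% excess in phi* L* at 0.1 < z < 0.3 corresponds directly to a ~30% higher luminosity density than is measured locally, which is the sense in which the local volume would be "under-dense".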

    A multifrequency study of giant radio sources - II. Spectral ageing analysis of the lobes of selected sources

    Multifrequency observations with the GMRT and the VLA are used to determine the spectral breaks in consecutive strips along the lobes of a sample of selected giant radio sources (GRSs) in order to estimate their spectral ages. The maximum spectral ages estimated for the detected radio emission in the lobes of our sources range from ~6 to 36 Myr, with a median value of ~20 Myr, using the classical equipartition fields. Using the magnetic field estimates from the Beck & Krause formalism, the spectral ages range from ~5 to 38 Myr, with a median value of ~22 Myr. These ages are significantly older than those of smaller sources. In all but one source (J1313+6937) the spectral age gradually increases with distance from the hotspot regions, confirming that acceleration of the particles mainly occurs in the hotspots. Most of the GRSs do not exhibit zero spectral ages in the hotspots, as is the case in earlier studies of smaller sources. This is likely to be largely due to contamination by more extended emission at relatively modest resolutions. The injection spectral indices range from ~0.55 to 0.88, with a median value of ~0.6. We discuss these values in the light of theoretical expectations, and show that the injection spectral index appears to be correlated with luminosity and/or redshift as well as with linear size.
    Comment: 12 pages, 13 figures, 9 tables. Accepted for publication in MNRAS
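    Spectral ages of this kind are conventionally derived from the break frequency nu_br of the synchrotron spectrum in each strip. A widely used form of the ageing relation, stated here for context rather than quoted from the paper, is:

        \[
          t = 50.3\, \frac{B^{1/2}}{B^2 + B_{\mathrm{iC}}^2} \left[ \nu_{\mathrm{br}} (1+z) \right]^{-1/2} \ \mathrm{Myr},
          \qquad
          B_{\mathrm{iC}} = 0.318\,(1+z)^2 \ \mathrm{nT},
        \]

    with the lobe magnetic field B and the equivalent inverse-Compton field B_iC in nT and nu_br in GHz; strips with higher break frequencies, typically those nearer the hotspots, are therefore younger.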