Low-dose alum application trialled as a management tool for internal nutrient loads in Lake Okaro, New Zealand
Aluminium sulfate (alum) was applied to Lake Okaro, a eutrophic New Zealand lake with recurrent cyanobacterial blooms, to evaluate its suitability for reducing trophic status and bloom frequency. The dose yielded 0.6 g aluminium m^-3 in the epilimnion. Before dosing, pH exceeded 8 in epilimnetic waters but was optimal for flocculation (6-8) below 4 m depth. After dosing, there was no significant change in water clarity, hypolimnetic pH decreased to 5.5, and soluble aluminium exceeded recommended guidelines for protection of freshwater organisms. Epilimnetic phosphate concentrations decreased from 40 to 5 mg m^-3 and total nitrogen (TN):total phosphorus (TP) mass ratios increased from 7:1 to 37:1. The dominant phytoplankton species changed from Anabaena spp. before dosing, to Ceratium hirudinella, then Staurastrum sp. after dosing. Detection of effectiveness of dosing may have been limited by sampling duration and design, as well as the low alum dose. The decrease in hypolimnetic pH and epilimnetic TP, and increase in Al^3+ and chlorophyll a, are attributed to the low-alkalinity lake water and coincidence of alum dosing with a cyanobacterial bloom and high pH.
Linear Redshift Distortions and Power in the PSCz Survey
We present a state-of-the-art linear redshift distortion analysis of the
recently published IRAS Point Source Catalog Redshift Survey (PSCz). The
procedure involves linear compression into 4096 Karhunen-Loeve modes culled
from a potential pool of about 3 x 10^5 modes, followed by quadratic
compression into three separate power spectra, the galaxy-galaxy,
galaxy-velocity, and velocity-velocity power spectra. Least squares fitting to
the decorrelated power spectra yields a linear redshift distortion parameter
beta = Omega_m^0.6/b = 0.41 (+0.13, -0.12).
Comment: Minor changes to agree with accepted version. Slight changes to power spectrum, including one more point added at large scales, from binning points formerly discarded as too noisy. 5 pages, including 4 embedded PostScript figures. Accepted for publication in MNRAS Letters (pink pages). Power spectrum data available at http://casa.colorado.edu/~ajsh/pscz
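As a rough worked example (not part of the paper's analysis; the bias value b = 1 is an illustrative assumption), the quoted distortion relation beta = Omega_m^0.6/b can be inverted to see what matter density such a measurement implies:

```python
# Invert the linear redshift-distortion relation beta = Omega_m^0.6 / b
# for Omega_m, given a measured beta and an assumed galaxy bias b.
def omega_m(beta: float, b: float = 1.0) -> float:
    """Matter density implied by beta = Omega_m^0.6 / b."""
    return (beta * b) ** (1.0 / 0.6)

# Central value and one-sigma bounds from the PSCz fit, for b = 1 (assumed).
for beta in (0.41 - 0.12, 0.41, 0.41 + 0.13):
    print(f"beta = {beta:.2f} -> Omega_m = {omega_m(beta):.2f}")
    # -> roughly 0.13, 0.23, 0.36
```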
Matching the Scales of Planning and Environmental Risk: an Evaluation of Community Wildfire Protection Plans in the Western US
Theory predicts that effective environmental governance requires that the scales of management account for the scales of environmental processes. A good example is community wildfire protection planning. Plan boundaries that are too narrowly defined may miss sources of wildfire risk originating at larger geographic scales, whereas boundaries that are too broadly defined dilute resources. Although the concept of scale (mis)matches is widely discussed in the literature on risk mitigation as well as environmental governance more generally, the concept has rarely been rigorously quantified. We introduce methods to address this limitation, and we apply our approach to assess scale matching among Community Wildfire Protection Plans (CWPPs) in the western US. Our approach compares two metrics: (1) the proportion of risk sources encompassed by planning jurisdictions (sensitivity) and (2) the proportion of area in planning jurisdictions in which risk can originate (precision). Using data from 852 CWPPs and a published library of 54 million simulated wildfires, we demonstrate a trade-off between sensitivity and precision. Our analysis reveals that spatial scale match, the product of sensitivity and precision, has an n-shaped (inverted-U) relationship with jurisdiction size and is maximal at approximately 500 km^2. Bayesian multilevel models further suggest that functional scale match, via neighboring, nested, and overlapping planning jurisdictions, may compensate for low sensitivity. This study provides a rare instance of a quantitative framework to measure scale match in environmental planning and has broad implications for risk mitigation as well as other environmental governance settings.
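A minimal sketch of how the two metrics and their product could be computed; the geometry-library choice (shapely) and all names here are illustrative assumptions, not the authors' code:

```python
# Illustrative computation of the two scale-match metrics described above.
# Sketch only: inputs and names are hypothetical, not the paper's pipeline.
from shapely.geometry import Point, Polygon

def sensitivity(jurisdiction: Polygon, ignition_points: list[Point]) -> float:
    """Proportion of risk sources (simulated ignitions threatening the
    community) that fall inside the planning jurisdiction."""
    inside = sum(jurisdiction.contains(p) for p in ignition_points)
    return inside / len(ignition_points)

def precision(jurisdiction: Polygon, risk_source_area: Polygon) -> float:
    """Proportion of the jurisdiction's area from which risk can originate."""
    return jurisdiction.intersection(risk_source_area).area / jurisdiction.area

def spatial_scale_match(jurisdiction: Polygon,
                        ignition_points: list[Point],
                        risk_source_area: Polygon) -> float:
    """Spatial scale match: the product of sensitivity and precision."""
    return (sensitivity(jurisdiction, ignition_points)
            * precision(jurisdiction, risk_source_area))
```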
The FORMO Program for Improving Text Reading in Primary School Students, Institución Educativa San Juan, San Juan de Miraflores, 2016
The aim of this study was to determine the effect of implementing the FORMO program ("Fortaleciendo la Memoria Operativa", i.e. strengthening working memory) on text reading in primary school students at Institución Educativa San Juan, San Juan de Miraflores, 2016. The population comprised 139 boys and girls in the third cycle of primary education, third grade, morning shift; the intentional non-probabilistic sample comprised 58 students, to whom the variable FORMO program: Fortaleciendo la Memoria Operativa was applied.
The research method was hypothetico-deductive. The study used an experimental design at the explanatory level, of the quasi-experimental class, and collected information over a specific period by applying the instrument: the Reading Comprehension Test of Progressive Linguistic Complexity (CLP), Level 3, Form A, whose results are presented graphically and textually.
The study concludes that there is no significant evidence to affirm that the application of the FORMO program significantly improves text reading in third-grade primary students at Institución Educativa San Juan, San Juan de Miraflores, 2016.
OASIS: A Large-Scale Dataset for Single Image 3D in the Wild
Single-view 3D is the task of recovering 3D properties such as depth and
surface normals from a single image. We hypothesize that a major obstacle to
single-image 3D is data. We address this issue by presenting Open Annotations
of Single Image Surfaces (OASIS), a dataset for single-image 3D in the wild
consisting of annotations of detailed 3D geometry for 140,000 images. We train
and evaluate leading models on a variety of single-image 3D tasks. We expect
OASIS to be a useful resource for 3D vision research. Project site:
https://pvl.cs.princeton.edu/OASIS.
Comment: Accepted to CVPR 2020
Methods for Rapidly Processing Angular Masks of Next-Generation Galaxy Surveys
As galaxy surveys become larger and more complex, keeping track of the
completeness, magnitude limit, and other survey parameters as a function of
direction on the sky becomes an increasingly challenging computational task.
For example, typical angular masks of the Sloan Digital Sky Survey contain
about N=300,000 distinct spherical polygons. Managing masks with such large
numbers of polygons becomes intractably slow, particularly for tasks that run
in time O(N^2) with a naive algorithm, such as finding which polygons overlap
each other. Here we present a "divide-and-conquer" solution to this challenge:
we first split the angular mask into predefined regions called "pixels," such
that each polygon is in only one pixel, and then perform further computations,
such as checking for overlap, on the polygons within each pixel separately.
This reduces O(N^2) tasks to O(N), and also reduces the important task of
determining in which polygon(s) a point on the sky lies from O(N) to O(1),
resulting in significant computational speedup. Additionally, we present a
method to efficiently convert any angular mask to and from the popular HEALPix
format. This method can be generically applied to convert to and from any
desired spherical pixelization. We have implemented these techniques in a new
version of the mangle software package, which is freely available at
http://space.mit.edu/home/tegmark/mangle/, along with complete documentation
and example applications. These new methods should prove quite useful to the
astronomical community, and since mangle is a generic tool for managing angular
masks on a sphere, it has the potential to benefit terrestrial mapmaking
applications as well.
Comment: New version 2.1 of the mangle software now available at http://space.mit.edu/home/tegmark/mangle/ - includes galaxy survey masks and galaxy lists for the latest SDSS data release and the 2dFGRS final data release as well as extensive documentation and examples. 14 pages, 9 figures, matches version accepted by MNRAS
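A toy sketch of the divide-and-conquer pixelization idea described in this abstract, under simplifying assumptions (a flat RA/Dec grid, box-shaped polygons, and each polygon confined to one pixel; mangle itself uses proper spherical pixelization schemes):

```python
# Toy sketch of divide-and-conquer pixelization for angular masks.
# Illustrative only; not mangle's actual implementation.
from collections import defaultdict
from dataclasses import dataclass

NSIDE = 64  # grid resolution per axis (an assumption, not mangle's default)

def pixel_of(ra: float, dec: float) -> int:
    """Map a sky position (degrees) to a pixel on a simple RA/Dec grid."""
    i = int((ra % 360.0) / 360.0 * NSIDE) % NSIDE
    j = min(int((dec + 90.0) / 180.0 * NSIDE), NSIDE - 1)
    return j * NSIDE + i

@dataclass
class SkyPolygon:
    """Stand-in for a mask polygon: a small RA/Dec bounding box."""
    ra_min: float
    ra_max: float
    dec_min: float
    dec_max: float

    def contains(self, ra: float, dec: float) -> bool:
        return (self.ra_min <= ra <= self.ra_max
                and self.dec_min <= dec <= self.dec_max)

def build_index(polygons: list[SkyPolygon]) -> dict[int, list[SkyPolygon]]:
    """Bucket polygons by pixel in one O(N) pass. Tasks such as overlap
    checking then run within each pixel's short list, not across all N."""
    index = defaultdict(list)
    for p in polygons:  # assume each polygon lies within one pixel
        index[pixel_of(0.5 * (p.ra_min + p.ra_max),
                       0.5 * (p.dec_min + p.dec_max))].append(p)
    return index

def polygons_containing(index, ra: float, dec: float) -> list[SkyPolygon]:
    """O(1)-average point lookup: hash to one pixel, test only its polygons."""
    return [p for p in index[pixel_of(ra, dec)] if p.contains(ra, dec)]
```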
Karhunen-Loeve eigenvalue problems in cosmology: how should we tackle large data sets?
Since cosmology is no longer "the data-starved science", the problem of how
to best analyze large data sets has recently received considerable attention,
and Karhunen-Loeve eigenvalue methods have been applied to both galaxy redshift
surveys and Cosmic Microwave Background (CMB) maps. We present a comprehensive
discussion of methods for estimating cosmological parameters from large data
sets, which includes the previously published techniques as special cases. We
show that both the problem of estimating several parameters jointly and the
problem of not knowing the parameters a priori can be readily solved by adding
an extra singular value decomposition step.
It has recently been argued that the information content in a sky map from a
next generation CMB satellite is sufficient to measure key cosmological
parameters (h, Omega, Lambda, etc) to an accuracy of a few percent or better -
in principle. In practice, the data set is so large that both a brute force
likelihood analysis and a direct expansion in signal-to-noise eigenmodes will
be computationally unfeasible. We argue that it is likely that a Karhunen-Loeve
approach can nonetheless measure the parameters with close to maximal accuracy,
if preceded by an appropriate form of quadratic "pre-compression".
We also discuss practical issues regarding parameter estimation from present
and future galaxy redshift surveys, and illustrate this with a generalized
eigenmode analysis of the IRAS 1.2 Jy survey optimized for measuring
beta = Omega^{0.6}/b using redshift space distortions.
Comment: 15 pages, with 5 figures included. Substantially expanded with worked COBE examples for e.g. the multiparameter case. Available from http://www.sns.ias.edu/~max/karhunen.html (faster from the US), from http://www.mpa-garching.mpg.de/~max/karhunen.html (faster from Europe) or from [email protected]
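A minimal numerical sketch of Karhunen-Loeve (signal-to-noise eigenmode) compression as discussed in this abstract, with random matrices standing in for a real survey's covariances; it illustrates the shape of the method, not the paper's implementation:

```python
# Karhunen-Loeve compression sketch: solve S b = lambda N b for signal
# covariance S and noise covariance N, keep the highest-S/N modes, and
# project the data onto them. Toy covariances only (assumption).
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n = 200                                   # raw data dimension (toy size)
A = rng.standard_normal((n, n))
S = A @ A.T / n                           # toy signal covariance
N = np.diag(1.0 + rng.random(n))          # toy (diagonal) noise covariance

# Generalized eigenproblem S b = lambda N b; the eigenvalues are the
# signal-to-noise ratios of the corresponding modes.
lam, B = eigh(S, N)
order = np.argsort(lam)[::-1]             # sort modes by S/N, descending
B = B[:, order]

k = 20                                    # keep the k highest-S/N modes
x = rng.multivariate_normal(np.zeros(n), S + N)  # one toy data vector
y = B[:, :k].T @ x                        # compressed data: k numbers, not n
print(y.shape)                            # -> (20,)
```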