Physical measures to inhibit planktonic cyanobacteria
In a small lake, intermittent destratification was installed after several other physical and physico-chemical in-lake restoration measures (phosphorus immobilization, permanent destratification) had been tested without great success. If an aerobic sediment-water interface can be maintained, intermittent destratification removes cyanobacteria and prevents optimal development of other members of the photoautotrophic plankton. During growing seasons, increasing abundances of small-bodied herbivores (Bosmina) and Daphnia may also have accounted for the relatively low phytoplankton biomass. Intermittent destratification is a very fast-working in-lake measure and appears applicable even in relatively shallow lakes (< 15 m), in which permanent destratification seems risky.
Comparing advanced energy cycles and developing priorities for future R&D
This report lists and discusses the types of information that are necessary for making decisions about the allocation of R&D funds among various electric-power-related energy technologies. The discussion is divided into two parts: (1) the task of choosing among different technologies and (2) the task of guiding toward the most important specific projects within an individual technology. Choosing among alternative energy technologies requires assumptive information, assessment information, probabilistic information, and techniques for quantifying the overall desirability of each alternative. Guidance toward the most important projects requires information about the levels and uncertainties of certain performance measures and their importance relative to external thresholds or relative to the performance of competing technologies. Some simple examples are presented to illustrate the discussion. A bibliography of more than 200 important references in this field was compiled and is appended to this report.

Sponsored by U.S. Environmental Protection Agency, Contract #68-02-2146
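The report's notion of "quantifying the overall desirability of each alternative" can be sketched as a simple weighted-score decision matrix. This is only a generic illustration: the technologies, criteria, weights, and scores below are hypothetical and are not taken from the report.

```python
# Weighted-score decision matrix: the overall desirability of an
# alternative is the weighted sum of its criterion scores (0-10 scale,
# higher is better). All names, weights, and scores are hypothetical.
criteria_weights = {"cost": 0.4, "reliability": 0.35, "emissions": 0.25}

alternatives = {
    "combined_cycle": {"cost": 7, "reliability": 8, "emissions": 4},
    "fluidized_bed": {"cost": 6, "reliability": 7, "emissions": 6},
    "fuel_cell": {"cost": 3, "reliability": 6, "emissions": 9},
}

def desirability(scores, weights):
    # Sum over criteria of (criterion weight * criterion score).
    return sum(weights[c] * scores[c] for c in weights)

ranked = sorted(alternatives,
                key=lambda a: desirability(alternatives[a], criteria_weights),
                reverse=True)
print(ranked)   # most desirable alternative first
```

The report additionally calls for probabilistic information; in practice each score above would be a distribution rather than a point value.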
Robust techniques for developing empirical models of fluidized bed combustors
This report is designed to provide a review of the data analysis techniques that are most useful for fitting m-dimensional empirical surfaces to very large sets of data. One issue explored is the improvement of data (1) using estimates of the relative size of measurement errors and (2) using known or assumed theoretical relationships. An apparently new concept, named robust weighting, is developed; it facilitates the incorporation of a priori knowledge, based upon the values of input and response variables, about the relative quality of different experiments. This is a particularly useful technique for obtaining statistical inferences from the most relevant portions of the data base, such as concentrating on important ranges of variables or extrapolating off the leading edge of the frontier of knowledge for an emerging technology. The robust weightings are also useful for enforcing a priori known asymptotic behaviors, as well as for fighting biases due to the sheer size of conflicting data clusters and for formulating separate models for conflicting clusters. Another new development has evolved from the two very different objectives of the empirical modeling in this project. The first objective is the usual requirement for the best possible predictive mechanism, and standard techniques are useful here, with their emphasis on model building, specifically the successive separation of trend techniques. The second objective involves the pursuit of high-dimensional, yet simple, models that could provide insight into analytic gaps and scientific theories that might govern the situation. For this second objective a new stepwise process was developed for rapidly sweeping the data base and producing crude quantitative measures of the next (or the first) most important m-tuple relationship to incorporate into the empirical model. These quantitative guidelines have been named fit improvement factors. The standard statistical techniques reviewed include graphical displays, resistant models, smoothing processes, nonlinear and nonparametric regressions, stopping rules, and spline functions for model hypothesis; robust estimators and data splitting are reviewed as polishing and validating procedures. The concepts of setting, depth, and scope of the validation process are described, along with an array of about sixty techniques for validating models. Actual data from the recent literature on the performance of fluidized bed combustors are used as an example of some of the methods presented. Also included is a bibliography of more than 150 references on empirical model development and validation.

Sponsored by Contract no. E49-18-2295 from the U.S. Dept. of Energy
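The robust-weighting idea (down-weighting experiments that a priori knowledge marks as less trustworthy) can be approximated by ordinary weighted least squares. A minimal sketch, with invented data and invented quality weights:

```python
import numpy as np

# Weighted least squares for the line y = b0 + b1*x: scale each row of
# the design matrix and each response by sqrt(w_i), then solve the
# resulting ordinary least-squares problem.
def weighted_fit(x, y, w):
    X = np.column_stack([np.ones_like(x), x])
    sw = np.sqrt(w)
    beta, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
    return beta  # [intercept, slope]

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([0.1, 1.0, 2.1, 9.0])            # last point is an outlier
w_equal = np.ones(4)                          # all experiments trusted equally
w_robust = np.array([1.0, 1.0, 1.0, 0.01])    # a priori: last experiment is poor

beta_equal = weighted_fit(x, y, w_equal)
beta_robust = weighted_fit(x, y, w_robust)
print(beta_equal[1], beta_robust[1])          # robust slope stays near 1
```

The report's robust weightings go further (weights depending on the values of the input and response variables themselves), but the mechanics of the fit are the same.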
Kondo Behavior of U in CaB6
Substituting U for Ca in semiconducting CaB6 at the few at.% level induces metallic behaviour and Kondo-type phenomena at low temperatures, a rather unusual feature for U impurities in metallic hosts. For U-substituted CaB6, the resistance minimum occurs at 17 K. The subsequent characteristic logarithmic increase of the resistivity with decreasing temperature merges into the expected dependence below 0.8 K. Data of the low-temperature specific heat and the magnetization are analyzed by employing a simple resonance-level model. Analogous measurements on LaB6 with a small amount of U revealed no traces of Kondo behavior above 0.4 K.

Comment: 4 pages, 4 figures, submitted for publication to Europhysics Letters
Towards a killer app for the Semantic Web
Killer apps are highly transformative technologies that create new markets and widespread patterns of behaviour. IT generally, and the Web in particular, have benefited from killer apps that create new networks of users and increase their value. The Semantic Web community, on the other hand, is still awaiting a killer app that proves the superiority of its technologies. Certain features distinguish killer apps from other, ordinary applications. This paper examines those features in the context of the Semantic Web, in the hope that a better understanding of the characteristics of killer apps might encourage their consideration when developing Semantic Web applications.
Evaluation of health effects of air pollution in the Chestnut Ridge area: preliminary analysis
This project involves several tasks designed to take advantage of (1) a very extensive air pollution monitoring system that is operating in the Chestnut Ridge region of western Pennsylvania and (2) the very well developed analytic dispersion models that have been previously fine-tuned to this particular area. The major task in this project is to establish, through several distinct epidemiologic approaches, health data to be used to test hypotheses about relations of air pollution exposures to morbidity and mortality rates in this region. Because the air quality monitoring network involves no expense to this contract, this project affords a very cost-effective opportunity for state-of-the-art techniques to be used in both costly areas of air pollution and health-effects data collection. The closely spaced network of monitors, plus the dispersion modeling capabilities, allows for the investigation of health impacts of various pollutant gradients in neighboring geographic areas, thus minimizing the confounding effects of social, ethnic, and economic factors. The pollutants that are monitored in this network include total gaseous sulfur, sulfates, total suspended particulates, NOx, NO, ozone/oxidants, and coefficient of haze. In addition to enabling the simulation of exposure profiles between monitors, the air quality modeling, along with extensive source and background inventories, will allow for upgrading the quality of the monitored data as well as simulating the exposure levels for about 25 additional air pollutants. Another important goal of this project is to collect and test the many available models for associating health effects with air pollution, to determine their predictive validity and their usefulness in the choice and siting of future energy facilities.
Long-Term Stability of Polymer-Coated Surface Transverse Wave Sensors for the Detection of Organic Solvent Vapors
Arrays with polymer-coated acoustic sensors, such as surface acoustic wave (SAW) and surface transverse wave (STW) sensors, have successfully been applied for a variety of gas sensing applications. However, the stability of the sensors' polymer coatings over a longer period of use has hardly been investigated. We used an array of eight STW resonator sensors coated with different polymers. This sensor array was used at semi-annual intervals for a three-year period to detect organic solvent vapors of three different chemical classes: a halogenated hydrocarbon (chloroform), an aliphatic hydrocarbon (octane), and an aromatic hydrocarbon (xylene). The sensor signals were evaluated with regard to absolute signal shifts and normalized signal shifts leading to signal patterns characteristic of the respective solvent vapors. No significant time-related changes of sensor signals or signal patterns were observed, i.e., the polymer coatings kept their performance during the course of the study. Therefore, the polymer-coated STW sensors proved to be robust devices which can be used for detecting organic solvent vapors both qualitatively and quantitatively for several years.
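The normalized signal shifts mentioned in the abstract can be sketched as follows: dividing each sensor's shift by the summed absolute shifts of the array yields a pattern that is, ideally, independent of vapor concentration and characteristic of the vapor. The shift values below are invented for illustration.

```python
import numpy as np

# Pattern normalization for an eight-sensor array: divide each sensor's
# signal shift by the summed absolute shifts, so the resulting pattern
# reflects the vapor's identity rather than its concentration.
def pattern(shifts):
    shifts = np.asarray(shifts, dtype=float)
    return shifts / np.abs(shifts).sum()

low_conc = pattern([20, 5, 12, 8, 3, 15, 9, 6])
high_conc = pattern([40, 10, 24, 16, 6, 30, 18, 12])  # same vapor, twice the concentration
print(np.allclose(low_conc, high_conc))   # the normalized patterns coincide
```

Comparing such patterns across the semi-annual measurement campaigns is one way to check whether the coatings' selectivity has drifted.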
A meta-analysis of state-of-the-art electoral prediction from Twitter data
Electoral prediction from Twitter data is an appealing research topic. It seems relatively straightforward, and the prevailing view is overly optimistic. This is problematic because while simple approaches are assumed to be good enough, core problems are not addressed. Thus, this paper aims to (1) provide a balanced and critical review of the state of the art; (2) cast light on the presumed predictive power of Twitter data; and (3) depict a roadmap to push the field forward. Hence, a scheme to characterize Twitter prediction methods is proposed. It covers every aspect from data collection to performance evaluation, through data processing and vote inference. Using that scheme, prior research is analyzed and organized to explain the main approaches taken to date as well as their weaknesses. This is the first meta-analysis of the whole body of research regarding electoral prediction from Twitter data. It reveals that the presumed predictive power of Twitter data has been rather exaggerated: although social media may provide a glimpse of electoral outcomes, current research does not provide strong evidence that it can replace traditional polls. Finally, future lines of research, along with a set of requirements they must fulfill, are provided.

Comment: 19 pages, 3 tables
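The "vote inference" step of the proposed scheme can be illustrated by the naive mention-counting approach common in the literature this paper critiques: each candidate's predicted vote share is simply their fraction of candidate mentions. The tweets and candidate names below are invented.

```python
from collections import Counter

# Naive mention-counting vote inference: predicted vote share of each
# candidate is the fraction of candidate mentions attributed to them.
# Tweets and candidates are invented for illustration.
tweets = [
    "I like alice's platform", "alice all the way", "bob for mayor",
    "alice or bob? probably alice", "bob has my vote",
]
candidates = ["alice", "bob"]

counts = Counter()
for t in tweets:
    for c in candidates:
        if c in t.lower():
            counts[c] += 1          # count each tweet at most once per candidate

total = sum(counts.values())
shares = {c: counts[c] / total for c in candidates}
print(shares)   # {'alice': 0.5, 'bob': 0.5}
```

The sketch makes the method's weaknesses concrete: mentions are not votes, sentiment and sarcasm are ignored, and the tweeting population is not the voting population, all points the meta-analysis raises.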
Epidemic processes in complex networks
In recent years the research community has accumulated overwhelming evidence for the emergence of complex and heterogeneous connectivity patterns in a wide range of biological and sociotechnical systems. The complex properties of real-world networks have a profound impact on the behavior of equilibrium and nonequilibrium phenomena occurring in various systems, and the study of epidemic spreading is central to our understanding of the unfolding of dynamical processes in complex networks. The theoretical analysis of epidemic spreading in heterogeneous networks requires the development of novel analytical frameworks, and it has produced results of conceptual and practical relevance. A coherent and comprehensive review of the vast research activity concerning epidemic processes is presented, detailing the successful theoretical approaches as well as making their limits and assumptions clear. Physicists, mathematicians, epidemiologists, and computer and social scientists share a common interest in studying epidemic spreading and rely on similar models for the description of the diffusion of pathogens, knowledge, and innovation. For this reason, while focusing on the main results and the paradigmatic models in infectious disease modeling, the major results concerning generalized social contagion processes are also presented. Finally, the research activity at the forefront of the study of epidemic spreading in coevolving, coupled, and time-varying networks is reported.

Comment: 62 pages, 15 figures, final version
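The simplest baseline among the models this review builds on is the homogeneous mean-field SIR model. A minimal sketch, integrated by forward Euler with illustrative parameters (beta/mu = R0 = 2.5, so a macroscopic outbreak is expected):

```python
# Homogeneous mean-field SIR model, a standard baseline for epidemic
# spreading: dS/dt = -beta*S*I, dI/dt = beta*S*I - mu*I, dR/dt = mu*I.
# S, I, R are population fractions; parameters are illustrative.
def sir(beta, mu, i0, dt=0.01, t_max=100.0):
    s, i, r = 1.0 - i0, i0, 0.0
    for _ in range(int(t_max / dt)):
        new_inf = beta * s * i * dt   # S -> I transitions this step
        new_rec = mu * i * dt         # I -> R transitions this step
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
    return s, i, r

s, i, r = sir(beta=0.5, mu=0.2, i0=0.01)
print(s, i, r)   # outbreak dies out leaving a large recovered fraction
```

The heterogeneous-network frameworks the review surveys replace the single pair (S, I) with degree-resolved compartments, which is precisely where the complex connectivity patterns enter.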
ROP: dumpster diving in RNA-sequencing to find the source of 1 trillion reads across diverse adult human tissues.
High-throughput RNA-sequencing (RNA-seq) technologies provide an unprecedented opportunity to explore the individual transcriptome. Unmapped reads are a large and often overlooked output of standard RNA-seq analyses. Here, we present the Read Origin Protocol (ROP), a tool for discovering the source of all reads originating from complex RNA molecules. We apply ROP to samples across 2630 individuals from 54 diverse human tissues. Our approach can account for 99.9% of 1 trillion reads of various read lengths. Additionally, we use ROP to investigate the functional mechanisms underlying connections between the immune system, microbiome, and disease. ROP is freely available at https://github.com/smangul1/rop/wiki