Shale oil : potential economies of large-scale production, preliminary phase
Producing shale oil on a large scale is one of the possible
alternatives for reducing dependence of the United States on imported
petroleum. Industry is not producing shale oil on a commercial scale now
because costs are too high even though industry dissatisfaction is most
frequently expressed about "non-economic" barriers: innumerable permits,
changing environmental regulations, lease limitations, water rights
conflicts, legal challenges, and so on. The overall purpose of this
study is to estimate whether improved technology might significantly
reduce unit costs for production of shale oil in a planned large-scale
industry as contrasted to the case usually contemplated: a small
industry evolving slowly on a project-by-project basis.
In this preliminary phase of the study, we collected published data
on the costs of present shale oil technology and adjusted them to common
conditions; these data were assembled to help identify the best targets
for cost reduction through improved large-scale technology. They show
that the total cost of producing upgraded shale oil (i.e., shale oil
acceptable as a feed to a petroleum refinery) by surface retorting ranges
from about … to $28/barrel in late-1978 dollars, with a 20% chance that
costs would fall below that range and a 20% chance that they would exceed it. The
probability distribution reflects our assumptions about ranges of shale
richness, process performance, rate of return, and other factors that
seem likely in a total industry portfolio of projects.
About 40% of the total median cost is attributable to retorting, 20%
to upgrading, and the remaining 40% to resource acquisition, mining,
crushing, and spent shale disposal and revegetation. Capital charges account for about 70% of the median total cost and operating costs for
the other 30%.
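The shape of this estimate is easy to illustrate. The sketch below is a toy Monte Carlo, not the study's model: the lognormal form and the assumed 30% (1-sigma) spread per component stand in for the study's ranges of shale richness, process performance, and rate of return, anchored only to a total of roughly $28/barrel and the component shares quoted above.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Median component costs (late-1978 $/bbl), scaled from a ~$28/bbl total:
# 40% retorting, 20% upgrading, 40% mining, crushing, disposal, and the rest.
medians = {"retorting": 0.40 * 28, "upgrading": 0.20 * 28, "other": 0.40 * 28}

# Assumed 30% (1-sigma) lognormal spread per component: a stand-in for
# the ranges of shale richness, process performance, and rate of return.
sigma = 0.30
total = sum(m * rng.lognormal(0.0, sigma, N) for m in medians.values())

lo, med, hi = np.percentile(total, [20, 50, 80])
print(f"median ${med:.1f}/bbl; 20th-80th percentiles ${lo:.1f}-${hi:.1f}/bbl")
```

The printed 20th and 80th percentiles play the same role as the study's "20% chance lower, 20% chance higher" band.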
There is a reasonable chance that modified in-situ processes (like
Occidental's) may be able to produce shale oil more cheaply than surface
retorting, but no reliable cost data have been published; in 1978, DOE
estimated a saving of roughly $5/B for in-situ.
Because the total costs of shale oil are spread over many steps in
the production process, improvements in most or all of those steps are
required if we seek a significant reduction in total cost. A June 1979
workshop of industry experts was held to help us identify possible
cost-reduction technologies. Examples of the improved large-scale
technologies proposed (for further evaluation) to the workshop were:
- Instead of hydrotreating raw shale oil to make syncrude capable of
being refined conventionally, rebalance all of a refinery's
processes (or develop new catalysts/processes less sensitive to
feed nitrogen) to accommodate shale oil feed -- a change analogous
to a shift from sweet crude to sour crude.
- Instead of refining at or near the retort site, use heated
pipelines to move raw shale oil to existing major refining areas.
- Instead of operating individual mines, open-pit mine all or much
of the Piceance Creek Basin.
- Instead of building individual retorts, develop new methods for
mass production of hundreds of retorts.
Entanglement Enhanced Multiplayer Quantum Games
We investigate the 3-player quantum Prisoner's Dilemma with a certain
strategic space and find a particular Nash equilibrium that removes the
original dilemma. Based on this equilibrium, we show that the game is
enhanced by the entanglement of its initial state. Comment: 9 pages, 3 figures
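For context, the sketch below works through the standard two-player Eisert-Wilkens-Lewenstein (EWL) protocol on which such multiplayer games build. The three-player game and strategic space of this paper differ in detail, so this is an illustration of the mechanism (entanglement removing the dilemma), not the paper's own calculation; the payoff values are the textbook Prisoner's Dilemma numbers.

```python
import numpy as np

# Two-player EWL quantum Prisoner's Dilemma with entangling gate J.
gamma = np.pi / 2                                   # maximal entanglement
I2 = np.eye(2)
sy = np.array([[0, -1j], [1j, 0]])
J = np.cos(gamma/2) * np.kron(I2, I2) + 1j * np.sin(gamma/2) * np.kron(sy, sy)

def U(theta, phi):
    """EWL two-parameter strategy."""
    return np.array([[np.exp(1j*phi) * np.cos(theta/2), np.sin(theta/2)],
                     [-np.sin(theta/2), np.exp(-1j*phi) * np.cos(theta/2)]])

C, D, Q = U(0, 0), U(np.pi, 0), U(0, np.pi/2)       # cooperate, defect, "quantum"

pay_A = np.array([3, 0, 5, 1])                      # A's payoff on |CC>,|CD>,|DC>,|DD>
pay_B = pay_A[[0, 2, 1, 3]]                         # B's payoff by symmetry

def payoffs(UA, UB):
    psi0 = np.zeros(4, dtype=complex); psi0[0] = 1  # game starts in |CC>
    p = np.abs(J.conj().T @ np.kron(UA, UB) @ J @ psi0) ** 2
    return float(p @ pay_A), float(p @ pay_B)

print("D vs D:", payoffs(D, D))   # (1, 1): the classical dilemma outcome
print("Q vs Q:", payoffs(Q, Q))   # (3, 3): the dilemma is removed
print("D vs Q:", payoffs(D, Q))   # (0, 5): defecting against Q no longer pays
```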
Multi-Player and Multi-Choice Quantum Game
We investigate a multi-player and multi-choice quantum game. We start from
the two-player, two-choice game, whose result is better than that of its
classical version, and then extend it to N-player and N-choice cases. In the
quantum domain, we provide a strategy with which players can always avoid the
worst outcome. Also, by changing the value of the parameter of the initial
state, the probabilities for players to obtain the best payoff become much
higher than in the classical version. Comment: 4 pages, 1 figure
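The abstract's point that the initial-state parameter drives the quantum advantage can be illustrated in the same two-player EWL setting (a stand-in here; the paper's own N-player, N-choice construction differs): as the entangling parameter of the initial state grows, the payoff for defecting against the "quantum" strategy Q collapses, which is what lets players steer away from the worst outcomes.

```python
import numpy as np

# Payoff for defecting against Q as a function of the initial-state
# entangling parameter gamma (two-player EWL illustration): the analytic
# value is 5*cos(gamma)**2, falling from 5 (classical) to 0 (maximal).
I2 = np.eye(2)
sy = np.array([[0, -1j], [1j, 0]])
D = np.array([[0, 1], [-1, 0]])                 # defect
Q = np.array([[1j, 0], [0, -1j]])               # "quantum" strategy
pay_A = np.array([3, 0, 5, 1])                  # payoffs on |CC>,|CD>,|DC>,|DD>

for gamma in np.linspace(0, np.pi/2, 5):
    J = np.cos(gamma/2)*np.kron(I2, I2) + 1j*np.sin(gamma/2)*np.kron(sy, sy)
    psi0 = np.zeros(4, dtype=complex); psi0[0] = 1
    p = np.abs(J.conj().T @ np.kron(D, Q) @ J @ psi0) ** 2
    print(f"gamma = {gamma:.2f}: defector's payoff = {p @ pay_A:.2f}")
```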
The relationship between education and food consumption in the 1995 Australian national nutrition survey
Objective: To assess the relationship between education and the intake of a variety of individual foods, as well as groups of foods, for Australian men and women in different age groups. Design: Cross-sectional national survey of free-living men and women. Subjects: A sample of 2501 men and 2739 women aged 18 years and over who completed the National Nutrition Survey (NNS) 1995. Methods: Information about the frequency of consumption of 88 food items was obtained using a food-frequency questionnaire in a nation-wide nutrition survey. Irregular and regular consumers of foods were identified according to whether they consumed individual foods less than or more than once per month. The relationship between single foods and an index of education (no post-school qualifications, vocational, university) was analysed via contingency table chi-square statistics for men and women. Food group variety scores were derived by assigning individual foods to conventional food group taxonomies, and then summing the dichotomised intake scores for individual foods within each food group. Two-way analyses of variance (education by age groups) were performed on food variety scores for men and women, separately. Results: While university-educated men and women consumed many individual foods more regularly than less-educated people, they were less likely to be regular consumers of several meat products. The relationship between education and food consumption was less apparent when individual food scores were aggregated into food group scores. University-educated men and women exhibited higher scores on total food group variety than the other educational groups. Conclusions: Higher education is associated with the regular consumption of a wider variety of foods. Aggregation of individual food consumption indices into food variety scores may mask the apparent effects of educational background on food consumption.
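A minimal sketch of this pipeline on synthetic data (the column names and frequencies are hypothetical; the NNS 1995 variables are not reproduced here): dichotomise each food-frequency item at once per month, test it against education with a contingency-table chi-square, then sum the dichotomised items within a food group into a variety score.

```python
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency

rng = np.random.default_rng(1)
n = 300
df = pd.DataFrame({
    "education": rng.choice(["no post-school", "vocational", "university"], n),
    "beef_freq": rng.poisson(3, n),   # hypothetical times eaten per month
    "lamb_freq": rng.poisson(1, n),
})

# Regular consumer: eats the item at least once per month.
for item in ["beef_freq", "lamb_freq"]:
    df[item + "_reg"] = (df[item] >= 1).astype(int)

# Education-by-consumption contingency table and chi-square test, per food.
table = pd.crosstab(df["education"], df["beef_freq_reg"])
chi2, p, dof, expected = chi2_contingency(table)
print(f"beef: chi2 = {chi2:.2f}, p = {p:.3f}")

# Food-group variety score: number of foods in the group eaten regularly.
df["meat_variety"] = df[["beef_freq_reg", "lamb_freq_reg"]].sum(axis=1)
print(df.groupby("education")["meat_variety"].mean())
```

The study's final step, a two-way education-by-age ANOVA on the variety scores, would take the same dataframe as input.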
Photometric redshifts and quasar probabilities from a single, data-driven generative model
We describe a technique for simultaneously classifying and estimating the
redshift of quasars. It can separate quasars from stars in arbitrary redshift
ranges, estimate full posterior distribution functions for the redshift, and
naturally incorporate flux uncertainties, missing data, and multi-wavelength
photometry. We build models of quasars in flux-redshift space by applying the
extreme deconvolution technique to estimate the underlying density. By
integrating this density over redshift one can obtain quasar flux-densities in
different redshift ranges. This approach allows for efficient, consistent, and
fast classification and photometric redshift estimation. This is achieved by
combining the speed obtained by choosing simple analytical forms as the basis
of our density model with the flexibility of non-parametric models through the
use of many simple components with many parameters. We show that this technique
is competitive with the best photometric quasar classification
techniques---which are limited to fixed, broad redshift ranges and high
signal-to-noise ratio data---and with the best photometric redshift techniques
when applied to broadband optical data. We demonstrate that the inclusion of UV
and NIR data significantly improves photometric quasar--star separation and
essentially resolves all of the redshift degeneracies for quasars inherent to
the ugriz filter system, even when included data have a low signal-to-noise
ratio. For quasars spectroscopically confirmed by the SDSS, 84 and 97 percent of
the objects with GALEX UV and UKIDSS NIR data have photometric redshifts within
0.1 and 0.3, respectively, of the spectroscopic redshift; this amounts to about
a factor of three improvement over ugriz-only photometric redshifts. Our code
to calculate quasar probabilities and redshift probability distributions is
publicly available.
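The mechanism is compact enough to sketch. The code below substitutes scikit-learn's GaussianMixture for extreme deconvolution, so unlike the paper's method the per-object flux errors are not deconvolved in the fit, and it runs on synthetic fluxes; it shows only the core move of fitting a joint flux-redshift density and slicing it along a redshift grid to obtain a posterior.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic training set: redshifts plus two noisy "fluxes" that decline
# with redshift (placeholders for real multi-band photometry).
rng = np.random.default_rng(2)
n = 5000
z = rng.uniform(0.3, 3.0, n)
flux = np.column_stack([np.exp(-0.5 * z) + 0.05 * rng.standard_normal(n),
                        np.exp(-0.8 * z) + 0.05 * rng.standard_normal(n)])
X = np.column_stack([z, flux])

# Density model in flux-redshift space: many simple Gaussian components.
gmm = GaussianMixture(n_components=20, covariance_type="full",
                      random_state=0).fit(X)

def z_posterior(f_obs, z_grid):
    """p(z | f_obs) on a grid, normalised over the grid."""
    pts = np.column_stack([z_grid, np.tile(f_obs, (len(z_grid), 1))])
    post = np.exp(gmm.score_samples(pts))        # joint density p(z, f)
    return post / (post.sum() * (z_grid[1] - z_grid[0]))

z_grid = np.linspace(0.3, 3.0, 300)
post = z_posterior(np.array([np.exp(-0.75), np.exp(-1.2)]), z_grid)
print("MAP redshift:", z_grid[post.argmax()])    # should sit near z = 1.5
```

Classification works the same way: integrating the quasar density over a redshift range, and comparing it with an equivalent density model for stars, gives the relative likelihoods behind the quasar probabilities.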
A Letter of Intent to Install a milli-charged Particle Detector at LHC P5
In this LOI we propose a dedicated experiment that would detect
"milli-charged" particles produced by pp collisions at LHC Point 5. The
experiment would be installed during LS2 in the vestigial drainage gallery
above UXC and would not interfere with CMS operations. With 300 fb^-1 of
integrated luminosity, sensitivity to a particle with charge … can be
achieved for masses of … GeV, and charge … for masses of … GeV,
greatly extending the parameter space explored for particles with small charge
and masses above 100 MeV. Comment: 19 pages, 7 figures
Ball-Nogues Studio: Interview with Benjamin Ball
Benjamin Ball is a founding partner, along with Gaston Nogues, of Ball-Nogues Studio in Los Angeles. The staff conducted this interview in May 2009.
100. Third Commandment
Chapel Sermon by Benjamin Ball on the Third Commandment for Friday, February 14, 2025.
For a version with Spanish subtitles, go to CC at the bottom right of the video and choose 2
Photometric redshifts for the CFHTLS T0004 Deep and Wide fields
We compute photometric redshifts based on the template-fitting method in the
fourth public release of the Canada-France-Hawaii Telescope Legacy Survey. This
unique multi-colour catalogue comprises u*,g',r',i',z' photometry in four deep
fields of 1 deg² each and 35 deg² distributed over three Wide fields. Our
photometric redshifts are calibrated with and compared to 16,983 high-quality
spectroscopic redshifts from several surveys. We find a dispersion of 0.028 and
an outlier rate of 3.5% in the Deep field at i'AB < 24 and a dispersion of
0.036 and an outlier rate of 2.8% in the Wide field at i'AB < 22.5. Beyond i'AB
= 22.5 in the Wide field, the number of outliers rises to 5% at i'AB < 23 and
10% at i'AB < 24. For the Wide sample, we find that the systematic redshift
bias stays below 1% out to i'AB < 22.5, whereas we find no significant bias in
the Deep field. We investigate the effect of tile-to-tile photometric variations
and demonstrate that the accuracy of our photometric redshifts is reduced by at
most 21%. We separate stars from galaxies using both the size and colour
information, reducing the contamination by stars in our catalogues from 50% to
8% at i'AB < 22.5 in fields with the highest stellar density while keeping a
complete galaxy sample. Our CFHTLS T0004 photometric redshifts are distributed
to the community. Our release includes 592,891 (i'AB < 22.5) and 244,701 (i'AB <
24) reliable galaxy photometric redshifts in the Wide and Deep fields,
respectively. Comment: 18 pages, 17 figures
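For readers unfamiliar with the template-fitting method, the sketch below shows its bare mechanism on synthetic data: scan a redshift grid, scale a template SED onto the observed u*, g', r', i', z' fluxes with the analytic least-squares amplitude, and keep the chi-square minimum. The toy template and filter pivot wavelengths are placeholders, not the survey's actual template set.

```python
import numpy as np

def chi2_of_z(f_obs, sig, template, z_grid):
    """Chi-square of the best-scaled template at each trial redshift."""
    chi2 = np.empty(len(z_grid))
    for k, z in enumerate(z_grid):
        f_t = template(z)                                    # model fluxes
        a = np.sum(f_obs * f_t / sig**2) / np.sum(f_t**2 / sig**2)
        chi2[k] = np.sum(((f_obs - a * f_t) / sig) ** 2)
    return chi2

# Toy "template": five filter fluxes whose shape drifts with redshift.
pivots = np.array([3550., 4750., 6400., 7760., 9250.])      # u*g'r'i'z', Angstrom
template = lambda z: np.exp(-((pivots / (1 + z) - 4000.) / 1500.) ** 2)

z_grid = np.linspace(0.0, 3.0, 301)
sig = 0.05 * np.ones(5)
f_obs = 2.0 * template(0.8) + sig * np.random.default_rng(3).standard_normal(5)

chi2 = chi2_of_z(f_obs, sig, template, z_grid)
print("best-fit z:", z_grid[chi2.argmin()])                  # recovers z ~ 0.8
```

A full pipeline would repeat this over a template library and turn exp(-chi2/2) into a redshift probability distribution; the size-and-colour star-galaxy separation described above is applied beforehand.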
Crossing borders: new teachers co-constructing professional identity in performative times
This paper draws on a range of theoretical perspectives on the construction of new teachers’ professional identity. It focuses particularly on the impact of the development in many national education systems of a performative culture of the management and regulation of teachers’ work. Whilst the role of interactions with professional colleagues and school managers in the performative school has been extensively researched, less attention has been paid to new teachers’ interactions with students. This paper highlights the need for further research focusing on the process of identity co-construction with students. A key theoretical concept employed is that of liminality, the space within which identities are in transition as teachers adjust to the culture of a new professional workplace, and the nature of the engagement of new teachers, or teachers who change schools, with students. The authors argue that an investigation into the processes of this co-construction of identity offers scope for new insights into the extent to which teachers might construct either a teacher identity at odds with their personal and professional values, or a more ‘authentic’ identity that counters performative discourses. These insights will in turn add to our understanding of the complex range of factors impacting on teacher resilience and motivation.
