Computer program developed for flowsheet calculations and process data reduction
The computer program PACER-65 is used for flowsheet calculations and is easily adapted to process data reduction. Each unit, vessel, meter, and processing operation in the overall flowsheet is represented by a separate subroutine, which the program calls in the order required to complete an overall flowsheet calculation.
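The architecture described above (one subroutine per processing unit, invoked in sequence by a driver) can be sketched in a few lines. This is a hypothetical illustration of the pattern, not PACER-65's actual interface; the unit names and the dict-of-process-variables stream representation are assumptions:

```python
# Hypothetical sketch of the one-subroutine-per-unit flowsheet pattern:
# each unit operation is a function that transforms a process stream,
# and a driver calls the units in the order the flowsheet requires.

def mixer(stream):
    # Combine two inlet flows into one outlet flow.
    stream["flow"] = stream["flow_a"] + stream["flow_b"]
    return stream

def heater(stream):
    # Illustrative fixed temperature rise across the heater.
    stream["temp"] += 50.0
    return stream

def run_flowsheet(stream, units):
    # Call each unit subroutine in sequence to complete the calculation.
    for unit in units:
        stream = unit(stream)
    return stream

result = run_flowsheet({"flow_a": 2.0, "flow_b": 3.0, "temp": 25.0},
                       [mixer, heater])
```

Representing each unit as an independent function keeps the calculation order a simple list, which is also what makes the same framework easy to repurpose for data-reduction pipelines.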
Business intelligence gap analysis: a user, supplier and academic perspective
Business intelligence (BI) takes many different forms, as indicated by the varying definitions of BI found in industry and academia. These different definitions help us understand which BI issues are important to the main players in the field of BI: users, suppliers and academics. The goal of this research is to discover gaps and trends from the standpoints of BI users, BI suppliers and academics, and to examine their effects on business and academia. Consultants also play an important role, since they can be seen as the link between users and suppliers. Two research methods are combined to accomplish this goal. We examine the BI focus of users and suppliers through a survey, and we gain insight into the BI focus of academics, vendor-neutral consultants (typical representatives being Forrester, Gartner and IDC) and vendor-specific consultants (typical representatives being IBM, Information Builders, Microsoft, Oracle and SAP) through their publications. Previous studies indicate that similar article analyses often focus on academic research methods only, meaning that existing results largely reflect the academic perspective. Unlike these previous studies, the perspective of this research is not limited to academics. Our results provide insight into the BI trends and BI issue rankings of BI users, suppliers, academics, vendor-neutral consultants and vendor-specific consultants.
An Integrated XRF/XRD Instrument for Mars Exobiology and Geology Experiments
By employing an integrated x-ray instrument on a future Mars mission, the data obtained will greatly augment those returned by Viking; details characterizing the past and present environment on Mars, and those relevant to the possible origin and evolution of life, will be acquired. A combined x-ray fluorescence/x-ray diffraction (XRF/XRD) instrument was breadboarded and demonstrated to accommodate important exobiology and geology experiment objectives outlined for MESUR and future Mars missions. Among others, primary objectives for the exploration of Mars include the intensive study of local areas on Mars to establish the chemical, mineralogical, and petrological character of different components of the surface material; to determine the distribution, abundance, and sources and sinks of volatile materials, including an assessment of the biologic potential, now and during past epochs; and to establish the global chemical and physical characteristics of the Martian surface. The XRF/XRD breadboard instrument identifies and quantifies soil surface elemental, mineralogical, and petrological characteristics and acquires data necessary to address questions on volatile abundance and distribution. Additionally, the breadboard is able to characterize the biogenic element constituents of soil samples, providing information on the biologic potential of the Mars environment. Preliminary breadboard experiments confirmed the fundamental instrument design approach and measurement performance.
Spatial heterogeneity and irreversible vegetation change in semi-arid grazing systems
Recent theoretical studies have shown that spatial redistribution of surface water may explain the occurrence of patterns of alternating vegetated and degraded patches in semiarid grasslands. These results implied, however, that spatial redistribution processes cannot explain the collapse of production on coarser scales observed in these systems. We present a spatially explicit vegetation model to investigate possible mechanisms explaining irreversible vegetation collapse on coarse spatial scales. The model results indicate that the dynamics of vegetation on coarse scales are determined by the interaction of two spatial feedback processes. Loss of plant cover in a certain area results in increased availability of water in remaining vegetated patches through run-on of surface water, promoting within-patch plant production. Hence, spatial redistribution of surface water creates a negative feedback between reduced plant cover and increased plant growth in remaining vegetation. Reduced plant cover, however, results in focusing of herbivore grazing on the remaining vegetation. Hence, redistribution of herbivores creates a positive feedback between reduced plant cover and increased losses due to grazing in remaining vegetated patches, leading to collapse of the entire vegetation. This may explain irreversible vegetation shifts in semiarid grasslands on coarse spatial scales.
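The interaction of the two feedbacks above can be caricatured in a toy mean-field simulation. This is a minimal sketch under assumed, illustrative dynamics, not the authors' spatially explicit model: growth per unit cover gains a water-run-on bonus as total cover shrinks (the negative feedback), while a fixed herbivore offtake concentrates on whatever vegetation remains (the positive feedback), producing bistability and collapse below a threshold:

```python
def simulate(cover, t_end=50.0, dt=0.01,
             growth=1.0, water=0.5, offtake=0.15):
    # Toy mean-field caricature (NOT the published spatial model).
    # `cover` is the vegetated fraction of the landscape, in [0, 1].
    # Gain: logistic growth with a run-on bonus (1 + water*(1 - cover))
    # that rises as cover falls. Loss: a fixed total herbivore offtake,
    # so grazing pressure per unit cover rises as cover falls.
    for _ in range(int(t_end / dt)):
        gain = growth * cover * (1 - cover) * (1 + water * (1 - cover))
        cover = max(cover + dt * (gain - offtake), 0.0)
    return cover

high_start = simulate(0.60)  # settles at a vegetated equilibrium
low_start = simulate(0.10)   # below threshold: collapses to bare ground
```

With these assumed parameters the system has a stable vegetated state and a grazing-driven threshold below it; once cover drops under the threshold, the concentrated offtake outpaces regrowth and the collapse is irreversible, mirroring the mechanism the abstract describes.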
Demonstration of the feasibility of an integrated x ray laboratory for planetary exploration
The identification of minerals and elemental compositions is an important component in the geological and exobiological exploration of the solar system. X ray diffraction and fluorescence are common techniques for obtaining these data. The feasibility of combining these analytical techniques in an integrated x ray laboratory compatible with the volume, mass, and power constraints imposed by many planetary missions was demonstrated. Breadboard-level hardware was developed to cover the range of diffraction lines produced by minerals, clays, and amorphous materials, and to detect the x ray fluorescence emissions of elements from carbon through uranium. These breadboard modules were fabricated and used to demonstrate the ability to detect elements and minerals. Additional effort is required to establish the detection limits of the breadboard modules and to integrate the diffraction and fluorescence techniques into a single unit. It was concluded that this integrated x ray laboratory capability will be a valuable tool in the geological and exobiological exploration of the solar system.
Microprocessor system to recover data from a self-scanning photodiode array
A microprocessor system developed at Lawrence Livermore Laboratory has expedited the recovery of data describing the low-energy x-ray spectra radiated by laser-fusion targets. An Intel microprocessor controls the digitization and scanning of the data stream of an x-ray-sensitive self-scanning photodiode array incorporated in a crystal diffraction spectrometer.
Algorithmic statistics: forty years later
Algorithmic statistics has two different (and almost orthogonal) motivations.

From the philosophical point of view, it tries to formalize how statistics works and why some statistical models are better than others. After this notion of a "good model" is introduced, a natural question arises: is it possible that for some piece of data there is no good model? If so, how often do such bad ("non-stochastic") data appear "in real life"?

Another, more technical motivation comes from algorithmic information theory. In this theory, a notion of complexity of a finite object (the amount of information in the object) is introduced; it assigns to every object a number, called its algorithmic complexity (or Kolmogorov complexity). Algorithmic statistics provides a more fine-grained classification: for each finite object, a curve is defined that characterizes its behavior. It turns out that several different definitions give (approximately) the same curve.

In this survey we try to provide an exposition of the main results in the field (including full proofs for the most important ones), as well as some historical comments. We assume that the reader is familiar with the main notions of algorithmic information theory (Kolmogorov complexity).
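Kolmogorov complexity itself is uncomputable, but a computable upper bound on the information content of a string can be illustrated with an off-the-shelf compressor. A minimal sketch; the choice of zlib and the example data are illustrative assumptions, not part of the survey:

```python
import os
import zlib

def complexity_upper_bound(data: bytes) -> int:
    # Length of a zlib-compressed encoding: a crude, computable upper
    # bound on the amount of information in `data`. True Kolmogorov
    # complexity is uncomputable and can only be bounded from above.
    return len(zlib.compress(data, level=9))

regular = b"ab" * 500        # highly regular: admits a short description
random_ = os.urandom(1000)   # random bytes are incompressible w.h.p.

# The regular string compresses to far fewer bytes than the random one
# of the same length, reflecting the gap in algorithmic complexity.
print(complexity_upper_bound(regular), complexity_upper_bound(random_))
```

A compressor only ever witnesses low complexity; it can never certify that a string is complex, which is one concrete face of the uncomputability the survey takes as background.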
Algorithmic statistics revisited
The mission of statistics is to provide adequate statistical hypotheses (models) for observed data. But what is an "adequate" model? To answer this question, one needs the notions of algorithmic information theory. It turns out that for every data string one can naturally define a "stochasticity profile", a curve that represents a trade-off between the complexity of a model and its adequacy. This curve has four different equivalent definitions, in terms of (1) randomness deficiency, (2) minimal description length, (3) position in the lists of simple strings, and (4) Kolmogorov complexity with decompression time bounded by the busy beaver function. We present a survey of the corresponding definitions and results relating them to each other.