A PRODUCTION FUNCTION FOR FLORIDA FOLIAGE NURSERIES FROM TIME-SERIES AND CROSS-SECTION DATA
User Applications Driven by the Community Contribution Framework MPContribs in the Materials Project
This work discusses how the MPContribs framework in the Materials Project
(MP) allows user-contributed data to be shown and analyzed alongside the core
MP database. The Materials Project is a searchable database of electronic
structure properties of over 65,000 bulk solid materials that is accessible
through a web-based science-gateway. We describe the motivation for enabling
user contributions to the materials data and present the framework's features
and challenges in the context of two real applications. These use-cases
illustrate how scientific collaborations can build applications with their own
"user-contributed" data using MPContribs. The Nanoporous Materials Explorer
application provides a unique search interface to a novel dataset of hundreds
of thousands of materials, each with tables of user-contributed values related
to material adsorption and density at varying temperature and pressure. The
Unified Theoretical and Experimental x-ray Spectroscopy application discusses a
full workflow for the association, dissemination and combined analyses of
experimental data from the Advanced Light Source with MP's theoretical core
data, using MPContribs tools for data formatting, management and exploration.
The capabilities being developed for these collaborations are serving as the
model for how new materials data can be incorporated into the Materials Project
website with minimal staff overhead while giving powerful tools for data search
and display to the user community.
Comment: 12 pages, 5 figures, Proceedings of 10th Gateway Computing
Environments Workshop (2015), to be published in "Concurrency and Computation:
Practice and Experience"
Constraints on Light Dark Matter From Core-Collapse Supernovae
We show that light (1–30 MeV) dark matter particles can play a
significant role in core-collapse supernovae if they have relatively large
annihilation and scattering cross sections compared to neutrinos. We find
that if such particles are lighter than 10 MeV and reproduce the
observed dark matter relic density, supernovae would cool on a much longer time
scale and would emit neutrinos with significantly smaller energies than in the
standard scenario, in disagreement with observations. This constraint may be
avoided, however, in certain situations for which the neutrino--dark matter
scattering cross sections remain comparatively small.
Comment: 4 pages, 1 figure
GMA Instrumentation of the Athena Framework using NetLogger
Grid applications are, by their nature, wide-area distributed applications.
This WAN aspect of Grid applications makes the use of conventional monitoring
and instrumentation tools (such as top, gprof, LSF Monitor, etc) impractical
for verification that the application is running correctly and efficiently. To
be effective, monitoring data must be "end-to-end", meaning that all components
between the Grid application endpoints must be monitored. Instrumented
applications can generate a large amount of monitoring data, so typically the
instrumentation is off by default. For jobs running on a Grid, there needs to
be a general mechanism to remotely activate the instrumentation in running
jobs. The NetLogger Toolkit Activation Service provides this mechanism.
To demonstrate this, we have instrumented the ATLAS Athena Framework with
NetLogger to generate monitoring events. We then use a GMA-based activation
service to control NetLogger's trigger mechanism. The NetLogger trigger
mechanism allows one to easily start, stop, or change the logging level of a
running program by modifying a trigger file. We present here details of the
design of the NetLogger implementation of the GMA-based activation service and
the instrumentation service for Athena. We also describe how this activation
service allows us to non-intrusively collect and visualize the ATLAS Athena
Framework monitoring data.
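The trigger mechanism described above can be sketched as follows. This is a hypothetical illustration of the trigger-file idea, not NetLogger's actual interface: the file name, its format (a single level name), and the "absent file means off" convention are all assumptions. A running program polls a small file and adjusts its logging level without restarting:

```python
# Hypothetical sketch of a trigger-file mechanism in the spirit of the
# NetLogger trigger described above. File name and format are assumptions.
import logging
import os

TRIGGER_FILE = "netlogger.trigger"  # assumed name, not NetLogger's actual path

def apply_trigger(logger: logging.Logger) -> None:
    """Read the trigger file (if present) and set the logger's level.

    An absent file means "instrumentation off" (WARNING only); a file
    containing e.g. 'DEBUG' or 'INFO' activates that level.
    """
    if not os.path.exists(TRIGGER_FILE):
        logger.setLevel(logging.WARNING)  # default: instrumentation off
        return
    with open(TRIGGER_FILE) as f:
        level_name = f.read().strip().upper()
    logger.setLevel(getattr(logging, level_name, logging.WARNING))

logging.basicConfig()
logger = logging.getLogger("athena.instrumentation")

apply_trigger(logger)          # no trigger file yet -> WARNING
assert logger.level == logging.WARNING

with open(TRIGGER_FILE, "w") as f:
    f.write("DEBUG")
apply_trigger(logger)          # trigger file present -> DEBUG
assert logger.level == logging.DEBUG
os.remove(TRIGGER_FILE)
```

A monitoring loop would call `apply_trigger` periodically, so writing or deleting the trigger file starts, stops, or re-levels the instrumentation of the running job.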
Grid Logging: Best Practices Guide
The purpose of this document is to help developers of Grid middleware and application software generate log files that will be useful to Grid administrators, users, developers and Grid middleware itself. Currently, most generated log files are useful only to the author of the program. Good logging practices are instrumental to performance analysis, problem diagnosis, and security auditing tasks such as incident tracing and damage assessment. This document does not discuss the issue of a logging API; it is assumed that a standard logging API such as syslog (C), log4j (Java), or logger (Python) is being used, though a custom logging API or even printf could also serve. The key point is that the logs must contain the required information in the required format. At a high level of abstraction, the best practices for Grid logging are: (1) consistently structured, typed log events; (2) a standard high-resolution timestamp; (3) use of logging levels and categories to separate logs by detail and purpose; (4) consistent use of global and local identifiers; and (5) use of a regular, newline-delimited ASCII text format. The rest of this document describes each of these recommendations in detail.
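The five practices above can be combined in a single log line. The sketch below is illustrative only; the field names (`ts`, `event`, `level`, `job_id`, `host`) are assumptions, not mandated by the guide:

```python
# A minimal sketch of a log event following the recommendations above:
# structured key=value pairs, a high-resolution ISO timestamp, an explicit
# level, and global/local identifiers, emitted as one newline-delimited
# ASCII line. Field names are illustrative, not from the guide.
from datetime import datetime, timezone

def log_event(event: str, level: str, **fields) -> str:
    ts = datetime.now(timezone.utc).isoformat()  # high-resolution timestamp
    parts = [f"ts={ts}", f"event={event}", f"level={level}"]
    parts += [f"{k}={v}" for k, v in sorted(fields.items())]
    return " ".join(parts)

line = log_event("job.transfer.start", "INFO",
                 job_id="grid-job-42",   # global identifier
                 host="worker03",        # local identifier
                 bytes=1048576)
print(line)
```

Because every event is one flat ASCII line of typed key=value pairs, logs from many Grid components can be merged, sorted by timestamp, and correlated by the global job identifier.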
Flight Testing of Guidance, Navigation and Control Systems on the Mighty Eagle Robotic Lander Testbed
During 2011, a series of progressively more challenging flight tests of the Mighty Eagle autonomous terrestrial lander testbed was conducted, primarily to validate the GNC system for a proposed lunar lander. With the successful completion of this GNC validation objective, the opportunity existed to utilize the Mighty Eagle as a flying testbed for a variety of technologies. In 2012 an Autonomous Rendezvous and Capture (AR&C) algorithm was implemented in flight software and demonstrated in a series of flight tests. Also in 2012, a hazard avoidance system was developed and flight tested on the Mighty Eagle, and GNC algorithms from Moon Express and a MEMS IMU were tested as well. All of the testing described herein was above and beyond the original charter for the Mighty Eagle. In addition to being an excellent testbed for a wide variety of systems, the Mighty Eagle also provided a great learning opportunity for many engineers and technicians to work on a flight program.
Janus: Privacy-Preserving Billing for Dynamic Charging of Electric Vehicles
Dynamic charging is an emerging technology that
allows an electric vehicle (EV) to charge its battery while moving
along the road. It charges the EV's battery
through magnetic induction between receiving coils attached
to the EV's battery and wireless charging pads embedded
under the roadbed and operated by Pad Owners (POs). A key
challenge in dynamic charging is billing, which must consider
the fact that the charging service happens while the EV is
moving on the road, and should allow for flexible usage plans.
A promising candidate could be the subscription-based billing
model, in which an EV subscribes to an electric utility that has
a business relationship with various POs that operate charging
sections. The POs report charging information to the utility of
the EV, and at the end of each billing cycle, the EV receives a
single bill for all its dynamic charging sessions from the utility.
Despite its advantages, a major shortcoming of such
a solution is that the utility gains access to the EVs' mobility
information, thus invading the location privacy of the EVs.
To enable subscription-based billing for dynamic charging, in
this paper we propose Janus, a privacy-preserving billing protocol
for dynamic EV charging. Janus uses homomorphic commitment
and blind signatures with attributes to construct a cryptographic
proof on the charging fee of each individual dynamic charging
session, and allows the utility to verify the correctness of the EV's
total bill without learning the time, the location, or the charging
fee of each individual charging session of the EV. Our Python-based
implementation shows that the real-time computational
overhead of Janus is less than 0.6 seconds, which is well within
the delay constraint of the subscription-based billing model, and
makes Janus an appealing solution for future dynamic charging
applications.
Department of Energy/DE-OE0000780
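The homomorphic-commitment idea at the heart of such a protocol can be illustrated with a toy Pedersen commitment. This is a sketch of the general technique, not Janus's actual construction, and the group parameters below are far too small for real security; they only demonstrate the algebra that lets a verifier check a total without seeing individual values:

```python
# Toy Pedersen commitment illustrating the homomorphic property:
# the product of per-session commitments opens to the *sum* of the fees,
# so the total can be verified without revealing any single session's fee.
# Parameters are tiny demo values, not cryptographically secure.
p = 1019          # safe prime, p = 2q + 1
q = 509           # order of the subgroup of squares mod p
g, h = 4, 9       # two squares mod p (generators of that subgroup)

def commit(fee: int, r: int) -> int:
    """Pedersen commitment C = g^fee * h^r mod p."""
    return (pow(g, fee, p) * pow(h, r, p)) % p

# Each charging section commits to one session's fee with fresh randomness.
fees = [3, 7, 5]                 # per-session fees (toy values)
rands = [11, 23, 42]             # blinding randomness per session
commitments = [commit(f, r) for f, r in zip(fees, rands)]

# The verifier multiplies the commitments; by homomorphism the product is
# a commitment to the total fee, which the prover can open on its own.
product = 1
for c in commitments:
    product = (product * c) % p

total_fee = sum(fees)
total_rand = sum(rands) % q      # exponents add modulo the subgroup order
assert product == commit(total_fee, total_rand)
```

The verifier learns only `total_fee` and `total_rand` when the prover opens the aggregate commitment; the individual `fees` and `rands` stay hidden, which is the property the billing scenario above needs.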
The Kepler End-to-End Model: Creating High-Fidelity Simulations to Test Kepler Ground Processing
The Kepler mission is designed to detect the transit of Earth-like planets around Sun-like stars by observing 100,000 stellar targets. Developing and testing the Kepler ground-segment processing system, in particular the data analysis pipeline, requires high-fidelity simulated data. This simulated data is provided by the Kepler End-to-End Model (ETEM). ETEM simulates the astrophysics of planetary transits and other phenomena, properties of the Kepler spacecraft and the format of the downlinked data. Major challenges addressed by ETEM include the rapid production of large amounts of simulated data, extensibility and maintainability.
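The simplest version of the transit signal such a simulator injects is a box-shaped dip in an otherwise flat light curve. The sketch below is not ETEM (which models far more, including spacecraft properties and data formats); all parameters here are illustrative assumptions:

```python
# Minimal box-model transit light curve with Gaussian noise, as a toy
# stand-in for the astrophysical signal a simulator like ETEM injects.
# Depth, duration, cadence and noise level are illustrative values.
import random

random.seed(0)

def transit_light_curve(n=200, depth=0.01, t0=100, dur=10, noise=0.001):
    """Return n relative-flux samples with one box transit of given depth."""
    flux = []
    for t in range(n):
        f = 1.0 - (depth if t0 <= t < t0 + dur else 0.0)  # in-transit dip
        flux.append(f + random.gauss(0.0, noise))          # photometric noise
    return flux

lc = transit_light_curve()
in_transit = sum(lc[100:110]) / 10    # mean flux during the transit
out_transit = sum(lc[:100]) / 100     # mean flux before the transit
assert in_transit < out_transit       # the transit dims the star by ~1%
```

A pipeline under test then has to recover the ~1% dip from the noisy samples, which is the kind of end-to-end exercise the simulated data enables.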