Final report: Workshop on Integrating Electric Mobility Systems with the Grid Infrastructure
EXECUTIVE SUMMARY:
This document is a report on the workshop entitled “Integrating Electric Mobility
Systems with the Grid Infrastructure” which was held at Boston University on November 6-7
with the sponsorship of the Sloan Foundation. Its objective was to bring together researchers
and technical leaders from academia, industry, and government in order to set a short- and long-term research agenda regarding the future of mobility and the ability of electric utilities to meet
the needs of a highway transportation system powered primarily by electricity. The report is a
summary of their insights based on workshop presentations and discussions. The list of
participants and detailed Workshop program are provided in Appendices 1 and 2.
Public and private decisions made in the coming decade will direct profound changes in
the way people and goods are moved and the ability of clean energy sources – primarily
delivered in the form of electricity – to power these new systems. Decisions need to be made
quickly because of rapid advances in technology, and the growing recognition that meeting
climate goals requires rapid and dramatic action. The blunt fact is, however, that the pace of
innovation, and the range of business models that can be built around these innovations, has
grown at a rate that has outstripped our ability to clearly understand the choices that must be
made or estimate the consequences of these choices. The group assembled for this
Workshop is uniquely qualified to understand the options opening up both in the future of
mobility and in the ability of electric utilities to meet the needs of a highway transportation
system powered primarily by electricity. Participants were asked both to explain what is known
about the choices we face and to define the research most urgently needed to help public and
private decision-makers choose wisely.
New communication and data analysis tools have profoundly changed the definition of
what is technologically possible. Cell phones have put powerful computers, communication
devices, and position locators into the pockets and purses of most Americans, making it possible
for Uber, Lyft and other Transportation Network Companies to deliver on-demand mobility
services. But these technologies, as well as technologies for pricing access to congested
roads, also open many other possibilities for shared mobility services – both public and private –
that could cut costs and travel time by reducing congestion. Options would be greatly expanded
if fully autonomous vehicles become available. These new business models would also affect
options for charging electric vehicles. It is unclear, however, how to optimize charging
(minimizing congestion on the electric grid) without increasing congestion on the roads or
creating significant problems for the power system that supports such charging capacity.
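The tension between grid-friendly and road-friendly charging can be made concrete with a "valley-filling" heuristic: schedule charging into the hours when baseline grid load is lowest. The sketch below is illustrative only; the load figures and charge rates are hypothetical, and this is not a model proposed at the workshop.

```python
# Minimal sketch of "valley-filling" EV charging: allocate charging energy
# to the hours when baseline grid load is lowest. All numbers are
# hypothetical, chosen only to show the mechanism.

def valley_fill(baseline_mw, ev_energy_mwh, charge_rate_mw):
    """Allocate EV charging energy to hours greedily, lowest-load first."""
    schedule = [0.0] * len(baseline_mw)
    remaining = ev_energy_mwh
    # Visit hours from least to most loaded.
    for hour in sorted(range(len(baseline_mw)), key=lambda h: baseline_mw[h]):
        if remaining <= 0:
            break
        alloc = min(charge_rate_mw, remaining)  # 1-hour slots: MW ~ MWh
        schedule[hour] = alloc
        remaining -= alloc
    return schedule

# Hypothetical 6-hour baseline load (MW) with an overnight valley.
baseline = [50, 30, 20, 25, 60, 80]
plan = valley_fill(baseline, ev_energy_mwh=25, charge_rate_mw=10)
print(plan)  # charging lands in the least-loaded hours
```

Real coordination is harder than this sketch suggests: charging location interacts with trip patterns, so flattening grid load can shift where vehicles drive to charge, which is exactly the coupling the workshop flagged as unresolved.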
With so much in flux, many uncertainties cloud our vision of the future. How new
mobility services will reshape the number and length of trips, the choice of electric vehicle
charging systems, the constraints on charging, and many other important behavioral factors is
critical to this future but remains largely unknown. The challenge at hand is to define plausible
future structures of electric grids and mobility systems, and anticipate the direct and indirect
impacts of the changes involved. These insights can provide tools essential for effective private ... [TRUNCATED]
Workshop funded by the Alfred P. Sloan Foundation
Repository of NSF Funded Publications and Data Sets: "Back of Envelope" 15 year Cost Estimate
In this back-of-envelope study we calculate the 15-year fixed and variable costs of setting up and running a data repository (or database) to store and serve the publications and datasets derived from research funded by the National Science Foundation (NSF). Costs are computed on a yearly basis using a fixed estimate of the number of papers published each year that list NSF as their funding agency. We assume each paper has one dataset and estimate the size of that dataset based on experience. By our estimates, the number of papers generated each year is 64,340. The average dataset size over all seven directorates of NSF is 32 gigabytes (GB). The total amount of data added to the repository is two petabytes (PB) per year, or 30 PB over 15 years.
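The volume figures quoted above can be reproduced with simple arithmetic (using decimal units, 1 PB = 1,000,000 GB):

```python
# Reproduce the abstract's data-volume estimate from its stated inputs.
papers_per_year = 64_340   # NSF-funded papers per year (from the text)
gb_per_dataset = 32        # average dataset size in GB (from the text)
years = 15

gb_per_year = papers_per_year * gb_per_dataset   # one dataset per paper
pb_per_year = gb_per_year / 1_000_000            # decimal petabytes

print(round(pb_per_year, 2))          # ~2.06 PB/year ("two petabytes per year")
print(round(pb_per_year * years, 1))  # ~30.9 PB ("30 PB over 15 years")
```

The abstract's round numbers (2 PB/year, 30 PB total) are consistent with these inputs.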
The architecture of the data/paper repository is based on a hierarchical storage model that uses a combination of fast disk for rapid access and tape for high reliability and cost-efficient long-term storage. Data are ingested through workflows of the kind used in university institutional repositories, which add metadata and ensure data integrity. Average fixed costs are approximately $167,000,000 over 15 years of operation, curating close to one million datasets and one million papers. After 15 years and 30 PB of data accumulated and curated, we estimate the cost per gigabyte at $4.87. The $167 million cost is a direct cost in that it does not include federally allowable indirect cost return (ICR).
After 15 years, it is reasonable to assume that some datasets will be compressed and rarely accessed. Others may be deemed no longer valuable, e.g., because they are replaced by more accurate results. Therefore, at some point the data growth in the repository will need to be adjusted through strategic preservation.
Smart objects as building blocks for the internet of things
The combination of the Internet and emerging technologies such as near-field communication, real-time localization, and embedded sensors lets us transform everyday objects into smart objects that can understand and react to their environment. Such objects are building blocks for the Internet of Things and enable novel computing applications. As a step toward design and architectural principles for smart objects, the authors introduce a hierarchy of architectures with increasing levels of real-world awareness and interactivity. In particular, they describe activity-, policy-, and process-aware smart objects and demonstrate how the respective architectural abstractions support increasingly complex applications.
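The activity-/policy-/process-aware hierarchy can be sketched as classes in which each level builds on the awareness of the one below. The class and method names here are hypothetical illustrations, not the authors' implementation.

```python
# Illustrative sketch of the smart-object hierarchy described above:
# each level adds awareness on top of the previous one. Names are
# hypothetical, not taken from the paper.

class ActivityAwareObject:
    """Senses and logs real-world events involving the object."""
    def __init__(self):
        self.activity_log = []
    def sense(self, event):
        self.activity_log.append(event)

class PolicyAwareObject(ActivityAwareObject):
    """Additionally checks sensed activity against application policies."""
    def __init__(self, policies):
        super().__init__()
        self.policies = policies  # name -> predicate over events
    def sense(self, event):
        super().sense(event)
        # Return the names of violated policies (predicate False = violation).
        return [name for name, check in self.policies.items() if not check(event)]

class ProcessAwareObject(PolicyAwareObject):
    """Additionally relates events to the steps of a business process."""
    def __init__(self, policies, process_steps):
        super().__init__(policies)
        self.process_steps = process_steps  # ordered step names
        self.current = 0
    def advance(self, event):
        violations = self.sense(event)
        if not violations and self.current < len(self.process_steps):
            self.current += 1
        return self.process_steps[self.current - 1] if self.current else None

# Hypothetical cold-chain crate: a policy on temperature, a 3-step process.
crate = ProcessAwareObject(
    policies={"max_temp": lambda e: e.get("temp_c", 0) <= 8},
    process_steps=["loaded", "in_transit", "delivered"],
)
print(crate.advance({"temp_c": 5}))  # policy satisfied, step advances
```

A policy-violating event (e.g. an over-temperature reading) would be logged but would not advance the process, which is the kind of layered behavior the hierarchy is meant to capture.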
A Compiler and Runtime Infrastructure for Automatic Program Distribution
This paper presents the design and the implementation of a compiler and runtime infrastructure for automatic program distribution. We are building a research infrastructure that enables experimentation with various program partitioning and mapping strategies and the study of automatic distribution's effect on resource consumption (e.g., CPU, memory, communication). Since many optimization techniques are faced with conflicting optimization targets (e.g., memory and communication), we believe that it is important to be able to study their interaction.
We present a set of techniques that enable flexible resource modeling and program distribution. These are: dependence analysis, weighted graph partitioning, code and communication generation, and profiling. We have developed these ideas in the context of the Java language. We present in detail the design and implementation of each of the techniques as part of our compiler and runtime infrastructure. Then, we evaluate our design and present preliminary experimental data for each component, as well as for the entire system.
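One of the steps listed above, weighted graph partitioning, can be illustrated with a toy greedy bipartitioner over a call graph whose edge weights stand for communication volume. This is a sketch under assumed inputs, not the partitioner the paper implements.

```python
# Toy illustration of weighted graph partitioning for program distribution:
# methods are nodes, edge weights approximate communication volume, and we
# greedily grow one partition to keep heavy edges on the same host.
# A sketch only -- not the algorithm from the paper.

def cut_weight(edges, part_a):
    """Total weight of edges crossing the partition (i.e. remote calls)."""
    return sum(w for (u, v), w in edges.items() if (u in part_a) != (v in part_a))

def greedy_bipartition(nodes, edges, seed):
    """Grow partition A from `seed`, repeatedly adding the node with the
    heaviest total connection to A, until half the nodes are placed."""
    part_a = {seed}
    target = len(nodes) // 2 + len(nodes) % 2
    while len(part_a) < target:
        def attraction(n):
            return sum(w for (u, v), w in edges.items()
                       if n in (u, v) and ({u, v} - {n}) & part_a)
        candidates = [n for n in nodes if n not in part_a]
        part_a.add(max(candidates, key=attraction))
    return part_a, set(nodes) - part_a

# Hypothetical call graph: edge weight ~ bytes exchanged per call.
nodes = ["main", "parse", "solve", "render", "log"]
edges = {("main", "parse"): 90, ("parse", "solve"): 80,
         ("solve", "render"): 5, ("render", "log"): 40, ("main", "log"): 10}
a, b = greedy_bipartition(nodes, edges, seed="main")
print(sorted(a), sorted(b), cut_weight(edges, a))
```

Keeping the heavy main-parse-solve chain on one host leaves only light edges crossing the cut, which is the conflicting-objective trade-off (communication vs. per-host memory and CPU) the paper sets out to study.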
An Extensible Timing Infrastructure for Adaptive Large-scale Applications
Real-time access to accurate and reliable timing information is necessary to
profile scientific applications, and crucial as simulations become increasingly
complex, adaptive, and large-scale. The Cactus Framework provides flexible and
extensible capabilities for timing information through a well designed
infrastructure and timing API. Applications built with Cactus automatically
gain access to built-in timers, such as gettimeofday and getrusage,
system-specific hardware clocks, and high-level interfaces such as PAPI. We
describe the Cactus timer interface, its motivation, and its implementation. We
then demonstrate how this timing information can be used by an example
scientific application to profile itself, and to dynamically adapt itself to a
changing environment at run time.
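The core idea, a uniform timer API over interchangeable clock backends that any part of the application can query, can be sketched as follows. This mirrors the concept only; the names are assumptions, not the actual Cactus interface.

```python
# Sketch of an extensible timing API in the spirit described above: named
# timers front interchangeable clock backends, so an application can
# profile itself uniformly and inspect timings at run time.
# Illustrative names -- not the real Cactus timer API.
import time

class Timer:
    def __init__(self, name, clock=time.perf_counter):
        self.name = name      # e.g. "evolve" or "io"
        self.clock = clock    # backend: wall clock, CPU clock, ...
        self.elapsed = 0.0
        self._start = None
    def start(self):
        self._start = self.clock()
    def stop(self):
        self.elapsed += self.clock() - self._start
        self._start = None

class TimerRegistry:
    """Central registry so any component can create or query timers,
    e.g. to adapt its behavior based on where time is being spent."""
    def __init__(self):
        self.timers = {}
    def get(self, name, clock=time.perf_counter):
        return self.timers.setdefault(name, Timer(name, clock))
    def report(self):
        return {t.name: t.elapsed for t in self.timers.values()}

registry = TimerRegistry()
t = registry.get("evolve")
t.start()
sum(i * i for i in range(100_000))  # stand-in for a simulation step
t.stop()
print(registry.report())
```

Swapping `time.process_time` (CPU time) or a hardware-counter wrapper in for `time.perf_counter` changes the backend without touching the calling code, which is the extensibility the abstract emphasizes.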
e-Science Infrastructure for the Social Sciences
When the term "e-Science" became popular, it frequently was taken to mean "enhanced science" or "electronic science". More telling is the definition "e-Science is about global collaboration in key areas of science and the next generation of infrastructure that will enable it" (Taylor, 2001). The question arises: to what extent can the social sciences profit from recent developments in e-Science infrastructure? While computing, storage, and network capacities have so far been sufficient to accommodate and access social science databases, new capacities and technologies support new types of research, e.g. linking and analysing transactional or audio-visual data. Collaborative work by researchers in distributed networks is increasingly well supported, and new resources are available for e-learning. Whether these new developments become transformative or just helpful will very much depend on whether their full potential is recognized and creatively integrated into new research designs by theoretically innovative scientists. Progress in e-Science has been closely linked to the vision of the Grid as "a software infrastructure that enables flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions and resources" and virtually unlimited computing capacities (Foster et al. 2000). In the social sciences there has been considerable progress in using modern IT technologies for multilingual access to virtual distributed research databases across Europe and beyond (e.g. NESSTAR, the CESSDA Portal), in data portals for access to statistical offices, and in linking access to data, literature, project, expert, and other databases (e.g. Digital Libraries, VASCODA/SOWIPORT). Whether future developments will require Grid-enabling of social science databases or can proceed with Web 2.0 support is currently an open question.
The challenges here are seamless integration and interoperability of databases, a requirement also driven by internationalisation and trans-disciplinary research. This goes along with the need for standards and harmonisation of data and metadata. Progress powered by e-infrastructure depends, among other things, on regulatory frameworks and on human capital well trained in both data science and research methods. It also depends on sufficient critical mass in the institutional infrastructure to efficiently support a dynamic research community that wants to "take the lead without catching up".
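The harmonisation problem the paragraph describes can be illustrated by a metadata crosswalk: two archives describe the same study with different field names, and a shared schema reconciles them. All field names below are hypothetical, standing in for real standards such as DDI.

```python
# Toy illustration of metadata harmonisation: two archives describe the
# same study with different field names; a crosswalk maps both onto a
# shared schema. Field names are hypothetical stand-ins for a real
# standard (e.g. DDI).

CROSSWALK = {
    "archive_a": {"titel": "title", "jahr": "year", "land": "country"},
    "archive_b": {"study_title": "title", "coll_year": "year",
                  "nation": "country"},
}

def harmonise(record, source):
    """Rename a record's fields according to the source's crosswalk,
    dropping fields the shared schema does not know."""
    mapping = CROSSWALK[source]
    return {mapping[k]: v for k, v in record.items() if k in mapping}

rec_a = {"titel": "Eurobarometer 62", "jahr": 2004, "land": "DE"}
rec_b = {"study_title": "Eurobarometer 62", "coll_year": 2004,
         "nation": "DE", "internal_id": "b-17"}

# Once harmonised, records from both archives become directly comparable.
print(harmonise(rec_a, "archive_a") == harmonise(rec_b, "archive_b"))  # True
```

Multiplied across dozens of national archives, languages, and legacy formats, maintaining such crosswalks is exactly the standards-and-harmonisation effort the text identifies as a precondition for interoperable distributed databases.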