An Ethanol Blend Wall Shift is Prone to Increase Petroleum Gasoline Demand
The US Environmental Protection Agency announced a waiver in October 2010 allowing an increase in the fuel-ethanol blend limit (the “blend wall”) from 10% (E10) to 15% (E15). The justifications for the waiver are reduced vehicle fuel prices and lower consumption of petroleum gasoline, leading to energy security. In this paper, employing Monte Carlo simulations and a Savitzky-Golay smoothing filter, an empirical study examines this waiver and reveals an anomaly: a relaxation of the blend wall elicits a demand response. Under a wide range of elasticities, this demand response can actually increase the consumption of petroleum gasoline and thus lead to greater energy insecurity. The economics supporting this result and the associated policy implications are developed and discussed.
Keywords: blend wall, energy security, ethanol, resource/energy economics and policy
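Of the methods named above, the Savitzky-Golay filter has a standard implementation in SciPy. A minimal sketch of the general simulation-and-smoothing machinery (all parameter values, the price change, and the demand specification below are hypothetical illustrations, not the paper's):

```python
import numpy as np
from scipy.signal import savgol_filter  # Savitzky-Golay smoothing filter

rng = np.random.default_rng(0)

# Hypothetical parameters (for illustration only; not the paper's values).
n_draws = 10_000
price_drop = rng.normal(0.02, 0.005, n_draws)   # assumed fall in blend price
elasticity = rng.uniform(-0.8, -0.1, n_draws)   # own-price demand elasticity

# Demand response to the cheaper E15 blend: total fuel demand expands.
fuel_change = -elasticity * price_drop          # % change in total fuel use

# Net % change in petroleum gasoline: a 5-point higher ethanol share than
# E10, partially offset by the expansion in total fuel demand.
petro_change = (1 + fuel_change) * (0.85 / 0.90) - 1

# Smooth the simulated relationship between elasticity and the outcome.
order = np.argsort(elasticity)
smoothed = savgol_filter(petro_change[order], window_length=301, polyorder=3)
print(f"mean change in petroleum use: {petro_change.mean():+.2%}")
print("smoothed curve endpoints:", smoothed[0], smoothed[-1])
```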
Can the U.S. Ethanol Industry Compete in the Alternative Fuels' Market?
The U.S. ethanol fuel industry has experienced preferential treatment from federal and state governments ever since the Energy Tax Act of 1978 exempted the 10% ethanol/gasoline blend (gasohol) from the federal excise tax. Combined with a 54¢/gal ethanol import tariff, this exemption was designed to provide incentives for the establishment and development of a U.S. ethanol industry. Despite these tax exemptions, until recently, the U.S. ethanol fuel industry was unable to expand beyond a limited regional market. In the fuel-additives market, ethanol was dominated by MTBE (methyl tertiary-butyl ether). Only after MTBE was found to contaminate groundwater, and was consequently banned in many states, did the demand for ethanol expand nationally. Limit pricing on the part of MTBE refiners is one hypothesis that may explain this lack of ethanol entry into the fuel-additives market. As a test of this hypothesis, a structural vector autoregression (SVAR) model of the ethanol fuel market is developed. The results support the hypothesis of limit-pricing behavior on the part of MTBE refiners and suggest the U.S. corn-based ethanol industry is vulnerable to limit-price competition, which could recur. The dependence of corn-based ethanol price on supply determinants limits U.S. ethanol refiners' ability to price-compete with sugar cane-based ethanol refiners. Without federal support, U.S. ethanol refiners may find it difficult to compete with cheaper sugar cane-refined ethanol, chiefly from Brazil.
Keywords: resource/energy economics and policy
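A vector-autoregression analysis of this kind can be sketched with statsmodels; the series names, lag order, and synthetic data below are hypothetical placeholders rather than the paper's actual specification or identification scheme:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

# Synthetic stand-in for monthly ethanol-market series; in practice these
# would be observed prices (the column names here are placeholders).
rng = np.random.default_rng(0)
shocks = rng.normal(size=(240, 3))
data = pd.DataFrame(np.cumsum(shocks, axis=0),
                    columns=["ethanol_price", "mtbe_price", "corn_price"])

model = VAR(data.diff().dropna())          # work in first differences
results = model.fit(maxlags=12, ic="aic")  # lag order selected by AIC
print(results.summary())

# Impulse responses trace how a shock to one series propagates to the
# others; under limit pricing, an MTBE price shock should depress the
# ethanol price and discourage entry.
irf = results.irf(24)
irf.plot(orth=True)
```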
Estimating China’s Energy and Environmental Productivity Efficiency: A Parametric Hyperbolic Distance Function Approach
Since the beginning of this century, China's annual GDP growth has exceeded 9%. This growth is fueled by large increases in energy consumption, led by a coal-dominated energy structure and associated with higher sulfur dioxide emissions and industrial dust. In 2008, China accounted for over 17% of the world's total primary energy consumption and for nearly three-quarters of global energy-consumption growth. With an average annual energy growth rate over 12% since 2000, China's share of primary energy consumption will continue to increase. A consequence of this growth is that China has become the global leader in sulfur and carbon dioxide emissions. To deal with these energy and environmental challenges, the government set energy-saving and pollution-reduction targets in the 11th Five Year Plan (2006-2010): relative to 2005, by 2010, reduce national energy use per unit of GDP by 20% and reduce the country's primary pollution emissions by 10%. These targets were then disaggregated into energy-saving targets for each province. Under this disaggregated scheme, in line with the national target, 20 provinces were assigned a 20% energy-saving target, seven provinces were assigned targets below 20%, varying from 12% to 17%, and four provinces were given targets above 20%. These allocations were generally not guided by technical or economic efficiency, and thus may not be optimal from the perspectives of equity and efficiency. Historically less energy-efficient provinces may have more potential to reduce their energy consumption and pollution emissions, while more efficient provinces may have less. The major objective is to determine the optimal targets for each province required to comply with the national Five Year Plan target. A comparison of the estimated optimal targets with the current government targets will then reveal the value of incorporating economic theory into the decision calculus of setting disaggregated targets. Determining optimal targets requires consideration of both the desirable and undesirable outcomes from alternative feasible targets, and an objective is to delineate these outcomes as criteria for selection. The procedure employed is a parametric hyperbolic distance function approach with a translog specification. This procedure provides the flexibility of using energy, labor, and capital stock as inputs to produce the desirable output (GDP) and the undesirable output (sulfur dioxide emissions), and it addresses the objectives by simultaneously estimating both the desirable and undesirable outcomes. Specifically, the production frontier and environmental productivity efficiency are estimated for each province. The hyperbolic distance function enables the estimation of efficiency scores incorporating all types of inputs and outputs, and requires information only on input and output quantities, not prices, making it possible to model emissions in the production process given their nonmarket character. Based on these parametric estimates, the optimal targets are determined. The trajectory for attaining these optimal targets is determined by estimating how each province can improve its productive performance by increasing its desirable output and reducing its undesirable output while simultaneously saving energy inputs.
The results provide an empirical measurement of energy efficiency, with the maximum potential energy saving for each province at a given technology, considering the provinces' diverse economic, industry, and energy-consumption patterns. Using panel data for 29 Chinese provinces over 2000-2007, the hyperbolic distance function allows us to measure environmental productivity change over time and to decompose this change into efficiency change, the movement toward the frontier, and technical change, the shift of the frontier. These further analyses help identify the differing contributions to productivity growth for each province in China and examine how the energy-saving program will affect each province's environmental productivity growth.
Keywords: environmental productivity efficiency, hyperbolic distance function, China's energy policy, environmental economics and policy, productivity analysis, resource/energy economics and policy
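For reference, a standard formulation of the hyperbolic distance function in this literature (the paper's exact translog specification is not given in the abstract) scales the desirable output up and the undesirable output down by the same factor:

$$D_H(x, y, b) = \inf\{\theta > 0 : (x,\ y/\theta,\ \theta b) \in T\},$$

where $x$ is the input vector (energy, labor, capital stock), $y$ the desirable output (GDP), $b$ the undesirable output (SO$_2$ emissions), and $T$ the production technology set; $D_H = 1$ on the frontier, and values below one measure the feasible simultaneous expansion of $y$ and contraction of $b$.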
Unity is Strength: Enhancing Precision in Reentrancy Vulnerability Detection of Smart Contract Analysis Tools
Reentrancy is one of the most notorious vulnerabilities in smart contracts,
resulting in significant digital asset losses. However, many previous works
indicate that current Reentrancy detection tools suffer from high false
positive rates. Even worse, recent years have witnessed the emergence of new
Reentrancy attack patterns fueled by intricate and diverse vulnerability
exploit mechanisms. Unfortunately, current tools face a significant limitation
in their capacity to adapt and detect these evolving Reentrancy patterns.
Consequently, ensuring precise and highly extensible Reentrancy vulnerability
detection remains a critical challenge for existing tools. To address this
issue, we propose a tool named ReEP, designed to reduce false positives in
Reentrancy vulnerability detection. Additionally, ReEP can integrate multiple
tools, expanding its capacity for vulnerability detection. It evaluates results
from existing tools to verify vulnerability likelihood and reduce false
positives. ReEP also offers excellent extensibility, enabling the integration
of different detection tools to enhance precision and cover different
vulnerability attack patterns. We apply ReEP to eight existing
state-of-the-art Reentrancy detection tools. The average precision of these
eight tools increased from the original 0.5% to 73% without sacrificing recall.
Furthermore, ReEP exhibits robust extensibility. By integrating multiple tools,
the precision further improved to a maximum of 83.6%. These results demonstrate
that ReEP effectively unites the strengths of existing works and enhances the
precision of Reentrancy vulnerability detection tools.
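ReEP's integration of multiple detectors can be pictured as an ensemble over per-tool verdicts. The sketch below shows only that generic idea (the interfaces and the voting rule are hypothetical; ReEP verifies vulnerability likelihood rather than merely counting votes):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Finding:
    contract: str
    function: str
    tool: str

# Each detector is modeled as a function from contract source to findings;
# these wrappers are placeholders for real tools such as Mythril or Sailfish.
Detector = Callable[[str], list[Finding]]

def combine(detectors: dict[str, Detector], source: str,
            min_votes: int = 2) -> list[tuple[str, int]]:
    """Keep a reported function only if enough detectors agree on it.

    A simple vote is one way to suppress single-tool false positives.
    """
    votes: dict[str, int] = {}
    for name, detect in detectors.items():
        for f in detect(source):
            votes[f.function] = votes.get(f.function, 0) + 1
    return [(fn, n) for fn, n in votes.items() if n >= min_votes]

# Toy usage with two stub detectors that both flag `withdraw`:
stubs = {
    "toolA": lambda src: [Finding("C", "withdraw", "toolA")],
    "toolB": lambda src: [Finding("C", "withdraw", "toolB")],
}
print(combine(stubs, "contract C { ... }"))  # -> [('withdraw', 2)]
```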
SparseCoder: Identifier-Aware Sparse Transformer for File-Level Code Summarization
Code summarization aims to generate natural language descriptions of source
code, helping programmers understand and maintain it rapidly. While
previous code summarization efforts have predominantly focused on the method
level, this paper studies file-level code summarization, which can assist programmers
in understanding and maintaining large source code projects. Unlike
method-level code summarization, file-level code summarization typically
involves long source code within a single file, which makes it challenging for
Transformer-based models to capture the code semantics: because computational
complexity scales quadratically with input sequence length, the maximum input
length of these models cannot be set large enough to handle long code well.
To address this challenge, we propose
SparseCoder, an identifier-aware sparse transformer for effectively handling
long code sequences. Specifically, the SparseCoder employs a sliding window
mechanism for self-attention to model short-term dependencies and leverages the
structural information of code to capture long-term dependencies among source code
identifiers by introducing two types of sparse attention patterns named global
and identifier attention. To evaluate the performance of SparseCoder, we
construct a new dataset FILE-CS for file-level code summarization in Python.
Experimental results show that our SparseCoder model achieves state-of-the-art
performance compared with other pre-trained models, including full
self-attention and sparse models. Additionally, our model has low memory
overhead and achieves performance comparable to that of models using the full
self-attention mechanism.
Comment: To appear in SANER'2
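The sliding-window plus global and identifier attention pattern the abstract describes can be illustrated with a boolean attention mask. A minimal sketch of that generic pattern (the window size and the global/identifier positions are hypothetical, and this is not SparseCoder's exact implementation):

```python
import numpy as np

def sparse_attention_mask(seq_len: int, window: int,
                          global_pos: list[int],
                          identifier_pos: list[int]) -> np.ndarray:
    """Boolean mask: True where a query token may attend to a key token."""
    mask = np.zeros((seq_len, seq_len), dtype=bool)
    # Sliding-window attention: each token sees its local neighborhood,
    # modeling short-term dependencies.
    for i in range(seq_len):
        lo, hi = max(0, i - window), min(seq_len, i + window + 1)
        mask[i, lo:hi] = True
    # Global attention: designated tokens attend everywhere and are
    # attended to by every token.
    for g in global_pos:
        mask[g, :] = True
        mask[:, g] = True
    # Identifier attention: identifier tokens attend to one another,
    # capturing long-range links among related names.
    ids = np.array(identifier_pos)
    mask[np.ix_(ids, ids)] = True
    return mask

m = sparse_attention_mask(seq_len=512, window=64,
                          global_pos=[0], identifier_pos=[10, 87, 300])
print(m.sum(), "allowed attention pairs out of", m.size)
```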
Some clarifications about Lemaître-Tolman models of the Universe used to deal with the dark energy problem
During the past fifteen years, inhomogeneous cosmological models have been
put forward to explain the observed dimming of the SNIa luminosity without
resorting to dark energy. The simplest models are the spherically symmetric
Lemaître-Tolman (LT) solutions with a central observer. Their use must be
considered as a mere first step towards more sophisticated models. Spherical
symmetry is but a mathematical simplification, and one must regard spherically
symmetric models as exhibiting an energy density smoothed out over angles
around us. However, they have been taken at face value by some authors, who
either used them for irrelevant purposes or put them to the test as
if they were robust models of our Universe. We wish to clarify how these models
must be used in cosmology. We first use the results obtained by Iguchi and
collaborators to derive the density profiles of the pure growing and decaying
mode LT models. We then discuss the relevance of the different test proposals
in the light of the interpretation given above. We show that decaying-mode
(parabolic) LT models always exhibit an overdensity near their centre and
growing-mode (elliptic or hyperbolic) LT models, a void. This is at variance
with some statements in the literature. We dismiss all previous proposals
merely designed to test the spherical symmetry of the LT models, and we agree
that the value of and the measurement of the redshift drift are valid
tests of the models. However, we suspect that this last test, which is the best
in principle, will be more complicated to implement than usually claimed.
Comment: 18 pages, no figure, section 3 modified, results of section 3.2 changed, sections 4.3 and 4.4 added, other minor changes and references added
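For context, the Lemaître-Tolman metric discussed here takes the standard form (conventions for the arbitrary functions vary across the literature):

$$ds^2 = -c^2\,dt^2 + \frac{R'^2(r,t)}{1+2E(r)}\,dr^2 + R^2(r,t)\,d\Omega^2,$$

where $R(r,t)$ is the areal radius, $R' = \partial R/\partial r$, $d\Omega^2 = d\vartheta^2 + \sin^2\vartheta\,d\varphi^2$, and $E(r)$ is a free function setting the local curvature, with $E<0$, $E=0$, and $E>0$ corresponding to the elliptic, parabolic, and hyperbolic evolutions referred to in the abstract.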
Halo abundances and shear in void models
We study the non-linear gravitational collapse of dark matter into halos
through numerical N-body simulations of Lemaître-Tolman-Bondi void models. We
extend the halo mass function formalism to these models in a consistent way.
This extension not only compares well with the simulated data at all times and
radii, but it also gives interesting clues about the impact of the background
shear on the growth of perturbations. Our results give hints about the
possibility of constraining the background shear via cluster number counts,
which could then give rise to strong constraints on general inhomogeneous
models, of any scale.
Comment: 5 pages, 3 figures, accepted in Physics of the Dark Universe, preprint IFT-UAM/CSIC-12-3
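The halo mass function formalism the authors extend is conventionally written as (a standard form; the multiplicity function $f(\sigma)$ adopted in the paper is not specified in the abstract):

$$\frac{dn}{d\ln M} = \frac{\bar{\rho}_m}{M}\, f(\sigma)\, \left|\frac{d\ln \sigma}{d\ln M}\right|,$$

where $n$ is the comoving number density of halos of mass $M$, $\bar{\rho}_m$ the background matter density, and $\sigma(M)$ the rms linear density fluctuation smoothed on mass scale $M$; the background shear enters through its impact on the growth of perturbations.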
Turn the Rudder: A Beacon of Reentrancy Detection for Smart Contracts on Ethereum
Smart contracts are programs deployed on a blockchain and are immutable once
deployed. Reentrancy, one of the most important vulnerabilities in smart
contracts, has caused millions of dollars in financial loss. Many reentrancy
detection approaches have been proposed. It is necessary to investigate the
performance of these approaches to provide useful guidelines for their
application. In this work, we conduct a large-scale empirical study on the
capability of five well-known or recent reentrancy detection tools, including
Mythril and Sailfish. We collect 230,548 verified smart contracts from
Etherscan and use detection tools to analyze 139,424 contracts after
deduplication, which results in 21,212 contracts with reentrancy issues. Then,
we manually examine the defective functions located by the tools in the
contracts. From the examination results, we obtain 34 true positive contracts
with reentrancy and 21,178 false positive contracts without reentrancy. We also
analyze the causes of the true and false positives. Finally, we evaluate the
tools based on the two kinds of contracts. The results show that more than
99.8% of the reentrant contracts detected by the tools are false positives with
eight types of causes, and the tools can only detect the reentrancy issues
caused by call.value(), 58.8% of which can be revealed by Ethereum's
official IDE, Remix. Furthermore, we collect real-world reentrancy attacks
reported in the past two years and find that the tools fail to find any issues
in the corresponding contracts. Based on the findings, existing works on
reentrancy detection appear to have very limited capability, and researchers
should turn the rudder to discover and detect new reentrancy patterns beyond
those related to call.value().
Comment: Accepted by ICSE 2023. Dataset available at
https://github.com/InPlusLab/ReentrancyStudy-Dat
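As a toy illustration of the call.value() pattern these tools key on, a naive source-level scan might look like the following (purely illustrative; real detectors such as Mythril work on bytecode or control flow, not regexes, and this heuristic both over- and under-approximates, as the study's false-positive rates suggest):

```python
import re

# Flag lines containing a low-level call.value() send -- the classic
# reentrancy shape when it precedes the state update.
CALL_VALUE = re.compile(r"\.call\.value\s*\(")

def flag_call_value(source: str) -> list[int]:
    """Return 1-based line numbers containing a call.value() send."""
    return [i + 1 for i, line in enumerate(source.splitlines())
            if CALL_VALUE.search(line)]

example = """
function withdraw(uint amount) public {
    require(balances[msg.sender] >= amount);
    msg.sender.call.value(amount)("");  // external call before state update
    balances[msg.sender] -= amount;     // too late: reentrant call sees old balance
}
"""
print(flag_call_value(example))  # -> [4]
```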
iJTyper: An Iterative Type Inference Framework for Java by Integrating Constraint- and Statistically-based Methods
Inferring the types of API elements in incomplete code snippets (e.g., those
on Q&A forums) is a prerequisite step for working with such snippets.
Existing type inference methods can be mainly categorized as constraint-based
or statistically-based. The former imposes higher requirements on code syntax
and often suffers from low recall due to the syntactic limitation of code
snippets. The latter relies on the statistical regularities learned from a
training corpus and does not take full advantage of the type constraints in
code snippets, which may lead to low precision. In this paper, we propose an
iterative type inference framework for Java, called iJTyper, by integrating the
strengths of both constraint- and statistically-based methods. For a code
snippet, iJTyper first applies a constraint-based method and augments the code
context with the inferred types of API elements. iJTyper then applies a
statistically-based method to the augmented code snippet. The predicted
candidate types of API elements are further used to improve the
constraint-based method by reducing its pre-built knowledge base. iJTyper
iteratively executes both methods and performs code context augmentation and
knowledge base reduction until a termination condition is satisfied. The
final inference results are then obtained by combining the results of both
methods. We evaluated iJTyper on two open-source datasets. Results show that 1)
iJTyper achieves high average precision/recall of 97.31% and 92.52% on both
datasets; 2) iJTyper significantly improves the recall of two state-of-the-art
baselines, SnR and MLMTyper, by at least 7.31% and 27.44%, respectively; and 3)
iJTyper improves the average precision/recall of the popular language model,
ChatGPT, by 3.25% and 0.51% on both datasets.
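The iteration the abstract describes alternates the two methods until convergence. A minimal sketch of that control loop (the hook functions, their trivial stub bodies, and the termination test are hypothetical stand-ins, not iJTyper's actual API):

```python
# Placeholder hooks standing in for iJTyper's two methods (hypothetical).
def constraint_infer(snippet: str, kb: set[str]) -> dict[str, str]:
    # Resolve names appearing verbatim in the knowledge base, e.g.
    # "ArrayList" -> "java.util.ArrayList".
    return {n: f"java.util.{n}" for n in kb if n in snippet}

def statistical_infer(snippet: str) -> dict[str, str]:
    # A learned model would rank candidate types; this stub returns nothing.
    return {}

def augment_context(snippet: str, types: dict[str, str]) -> str:
    # Prepend inferred imports so later passes see richer context.
    return "".join(f"import {t};\n" for t in types.values()) + snippet

def reduce_kb(kb: set[str], candidates: dict[str, str]) -> set[str]:
    # Drop names already resolved by the statistical pass.
    return kb - set(candidates)

def infer_types(snippet: str, kb: set[str], max_iters: int = 5) -> dict[str, str]:
    """Alternate the two methods until the result reaches a fixed point."""
    types: dict[str, str] = {}
    for _ in range(max_iters):
        inferred = constraint_infer(snippet, kb)       # constraint-based pass
        snippet = augment_context(snippet, inferred)   # context augmentation
        candidates = statistical_infer(snippet)        # statistical pass
        kb = reduce_kb(kb, candidates)                 # knowledge-base reduction
        merged = {**candidates, **inferred}            # constraints take precedence
        if merged == types:                            # termination condition
            return merged
        types = merged
    return types

print(infer_types("List x = new ArrayList();", {"List", "ArrayList"}))
```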