Thermosonic flip chip interconnection using electroplated copper column arrays
Regioselective Formation of α-Vinylpyrroles from the Ruthenium-Catalyzed Coupling Reaction of Pyrroles and Terminal Alkynes Involving C–H Bond Activation
The cationic ruthenium catalyst Ru3(CO)12/NH4PF6 was found to be highly effective for the intermolecular coupling reaction of pyrroles and terminal alkynes to give gem-selective α-vinylpyrroles. The carbon isotope effect on the α-pyrrole carbon and the Hammett correlation from a series of para-substituted N-arylpyrroles (ρ = −0.90) indicate a rate-limiting C–C bond-formation step for the coupling reaction.
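For context, the reported ρ comes from the standard Hammett relation, reproduced below in its general form (the individual rate constants are the paper's and are not reproduced here):

    \log_{10}(k_X / k_H) = \rho \, \sigma_X

where σ_X is the substituent constant of the para substituent X and k_H is the rate constant for the unsubstituted parent. A negative slope of this magnitude (ρ = −0.90) means electron-donating substituents accelerate the reaction, consistent with positive charge developing at the reacting carbon in the rate-limiting C–C bond-forming step.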
Hydrodynamic limit of order book dynamics
In this paper, we establish a fluid limit for a two-sided Markov order book model. Our main result states that in a certain asymptotic regime, a pair of measure-valued processes representing the "sell-side shape" and "buy-side shape" of an order book converges to a pair of deterministic measure-valued processes in a certain sense. We also test our fluid approximation on data. The empirical results suggest that the approximation is reasonably good for liquidly traded stocks in certain time periods.
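To make the objects concrete, here is a minimal Python sketch of a two-sided Markov order book in the spirit of the abstract: the "shape" on each side is simply the vector of queue lengths across price levels. The rates and independence assumptions are hypothetical, chosen for illustration; this is not the paper's model or its scaling regime.

    import random

    def simulate_shapes(levels=50, steps=100_000, arrival=1.0, cancel=0.01, seed=0):
        """Toy two-sided Markov order book: at each price level, unit limit
        orders arrive at rate `arrival` and each resting order is cancelled
        at rate `cancel` (so the cancellation rate is state-dependent)."""
        rng = random.Random(seed)
        buy = [0] * levels   # buy-side shape: queue length per price level
        sell = [0] * levels  # sell-side shape
        for _ in range(steps):
            side = buy if rng.random() < 0.5 else sell
            i = rng.randrange(levels)
            # Pick the next event at level i in proportion to its rate.
            if rng.random() < arrival / (arrival + cancel * side[i]):
                side[i] += 1   # a limit order joins the queue
            elif side[i] > 0:
                side[i] -= 1   # an existing order is cancelled
        return buy, sell

In this toy model each queue fluctuates around arrival/cancel (here 100 orders), and that kind of deterministic profile is what a fluid limit singles out as order counts and event rates are scaled up together.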
Impacts of model calibration on high-latitude land-surface processes: PILPS 2(e) calibration/validation experiments
In the PILPS 2(e) experiment, the Snow Atmosphere Soil Transfer (SAST) land-surface scheme, developed from the Biosphere-Atmosphere Transfer Scheme (BATS), had difficulty accurately simulating the patterns and quantities of runoff resulting from heavy snowmelt in the high-latitude Torne-Kalix River basin (shared by Sweden and Finland). This difficulty exposes a deficiency in the model's runoff formulation. After subsurface runoff was represented and the parameters were calibrated, the accuracy of the hydrograph prediction improved substantially. However, even with accurate precipitation and runoff, the predicted soil moisture and its variation were highly model-dependent. Knowledge obtained from the experiment is discussed. © 2003 Elsevier Science B.V. All rights reserved.
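As one concrete example of how such a calibration is scored, hydrograph fits are commonly evaluated with the Nash-Sutcliffe efficiency; a minimal Python version follows. The metric choice is illustrative only and is not stated to be the one used in PILPS 2(e).

    def nse(simulated, observed):
        """Nash-Sutcliffe efficiency of a simulated hydrograph against
        observed discharge: 1.0 is a perfect fit, 0.0 is no better than
        predicting the observed mean, negative is worse than the mean."""
        mean_obs = sum(observed) / len(observed)
        sse = sum((s - o) ** 2 for s, o in zip(simulated, observed))
        var = sum((o - mean_obs) ** 2 for o in observed)
        return 1.0 - sse / var

A calibration loop then simply searches the runoff parameters for the value that maximizes this score over the simulation period.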
Handling boundary constraints for particle swarm optimization in high-dimensional search space
Despite the fact that the popular particle swarm optimizer (PSO) is currently being extensively applied to many real-world problems that often have high-dimensional and complex fitness landscapes, the effects of boundary constraints on PSO have not attracted adequate attention in the literature. However, in accordance with the theoretical analysis in [11], our numerical experiments show that in high-dimensional search spaces, particles tend to fly outside the boundary with very high probability in the first few iterations. Consequently, the method used to handle boundary violations is critical to the performance of PSO. In this study, we reveal that the widely used random and absorbing bound-handling schemes may paralyze PSO on high-dimensional and complex problems. We also explore in detail the distinct mechanisms responsible for the failures of these two bound-handling schemes. Finally, we suggest that using high-dimensional and complex benchmark functions, such as the composition functions in [19], is a prerequisite for identifying these potential problems when applying PSO to real-world applications, because certain properties of the standard benchmark functions keep such problems hidden. © 2011 Elsevier Inc. All rights reserved.
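The two schemes at issue can be stated in a few lines. Below is a hedged Python sketch of the random and absorbing bound-handling rules as they are commonly defined, applied coordinate-by-coordinate after a position update; the names and the zero-velocity detail of the absorbing rule reflect common practice, not code from the paper.

    import random

    def handle_bounds(position, velocity, lo, hi, scheme="absorbing", rng=random):
        """Repair boundary violations after a PSO position update."""
        for d in range(len(position)):
            if position[d] < lo or position[d] > hi:
                if scheme == "random":
                    # Random scheme: resample the violating coordinate
                    # uniformly inside the feasible range.
                    position[d] = rng.uniform(lo, hi)
                else:
                    # Absorbing scheme: clamp to the violated bound and zero
                    # that velocity component, so the particle sticks there.
                    position[d] = lo if position[d] < lo else hi
                    velocity[d] = 0.0
        return position, velocity

Because nearly every particle violates some coordinate early on in high dimensions, whichever rule is chosen here acts on most of the swarm at once, which is why it can dominate PSO's behavior.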
How proofs are prepared at Camelot
We study a design framework for robust, independently verifiable, and workload-balanced distributed algorithms working on a common input. An algorithm based on the framework is essentially a distributed encoding procedure for a Reed–Solomon code, which enables (a) robustness against Byzantine failures, with intrinsic error-correction and identification of failed nodes, and (b) independent randomized verification to check the entire computation for correctness, which takes essentially no more resources than each node individually contributes to the computation. The framework builds on recent Merlin–Arthur proofs of batch evaluation of Williams [Electron. Colloq. Comput. Complexity, Report TR16-002, January 2016], with the observation that Merlin's magic is not needed for batch evaluation: mere Knights can prepare the proof, in parallel, and with intrinsic error-correction.
The contribution of this paper is to show that in many cases the verifiable batch evaluation framework admits algorithms that match in total resource consumption the best known sequential algorithm for solving the problem. As our main result, we show that the $k$-cliques in an $n$-vertex graph can be counted and verified in per-node $O(n^{(\omega+\epsilon)k/6})$ time and space on $O(n^{(\omega+\epsilon)k/6})$ compute nodes, for any constant $\epsilon>0$ and positive integer $k$ divisible by 6, where $2\leq\omega<2.3728639$ is the exponent of matrix multiplication. This matches in total running time the best known sequential algorithm, due to Nešetřil and Poljak [Comment. Math. Univ. Carolin. 26 (1985) 415–419], and considerably improves its space usage and parallelizability. Further results include novel algorithms for counting triangles in sparse graphs, computing the chromatic polynomial of a graph, and computing the Tutte polynomial of a graph.
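The batch-evaluation idea is easy to see in miniature. Below is a hedged Python sketch over a toy prime field: the "Knights" tabulate the claimed values of a low-degree polynomial (a Reed–Solomon-style codeword), and a verifier checks the entire table against a single random evaluation. All names and sizes are illustrative; this conveys the flavor of the Williams-style protocol the abstract cites, not the paper's algorithm.

    import random

    P = 2_147_483_647  # Mersenne prime; all arithmetic below is over GF(P)

    def f(x):
        # The function being batch-evaluated: any fixed low-degree polynomial.
        return (x * x * x + 5 * x + 1) % P

    def interpolate_eval(xs, ys, t):
        # Evaluate at t the unique degree < len(xs) polynomial through
        # the points (xs, ys), by Lagrange interpolation over GF(P).
        total = 0
        for xi, yi in zip(xs, ys):
            num, den = 1, 1
            for xj in xs:
                if xj != xi:
                    num = num * (t - xj) % P
                    den = den * (xi - xj) % P
            total = (total + yi * num * pow(den, P - 2, P)) % P
        return total

    # Prover side ("Knights"): tabulate the claimed values f(0), ..., f(n-1).
    # The table is a Reed-Solomon-style codeword, so corrupted entries can be
    # detected and corrected given enough points.
    xs = list(range(8))
    proof = [f(x) for x in xs]

    # Verifier side: one random evaluation of f checks the whole table. If any
    # entry is wrong, the interpolant is a different polynomial of degree < 8
    # and agrees with f on at most 7 points, so the check catches the error
    # with probability about 1 - 7/P.
    t = random.randrange(P)
    assert interpolate_eval(xs, proof, t) == f(t), "batch evaluation rejected"

This mirrors the abstract's claim: preparing the table parallelizes across nodes, while verifying it costs roughly one node's share of the work.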
Model performance of downscaling 1999-2004 hydrometeorological fields to the upper Rio Grande basin using different forcing datasets
This study downscaled more than five years (1999-2004) of hydrometeorological fields over the upper Rio Grande basin (URGB) to a 4-km resolution using a regional model [the fifth-generation Pennsylvania State University-National Center for Atmospheric Research (NCAR) Mesoscale Model (MM5, version 3)] and two forcing datasets: National Centers for Environmental Prediction (NCEP)-NCAR reanalysis-1 (R1) and North America Regional Reanalysis (NARR) data. The long-term high-resolution simulation results show detailed patterns of hydroclimatological fields that are closely tied to the characteristics of the regional terrain; the most important of these patterns are precipitation localization features caused by the complex topography. In comparison with station observations, the downscaling, whichever forcing field was used, generated more accurate surface temperature and humidity fields than the Eta Model and NARR data, although it still contained marked errors, such as a negative (positive) bias in daily maximum (minimum) temperature and overestimated precipitation, especially in the cold season. Comparing the downscaling results forced by NARR and R1 with both the gridded and station observations shows that under the NARR forcing, the MM5 model produced generally better results for precipitation, temperature, and humidity than under the R1 forcing; these improvements were most apparent in winter and spring. During the warm season, although the use of NARR improved the precipitation estimates statistically at the regional (basin) scale, it substantially underestimated them over the southern upper Rio Grande basin, partly because the NARR forcing data exhibited warm and dry biases in the monsoon-active region during the simulation period, and partly because of improper domain selection. Analyses also indicate that over mountainous regions, both the Climate Prediction Center's (CPC's) gridded (0.25°) and NARR forcings underestimate precipitation in comparison with station gauge data. © 2008 American Meteorological Society.
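The bias statements above reduce to simple model-versus-station differences; a minimal Python sketch of the two standard diagnostics (mean bias and RMSE) follows, with illustrative names only.

    def mean_bias(model, obs):
        """Mean of model-minus-observation differences; the sign gives the
        direction of the bias (negative = model runs low)."""
        return sum(m - o for m, o in zip(model, obs)) / len(obs)

    def rmse(model, obs):
        """Root-mean-square error: overall magnitude of the disagreement."""
        return (sum((m - o) ** 2 for m, o in zip(model, obs)) / len(obs)) ** 0.5

Read against the abstract: a negative mean bias in daily maximum temperature means the downscaled fields run cooler than the stations at the daily peak, and a positive bias in the daily minimum means warmer-than-observed nights.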
