
    Evidence for electron transfer between graphene and non‐covalently bound π‐systems

    Hybrids of graphene and molecules possess high potential for developing materials for new applications. However, new methods to characterize such hybrids must be developed. Herein, the wet-chemical non-covalent functionalization of graphene with cationic π-systems is presented, and the interaction between graphene and the molecules is characterized in detail. A series of tricationic benzimidazolium salts with varying steric demand and counterions was synthesized, characterized, and used for the fabrication of graphene hybrids. Subsequently, the doping effects were studied. The molecules are adsorbed onto graphene and studied by Raman spectroscopy, XPS, and ToF-SIMS. The charged π-systems exert a p-doping effect on the underlying graphene. Consequently, the tricationic molecules are reduced through a partial electron transfer from graphene, a process accompanied by the loss of counterions. DFT calculations support this hypothesis, and the strong p-doping was confirmed in fabricated monolayer graphene/hybrid FET devices. The results form the basis for developing sensor applications that rely on analyte/molecule interactions and their effects on doping.

    Assessing PM2.5 Exposures with High Spatiotemporal Resolution Across the Continental United States

    A number of models have been developed to estimate PM2.5 exposure, including satellite-based aerosol optical depth (AOD) models, land-use regression, and chemical transport model simulations, all with both strengths and weaknesses. Variables such as the normalized difference vegetation index (NDVI), surface reflectance, absorbing aerosol index, and meteorological fields are also informative about PM2.5 concentrations. Our objective is to establish a hybrid model that incorporates multiple approaches and input variables to improve model performance. To account for complex atmospheric mechanisms, we used a neural network for its capacity to model nonlinearity and interactions. We incorporated convolutional layers, which aggregate neighboring information, into the neural network to account for spatial and temporal autocorrelation. We trained the neural network for the continental United States from 2000 to 2012 and tested it with left-out monitors. Ten-fold cross-validation revealed good model performance, with a total R² of 0.84 on the left-out monitors. Regional R² could be even higher for the Eastern and Central United States. Model performance remained good at low PM2.5 concentrations. We then used the trained neural network to make daily predictions of PM2.5 at 1 km × 1 km grid cells. This model allows epidemiologists to assess PM2.5 exposure in both the short term and the long term.
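
    As a rough illustration of the convolutional approach described in this abstract (not the authors' code), the sketch below shows a small PyTorch network that maps a gridded stack of predictors (e.g., AOD, NDVI, meteorology channels) to a PM2.5 estimate for the center cell. The channel count, patch size, and layer widths are illustrative assumptions.

    # Minimal sketch (not the paper's model): a small convolutional network that
    # regresses PM2.5 from a patch of gridded predictors. Channel count, patch
    # size, and layer widths are illustrative assumptions.
    import torch
    import torch.nn as nn

    class PM25ConvNet(nn.Module):
        def __init__(self, n_channels: int = 6):
            super().__init__()
            # Convolutions aggregate information from neighboring grid cells,
            # which is how the abstract describes handling spatial autocorrelation.
            self.features = nn.Sequential(
                nn.Conv2d(n_channels, 32, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.Conv2d(32, 32, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),   # collapse the spatial dimensions
            )
            self.head = nn.Linear(32, 1)   # single PM2.5 value per patch

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            z = self.features(x).flatten(1)
            return self.head(z).squeeze(-1)

    # Example: a batch of 4 patches, 6 predictor channels, 11 x 11 grid cells.
    model = PM25ConvNet(n_channels=6)
    dummy = torch.randn(4, 6, 11, 11)
    print(model(dummy).shape)  # torch.Size([4])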

    On the Unlikelihood of D-Separation

    Causal discovery aims to recover a causal graph from data generated by it; constraint-based methods do so by searching for a d-separating conditioning set of nodes in the graph via an oracle. In this paper, we provide analytic evidence that on large graphs, d-separation is a rare phenomenon, even when guaranteed to exist, unless the graph is extremely sparse. We then provide an analytic average-case analysis of the PC Algorithm for causal discovery, as well as a variant of the SGS Algorithm we call UniformSGS. We consider a set V = {v_1, ..., v_n} of nodes, and generate a random DAG G = (V, E) where (v_a, v_b) ∈ E with i.i.d. probability p_1 if a < b and 0 if a > b. We provide upper bounds on the probability that a subset of V - {x, y} d-separates x and y, conditional on x and y being d-separable; our upper bounds decay exponentially fast to 0 as |V| → ∞. For the PC Algorithm, while it is known that its worst-case guarantees fail on non-sparse graphs, we show that the same is true for the average case, and that the sparsity requirement is quite demanding: for good performance, the density must go to 0 as |V| → ∞ even in the average case. For UniformSGS, while it is known that the running time is exponential for existing edges, we show that in the average case, this is the expected running time for most non-existing edges as well.
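
    To make the random-DAG model concrete, here is a small sketch (not the paper's code) that samples the model from the abstract and empirically estimates how often a random conditioning set d-separates a non-adjacent pair. It assumes a NetworkX version (>= 2.8) that provides nx.d_separated; all parameter values are illustrative.

    # Minimal sketch: edge (v_a, v_b) present with probability p1 when a < b,
    # then estimate how often a random conditioning set d-separates a pair.
    import random
    import networkx as nx

    def random_dag(n: int, p1: float, seed: int = 0) -> nx.DiGraph:
        rng = random.Random(seed)
        G = nx.DiGraph()
        G.add_nodes_from(range(n))
        for a in range(n):
            for b in range(a + 1, n):
                if rng.random() < p1:
                    G.add_edge(a, b)   # edges only point from lower to higher index
        return G

    def fraction_d_separated(n=30, p1=0.2, trials=200, seed=1):
        rng = random.Random(seed)
        G = random_dag(n, p1, seed)
        hits = valid = 0
        for _ in range(trials):
            x, y = rng.sample(range(n), 2)
            if G.has_edge(x, y) or G.has_edge(y, x):
                continue   # adjacent pairs can never be d-separated
            valid += 1
            z = {v for v in range(n) if v not in (x, y) and rng.random() < 0.5}
            hits += nx.d_separated(G, {x}, {y}, z)   # assumes NetworkX >= 2.8
        return hits / max(valid, 1)

    print(fraction_d_separated())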

    Optimized Interpolation Attacks on LowMC

    LowMC is a collection of block cipher families introduced at Eurocrypt 2015 by Albrecht et al. Its design is optimized for instantiations of multi-party computation, fully homomorphic encryption, and zero-knowledge proofs. A unique feature of LowMC is that its internal affine layers are chosen at random, and thus each block cipher family contains a huge number of instances. The Eurocrypt paper proposed two specific block cipher families of LowMC, having 80-bit and 128-bit keys. In this paper, we mount interpolation attacks (algebraic attacks introduced by Jakobsen and Knudsen) on LowMC, and show that a practically significant fraction of 2^{-38} of its 80-bit key instances could be broken 2^{23} times faster than exhaustive search. Moreover, essentially all instances that are claimed to provide 128-bit security could be broken about 1000 times faster. In order to obtain these results, we had to develop novel techniques and optimize the original interpolation attack in new ways. While some of our new techniques exploit specific internal properties of LowMC, others are more generic and could be applied, in principle, to any block cipher.

    Min-Cost Bipartite Perfect Matching with Delays

    In the min-cost bipartite perfect matching with delays (MBPMD) problem, requests arrive online at points of a finite metric space. Each request is either positive or negative and has to be matched to a request of opposite polarity. As opposed to traditional online matching problems, the algorithm does not have to serve requests as they arrive, and may choose to match them later at a cost. Our objective is to minimize the sum of the distances between matched pairs of requests (the connection cost) and the sum of the waiting times of the requests (the delay cost). This objective exhibits a natural tradeoff between minimizing the distances and the cost of waiting for better matches. This tradeoff appears in many real-life scenarios, notably ride-sharing platforms. MBPMD is related to its non-bipartite variant, min-cost perfect matching with delays (MPMD), in which each request can be matched to any other request. MPMD was introduced by Emek et al. (STOC'16), who showed an O(log^2(n)+log(Delta))-competitive randomized algorithm on n-point metric spaces with aspect ratio Delta. Our contribution is threefold. First, we present a new lower bound construction for MPMD and MBPMD. We get a lower bound of Omega(sqrt(log(n)/log(log(n)))) on the competitive ratio of any randomized algorithm for MBPMD. For MPMD, we improve the lower bound from Omega(sqrt(log(n))) (shown by Azar et al., SODA'17) to Omega(log(n)/log(log(n))), thus almost matching their upper bound of O(log(n)). Second, we adapt the algorithm of Emek et al. to the bipartite case, and provide a simplified analysis that improves the competitive ratio to O(log(n)). The key ingredient of the algorithm is an O(h)-competitive randomized algorithm for MBPMD on weighted trees of height h. Third, we provide an O(h)-competitive deterministic algorithm for MBPMD on weighted trees of height h. This algorithm is obtained by adapting the algorithm for MPMD by Azar et al. to the apparently more complicated bipartite setting.
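
    As a concrete reading of the objective defined in this abstract (connection cost plus delay cost), here is a tiny sketch that scores a given matching of timestamped requests on a line metric. It is not any of the paper's algorithms; the data structure and names are illustrative.

    # Minimal sketch of the MBPMD objective: sum of connection distances plus
    # total waiting time, for a matching produced by some online algorithm.
    from dataclasses import dataclass

    @dataclass
    class Request:
        arrival: float      # arrival time
        position: float     # location in a simple line metric (illustrative)
        positive: bool      # polarity: must be matched to the opposite polarity

    def mbpmd_cost(requests, matching):
        """matching: list of (i, j, match_time) pairing opposite-polarity requests i and j."""
        cost = 0.0
        for i, j, t in matching:
            assert requests[i].positive != requests[j].positive
            cost += abs(requests[i].position - requests[j].position)       # connection cost
            cost += (t - requests[i].arrival) + (t - requests[j].arrival)  # delay cost
        return cost

    reqs = [Request(0.0, 1.0, True), Request(2.0, 4.0, False)]
    print(mbpmd_cost(reqs, [(0, 1, 2.0)]))  # distance 3 + waiting 2 + 0 = 5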

    Fine Particulate Matter Predictions Using High Resolution Aerosol Optical Depth (AOD) Retrievals

    To date, spatial-temporal patterns of particulate matter (PM) within urban areas have primarily been examined using models. On the other hand, satellites extend spatial coverage, but their spatial resolution is too coarse. In order to address this issue, here we report on spatial variability in PM levels derived from the high-resolution (1 km) AOD product of the Multi-Angle Implementation of Atmospheric Correction (MAIAC) algorithm developed for the MODIS satellite. We apply day-specific calibrations of AOD data to predict PM2.5 concentrations within the New England area of the United States. To improve the accuracy of our model, land use and meteorological variables were incorporated. We used inverse probability weighting (IPW) to account for nonrandom missingness of AOD and nested regions within days to capture spatial variation. With this approach we can control for the inherent day-to-day variability in the AOD-PM2.5 relationship, which depends on time-varying parameters such as particle optical properties, vertical and diurnal concentration profiles, and ground surface reflectance, among others. Out-of-sample "ten-fold" cross-validation was used to quantify the accuracy of model predictions. Our results show that the model-predicted PM2.5 mass concentrations are highly correlated with the actual observations, with an out-of-sample R² of 0.89. Furthermore, our study shows that the model captures the pollution levels along highways and many urban locations, thereby extending our ability to investigate the spatial patterns of urban air quality, such as examining exposures in areas with high traffic. Our results also show high accuracy within the cities of Boston and New Haven, thereby indicating that MAIAC data can be used to examine intra-urban exposure contrasts in PM2.5 levels.
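
    To illustrate the two ideas named in the abstract -- a day-specific AOD calibration and inverse probability weighting for nonrandom AOD missingness -- here is a minimal sketch (not the authors' model). All data, names, and parameter values are illustrative assumptions.

    # Minimal sketch: weighted least squares for one day, with IPW weights
    # 1 / P(AOD observed) up-weighting sites whose AOD is rarely retrieved.
    import numpy as np

    def weighted_daily_calibration(aod, pm25, p_observed):
        """Fit PM2.5 ~ a + b*AOD for a single day by weighted least squares."""
        w = 1.0 / np.clip(p_observed, 1e-3, 1.0)   # inverse probability weights
        X = np.column_stack([np.ones_like(aod), aod])
        # Solve the weighted normal equations (X' W X) beta = X' W y.
        WX = X * w[:, None]
        beta = np.linalg.solve(X.T @ WX, WX.T @ pm25)
        return beta  # [intercept, AOD slope] for that day

    rng = np.random.default_rng(0)
    aod = rng.uniform(0.05, 0.6, 200)
    pm25 = 2.0 + 25.0 * aod + rng.normal(0, 1.5, 200)
    p_obs = rng.uniform(0.3, 0.9, 200)             # estimated observation probabilities
    print(weighted_daily_calibration(aod, pm25, p_obs))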

    Generic Attacks on Hash Combiners

    Hash combiners are a practical way to make cryptographic hash functions more tolerant to future attacks and compatible with existing infrastructure. A combiner combines two or more hash functions in a way that is hopefully more secure than each of the underlying hash functions, or at least remains secure as long as one of them is secure. Two classical hash combiners are the exclusive-or (XOR) combiner H_1(M) ⊕ H_2(M) and the concatenation combiner H_1(M) ‖ H_2(M). Both of them process the same message using the two underlying hash functions in parallel. Apart from parallel combiners, there are also cascade constructions that sequentially call the underlying hash functions to process the message repeatedly, such as Hash-Twice H_2(H_1(IV, M), M) and the Zipper hash H_2(H_1(IV, M), M←), where M← is the reverse of the message M. In this work, we study the security of these hash combiners by devising the best-known generic attacks. The results show that the security of most of the combiners is not as high as commonly believed. We summarize our attacks and their computational complexities (ignoring the polynomial factors) as follows:
    1. Several generic preimage attacks on the XOR combiner:
    -- A first attack with a best-case complexity of 2^{5n/6}, obtained for messages of length 2^{n/3}. It relies on a novel technical tool named the Interchange Structure. It is applicable to combiners whose underlying hash functions follow the Merkle-Damgård construction or the HAIFA framework.
    -- A second attack with a best-case complexity of 2^{2n/3}, obtained for messages of length 2^{n/2}. It exploits properties of functional graphs of random mappings. It achieves a significant improvement over the first attack but is only applicable when the underlying hash functions use the Merkle-Damgård construction.
    -- An improvement upon the second attack with a best-case complexity of 2^{5n/8}, obtained for messages of length 2^{5n/8}. It further exploits properties of functional graphs of random mappings and uses longer messages.
    These attacks show a rather surprising result: regarding preimage resistance, the sum of two n-bit narrow-pipe hash functions following the considered constructions can never provide n-bit security.
    2. A generic second-preimage attack on the concatenation combiner of two Merkle-Damgård hash functions. This attack finds second preimages faster than 2^n for challenges longer than 2^{2n/7} and has a best-case complexity of 2^{3n/4}, obtained for challenges of length 2^{3n/4}. It also exploits properties of functional graphs of random mappings.
    3. The first generic second-preimage attack on the Zipper hash with underlying hash functions following the Merkle-Damgård construction. The best-case complexity is 2^{3n/5}, obtained for challenge messages of length 2^{2n/5}.
    4. An improved generic second-preimage attack on Hash-Twice with underlying hash functions following the Merkle-Damgård construction. The best-case complexity is 2^{13n/22}, obtained for challenge messages of length 2^{13n/22}.
    The last three attacks show that regarding second-preimage resistance, the concatenation and cascade of two n-bit narrow-pipe Merkle-Damgård hash functions do not provide much more security than can be provided by a single n-bit hash function. Our main technical contributions include the following:
    1. The interchange structure, which enables simultaneously controlling the behaviours of two hash computations sharing the same input.
    2. The simultaneous expandable message, which is a set of messages with lengths covering a whole appropriate range that form a multi-collision for both of the underlying hash functions.
    3. New ways to exploit the properties of functional graphs of random mappings generated by fixing the message block input to the underlying compression functions.
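
    For readers unfamiliar with the two parallel combiners named above, here is a minimal sketch using hashlib primitives as stand-ins for the underlying hash functions H1 and H2 (the choice of SHA-256 and SHA3-256 is an illustrative assumption, not from the paper).

    # Minimal sketch of the XOR and concatenation combiners.
    import hashlib

    def xor_combiner(msg: bytes) -> bytes:
        """XOR combiner: byte-wise XOR of H1(M) and H2(M) (both 32 bytes here)."""
        h1 = hashlib.sha256(msg).digest()
        h2 = hashlib.sha3_256(msg).digest()
        return bytes(a ^ b for a, b in zip(h1, h2))

    def concat_combiner(msg: bytes) -> bytes:
        """Concatenation combiner: H1(M) || H2(M)."""
        return hashlib.sha256(msg).digest() + hashlib.sha3_256(msg).digest()

    m = b"example message"
    print(xor_combiner(m).hex())
    print(concat_combiner(m).hex())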

    Spatiotemporal Prediction of Fine Particulate Matter Using High-Resolution Satellite Images in the Southeastern US 2003-2011

    Numerous studies have demonstrated that fine particulate matter (PM2.5, particles smaller than 2.5 micrometers in aerodynamic diameter) is associated with adverse health outcomes. The use of ground monitoring stations of PM2.5 to assess personal exposure, however, induces measurement error. Land-use regression provides spatially resolved predictions, but land-use terms do not vary temporally. Meanwhile, the advent of satellite-retrieved aerosol optical depth (AOD) products has made it possible to predict the spatial and temporal patterns of PM2.5 exposures. In this paper, we used AOD data together with other PM2.5 predictors, such as meteorological variables and land-use terms, and spatial smoothing to predict daily concentrations of PM2.5 at a 1 sq km resolution across the Southeastern United States, including the seven states of Georgia, North Carolina, South Carolina, Alabama, Tennessee, Mississippi, and Florida, for the years 2003 to 2011. We divided the study area into three regions and applied separate mixed-effect models to calibrate AOD using ground PM2.5 measurements and other spatiotemporal predictors. Using 10-fold cross-validation, we obtained out-of-sample R² values of 0.77, 0.81, and 0.70, with square roots of the mean squared prediction errors of 2.89, 2.51, and 2.82 μg/m³ for regions 1, 2, and 3, respectively. The slopes of the relationships between predicted PM2.5 and held-out measurements were approximately 1, indicating no bias between the observed and modeled PM2.5 concentrations. The predictions can be used in epidemiological studies investigating the effects of both acute and chronic exposures to PM2.5. Our model results will also extend the existing studies on PM2.5, which have mostly focused on urban areas because of the paucity of monitors in rural areas.
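
    As a small illustration of the 10-fold cross-validation summary reported above (not the authors' pipeline), the sketch below fits a simple PM2.5 ~ AOD calibration on nine folds, predicts the held-out fold, and reports out-of-sample R² and RMSE. The data are simulated and the simple linear model is an assumption.

    # Minimal sketch of 10-fold cross-validated out-of-sample R^2 and RMSE.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import KFold

    rng = np.random.default_rng(0)
    aod = rng.uniform(0.05, 0.6, (1000, 1))
    pm25 = 3.0 + 22.0 * aod[:, 0] + rng.normal(0, 2.5, 1000)

    preds = np.empty_like(pm25)
    for train_idx, test_idx in KFold(n_splits=10, shuffle=True, random_state=0).split(aod):
        model = LinearRegression().fit(aod[train_idx], pm25[train_idx])
        preds[test_idx] = model.predict(aod[test_idx])

    ss_res = np.sum((pm25 - preds) ** 2)
    ss_tot = np.sum((pm25 - pm25.mean()) ** 2)
    print("out-of-sample R^2:", 1 - ss_res / ss_tot)
    print("RMSE:", np.sqrt(np.mean((pm25 - preds) ** 2)))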

    A New Hybrid Spatio-temporal Model for Estimating Daily Multi-year PM2.5 Concentrations Across Northeastern USA Using High Resolution Aerosol Optical Depth Data

    The use of satellite-based aerosol optical depth (AOD) to estimate fine particulate matter (PM2.5) for epidemiology studies has increased substantially over the past few years. These recent studies often report moderate predictive power, which can generate downward bias in effect estimates. In addition, AOD measurements have only moderate spatial resolution and have substantial missing data. We make use of recent advances in MODIS satellite data processing algorithms (Multi-Angle Implementation of Atmospheric Correction, MAIAC), which allow us to use 1 km (versus the currently available 10 km) resolution AOD data. We developed and cross-validated models to predict daily PM2.5 at a 1 × 1 km resolution across the northeastern USA (New England, New York, and New Jersey) for the years 2003-2011, allowing us to better differentiate daily and long-term exposure between urban, suburban, and rural areas. Additionally, we developed an approach that allows us to generate daily high-resolution 200 m localized predictions representing deviations from the 1 × 1 km grid predictions. We used mixed models regressing PM2.5 measurements against day-specific random intercepts, and fixed and random AOD and temperature slopes. We then used generalized additive mixed models with spatial smoothing to generate grid cell predictions when AOD was missing. Finally, to get the 200 m localized predictions, we regressed the residuals from the final model for each monitor against local spatial and temporal variables at each monitoring site. Our model performance was excellent (mean out-of-sample R² = 0.88). The spatial and temporal components of the out-of-sample results also presented very good fits to the withheld data (R² = 0.87 and R² = 0.87, respectively). In addition, our results revealed very little bias in the predicted concentrations (slope of predictions versus withheld observations = 0.99). Our daily model results show high predictive accuracy at high spatial resolutions and will be useful in reconstructing exposure histories for epidemiological studies across this region.
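
    A minimal sketch of the first calibration stage described above -- PM2.5 regressed on AOD and temperature with a day-specific random intercept and random AOD/temperature slopes -- assuming the statsmodels mixed-model API; column names and the simulated data are illustrative, not the authors' model.

    # Minimal sketch: day-specific mixed model for PM2.5 ~ AOD + temperature.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n_days, n_sites = 40, 25
    df = pd.DataFrame({
        "day": np.repeat(np.arange(n_days), n_sites),
        "aod": rng.uniform(0.05, 0.6, n_days * n_sites),
        "temp": rng.normal(15, 8, n_days * n_sites),
    })
    day_intercept = rng.normal(0, 2, n_days)[df["day"]]
    df["pm25"] = 5 + day_intercept + 20 * df["aod"] + 0.1 * df["temp"] + rng.normal(0, 1.5, len(df))

    # Random intercept plus random AOD and temperature slopes, grouped by day.
    model = smf.mixedlm("pm25 ~ aod + temp", df, groups=df["day"], re_formula="~aod + temp")
    result = model.fit()
    print(result.summary())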

    Computational Lattice-Gas Modeling of the Electrosorption of Small Molecules and Ions

    We present two recent applications of lattice-gas modeling techniques to electrochemical adsorption on catalytically active metal substrates: urea on Pt(100) and (bi)sulfate on Rh(111). Both involve the specific adsorption of small molecules or ions on well-characterized single-crystal electrodes, and they provide a particularly good fit between the adsorbate geometry and the substrate structure. The close geometric fit facilitates the formation of ordered submonolayer adsorbate phases in a range of electrode potential positive of the range in which an adsorbed monolayer of hydrogen is stable. In both systems the ordered-phase region is separated from the adsorbed-hydrogen region by a phase transition, signified in cyclic voltammograms by a sharp current peak. Based on data from in situ radiochemical surface concentration measurements, cyclic voltammetry, and scanning tunneling microscopy, and ex situ Auger electron spectroscopy and low-energy electron diffraction, we have developed specific lattice-gas models for the two systems. These models were studied by group-theoretical ground-state calculations and numerical Monte Carlo simulations, and effective lattice-gas interaction parameters were determined so as to provide agreement with experiments.
    Comment: 17 pp. uuencoded postscript, FSU-SCRI-94C-9
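
    As a generic illustration of the Monte Carlo technique mentioned in this abstract (not the paper's fitted model), here is a sketch of a lattice-gas Metropolis sweep with a nearest-neighbor interaction and an electrochemical-potential term; all parameter values are illustrative, not the urea/Pt(100) or sulfate/Rh(111) parameters.

    # Minimal lattice-gas Metropolis sketch: occupation n in {0, 1} on an L x L
    # square lattice, Hamiltonian H = phi * sum_<ij> n_i n_j - mu * sum_i n_i.
    import numpy as np

    def metropolis_sweep(occ, phi=-0.1, mu=0.05, T=0.025, rng=np.random.default_rng(0)):
        """One Monte Carlo sweep over an L x L occupation array (entries 0 or 1)."""
        L = occ.shape[0]
        for _ in range(L * L):
            i, j = rng.integers(0, L, size=2)
            neighbors = (occ[(i + 1) % L, j] + occ[(i - 1) % L, j]
                         + occ[i, (j + 1) % L] + occ[i, (j - 1) % L])
            delta_n = 1 - 2 * occ[i, j]                  # flip 0 -> 1 or 1 -> 0
            dE = delta_n * (phi * neighbors - mu)        # energy change of the flip
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                occ[i, j] += delta_n
        return occ

    occ = np.zeros((32, 32), dtype=int)
    for sweep in range(200):
        metropolis_sweep(occ)
    print("coverage:", occ.mean())   # fraction of occupied adsorption sites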
    • 
