
    Fine-Grained Derandomization: From Problem-Centric to Resource-Centric Complexity

    We show that popular hardness conjectures about problems from the field of fine-grained complexity theory imply structural results for resource-based complexity classes. Namely, we show that if either k-Orthogonal Vectors or k-CLIQUE requires n^{εk} time to count on randomized machines, for some constant ε > 1/2 and for all but finitely many input lengths (note that these conjectures are significantly weaker than the usual ones made on these problems), then we have the following derandomizations:
    - BPP can be decided in polynomial time using only n^α random bits on average over any efficient input distribution, for any constant α > 0;
    - BPP can be decided in polynomial time with no randomness on average over the uniform distribution.
    This answers in the positive an open question of Ball et al. (STOC '17) of whether derandomization can be achieved from conjectures of fine-grained complexity theory. More strongly, these derandomizations improve over all previous ones achieved from worst-case uniform assumptions by succeeding on all but finitely many input lengths; previously, derandomizations from worst-case uniform assumptions were only known to succeed on infinitely many input lengths. It is specifically the structure and moderate hardness of the k-Orthogonal Vectors and k-CLIQUE problems that make removing this restriction possible. Via this uniform derandomization, we connect the problem-centric and resource-centric views of complexity theory by showing that exact hardness assumptions about specific problems like k-CLIQUE imply quantitative and qualitative relationships between randomized and deterministic time. This can be viewed either as a barrier to proving some of the main conjectures of fine-grained complexity theory, lest we achieve a major breakthrough in unconditional derandomization, or, optimistically, as a route to attaining such derandomizations by working on very concrete and weak conjectures about specific problems.
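    To make the counting problem in the assumption concrete, the following is a minimal sketch (not from the paper; the function name and toy data are illustrative) of brute-force k-Orthogonal Vectors counting. The naive algorithm runs in roughly n^k·d time, and the conjecture used above asserts that even randomized algorithms cannot count much faster than n^{εk}.

```python
# Illustrative sketch only: brute-force counting of k-orthogonal tuples.
# A tuple (v1, ..., vk), one vector from each list, is orthogonal if the
# coordinate-wise product of its vectors is zero in every coordinate.
from itertools import product

def count_k_orthogonal(lists):
    """Brute force: O(n^k * d) for k lists of n vectors of dimension d."""
    d = len(lists[0][0])
    count = 0
    for tup in product(*lists):
        if all(any(v[i] == 0 for v in tup) for i in range(d)):
            count += 1
    return count

# Tiny usage example: k = 2 lists of two 3-dimensional 0/1 vectors each.
A = [(1, 0, 1), (0, 1, 0)]
B = [(1, 1, 0), (0, 0, 1)]
print(count_k_orthogonal([A, B]))  # number of orthogonal pairs (a, b)
```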

    Core level binding energy for nitrogen doped char: XPS deconvolution analysis from first principles

    Amorphous carbon produced from lignocellulosic materials has received much attention in recent years because of its applications in environmental and agricultural management, with potential to sequester carbon, serve as a soil amendment, and improve soil aggregation. Modern engineered amorphous carbons with promising properties, such as porous structure, surface functionalities (O, N, P, S) and layers with a large number of defects, are used in the fields of adsorption and catalysis. There is growing interest in the production of nitrogen-doped carbonaceous materials because of their excellent properties in a variety of applications such as carbon electrodes, heterogeneous catalysis, adsorption and batteries. However, quantifying the surface nitrogen and oxygen content in amorphous nitrogen-doped carbons via deconvolution of C 1s X-ray photoelectron spectroscopy (XPS) spectra remains difficult due to limited information in the literature: no suitable method exists to accurately correlate both the nitrogen and oxygen content to the carbon (C 1s) XPS spectrum. To improve the interpretation of spectra, the C 1s, N 1s and O 1s core level energy shifts have been calculated for various nitrogenated carbon structures from first principles by performing density functional theory (DFT) based calculations. Furthermore, we propose a new method to improve the self-consistency of the XPS interpretation based on a seven-peak C 1s deconvolution (3 C-C peaks, 3 C-N/C-O peaks, and a π-π* transition peak). With the DFT calculations, spectral components arising from surface-defect carbons could be distinguished from aromatic sp2 carbon. The deconvolution method proposed provides C/(N+O) ratios in very good agreement (error less than 5%) with those obtained from the total C 1s, N 1s and O 1s peaks. Our deconvolution strategy provides a simple guideline for obtaining high-quality fits to experimental data on the basis of a careful evaluation of experimental conditions and results.
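    To illustrate what a multi-component deconvolution of this kind looks like computationally, here is a minimal sketch (not the authors' code): seven Gaussian components fitted by least squares to a synthetic, background-subtracted C 1s spectrum. The binding-energy positions, widths and data below are placeholder assumptions, not values from the paper.

```python
# Illustrative sketch only: seven-peak least-squares fit of a C 1s spectrum.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, center, sigma):
    return amp * np.exp(-0.5 * ((x - center) / sigma) ** 2)

# Assumed component centers (binding energies, eV): three C-C type peaks,
# three C-N/C-O type peaks and one pi-pi* shake-up feature.
CENTERS = [284.4, 285.0, 285.6, 286.3, 287.5, 288.8, 291.0]

def seven_peak_model(x, *params):
    """Sum of seven Gaussians; params = (amp1, sigma1, ..., amp7, sigma7)."""
    y = np.zeros_like(x)
    for i, center in enumerate(CENTERS):
        amp, sigma = params[2 * i], params[2 * i + 1]
        y += gaussian(x, amp, center, sigma)
    return y

# Synthetic spectrum standing in for measured, background-subtracted data.
x = np.linspace(282.0, 294.0, 600)
rng = np.random.default_rng(0)
true_params = [v for c in CENTERS for v in (1000.0 / c, 0.7)]
y = seven_peak_model(x, *true_params) + rng.normal(0.0, 0.05, x.size)

p0 = [1.0, 0.8] * len(CENTERS)                       # initial guesses
popt, _ = curve_fit(seven_peak_model, x, y, p0=p0)   # fitted amps and widths
areas = np.array([popt[2 * i] * popt[2 * i + 1] * np.sqrt(2 * np.pi)
                  for i in range(len(CENTERS))])
print("relative peak areas:", np.round(areas / areas.sum(), 3))
```

    Relative component areas obtained this way are what feed into ratios such as C/(N+O); in a real workflow a background subtraction (for example Shirley-type) and pseudo-Voigt line shapes would typically replace the toy choices above.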

    Synergistic effects between nitrogen functionalities and metals content on the removal of phosphate ions

    The release of phosphate ions in runoff is today a major threat to the environment and to humans. Therefore, it is vital to develop effective technologies to remove phosphate ions from aqueous solutions before they are discharged into runoff and natural water bodies. This study aims to evaluate and propose a mechanism of phosphate adsorption using nitrogen- and metal-functionalized chars. In order to isolate the contribution of individual components of lignocellulosic biomass, simple cellulose was used for the char production. Nitrogen-doped chars were produced by annealing cellulose under ammonia gas at different temperatures (500, 600, 700, 800, 850 and 900 ℃). The chars were characterized by elemental and proximate analysis, gas physisorption, scanning electron microscopy and X-ray photoelectron spectroscopy. These samples were subsequently used for phosphate adsorption. Characterization of the resulting chars shows an increase in nitrogen content, with the highest value for the char produced at 800 ℃ (12.5 wt%) and the maximum surface area for the char produced at 900 ℃ (1314 m²/g). To evaluate the effect of nitrogen and metals on phosphate adsorption, three sets of chars were produced at 800 ℃: char with magnesium and nitrogen (Mg_N_char), char with nitrogen only (N_char) and char with magnesium only (Mg_char). The results show that the Mg_N_char sample exhibits a maximum adsorption capacity of 340 mg/g, whereas the Mg_char and N_char samples give adsorption capacities of 7.8 mg/g and 21.4 mg/g, respectively. These results demonstrate that the presence of magnesium and nitrogen in chars is very effective in the retention of phosphate ions. Other metals such as Fe and Ca combined with nitrogen will also be tested; details of the results will be presented at the conference.

    Average-Case Fine-Grained Hardness

    We present functions that can be computed in some fixed polynomial time but are hard on average for any algorithm that runs in slightly smaller time, assuming widely-conjectured worst-case hardness for problems from the study of fine-grained complexity. Unconditional constructions of such functions are known from before (Goldmann et al., IPL '94), but these have been canonical functions that have not found further use, while our functions are closely related to well-studied problems and have considerable algebraic structure. We prove our hardness results in each case by showing fine-grained reductions from solving one of three problems -- namely, Orthogonal Vectors (OV), 3SUM, and All-Pairs Shortest Paths (APSP) -- in the worst case to computing our function correctly on a uniformly random input. The conjectured hardness of OV and 3SUM then gives us functions that require n^{2-o(1)} time to compute on average, and that of APSP gives us a function that requires n^{3-o(1)} time. Using the same techniques we also obtain a conditional average-case time hierarchy of functions. Based on the average-case hardness and structural properties of our functions, we outline the construction of a Proof of Work scheme and discuss possible approaches to constructing fine-grained One-Way Functions. We also show how our reductions make conjectures regarding the worst-case hardness of the problems we reduce from (and consequently the Strong Exponential Time Hypothesis) heuristically falsifiable in a sense similar to that of (Naor, CRYPTO '03).
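    To make the "considerable algebraic structure" concrete, the following is a minimal sketch (assumed prime modulus and toy dimensions; not the paper's exact construction) of the standard Orthogonal Vectors counting polynomial over a prime field. On 0/1 inputs it counts orthogonal pairs, and its low degree is the kind of structure that such worst-case to average-case reductions exploit.

```python
# Illustrative sketch only: a low-degree polynomial over F_p that, on Boolean
# inputs, counts orthogonal pairs of vectors.
P = 1_000_003  # assumed prime modulus, larger than the number of pairs

def ov_polynomial(U, V, p=P):
    """Evaluate F(U, V) = sum_{u in U, v in V} prod_l (1 - u_l * v_l) mod p."""
    total = 0
    for u in U:
        for v in V:
            term = 1
            for ul, vl in zip(u, v):
                term = term * (1 - ul * vl) % p
            total = (total + term) % p
    return total

# On 0/1 inputs this counts orthogonal pairs between the two lists; as a
# polynomial over F_p it can also be evaluated on uniformly random field
# elements, which is where the average-case hardness statements live.
U = [(1, 0, 1), (0, 1, 0)]
V = [(0, 1, 0), (1, 0, 0)]
print(ov_polynomial(U, V))
```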

    Proofs of Useful Work

    We give Proofs of Work (PoWs) whose hardness is based on a wide array of computational problems, including Orthogonal Vectors, 3SUM, All-Pairs Shortest Path, and any problem that reduces to them (this includes deciding any graph property that is statable in first-order logic). This results in PoWs whose completion does not waste energy but instead is useful for the solution of computational problems of practical interest. The PoWs that we propose are based on delegating the evaluation of low-degree polynomials originating from the study of average-case fine-grained complexity. We prove that, beyond being hard on the average (based on worst-case hardness assumptions), the task of evaluating our polynomials cannot be amortized across multiple instances. For applications such as Bitcoin, which use PoWs on a massive scale, energy is typically wasted in huge proportions. We give a framework that can utilize such otherwise wasteful work. Note: An updated version of this paper is available at https://eprint.iacr.org/2018/559. The update is to accommodate the fact (pointed out by anonymous reviewers) that the definition of Proof of Useful Work in this paper is already satisfied by a generic naive construction.

    Proofs of Work from Worst-Case Assumptions

    We give Proofs of Work (PoWs) whose hardness is based on well-studied worst-case assumptions from fine-grained complexity theory. This extends the work of (Ball et al., STOC '17), which presents PoWs based on the Orthogonal Vectors, 3SUM, and All-Pairs Shortest Path problems. These, however, were presented as a 'proof of concept' of provably secure PoWs and did not fully meet the requirements of a conventional PoW: namely, it was not shown that multiple proofs could not be generated faster than generating each individually. We use the considerable algebraic structure of these PoWs to prove that this non-amortizability of multiple proofs does in fact hold, and further show that the PoWs' structure can be exploited in ways previous heuristic PoWs could not. This creates full PoWs that are provably hard from worst-case assumptions (previously, PoWs were either based only on heuristic assumptions or on much stronger cryptographic assumptions (Bitansky et al., ITCS '16)) while still retaining significant structure to enable extra properties of our PoWs. Namely, we show that the PoWs of (Ball et al., STOC '17) can be modified to have much faster verification time, can be proved in zero knowledge, and more. Finally, as our PoWs are based on evaluating low-degree polynomials originating from average-case fine-grained complexity, we prove an average-case direct sum theorem for the problem of evaluating these polynomials, which may be of independent interest. For our context, this implies the required non-amortizability of our PoWs.

    Blood Biomarkers to Predict Long-Term Mortality after Ischemic Stroke

    Biomarker; Endostatin; Ischemic stroke
    Stroke is a major cause of disability and death globally, and prediction of mortality represents a crucial challenge. We aimed to identify blood biomarkers measured during acute ischemic stroke that could predict long-term mortality. Nine hundred and forty-one ischemic stroke patients were prospectively recruited in the Stroke-Chip study. Post-stroke mortality was evaluated during a median 4.8-year follow-up. A 14-biomarker panel was analyzed by immunoassays in blood samples obtained at hospital admission. Biomarkers were normalized and standardized using Z-scores. Multiple Cox regression models were used to identify clinical variables and biomarkers independently associated with long-term mortality and mortality due to stroke. In the multivariate analysis, the independent predictors of long-term mortality were age, female sex, hypertension, glycemia, and baseline National Institutes of Health Stroke Scale (NIHSS) score. Independent blood biomarkers predictive of long-term mortality were endostatin > quartile 2, tumor necrosis factor receptor-1 (TNF-R1) > quartile 2, and interleukin (IL)-6 > quartile 2. The risk of mortality when these three biomarkers were combined increased up to 69%. The addition of the biomarkers to clinical predictors improved the discrimination (integrated discrimination improvement (IDI) 0.022 (0.007–0.048), p < …). … > quartile 3 was an independent predictor of mortality due to stroke. Altogether, endostatin, TNF-R1, and IL-6 circulating levels may aid in long-term mortality prediction after stroke. This work has been funded by Instituto de Salud Carlos III (PI18/00804) and by La Fundació La Marató (Reg. 84/240 proj. 201702). Neurovascular Research Laboratory takes part in the Spanish stroke research network INVICTUS+ (RD16/0019/0021). L.R. is supported by a pre-doctoral fellowship from the Instituto de Salud Carlos III (IFI17/00012).
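    As a sketch of the kind of analysis described (not the study's code), the snippet below standardizes biomarkers to Z-scores and fits a multivariable Cox proportional hazards model with the lifelines library; the column names and toy data are placeholders, whereas the real study analyzed 941 patients, 14 biomarkers and additional clinical covariates.

```python
# Illustrative sketch only: Z-scored biomarkers in a Cox regression.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "followup_years": [4.8, 2.1, 5.0, 1.2, 3.3, 4.0, 0.9, 4.5, 2.7, 3.8],
    "died":           [0,   1,   0,   1,   0,   1,   1,   0,   1,   0],
    "age":            [62,  78,  72,  81,  76,  74,  83,  66,  61,  69],
    "endostatin":     [110, 185, 155, 210, 150, 170, 205, 120, 135, 160],
    "il6":            [3.1, 9.5, 6.2, 12.0, 5.4, 8.1, 14.2, 3.8, 4.1, 7.4],
})

# Standardize the biomarkers to Z-scores, as in the abstract.
for col in ["endostatin", "il6"]:
    df[col] = (df[col] - df[col].mean()) / df[col].std()

# Multivariable Cox regression: hazard ratios for age and the two biomarkers
# with respect to long-term mortality.
cph = CoxPHFitter()
cph.fit(df, duration_col="followup_years", event_col="died")
print(cph.summary[["exp(coef)", "p"]])  # hazard ratios and p-values
```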

    Headache: A striking prodromal and persistent symptom, predictive of COVID-19 clinical evolution

    The aim was to define headache characteristics and evolution in relation to COVID-19 and its inflammatory response. This is a prospective study comparing clinical data and inflammatory biomarkers of COVID-19 patients with and without headache, recruited at the Emergency Room. We compared baseline with 6-week follow-up to evaluate disease evolution. Of 130 patients, 74.6% (97/130) had headache. In all, 24.7% (24/97) of patients had severe pain with migraine-like features. Patients with headache had more anosmia/ageusia (54.6% vs. 18.2%; p < 0.0001). Clinical duration of COVID-19 was shorter in the headache group (23.9 ± 11.6 vs. 31.2 ± 12.0 days; p = 0.028). In the headache group, IL-6 levels were lower at the ER (22.9 (57.5) vs. 57.0 (78.6) pg/mL; p = 0.036) and more stable during hospitalisation. After 6 weeks, of 74 followed-up patients with headache, 37.8% (28/74) had ongoing headache. Of these, 50% (14/28) had no previous headache history. Headache was the prodromal symptom of COVID-19 in 21.4% (6/28) of patients with persistent headache (p = 0.010). Headache associated with COVID-19 is a frequent symptom, predictive of a shorter COVID-19 clinical course. Disabling headache can persist after COVID-19 resolution. Pathophysiologically, its migraine-like features may reflect an activation of the trigeminovascular system by inflammation or direct involvement of SARS-CoV-2, a hypothesis supported by concomitant anosmia.

    Terminología básica de conservación y restauración del Patrimonio Cultural 3. Español - Inglés - Francés - Italiano - Alemán - Portugués.

    2018 version. Extension of the 2016 PIMCD with Portuguese. A glossary of 80 fundamental terms in conservation and restoration of cultural heritage, with definitions, illustrative images, and translations into English, French, Italian and German, following the most recent international standards and documents.