
    Convexity in source separation: Models, geometry, and algorithms

    Source separation or demixing is the process of extracting multiple components entangled within a signal. Contemporary signal processing presents a host of difficult source separation problems, from interference cancellation to background subtraction, blind deconvolution, and even dictionary learning. Despite the recent progress in each of these applications, advances in high-throughput sensor technology place demixing algorithms under pressure to accommodate extremely high-dimensional signals, separate an ever larger number of sources, and cope with more sophisticated signal and mixing models. These difficulties are exacerbated by the need for real-time action in automated decision-making systems. Recent advances in convex optimization provide a simple framework for efficiently solving numerous difficult demixing problems. This article provides an overview of the emerging field, explains the theory that governs the underlying procedures, and surveys algorithms that solve these problems efficiently. We aim to equip practitioners with a toolkit for constructing their own demixing algorithms that work, as well as concrete intuition for why they work.
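
    A minimal illustration of the convex demixing framework described above is the separation of a low-rank component from a sparse one (the robust-PCA instance of demixing). The sketch below alternately applies the two proximal maps of the penalized objective ||L||_* + lam*||S||_1 + (mu/2)*||M - L - S||_F^2; the parameter choices and synthetic data are assumptions for the example, not the article's algorithm.

```python
import numpy as np

def soft_threshold(X, tau):
    """Entrywise soft-thresholding: proximal operator of the l1 norm."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    """Singular-value thresholding: proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(soft_threshold(s, tau)) @ Vt

def demix(M, lam=None, mu=10.0, iters=200):
    """Alternating minimization of ||L||_* + lam*||S||_1 + (mu/2)*||M-L-S||_F^2."""
    if lam is None:
        lam = 1.0 / np.sqrt(max(M.shape))        # common robust-PCA weighting
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    for _ in range(iters):
        L = svt(M - S, 1.0 / mu)                 # low-rank update
        S = soft_threshold(M - L, lam / mu)      # sparse update
    return L, S

# Synthetic mixture: rank-2 signal plus sparse corruption.
rng = np.random.default_rng(0)
L_true = rng.normal(size=(50, 2)) @ rng.normal(size=(2, 50))
S_true = 10.0 * rng.binomial(1, 0.05, size=(50, 50)) * rng.normal(size=(50, 50))
L_hat, S_hat = demix(L_true + S_true)
print("relative error on L:", np.linalg.norm(L_hat - L_true) / np.linalg.norm(L_true))
```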

    A combined experimental and computational study of the pressure dependence of the vibrational spectrum of solid picene C_22H_14

    We present high-quality optical data and density functional perturbation theory calculations for the vibrational spectrum of solid picene (C22H14) under pressure up to 8 GPa. First-principles calculations reproduce with remarkable accuracy the pressure effects on both the frequencies and intensities of the experimentally observed phonon peaks. Through a detailed analysis of the phonon eigenvectors, we use their projection onto molecular eigenmodes to unambiguously fit the experimental spectra, resolving complicated spectral structures in a system with hundreds of phonon modes. With these projections, we can also quantify the loss of molecular character under pressure. Our results indicate that picene, despite a ~20% compression of the unit cell, remains substantially a molecular solid up to 8 GPa, with phonon modes displaying a smooth and uniform hardening with pressure. The Grüneisen parameter of the 1380 cm^{-1} a_1 Raman peak (γ_p = 0.1) is much lower than the effective value (γ_d = 0.8) observed under K doping, indicating that the phonon softening in K-doped samples is mainly due to charge transfer and electron-phonon coupling.
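
    The projection analysis described above can be sketched in a few lines: each crystal phonon eigenvector is decomposed over the eigenmodes of the isolated molecule, and the summed weight measures how much molecular character the crystal mode retains. The sketch below uses random placeholder vectors in place of mass-weighted DFT eigenvectors and assumes two molecules per unit cell; it illustrates the bookkeeping, not the authors' code.

```python
import numpy as np

n_mol = 3 * 36         # picene C22H14: 36 atoms -> 108 molecular degrees of freedom
n_cell = 2 * n_mol     # two picene molecules per unit cell
rng = np.random.default_rng(1)

# Placeholder orthonormal eigenmodes of the isolated molecule (rows),
# embedded in the crystal cell by zero-padding over the second molecule.
q, _ = np.linalg.qr(rng.normal(size=(n_mol, n_mol)))
mol_basis = np.hstack([q.T, np.zeros((n_mol, n_mol))])

# Placeholder normalized crystal phonon eigenvector (3N-dimensional).
crystal_mode = rng.normal(size=n_cell)
crystal_mode /= np.linalg.norm(crystal_mode)

# w[j] = |<m_j|e>|^2: weight of molecular mode j in the crystal mode.
w = (mol_basis @ crystal_mode) ** 2
print("dominant molecular mode:", int(w.argmax()))
# Summed weight on molecule 1; for a true molecular-solid mode this stays
# close to the molecular limit, and its decrease under pressure quantifies
# the loss of molecular character.
print("molecular character:", w.sum())
```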

    Vibrational spectrum of solid picene (C_22H_14)

    Recently, Mitsuhashi et al. have observed superconductivity with transition temperatures up to 18 K in potassium-doped picene (C22H14), a polycyclic aromatic hydrocarbon compound [Nature 464 (2010) 76]. Theoretical analyses indicate the importance of electron-phonon coupling in the superconducting mechanism of these systems, with different emphasis on inter- and intra-molecular vibrations depending on the approximations used. Here we present a combined experimental and ab initio study of the Raman and infrared spectra of undoped solid picene, which allows us to unambiguously assign the vibrational modes. This combined study enables the identification of the modes that couple strongly to electrons and hence can play an important role in the superconducting properties of the doped samples.
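
    For background on why an unambiguous mode assignment matters here: in the standard McMillan/Eliashberg picture, each phonon mode contributes separately to the total electron-phonon coupling constant. The decomposition below is textbook material, not a formula from this paper.

```latex
% Mode-resolved electron-phonon coupling (standard McMillan/Eliashberg form):
\lambda \;=\; \sum_{\nu} \lambda_{\nu},
\qquad
\lambda_{\nu} \;=\; \frac{2\, N(0)\, |g_{\nu}|^{2}}{\hbar\, \omega_{\nu}}
% N(0): electronic density of states at the Fermi level;
% g_nu: Fermi-surface-averaged coupling matrix element of mode nu;
% omega_nu: frequency of mode nu. Strongly coupled intramolecular modes
% with sizeable g_nu can thus dominate lambda in the doped samples.
```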

    Ground truth deficiencies in software engineering: when codifying the past can be counterproductive

    Many software engineering tools build and evaluate their models based on historical data to support development and process decisions. These models help us answer numerous interesting questions, but they have their own caveats. In a real-life setting, the objective function of human decision-makers for a given task might be influenced by a whole host of factors stemming from their cognitive biases, subverting the ideal objective function required for an optimally functioning system. Relying on such data as ground truth may give rise to systems that end up automating software engineering decisions by mimicking past sub-optimal behaviour. We illustrate this phenomenon and suggest mitigation strategies to raise awareness.
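
    The failure mode described above is easy to reproduce in a toy simulation: train a classifier on labels produced by a biased decision-maker and it will faithfully automate the bias. The sketch below is an invented illustration (the feature setup and bias model are assumptions), not an experiment from the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 2))            # two equally informative features

# Ideal objective: both features matter for the "right" decision.
y_true = (X[:, 0] + X[:, 1] > 0).astype(int)

# Historical decisions: the decision-maker systematically ignored feature 1
# (a stand-in for a cognitive bias), so the recorded "ground truth"
# encodes a sub-optimal policy.
y_hist = (X[:, 0] > 0).astype(int)

model = LogisticRegression().fit(X, y_hist)
pred = model.predict(X)

print("agreement with biased history:", (pred == y_hist).mean())   # near 1.00
print("agreement with ideal objective:", (pred == y_true).mean())  # stuck near 0.75
```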

    Analysis of Circular Economy Research and Innovation (R&I) intensity for critical products in the supply chains of strategic technologies

    To develop renewable energy, digital, space and defence technologies, the European Union (EU) needs access to critical raw materials, a large share of which is currently imported from third countries. To mitigate the risk of supply disruptions, the Critical Raw Materials Act proposes to diversify sources of imports while increasing domestic extraction, processing, and recycling. The circular economy is therefore positioned as a key element of the EU strategy to deploy strategic technologies for navigating the sustainability transition in a complex geopolitical landscape. In line with this position, the present study analyses the intensity of circular economy research and innovation (R&I) in the supply chains of strategic technologies. The focus is placed on four critical products containing raw materials with high supply risks: lithium-ion battery cells; neodymium-iron-boron permanent magnets; photovoltaic cells; and hydrogen electrolysers and fuel cells. The R&I analysis is based on the identification of scientific articles, patents, and innovation projects on the subject, with a global scope, in the period between 2014 and 2022. The analysis is enriched by connecting it to parallel work on the subject conducted by the Joint Research Centre (JRC) as well as academic institutions, industry, and policy stakeholders. This serves to provide insight into: where circular economy R&I efforts have been placed in terms of different products and supply chains; which countries are undertaking these efforts; how the EU is positioned and how much funding has been deployed so far; and what the current gaps and trends going forward are. Main insights include the following: 1) circularity R&I for critical products is not balanced, with a prominent focus placed on Li-ion cells at the global level; 2) the EU has followed this trend in terms of number of innovation projects and public spending; 3) alongside EU efforts, China and the USA also focus intensely on circular economy R&I. This study contributes evidence to advance scientific research and policymaking on the role of a circular economy in achieving open strategic autonomy and climate neutrality in the EU.

    Cardiomyopathy associated with diabetes: the central role of the cardiomyocyte

    The term diabetic cardiomyopathy (DCM) labels an abnormal cardiac structure and performance due to intrinsic heart muscle malfunction, independently of other vascular co-morbidity. DCM, accounting for 50%-80% of deaths in diabetic patients, represents a worldwide problem for human health and related economics. Optimal glycemic control is not sufficient to prevent DCM, which derives from heart remodeling and geometrical changes, both consequences of critical events initially occurring at the cardiomyocyte level. Under hyperglycemia, cardiac cells very early undergo metabolic abnormalities and contribute to T helper (Th)-driven inflammatory perturbation, behaving as immunoactive units capable of releasing critical biomediators such as cytokines and chemokines. This paper focuses on the role of cardiomyocytes, no longer considered "passive" targets but "active" units participating in the inflammatory dialogue between local and systemic counterparts underlying DCM development and maintenance. Some of the main biomolecular, metabolic, and inflammatory processes triggered within cardiac cells by high glucose are reviewed; particular attention is given to early inflammatory cytokines and chemokines, which represent potential therapeutic targets for prompt intervention before any signs or symptoms of DCM have manifested. DCM clinical management still represents a challenge, and further translational investigations, including studies at the female/male cell level, are warranted.

    Neuro-evolution Methods for Designing Emergent Specialization

    This research applies the Collective Specialization Neuro-Evolution (CONE) method to the problem of evolving neural controllers in a simulated multi-robot system. The multi-robot system consists of multiple pursuer (predator) robots and a single evader (prey) robot. The CONE method is designed to facilitate behavioral specialization in order to increase task performance in collective behavior solutions, and pursuit-evasion is a task that benefits from such specialization. The performance of prey-capture strategies derived by the CONE method is compared to that of strategies derived by the Enforced Sub-Populations (ESP) method. Results indicate that CONE effectively facilitates behavioral specialization in the team of pursuer robots, and that this specialization aids the derivation of robust prey-capture strategies. Comparatively, ESP was found to be less effective at facilitating behavioral specialization and effective prey-capture behaviors.
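
    To make the cooperative-coevolution setup concrete, here is a generic ESP-style loop: each hidden neuron of the controller keeps its own subpopulation of weight genomes, networks are assembled by sampling one genome per subpopulation, and trial fitness is credited back to the genomes that took part. The topology, parameters, and toy fitness function are assumptions for illustration; this is neither the CONE nor the ESP implementation used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)
N_IN, N_HID, N_OUT = 4, 3, 2     # controller topology
POP, GENS = 20, 30               # subpopulation size, generations
GENE = N_IN + N_OUT              # a genome: one hidden neuron's in/out weights

# ESP idea: one subpopulation of genomes per hidden neuron.
subpops = [rng.normal(size=(POP, GENE)) for _ in range(N_HID)]

def fitness(genomes):
    """Toy stand-in for a pursuit-evasion trial: assemble a one-hidden-layer
    controller and reward output [1, -1] on a fixed input."""
    h = np.tanh(np.array([g[:N_IN] @ np.ones(N_IN) for g in genomes]))
    W_out = np.array([g[N_IN:] for g in genomes])         # (N_HID, N_OUT)
    y = np.tanh(h @ W_out)
    return -np.sum((y - np.array([1.0, -1.0])) ** 2)

for gen in range(GENS):
    scores = np.zeros((N_HID, POP))
    counts = np.zeros((N_HID, POP))
    for _ in range(10 * POP):                             # trials per generation
        idx = rng.integers(POP, size=N_HID)               # one genome per subpopulation
        f = fitness([subpops[i][j] for i, j in enumerate(idx)])
        scores[np.arange(N_HID), idx] += f                # credit assignment
        counts[np.arange(N_HID), idx] += 1
    for i in range(N_HID):                                # select and mutate per subpopulation
        avg = np.where(counts[i] > 0, scores[i] / np.maximum(counts[i], 1), -np.inf)
        elite = subpops[i][np.argsort(avg)[::-1][: POP // 2]]
        subpops[i] = np.vstack([elite, elite + 0.1 * rng.normal(size=elite.shape)])

print("final fitness:", fitness([sp[0] for sp in subpops]))
```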

    Prediction of Simulated 1,000 m Kayak Ergometer Performance in Young Athletes

    This study aimed to develop a predictive explanatory model for 1,000-m time-trial (TT) performance in young national-level kayakers from biomechanical and physiological parameters assessed in a maximal graded exercise test (GXT). Twelve young male flat-water kayakers (age 16.1 ± 1.1 years) participated in the study. The design consisted of two exercise protocols, separated by 48 h, on a kayak ergometer. The first protocol consisted of a GXT starting at 8 km·h−1 with speed increments of 1 km·h−1 every 2 min until exhaustion. The second protocol comprised the 1,000-m TT. Results: In the GXT, participants reached an absolute VO2max of 3.5 ± 0.7 L·min−1, a maximum aerobic power (MAP) of 138.5 ± 24.5 watts (W) and a maximum aerobic speed (MAS) of 12.8 ± 0.5 km·h−1. The TT had a mean duration of 292.3 ± 15 s, a power output of 132.6 ± 22.0 W and a VO2max of 3.5 ± 0.6 L·min−1. The regression model [TT (s) = 413.378 − 0.433 × MAP − 0.554 × (stroke rate at MAP)] presented an R² of 84.5%. Conclusion: VO2max, stroke distance and stroke rate during the GXT did not differ from the corresponding variables (VO2peak, stroke distance and stroke rate) observed during the TT. MAP and the corresponding stroke rate were strong predictors of 1,000-m TT performance. The TT can therefore be useful for quantifying biomechanical parameters (stroke distance and stroke rate) and for monitoring training-induced changes in cardiorespiratory fitness (VO2max).