Multi-parameter formal deformations of ternary hom-Nambu-Lie algebras
In this note, we introduce a notion of multi-parameter formal deformations of
ternary hom-Nambu-Lie algebras. Within this framework, we construct formal
deformations of the three-dimensional Jacobian determinant and of the
cross-product in four-dimensional Euclidean space. We also conclude that the
previously defined ternary q-Virasoro-Witt algebra is a formal deformation of
the ternary Virasoro-Witt algebra.
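The classical ternary bracket being deformed here, the three-dimensional Jacobian determinant, can be computed symbolically. A minimal sketch of the undeformed bracket (using sympy; the function name `nambu_bracket` is illustrative, not from the paper):

```python
import sympy as sp

x, y, z = sp.symbols("x y z")

def nambu_bracket(f, g, h):
    """Classical ternary Nambu bracket [f, g, h]: the determinant of
    the Jacobian of (f, g, h) with respect to (x, y, z)."""
    J = sp.Matrix([[sp.diff(w, v) for v in (x, y, z)] for w in (f, g, h)])
    return sp.simplify(J.det())

print(nambu_bracket(x, y, z))  # coordinate functions give 1
print(nambu_bracket(y, x, z))  # swapping two arguments gives -1 (total antisymmetry)
```

The coordinate functions satisfy [x, y, z] = 1, and the bracket is totally antisymmetric; a formal deformation replaces this bracket by a power series in the deformation parameters whose zeroth-order term is the bracket above.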
Stand type affects fluxes of volatile organic compounds from the forest floor in hemiboreal and boreal climates
The forest floor is a significant contributor to the stand-scale fluxes of biogenic volatile organic compounds. In this study, the effect of tree species (Scots pine vs. Norway spruce) on forest floor fluxes of volatile organic compounds (VOC) was compared in boreal and hemiboreal climates.
SACOBRA with Online Whitening for Solving Optimization Problems with High Conditioning
Real-world optimization problems often have objective functions that are expensive in terms of cost and time, so it is desirable to find near-optimal solutions with very few function evaluations. Surrogate-assisted optimizers reduce the required number of function evaluations by replacing the real function with an efficient mathematical model built on a few evaluated points. Problems with a high condition number are a challenge for many surrogate-assisted optimizers, including SACOBRA. To address such problems we propose a new online whitening method operating in the black-box optimization paradigm. We show on a set of high-conditioning functions that online whitening tackles SACOBRA's early stagnation issue and reduces the optimization error by a factor between 10 and 1e12 compared to plain SACOBRA, though it imposes many extra function evaluations. Covariance matrix adaptation evolution strategy (CMA-ES) achieves even lower errors for very high numbers of function evaluations, whereas SACOBRA performs better in the expensive setting (fewer than 1e03 function evaluations). If we count all parallelizable function evaluations (population evaluation in CMA-ES, online whitening in our approach) as one iteration, then both algorithms have comparable strength even in the long run. This holds for problems with dimension D …
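The whitening idea itself can be sketched in isolation: transform the search space by the inverse square root of an estimated Hessian so that the transformed function has a condition number near 1. The sketch below is a standalone illustration assuming a locally quadratic objective; the function names and the finite-difference Hessian estimate are our own choices, not the paper's SACOBRA integration:

```python
import numpy as np

def estimate_hessian(f, x0, eps=1e-4):
    """Finite-difference Hessian estimate of a black-box f at x0.
    Costs O(d^2) extra evaluations -- the overhead noted in the abstract."""
    d = len(x0)
    H = np.zeros((d, d))
    for i in range(d):
        for j in range(d):
            ei = np.zeros(d)
            ej = np.zeros(d)
            ei[i] = eps
            ej[j] = eps
            H[i, j] = (f(x0 + ei + ej) - f(x0 + ei)
                       - f(x0 + ej) + f(x0)) / eps ** 2
    return 0.5 * (H + H.T)  # symmetrize against rounding error

def whitened(f, x0, eps=1e-4):
    """Return g(z) = f(x0 + M z) with M = H^{-1/2}, so that g is
    locally isotropic (condition number near 1)."""
    H = estimate_hessian(f, x0, eps)
    w, V = np.linalg.eigh(H)
    M = V @ np.diag(1.0 / np.sqrt(np.abs(w))) @ V.T
    return lambda zvec: f(x0 + M @ zvec)

# Toy ill-conditioned quadratic with condition number 1e6:
A = np.diag([1.0, 1e6])
f = lambda u: float(u @ A @ u)
g = whitened(f, np.zeros(2))
# After whitening, both axes look alike (both values are ~0.5):
print(g(np.array([1.0, 0.0])), g(np.array([0.0, 1.0])))
```

The extra evaluations spent in `estimate_hessian` are exactly the parallelizable overhead the abstract counts as one iteration when comparing against CMA-ES.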
COVID-19 information disorder: six types of harmful information during the pandemic in Europe
Sten Hansson, Kati Orru, Sten Torpan, Asta Bäck, Austeja Kazemekaityte, Sunniva Frislid Meyer, Johanna Ludvigsen, Lucia Savadori, Alessandro Galvagni and Ala Pigrée (2021). COVID-19 information disorder: six types of harmful information during the pandemic in Europe. Journal of Risk Research, 24(3-4), 380-393. Routledge. https://doi.org/10.1080/13669877.2020.1871058
The outbreak of the novel coronavirus disease COVID-19 propelled the creation, transmission, and consumption of false information – unverified claims, misleading statements, false rumours, conspiracy theories, and so on – all around the world. When various official or unofficial sources issue erroneous, misleading or contradictory information during a crisis, people who are exposed to it may behave in ways that harm the health and well-being of themselves or others, e.g., by not taking appropriate risk-reducing measures or by blaming or harassing vulnerable groups. To work towards a typology of informational content that may increase people's vulnerability in the context of the coronavirus pandemic, we explored 98 instances of potentially harmful information that spread in six European countries – France, Italy, Norway, Finland, Lithuania, and Estonia – between March and May 2020. We suggest that during the pandemic, exposure to harmful information may have made people more vulnerable in six ways: (1) by discouraging appropriate protective actions against catching/spreading the virus, (2) by promoting the use of false (or harmful) remedies against the virus, (3) by misrepresenting the transmission mechanisms of the virus, (4) by downplaying the risks related to the pandemic, (5) by tricking people into buying fake protection against the virus or into revealing their confidential information, and (6) by victimising the alleged spreaders of the virus through harassment/hate speech.
The proposed typology can be used to guide the development of risk communication plans to address each of these information-related vulnerabilities.
Lipoprotein(a) in Alzheimer, Atherosclerotic, Cerebrovascular, Thrombotic, and Valvular Disease: Mendelian Randomization Investigation.
Lipoprotein(a) (Lp[a]) is a circulating lipoprotein with proatherogenic, proinflammatory, and possibly prothrombotic properties. Circulating Lp(a) levels are largely genetically determined, in particular, by the LPA gene. As such, genetic variants at the LPA locus can serve as instrumental variables for investigating the clinical effects of circulating Lp(a) levels. Mendelian randomization (MR) studies have shown that elevated Lp(a) levels are associated with a higher risk of coronary artery disease1–3 and aortic valve stenosis.2–4 Evidence on the causal role of elevated Lp(a) levels for other atherosclerotic and specific valvular diseases is limited, although there are MR data supporting a positive association between genetically predicted Lp(a) levels and peripheral artery disease.2,3 Whether Lp(a) is causally related to thrombotic disease and cerebrovascular disease remains unclear.2,3,5
In this study, we used the UK Biobank cohort to perform an MR investigation into the causal effects of circulating Lp(a) levels on atherosclerotic, cerebrovascular, thrombotic, and valvular diseases. Because a recent MR study provided evidence of an inverse association of Lp(a) levels with Alzheimer disease,5 we additionally explored whether genetically predicted Lp(a) levels are associated with Alzheimer disease and dementia.
Funding: Dr Larsson receives support from the Swedish Heart-Lung Foundation (Hjärt-Lungfonden, grant number 20190247), the Swedish Research Council (Vetenskapsrådet, grant number 2019-00977), and the Swedish Research Council for Health, Working Life and Welfare (Forte, grant number 2018-00123). Dr Gill is funded by the Wellcome 4i Clinical PhD Program at Imperial College London. Dr Burgess is supported by a Sir Henry Dale Fellowship jointly funded by the Wellcome Trust and the Royal Society (award number 204623/Z/16/Z). Drs Burgess and Butterworth report funding from Novartis relating to the investigation of lipoprotein(a). The funder had no influence on the content of the investigation or the decision to publish. This work was supported by core funding from the UK Medical Research Council (MR/L003120/1), the British Heart Foundation (RG/13/13/30194; RG/18/13/33946), the National Institute for Health Research [Cambridge Biomedical Research Centre at the Cambridge University Hospitals NHS Foundation Trust] and Health Data Research UK, which is funded by the UK Medical Research Council, Engineering and Physical Sciences Research Council, Economic and Social Research Council, Department of Health and Social Care (England), Chief Scientist Office of the Scottish Government Health and Social Care Directorates, Health and Social Care Research and Development Division (Welsh Government), Public Health Agency (Northern Ireland), British Heart Foundation and Wellcome.
Anomaly Detection in Electrocardiogram Readings with Stacked LSTM Networks
Real-world anomaly detection for time series is still a challenging task. This is especially true for periodic or quasi-periodic time series, since automated approaches have to learn long-term correlations before they are able to detect anomalies. Electrocardiography (ECG) time series, a prominent real-world example of quasi-periodic signals, are investigated in this work. Anomaly detection algorithms often have the additional goal of identifying anomalies in an unsupervised manner. In this paper we present an unsupervised time series anomaly detection algorithm. It learns with recurrent Long Short-Term Memory (LSTM) networks to predict the normal time series behavior. The prediction error on several prediction horizons is used to build a statistical model of normal behavior. We propose new methods that are essential for a successful model-building process and for a high signal-to-noise ratio. We apply our method to the well-known MIT-BIH ECG data set and present first results. We obtain good recall of anomalies while having a very low false alarm rate (FPR) in a fully unsupervised procedure. We also compare with other state-of-the-art anomaly detectors (NuPic, ADVec).
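The second stage of such a pipeline — building a statistical model of prediction errors over several horizons and scoring new errors against it — can be sketched independently of the LSTM itself. A minimal illustration with synthetic error vectors (the names and the Mahalanobis-distance score are our assumptions, not the paper's exact model):

```python
import numpy as np

def fit_error_model(errors):
    """Fit a multivariate Gaussian to prediction-error vectors collected
    on anomaly-free data.  errors: (n_samples, n_horizons) array."""
    mu = errors.mean(axis=0)
    cov = np.cov(errors, rowvar=False)
    cov_inv = np.linalg.inv(cov + 1e-9 * np.eye(cov.shape[0]))
    return mu, cov_inv

def anomaly_scores(errors, mu, cov_inv):
    """Squared Mahalanobis distance of each error vector from the model
    of normal behavior; large values flag anomalous windows."""
    diff = errors - mu
    return np.einsum("ij,jk,ik->i", diff, cov_inv, diff)

rng = np.random.default_rng(0)
normal_errors = rng.normal(0.0, 0.1, size=(500, 3))   # 3 prediction horizons
mu, cov_inv = fit_error_model(normal_errors)

test_errors = np.vstack([rng.normal(0.0, 0.1, size=(5, 3)),
                         [[1.0, 1.2, 0.9]]])          # one injected anomaly
scores = anomaly_scores(test_errors, mu, cov_inv)
print(int(scores.argmax()))  # index 5: the injected anomaly dominates
```

Thresholding the score (e.g. at a chi-squared quantile for the number of horizons) turns the score into a binary anomaly flag without any labeled data, matching the fully unsupervised setting described above.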
Expressivity of parameterized and data-driven representations in quality diversity search
Solving the G-problems in less than 500 iterations: Improved efficient constrained optimization by surrogate modeling and adaptive parameter control
A new acquisition function for robust Bayesian optimization of unconstrained problems
A new acquisition function is proposed for solving robust optimization problems via Bayesian Optimization. The proposed acquisition function reflects the need for the robust rather than the nominal optimum, and is based on the intuition of utilizing the higher moments of the improvement. The efficacy of Bayesian Optimization based on this acquisition function is demonstrated on four test problems, each affected by three different levels of noise. Our findings suggest the promise of the proposed acquisition function, as it yields a better robust optimal value of the function in 6/12 test scenarios when compared with the baseline. Funding: Horizon 2020 (H2020) 766186.
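The ingredients such an acquisition builds on — the moments of the improvement under a Gaussian posterior — have closed forms. A minimal sketch of the first two moments with a Monte-Carlo sanity check (symbol names are generic; this is not the paper's exact acquisition function):

```python
from math import erf, exp, pi, sqrt
import numpy as np

def _pdf(t):
    """Standard normal density."""
    return exp(-0.5 * t * t) / sqrt(2.0 * pi)

def _cdf(t):
    """Standard normal cumulative distribution."""
    return 0.5 * (1.0 + erf(t / sqrt(2.0)))

def improvement_moments(mu, s, best):
    """First two moments of the improvement I = max(best - Y, 0) for a
    Gaussian posterior Y ~ N(mu, s^2).  The first moment is classical
    Expected Improvement; combining it with higher moments lets an
    acquisition account for the spread of the improvement."""
    t = (best - mu) / s
    m1 = s * (t * _cdf(t) + _pdf(t))                      # E[I]
    m2 = s * s * ((t * t + 1.0) * _cdf(t) + t * _pdf(t))  # E[I^2]
    return m1, m2

# Monte-Carlo sanity check of the closed forms:
rng = np.random.default_rng(1)
mu, s, best = 0.2, 0.5, 0.0
samples = np.maximum(best - rng.normal(mu, s, 1_000_000), 0.0)
m1, m2 = improvement_moments(mu, s, best)
print(abs(m1 - samples.mean()), abs(m2 - (samples ** 2).mean()))
```

A robustness-aware acquisition can, for instance, trade E[I] off against the spread sqrt(E[I^2] - E[I]^2); this combination is one plausible reading of "utilizing the higher moments", not the paper's definition.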
Complications in lymph node excision in the head and neck area
Background Although needle biopsy is widely used in the work-up of lymphadenopathy, lymph node excision (LNE) is often required, especially in lymphoma diagnostics. LNE is an invasive procedure, which carries a potential risk of complications. However, comprehensive studies evaluating the spectrum and occurrence of complications are lacking. Aims/Objectives This study addresses the role of preoperative needle biopsies in patients who underwent LNE. Furthermore, surgical complications related to LNE are analyzed. Materials and methods Altogether 321 patients, who underwent LNE during the two-year period 2018-19 and fulfilled our study criteria, were included. Patients' data were retrieved from the electronic patient records. Results The surgical complication rate was 5.9%. Most of the complications (n = 16; 84.2%) were categorized as minor (I-II) according to the Clavien-Dindo scale. The remaining three (15.8%), all hemorrhages, were categorized as major complications and required intervention. Preoperative needle biopsy might have avoided the need for LNE in some patients, which we discuss in this study. Conclusions and significance Surgical complications after LNE in the head and neck area are rare and mostly minor. Needle biopsy is often recommended preoperatively to avoid unnecessary operations and to refrain from performing LNE on patients with non-lymphatic malignancy.