Kernel reconstruction for delayed neural field equations
Understanding neural field activity in realistic living systems is a challenging task in contemporary neuroscience. Neural fields have been studied and developed theoretically and numerically with considerable success over the past four decades. To make effective use of such models, however, we need to identify their constituents in practical systems, including the determination of model parameters and, in particular, the reconstruction of the underlying effective connectivity in biological tissue. In this work, we provide an integral-equation approach to the reconstruction of the neural connectivity in the case where the neural activity is governed by a delay neural field equation. As preparation, we study the solution of the direct problem based on the Banach fixed-point theorem. We then reformulate the inverse problem as a family of integral equations of the first kind; this equation becomes vector-valued when several neural activity trajectories are taken as input for the inverse problem. We employ spectral regularization techniques for its stable solution. A sensitivity analysis of the regularized kernel reconstruction with respect to the input signal u is carried out, investigating the Fréchet differentiability of the kernel with respect to the signal. Finally, numerical examples demonstrate the feasibility of the approach to kernel reconstruction, including sensitivity tests which show that the integral-equation formulation is a stable and promising approach for practical computational neuroscience.
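For context, a schematic sketch of the setting and of the spectral-regularization step is given below. This assumes an Amari-type delayed field; the paper's precise model, nonlinearity f and delay structure may differ.

```latex
% Schematic delayed neural field (assumed Amari-type form; details may differ from the paper)
\frac{\partial u}{\partial t}(x,t) \;=\; -u(x,t)
  + \int_{\Omega} w(x,y)\, f\!\bigl(u(y,\,t-\tau(x,y))\bigr)\,\mathrm{d}y .

% Given measured trajectories u, the unknown kernel w solves a first-kind integral equation
(Ww)(x,t) \;:=\; \int_{\Omega} f\!\bigl(u(y,\,t-\tau(x,y))\bigr)\, w(x,y)\,\mathrm{d}y
  \;=\; \frac{\partial u}{\partial t}(x,t) + u(x,t) \;=:\; g(x,t),

% which is ill-posed and can be stabilised by spectral (e.g. Tikhonov) regularisation:
w_{\alpha} \;=\; (W^{*}W + \alpha I)^{-1} W^{*} g , \qquad \alpha > 0 .
```

With several trajectories u^(1), ..., u^(m) the right-hand side g becomes vector-valued, which is the multi-trajectory case mentioned in the abstract.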
Monte Carlo Procedure for Protein Design
A new method for sequence optimization in protein models is presented. The
approach, which has inherited its basic philosophy from recent work by Deutsch
and Kurosky [Phys. Rev. Lett. 76, 323 (1996)] by maximizing conditional
probabilities rather than minimizing energy functions, is based upon a novel
and very efficient multisequence Monte Carlo scheme. By construction, the
method ensures that the designed sequences represent good folders
thermodynamically. A bootstrap procedure for the sequence space search is
devised making very large chains feasible. The algorithm is successfully
explored on the two-dimensional HP model with chain lengths N=16, 18 and 32.Comment: 7 pages LaTeX, 4 Postscript figures; minor change
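For orientation, the sketch below shows a minimal Metropolis-style search in sequence space on the 2D HP model. The design_score placeholder stands in for the conditional-probability objective; the function names and the scoring rule are illustrative only, and the paper's multisequence scheme with bootstrap is considerably more elaborate than this.

```python
import math
import random

def hp_contact_energy(seq, contacts, eps=-1.0):
    """Energy of a sequence threaded onto a fixed structure: eps per H-H contact."""
    return sum(eps for i, j in contacts if seq[i] == 'H' and seq[j] == 'H')

def design_score(seq, target_contacts):
    # Placeholder objective: favour low energy in the target structure.
    # The actual method instead scores P(target structure | sequence)
    # against competing conformations.
    return -hp_contact_energy(seq, target_contacts)

def metropolis_design(target_contacts, n=16, steps=10000, beta=1.0, seed=0):
    """Single-sequence Metropolis search over H/P sequences of length n."""
    rng = random.Random(seed)
    seq = [rng.choice('HP') for _ in range(n)]
    score = design_score(seq, target_contacts)
    for _ in range(steps):
        i = rng.randrange(n)                     # flip one residue H <-> P
        trial = seq.copy()
        trial[i] = 'P' if trial[i] == 'H' else 'H'
        new_score = design_score(trial, target_contacts)
        # Accept improvements; accept worse moves with Boltzmann probability
        # so the search does not get trapped in local optima.
        if new_score >= score or rng.random() < math.exp(beta * (new_score - score)):
            seq, score = trial, new_score
    return ''.join(seq), score
```

The multisequence and bootstrap ingredients of the published algorithm, which make long chains tractable, are not reproduced in this single-sequence sketch.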
How can existing ground-based profiling instruments improve European weather forecasts?
Observations of profiles of winds, aerosol, clouds, temperature and humidity in the lowest few km of the atmosphere from networks of ceilometers, Doppler wind lidars and microwave radiometers are starting to flow in real time to forecasting centers in Europe.
To realise the promise of improved predictions of hazardous weather such as flash floods, wind storms, fog and poor air quality from high-resolution mesoscale models, the forecast models must be initialized with an accurate representation of the current state of the atmosphere; yet the lowest few km are hardly accessible by satellite, especially in dynamically active conditions. We report on recent European developments in the exploitation of existing ground-based profiling instruments so that they are networked and able to send data in real time to forecast centers. The three classes of instruments are: (i) automatic lidars and ceilometers providing backscatter profiles of clouds, aerosols, dust, fog and volcanic ash, the last two being especially important for air traffic control; (ii) Doppler wind lidars deriving profiles of wind, turbulence, wind shear, wind gusts and low-level jets; and (iii) microwave radiometers estimating profiles of temperature and humidity in nearly all weather conditions. Twenty-two European countries and fifteen European National Weather Services are collaborating in the project, which involves the implementation of common operating procedures, instrument calibrations, data formats and retrieval algorithms. Currently, data from 220 ceilometers in 17 countries are being distributed in near real time to national weather forecast centers; this number should soon rise to many hundreds. The wind lidars should start delivering real-time data in late 2018, and the plan is to incorporate the microwave radiometers in 2019. Initial data assimilation tests indicate a positive impact of the new data.
Evaluation-as-a-service for the computational sciences: overview and outlook
Evaluation in empirical computer science is essential to show progress and assess the technologies developed. Several research domains such as information retrieval have long relied on systematic evaluation to measure progress: here, the Cranfield paradigm of creating shared test collections, defining search tasks, and collecting ground truth for these tasks has persisted up until now. In recent years, however, several new challenges have emerged that do not fit this paradigm very well: extremely large data sets, confidential data sets as found in the medical domain, and rapidly changing data sets as often encountered in industry. Crowdsourcing has also changed the way in which industry approaches problem-solving, with companies now organizing challenges and handing out monetary awards to incentivize people to work on their challenges, particularly in the field of machine learning.
This article is based on discussions at a workshop on Evaluation-as-a-Service (EaaS). EaaS is the paradigm of not providing data sets to participants and having them work on the data locally, but keeping the data central and allowing access via Application Programming Interfaces (API), Virtual Machines (VM), or other possibilities to ship executables. The objectives of this article are to summarize and compare the current approaches and consolidate the experiences of these approaches to outline the next steps of EaaS, particularly toward sustainable research infrastructures.
The article summarizes several existing approaches to EaaS and analyzes their usage scenarios as well as their advantages and disadvantages. It also summarizes the many factors influencing EaaS and describes the environment in terms of motivations for the various stakeholders, from funding agencies to challenge organizers, researchers and participants, to industry interested in supplying real-world problems for which they require solutions.
EaaS solves many problems of the current research environment, where data sets are often not accessible to many researchers. Executables of published tools are equally often unavailable, making the reproducibility of results impossible. EaaS, however, creates reusable/citable data sets as well as available executables. Many challenges remain, but such a framework for research can also foster more collaboration between researchers, potentially increasing the speed of obtaining research results.
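As a purely illustrative sketch of what "shipping the executable to the data" can look like from the participant's side (the endpoint URL, JSON fields and function names below are hypothetical and are not part of any system described in the article), a client might register a container image and then fetch only aggregate scores, never the raw, possibly confidential test data:

```python
import requests

EAAS_URL = "https://eaas.example.org/api/v1"  # assumed endpoint, not a real service

def submit_run(docker_image: str, task_id: str, token: str) -> str:
    """Register an executable (e.g. a container image) to be run on the centrally held data."""
    r = requests.post(
        f"{EAAS_URL}/tasks/{task_id}/runs",
        json={"image": docker_image},
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    r.raise_for_status()
    return r.json()["run_id"]

def fetch_scores(run_id: str, token: str) -> dict:
    """Retrieve only aggregate evaluation metrics for a finished run."""
    r = requests.get(
        f"{EAAS_URL}/runs/{run_id}/scores",
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    r.raise_for_status()
    return r.json()
```

The VM variant mentioned in the abstract replaces the container submission with remote access to an isolated machine holding the data, but the principle of keeping data central is the same.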
Reporting of Adverse Events in Published and Unpublished Studies of Health Care Interventions: A Systematic Review
BACKGROUND: We performed a systematic review to assess whether we can quantify the underreporting of adverse events (AEs) in the published medical literature documenting the results of clinical trials as compared with other nonpublished sources, and whether we can measure the impact this underreporting has on systematic reviews of adverse events.
METHODS AND FINDINGS: Studies were identified from 15 databases (including MEDLINE and Embase) and by handsearching, reference checking, internet searches, and contacting experts. The last database searches were conducted in July 2016. There were 28 methodological evaluations that met the inclusion criteria. Of these, 9 studies compared the proportion of trials reporting adverse events by publication status. The median percentage of published documents with adverse events information was 46% compared to 95% in the corresponding unpublished documents. There was a similar pattern with unmatched studies, for which 43% of published studies contained adverse events information compared to 83% of unpublished studies. A total of 11 studies compared the numbers of adverse events in matched published and unpublished documents. The percentage of adverse events that would have been missed had each analysis relied only on the published versions varied between 43% and 100%, with a median of 64%. Within these 11 studies, 24 comparisons of named adverse events such as death, suicide, or respiratory adverse events were undertaken. In 18 of the 24 comparisons, the number of named adverse events was higher in unpublished than published documents. Additionally, 2 other studies demonstrated that there are substantially more types of adverse events reported in matched unpublished than published documents. There were 20 meta-analyses that reported the odds ratios (ORs) and/or risk ratios (RRs) for adverse events with and without unpublished data. Inclusion of unpublished data increased the precision of the pooled estimates (narrower 95% confidence intervals) in 15 of the 20 pooled analyses, but did not markedly change the direction or statistical significance of the risk in most cases. The main limitations of this review are that the included case examples represent only a small number amongst thousands of meta-analyses of harms and that the included studies may suffer from publication bias, whereby substantial differences between published and unpublished data are more likely to be published.
CONCLUSIONS: There is strong evidence that much of the information on adverse events remains unpublished and that the number and range of adverse events is higher in unpublished than in published versions of the same study. The inclusion of unpublished data can also reduce the imprecision of pooled effect estimates during meta-analysis of adverse events.
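To make concrete why including unpublished studies tends to narrow confidence intervals even when the pooled estimate barely moves, the textbook fixed-effect, inverse-variance pooling on the log-odds scale is shown below. This is an illustrative formula only; the meta-analyses covered by the review may well have used random-effects models.

```latex
% Fixed-effect inverse-variance pooling of study log-odds ratios \hat\theta_i
\hat\theta \;=\; \frac{\sum_i w_i\,\hat\theta_i}{\sum_i w_i},
\qquad w_i \;=\; \frac{1}{\operatorname{se}(\hat\theta_i)^2},
\qquad \operatorname{se}(\hat\theta) \;=\; \frac{1}{\sqrt{\sum_i w_i}} .

% Adding unpublished studies adds positive weights w_i, so \sum_i w_i grows and the
% 95% CI  \hat\theta \pm 1.96\,\operatorname{se}(\hat\theta)  narrows, while the pooled
% OR = \exp(\hat\theta) may change little if the extra studies agree with the published ones.
```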
Drinking to ease the burden: a cross-sectional study on trauma, alcohol abuse and psychopathology in a post-conflict context
Ertl V, Saile R, Neuner F, Catani C. Drinking to ease the burden: a cross-sectional study on trauma, alcohol abuse and psychopathology in a post-conflict context. BMC Psychiatry. 2016;16(1):202.
Background
It is likely that alcohol use and abuse increase during and after violent conflicts. The most prominent explanation of this phenomenon is the self-medication hypothesis, which predicts that psychotropic substances are consumed to deal with conflict-related psychological strain and trauma. In northern Uganda, a region that has been affected by a devastating civil war and is characterized by high levels of alcohol abuse, we examined the associations between war trauma, childhood maltreatment and problems related to alcohol use. Deducing from the self-medication hypothesis, we assumed that alcohol consumption moderates the relationship between trauma exposure and psychopathology.
Methods
A cross-sectional epidemiological survey targeting war-affected families in post-conflict northern Uganda included data from male (n = 304) and female (n = 365) guardians. We used standardized questionnaires in an interview format to collect data on the guardians' socio-demographic characteristics, trauma exposure, alcohol consumption and symptoms of alcohol abuse, PTSD and depression.
Results
Symptoms of current alcohol use disorders were present in 46% of the male and 1% of the female respondents. A multiple regression model revealed the unique contributions of emotional abuse in the families of origin and of trauma experienced outside the family context in the prediction of men's alcohol-related symptoms. We found that alcohol consumption moderated the dose-effect relationship between trauma exposure and symptoms of depression and PTSD: significant interactions indicated that men who reported more alcohol-related problems experienced less of an increase in symptoms of PTSD and depression with increasing trauma exposure.
Conclusions
The gradual attenuation of the dose-effect relationship with increasing alcohol-related problems is consistent with the self-medication hypothesis. Hence, the functionality of alcohol consumption has to be considered when designing and implementing addiction treatment in post-conflict contexts.
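The moderation analysis reported in the Results can be read as a standard interaction model of the form sketched below. This is a generic sketch; the exact predictors, scaling and covariates used in the paper may differ. A negative interaction coefficient corresponds to the flattening of the trauma dose-effect among men with more alcohol-related problems.

```latex
% Y: PTSD or depression symptom score, T: trauma exposure, A: alcohol-related problems
Y \;=\; \beta_0 + \beta_1\, T + \beta_2\, A + \beta_3\,(T \times A) + \varepsilon ,
\qquad \beta_3 < 0 \;\Rightarrow\; \frac{\partial Y}{\partial T} \;=\; \beta_1 + \beta_3 A
\ \text{decreases as } A \text{ increases.}
```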