Hepatic consequences of COVID-19 infection. Lapping or biting?
The outbreak of coronavirus disease 2019 (COVID-19) that began in December 2019 in China placed emphasis on liver involvement during infection. This review discusses the underlying mechanisms linking COVID-19 to liver dysfunction according to the information currently available, pending further studies. The manifestations of liver damage are usually mild (moderately elevated serum aspartate aminotransferase activity) and generally asymptomatic, but a few patients may still develop severe liver problems, for which therapeutic options can be limited. Liver dysfunction may affect about one-third of patients, with a prevalence greater in men than in women and in the elderly. The mechanisms of damage are complex and include direct cholangiocyte damage as well as coexisting conditions such as the use of antiviral drugs, the systemic inflammatory response, respiratory distress syndrome-induced hypoxia, sepsis, and multiple organ dysfunction. If liver involvement appears during COVID-19 infection, attention is required, particularly in older patients and in those with a pre-existing history of liver disease. The onset of liver damage during COVID-19 infection worsens the prognosis and lengthens the hospital stay.
Media Reporting of Health Interventions: Signs of Improvement, but Major Problems Persist
Background: Studies have persistently shown deficiencies in medical reporting by the mainstream media. We have been monitoring the accuracy and comprehensiveness of medical news reporting in Australia since mid-2004. This analysis of more than 1200 stories in the Australian media compares different types of media outlets and examines reporting trends over time. Methods and Findings: Between March 2004 and June 2008, 1230 news stories were rated on a national medical news monitoring web site, Media Doctor Australia. These covered a variety of health interventions ranging from drugs, diagnostic tests and surgery to dietary and complementary therapies. Each story was independently assessed by two reviewers using ten criteria. Scores were expressed as percentages of total assessable items deemed satisfactory according to a coding guide. Analysis of variance was used to compare mean scores and Fisher's exact test to compare proportions. Trends over time were analysed using unweighted linear regression. Broadsheet newspapers had the highest average satisfactory scores, 58% (95% CI 56–60%), compared with tabloid newspapers and online news outlets at 48% (95% CI 44–52%) and 48% (95% CI 46–50%) respectively. The lowest scores were assigned to stories broadcast by human interest/current affairs television programmes (average score 33%; 95% CI 28–38%). While there was a non-significant increase in average scores for all outlets, a significant improvement was seen in the online news media: a rise of 5.1% (95% CI 1.32–8.97; P = 0.009). Statistically significant improvements were seen in coverage of the potential harms of interventions, the availability of treatment or diagnostic options, and accurate quantification of benefits. Conclusion: Although the overall quality of medical reporting in the general media remains poor, this study showed modest improvements in some areas. However, the most striking finding was the continuing very poor coverage of health news by commercial current affairs television programmes.
The Structure of the EU Mediasphere
Background.
A trend towards automation of scientific research has recently resulted in what has been termed "data-driven inquiry" in various disciplines, including physics and biology. The automation of many tasks has been identified as a possible future also for the humanities and the social sciences, particularly in those disciplines concerned with the analysis of text, due to the recent availability of millions of books and news articles in digital format. In the social sciences, the analysis of news media is done largely by hand and in a hypothesis-driven fashion: the scholar needs to formulate a very specific assumption about the patterns that might be in the data, and then set out to verify whether they are present.
Methodology/Principal Findings.
In this study, we report what we believe is the first large-scale content analysis of cross-linguistic text in the social sciences, using various artificial intelligence techniques. We analyse 1.3 million news articles in 22 languages and detect a clear structure in the choice of stories covered by the various outlets. This structure is significantly affected by objective national, geographic, economic and cultural relations among outlets and countries; e.g., outlets from countries sharing strong economic ties are more likely to cover the same stories. We also show that deviation from average content is significantly correlated with membership in the eurozone, as well as with the year of accession to the EU.
Conclusions/Significance.
While independently making a multitude of small editorial decisions, the leading media of the 27 EU countries, over a period of six months, shaped the contents of the EU mediasphere in a way that reflects their deep geographic, economic and cultural relations. Detecting these subtle signals in a statistically rigorous way would be out of the reach of traditional methods. This analysis demonstrates the power of the available methods for significant automation of media content analysis.
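The core measurement behind claims like "outlets from countries sharing strong economic ties are more likely to cover the same stories" is some similarity between outlets' story-coverage profiles. A minimal sketch, assuming cosine similarity over per-story coverage counts (the outlet names and counts below are invented, not the study's data):

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two story-coverage count vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sqrt(sum(a * a for a in u))
    nv = sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

# Hypothetical counts: how often each of five clustered stories was covered.
outlet_de = [10, 4, 0, 2, 6]   # an outlet in country A
outlet_fr = [8, 5, 1, 1, 7]    # a closely tied country: similar profile
outlet_us = [1, 0, 9, 7, 0]    # a distant outlet: little overlap

print(cosine(outlet_de, outlet_fr))   # high overlap
print(cosine(outlet_de, outlet_us))   # low overlap
```

Correlating such pairwise similarities with economic or geographic ties is then a standard statistical step; the study's actual pipeline (multilingual story clustering first) is considerably more involved.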
British press attitudes towards the EU's global presence: from the Russian–Georgian War to the 2009 Copenhagen Summit
This article surveys the way in which the British print media have presented the European Union (EU)'s global presence in the international arena by analysing two case studies which reflect two very distinctive areas of EU foreign policy: global climate change policy and policy towards Russia. It employs frame analysis, allowing for the identification of the way in which the discourse of the press was organized around a series of central opinions and ideas. Frames underscore the connections made by journalists between different events, policies or phenomena and their possible interpretations. The analysis highlights that acting through the common framework of the EU rather than unilaterally was the strategy preferred by the British press. These findings are in stark contrast with the deep Euroscepticism which characterizes press attitudes towards most policy areas, and which is often considered to be rooted in British political culture, the media system, public opinion or the longstanding tradition of viewing the European continent as 'the other'.
Journalism as usual: The use of social media as a newsgathering tool in the coverage of the Iranian elections in 2009
The Iranian elections of June 2009 and the ensuing protests were hailed as the 'Twitter revolution' in the media in the United Kingdom. However, this study of the use of sources by journalists covering the events shows that despite their rhetoric about the importance of social media in alerting the global community to events in Iran, journalists themselves did not turn to social media for their own information, but relied mostly on traditional sourcing practices: political statements, expert opinion and a handful of 'man on the street' quotes for colour.
This study shows that although the mythology of the Internet as a place where all voices are equal and have equal access to public discourse persists – a kind of idealized 'public sphere' – the sourcing practices of journalists and the traditions of coverage continue to ensure that traditional voices and sources are heard above the crowd.
Comparability of Raman Spectroscopic Configurations: A Large Scale Cross-Laboratory Study
The variable configuration of Raman spectroscopic platforms is one of the major obstacles to establishing Raman spectroscopy as a valuable physicochemical method in real-world scenarios such as clinical diagnostics. For real-world applications such as diagnostic classification, the models should ideally be usable to predict data from different setups. Whether this is done by training a rugged model with data from many setups or by a primary–replica strategy, in which models are developed on a 'primary' setup and the test data are generated on 'replicate' setups, it is only possible if the Raman spectra from different setups are consistent, reproducible, and comparable. However, Raman spectra can be highly sensitive to the measurement conditions, and they change from setup to setup even if the same samples are measured. Although increasingly recognized as an issue, the dependence of Raman spectra on the instrumental configuration is far from fully understood, and great effort is needed to address the resulting spectral variations and to correct for them. To make the severity of the situation clear, we present a round-robin experiment investigating the comparability of 35 Raman spectroscopic devices with different configurations in 15 institutes within seven European countries from the COST (European Cooperation in Science and Technology) action Raman4clinics. The experiment was designed to accommodate various instrumental configurations, ranging from highly confocal setups to fibre-optic-based systems with different excitation wavelengths. We illustrate the spectral variations caused by the instrumental configurations from the perspectives of peak shifts, intensity variations, peak widths, and noise levels.
We conclude this contribution with recommendations that may help to improve inter-laboratory studies. Funding: COST (European Cooperation in Science and Technology); Portuguese Foundation for Science and Technology; National Research Fund of Luxembourg (FNR); China Scholarship Council (CSC); BOKU Core Facilities Multiscale Imaging; Deutsche Forschungsgemeinschaft (DFG, German Research Foundation).
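One of the comparability metrics named above, the apparent shift of a known Raman band between two instrument configurations, can be sketched as locating the band maximum on each setup's wavenumber axis and differencing. The spectra below are synthetic, not the Raman4clinics data:

```python
# Minimal sketch: apparent shift of one Raman band between two setups.
# Wavenumber axis and intensities are synthetic toy data.

def peak_position(wavenumbers, intensities, lo, hi):
    """Return the wavenumber of maximum intensity within the window [lo, hi]."""
    window = [(w, i) for w, i in zip(wavenumbers, intensities) if lo <= w <= hi]
    return max(window, key=lambda p: p[1])[0]

wn = [995 + 0.5 * k for k in range(40)]                      # 995..1014.5 cm^-1
setup_a = [1.0 if abs(w - 1001.0) < 0.3 else 0.1 for w in wn]  # band at 1001.0
setup_b = [1.0 if abs(w - 1002.5) < 0.3 else 0.1 for w in wn]  # band at 1002.5

shift = peak_position(wn, setup_b, 995, 1014) - peak_position(wn, setup_a, 995, 1014)
print(f"peak shift between setups: {shift:.1f} cm^-1")
```

In practice the argmax would be replaced by a sub-pixel peak fit, and the same windowing idea extends to the study's other metrics (intensity ratios, peak widths, noise levels).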
Web Searching: A Quality Measurement Perspective
The purpose of this paper is to describe various quality measures for search engines and to ask whether these are suitable. We focus especially on user needs and users' behaviour with web search engines. The paper presents an extensive literature review as well as a first quality measurement model. Findings include that search engine quality cannot be measured by retrieval effectiveness alone (the quality of the results), but should also consider index quality, the quality of the search features and search engine usability.
For each of these areas, empirical results from studies conducted in the past, as well as from our own research, are presented. These results have implications for the evaluation of search engines and for the development of better search systems that give users the best possible search experience.
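Retrieval effectiveness, the measure the paper argues is insufficient on its own, is classically computed as precision at a cutoff k. A minimal sketch with an invented ranked list and invented relevance judgements:

```python
# Illustrative sketch of precision@k, one classic retrieval-effectiveness
# measure. The result list and relevance judgements are invented.

def precision_at_k(results, relevant, k):
    """Fraction of the top-k ranked results judged relevant."""
    top = results[:k]
    return sum(1 for doc in top if doc in relevant) / k

results = ["d3", "d7", "d1", "d9", "d4"]   # ranked result list for a query
relevant = {"d1", "d3", "d4"}              # assessor judgements

print(precision_at_k(results, relevant, 3))   # 2 of the top 3 are relevant
```

The paper's point is that a high score here says nothing about index freshness and coverage, feature quality, or usability, which is why the proposed model adds those dimensions.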