
    Spanish Communication Academia: Scientific Productivity vs. Social Activity

    At a time when academic activity in the field of communication is assessed principally by the impact of scientific journals, scientific media, and the productivity of researchers, the question arises as to whether social factors condition scientific activity as much as these objective elements do. This investigation analyzes the influence of scientific productivity and social activity in the field of communication. We identify a social network of researchers from a compilation of doctoral theses in communication and calculate the scientific production of the 180 most active researchers who sit on doctoral committees. Social network analysis is then used to study the relations formed on these doctoral thesis committees. The results suggest that social factors, rather than individual scientific productivity, positively influence such a key academic and scientific activity as the award of doctoral degrees. Our conclusions point to a disconnect between researchers' scientific productivity and international scope on the one hand and their role in the social network on the other. Nevertheless, the consequences of this situation are tempered by the non-hierarchical structure of relations between communication scientists.
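
    The committee-based network construction described above can be sketched as follows. This is an illustrative toy example, not the study's data or code: committee membership lists are invented, and weighted degree stands in for whatever centrality measures the authors actually used.

```python
import itertools
from collections import defaultdict

# each inner list is one (hypothetical) doctoral thesis committee
committees = [
    ["A", "B", "C"],
    ["B", "C", "D"],
    ["A", "D", "E"],
]

# co-membership graph: edge weight = number of committees shared
weight = defaultdict(int)
for committee in committees:
    for u, v in itertools.combinations(sorted(committee), 2):
        weight[(u, v)] += 1

# weighted degree of each researcher as a simple proxy for their
# role (centrality) in the social network
degree = defaultdict(int)
for (u, v), w in weight.items():
    degree[u] += w
    degree[v] += w

ranking = sorted(degree, key=degree.get, reverse=True)
```

    A productivity score per researcher could then be correlated against `degree` to probe the disconnect the abstract reports.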

    Rotochemical heating with a density-dependent superfluid energy gap in neutron stars

    When a rotating neutron star loses angular momentum, the reduction of the centrifugal force makes it contract. This perturbs each fluid element, raising the local pressure and causing departures from beta equilibrium, which induce reactions that release heat (rotochemical heating). This effect was previously studied by Fernández and Reisenegger for neutron stars of non-superfluid matter and by Petrovich and Reisenegger for superfluid matter; in both cases the system reaches a quasi-steady state, corresponding to a partial equilibration between compression, due to the loss of angular momentum, and the reactions that try to restore equilibrium. However, Petrovich and Reisenegger assume a constant value of the superfluid energy gap, whereas theoretical models predict density-dependent gap amplitudes, and therefore gaps that depend on the location in the star. In this work, we try to discriminate between several proposed gap models by comparing predicted surface temperatures to the value measured for the nearest millisecond pulsar, J0437-4715.
    Comment: 2 pages, 1 figure. VIII Symposium on Nuclear Physics and Applications: Nuclear and Particle Astrophysics. To appear in the American Institute of Physics (AIP) conference proceedings.
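
    The quasi-steady state mentioned above can be summarized schematically. The notation here is illustrative (a generic npe-matter sketch), not necessarily the exact formulation of the cited papers:

```latex
% Schematic quasi-steady balance (illustrative notation):
% spin-down compression builds up a chemical imbalance, e.g.
\eta_{npe} = \delta\mu_n - \delta\mu_p - \delta\mu_e ,
% which powers non-equilibrium reactions with heating luminosity
L_H \simeq \eta_{npe}\,\Delta\Gamma ,
% and in the quasi-steady state this heating balances the losses:
L_H \simeq L_\gamma + L_\nu .
```

    The predicted surface temperature follows from the photon luminosity $L_\gamma$, which is what the paper compares against the measurement for J0437-4715.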

    Big Data Optimization : Algorithmic Framework for Data Analysis Guided by Semantics

    Thesis defense date: 9 November 2018. Over the past decade, the rapid rise of data creation in all domains of knowledge (traffic, medicine, social networks, industry, etc.) has highlighted the need to enhance the process of analyzing large data volumes, in order to manage them more easily and, in addition, to discover new relationships hidden within them. Optimization problems, which are commonly found in today's industry, are not unrelated to this trend, so Multi-Objective Optimization Algorithms (MOAs) should take this new scenario into account. This means that MOAs have to deal with problems that have either various data sources (typically streaming) or huge amounts of data. These features, in particular, are found in Dynamic Multi-Objective Problems (DMOPs), which are related to Big Data optimization problems, mostly with regard to velocity and variability. When dealing with DMOPs, whenever changes in the environment affect the solutions of the problem (i.e., the Pareto set, the Pareto front, or both), and therefore the fitness landscape, the optimization algorithm must react to adapt the search to the new features of the problem. Big Data analytics are long and complex processes; therefore, with the aim of simplifying them, they are carried out through a series of steps. A typical analysis is composed of data collection, data manipulation, data analysis, and finally result visualization. In the process of creating a Big Data workflow, the analyst should bear in mind the semantics of the problem domain knowledge and its data. An ontology is the standard way of describing the knowledge about a domain. As the overall goal of this PhD thesis, we are interested in investigating the use of semantics in the process of Big Data analysis, focused not only on machine learning analysis but also on optimization.
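
    The react-to-change behavior that DMOPs demand can be sketched minimally. Everything here is invented for illustration (the toy problem, the sentinel-based detector, the names); it is not the thesis's algorithm, only an instance of the general scheme: detect a landscape change, then re-evaluate so the search adapts.

```python
# Hypothetical time-dependent bi-objective problem: the Pareto-optimal
# x values lie in [t, t + 2], so the Pareto set moves when t changes.
def evaluate(x, t):
    return ((x - t) ** 2, (x - t - 2.0) ** 2)

def changed(sentinels, archive, t):
    # Re-evaluate stored sentinel solutions; any difference in their
    # objective values signals a change in the fitness landscape.
    return any(evaluate(x, t) != old for x, old in zip(sentinels, archive))

t = 0.0
sentinels = [0.0, 1.0, 2.0]             # solutions kept only for detection
archive = [evaluate(x, t) for x in sentinels]

t = 1.0                                 # the environment changes
if changed(sentinels, archive, t):
    # Typical reaction: re-evaluate (and, in a full MOA, also re-seed
    # part of the population) so the search tracks the new optimum.
    archive = [evaluate(x, t) for x in sentinels]
```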

    Origin of passivation in hole-selective transition metal oxides for crystalline silicon heterojunction solar cells

    Transition metal oxides (TMOs) have recently been demonstrated to be a good alternative to boron/phosphorus-doped layers in crystalline silicon heterojunction solar cells. In this work, the interface between n-type c-Si (n-Si) and three thermally evaporated TMOs (MoO3, WO3, and V2O5) was investigated by transmission electron microscopy, secondary-ion mass spectrometry, and x-ray photoelectron spectroscopy. For the oxides studied, surface passivation of n-Si was attributed to an ultra-thin (1.9–2.8 nm) SiOx~1.5 interlayer formed by chemical reaction, leaving oxygen-deficient species (MoO, WO2, and VO2) as by-products. Carrier selectivity was also inferred from the inversion layer induced on the n-Si surface, a result of Fermi-level alignment between two materials with dissimilar electrochemical potentials (work function difference ΔΦ = 1 eV). Therefore, the hole-selective and passivating functionality of these TMOs, in addition to their ambient-temperature processing, could prove an effective means to lower the cost and simplify the processing of solar cells.

    Interventions to reduce the impact of dual practice in the public health sector

    INTRODUCTION: Dual practice (i.e., clinicians working in both the public and private sectors) has an impact on health services in terms of quality and costs. However, the effectiveness of regulatory policies has not been proven. METHODS: We searched Epistemonikos, the largest database of systematic reviews in health, which is maintained by screening multiple information sources, including MEDLINE, EMBASE, and Cochrane, among others. We extracted data from the systematic reviews, reanalyzed the data of primary studies, conducted a meta-analysis, and generated a summary-of-findings table using the GRADE approach. RESULTS AND CONCLUSIONS: We identified three systematic reviews that together included 23 primary studies, all of which are observational. We concluded that it is not clear whether interventions to reduce the negative consequences of dual practice in the health system are effective, because the certainty of the available evidence is very low.
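
    The meta-analytic pooling step mentioned above is typically an inverse-variance weighted average. This sketch uses invented effect estimates and standard errors (it is not the review's data), just to show the mechanics of fixed-effect pooling:

```python
import math

# Hypothetical log odds ratios and standard errors from three
# observational studies; values are illustrative only.
effects = [0.30, 0.10, 0.45]
ses     = [0.20, 0.15, 0.25]

weights = [1 / se ** 2 for se in ses]          # inverse-variance weights
pooled  = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
se_pooled = math.sqrt(1 / sum(weights))

# 95% confidence interval for the pooled effect
ci = (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)
```

    GRADE then rates the certainty of this pooled estimate; starting from observational studies, certainty begins low, which is consistent with the review's "very low" conclusion.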

    Report of MIRACLE team for Geographical IR in CLEF 2006

    The main objective of the designed experiments is to test the effect of geographical information retrieval from documents that contain geographical tags. In the designed experiments we try to isolate geographical retrieval from textual retrieval by replacing all geo-entity textual references in the topics with associated tags and splitting the retrieval process into two phases: textual retrieval from the textual part of the topic without geo-entity references, and geographical retrieval from the tagged text generated by the topic tagger. Textual and geographical results are combined applying different techniques: union, intersection, difference, and external join. Our geographic information retrieval system consists of a set of basic components organized in two categories: (i) linguistic tools oriented to textual analysis and retrieval, and (ii) resources and tools oriented to geographical analysis. These tools are combined to carry out the different phases of the system: (i) document and topic analysis, (ii) relevant document retrieval, and (iii) result combination. If we compare the results achieved with those of the last campaign, we can assert that mean average precision worsens when the textual geo-entity references are replaced with geographical tags. Part of this worsening is due to the fact that our experiments return zero relevant documents if no documents satisfy the geographical sub-query. But if we only analyze the results of queries that satisfied both textual and geographical terms, we observe that the designed experiments retrieve relevant documents quickly, improving R-Precision values. We conclude that the developed geographical information retrieval system is very sensitive to textual georeferences and therefore it is necessary to improve the named entity recognition module.
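
    The set-based combination operators named above (union, intersection, difference) can be sketched on ranked result lists. Document ids and scores are invented; summing scores on overlap is one plausible choice, not necessarily the MIRACLE system's exact scoring:

```python
# Toy textual and geographical result lists: doc id -> retrieval score.
textual = {"d1": 0.9, "d2": 0.7, "d3": 0.4}
geo     = {"d2": 0.8, "d3": 0.6, "d4": 0.5}

# Union: any doc retrieved by either phase, scores summed where present.
union = {d: textual.get(d, 0) + geo.get(d, 0)
         for d in textual.keys() | geo.keys()}

# Intersection: only docs satisfying both the textual and geo sub-queries.
intersection = {d: textual[d] + geo[d]
                for d in textual.keys() & geo.keys()}

# Difference: docs matching the textual sub-query but not the geo one.
difference = {d: textual[d] for d in textual.keys() - geo.keys()}

def ranked(results):
    """Return doc ids ordered by descending combined score."""
    return sorted(results, key=results.get, reverse=True)
```

    The zero-result behavior the abstract reports corresponds to the intersection operator: if the geographical sub-query matches nothing, `intersection` is empty regardless of the textual results.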

    Report of MIRACLE team for the Ad-Hoc track in CLEF 2006

    This paper presents the 2006 MIRACLE team's approach to the Ad-Hoc Information Retrieval track. The experiments for this campaign continue testing our IR approach. First, a baseline set of runs is obtained, including standard components: stemming, transforming, filtering, entity detection and extraction, and others. Then, an extended set of runs is obtained using several types of combinations of these baseline runs. The improvements introduced for this campaign are few: we have integrated an entity recognition and indexing prototype tool into our tokenizing scheme, and we have run more combination experiments for the robust multilingual case than in previous campaigns. However, no significant improvements have been achieved. For this campaign, runs were submitted for the following languages and tracks:
    - Monolingual: Bulgarian, French, Hungarian, and Portuguese.
    - Bilingual: English to Bulgarian, French, Hungarian, and Portuguese; Spanish to French and Portuguese; and French to Portuguese.
    - Robust monolingual: German, English, Spanish, French, Italian, and Dutch.
    - Robust bilingual: English to German, Italian to Spanish, and French to Dutch.
    - Robust multilingual: English to robust monolingual languages.
    We still need to work harder to improve some aspects of our processing scheme, the most important being, to our knowledge, entity recognition and normalization.
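
    Combining baseline runs, as described above, is commonly done with data-fusion schemes such as CombSUM or CombMNZ. The sketch below shows CombMNZ on invented run data; the abstract does not say which fusion method MIRACLE used, so this is an assumption about one standard technique:

```python
# Three hypothetical baseline runs: doc id -> normalized score.
runs = [
    {"d1": 1.0, "d2": 0.5},
    {"d2": 0.9, "d3": 0.4},
    {"d1": 0.8, "d2": 0.3, "d3": 0.2},
]

fused = {}
for doc in {d for run in runs for d in run}:
    scores = [run[doc] for run in runs if doc in run]
    # CombMNZ: sum of scores (CombSUM) multiplied by the number of
    # runs that retrieved the document, rewarding agreement.
    fused[doc] = len(scores) * sum(scores)

ranking = sorted(fused, key=fused.get, reverse=True)
```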