
    The European business cycle and greenhouse gas emissions

    Our planet is undergoing a major transformation. Greenhouse gas (GHG) emission levels are at unprecedented heights, producing devastating effects on our climate and biodiversity. This study analyses the relationship between GHG emissions and the business cycle at the European level, using annual Gross Domestic Product (GDP) per capita as a proxy for the business cycle. With the help of the Hodrick-Prescott (HP) filter, each series is decomposed into its trend and cyclical components. Three results emerge from the analysis of the variables: first, the GHG emissions cycle is procyclical with respect to the business cycle in most countries of the European Union. Second, the relative volatility of the emissions cycle is greater than the volatility of GDP per capita in most economies. Finally, there is a decreasing correlation between the procyclicality of emissions and GDP per capita.
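
    A minimal sketch of the kind of trend-cycle decomposition described above, using the Hodrick-Prescott filter from statsmodels on hypothetical annual series; the series names and the smoothing parameter lamb = 6.25 (a value commonly used for annual data) are assumptions, not taken from the study:

        # Hedged sketch: HP-filter decomposition and cyclical co-movement statistics.
        # Series names and the lambda value are illustrative assumptions.
        import numpy as np
        import pandas as pd
        from statsmodels.tsa.filters.hp_filter import hpfilter

        def cyclical_statistics(gdp_pc: pd.Series, ghg: pd.Series, lamb: float = 6.25):
            """Return the GDP/GHG cycle correlation and the relative volatility."""
            # Work in logs so the cycle reads as a percentage deviation from trend.
            gdp_cycle, _ = hpfilter(np.log(gdp_pc), lamb=lamb)
            ghg_cycle, _ = hpfilter(np.log(ghg), lamb=lamb)
            corr = gdp_cycle.corr(ghg_cycle)              # > 0 indicates procyclicality
            rel_vol = ghg_cycle.std() / gdp_cycle.std()   # > 1: emissions cycle more volatile
            return corr, rel_vol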

    Surface photometry of radio loud elliptical galaxies from the B2 sample

    V-band CCD imaging is presented for 72 galaxies from the B2 radio sample (Colla et al.; Fanti et al.), with redshifts up to 0.2 and radio powers P_408 = 10^23–10^26.5 W Hz^-1. According to the morphology on the optical images, 57 galaxies are classified as ellipticals, 6 as spirals, and 7 as irregulars. Surface photometry of the sample of ellipticals was obtained by fitting ellipses to the light distribution. The light profiles of these galaxies generally follow a de Vaucouleurs law, although in three cases the profiles show large excesses relative to the r^1/4 law at large radii. The fitted μ_e and r_e parameters for the de Vaucouleurs galaxies are given in the paper. Three of the ellipticals show a bright nucleus. One of them is a known broad-line radio galaxy (B2 1833+32) and the remaining two are Markarian galaxies, classified in the literature as BL Lac objects (B2 1101+38 and B2 1652+39). The radial profiles of ellipticity, position angle, and the B_4 term of the Fourier analysis are presented in the paper, and the morphological peculiarities of the ellipticals are described, including the presence of shells, tails, nuclear dust, isophote twisting, off-centering, and boxiness or diskiness of the isophotes. Only one of the galaxies in this work is included in the subsample of B2 radio galaxies with well-defined jets (Parma et al.). In this sense the present sample complements the sample of 24 radio galaxies with well-defined radio jets in Parma et al., for which a similar study was presented in González-Serrano et al. The irregular galaxy B2 0916+33 appears to be misclassified, and we suggest that the correct identification of the radio source is a nearby point-like object with V=18.45 mag. The spiral galaxy associated with B2 1441+26 is also misclassified. A point-like optical object with V=18.88 mag, located at arcsec from the original identification and coincident with the radio core, is the most probable counterpart.
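
    The de Vaucouleurs law referred to above has the standard form μ(r) = μ_e + 8.3268[(r/r_e)^{1/4} - 1]. A hedged sketch of fitting μ_e and r_e to an azimuthally averaged surface-brightness profile; the radii and magnitudes below are placeholders, not data from the paper:

        # Sketch of fitting the de Vaucouleurs r^(1/4) law to a surface-brightness profile.
        # Array contents are placeholders, not measurements from the paper.
        import numpy as np
        from scipy.optimize import curve_fit

        def de_vaucouleurs(r, mu_e, r_e):
            """Surface brightness (mag/arcsec^2) at radius r (arcsec)."""
            return mu_e + 8.3268 * ((r / r_e) ** 0.25 - 1.0)

        r = np.array([1.0, 2.0, 4.0, 8.0, 16.0, 32.0])       # radii (arcsec), placeholder
        mu = np.array([19.2, 20.1, 21.2, 22.4, 23.8, 25.3])  # mu(r), placeholder
        (mu_e, r_e), _ = curve_fit(de_vaucouleurs, r, mu, p0=(21.0, 10.0))
        # mu_e, r_e are the effective surface brightness and effective radius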

    Theoretical study of the low‐lying states of trans‐1,3‐butadiene

    We present extensive ab initio calculations on the low-lying electronic states of trans-1,3-butadiene within the multireference configuration interaction (MRCI) framework, selecting the configurations with a perturbative criterion. The X ¹Ag ground state and the 1 ³Bu, 1 ³Ag, 2 ¹Ag, and 1 ¹Bu valence excited states have been calculated at a fixed geometry. The results obtained are in good agreement with previous experimental and calculated values, and could help in understanding polyene spectroscopy, photochemistry, and photophysics. We also discuss the advantages of an MRCI method in which the most important contributions to the total MRCI wave function, perturbatively selected, are treated variationally, while the remaining terms are evaluated by a perturbational approach. Furthermore, a criterion for building a correlation-consistent configuration interaction space is stated, which yields a reliable approximation for obtaining accurate energy differences. Several one-electron molecular-orbital basis sets are tried in order to select the most adequate one to describe each state.
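
    The perturbative selection step described above (keep the configurations that contribute most, treat them variationally, and estimate the rest by perturbation theory) can be illustrated schematically. This is a generic, CIPSI-style sketch under assumed names and matrix elements, not the authors' code:

        # Schematic sketch of perturbative configuration selection: a configuration
        # is added to the variational space when its second-order energy contribution
        # exceeds a threshold; the remainder is accumulated as a perturbative sum.
        # All names and the Hamiltonian-element function are illustrative assumptions.

        def select_configurations(candidates, h_elem, e_ref, threshold=1e-5):
            """Split candidates into a variational set and a PT2 correction.

            candidates: iterable of (config, e_config) pairs outside the reference space
            h_elem:     function returning <Psi_ref|H|config>
            e_ref:      variational energy of the current reference wave function
            """
            variational, e_pt2 = [], 0.0
            for config, e_config in candidates:
                contrib = h_elem(config) ** 2 / (e_ref - e_config)  # 2nd-order estimate
                if abs(contrib) > threshold:
                    variational.append(config)   # important: treat variationally
                else:
                    e_pt2 += contrib             # small: fold into the perturbative sum
            return variational, e_pt2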

    A multistrategy approach for digital text categorization

    The goal of the research described here is to develop a multistrategy classifier system that can be used for document categorization. The system automatically discovers classification patterns by applying several empirical learning methods to different representations of preclassified documents. The learners work in parallel, each carrying out its own feature selection based on evolutionary techniques and then obtaining a classification model. When classifying documents, the system combines the predictions of the learners, again by applying evolutionary techniques. The system relies on a modular, flexible architecture that makes no assumptions about the design of the learners or the number of learners available and guarantees independence from the thematic domain.
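
    A hedged sketch of the general architecture described above: several independently trained learners whose per-document predictions are combined by a weighted vote, where the weight vector is the kind of object an evolutionary combiner could optimise. The names, categories, and weights are assumptions, not the authors' design:

        # Sketch of combining several classifiers' predictions with evolved weights.
        # The learners, categories and weights below are placeholders.
        from collections import defaultdict

        def combine_predictions(predictions, weights):
            """predictions: one category label per learner;
            weights: one non-negative weight per learner (e.g. evolved by a GA)."""
            votes = defaultdict(float)
            for label, w in zip(predictions, weights):
                votes[label] += w
            return max(votes, key=votes.get)  # weighted majority vote

        # Example: three learners disagree; the evolved weights decide the category.
        print(combine_predictions(["sports", "politics", "sports"], [0.2, 0.5, 0.4]))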

    Public sector wage gaps in Spanish regions

    This paper provides an approximation to the measurement of public sector wage gaps in Spanish regions. Using data from the European Community Household Panel, it is shown that the balance between what private firms pay in the local market and what the public sector pays differs substantially across areas of the country. Public sector wage differences among Spanish regions are mostly due to differences in returns, not to differences in characteristics or to selection effects, and are not constant across gender, educational levels, or occupations. Moreover, in those regions where Regional Governments have a higher weight in public employment, public wage gaps are higher and public employers pay higher returns. There also seems to be a positive cross-regional correlation between public wage gaps and unemployment, and a negative one between labour productivity and public wage gaps. Hence, a tentative conclusion is that the incentives to select into the public sector are higher in the low-productivity regions, precisely those where the scarcity of human capital in the private sector may be the most important factor explaining economic backwardness.
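
    The split between "differences in returns" and "differences in characteristics" referred to above is conventionally expressed with a decomposition in the spirit of Blinder-Oaxaca. A stylised version (the paper's exact specification is not reproduced here) is:

        \bar{w}_{pub} - \bar{w}_{priv}
          = \underbrace{\bar{X}_{pub}\,(\hat{\beta}_{pub} - \hat{\beta}_{priv})}_{\text{differences in returns}}
          + \underbrace{(\bar{X}_{pub} - \bar{X}_{priv})\,\hat{\beta}_{priv}}_{\text{differences in characteristics}}

    where w is the log wage, X the vector of observed worker characteristics, and β the estimated returns to those characteristics in each sector.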

    Evolutionary learning of document categories

    This paper deals with a supervised learning method devoted to producing categorization models of text documents. The goal of the method is to use a suitable numerical measurement of example similarity to find centroids describing different categories of examples. The centroids are not abstract or statistical models, but rather consist of bits of examples. The centroid-learning method is based on a Genetic Algorithm for Texts (GAT). The categorization system infers a model by applying this genetic algorithm to each set of preclassified documents belonging to a category. The models thus obtained are the category centroids that are used to predict the category of a test document. The experimental results validate the utility of this approach for classifying incoming documents.
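
    A minimal sketch of centroid-based prediction as described above: one centroid per category (given directly here; in the paper they are produced by the genetic algorithm), with a test document assigned to the category of its most similar centroid. The similarity measure and data structures are assumptions, not the paper's:

        # Sketch of predicting a document's category from per-category centroids.
        # Centroids are assumed to be bags of terms distilled from training examples.

        def jaccard(a: set, b: set) -> float:
            """A simple example-similarity measure (the paper's measure may differ)."""
            return len(a & b) / len(a | b) if a | b else 0.0

        def predict(document_terms: set, centroids: dict) -> str:
            """centroids maps category name -> set of centroid terms."""
            return max(centroids, key=lambda cat: jaccard(document_terms, centroids[cat]))

        centroids = {"economy": {"market", "wage", "gdp"}, "biology": {"cell", "gene"}}
        print(predict({"wage", "gap", "market"}, centroids))  # -> "economy"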

    A computational model of cognitive reading for the automatic representation of texts

    Modelling natural language on computers involves certain restrictions due to the logical structure and the time and space limitations of machines, in addition to the intrinsic complexity of language. One of the biggest problems in such modelling is the representation of semantics. The first connectionist models of language were close to human cognition, but they were not general and efficient enough for real applications. These early natural language processing systems used association networks as their representation formalism. Owing to the storage and processing limitations of the computers of that time, and to the growth of electronically stored textual information, language processing systems adopted mathematical and statistical formalisms. Nowadays, because of this growing amount of textual information, systems capable of processing texts are extremely useful. Until relatively recently, most of these systems used the classical "bag of words" representation of texts, a vector-type formalism that only takes into account the occurrences of words independently of one another. In the mid-nineties, word hyperspaces emerged as an alternative representation formalism to the traditional "bag of words". LSA (Latent Semantic Analysis) was the precursor of them all, followed by HAL (Hyperspace Analogue to Language), PMI-IR, Random Indexing, WAS (Word Association Space) and ICAN (Incremental Construction of an Associative Network), among others. These systems build a matrix representation of the linguistic semantic knowledge stored in a given collection of texts. This hyperspace takes into account the relations between words and the syntactic and semantic context in which they appear. However, these systems also represent texts as vectors, carrying out operations on the rows and columns of the matrix corresponding to the words of the documents. Although the hyperspace representation contains much more information than the traditional one, since the vector values are the result of the interaction between words and context, texts are still presented as an unstructured set of numbers. Even so, hyperspace-based systems have brought a significant improvement over systems based on the classical representation. Of the systems mentioned above, only ICAN introduces a structural representation, storing knowledge as a contextual associative network of words rather than as a matrix. This model, unlike the other systems mentioned, makes it possible to update the knowledge without rebuilding it from scratch. Despite the progress made with word hyperspaces, human beings still perform natural language processing tasks, such as text classification or information retrieval, much more accurately than computers, although of course more slowly. It is hard to conceive of linguistic knowledge being represented as a matrix in the human brain, or of reading consisting of mathematical operations on that matrix.
Reading is a sequential process of perception over time, during which mental mechanisms build images and inferences that are reinforced, updated or discarded until the reading of the text is concluded, at which point the mental image generated allows human beings to summarise or classify the text, retrieve similar documents, or simply express opinions about it. This is the philosophy underlying the system presented in this thesis. This system, called SILC (Sistema de Indexación por Lectura Cognitiva, a cognitive-reading indexing system), is loosely inspired by the formalism suggested by the ICAN system. What this doctoral thesis proposes is a computational model of reading that builds a representation of the semantics of a text as the result of a process over time. This representation has a structure that makes it possible to describe the relations between the concepts read and their level of significance at each moment of the reading process. Other computational models of reading exist whose aim is more theoretical than applied. Most of them start from the connectionist Construction-Integration model and focus on different phases or goals of reading. All these systems reveal the great variety and complexity of the cognitive processes involved in reading. The model proposed in this thesis, SILC, is a simple method that includes only some of these cognitive processes and, although it aims to be useful in practical applications, it is inspired by human beings and tries to resemble their way of proceeding more closely than the other systems in the same field of application. The model implemented by SILC tries to simulate, in part, high-level cognitive processes that operate over time. First, the system builds a conceptual association network as a base linguistic memory from a collection of texts representing the semantic knowledge space. Next, the model generates representations of the input texts as networks of concepts with activation levels, which capture their degree of semantic significance. To do so, the model uses the previously built linguistic semantic knowledge, making inferences over it by propagating through the network the activation of the concepts read in sequential order. The generated representation is later used to index documents in order to classify them automatically. Traditional indexing methods represent texts as the result of mathematical processes. Since human beings far outperform computers in natural language processing tasks, the SILC model draws on human cognition to improve its effectiveness in these tasks. Experiments have been carried out to compare the model with human subjects, both during reading, through the prediction or inference of concepts, and at the end of reading, through comparison with summaries produced by the subjects. The results show that the system is suitable for approximately modelling human behaviour in reading, and they support SILC's starting hypothesis: the more the system resembles human beings, the better it will perform practical language tasks. The results also show that the system is suitable as an experimental framework for validating hypotheses related to cognitive aspects of reading.
Other experiments on practical applications have shown that, once the model's parameters have been optimised, the generated representation achieves better text classification results than other representations generated by existing systems. Three measures of semantic similarity between texts have been defined on the basis of the representations generated by SILC. The experimental results show that the best of them is more effective and efficient than other existing similarity measures. Moreover, the synergy of this measure with the implemented reading model makes SILC suitable for application to real natural language processing tasks.
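
    A hedged sketch of the kind of spreading-activation step the abstract describes: concepts read in sequence activate their neighbours in a previously built association network. The graph structure, decay factor, and function names are illustrative assumptions, not SILC's actual implementation:

        # Sketch of sequential reading with spreading activation over an association
        # network. The network, weights and decay value are placeholders.
        from collections import defaultdict

        def read_text(concepts, network, decay=0.5):
            """concepts: words in reading order; network: word -> {neighbour: weight}.
            Returns an activation level per concept, built incrementally."""
            activation = defaultdict(float)
            for word in concepts:
                activation[word] += 1.0                      # direct activation from reading
                for neighbour, weight in network.get(word, {}).items():
                    activation[neighbour] += decay * weight  # inferred, weaker activation
            return dict(activation)

        network = {"bank": {"money": 0.8, "river": 0.3}, "money": {"bank": 0.8}}
        print(read_text(["bank", "money"], network))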

    The Revolutionary Media Education Decade: From the UNESCO to the ALFAMED Curriculum for Teacher Training

    Nations across the globe are immersed in a technological revolution, intensified by the need to respond to COVID-19. In order to be critical and responsible citizens in the current media ecosystem, students must acquire and develop certain skills when consuming and producing information and when communicating through the media. This is a major challenge that educational systems worldwide have to face. Hence, new curricula in media education have been proposed to guide future teachers towards the successful acquisition of new media skills. The aims of this work are to offer a theoretical approach to this worldwide technological and media evolution over the past decade and to make an in-depth comparison between the curriculum for teachers on media and information literacy published by UNESCO (2011) and the new AlfaMed curriculum for the training of teachers in media education (2021). The work starts by providing an extensive analysis of the key elements of both curricula and of their corresponding modules, thus establishing a constructive comparison while updating them according to the needs, changes, and realities regarding digital literacy in the past decade. Finally, the chapter concludes by detailing the challenges and offering proposals for teacher training in media and information literacy. This work is framed within Alfamed (Euro-American inter-university research network on media literacy for citizenship), with the support of the R+D project "Youtubers and Instagrammers: Media Competence in Emerging Prosumers" (RTI2018-093303-B-I00), financed by the State Research Agency of the Spanish Ministry of Science, Innovation and Universities and the European Regional Development Fund (ERDF). Some results are also derived from the project "The construction of digital identity in older adults. Designing personalised learning trajectories in blended learning scenarios" (reference PIC2-2020-18), from the University of Salamanca.

    Characterizing visual asymmetries in contrast perception using shaded stimuli.

    Previous research has shown a visual asymmetry for shaded stimuli in which perceived contrast depends on the polarity of their dark and light areas (Chacón, 2004). In particular, circles filled with a top-dark luminance ramp were perceived as having higher contrast than top-light ones, although both types of stimuli had the same physical contrast. Here, using shaded stimuli, we conducted four experiments in order to find out whether perceived contrast depends on: (a) the contrast level, (b) the type of shading (continuous vs. discrete) and its degree of perceived three-dimensionality, (c) the orientation of the shading, and (d) the sign of the perceived contrast alterations. In all experiments the observers' task was to equate the perceived contrast of two sets of elements (usually shaded with opposite luminance polarity) in order to determine the point of subjective equality. Results showed that (a) there is a strong difference in perceived contrast between circles filled with a top-dark and a top-light luminance ramp, and this difference is similar across contrast levels; and (b) we also found asymmetries in contrast perception with different shaded stimuli, and this asymmetry was related not to perceived three-dimensionality but to the type of shading, being greater for continuous-shading stimuli.
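
    One common way of estimating a point of subjective equality is to fit a psychometric function to binary comparison judgements; the sketch below shows that generic approach and is not necessarily the matching procedure used in the study. The contrast levels and response proportions are placeholders:

        # Generic sketch: estimating a point of subjective equality (PSE) by fitting
        # a logistic psychometric function to comparison responses.
        # Data values and the constant-stimuli design are assumptions.
        import numpy as np
        from scipy.optimize import curve_fit

        def logistic(x, pse, slope):
            """Proportion of 'test seen as higher contrast' responses at test contrast x."""
            return 1.0 / (1.0 + np.exp(-slope * (x - pse)))

        test_contrast = np.array([0.10, 0.15, 0.20, 0.25, 0.30])  # placeholder levels
        p_higher = np.array([0.05, 0.20, 0.55, 0.85, 0.95])       # placeholder responses
        (pse, slope), _ = curve_fit(logistic, test_contrast, p_higher, p0=(0.2, 20.0))
        # pse is the test contrast perceived as equal to the reference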

    Long-term unemployment subsidies and middle-age disadvantaged workers’ health

    We estimate the labour market and health effects of a long-term unemployment (LTU) subsidy targeted at middle-aged disadvantaged workers. To do so, we exploit a Spanish reform introduced in July 2012 that raised the age eligibility threshold for the subsidy from 52 to 55. Using a within-cohort identification strategy, we show that men ineligible for the subsidy were more likely to leave the labour force. In terms of health outcomes, although we do not find impacts on hospitalizations when considering the whole sample, we do find significant results when we separate the analysis by main diagnosis and gender. More specifically, we show a 12.9% reduction in hospitalizations due to injuries as well as a drop of 2 percentage points in the probability of a mental health diagnosis for men who were eligible for the LTU subsidy. Our results highlight the role of long-term unemployment benefits in protecting the health (both physical and mental) of middle-aged, low-educated men who are in a disadvantaged position in the labour market.
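
    The within-cohort comparison around the new age-eligibility threshold is, in spirit, a difference-in-differences design; a stylised specification (the paper's exact equation, sample definition, and controls are not reproduced here) would be:

        Y_{it} = \alpha + \beta\,\mathrm{Eligible}_i \times \mathrm{Post}_t
               + \gamma\,\mathrm{Eligible}_i + \delta\,\mathrm{Post}_t + X_{it}'\theta + \varepsilon_{it}

    where Y_{it} is the labour market or health outcome, Eligible_i indicates belonging to the age group that remained eligible after the July 2012 reform, Post_t indicates the post-reform period, and β captures the effect of retaining access to the LTU subsidy.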