174 research outputs found

    (b2023 to 2014) The UNBELIEVABLE similarities between the ideas of some people (2006-2016) and my ideas (2002-2008) in physics (quantum mechanics, cosmology), cognitive neuroscience, philosophy of mind, and philosophy (this manuscript would require a REVOLUTION in international academy environment!)

    Get PDF

    Towards the cross-identification of radio galaxies with machine learning and the effect of radio-loud AGN on galaxy evolution

    Get PDF
    It is now well established that active galactic nuclei (AGN) play a fundamental role in galaxy evolution. On cosmic scales, the evolution over cosmic time of the star-formation rate density and the black hole accretion rate appear to be closely related, and on galactic scales, the mass of the stellar bulge is tightly correlated with the mass of the black hole. In particular, radio-loud AGN, which are characterised by powerful jets extending hundreds of kiloparsecs from the galaxy, make a significant contribution to the evolution of the most massive galaxies. There exists a correlation between the prevalence of radio-loud AGN and the stellar and black hole masses, with the stellar mass being the stronger driver of AGN activity. Furthermore, essentially all of the most massive galaxies host a radio-loud AGN. AGN feedback is the strongest candidate for driving the quenching of star-formation activity, in particular in galaxies at the highest masses, as it is capable of maintaining these galaxies as "red and dead". However, the precise mechanisms by which AGN influence galaxy evolution remain poorly understood. The anticipation of the Square Kilometre Array (SKA) has brought radio astronomy into a revolutionary new era. New-generation radio telescopes have been built to develop and test new technologies while addressing different scientific questions. These have already detected a large number of sources, including many previously unknown galaxies. One of these telescopes is the Low Frequency Array (LOFAR), which has been conducting an extensive survey of the entire northern sky called the LOFAR Two-Metre Sky Survey (LoTSS). The source density in LoTSS is higher than in any existing large-area radio survey, and with less than a third of the survey completed, LoTSS has already detected more than 4 million radio sources.
The large size of the LoTSS samples already allows the separation of the AGNs into bins of stellar mass, environment, black hole mass, star formation rate, and morphology independently, thus enabling the breaking of degeneracies between the different parameters. Radio observations, long used to identify and study AGNs, are a powerful tool when radio sources are matched to their optically identified host galaxies. This "cross-matching" process typically depends on a combination of statistical approaches and visual inspection. For compact sources, cross-matching is traditionally achieved using statistical methods. The task becomes significantly more difficult when the radio emission is extended, split into multiple radio components, or when the host galaxy is not detected in the optical. In these cases, sources need to be inspected, radio components may need to be associated into physical sources, and the resulting radio sources then need to be cross-matched with their optical and/or infrared counterparts. With recent radio continuum surveys growing massively in size, it is now extremely laborious to visually cross-match more than a small fraction of the total sources. The new high-sensitivity radio telescopes are also better at detecting complex radio structures, resulting in an increase in the number of radio sources whose radio emission is separated into different radio components. In addition, due to a higher density of objects, more compact sources can be randomly positioned close enough to resemble extended sources. Consequently, the cross-matching of radio galaxies with their optical counterparts is becoming increasingly difficult. It is crucial to minimise unnecessary visual inspection, and the present cross-matching systems therefore demand improvement. In this thesis, I use Machine Learning (ML) to investigate solutions to improve the cross-matching process. 
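The statistical matching of compact sources mentioned here is typically based on the likelihood-ratio technique. The sketch below illustrates the standard statistic in a common formulation (not necessarily the exact one used in this thesis); all functional forms and numbers are toy assumptions.

```python
# Illustrative likelihood-ratio (LR) statistic for radio-optical
# cross-matching: LR = q(m) * f(r) / n(m), where q(m) is the expected
# magnitude distribution of true counterparts, f(r) the positional-offset
# distribution, and n(m) the surface density of background objects.
# All functional forms and numbers here are toy assumptions.
import numpy as np

def f_r(r, sigma=1.0):
    """Gaussian positional-offset distribution (offsets in arcsec)."""
    return np.exp(-r**2 / (2 * sigma**2)) / (2 * np.pi * sigma**2)

def lr(r, q_m, n_m, sigma=1.0):
    """Likelihood ratio for a candidate counterpart at offset r."""
    return q_m * f_r(r, sigma) / n_m

# Two hypothetical optical candidates near one radio source:
# a close, bright one and a distant, faint one (n_m in sources/arcsec^2).
lr_close = lr(r=0.5, q_m=0.2, n_m=0.001)
lr_far = lr(r=3.0, q_m=0.05, n_m=0.005)
print(lr_close > lr_far)  # the close, bright candidate is strongly preferred
```

Matches with a likelihood ratio below some calibrated threshold are the ones that, in the approach described here, get flagged for visual inspection.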
ML is a rapidly evolving technique that has recently benefited from a vast increase in data availability, increased computing power, and significantly improved algorithms. ML is gaining popularity in the field of astronomy, and it is undoubtedly the most promising technique for managing large radio astronomy datasets, which at the same time provide the amount of data required to train ML algorithms. Part of the work in this thesis was therefore focused on creating a dataset based on visual inspections of the first data release of the LoTSS survey (LoTSS DR1) in order to train and cross-validate the ML models and apply the results to the second data release (LoTSS DR2). I trained tree-based ML models using this dataset to determine whether a statistical match is reliable. In particular, I implemented a classifier to identify the sources for which a statistical match to optical and infrared catalogues by likelihood ratio is not reliable, in order to select radio sources for visual inspection. I used the properties of the radio sources, the Gaussians that compose a source, the neighbouring radio sources, as well as the optical counterparts. The best model, a gradient boosting classifier, achieves an accuracy of 95% on a balanced dataset and 96% on real unbalanced data after optimising the classification threshold. The results were incorporated into the cross-matching of LoTSS DR2. I further present a deep learning classifier for identifying sources that require radio component association. In order to improve spatial and local information about the radio sources, I created a multi-modal model that makes use of different types of input data, with a convolutional network component of the model receiving radio images as input and a neural network component using parameters measured from the radio source and its near neighbours. 
The model helps to recover 94% of the sources with multiple components on a balanced dataset and has an accuracy of 97% on real unbalanced data. The method has already been successfully applied to identify sources that require component association in order to obtain the correct radio fluxes for AGN population studies. The ML techniques used in this work can be adapted to other radio surveys. Furthermore, ML will be crucial for dealing with the next generation of radio surveys, in particular for source detection, identification and cross-matching: only with reliable source identification is it possible to combine radio data with other data at different wavelengths and maximally exploit the scientific potential of the radio data. The use of deep learning, in particular testing ways of combining different data types, can bring further advantages, as it may help with the comprehension of data with different origins. This is particularly important for any upcoming data integration within the SKA. Finally, I used the results of cross-matching the LoTSS DR2 data to understand the interaction between radio-loud AGN, the host galaxy, and the surrounding environment. Specifically, the investigation focused on the properties of the hosts of radio-loud AGN, such as stellar mass, bulge mass, and black hole mass, as well as morphology and environmental factors. The results consistently support the significant influence of stellar mass on radio-AGN activity. It was found that radio-AGN activity has a negligible dependence on galaxy morphology (i.e. ellipticals vs. spirals) except at higher masses, where morphology correlates with stellar mass as well as with the environment. After controlling for stellar mass, the most relevant factor for radio-AGN prevalence emerged as residing in higher-density environments, in particular on a global scale. 
These outcomes provide valuable insights into the triggering and fuelling mechanisms of radio-loud AGN, aligning with cooling flow models and improving our understanding of the phenomenon.
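The match-reliability classifier described in this abstract can be sketched in miniature with scikit-learn. The example below is a toy illustration only: the features, synthetic labels, and decision rule are assumptions for demonstration, not the thesis's actual LoTSS feature set or trained model. It shows the two ingredients named above: a gradient boosting classifier and optimisation of the classification threshold.

```python
# Toy sketch: train a gradient-boosting classifier to flag sources whose
# statistical match is unreliable, then tune the decision threshold.
# Features and labels are synthetic placeholders, not real LoTSS data.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
# Hypothetical per-source features: angular size, flux, distance to the
# nearest neighbour, and number of Gaussian components.
size = rng.exponential(5.0, n)
flux = rng.lognormal(0.0, 1.0, n)
neighbour_dist = rng.exponential(30.0, n)
n_gauss = rng.integers(1, 5, n)
X = np.column_stack([size, flux, neighbour_dist, n_gauss])

# Synthetic ground truth: large, multi-component sources with close
# neighbours are less likely to have a reliable statistical match.
logit = 0.3 * size + 0.8 * (n_gauss - 1) - 0.05 * neighbour_dist - 2.0
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)  # 1 = inspect

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

# Optimise the classification threshold on held-out scores instead of
# using the default 0.5, as described for the unbalanced real data.
scores = clf.predict_proba(X_te)[:, 1]
thresholds = np.linspace(0.1, 0.9, 81)
accs = [np.mean((scores > t).astype(int) == y_te) for t in thresholds]
best_t = thresholds[int(np.argmax(accs))]
print(f"best threshold {best_t:.2f}, accuracy {max(accs):.2f}")
```

In production, sources scoring above the tuned threshold would be routed to visual inspection while the rest keep their statistical match, which is the triage the thesis describes.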

    Deep Learning, Shallow Dips: Transit light curves have never been so trendy

    Get PDF
    At the crossroads between photometry and time-domain astronomy, light curves are invaluable data objects for studying distant events and sources of light even when they cannot be spatially resolved. In particular, the field of exoplanet science has benefited tremendously from acquired stellar light curves to detect and characterise the majority of the outer worlds that we know today. Yet, their analysis is challenged by astrophysical and instrumental noise that often dilutes the signals of interest. For instance, the detection of shallow dips caused by transiting exoplanets in stellar light curves typically requires a precision of the order of 1 ppm to 100 ppm in units of stellar flux, and their study depends directly upon our capacity to correct for instrumental and stellar trends. The increasing number of light curves acquired from space and ground-based telescopes (of the order of billions) opens up the possibility for global, efficient, automated processing algorithms to replace individual, parametric, hard-coded ones. Luckily, the field of deep learning is also progressing fast, revolutionising time series problems and applications. This reinforces the incentive to develop data-driven approaches hand-in-hand with existing scientific models and expertise. With the study of exoplanetary transits in focus, I developed automated approaches to learn and correct for the time-correlated noise in and across light curves. In particular, I present (i) a deep recurrent model trained via a forecasting objective to detrend individual transit light curves (e.g. from the Spitzer space telescope); (ii) the power of a Transformer-based model leveraging whole datasets of light curves (e.g. from large transit surveys) to learn the trend via a masked objective; and (iii) a hybrid and flexible framework combining neural networks with transit physics.
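To make the detrending problem above concrete, the sketch below simulates a transit light curve with a slow instrumental drift and removes the drift with a classical polynomial fit to the out-of-transit flux. This is the kind of parametric, per-light-curve baseline that the learned recurrent and Transformer-based approaches aim to replace; all numbers are illustrative assumptions.

```python
# Minimal detrending baseline for a synthetic transit light curve:
# fit a low-order polynomial to the out-of-transit flux and divide it
# out to recover the shallow dip. Illustrative only; the thesis replaces
# this hard-coded step with learned, data-driven models.
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 500)                               # time, arbitrary units
trend = 1.0 + 0.02 * t - 0.015 * t**2                    # slow instrumental drift
transit = np.where((t > 0.45) & (t < 0.55), 0.99, 1.0)   # 1% deep transit
flux = trend * transit + rng.normal(0, 1e-4, t.size)     # 100 ppm white noise

# Mask the transit (assumed known here) and fit the trend out-of-transit.
oot = (t < 0.45) | (t > 0.55)
coeffs = np.polyfit(t[oot], flux[oot], deg=2)
detrended = flux / np.polyval(coeffs, t)

depth = 1.0 - detrended[(t > 0.46) & (t < 0.54)].mean()
print(f"recovered transit depth: {depth * 1e6:.0f} ppm")
```

The weakness of this baseline, which motivates the learned approaches, is that the trend model and the in-transit mask are chosen by hand per light curve, whereas a forecasting or masked objective lets a model infer the trend directly from the data.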

    Accessibility of Health Data Representations for Older Adults: Challenges and Opportunities for Design

    Get PDF
    Health data from off-the-shelf consumer wearable devices are often conveyed to users through visual data representations and analyses. However, these are not always accessible to people with disabilities or to older people, due to low vision, cognitive impairments or literacy issues. Because of trade-offs between aesthetic predominance and information overload, real-time user feedback may not be conveyed easily from sensor devices through visual cues like graphs and text. These difficulties may hinder critical data understanding. Additional auditory and tactile feedback can provide immediate and accessible cues from these wearable devices, but it is first necessary to understand the limitations of existing data representations. To avoid higher cognitive and visual overload, auditory and haptic cues can be designed to complement, replace or reinforce visual cues. In this paper, we outline the challenges in existing data representation and the evidence needed to enhance the accessibility of health information from personal sensing devices used to monitor health parameters such as blood pressure, sleep, activity, heart rate and more. By creating innovative and inclusive user feedback, users will be more likely to engage and interact with new devices and their own data.

    Data journeys in the sciences

    Get PDF
    This is the final version, available from Springer via the DOI in this record. This groundbreaking, open access volume analyses and compares data practices across several fields through the analysis of specific cases of data journeys. It brings together leading scholars in the philosophy, history and social studies of science to achieve two goals: tracking the travel of data across different spaces, times and domains of research practice; and documenting how such journeys affect the use of data as evidence and the knowledge being produced. The volume captures the opportunities, challenges and concerns involved in making data move from the sites in which they are originally produced to sites where they can be integrated with other data, analysed and re-used for a variety of purposes. The in-depth study of data journeys provides the necessary ground to examine disciplinary, geographical and historical differences and similarities in data management, processing and interpretation, thus identifying the key conditions of possibility for the widespread data sharing associated with Big and Open Data. The chapters are ordered in sections that broadly correspond to different stages of the journeys of data, from their generation to the legitimisation of their use for specific purposes. Additionally, the preface to the volume provides a variety of alternative “roadmaps” aimed at serving the different interests and entry points of readers, and the introduction provides a substantive overview of what data journeys can teach about the methods and epistemology of research. European Commission; Australian Research Council; Alan Turing Institute.

    Pedagogical approaches to surface phenomena in liquids: Investigation-based laboratory and modelling activities to improve students’ learning

    Get PDF
    Paperclips can float on water, mercury drops do not spread on solid surfaces, and fluids can flow against gravity in capillary tubes. Surface tension can be used to explain these phenomena, which are macroscopic manifestations of microscopic molecular interactions. At both school and university levels, surface phenomena are introduced through traditional macroscopic or microscopic approaches. However, since explanations based on microscopic models are often in conflict with common macroscopic interpretations, the traditional teaching of the basic concepts related to surface phenomena can be unclear and can prevent students from reaching an effective understanding of the topic. Since applications of surface phenomena are important in physics and other applied disciplines, it may be worthwhile to reconstruct this content based on research results in Physics Education. Research demonstrates that models constructed at an intermediate scale (i.e., the mesoscopic scale) can be used effectively in science education. In particular, the literature recognizes mesoscopic models as valuable for efficiently introducing topics such as solid friction and fluid statics. These models retain the benefits of microscopic models: in particular, they foster understanding based on the recognition of a “mechanism of functioning”, which is at the basis of the development of explicative lines of reasoning. Furthermore, these models do not require a significant amount of computing resources to run the simulations implementing them. On the basis of these observations, we asked ourselves how we could contribute to improving the teaching and learning of this topic. We hypothesised that choosing an appropriate modelling scale to introduce a given topic would appreciably enhance the teaching/learning processes at both school and university levels. 
On the basis of our research hypothesis, we decided to study how and to what extent different didactical approaches, based on macroscopic and mesoscopic descriptions respectively, can foster the teaching and learning of surface phenomena at secondary school level. We designed two teaching-learning sequences (TLSs), one based on macroscopic modelling and the other on mesoscopic modelling, each of which was trialled with a group of upper secondary school students. Each TLS was based on an inquiry-based approach and was planned to involve students in active learning practices. The main goal of the trialling was not to identify which group achieves the best learning with a given modelling approach, but to verify which aspects of each approach can be considered truly relevant in promoting learning. The planning and implementation of the two TLSs were guided by the general research question “which aspects of each approach can be considered relevant in promoting students’ scientific learning?”. The data collected during the trialling of the TLSs (student worksheets, interviews, students’ answers to questionnaires, etc.) were studied by means of qualitative and/or quantitative analysis methodologies. Summarising some results: after the instruction, students who followed the macroscopic approach appear more capable than students who followed the mesoscopic approach of describing complex phenomena involving liquid-solid interaction, such as capillarity. However, a close analysis of their answers to questionnaires shows that they acquired a rather superficial knowledge, as they simply memorised notions and information on the topic but did not reach a proper awareness of it. On the other hand, after the instruction, students who followed the mesoscopic approach seem more capable of building explanations than students who followed the macroscopic approach. 
We can infer that mesoscopic modelling activities can support the development of explanation-oriented lines of reasoning more than traditional macroscopic ones. We found that students who followed the mesoscopic approach understood the analysed topics more deeply than students who followed the macroscopic approach. This, however, mostly holds for simple physical situations, such as those involving liquid-liquid interactions; these students found it difficult to understand more complex physical situations, such as those involving liquid-solid interactions. In general, both groups show comparable levels of well-being in learning, which indicates that the inquiry-type approach proposed through the two TLSs was welcomed by most students. The mesoscopic approach promoted the willingness to extend studies and research more than the macroscopic approach did, and this led students to reinforce beliefs and acquire behaviours characteristic of a growth mindset. On the other hand, students who followed the macroscopic approach developed the ability to generalise what they had learned more than students who followed the mesoscopic approach.

    ATHENA Research Book, Volume 2

    Get PDF
    ATHENA European University is an association of nine higher education institutions with the mission of promoting excellence in research and innovation by enabling international cooperation. The acronym ATHENA stands for Association of Advanced Technologies in Higher Education. The founding partner institutions are from France, Germany, Greece, Italy, Lithuania, Portugal and Slovenia: University of Orléans, University of Siegen, Hellenic Mediterranean University, Niccolò Cusano University, Vilnius Gediminas Technical University, Polytechnic Institute of Porto and University of Maribor. In 2022, two institutions joined the alliance: the Maria Curie-Skłodowska University from Poland and the University of Vigo from Spain. Also in 2022, an institution from Austria, the Carinthia University of Applied Sciences, joined the alliance as an associate member. This research book presents a selection of the research activities of the ATHENA University partners. It contains an overview of the research activities of individual members, a selection of the most important bibliographic works of members, peer-reviewed student theses, a descriptive list of ATHENA lectures, and reports from the individual working sections of the ATHENA project. The ATHENA Research Book provides a platform that encourages collaborative and interdisciplinary research projects by advanced and early career researchers.

    Academic integrity : a call to research and action

    Get PDF
    Originally published in French: L'urgence de l'intégrité académique, Éditions EMS, Management & société, Caen, 2021 (ISBN 978-2-37687-472-0). The urgency of doing complements the urgency of knowing. Urgency here is not the inconsequential injunction of irrational immediacy; it arises in various contexts for good reasons, when there is a threat to human existence and harm to others. Today, our knowledge-based civilization is put at risk both by new models of knowledge production and by the shamelessness of knowledge delinquents, exposing great numbers of people to serious risks. The editors respond swiftly to this diagnosis by setting up a reference tool for academic integrity. Through multiple dialogues between the twenty-five chapters and five major themes, the ethical response shapes pragmatic horizons for action across a range of disciplinary competencies, from science to international diplomacy. An interdisciplinary work, indispensable for teachers, students, and university researchers and administrators.

    SIS 2017. Statistics and Data Science: new challenges, new generations

    Get PDF
    The 2017 SIS Conference aims to highlight the crucial role of Statistics in Data Science. In this new domain, where ‘meaning’ is extracted from data, the increasing amount of data produced and made available in databases has brought new challenges. These involve different fields: statistics, machine learning, information and computer science, optimisation, and pattern recognition. Together, these fields make a considerable contribution to the analysis of ‘Big data’, open data, and relational and complex data, both structured and unstructured. The aim is to collect contributions from the different domains of Statistics on high-dimensional data quality validation, sample extraction, dimension reduction, pattern selection, data modelling, hypothesis testing, and the confirmation of conclusions drawn from the data.