
    Meta-critical thinking, paradox, and probabilities

    There is as much disagreement about what “critical thinking” involves, even among those charged with teaching it, as there is consensus that academia and society need more emphasis on it. There is thus an apparent need to think critically about critical thinking itself, an exercise that might be called meta-critical thinking. It involves identifying a practice through which “critical thinking” is helpfully carried out and clarifying one or more of the concepts in terms of which “critical thinking” is usually defined. The practice is distinction making, and the concept is that of evidence. Science advances by constructing models that explain real-world processes. Once multiple candidate models have been distinguished, there remains the task of identifying which of them match the real-world process better than others. Since statistical inference is largely concerned with showing how data provide support, i.e., furnish evidence, that a model or hypothesis is more or less likely while still uncertain, we turn to it to make the concept of evidence more precise and thereby more useful. Indeed, two of the leading methodological paradigms, Bayesian and likelihoodist, can be taken to provide answers to the questions of how, and to what extent, data provide evidence for conclusions. Examining these answers in some detail is a highly promising way to make progress. We do so by analyzing three well-known statistical paradoxes, the Lottery, the Old Evidence, and Humphreys’ paradoxes, and by identifying the distinctions on which their plausible resolutions depend. These distinctions, among them those between belief and evidence and between different concepts of probability, have more general applications. They are applied here to two highly contested public policy issues: the efficacy of COVID vaccinations and the fossil fuel cause of climate change. Our aim is to provide some tools, which might be called “healthy habits of mind,” for assessing statistical arguments, in particular with respect to the nature and extent of the evidence they furnish, and to illustrate their use in well-defined ways.
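    To make the belief/evidence distinction concrete, here is a minimal sketch contrasting the likelihoodist answer (how strongly the data favor one hypothesis over another) with the Bayesian answer (how believable a hypothesis is after the data); the coin-bias hypotheses, prior, and counts are illustrative assumptions, not examples from the paper.

```python
# Illustrative sketch: evidence (likelihood ratio) vs. belief (posterior).
# The coin-bias hypotheses and prior below are assumptions chosen for
# illustration, not taken from the paper.
from scipy.stats import binom

k, n = 7, 10                 # observed: 7 heads in 10 tosses
p_h1, p_h0 = 0.7, 0.5        # H1: biased coin, H0: fair coin
prior_h1 = 0.01              # strong prior belief against H1

# Likelihoodist answer: how strongly do the data favor H1 over H0?
likelihood_ratio = binom.pmf(k, n, p_h1) / binom.pmf(k, n, p_h0)

# Bayesian answer: how believable is H1 after seeing the data?
posterior_h1 = (likelihood_ratio * prior_h1) / (
    likelihood_ratio * prior_h1 + (1 - prior_h1))

print(f"Likelihood ratio (evidence for H1): {likelihood_ratio:.2f}")
print(f"Posterior P(H1 | data) (belief):    {posterior_h1:.3f}")
# The data can favor H1 (ratio > 1) while belief in H1 stays low --
# the belief/evidence distinction on which the paradoxes turn.
```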

    Safe passage for attachment systems: Can attachment security at international schools be measured, and is it at risk?

    Relocations challenge attachment networks. Regardless of whether a person moves or is left behind, relocation produces separation and loss. When such losses are repeatedly experienced without being adequately processed, a defensive shutting down of the attachment system could result, particularly when such experiences occur during or across the developmental years. At schools with substantial turnover, this possibility could be shaping youth in ways that compromise attachment security and young people’s willingness or ability to develop and maintain deep, long-term relationships. Given the well-documented associations between attachment security, social support, and long-term physical and mental health, the hypothesis that mobility could erode attachment and relational health warrants exploration. International schools are logical settings in which to test such a hypothesis, given their frequently high turnover without confounding factors (e.g., war trauma or refugee experiences). In addition, repeated experiences of separation and loss in international school settings seem likely to create mental associations for the young people involved regarding how they and others tend to respond to such situations. This raises the possibility that people at such schools, or even the school itself, could collectively come to be represented as an attachment figure. Questions like these have received scant attention in the literature. They warrant consideration because of their potential to shape young people’s most general convictions regarding attachment, which could, in turn, have implications for young people’s ability to experience meaning in their lives.

    Fairness-aware Machine Learning in Educational Data Mining

    Fairness is an essential requirement of every educational system and is reflected in a variety of educational activities. With the extensive use of Artificial Intelligence (AI) and Machine Learning (ML) techniques in education, researchers and educators can analyze educational (big) data and propose new (technical) methods to support teachers, students, or administrators of (online) learning systems in the organization of teaching and learning. Educational data mining (EDM) is the result of applying and developing data mining (DM) and ML techniques to deal with educational problems, such as student performance prediction and student grouping. However, ML-based decisions in education can be based on protected attributes, such as race or gender, leading to discrimination against individual students or subgroups of students. Ensuring fairness in ML models therefore also contributes to equity in educational systems. On the other hand, bias can also appear in the data obtained from learning environments. Hence, bias-aware exploratory educational data analysis is important to support unbiased decision-making in EDM. In this thesis, we address the aforementioned issues and propose methods that mitigate discriminatory outcomes of ML algorithms in EDM tasks. Specifically, we make the following contributions. We perform bias-aware exploratory analysis of educational datasets using Bayesian networks to identify the relationships among attributes and thus understand bias in the datasets, focusing on features having a direct or indirect relationship with the protected attributes w.r.t. prediction outcomes. We perform a comprehensive evaluation of the sufficiency of various group fairness measures in predictive models for student performance prediction; a variety of experiments on several educational datasets with different fairness measures provide users with a broad view of unfairness from diverse aspects. We deal with the student grouping problem in collaborative learning: we introduce the fair-capacitated clustering problem, which takes into account cluster fairness and cluster cardinalities, and propose two approaches, hierarchical clustering and partitioning-based clustering, to obtain fair-capacitated clusterings. Finally, we introduce the multi-fair capacitated (MFC) students-topics grouping problem, which satisfies students' preferences while ensuring balanced group cardinalities and maximizing the diversity of members with regard to the protected attribute, and we propose three approaches: a greedy heuristic, a knapsack-based approach using the vanilla maximal 0-1 knapsack formulation, and an MFC knapsack approach based on a group fairness knapsack formulation. In short, the findings described in this thesis demonstrate the importance of fairness-aware ML in educational settings: bias-aware data analysis, fairness measures, and fairness-aware ML models are essential to ensure fairness in EDM and the educational environment.
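    As a concrete illustration of the kind of group fairness measure such an evaluation compares, the sketch below computes the statistical parity difference for toy student-performance predictions; the data and the protected attribute are invented for illustration and are not drawn from the thesis's datasets.

```python
# Minimal sketch of one common group fairness measure: the statistical
# parity difference between a protected group and the rest. The toy
# predictions below are illustrative, not thesis data.
import numpy as np

def statistical_parity_difference(y_pred, protected):
    """P(pass | protected) - P(pass | unprotected); 0 means parity."""
    y_pred = np.asarray(y_pred)
    protected = np.asarray(protected, dtype=bool)
    return y_pred[protected].mean() - y_pred[~protected].mean()

# Toy student-performance predictions (1 = predicted to pass).
y_pred    = [1, 0, 1, 1, 0, 1, 0, 0]
protected = [1, 1, 1, 1, 0, 0, 0, 0]   # e.g., a binary gender indicator

print(statistical_parity_difference(y_pred, protected))  # 0.75 - 0.25 = 0.5
```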

    Computational and experimental studies on the reaction mechanism of bio-oil components with additives for increased stability and fuel quality

    As one of the world’s largest palm oil producers, Malaysia faces a major disposal problem as vast amounts of oil palm biomass waste are produced. To overcome this problem, these biomass wastes can be liquefied into biofuel with fast pyrolysis technology. However, further upgrading of fast pyrolysis bio-oil via direct solvent addition is required to overcome its undesirable attributes. In addition, the high production cost of biofuels often hinders their commercialisation. Thus, the designed solvent-oil blend needs to achieve both fuel functionality and economic targets to be competitive with conventional diesel fuel. In this thesis, a multi-stage computer-aided molecular design (CAMD) framework was employed for bio-oil solvent design. In the design problem, molecular signature descriptors were applied to accommodate different classes of property prediction models. However, the complexity of the CAMD problem increases with the height of the signatures due to the combinatorial nature of higher-order signatures; a consistency rule was therefore developed to reduce the size of the CAMD problem. The CAMD problem was then further extended to address the economic aspects via a fuzzy multi-objective optimisation approach. Next, a rough-set-based machine learning (RSML) model was proposed to correlate feedstock characterisation and pyrolysis conditions with pyrolysis bio-oil properties by generating decision rules. The generated decision rules were analysed from a scientific standpoint to identify the underlying patterns while ensuring the rules were logical; they can be used to select the optimal feedstock composition and pyrolysis conditions to produce pyrolysis bio-oil with targeted fuel properties. Next, the results obtained from the computational approaches were verified through an experimental study. The generated pyrolysis bio-oils were blended with the identified solvents at various mixing ratios, and emulsification of the solvent-oil blend in diesel was also conducted with the help of surfactants. Lastly, potential extensions and prospective work for this study are discussed in the later part of this thesis. To conclude, this thesis presents a combination of computational and experimental approaches to upgrading the fuel properties of pyrolysis bio-oil, through which high-quality biofuel can be produced as a cleaner-burning replacement for conventional diesel fuel.
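    To illustrate the fuzzy multi-objective idea used in the economic extension of the CAMD problem, here is a minimal sketch of the standard max-min aggregation over two competing targets, fuel functionality and cost; the property bounds, candidate solvents, and values are hypothetical, not results from the thesis.

```python
# Minimal sketch of max-min fuzzy multi-objective aggregation for trading
# off fuel functionality against cost. Property names, bounds, and
# candidate values are illustrative assumptions, not thesis data.
def membership(value, worst, best):
    """Linear fuzzy membership: 0 at the worst bound, 1 at the best."""
    mu = (value - worst) / (best - worst)
    return max(0.0, min(1.0, mu))

candidates = {
    # name: (heating value MJ/kg, cost $/kg) -- hypothetical solvents
    "solvent_A": (38.0, 1.10),
    "solvent_B": (42.0, 1.60),
}

def overall_satisfaction(hv, cost):
    mu_hv   = membership(hv,   worst=30.0, best=45.0)   # maximize
    mu_cost = membership(cost, worst=2.00, best=0.80)   # minimize
    return min(mu_hv, mu_cost)   # max-min: the weakest objective dominates

best = max(candidates, key=lambda k: overall_satisfaction(*candidates[k]))
print(best)   # solvent_A: cheaper blend wins despite lower heating value
```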

    The State of the Art in Deep Learning Applications, Challenges, and Future Prospects: A Comprehensive Review of Flood Forecasting and Management

    Floods are a devastating natural calamity that may seriously harm both infrastructure and people. Accurate flood forecasting and management are essential to lessen these effects and safeguard populations. With its capacity to handle massive amounts of data and produce accurate forecasts, deep learning has emerged as a potent tool for improving flood prediction and control. This work thoroughly reviews the current state of deep learning applications in flood forecasting and management. The review covers a variety of subjects, such as the data sources utilized, the deep learning models used, and the assessment measures adopted to judge their efficacy. It critically assesses current approaches and points out their advantages and disadvantages. The article also examines challenges with data accessibility, the interpretability of deep learning models, and ethical considerations in flood prediction. It further describes potential directions for deep learning research to enhance flood prediction and control: incorporating uncertainty estimates into forecasts, integrating multiple data sources, developing hybrid models that combine deep learning with other methodologies, and improving the interpretability of deep learning models. Pursuing these goals can make deep learning models more precise and effective, resulting in better flood control plans and forecasts. Overall, this review is a useful resource for academics and professionals working on flood forecasting and management: by reviewing the current state of the art, emphasizing difficulties, and outlining potential areas for future study, it lays a solid basis. By implementing cutting-edge deep learning algorithms, communities may better prepare for and lessen the destructive effects of floods, thereby protecting people and infrastructure.
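    As a pointer to one direction the review highlights, incorporating uncertainty estimates into forecasts, the sketch below attaches a Monte Carlo dropout spread to a toy flood-stage predictor; the tiny network, features, and dropout rate are illustrative assumptions, not a method taken from any reviewed paper.

```python
# Hedged sketch: attaching an uncertainty estimate to a deep-learning
# flood forecast via Monte Carlo dropout. The network and random inputs
# are purely illustrative stand-ins.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(8, 32), nn.ReLU(),
    nn.Dropout(p=0.2),          # kept active at inference for MC dropout
    nn.Linear(32, 1),           # predicted river stage (arbitrary units)
)

x = torch.randn(1, 8)           # stand-in for rainfall/stage features
model.train()                   # keep dropout stochastic at prediction time
with torch.no_grad():
    samples = torch.stack([model(x) for _ in range(100)])

mean, std = samples.mean().item(), samples.std().item()
print(f"forecast: {mean:.2f} +/- {std:.2f}")  # spread ~ predictive uncertainty
```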

    Runway Safety Improvements Through a Data-Driven Approach for Risk Flight Prediction and Simulation

    Runway overrun is one of the most frequently occurring flight accident types threatening the safety of aviation. Sensors have improved with recent technological advancements and allow data collection during flights. The recorded data help to better identify the characteristics of runway overruns. The improved technological capabilities and the growing air traffic have increased the momentum for reducing flight risk using artificial intelligence; discussions on incorporating artificial intelligence to enhance flight safety are timely and critical. Using artificial intelligence, we may be able to develop the tools needed to better identify runway overrun risk and increase awareness of runway overruns. This work seeks to increase attitude, skill, and knowledge (ASK) of runway overrun risks by predicting the flight states near touchdown and simulating flights exposed to runway overrun precursors. To achieve this, the methodology develops a prediction model and a simulation model. During the flight training process, the prediction model is used in flight to identify potential risks and the simulation model is used post-flight to review the flight behavior. The prediction model identifies potential risks by predicting the flight parameters that best characterize the landing performance during the final approach phase. The predicted flight parameters are used to alert the pilots to any runway overrun precursors that may pose a threat; the predictions and alerts are issued when thresholds of various flight parameters are exceeded. The flight simulation model simulates the final approach trajectory with an emphasis on capturing the effect wind has on the aircraft. The focus is on wind because wind is a relatively significant factor during the final approach, when the aircraft is typically stabilized. The flight simulation is used to quickly assess the differences between flight patterns that have triggered overrun precursors and normal flights with no abnormalities; these differences are crucial in learning how to mitigate adverse flight conditions. Both models are created with neural networks. The main challenges in developing a neural network model are that each model design space is unique to its problem and cannot accommodate other problems, and that a design space can be significantly large depending on the depth of the model. Therefore, a hyperparameter optimization algorithm is investigated and used to design the data and model structures that best characterize the aircraft behavior during the final approach. A series of experiments is performed to observe how the model accuracy changes with different data pre-processing methods for the prediction model and different neural network models for the simulation model. The data pre-processing methods include indexing the data by different frequencies, windowing by different sizes, and data clustering. The neural network models include simple Recurrent Neural Networks, Gated Recurrent Units, Long Short-Term Memory, and Neural Network Autoregressive with Exogenous Input. Another series of experiments is performed to evaluate the robustness of these models to adverse wind and flare, because different wind conditions and flares represent controls that the models need to map to the predicted flight states.
The most robust models are then used to identify significant features for the prediction model and the feasible control space for the simulation model. The outcomes of the most robust models are also mapped to the required landing distance metric so that the results of the prediction and simulation are easily read. The methodology is then demonstrated with a sample flight exposed to an overrun precursor (high approach speed) to show how the models can potentially increase attitude, skill, and knowledge of runway overrun risk. The main contribution of this work is the evaluation of the accuracy and robustness of prediction and simulation models trained using Flight Operational Quality Assurance (FOQA) data. Unlike many studies that focused on optimizing only the model structures, this work optimized both the data and model structures to ensure that the data capture the dynamics of the aircraft they represent. To achieve this, this work introduced a hybrid genetic algorithm that combines the benefits of conventional and quantum-inspired genetic algorithms to converge quickly to an optimal configuration while exploring the design space. With the optimized models, this work identified the data features from the final approach with the highest contribution to predicting airspeed, vertical speed, and pitch angle near touchdown; the top contributing features are altitude, angle of attack, core rpm, and airspeed. For both the prediction and the simulation models, this study examines the impact of various data pre-processing methods on model accuracy; the results may help future studies identify the right pre-processing methods for their work. Another contribution of this work is the evaluation of how flight control and wind affect both models, achieved by mapping the model accuracy at various levels of control surface deflection, wind speed, and wind direction change. The results showed fairly consistent prediction and simulation accuracy across different levels of control surface deflection and wind conditions, indicating that neural-network-based models are effective in creating robust prediction and simulation models of aircraft during the final approach. The results also showed that data frequency has a significant impact on prediction and simulation accuracy, so it is important to train the models with sufficient data in the conditions under which they will be used. The final contribution of this work is the demonstration of how the prediction and simulation models can be used to increase awareness of runway overrun risk.
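The windowed time-series setup described above can be sketched generically as follows: slice a flight's final-approach record into fixed windows and map each window to touchdown-state targets with a recurrent model. The window size, feature set, and GRU architecture here are arbitrary illustrative choices, not the thesis's optimized configuration.

```python
# Generic sketch (not the author's model) of the windowing + recurrent
# prediction setup the thesis compares.
import torch
import torch.nn as nn

def make_windows(series, window):
    """series: (time, features) -> (num_windows, window, features)."""
    return torch.stack([series[i:i + window]
                        for i in range(series.shape[0] - window)])

class GRUPredictor(nn.Module):
    def __init__(self, n_features, n_targets, hidden=32):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_targets)

    def forward(self, x):                 # x: (batch, window, features)
        _, h = self.gru(x)                # h: (layers, batch, hidden)
        return self.head(h[-1])           # e.g., airspeed, V/S, pitch angle

# Stand-in for one flight's final-approach data: 600 samples, 4 features
# (e.g., altitude, angle of attack, core rpm, airspeed).
flight = torch.randn(600, 4)
x = make_windows(flight, window=50)       # (550, 50, 4)
pred = GRUPredictor(n_features=4, n_targets=3)(x)
print(pred.shape)                          # torch.Size([550, 3])
```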

    Mathematical Problems in Rock Mechanics and Rock Engineering

    With increasing requirements for energy, resources, and space, rock engineering projects are being constructed more often and operated in large-scale environments with complex geology. Meanwhile, rock failures and instabilities occur more frequently and severely threaten the safety and stability of rock engineering projects. It is well recognized that rock has multi-scale structures and involves multi-scale fracture processes. Moreover, rocks are commonly subjected to complex static stresses and strong dynamic disturbances simultaneously, providing a hotbed for rock failures. In addition, many multi-physics coupling processes take place in a rock mass. It is still difficult to understand these mechanisms and to characterize rock behavior under complex stress conditions, multi-physics processes, and multi-scale changes. Our understanding of rock mechanics, and the prevention and control of failure and instability in rock engineering, therefore needs to be furthered. The primary aim of this Special Issue, “Mathematical Problems in Rock Mechanics and Rock Engineering”, is to bring together original research discussing innovative efforts regarding in situ observations, laboratory experiments, and theoretical, numerical, and big-data-based methods to overcome the mathematical problems related to rock mechanics and rock engineering. It includes 12 manuscripts that illustrate these valuable efforts to address mathematical problems in rock mechanics and rock engineering.

    Synthetic Aperture Radar (SAR) Meets Deep Learning

    This reprint focuses on applications that combine synthetic aperture radar (SAR) and deep learning, aiming to further promote the development of intelligent SAR image interpretation. SAR is an important active microwave imaging sensor whose day-and-night, all-weather imaging capability gives it an important place in the remote sensing community. Since the United States launched the first SAR satellite, SAR has received much attention in the remote sensing community and has been applied in, e.g., geological exploration, topographic mapping, disaster forecasting, and traffic monitoring. It is therefore valuable and meaningful to study SAR-based remote sensing applications. In recent years, deep learning, represented by convolutional neural networks, has driven significant progress in the computer vision community, e.g., in face recognition, autonomous driving, and the Internet of Things (IoT). Deep learning enables computational models with multiple processing layers to learn data representations at multiple levels of abstraction, which can greatly improve the performance of various applications. This reprint provides a platform for researchers to address these significant challenges and present their innovative and cutting-edge research results when applying deep learning to SAR, in various manuscript types, e.g., articles, letters, reviews, and technical reports.

    Automatic Control of General Anesthesia: New Developments and Clinical Experiments

    General anesthesia is a state of pharmacologically induced, temporary and reversible coma. Its goal is to cause total loss of consciousness and to suppress the perception of pain. It is a fundamental aspect of modern medicine, as it allows invasive surgical procedures to be performed without causing anxiety and pain to the patient. In the clinical practice of total intravenous anesthesia, these effects are generally obtained by the simultaneous administration of the hypnotic drug propofol and the analgesic drug remifentanil. The dosing of these drugs is managed by the anesthesiologist on the basis of pharmacological guidelines and by monitoring the patient's clinical response. Recent developments in physiological signal processing have made it possible to obtain quantitative indicators of the patient's anesthetic state. These indicators can be used as feedback signals for automatic anesthesia control systems, whose development aims to provide a support tool for the anesthesiologist. The work presented in this thesis was carried out within the research project on the automatic control of anesthesia at the University of Brescia. The project, named ACTIVA (Automatic Control of Total IntraVenous Anesthesia), is the result of a collaboration between the Research Group on Control Systems of the University of Brescia and the Anesthesia and Intensive Care Unit 2 of the Spedali Civili di Brescia. Its objective is the theoretical development, implementation, and clinical validation of innovative control strategies for the automatic control of total intravenous anesthesia. In detail, this thesis first presents the experimental results obtained with control structures based on PID and event-based PID controllers for the administration of propofol and remifentanil. The theoretical development and clinical validation of model predictive control structures are then presented. Next, the results of a simulation study are given for an innovative control solution that allows the anesthesiologist to explicitly adjust the balance between propofol and remifentanil. Finally, the theoretical developments and the related simulation studies concerning personalized control solutions for the induction and maintenance phases of anesthesia are presented.
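    For a sense of the PID-based structure on which the experimental work builds, here is a minimal discrete PID loop that tracks a depth-of-hypnosis setpoint by adjusting a propofol infusion rate; the gains, infusion limit, and first-order "patient" response are hypothetical stand-ins, not the clinically validated ACTIVA controller.

```python
# Illustrative discrete PID loop: track a depth-of-hypnosis setpoint
# (BIS-like index, awake ~95, target ~50) by adjusting propofol infusion.
# Gains, limits, and the toy patient model are hypothetical stand-ins.
KP, KI, KD = 1.0, 0.05, 0.1
DT, U_MAX = 5.0, 12.0                    # control period [s], max infusion

setpoint, measured = 50.0, 95.0
integral = 0.0
prev_error = measured - setpoint

for step in range(120):                  # 10 minutes of induction
    error = measured - setpoint          # positive -> too light, more drug
    u_raw = KP * error + KI * integral + KD * (error - prev_error) / DT
    u = min(max(u_raw, 0.0), U_MAX)      # infusion cannot be negative
    if u == u_raw:                       # anti-windup: pause integration
        integral += error * DT           # while the actuator saturates
    prev_error = error

    # Toy first-order patient: drifts back awake, deepens with infusion.
    measured += DT * (0.02 * (95.0 - measured) - 0.1 * u)
    if step % 12 == 0:
        print(f"t={step * DT:4.0f}s  index={measured:5.1f}  u={u:4.1f}")
```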