
    Fatigue Damage Identification by a Global-Local Integrated Procedure for Truss-Like Steel Bridges

    Civil steel structures and infrastructures, such as truss railway bridges, are often subject to potential damage, mainly due to fatigue phenomena and corrosion. Therefore, damage detection algorithms should be designed and appropriately implemented to preserve their structural health. Today, the vast amount of information provided by data processing techniques and measurements coming from a monitoring system constitutes a possible tool for damage identification in terms of both detection and description. For this reason, the research activity aims to develop a methodology for a preliminary description of damage induced by fatigue phenomena in steel railway bridges. The proposed approach is developed through an integration of global and local procedures. At the global scale, vibration-based procedures will be applied to improve a forecast numerical model and, subsequently, to identify the zones most involved in fatigue problems. At the local scale, careful and refined local identification will be pursued via image processing techniques, whose evidence will be analyzed and described through nonlinear numerical models. A case study of a historical railway bridge in Spain will illustrate the methodology's performance, potential, and critical issues.

    The Right to a Glass Box: Rethinking the Use of Artificial Intelligence in Criminal Justice

    Artificial intelligence (“AI”) increasingly is used to make important decisions that affect individuals and society. As governments and corporations use AI more pervasively, one of the most troubling trends is that developers so often design it to be a “black box.” Designers create AI models too complex for people to understand or they conceal how AI functions. Policymakers and the public increasingly sound alarms about black box AI. A particularly pressing area of concern has been criminal cases, in which a person’s life, liberty, and public safety can be at stake. In the United States and globally, despite concerns that technology may deepen pre-existing racial disparities and overreliance on incarceration, black box AI has proliferated in areas such as: DNA mixture interpretation; facial recognition; recidivism risk assessments; and predictive policing. Despite constitutional criminal procedure protections, judges have often embraced claims that AI should remain undisclosed in court. Both champions and critics of AI, however, mistakenly assume that we inevitably face a trade-off: black box AI may be incomprehensible, but it performs more accurately. But that is not so. In this Article, we question the basis for this assumption, which has so powerfully affected judges, policymakers, and academics. We describe a mature body of computer science research showing how “glass box” AI—designed to be fully interpretable by people—can be more accurate than the black box alternatives. Indeed, black box AI performs predictably worse in settings like the criminal system. After all, criminal justice data is notoriously error prone, and it may reflect preexisting racial and socioeconomic disparities. Unless AI is interpretable, decisionmakers like lawyers and judges who must use it will not be able to detect those underlying errors, much less understand what the AI recommendation means. 
Debunking the black box performance myth has implications for constitutional criminal procedure rights and legislative policy. Judges and lawmakers have been reluctant to impair the perceived effectiveness of black box AI by requiring disclosures to the defense. Absent some compelling—or even credible—government interest in keeping AI black box, and given the substantial constitutional rights and public safety interests at stake, we argue that the burden rests on the government to justify any departure from the norm that all lawyers, judges, and jurors can fully understand AI. If AI is to be used at all in settings like the criminal system—and we do not suggest that it necessarily should—the presumption should be in favor of glass box AI, absent strong evidence to the contrary. We conclude by calling for national and local regulation to safeguard, in all criminal cases, the right to glass box AI.

    Using decision-tree classifier systems to extract knowledge from databases

    One difficulty in applying artificial intelligence techniques to the solution of real-world problems is that the development and maintenance of many AI systems, such as those used in diagnostics, require large amounts of human resources. At the same time, databases frequently exist which contain information about the process(es) of interest. Recently, efforts to reduce development and maintenance costs of AI systems have focused on using machine learning techniques to extract knowledge from existing databases. Research is described in the area of knowledge extraction using a class of machine learning techniques called decision-tree classifier systems. Results of this research suggest ways of performing knowledge extraction which may be applied in numerous situations. In addition, a measurement called the concept strength metric (CSM) is described, which can be used to determine how well the resulting decision tree can differentiate between the concepts it has learned. The CSM can be used to determine whether additional knowledge needs to be extracted from the database. An experiment involving real-world data is presented to illustrate the concepts described.
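The workflow described above — induce a decision tree from a database, then score how cleanly the tree separates the learned concepts — can be sketched as follows. The abstract does not give the CSM formula, so the mean leaf purity used below is an illustrative stand-in, not the metric defined in the paper; the dataset is a stock sklearn example, not the experiment's real-world data.

```python
# Knowledge extraction with a decision-tree classifier, plus a leaf-purity
# score standing in for a concept-strength-style metric (assumption: the
# actual CSM formula is not given in the abstract).
import numpy as np
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

def mean_leaf_purity(tree, X):
    """Mean majority-class fraction of the leaves the samples fall into."""
    leaves = tree.apply(X)                      # leaf index for each sample
    counts = tree.tree_.value[leaves][:, 0, :]  # per-leaf class distribution
    return float(np.mean(counts.max(axis=1) / counts.sum(axis=1)))

purity = mean_leaf_purity(tree, X)  # near 1.0 => concepts well separated
```

A low purity score would signal, in the spirit of the CSM, that more knowledge (deeper splits or more data) should be extracted from the database.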

    Human Rights Treaty Commitment and Compliance: A Machine Learning-based Causal Inference Approach

    Why do states ratify international human rights treaties? How much do human rights treaties influence state behaviors directly and indirectly? Why are some human rights treaty monitoring procedures more effective than others? What are the most predictively and causally important factors that can reduce and prevent state repression and human rights violations? This dissertation provides answers to these key causal questions in political science research, using a novel approach that combines machine learning and the structural causal model framework. The four research questions are arranged in a chronological order that reflects the causal process relating to international human rights treaties, going from (a) the causal determinants of treaty ratification to (b) the causal mechanisms of human rights treaties to (c) the causal effects of human rights treaty monitoring procedures to (d) other factors that causally influence human rights violations. Chapter 1 identifies the research traditions within which this dissertation is located, offers an overview of the methodological advances that enable this research, specifies the research questions, and previews the findings. Chapters 2, 3, 4, and 5 present in chronological order four empirical studies that answer these four research questions. Finally, Chapter 6 summarizes the substantive findings, suggests some other research questions that could be similarly investigated, and recaps the methodological approach and the contributions of the dissertation.

    A Study of the Effectiveness of Neural Networks for Elemental Concentration from Libs Spectra

    Laser-induced breakdown spectroscopy (LIBS) is an advanced data analysis technique for spectral analysis based on the direct measurement of the spectrum of optical emission from a laser-induced plasma. Assignment of different atomic and ionic lines, which are signatures of a particular element, is the basis of a qualitative identification of the species present in the plasma. The relative intensities of these atomic and ionic lines can be used for the quantitative determination of the corresponding elements present in different samples. A calibration curve based on absolute intensity is the standard statistical method for determining the concentrations of elements in different samples. Since building a proper calibration curve requires exact knowledge of the sample composition, this method has limitations for samples of unknown composition. The current research investigates the usefulness of artificial neural networks (ANNs) for determining element concentrations from spectral data. The study shows that neural networks predict elemental concentrations at least as accurately as the results obtained from traditional analysis. Moreover, by automating the analysis process, we achieved a substantial saving in the time required for data analysis.
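The regression task described above — mapping a measured emission spectrum directly to an element concentration, bypassing the calibration curve — can be sketched with a small network. Everything below is synthetic and assumed for illustration: the Gaussian emission line at 500 nm, the noise level, and the network size are not taken from the study, and no real LIBS data is used.

```python
# Sketch: a small neural network regressing element concentration from a
# spectrum. Spectra are synthetic (one Gaussian line whose intensity
# scales with concentration, plus noise) -- an assumption for illustration.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
wavelengths = np.linspace(200.0, 800.0, 300)   # nm, arbitrary grid

def spectrum(conc):
    """Synthetic spectrum: emission line at 500 nm scaled by concentration."""
    line = conc * np.exp(-((wavelengths - 500.0) ** 2) / (2.0 * 5.0 ** 2))
    return line + rng.normal(0.0, 0.01, wavelengths.size)

conc = rng.uniform(0.1, 1.0, 200)              # "true" concentrations
X = np.array([spectrum(c) for c in conc])

net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
net.fit(X[:150], conc[:150])                   # train on 150 spectra
pred = net.predict(X[150:])                    # predict the held-out 50
rmse = float(np.sqrt(np.mean((pred - conc[150:]) ** 2)))
```

The appeal over a calibration curve is that the network learns the intensity-to-concentration mapping directly from labeled spectra, with no hand-fitted curve per element.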

    Algorithms for Automatic Data Validation and Performance Assessment of MOX Gas Sensor Data Using Time Series Analysis

    The following work presents algorithms for semi-automatic validation, feature extraction, and ranking of time series measurements acquired from MOX gas sensors. Semi-automatic measurement validation is accomplished by extending established curve similarity algorithms with a slope-based signature calculation. Furthermore, a feature-based ranking metric is introduced. It allows for individual prioritization of each feature and can be used to find the best-performing sensors regarding multiple research questions. Finally, the functionality of the algorithms, as well as the developed software suite, are demonstrated with an exemplary scenario, illustrating how to find the most power-efficient MOX gas sensor in a data set collected during an extensive screening consisting of 16,320 measurements, all taken with different sensors at various temperatures and analytes.
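The slope-based signature idea can be sketched as follows: reduce each measurement curve to the sign of its local slope and compare two curves by how often their signatures agree. The exact signature and similarity definitions used in the paper are not given in the abstract, so the ternary signature and agreement fraction below are illustrative assumptions.

```python
# Sketch of slope-based signature comparison for semi-automatic validation
# of sensor time series (assumption: the paper's exact definitions differ).
import numpy as np

def slope_signature(y, threshold=1e-3):
    """Ternary signature per step: +1 rising, -1 falling, 0 flat."""
    d = np.diff(np.asarray(y, dtype=float))
    sig = np.zeros_like(d, dtype=int)
    sig[d > threshold] = 1
    sig[d < -threshold] = -1
    return sig

def signature_similarity(a, b):
    """Fraction of steps where the two slope signatures match."""
    return float(np.mean(slope_signature(a) == slope_signature(b)))

t = np.linspace(0.0, 2.0 * np.pi, 100)
ref = np.sin(t)                       # reference measurement
shifted = np.sin(t) + 0.02            # same shape, offset baseline
other = np.cos(t)                     # different response shape

sim_good = signature_similarity(ref, shifted)  # high: shapes agree
sim_bad = signature_similarity(ref, other)     # low: shapes disagree
```

Because the signature depends only on slope signs, a valid repeat measurement with a drifted baseline still scores high, which is the point of extending plain curve-similarity measures this way.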

    Activity Report: Automatic Control 2009


    A Machine Learning-Based Framework for Accurate and Early Diagnosis of Liver Diseases: A Comprehensive Study on Feature Selection, Data Imbalance, and Algorithmic Performance

    The liver is the largest organ of the human body, with more than 500 vital functions. In recent decades, a large number of liver patients have been reported with diseases such as cirrhosis, fibrosis, or other liver disorders. There is a need for effective, early, and accurate identification of individuals suffering from such diseases so that the person may recover before the disease spreads and becomes fatal. For this, applications of machine learning are playing a significant role. Despite the advancements, existing systems remain inconsistent in performance due to limited feature selection and data imbalance. In this article, we reviewed 58 articles extracted from 5 different electronic repositories published from January 2015 to 2023. After a systematic and protocol-based review, we answered 6 research questions about machine learning algorithms. Effective feature selection techniques, data imbalance management techniques, accurate machine learning algorithms, available data sets (with their URLs and characteristics), and feature importance based on usage were identified for diagnosing liver disease. These research questions were selected because, in any machine learning framework, dimensionality reduction, data imbalance management, the algorithm and its accuracy, and the data itself are all highly significant. Based on the conducted review, a framework, machine learning-based liver disease diagnosis (MaLLiDD), has been proposed and validated using three datasets. The proposed framework classified liver disorders with 99.56%, 76.56%, and 76.11% accuracy. In conclusion, this article addressed six research questions by identifying effective feature selection techniques, data imbalance management techniques, algorithms, datasets, and feature importance based on usage. It also demonstrated high accuracy with the framework for early diagnosis, marking a significant advancement.
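Two of the ingredients the review highlights — feature selection and data-imbalance management — can be sketched together in one short pipeline. The MaLLiDD framework's actual components are not specified in this abstract, so the univariate `SelectKBest` filter, the class-weighted logistic regression, and the synthetic imbalanced dataset below are all illustrative substitutes, not the published method or data.

```python
# Sketch: univariate feature selection + class-imbalance handling on a
# synthetic imbalanced dataset (assumption: stands in for real patient data).
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import train_test_split

# Imbalanced data: ~10% positive ("diseased") class, 5 informative features.
X, y = make_classification(n_samples=600, n_features=20, n_informative=5,
                           weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Dimensionality reduction: keep the 5 most discriminative features.
selector = SelectKBest(f_classif, k=5).fit(X_tr, y_tr)

# Imbalance management: reweight classes inversely to their frequency.
clf = LogisticRegression(class_weight="balanced", max_iter=1000)
clf.fit(selector.transform(X_tr), y_tr)

# Balanced accuracy is the fairer score when one class dominates.
score = balanced_accuracy_score(y_te, clf.predict(selector.transform(X_te)))
```

Balanced accuracy, rather than raw accuracy, is the relevant check here: with a 90/10 split, a model that predicts "healthy" for everyone already scores 90% plain accuracy while detecting no disease at all.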