257 research outputs found

    I, Inventor: Patent Inventorship for Artificial Intelligence Systems


    Prediction of student success: A smart data-driven approach

    Predicting students' academic performance is one of the subjects of Educational Data Mining, which aims to extract useful information and new patterns from educational data. Understanding the drivers of student success may assist educators in developing pedagogical methods, providing a tool for personalized feedback and advice. In order to improve the academic performance of students and create a decision-support solution for higher-education institutions, this dissertation proposed a methodology that uses educational data mining to compare prediction models for student success. The data belong to master's students at ISCTE, a Portuguese university, covering the 2012 to 2022 academic years. In addition, the study examined which factors are the strongest predictors of student success. The PyCaret library was used to compare the performance of several algorithms. Factors proposed to influence success include, for example, the student's gender, previous educational background, the existence of a special statute, and the parents' educational degree. The analysis revealed that the Light Gradient Boosting Machine classifier had the best performance with an accuracy of 87.37%, followed by the Gradient Boosting classifier (accuracy = 85.11%) and the Adaptive Boosting classifier (accuracy = 83.37%). Hyperparameter tuning improved the performance of all the algorithms. Feature-importance analysis revealed that the factors with the greatest impact on student success were the average grade, the master's duration, and the gap between degrees, i.e., the number of years between the last degree and the start of the master's.
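    As a rough illustration of the workflow described above (fit several boosting classifiers, rank them by held-out accuracy), the following sketch uses scikit-learn rather than PyCaret itself, and a synthetic dataset rather than the ISCTE data; everything in it is an assumption for demonstration only:

```python
# Minimal sketch of a compare-models loop in the spirit of PyCaret's
# compare_models(): train each candidate, score on a held-out split,
# and report the best. Synthetic data stands in for the student records.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

results = {}
for name, model in [
    ("GradientBoosting", GradientBoostingClassifier(random_state=0)),
    ("AdaBoost", AdaBoostClassifier(random_state=0)),
]:
    model.fit(X_tr, y_tr)
    results[name] = accuracy_score(y_te, model.predict(X_te))

best = max(results, key=results.get)
print(best, round(results[best], 4))
```

    A real study would follow this ranking step with per-model hyperparameter tuning (as the dissertation reports) before drawing conclusions.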

    Intelligent Case Assignment Method Based on the Chain of Criminal Behavior Elements

    The assignment of cases means that a court assigns cases to specific judges. Traditional case-assignment methods, based on the facts of a case, are weak at analyzing the semantic structure of the case and do not consider the judges' expertise. By analyzing judges' trial logic, we find that the order of criminal behaviors affects the final judgement. To solve these problems, we regard intelligent case assignment as a text-matching problem and propose an intelligent case-assignment method based on the chain of criminal behavior elements. This method introduces the chain of criminal behavior elements to enhance the structured semantic analysis of the case. We build a BCTA (Bert-Cnn-Transformer-Attention) model to achieve intelligent case assignment. This model integrates a judge's expertise into the judge's representation, thus recommending the most compatible judge for the case. Compared with traditional case-assignment methods, our BCTA model obtains a considerable absolute improvement, reaching 84% under P@1. In addition, compared with other classic text-matching models, our BCTA model achieves a considerable absolute improvement of 4% under P@1 and 9% under Macro F1. Experiments conducted on a real-world dataset demonstrate the superiority of our method.
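    The evaluation metrics quoted above can be made concrete. This is a minimal, self-contained sketch of P@1 (the fraction of cases whose top-ranked judge is the correct one) and macro-averaged F1; the judge names and rankings are invented for illustration, not taken from the paper's dataset:

```python
# P@1: for each case, check whether the first judge in the ranked
# recommendation list is the judge who actually handled the case.
def precision_at_1(rankings, truth):
    return sum(r[0] == t for r, t in zip(rankings, truth)) / len(truth)

# Macro F1: compute F1 independently per class, then average with
# equal weight per class (so rare judges count as much as common ones).
def macro_f1(y_true, y_pred):
    classes = sorted(set(y_true) | set(y_pred))
    f1s = []
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)

rankings = [["judgeA", "judgeB"], ["judgeB", "judgeA"], ["judgeA", "judgeC"]]
truth = ["judgeA", "judgeB", "judgeC"]
print(precision_at_1(rankings, truth))  # 2 of 3 top-1 hits -> 0.666...
```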

    On the Evolution of Knowledge Graphs: A Survey and Perspective

    Knowledge graphs (KGs) are structured representations of diversified knowledge. They are widely used in various intelligent applications. In this article, we provide a comprehensive survey on the evolution of various types of knowledge graphs (i.e., static KGs, dynamic KGs, temporal KGs, and event KGs) and techniques for knowledge extraction and reasoning. Furthermore, we introduce the practical applications of different types of KGs, including a case study in financial analysis. Finally, we propose our perspective on the future directions of knowledge engineering, including the potential of combining the power of knowledge graphs and large language models (LLMs), and the evolution of knowledge extraction, reasoning, and representation
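    As a toy illustration of the survey's starting point, a static KG is just a set of (subject, relation, object) triples, and a basic reasoning step infers edges not stated explicitly. The entities and relations below are invented for demonstration, loosely echoing the financial-analysis case study:

```python
# A tiny static knowledge graph as (subject, relation, object) triples.
triples = {
    ("ACME", "subsidiary_of", "MegaCorp"),
    ("MegaCorp", "subsidiary_of", "HoldingCo"),
    ("ACME", "sector", "semiconductors"),
}

# Simple rule-based reasoning: the transitive closure of "subsidiary_of"
# infers indirect ownership edges that are not stated explicitly.
def transitive_closure(triples, relation):
    edges = {(s, o) for s, r, o in triples if r == relation}
    closed = set(edges)
    changed = True
    while changed:
        changed = False
        for a, b in list(closed):
            for c, d in edges:
                if b == c and (a, d) not in closed:
                    closed.add((a, d))
                    changed = True
    return closed

inferred = transitive_closure(triples, "subsidiary_of")
print(("ACME", "HoldingCo") in inferred)  # True: inferred indirect edge
```

    Dynamic, temporal, and event KGs extend this picture by attaching validity times or event structure to such triples.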

    Thirty years of artificial intelligence and law: the third decade


    Bridging the Global Divide in AI Regulation: A Proposal for a Contextual, Coherent, and Commensurable Framework

    This paper examines the current landscape of AI regulations, highlighting the divergent approaches being taken, and proposes an alternative contextual, coherent, and commensurable (3C) framework. The EU, Canada, South Korea, and Brazil follow a horizontal or lateral approach that postulates the homogeneity of AI systems, seeks to identify common causes of harm, and demands uniform human interventions. In contrast, the U.K., Israel, Switzerland, Japan, and China have pursued a context-specific or modular approach, tailoring regulations to the specific use cases of AI systems. The U.S. is reevaluating its strategy, with growing support for controlling existential risks associated with AI. Addressing such fragmentation of AI regulations is crucial to ensure the interoperability of AI. The present degree of proportionality, granularity, and foreseeability of the EU AI Act is not sufficient to garner consensus. The context-specific approach holds greater promise but requires further development in terms of detail, coherency, and commensurability. To strike a balance, this paper proposes a hybrid 3C framework. To ensure contextuality, the framework categorizes AI into distinct types based on their usage and interaction with humans: autonomous, allocative, punitive, cognitive, and generative AI. To ensure coherency, each category is assigned specific regulatory objectives: safety for autonomous AI; fairness and explainability for allocative AI; accuracy and explainability for punitive AI; accuracy, robustness, and privacy for cognitive AI; and the mitigation of infringement and misuse for generative AI. To ensure commensurability, the framework promotes the adoption of international industry standards that convert principles into quantifiable metrics. In doing so, the framework is expected to foster international collaboration and standardization without imposing excessive compliance costs.
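    The category-to-objective mapping at the heart of the 3C proposal is concrete enough to write down as data. The sketch below simply restates the paper's own mapping as a lookup table; the function name and structure are illustrative assumptions, not anything the paper specifies:

```python
# The 3C framework's proposed mapping from AI category to regulatory
# objectives, restated as a simple lookup table.
objectives = {
    "autonomous": ["safety"],
    "allocative": ["fairness", "explainability"],
    "punitive": ["accuracy", "explainability"],
    "cognitive": ["accuracy", "robustness", "privacy"],
    "generative": ["mitigation of infringement and misuse"],
}

def objectives_for(category):
    """Return the regulatory objectives for a category, or [] if unknown."""
    return objectives.get(category, [])

print(objectives_for("allocative"))  # ['fairness', 'explainability']
```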

    The digital resurrection of Margaret Thatcher: Creative, technological and legal dilemmas in the use of deepfakes in screen drama

    This article develops from the findings of an interdisciplinary research project that has linked film practice research with computer science and law, in an exercise that seeks to digitally resurrect Margaret Thatcher to play herself in a contemporary film drama. The article highlights the imminent spread of machine learning techniques for digital face replacement across fiction content production, with central research questions concerning the ethical and legal issues that arise from the appropriation of the facial image of a deceased person for use in drama

    Early detection of lung cancer through nodule characterization by Deep Learning

    Lung cancer is one of the most frequent cancers in the world, with 1.8 million new cases reported in 2012, representing 12.9% of all new cancers worldwide, and accounting for 1.4 million deaths up to 2008. Early detection and classification of malignant and benign nodules using computed tomography (CT) scans may help radiologists with nodule staging assessment and individual therapeutic planning. Moreover, if potentially malignant nodules are detected early on CT scans, treatments may be less aggressive, perhaps not even requiring chemotherapy or radiation therapy after surgery. This Bachelor's thesis focuses on the exploration of existing methods and datasets for the automatic classification of lung nodules based on CT images. To this aim, we start by assembling, studying, and analyzing state-of-the-art studies in lung nodule detection, characterization, and classification. Furthermore, we report and contextualize state-of-the-art deep learning architectures suited for lung nodule classification. From the public datasets researched, we select a widely used and large dataset of lung nodule CT scans and use it to fine-tune a state-of-the-art convolutional neural network. We compare this strategy with training a new, shallower neural network from scratch. Initial evaluation suggests that: (1) transfer learning performs poorly due to the domain gap between natural images and CT scans; (2) learning from scratch is unable to learn from a small number of samples. However, this first evaluation paves the road towards the design of better classification methods fed by better-annotated, publicly available datasets. Overall, this project is a necessary first stage in a hot research topic.
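    The fine-tuning strategy compared in the thesis boils down to reusing a frozen feature extractor and training only a new classification head. The sketch below illustrates that idea in miniature; nothing here is the thesis code: the "pretrained" extractor is simulated by a fixed random projection, and the data are synthetic stand-ins for CT patches:

```python
# Transfer-learning sketch: frozen feature extractor + trainable head.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for image patches: 400 samples, 64 raw "pixels".
X = rng.normal(size=(400, 64))
y = (X[:, :8].sum(axis=1) > 0).astype(int)  # label depends on a few inputs

# "Pretrained" extractor: a frozen nonlinear projection whose weights are
# NOT updated during fine-tuning, mirroring frozen convolutional layers.
W_frozen = rng.normal(size=(64, 16)) / 8.0
features = np.tanh(X @ W_frozen)

X_tr, X_te, y_tr, y_te = train_test_split(features, y, random_state=0)

# Fine-tuning step: only the new classification head is trained.
head = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(round(head.score(X_te, y_te), 3))
```

    The thesis's negative finding corresponds to the case where the frozen extractor was trained on a very different domain (natural images), so the features it produces carry little signal for CT nodules.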

    A Comparative Analysis of Best Practices in a Facial Recognition Policy for Law Enforcement Agencies

    Facial Recognition Technology (FRT) and the plethora of applications that have adopted this technology have exploded in the last decade. Most people have probably heard about local law enforcement agencies utilizing FRT to catch criminals, locate missing persons, and provide large-scale event security. Law enforcement's use of FRT has been criticized since its implementation. Critics have lambasted FRT, citing inaccuracy of the technology; potential race, age, and gender bias; the collection and retention of images; and a lack of governing standards as to when the technology can be applied. In addition, the lack of transparency has been met with fierce pushback, as entities such as the ACLU have filed multiple lawsuits against federal agencies in an attempt to garner additional information on the use and practices of FRT within these agencies. This research paper discusses multiple aspects of Facial Recognition Technology. A brief history of the technology is given, along with an overview of how FRT works and its implementation in law enforcement agencies as well as in private-sector settings. The paper also reviews new and existing laws at the state, local, and federal levels. Issues over the misuse of FRT, concerns of civil rights activists, and limitations of FRT are conveyed. Police department policies governing the use of FRT are also explored in detail.