
    Multidisciplinary perspectives on Artificial Intelligence and the law

    This open access book presents an interdisciplinary, multi-authored, edited collection of chapters on Artificial Intelligence (‘AI’) and the Law. AI technology has come to play a central role in the modern data economy. Through a combination of increased computing power, the growing availability of data and the advancement of algorithms, AI has now become an umbrella term for some of the most transformational technological breakthroughs of this age. The importance of AI stems from both the opportunities that it offers and the challenges that it entails. While AI applications hold the promise of economic growth and efficiency gains, they also create significant risks and uncertainty. The potential and perils of AI have thus come to dominate modern discussions of technology and ethics – and although AI was initially allowed to largely develop without guidelines or rules, few would deny that the law is set to play a fundamental role in shaping the future of AI. As the debate over AI is far from over, the need for rigorous analysis has never been greater. This book thus brings together contributors from different fields and backgrounds to explore how the law might provide answers to some of the most pressing questions raised by AI. An outcome of the Católica Research Centre for the Future of Law and its interdisciplinary working group on Law and Artificial Intelligence, it includes contributions by leading scholars in the fields of technology, ethics and the law.

    Understanding comparative questions and retrieving argumentative answers

    Making decisions is an integral part of everyday life, yet it can be a difficult and complex process. While people's wants and needs are unlimited, resources are often scarce, making it necessary to research the possible alternatives and weigh the pros and cons before making a decision. Nowadays, the Internet has become the main source of information when it comes to comparing alternatives, making search engines the primary means for collecting new information. However, relying only on term matching is not sufficient to adequately address requests for comparisons. Therefore, search systems should go beyond this approach to effectively address comparative information needs. In this dissertation, I explore from different perspectives how search systems can respond to comparative questions. First, I examine approaches to identifying comparative questions and study their underlying information needs. Second, I investigate a methodology for identifying important constituents of comparative questions, such as the to-be-compared options, and for detecting the stance of answers towards these comparison options. Then, I address ambiguous comparative search queries by studying an interactive clarification search interface. Finally, to address answering comparative questions, I investigate retrieval approaches that consider not only the topical relevance of potential answers but also account for the presence of arguments towards the comparison options mentioned in the questions. By addressing these facets, I aim to provide a comprehensive understanding of how to effectively satisfy the information needs of searchers seeking to compare different alternatives.
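    To picture the last point, such a retrieval approach can be seen as mixing a topical-relevance signal with an argumentativeness signal when ranking candidate answers. The sketch below is only an illustration of that idea: the linear weighting, the field names, and the toy scores are assumptions, not the dissertation's actual models.

    from dataclasses import dataclass

    @dataclass
    class Candidate:
        text: str
        relevance: float       # e.g. a BM25 or neural ranking score (assumed to be given)
        argument_score: float  # e.g. share of sentences arguing for/against a compared option

    def comparative_rank(candidates, alpha=0.7):
        """Rank answers by a weighted mix of topical relevance and
        argumentativeness towards the compared options (illustrative only)."""
        return sorted(
            candidates,
            key=lambda c: alpha * c.relevance + (1 - alpha) * c.argument_score,
            reverse=True,
        )

    docs = [
        Candidate("Python is easier to learn than C++ because ...", 0.62, 0.90),
        Candidate("C++ was first released in 1985.", 0.71, 0.10),
    ]
    for d in comparative_rank(docs):
        print(d.text)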

    Honest Score Client Selection Scheme: Preventing Federated Learning Label Flipping Attacks in Non-IID Scenarios

    Federated Learning (FL) is a promising technology that enables multiple actors to build a joint model without sharing their raw data. This distributed nature makes FL vulnerable to various poisoning attacks, including model poisoning attacks and data poisoning attacks. Today, many Byzantine-resilient FL methods have been introduced to mitigate model poisoning attacks, while their effectiveness in defending against data poisoning attacks remains unclear. In this paper, we focus on the most representative data poisoning attack, the "label flipping attack", and evaluate its effectiveness against existing FL methods. The results show that existing FL methods perform similarly in independent and identically distributed (IID) settings but fail to maintain model robustness in non-IID settings. To mitigate the weaknesses of existing FL methods in non-IID scenarios, we introduce the Honest Score Client Selection (HSCS) scheme and the corresponding HSCSFL framework. In HSCSFL, the server collects a clean dataset for evaluation. In each iteration, the server collects the gradients from the clients and then performs HSCS to select aggregation candidates. The server first evaluates the performance of the global model on each class and generates a corresponding risk vector indicating which classes could potentially be attacked. Similarly, the server evaluates each client's model and records its per-class performance as an accuracy vector. The dot product of each client's accuracy vector and the global risk vector gives the client's honest score; only the clients with the top p% of honest scores are included in the following aggregation. Finally, the server aggregates the gradients and uses the outcome to update the global model. Comprehensive experimental results show that HSCSFL effectively enhances FL robustness and defends against the "label flipping attack".
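    The selection step described in the abstract can be sketched in a few lines. This is a reconstruction from the abstract alone, not the authors' code: the choice of risk as one minus per-class accuracy, the function names, and the use of NumPy are assumptions.

    import numpy as np

    def honest_score_selection(global_class_acc, client_class_acc, p=0.5):
        """Sketch of the HSCS selection step.

        global_class_acc: per-class accuracy of the current global model on the
            server's clean dataset, shape (num_classes,).
        client_class_acc: per-class accuracy of each client's model on the same
            clean dataset, shape (num_clients, num_classes).
        p: fraction of clients kept for aggregation.
        """
        # Classes on which the global model does poorly are treated as potentially
        # attacked; taking risk = 1 - accuracy is an assumed, simple choice.
        risk_vector = 1.0 - np.asarray(global_class_acc)

        # Honest score: dot product of each client's accuracy vector with the global
        # risk vector (clients that still do well on risky classes score higher).
        honest_scores = np.asarray(client_class_acc) @ risk_vector

        # Keep only the top-p fraction of clients for the following aggregation.
        k = max(1, int(np.ceil(p * len(honest_scores))))
        selected = np.argsort(honest_scores)[::-1][:k]
        return selected, honest_scores

    # Toy round: 4 clients, 3 classes; client 3 has collapsed on class 2.
    g_acc = [0.90, 0.80, 0.40]
    c_acc = [[0.90, 0.80, 0.50],
             [0.85, 0.80, 0.45],
             [0.90, 0.75, 0.50],
             [0.90, 0.80, 0.05]]
    print(honest_score_selection(g_acc, c_acc, p=0.5))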

    The emerging landscape of Social Media Data Collection: anticipating trends and addressing future challenges

    Social media has become a powerful tool for creating and sharing user-generated content across the internet. The widespread use of social media has generated an enormous amount of information, presenting a great opportunity for digital marketing. Through social media, companies can reach millions of potential consumers and capture valuable consumer data that can be used to optimize marketing strategies and actions. The potential benefits and challenges of using social media for digital marketing are also attracting growing interest in the academic community. While social media offers companies the opportunity to reach a large audience and to collect valuable consumer data, the volume of information generated can lead to unfocused marketing and to negative consequences such as social overload. To make the most of social media marketing, companies need to collect reliable data for specific purposes such as selling products, raising brand awareness, or fostering engagement, and to predict consumers' future behavior. The availability of quality data can help build brand loyalty, but consumers' willingness to share information depends on their level of trust in the company or brand requesting it. This thesis therefore aims to address this research gap through a bibliometric analysis of the field, a mixed-methods analysis of the profiles and motivations of users who provide their data on social media, and a comparison of supervised and unsupervised algorithms for clustering consumers. The research draws on a database of more than 5.5 million data collections gathered over a 10-year period. Technological advances now enable sophisticated analysis and reliable predictions based on the captured data, which is especially useful for digital marketing. Several studies have explored digital marketing through social media, some focusing on a specific field while others adopt a multidisciplinary approach. However, given the rapidly evolving nature of the discipline, a bibliometric approach is required to capture and synthesize the most up-to-date information and add further value to studies in the field. The contributions of this thesis are therefore as follows. First, it provides a comprehensive review of the literature on methods for collecting consumers' personal data from social media for digital marketing, and it establishes the most relevant trends through the analysis of significant articles, keywords, authors, institutions, and countries. Second, the thesis identifies which user profiles lie the most and why. Specifically, it shows that some user profiles are more inclined to make mistakes, while others intentionally provide false information, and that the main motivations behind providing false information include amusement and a lack of trust in data privacy and security measures. Finally, the thesis aims to fill the gap in the literature on whether supervised or unsupervised algorithms can better cluster consumers who provide their data on social media in order to predict their future behavior.

    Fighting the dark side: a scoping review of dark pattern mitigation

    As technology plays an ever-greater role in people's everyday lives, the last decade has seen rising concern about designers using their knowledge of human behaviour to design interfaces that trick users into doing things against their best interest. These design patterns are known as dark patterns, and the human-computer interaction and design communities have condemned their use. Informed by research, lawmakers have also started to introduce regulations against them. This thesis aimed first to describe the current state of dark pattern research and then to answer the research question of how the usage of dark patterns could be mitigated. To answer the research question, a literature review in the form of a scoping review was conducted, in which 28 articles that considered dark pattern mitigation were found to be relevant to the research question. Thematic analysis was used as a qualitative method to identify common themes in the articles. As a result, dark pattern mitigation tactics could be divided into seven themes: company actions and economic value, regulating dark patterns, raising public awareness, tools for users, designing for the well-being of users, educating designers and developers, and enhancing dark pattern research. Mitigation tactics or propositions were then introduced in more detail under these themes. The results of the scoping review demonstrate that there is no single weapon to be used in the fight against dark patterns; on the contrary, different techniques from different fields need to be used together to effectively identify and mitigate dark patterns.

    Demand Response in Smart Grids

    The Special Issue "Demand Response in Smart Grids" includes 11 papers on a variety of topics. The success of this Special Issue demonstrates the relevance of demand response programs and events to the operation of power and energy systems, both at the distribution level and at the wider power system level. This reprint addresses the design, implementation, and operation of demand response programs, with a focus on methods and techniques for achieving optimized operation as well as on the electricity consumer.

    Deep Neural Networks and Tabular Data: Inference, Generation, and Explainability

    Over the last decade, deep neural networks have enabled remarkable technological advancements, potentially transforming a wide range of aspects of our lives in the future. It is becoming increasingly common for deep-learning models to be used in a variety of situations in modern life, ranging from search and recommendations to financial and healthcare solutions, and the number of applications utilizing deep neural networks is still on the rise. However, much recent research in deep learning has focused primarily on neural networks and the domains in which they excel, such as computer vision, audio processing, and natural language processing. Data in these areas tend to be homogeneous, whereas heterogeneous tabular datasets have received relatively scant attention despite being extremely prevalent. In fact, more than half of the datasets on the Google dataset platform are structured and can be represented in tabular form. The first aim of this study is to provide a thoughtful and comprehensive analysis of the application of deep neural networks to modeling and generating tabular data. In addition, an open-source performance benchmark on tabular data is presented, in which we thoroughly compare over twenty machine and deep learning models on heterogeneous tabular datasets. The second contribution relates to synthetic tabular data generation. Inspired by their success in other homogeneous data modalities, deep generative models such as variational autoencoders and generative adversarial networks are also commonly applied to tabular data generation. However, the use of Transformer-based large language models (which are also generative) for tabular data generation has received scant research attention. Our contribution to this literature is the development of a novel method for generating tabular data based on this family of autoregressive generative models that, on multiple challenging benchmarks, outperformed the current state-of-the-art methods for tabular data generation. Another crucial aspect of a deep-learning data system is that it needs to be reliable and trustworthy to gain broader acceptance in practice, especially in life-critical fields. One possible way to bring trust into a data-driven system is to use explainable machine-learning methods. However, current explanation methods often fail to provide robust explanations due to their high sensitivity to hyperparameter selection or even to changes of the random seed. Furthermore, most of these methods are based on feature-wise importance, ignoring the crucial relationships between variables in a sample. The third aim of this work is to address both of these issues by offering more robust and stable explanations, as well as by taking the relationships between variables into account using a graph structure. In summary, this thesis makes contributions that touch many areas related to deep neural networks and heterogeneous tabular data, as well as the use of explainable machine-learning methods.
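    The abstract does not spell out how table rows are fed to an autoregressive language model, but a common ingredient of such approaches is to serialize each row as text and later parse generated text back into a row. The sketch below shows only that serialization step; the sentence template and the parsing logic are assumptions for illustration, and the language model itself is omitted.

    import pandas as pd

    def row_to_text(row):
        """Serialize one table row into a sentence an autoregressive language
        model could be trained on (the template is an assumption)."""
        return ", ".join(f"{col} is {val}" for col, val in row.items())

    def text_to_row(text, columns):
        """Parse a generated sentence back into a row dictionary."""
        row = {}
        for part in text.split(", "):
            col, _, val = part.partition(" is ")
            if col in columns:
                row[col] = val
        return row

    df = pd.DataFrame({"age": [34, 51], "income": [42000, 78000], "city": ["Oslo", "Graz"]})
    prompts = [row_to_text(r) for _, r in df.iterrows()]
    print(prompts[0])                          # age is 34, income is 42000, city is Oslo
    print(text_to_row(prompts[0], set(df.columns)))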

    Interrogating autism from a multidimensional perspective: an integrative framework.

    Autism Spectrum Disorder (ASD) is a condition characterized by social and behavioral impairments, affecting approximately 1 in every 44 children in the United States. Common symptoms include difficulties in communication, interpersonal interactions, and behavior. While symptoms can manifest as early as infancy, obtaining an accurate diagnosis may require multiple visits to a pediatric specialist due to the subjective nature of the assessment, which may yield varying scores from different specialists. Despite growing evidence of the role of differences in brain development and/or environmental and/or genetic factors in the development of autism, the exact pathology of this disorder has yet to be fully elucidated. At present, the diagnosis of ASD typically involves a set of gold-standard diagnostic evaluations, such as the Autism Diagnostic Observation Schedule (ADOS), the Autism Diagnostic Interview-Revised (ADI-R), and the more cost-effective Social Responsiveness Scale (SRS). Administering these diagnostic tests, which involve assessing communication and behavioral patterns along with obtaining a clinical history, requires the expertise of a team of qualified clinicians. The process is time-consuming, effortful, and involves a degree of subjectivity due to the reliance on clinical judgment. Aside from conventional observational assessments, recent developments in neuroimaging and machine learning offer a fast and objective alternative for diagnosing ASD using brain imaging. This comprehensive work explores the use of different imaging modalities, namely structural MRI (sMRI) and resting-state functional MRI (rs-fMRI), to investigate their potential for autism diagnosis. The proposed study aims to offer a new approach and perspective for comprehending ASD as a multidimensional problem, within a behavioral space defined by one of the available ASD diagnostic tools. This dissertation presents a thorough investigation of feature engineering tools for extracting distinctive insights from various brain imaging modalities, including the application of novel feature representations, and explores in detail the use of a machine learning framework to aid in the precise classification of individuals with autism. This extensive research, which draws upon large publicly available datasets, sheds light on the influence of decisions made throughout the pipeline on diagnostic accuracy and identifies brain regions that may be affected and contribute to an autism diagnosis. The attainment of state-of-the-art cross-validated and hold-out-set accuracy validates the advantages of feature representation and engineering in extracting valuable information, as well as the potential benefits of employing neuroimaging for autism diagnosis. Furthermore, a diagnostic report is proposed to assist physicians in mapping diagnoses to the underlying neuroimaging markers. This approach could enable an earlier, automated, and more objective personalized diagnosis.
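    For readers unfamiliar with this kind of pipeline, the sketch below shows one generic way to turn resting-state ROI time series into functional-connectivity features and cross-validate a classifier on them. It uses scikit-learn and synthetic data; it is not the dissertation's feature engineering or model, and the choice of upper-triangle correlation features and logistic regression is an assumption.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    n_subjects, n_rois, n_timepoints = 60, 20, 120

    def connectivity_features(timeseries):
        """Upper triangle of the ROI-by-ROI correlation matrix as a feature vector."""
        corr = np.corrcoef(timeseries.T)            # shape (n_rois, n_rois)
        iu = np.triu_indices_from(corr, k=1)
        return corr[iu]

    # Synthetic stand-ins for rs-fMRI ROI time series and diagnostic labels.
    X = np.stack([connectivity_features(rng.standard_normal((n_timepoints, n_rois)))
                  for _ in range(n_subjects)])
    y = rng.integers(0, 2, size=n_subjects)         # 0 = control, 1 = ASD (toy labels)

    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    print(cross_val_score(clf, X, y, cv=5).mean())  # chance-level on random data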

    Beyond Quantity: Research with Subsymbolic AI

    How do artificial neural networks and other forms of artificial intelligence interfere with methods and practices in the sciences? Which interdisciplinary epistemological challenges arise when we think about the use of AI beyond its dependency on big data? Not only the natural sciences, but also the social sciences and the humanities seem to be increasingly affected by current approaches to subsymbolic AI, which master problems of quality (fuzziness, uncertainty) in a hitherto unknown way. But what are the conditions, implications, and effects of these (potential) epistemic transformations, and how must research on AI be configured to address them adequately?

    A Little More Logical: Reasoning Well About Science, Ethics, Religion, and the Rest of Life

    "A Little More Logical" is the perfect guide for anyone looking to improve their critical thinking and logical reasoning skills. With chapters on everything from logic basics to fallacies of weak induction to moral reasoning, this book covers all the essential concepts you need to become a more logical thinker. You'll learn about influential figures in the field of logic, such as Rudolph Carnap, Betrrand Russell, and Ada Lovelace, and how to apply your newfound knowledge to real-world situations. Whether you're looking to engage in debates with others, make better decisions in your personal and professional life, or simply want to improve your overall critical thinking skills, "A Little More Logical" has you covered. So why wait? Start learning and become a little more logical today! "A Little More Logical" differs from typical logical textbooks in a number of ways. One key difference is its emphasis on engaging and relatable examples and case studies. Rather than simply presenting dry definitions and concepts, the book uses fables, stories, and real-world situations to illustrate key ideas and make them more relatable for readers. Another unique aspect of "A Little More Logical" is its inclusion of "Minds that Mattered" sections, which highlight the contributions and insights of influential figures in the field of logic and critical thinking. These sections provide readers with a deeper understanding of the history and development of logical principles and offer valuable context for the concepts being discussed. Additionally, "A Little More Logical" covers a wide range of topics beyond the basics of logic and argument evaluation. Chapters on moral reasoning, probability and inductive logic, scientific reasoning, conspiracy theories, statistical reasoning, and the history of formal logic offer a more comprehensive and well-rounded understanding of logic and critical thinking. Overall, "A Little More Logical" stands out as a dynamic and engaging resource for anyone looking to improve their logical reasoning abilities. Its relatable examples, historical context, and broad coverage make it a valuable resource for anyone interested in mastering the principles of logic. This is a free, Creative-Commons-licensed book