
    Is there a future for AI without representation?


    Transparency and Fairness in Machine Learning Applications

    Businesses and consumers increasingly use artificial intelligence (“AI”)—and specifically machine learning (“ML”) applications—in their daily work. ML is often used as a tool to help people perform their jobs more efficiently, but increasingly it is becoming a technology that may eventually replace humans in performing certain functions. An AI recently beat humans in a reading comprehension test, and there is an ongoing race to replace human drivers with self-driving cars and trucks. Tomorrow there is the potential for much more, as AI is even learning to build its own AI. As the use of AI technologies continues to expand, and especially as machines begin to act more autonomously with less human intervention, important questions arise about how we can best integrate this new technology into our society, particularly within our legal and compliance frameworks. The questions raised are different from those that we have already addressed with other technologies because AI is different. Most previous technologies functioned as a tool, operated by a person, and for legal purposes we could usually hold that person responsible for actions that resulted from using that tool. For example, an employee who used a computer to send a discriminatory or defamatory email could not have done so without the computer, but the employee would still be held responsible for creating the email. While AI can function as merely a tool, it can also be designed to act after making its own decisions, and in the future it will act even more autonomously. As AI becomes more autonomous, it will be more difficult to determine who—or what—is making decisions and taking actions, and to determine the basis and responsibility for those actions. These are the challenges that must be overcome to ensure AI’s integration for legal and compliance purposes.

    Optimizing U-Net Architecture with Feed-Forward Neural Networks for Precise Cobb Angle Prediction in Scoliosis Diagnosis

    In the burgeoning field of Artificial Intelligence (AI) and its notable subsets, such as Deep Learning (DL), there is evidence of its transformative impact in assisting clinicians, particularly in diagnosing scoliosis. AI is unrivaled for its speed and precision in analyzing medical images, including X-rays and computed tomography (CT) scans. However, the path is not without obstacles. Biases, unanticipated outcomes, and false positive and negative predictions present significant challenges. Our research employed three complex experimental sets, each focusing on adapting the U-Net architecture. Through a nuanced combination of feed-forward neural network (FFNN) configurations and hyperparameters, we endeavored to determine the most effective nonlinear regression model configuration for predicting the Cobb angle. This was done with the dual purpose of reducing AI training time without sacrificing predictive accuracy. Utilizing the capabilities of the PyTorch framework, we meticulously crafted and refined the deep learning models for each of the three experiments, using an FFNN dropout rate of p = 0.45. The Root Mean Square Error (RMSE), the number of epochs, and the number of nodes spanning the three hidden layers of each FFNN were used as the key performance metrics, while a base learning rate of 0.001 was maintained. Notably, during the optimization phase, one of the experiments incorporated a learning rate scheduler to protect against potential pitfalls such as local minima and saddle points. A judiciously incorporated early stopping technique, triggered within a patience range of 5-10 epochs, ensured model stability as the Mean Squared Error (MSE) loss plateaued at approximately 1. Consequently, the model converged between 50 and 82 epochs. We hypothesize that our proposed architecture holds promise for future refinements, conditioned on assiduous experimentation with an array of medical deep learning paradigms.
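
    The abstract names PyTorch, a three-hidden-layer FFNN regression head, a dropout rate of p = 0.45, a base learning rate of 0.001, a plateau-aware learning rate scheduler, and early stopping with a patience of 5-10 epochs. The sketch below is a minimal, hypothetical reconstruction of such a training setup under those settings; the layer widths, the input feature size, and the choice of Adam and ReduceLROnPlateau are assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn

class CobbAngleRegressor(nn.Module):
    """Hypothetical FFNN regression head: three hidden layers with dropout,
    mapping U-Net-derived features to a single Cobb angle estimate."""
    def __init__(self, in_features, hidden_sizes=(256, 128, 64), p_dropout=0.45):
        super().__init__()
        layers, prev = [], in_features
        for width in hidden_sizes:
            layers += [nn.Linear(prev, width), nn.ReLU(), nn.Dropout(p_dropout)]
            prev = width
        layers.append(nn.Linear(prev, 1))  # scalar nonlinear regression output
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

def train_with_early_stopping(model, train_loader, val_loader,
                              base_lr=1e-3, max_epochs=100, patience=7):
    """Train with MSE loss, a plateau-aware LR scheduler, and early stopping."""
    optimizer = torch.optim.Adam(model.parameters(), lr=base_lr)
    scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, patience=3)
    criterion = nn.MSELoss()
    best_val, epochs_since_best = float("inf"), 0
    for epoch in range(max_epochs):
        model.train()
        for features, angles in train_loader:
            optimizer.zero_grad()
            loss = criterion(model(features).squeeze(-1), angles)
            loss.backward()
            optimizer.step()
        model.eval()
        with torch.no_grad():
            val_loss = sum(criterion(model(f).squeeze(-1), a).item()
                           for f, a in val_loader) / len(val_loader)
        scheduler.step(val_loss)  # reduce LR when the validation MSE plateaus
        if val_loss < best_val:
            best_val, epochs_since_best = val_loss, 0
        else:
            epochs_since_best += 1
            if epochs_since_best >= patience:
                break  # early stopping: no improvement for `patience` epochs
    return best_val
```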

    Integrating Cultural Knowledge into Artificially Intelligent Systems: Human Experiments and Computational Implementations

    With the advancement of Artificial Intelligence, it seems as if every aspect of our lives is impacted by AI in one way or another. As AI is used for everything from driving vehicles to criminal justice, it becomes crucial that it overcome any biases that might hinder its fair application. We are constantly trying to make AI more like humans. But most AI systems so far fail to address one of the main aspects of humanity: our culture and the differences between cultures. We cannot truly consider AI to have understood human reasoning without understanding culture. So it is important for cultural information to be embedded into AI systems in some way, and for AI systems to understand the differences across cultures. The main way I have chosen to do this is by using two cultural markers: motifs and rituals. This is because they are both so inherently part of any culture. Motifs are things that are repeated often and are grounded in well-known stories, and tend to be very specific to individual cultures. Rituals are part of every culture in some way, and while some are constant across all cultures, others are very specific to individual ones. This makes them ideal to compare and contrast. The first two parts of this dissertation describe two cognitive psychology studies I conducted. The first examines how people understand motifs: is it true that in-culture people identify motifs better than out-culture people? My study shows this to indeed be the case. The second study tests whether motifs are recognizable in texts, regardless of whether people understand their meaning. Our results confirm our hypothesis that motifs are recognizable. The third part of my work discusses the survey and data collection effort around rituals. I collected data about rituals from people from various national groups and observed the differences in their responses. The main outcomes of this effort were twofold: first, showing that cultural differences across groups are quantifiable, prevalent, and observable with proper effort; and second, collecting and curating a substantial, culturally sensitive dataset that can be used in a wide variety of ways across various AI systems. The fourth part of the dissertation focuses on a system I built, called the motif association miner, which provides information about motifs present in input text, such as associations, sources, and connotations. This information will be highly useful, as it enables future systems to use my output as input and gain a better understanding of motifs; it also demonstrates an approach for bringing the culture-specific meanings of motifs into wider use. As the final contribution, this thesis details my efforts to use the curated ritual data to improve an existing Question Answering system, and shows that this method helps systems perform better in situations that vary by culture. This data and approach, which will be made publicly available, will enable others in the field to take advantage of the information contained within to try to combat some of the bias in their systems.
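
    The motif association miner is described only in terms of its inputs and outputs: given a text, it returns information about the motifs it finds, such as associations, sources, and connotations. Purely to illustrate that interface, here is a minimal sketch built around a hypothetical motif dictionary; the entry fields and the example motif are invented for illustration and are not taken from the dissertation or its dataset.

```python
# Hypothetical motif entries; the real miner's schema is not described in the
# abstract, so these field names and values are illustrative assumptions.
MOTIF_DATABASE = {
    "pound of flesh": {
        "source": "The Merchant of Venice",
        "associations": ["debt", "ruthless bargain"],
        "connotation": "negative",
        "culture": "Western / English literary",
    },
}

def mine_motif_associations(text: str) -> dict:
    """Return the database entry for every known motif that appears in the text."""
    text_lower = text.lower()
    return {motif: info for motif, info in MOTIF_DATABASE.items() if motif in text_lower}

print(mine_motif_associations("He demanded his pound of flesh from the startup."))
```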

    Expressivity of Spiking Neural Networks

    This article studies the expressive power of spiking neural networks in which information is encoded in the firing times of neurons. The implementation of spiking neural networks on neuromorphic hardware presents a promising choice for future energy-efficient AI applications. However, there exist very few results that compare the computational power of spiking neurons to arbitrary threshold circuits and sigmoidal neurons. It has also been shown that a network of spiking neurons is capable of approximating any continuous function. By using the Spike Response Model as a mathematical model of a spiking neuron and assuming a linear response function, we prove that the mapping generated by a network of spiking neurons is continuous piecewise linear. We also show that a spiking neural network can emulate the output of any multi-layer (ReLU) neural network. Furthermore, we show that the maximum number of linear regions generated by a spiking neuron scales exponentially with the input dimension, a characteristic that distinguishes it significantly from an artificial (ReLU) neuron. Our results further extend the understanding of the approximation properties of spiking neural networks and open up new avenues where spiking neural networks can be deployed instead of artificial neural networks without any performance loss.
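
    Under the Spike Response Model with a linear response kernel, an output spike time is found by solving a linear equation over the set of input spikes that have already arrived, which is why the input-to-output mapping is continuous piecewise linear. The snippet below is a small illustrative sketch under those assumptions (a single neuron, kernel eps(s) = s for s > 0, no refractoriness); it is not code from the paper.

```python
def srm_firing_time(input_times, weights, threshold):
    """Earliest firing time of a Spike Response Model neuron with a linear
    response kernel eps(s) = s for s > 0.

    The neuron fires at the first time t where
        sum over {i : t_i < t} of w_i * (t - t_i) >= threshold,
    so the output spike time is piecewise linear in the input spike times."""
    events = sorted(zip(input_times, weights))  # process inputs in time order
    active_weight, weighted_times = 0.0, 0.0
    for idx, (t_i, w_i) in enumerate(events):
        active_weight += w_i
        weighted_times += w_i * t_i
        if active_weight <= 0:
            continue  # membrane potential is not rising on this segment
        # Solve active_weight * t - weighted_times = threshold for t.
        t_candidate = (threshold + weighted_times) / active_weight
        next_time = events[idx + 1][0] if idx + 1 < len(events) else float("inf")
        if t_i <= t_candidate <= next_time:
            return t_candidate  # threshold reached before the next input spike
    return float("inf")  # the neuron never reaches threshold

# Two inputs at t = 1.0 and t = 2.0 with equal weights reach threshold 1.0 at t = 2.5.
print(srm_firing_time([1.0, 2.0], [0.5, 0.5], threshold=1.0))
```

    Shifting either input time moves the output time linearly until the set of spikes arriving before the output changes, which is the piecewise-linear behaviour the article analyses.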

    On the Computational Complexity and Formal Hierarchy of Second Order Recurrent Neural Networks

    Artificial neural networks (ANNs) with recurrence and self-attention have been shown to be Turing-complete (TC). However, existing work has shown that these ANNs require multiple turns or unbounded computation time, even with unbounded precision in weights, in order to recognize TC grammars. Moreover, under constraints such as fixed or bounded precision neurons and time, ANNs without memory struggle to recognize even context-free languages. In this work, we extend the theoretical foundation for second-order recurrent networks (2nd-order RNNs) and prove that there exists a class of 2nd-order RNNs that is Turing-complete with bounded time. This model can directly encode a transition table into its recurrent weights, enabling bounded-time computation, and is interpretable by design. We also demonstrate that 2nd-order RNNs, without memory, under bounded weight and time constraints, outperform modern models such as vanilla RNNs and gated recurrent units in recognizing regular grammars. We provide an upper bound and a stability analysis on the maximum number of neurons required by 2nd-order RNNs to recognize any class of regular grammar. Extensive experiments on the Tomita grammars support our findings, demonstrating the importance of tensor connections in crafting computationally efficient RNNs. Finally, we show that 2nd-order RNNs are also interpretable by extraction and can extract state machines with higher success rates than first-order RNNs. Our results extend the theoretical foundations of RNNs and offer promising avenues for future explainable AI research.
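
    A second-order (tensor) RNN combines the previous hidden state and the current input multiplicatively through a third-order weight tensor, which is what allows a transition table delta(state, symbol) -> state to be written directly into the recurrent weights. The cell below is a generic sketch of that standard formulation with an illustrative hand-coded two-state automaton; the dimensions, gain values, and initialization are arbitrary choices, not the paper's exact construction.

```python
import torch
import torch.nn as nn

class SecondOrderRNNCell(nn.Module):
    """Second-order RNN cell:
    h_t[k] = sigmoid( sum_{i,j} W[k, i, j] * h_{t-1}[i] * x_t[j] + b[k] )."""
    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.W = nn.Parameter(0.1 * torch.randn(hidden_size, hidden_size, input_size))
        self.b = nn.Parameter(torch.zeros(hidden_size))

    def forward(self, x, h):
        # x: (batch, input_size) one-hot symbol, h: (batch, hidden_size) state vector
        pre = torch.einsum("kij,bi,bj->bk", self.W, h, x) + self.b
        return torch.sigmoid(pre)

# Hand-encode a 2-state, 2-symbol automaton: symbol "a" keeps the state, "b" flips it.
cell = SecondOrderRNNCell(input_size=2, hidden_size=2)
with torch.no_grad():
    cell.W.zero_()
    cell.b.fill_(-10.0)
    # Entries (i, j, k) meaning delta(state i, symbol j) = state k.
    for i, j, k in [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]:
        cell.W[k, i, j] = 20.0

h = torch.tensor([[1.0, 0.0]])  # start in state s0
x = torch.tensor([[0.0, 1.0]])  # read symbol "b"
h = cell(x, h)                  # h is now approximately one-hot for state s1
```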

    An Empirical Study on the Language Modal in Visual Question Answering

    Generalization beyond in-domain experience to out-of-distribution data is of paramount significance in the AI domain. Recently, state-of-the-art Visual Question Answering (VQA) models have shown impressive performance on in-domain data, partly owing to language prior bias, which, however, hinders their generalization ability in practice. This paper attempts to provide new insights into the influence of the language modality on VQA performance from an empirical perspective. To achieve this, we conducted a series of experiments on six models. The results revealed that 1) apart from the prior bias caused by question types, postfix-related bias also plays a notable role in inducing biased predictions, and 2) training VQA models with word-sequence-related variant questions improved performance on the out-of-distribution benchmark, with LXMERT even achieving a 10-point gain without adopting any debiasing methods. We delved into the underlying reasons behind these experimental results and put forward some simple proposals to reduce the models' dependency on language priors. The experimental results demonstrated the effectiveness of our proposed method in improving performance on the out-of-distribution benchmark VQA-CP v2. We hope this study can inspire novel insights for future research on designing bias-reduction approaches.
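
    The abstract does not spell out how the word-sequence-related variant questions were constructed, so the snippet below is only a hedged illustration of one simple way to generate word-order variants of a question for training-time augmentation; the shuffling strategy, function name, and example question are assumptions, not the paper's method.

```python
import random

def word_sequence_variants(question: str, n_variants: int = 3, seed: int = 0) -> list[str]:
    """Generate simple word-order variants of a question (illustrative only)."""
    rng = random.Random(seed)
    words = question.split()
    variants = []
    for _ in range(n_variants):
        shuffled = words[:]
        rng.shuffle(shuffled)
        variants.append(" ".join(shuffled))
    return variants

# Each variant keeps the same words (and thus the same answer) but perturbs the order,
# discouraging a VQA model from latching onto fixed question prefixes or postfixes.
print(word_sequence_variants("what color is the banana"))
```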

    Generating Rembrandt: Artificial Intelligence, Copyright, and Accountability in the 3A Era--The Human-like Authors are Already Here- A New Model

    Artificial intelligence (AI) systems are creative, unpredictable, independent, autonomous, rational, evolving, capable of data collection, communicative, efficient, accurate, and have free choice among alternatives. Similar to humans, AI systems can autonomously create and generate creative works. The use of AI systems in the production of works, whether for personal or manufacturing purposes, has become common in the 3A era of automated, autonomous, and advanced technology. Despite this progress, there is a deep and common concern in modern society that AI technology will become uncontrollable. There is therefore a call for social and legal tools for controlling AI systems’ functions and outcomes. This Article addresses the questions of the copyrightability of artworks generated by AI systems: ownership and accountability. The Article debates who should enjoy the benefits of copyright protection and who should be responsible for the infringement of rights and damages caused by AI systems that independently produce creative works. Subsequently, this Article presents the AI Multi-Player paradigm, arguing against the imposition of these rights and responsibilities on the AI systems themselves or on the different stakeholders, mainly the programmers who develop such systems. Most importantly, this Article proposes the adoption of a new model of accountability for works generated by AI systems: the AI Work Made for Hire (WMFH) model, which views the AI system as a creative employee or independent contractor of the user. Under this proposed model, ownership, control, and responsibility would be imposed on the humans or legal entities that use AI systems and enjoy their benefits. This model accurately reflects the human-like features of AI systems; it is justified by the theories behind copyright protection; and it serves as a practical solution to assuage the fears surrounding AI systems. In addition, this model unveils the powers behind the operation of AI systems and hence efficiently imposes accountability on clearly identifiable persons or legal entities. Since AI systems are copyrightable algorithms, this Article also reflects on the accountability for AI systems in other legal regimes, such as tort or criminal law, and in various industries using these systems.