555 research outputs found

    “So what if ChatGPT wrote it?” Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy

    Transformative artificially intelligent tools, such as ChatGPT, designed to generate sophisticated text indistinguishable from that produced by a human, are applicable across a wide range of contexts. The technology presents opportunities as well as challenges, often ethical and legal ones, and has the potential for both positive and negative impacts on organisations, society, and individuals. Offering multidisciplinary insight into some of these, this article brings together 43 contributions from experts in fields such as computer science, marketing, information systems, education, policy, hospitality and tourism, management, publishing, and nursing. The contributors acknowledge ChatGPT’s capabilities to enhance productivity and suggest that it is likely to offer significant gains in the banking, hospitality and tourism, and information technology industries, and to enhance business activities such as management and marketing. Nevertheless, they also consider its limitations, disruptions to practices, threats to privacy and security, and the consequences of biases, misuse, and misinformation. Opinion is split, however, on whether ChatGPT’s use should be restricted or legislated. Drawing on these contributions, the article identifies questions requiring further research across three thematic areas: knowledge, transparency, and ethics; digital transformation of organisations and societies; and teaching, learning, and scholarly research. The avenues for further research include: identifying the skills, resources, and capabilities needed to handle generative AI; examining biases of generative AI attributable to training datasets and processes; exploring the business and societal contexts best suited to generative AI implementation; determining optimal combinations of human and generative AI for various tasks; identifying ways to assess the accuracy of text produced by generative AI; and uncovering the ethical and legal issues in using generative AI across different contexts

    A survey on bias in machine learning research

    Current research on bias in machine learning often focuses on fairness while overlooking the roots or causes of bias. However, bias was originally defined as a "systematic error," often caused by humans at different stages of the research process. This article aims to bridge the gap with past literature on bias in research by providing a taxonomy of potential sources of bias and errors in data and models, focusing on bias in machine learning (ML) pipelines. The survey analyses over forty potential sources of bias in the ML pipeline, providing clear examples of each. By understanding the sources and consequences of bias in machine learning, better methods can be developed for detecting and mitigating it, leading to fairer, more transparent, and more accurate ML models.
    Comment: Submitted to journal. arXiv admin note: substantial text overlap with arXiv:2308.0946

    University bulletin 2023-2024

    This catalog for the University of South Carolina at Beaufort lists information about the college, the academic calendar, admission policies, degree programs, faculty and course descriptions

    To have done with theory? Baudrillard, or the literal confrontation with reality

    Eluding the temptation to reinterpret Jean Baudrillard once more, this work started from the ambition to consider his thought in its irreducibility, that is, in a radically literal way. Literalness is a recurring though overlooked term in Baudrillard’s oeuvre, and it is drawn from the direct concatenation of words in poetry or puns and other language games. It does not indicate a realist positivism but a principle that considers the metamorphoses and mutual alteration of things in their singularity without reducing them to a general equivalent (i.e. the meaning of words in a poem, which destroys its appearances). Reapplying the idea to Baudrillard and finding other singular routes through his “passwords” is a way to short-circuit its reductio ad realitatem and reaffirm its challenge to the hegemony of global integration. Even in the literature dedicated to it, this exercise has been rarer than the ‘hermeneutical’ one, where Baudrillard’s oeuvre was taken as a discourse to be interpreted and explained (finding an equivalent for its singularity). In plain polemic with any ideal of conformity between theory and reality (from which our present conformisms arguably derive, too), Baudrillard conceived thought not as something to be verified but as a series of hypotheses to be repeatedly radicalised – he often described it as a “spiral”, a form which challenges the codification of things, including its own. Coherent with this, the thesis does not consider Baudrillard’s work either a reflection or a prediction of reality but, instead, an out-and-out act, a precious singular object which, interrogated, ‘thinks’ us and our current events ‘back’. In the second part, Baudrillard’s hypotheses are taken further and measured in their capacity to challenge the reality of current events and phenomena. The thesis confronts the ‘hypocritical’ position of critical thinking, which accepts the present principle of reality.
It questions the interminability of our condition, where death seems thinkable only as a senseless interruption of the apparatus. It also confronts the solidarity between orthodox and alternative realities of the COVID pandemic and the Ukrainian invasion, searching for what is irreducible to the perfect osmosis of “virtual and factual”. Drawing equally from the convulsions of globalisation and the psychopathologies of academics, from DeLillo’s fiction and Baudrillard’s lesser-studied influences, this study evaluates the irreversibility of our system against the increasingly silent challenges of radical thought. It looks for what an increasingly pessimistic late Baudrillard called ‘rogue singularities’: forms which, often outside the conventional realms one would expect to find them, constitute potential sources of the fragility of global power. ‘To have done with theory’ does not mean abandoning radical thought and, together with it, the singularity of humanity. It means, as the thesis concludes, the courage to leave conventional ideas of theory and listen to less audible voices which, at the heart of this “enormous conspiracy”, whisper — as a mysterious lady in Mariupol did to Putin — “It’s all not true! It’s all for show!”

    Control and Archaism

    The presentation will delve into the relationship between control society and archaism. Deleuze’s conceptualization of control implies the reconfiguration of former spaces of discipline. While the Foucauldian model of discipline was characterized by enclosed spaces (such as prisons, armies, and churches), Deleuze’s notion of control highlights a continuous network where individuals are no longer molded but modulated. This prompts us to ponder the shift in the temporal structure that occurs during the transition from a disciplinary society to one governed by control. Specifically, this presentation aims to explore the disparities in our historical perspectives when viewed from disciplinary and control paradigms. In this context, I will explore Deleuze and Guattari's concept of ‘archaism’. According to Deleuze and Guattari, archaism is an inherent aspect of capitalism, its continual endeavor to reconstruct territoriality and replicate antiquated coding patterns. Capitalism necessitates archaism due to its lack of inherent belief structures. In essence, the system, which the duo name the ‘age of cynicism’, requires the revival of old codes to sustain its systems of subjugation and dominance. As my presentation will demonstrate, one can discern a transformation in the evolution of archaism as society shifts from discipline to control. By comparing the fascist archaism of the thirties in Germany and the archaism of contemporary alt-right movements, I will show that a disciplinary society presupposes a more centralized form of archaism, which is highly susceptible to state control and deeply ingrained in the institutional fabric of social life. Conversely, a control society implies a diversification and creativity in archaic attitudes, hinting at its potential for emancipation—a viewpoint emphasized by Deleuze and Guattari themselves in ‘Anti-Oedipus’

    New perspectives on A.I. in sentencing. Human decision-making between risk assessment tools and protection of human rights.

    The aim of this thesis is to investigate a field that until a few years ago was foreign to and distant from the penal system. The purpose of this undertaking is to account for the role that technology could play in the Italian Criminal Law system. More specifically, this thesis attempts to scrutinize a very intricate phase of adjudication. After deciding on the type of an individual's liability, a judge must decide on the severity of the penalty. This type of decision implies a prognostic assessment that looks to the future. It is precisely in this field and in prognostic assessments that, as has already been anticipated in the United States, instruments and processes are inserted in the pre-trial but also in the decision-making phase. In this contribution, we attempt to describe the current state of this field, trying, as a matter of method, to select the most relevant or most used tools. Using comparative and qualitative methods, the uses of some of these instruments in the supranational legal system are analyzed. Focusing attention on the Italian system, an attempt was made to investigate the nature of the element of an individual's ‘social dangerousness’ (pericolosità sociale) and capacity to commit offences, types of assessment that are fundamental in our system because they are part of various types of decisions, including the choice of the best sanctioning treatment. It was decided to turn our attention to this latter field because it is believed that the judge does not always have the time, the means, and the ability to assess all the elements concerning a subject and identify the best 'individualizing' treatment in order to fully realize the function of Article 27, paragraph 3, of the Constitution

    Bayesian computation in astronomy: novel methods for parallel and gradient-free inference

    The goal of this thesis is twofold: introduce the fundamentals of Bayesian inference and computation focusing on astronomical and cosmological applications, and present recent advances in probabilistic computational methods developed by the author that aim to facilitate Bayesian data analysis for the next generation of astronomical observations and theoretical models. The first part of this thesis familiarises the reader with the notion of probability and its relevance for science through the prism of Bayesian reasoning, by introducing the key constituents of the theory and discussing its best practices. The second part includes a pedagogical introduction to the principles of Bayesian computation motivated by the geometric characteristics of probability distributions and followed by a detailed exposition of various methods including Markov chain Monte Carlo (MCMC), Sequential Monte Carlo (SMC) and Nested Sampling (NS). Finally, the third part presents two novel computational methods and their respective software implementations. The first such development is Ensemble Slice Sampling (ESS), a new class of MCMC algorithms that extend the applicability of the standard Slice Sampler by adaptively tuning its only hyperparameter and utilising an ensemble of parallel walkers in order to efficiently handle strong correlations between parameters. The parallel, black–box and gradient-free nature of the method renders it ideal for use in combination with computationally expensive and non–differentiable models often met in astronomy. ESS is implemented in Python in the well–tested and open-source software package called zeus that is specifically designed to tackle the computational challenges posed by modern astronomical and cosmological analyses. In particular, use of the code requires minimal, if any, hand–tuning of hyperparameters while its performance is insensitive to linear correlations and it can scale up to thousands of CPUs without any extra effort.
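    For readers unfamiliar with the baseline that ESS generalises, a minimal sketch of the standard univariate slice sampler (stepping-out and shrinkage, per Neal's classic formulation) is given below. This is an illustrative toy, not the zeus implementation; the target density, step width `w`, and seed are arbitrary choices:

    ```python
    import random

    def slice_sample(logprob, x0, w=1.0, n_samples=1000, seed=0):
        """Univariate slice sampler with stepping-out and shrinkage."""
        rng = random.Random(seed)
        samples, x = [], x0
        for _ in range(n_samples):
            # 1. Draw a vertical level: log_y = logprob(x) - Exp(1)
            #    defines the horizontal "slice" {x' : logprob(x') > log_y}.
            log_y = logprob(x) - rng.expovariate(1.0)
            # 2. Step out: place an interval of width w randomly around x,
            #    then widen each end until it lies outside the slice.
            left = x - rng.random() * w
            right = left + w
            while logprob(left) > log_y:
                left -= w
            while logprob(right) > log_y:
                right += w
            # 3. Shrink: propose uniformly from the interval, shrinking it
            #    towards x whenever the proposal falls outside the slice.
            while True:
                x_new = rng.uniform(left, right)
                if logprob(x_new) > log_y:
                    x = x_new
                    break
                if x_new < x:
                    left = x_new
                else:
                    right = x_new
            samples.append(x)
        return samples

    # Toy target: standard normal log-density (up to a constant).
    log_normal = lambda x: -0.5 * x * x
    draws = slice_sample(log_normal, x0=0.0, n_samples=5000)
    ```

    The step width `w` is the single hyperparameter that ESS tunes adaptively, and the thesis's multivariate extension replaces the fixed axis direction with directions defined by an ensemble of parallel walkers.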
The next contribution includes the introduction of Preconditioned Monte Carlo (PMC), a novel Monte Carlo method for Bayesian inference that facilitates effective sampling of probability distributions with non–trivial geometry. PMC utilises a Normalising Flow (NF) in order to decorrelate the parameters of the distribution and then proceeds by sampling from the preconditioned target distribution using an adaptive SMC scheme. PMC, through its Python implementation pocoMC, achieves excellent sampling performance, including accurate estimation of the model evidence, for highly correlated, non–Gaussian, and multimodal target distributions. Finally, the code is directly parallelisable, manifesting linear scaling up to thousands of CPUs
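    The preconditioning idea behind PMC can be illustrated with a deliberately simplified stand-in: in place of a learned normalising flow, a fixed linear (Cholesky) transport map decorrelates a strongly correlated 2-D Gaussian target, so that sampling happens in an independent-coordinate space and the map restores the target's geometry. The covariance, correlation value, and sample count here are arbitrary illustration choices:

    ```python
    import math
    import random

    # Correlated 2-D Gaussian target with covariance [[1, rho], [rho, 1]].
    rho = 0.95
    # Cholesky factor L of the covariance, so x = L z maps standard normals
    # z ~ N(0, I) to the correlated target. This fixed linear map stands in
    # for the normalising flow that PMC would learn adaptively.
    L = [[1.0, 0.0], [rho, math.sqrt(1.0 - rho * rho)]]

    def precondition(z):
        """Transport a point from the decorrelated space to the target space."""
        return (L[0][0] * z[0], L[1][0] * z[0] + L[1][1] * z[1])

    rng = random.Random(1)
    # Sampling is trivial in the preconditioned (independent) space ...
    zs = [(rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)) for _ in range(20000)]
    # ... and the transport map reinstates the target's correlations.
    xs = [precondition(z) for z in zs]
    ```

    In PMC proper the transport map is a normalising flow fitted on the fly, the target need not be Gaussian, and the sampling in the preconditioned space is done with an adaptive SMC scheme rather than independent draws; the sketch only conveys the decorrelate-then-sample structure.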