
    Making DNSSEC Future Proof


    The Future of AI Accountability in the Financial Markets

    Consumer interaction with the financial market ranges from applying for credit cards, to financing the purchase of a home, to buying and selling securities. And with each transaction, the lender, bank, and brokerage firm are likely utilizing artificial intelligence (AI) behind the scenes to augment their operations. While AI’s ability to process data at high speeds and in large quantities makes it an important tool for financial institutions, it is imperative to be attentive to the risks and limitations that accompany its use. In the context of financial markets, AI’s lack of decision-making transparency, often called the “black box problem,” along with AI’s dependence on quality data, presents additional complexities when considering the aggregate effect of algorithms deployed in the market. Owing to these issues, the benefits of AI must be weighed against the particular risks that accompany the spread of this technology throughout the markets. Financial regulation, as it stands, is complex, expensive, and often involves overlapping regulations and regulators. Thus far, financial regulators have responded by publishing guidance and standards for firms utilizing AI tools, but they have stopped short of demanding access to source code, setting specific standards for developers, or otherwise altering traditional regulatory frameworks. While regulators are no strangers to regulating new financial products or technology, fitting AI within the traditional frameworks of prudential regulation, registration requirements, supervision, and enforcement actions leaves concerning gaps in oversight. This Article examines the suitability of the current financial regulatory frameworks for overseeing AI in the financial markets. It suggests that regulators consider developing multi-faceted approaches to promote AI accountability. This Article recognizes the potential harms and likelihood of regulatory arbitrage if these regulatory gaps remain unattended and thus suggests focusing on key elements for future regulation, namely the human developers and the regulation of data, to truly “hold AI accountable.” Therefore, holding AI accountable requires identifying the different ways in which sophisticated algorithms may cause harm to the markets and consumers if ineffectively regulated, and developing an approach that can flexibly respond to these broad concerns. Notably, this Article cautions against reliance on self-regulation and recommends that future policies take an adaptive approach to address current and future AI technologies.

    An Enhanced Spectral Clustering Algorithm with S-Distance

    This work is partially supported by the project "Prediction of diseases through computer assisted diagnosis system using images captured by minimally-invasive and non-invasive modalities", Computer Science and Engineering, PDPM Indian Institute of Information Technology, Design and Manufacturing, Jabalpur, India (under ID: SPARCMHRD-231). This work is also partially supported by the project "Smart Solutions in Ubiquitous Computing Environments", Grant Agency of Excellence, University of Hradec Kralove, Faculty of Informatics and Management, Czech Republic (under ID: UHK-FIM-GE-2204/2021); the project at Universiti Teknologi Malaysia (UTM) under Research University Grant Vot-20H04; the Malaysia Research University Network (MRUN) Vot 4L876; and the Fundamental Research Grant Scheme (FRGS) Vot 5F073, supported by the Ministry of Education Malaysia, for the completion of the research.

    Calculating and monitoring customer churn metrics is important for companies to retain customers and earn more profit in business. In this study, a churn prediction framework is developed using a modified spectral clustering (SC) algorithm. The similarity measure plays an imperative role in clustering, and choosing it well is key to predicting churn accurately from industrial data. The linear Euclidean distance in traditional SC is therefore replaced by the non-linear S-distance (Sd), which is derived from the concept of S-divergence (SD). Several characteristics of Sd are discussed in this work. Experiments are conducted to validate the proposed clustering algorithm on four synthetic, eight UCI, two industrial and one telecommunications (customer churn) databases. Three existing clustering algorithms (k-means, density-based spatial clustering of applications with noise, and conventional SC) are also implemented on the above-mentioned 15 databases. The empirical outcomes show that the proposed clustering algorithm outperforms the three existing algorithms in terms of Jaccard index, f-score, recall, precision and accuracy. Finally, the significance of the clustering results is tested with the Wilcoxon signed-rank test, the Wilcoxon rank-sum test, and the sign test. The comparative study shows that the outcomes of the proposed algorithm are promising, especially in the case of clusters of arbitrary shape.
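    The churn framework itself is not reproduced here, but the core recipe the abstract describes (swap the Euclidean metric inside spectral clustering for a non-linear distance) can be sketched generically. The scalar S-distance used below is only an assumed form motivated by the S-divergence (log of the arithmetic mean minus half the log of the product, summed over strictly positive features, then square-rooted); the paper's exact definition, data and tuning may differ.

        # Sketch: spectral clustering on a precomputed affinity built from a
        # non-Euclidean pairwise distance. The s_distance form is an assumption,
        # not the paper's verbatim definition; features must be strictly positive.
        import numpy as np
        from sklearn.cluster import SpectralClustering

        def s_distance(x, y, eps=1e-12):
            x, y = np.clip(x, eps, None), np.clip(y, eps, None)
            # S-divergence-style term: log of arithmetic mean minus half log of product
            return np.sqrt(np.sum(np.log((x + y) / 2.0) - 0.5 * np.log(x * y)))

        def pairwise(X, metric):
            n = len(X)
            D = np.zeros((n, n))
            for i in range(n):
                for j in range(i + 1, n):
                    D[i, j] = D[j, i] = metric(X[i], X[j])
            return D

        # Toy positive-valued data: two well-separated groups of "customers"
        rng = np.random.default_rng(0)
        X = np.vstack([rng.uniform(0.5, 1.5, (50, 4)), rng.uniform(4.0, 6.0, (50, 4))])

        D = pairwise(X, s_distance)
        A = np.exp(-D / D.std())  # convert distances to similarities (affinity matrix)
        labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                                    random_state=0).fit_predict(A)
        print(np.bincount(labels))  # expect roughly a 50/50 split

    Swapping in a different metric only requires changing s_distance; the rest of the precomputed-affinity pipeline stays the same.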

    Textual Assemblages and Transmission: Unified models for (Digital) Scholarly Editions and Text Digitisation

    Scholarly editing and textual digitisation are typically seen as two distinct, though related, fields. Scholarly editing is replete with traditions and codified practices, while the digitisation of text-bearing material is a recent enterprise, governed more by practice than theory. From the perspective of scholarly editing, the mere digitisation of text is a world away from the intellectual engagement and rigour on which textual scholarship is founded. Recent developments have led to a more open-minded perspective. As scholarly editing has made increasing use of the digital medium, and textual digitisation begins to make use of scholarly editing tools and techniques, the more obvious distinctions dissolve. Such criteria as ‘critical engagement’ become insufficient grounds on which to base a clear distinction. However, this perspective is not without its risks either: it perpetuates the idea that a (digital) scholarly edition and a digitised text are interchangeable. This thesis argues that a real distinction can be drawn. It starts by considering scholarly editing and textual digitisation as forms of textual transmission. Drawing on the ontological perspective of Deleuze and Guattari, it builds a framework for considering the processes behind scholarly editing and digitisation. In doing so, it uncovers a number of critical distinctions. Scholarly editing creates a regime of representation that is self-consistent and self-validating; textual digitisation does not. In the final chapters, this thesis uses the crowd-sourced Letters of 1916 project as a test case for a new conceptualisation of a scholarly edition: one that is neither globally self-consistent nor self-validating, but which provides a conceptual model in which these absences might be mitigated and the function of a scholarly edition fulfilled.

    On the Merits and Limits of Replication and Negation for IS Research

    A simple idea underpins the scientific process: All results should be subject to continued testing and questioning. Given the particularities of our international IS discipline, different viewpoints seem to be required to develop a picture of the merits and limits of testing and replication. Hence, the authors of this paper approach the topic from different perspectives. Following the ongoing discourse in neighbouring disciplines, we start by highlighting the significance of testing, replication and negation for scientific discourse as well as for the sponsors of research initiatives. Next, we discuss types of replication research and the challenges associated with each. In the third section, challenging questions are raised in the light of the ability of IS research for self-correction. Then, we address publication issues related to types of replications that require shifting editorial behaviors. The fifth section reflects on the possible use and interpretation of replication results in the light of contingency. As a key takeaway, the paper suggests ways to identify studies worth replicating in our field and reflects on possible roles of replication and testing for future IS research

    Validation of Score Meaning for the Next Generation of Assessments

    Despite developments in research and practice on using examinee response process data in assessment design, the use of such data in test validation is rare. Validation of Score Meaning in the Next Generation of Assessments Using Response Processes highlights the importance of validity evidence based on response processes and provides guidance to measurement researchers and practitioners in creating and using such evidence as a regular part of the assessment validation process. Response processes refer to approaches and behaviors of examinees when they interpret assessment situations and formulate and generate solutions as revealed through verbalizations, eye movements, response times, or computer clicks. Such response process data can provide information about the extent to which items and tasks engage examinees in the intended ways. With contributions from the top researchers in the field of assessment, this volume includes chapters that focus on methodological issues and on applications across multiple contexts of assessment interpretation and use. In Part I of this book, contributors discuss the framing of validity as an evidence-based argument for the interpretation of the meaning of test scores, the specifics of different methods of response process data collection and analysis, and the use of response process data relative to issues of validation as highlighted in the joint standards on testing. In Part II, chapter authors offer examples that illustrate the use of response process data in assessment validation. These cases are provided specifically to address issues related to the analysis and interpretation of performance on assessments of complex cognition, assessments designed to inform classroom learning and instruction, and assessments intended for students with varying cultural and linguistic backgrounds

    What makes an interesting job? Job characteristic preferences and personality amongst undergraduates

    Understanding job applicants’ preferences towards job characteristics can help companies focus on promoting and developing the important aspects of the workplace, which in turn is linked to better job satisfaction and productivity. By advertising specific job and organisational characteristics, companies aim to recruit applicants who are attracted to such characteristics, hence achieving a fit between employees and the organisation. Currently, there is a lack of research investigating the underpinnings of job characteristic preferences (JCPs). The current study aims to explore JCPs amongst undergraduate students and clarify the relationship between personality factors and JCPs. A sample of 109 Psychology undergraduate students rated the importance of 23 job characteristics and completed a personality trait and facet measure. The results showed that students rated employment conditions (salary, benefits, tenure and working hours) as more important to others than to themselves. There were also differences in perception regarding the importance of task, social and organisational characteristics. It was also found that Extraversion, Openness and Conscientiousness were significant predictors of JCPs, and that personality facets accounted for more variance in JCPs than the Big Five personality traits. These findings have implications for company recruiters and human resource practitioners in the areas of recruitment, selection and development, and provide insight into the use of personality assessment in these areas. Thesis (M.Psych(Organisational & Human Factors)) -- University of Adelaide, School of Psychology, 201
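    The facet-versus-trait finding is essentially a comparison of variance explained (R squared) by two regression models. A minimal sketch of that comparison on simulated data follows; the facet structure, variable names and modelling choices are hypothetical and not taken from the thesis.

        # Sketch: do narrow personality facets explain more variance in a job
        # characteristic preference (JCP) score than broad Big Five traits?
        # All data here are simulated; the thesis's measures and analysis may differ.
        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(1)
        n = 109                                        # sample size reported in the abstract
        facets = rng.normal(size=(n, 10))              # e.g. two facets per Big Five trait
        traits = facets.reshape(n, 5, 2).mean(axis=2)  # traits as averages of their facets
        jcp = facets @ rng.normal(size=10) + rng.normal(scale=2.0, size=n)

        r2_traits = LinearRegression().fit(traits, jcp).score(traits, jcp)
        r2_facets = LinearRegression().fit(facets, jcp).score(facets, jcp)
        print(f"R^2 (traits): {r2_traits:.2f}")
        print(f"R^2 (facets): {r2_facets:.2f}")  # facets expected to explain more variance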