Endogenous measures for contextualising large-scale social phenomena: a corpus-based method for mediated public discourse
This work presents an interdisciplinary methodology for developing endogenous measures of group membership through analysis of pervasive linguistic patterns in public discourse. Focusing on political discourse, this work critiques the conventional approach to the study of political participation, which is premised on decontextualised, exogenous measures to characterise groups. Considering the theoretical and empirical weaknesses of decontextualised approaches to large-scale social phenomena, this work suggests that contextualisation using endogenous measures might provide a complementary perspective to mitigate such weaknesses.
This work develops a sociomaterial perspective on political participation in mediated discourse as affiliatory action performed through language. While the affiliatory function of language is often performed consciously (such as statements of identity), this work is concerned with unconscious features (such as patterns in lexis and grammar). This work argues that pervasive patterns in such features that emerge through socialisation are resistant to change and manipulation, and thus might serve as endogenous measures of sociopolitical contexts, and thus of groups.
In terms of method, the work takes a corpus-based approach to the analysis of data from the Twitter messaging service, whereby patterns in users' speech are examined statistically in order to trace potential community membership. The method is applied in the US state of Michigan during the second half of 2018, 6 November having been the date of the midterm (i.e. non-Presidential) elections in the United States. The corpus is assembled from the original posts of 5,889 users, who are nominally geolocalised to 417 municipalities. These users are clustered according to pervasive language features. Comparing the linguistic clusters by the municipalities they represent reveals regular sociodemographic differentials across clusters. This is understood as an indication of social structure, suggesting that endogenous measures derived from pervasive patterns in language may indeed offer a complementary, contextualised perspective on large-scale social phenomena.
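The clustering step described above can be sketched as follows; the feature names and rates are invented for illustration (the thesis clusters 5,889 users on pervasive lexico-grammatical patterns in real data), and plain k-means stands in for whatever clustering procedure the work actually employs.

```python
# Hypothetical sketch: grouping users by relative rates of pervasive
# lexico-grammatical features. All numbers below are invented.
import numpy as np

# rows = users, columns = per-1,000-token rates of three assumed features
rates = np.array([
    [12.0, 3.1, 0.4],   # user A
    [11.5, 2.9, 0.5],   # user B
    [2.0,  9.8, 4.1],   # user C
    [1.8, 10.2, 3.9],   # user D
])

def kmeans(X, k=2, iters=10, seed=0):
    """Plain k-means: assign each row to its nearest centroid, recompute."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None] - centroids[None], axis=2)
        labels = dists.argmin(axis=1)
        centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels

labels = kmeans(rates)
# users A/B and C/D fall into separate linguistic clusters
assert labels[0] == labels[1] and labels[2] == labels[3] != labels[0]
```

Cluster labels like these could then be cross-tabulated against the users' municipalities to look for sociodemographic differentials, as the abstract describes.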
Biochemical sensing based on metal-organic architectures
This research concerns the use of metal–organic frameworks (MOFs) as a platform for biochemical sensing.
Different metal–organic architectures were used and individual approaches were pursued, such as the synthesis of electrically conductive hybrid MOF structures as chemiresistive sensing materials, and the integration of MOF particles into a polymer membrane to explore their potential for sweat-biomarker detection using Raman spectroscopy. The focus of each project was on the application of our MOFs as sensor materials and the evaluation of the signal response upon exposure to relevant analytes.
The achievements presented in this work emphasize the great potential that metal–organic architectures have as active materials for the sensing of biochemical analytes.
Contested environmental futures: rankings, forecasts and indicators as sociotechnical endeavours
In a world where numbers and science are often taken as the voice of truth and reason, Quantitative Devices (QDs) represent the epitome of policy driven by facts rather than hunches. Despite the scholarly interest in understanding the role of quantification in policy, the actual production of rankings, forecasts, indexes and other QDs has, to a great extent, been left unexamined. While appendixes and technical notebooks offer an explanation of how these devices are produced, they exclude aspects of their making that are arbitrarily considered "mundane." It is in the everyday performances at research centres that the micropolitics of knowledge production, imaginaries, and frustrations merge. These are vital dimensions for understanding the potential, limitations and ethical consequences of QDs.
Using two participant observations as the starting point, this thesis offers a comprehensive critical analysis of the processes through which university-based research centres create QDs that represent the world. It addresses how researchers conceive quantitative data. It pays attention to the discourses of hope and expectation embedded in the devices. Finally, it considers the ethics of creating devices that cannot be replicated independently of their place of production.
Two QDs were analysed: the Violence Early Warning System (ViEWS) and the Environmental Performance Index (EPI). At Uppsala University, researchers created ViEWS to forecast the probability of drought-driven conflicts within the next 100 years. The EPI, produced at the Yale Centre for Environmental Law and Policy, ranks the performance of countries' environmental policies. This thesis challenges existing claims within Science and Technology Studies and the Sociology of Quantification that QDs co-produce knowledge within their realms. I argue that these devices act as vehicles for sociotechnical infrastructures to be consolidated with little debate among policymakers, given their understanding as scientific and objective tools. Moreover, for an indicator to be incorporated within a QD, it needs to be deemed as relevant for those making the devices but also valuable enough to have been previously quantified by data providers. Even more, existing sociotechnical inequalities, power relations and epistemic injustices could impede disadvantaged communities' (e.g., in the Global South) ability to challenge metrics originated in centres in the Global North. This thesis, therefore, demonstrates how the future QDs propose is unilateral and does not acknowledge the myriad possibilities that might arise from a diversity of worldviews. In other words, they cast a future designed to fit under the current status quo.
In sum, through two QDs focused on environment-related issues, this thesis launches an inquiry into the elements that make up the imaginaries these devices propose, following the everyday life of their producers. To achieve this, I discuss two core elements. First, the role of tacit knowledge and sociotechnical inequalities in reinforcing power relations between those with the means to quantify and those who might only accommodate proposed futures. Second, the dynamics between research centres and data providers in relation to what is quantified. By scrutinising mundanity, this work is a step forward in understanding the construction of sociotechnical imaginaries and infrastructures.
Blockchain Technology: Disruptor or Enhancer to the Accounting and Auditing Profession
The unique features of blockchain technology (BCT) - peer-to-peer networks, distributed ledgers, consensus decision-making, transparency, immutability, auditability, and cryptographic security - coupled with the success enjoyed by Bitcoin and other cryptocurrencies have encouraged many to assume that the technology would revolutionise virtually all aspects of business. A growing body of scholarship suggests that BCT would disrupt the accounting and auditing fields by changing accounting practices, disintermediating auditors, and eliminating financial fraud. BCT disrupts audits (Lombardi et al., 2021), reduces the role of audit firms (Yermack, 2017), undermines accountants' roles with software developers and miners (Fortin & Pimentel, 2022), eliminates many management functions and transforms businesses (Tapscott & Tapscott, 2017), facilitates a triple-entry accounting system (Cai, 2021), and prevents fraudulent transactions (Dai et al., 2017; Rakshit et al., 2022). Despite these speculations, scholars have acknowledged that the application of BCT in the accounting and assurance industry is underexplored and many existing studies are said to lack engagement with practitioners (Dai & Vasarhelyi, 2017; Lombardi et al., 2021; Schmitz & Leoni, 2019).
This study empirically explored whether BCT disrupts or enhances accounting and auditing fields. It also explored the relevance of audit in a BCT environment and the effectiveness of the BCT mechanism for fraud prevention and detection. The study further examined which technical skillsets accountants and auditors require in a BCT environment, and explored the incentives, barriers, and unintended consequences of the adoption of BCT in the accounting and auditing professions. The current COVID-19 environment was also investigated in terms of whether the pandemic has improved BCT adoption or not.
A qualitative exploratory study used semi-structured interviews to engage practitioners from blockchain start-ups, IT experts, financial analysts, accountants, auditors, academics, organisational leaders, consultants, and editors who understood the technology. With the aid of NVivo qualitative analysis software, the views of 44 participants from 13 countries (New Zealand, Australia, the United States, the United Kingdom, Canada, Germany, Italy, Ireland, Hong Kong, India, Pakistan, the United Arab Emirates, and South Africa) were analysed.
The Technological, Organisational, and Environmental (TOE) framework, extended with a consequences-of-innovation context, was adopted for this study. This expanded TOE framework was used as the theoretical lens to understand the disruption of BCT and its adoption in the accounting and auditing fields. Four clear patterns emerged. First, BCT is an emerging tool that accountants and auditors use mainly to analyse financial records, because the technology cannot disintermediate auditors from the financial system. Second, the technology can detect anomalies but cannot prevent financial fraud. Third, BCT has not been adopted by any organisation for financial reporting and accounting purposes, and accountants and auditors do not require new skillsets or an understanding of the BCT programming language to be able to operate in a BCT domain. Fourth, the advent of COVID-19 has not substantially enhanced the adoption of BCT. Additionally, this study highlights the incentives, barriers, and unintended consequences of adopting BCT as financial technology (FinTech). These findings shed light on important questions about BCT disrupting and disintermediating auditors, the extent of adoption in the accounting industry, and the prevention of fraud and anomalies, and underscore the notion that blockchain, as an emerging technology, currently does not appear to be substantially disrupting the accounting and auditing profession.
This study makes methodological, theoretical, and practical contributions. At the methodological level, the study adopted the social constructivist-interpretivist paradigm with an exploratory qualitative method to engage with and understand BCT as a disruptive innovation in the accounting industry. The engagement with practitioners from diverse fields, professions, and countries provides a distinctive and innovative contribution to methodological and practical knowledge. At the theoretical level, the findings contribute to the literature by offering an integrated conceptual TOE framework. The framework offers a reference for practitioners, academics and policymakers seeking to appraise the comprehensive factors influencing BCT adoption and its likely unintended consequences. The findings suggest that, at present, no organisations are using BCT for financial reporting and accounting systems. This study contributes to practice by highlighting the differences between initial expectations and practical applications of what BCT can do in the accounting and auditing fields. The study could not find any empirical evidence that BCT will disrupt audits, eliminate the roles of auditors in a financial system, or prevent and detect financial fraud. Also, there was no significant evidence that accountants and auditors required higher-level skillsets or an understanding of the BCT programming language to be able to use the technology. Future research should consider the implications of an external audit firm acting as a node in a BCT network for the internal audit functions. It is equally important to critically examine the relevance of including programming languages or codes in the curriculum of undergraduate accounting students. Future research could also empirically evaluate whether a BCT-enabled triple-entry system could prevent financial statement and management fraud.
The automatic processing of multiword expressions in Irish
It is well-documented that Multiword Expressions (MWEs) pose a unique challenge to a variety of NLP tasks such as machine translation, parsing, information retrieval, and more. For low-resource languages such as Irish, these challenges can be exacerbated by the scarcity of data and a lack of research on this topic. In order to improve the handling of MWEs in various NLP tasks for Irish, this thesis both addresses the lack of resources specifically targeting MWEs in Irish and examines how these resources can be applied to those NLP tasks.
We report on the creation and analysis of a number of lexical resources as part of this PhD research. Ilfhocail, a lexicon of Irish MWEs, is created through extracting MWEs from other lexical resources such as dictionaries. A corpus annotated with verbal MWEs in Irish is created for the inclusion of Irish in the PARSEME Shared Task 1.2. Additionally, MWEs were tagged in a bilingual EN-GA corpus for inclusion in machine translation experiments. For the purposes of annotation, a categorisation scheme for nine categories of MWEs in Irish is created, based on a combination of linguistic analysis of these types of constructions and cross-lingual frameworks for defining MWEs.
A case study in applying MWEs to NLP tasks is undertaken, exploring the incorporation of MWE information while training Neural Machine Translation systems. Finally, the topic of automatic identification of Irish MWEs is explored, documenting the training of a system capable of automatically identifying Irish MWEs from a variety of categories, and the challenges associated with developing such a system.
This research contributes towards a greater understanding of Irish MWEs and their applications in NLP, and provides a foundation for future work in exploring other methods for the automatic discovery and identification of Irish MWEs, and in further developing the MWE resources described above.
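As a rough illustration of how a lexicon such as Ilfhocail might support MWE identification, the sketch below performs greedy longest-match lookup of known MWEs in a token stream. The lexicon entries and the matching strategy are illustrative assumptions, not the thesis's actual system, which is trained rather than purely lexicon-based.

```python
# Hypothetical lexicon-based MWE matcher. The Irish entries below are
# a tiny invented sample; the Ilfhocail lexicon itself is far larger.
lexicon = {
    ("ar", "bith"),          # 'any (at all)'
    ("i", "gcónaí"),         # 'always'
    ("os", "comhair"),       # 'in front of'
}
max_len = max(len(m) for m in lexicon)

def find_mwes(tokens):
    """Return (start, end) spans of lexicon MWEs, longest match first."""
    spans, i = [], 0
    while i < len(tokens):
        # try the longest possible span first, down to length 2
        for n in range(min(max_len, len(tokens) - i), 1, -1):
            if tuple(tokens[i:i + n]) in lexicon:
                spans.append((i, i + n))
                i += n
                break
        else:
            i += 1
    return spans

print(find_mwes(["tá", "sé", "i", "gcónaí", "anseo"]))  # [(2, 4)]
```

Spans found this way could feed downstream tasks, for example by merging matched tokens into single units before machine translation training.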
Application of deep learning methods in materials microscopy for the quality assessment of lithium-ion batteries and sintered NdFeB magnets
Quality control focuses on detecting product defects and monitoring activities to verify that products meet the desired quality standard. Many approaches to quality control use specialised image-processing software based on manually engineered features, designed by domain experts to detect objects and analyse images. However, these models are laborious and costly to develop and difficult to maintain, while the resulting solution is often brittle and requires substantial adaptation for even slightly different use cases. For these reasons, quality control in industry is still frequently performed manually, which is time-consuming and error-prone. We therefore propose a more general, data-driven approach based on recent advances in computer vision, using convolutional neural networks to learn representative features directly from the data. Whereas conventional methods use handcrafted features to detect individual objects, deep learning approaches learn generalisable features directly from the training samples in order to detect a variety of objects.
In this dissertation, models and techniques are developed for the automated detection of defects in light-microscopy images of materialographically prepared sections. We develop defect-detection models that can be broadly divided into supervised and unsupervised deep learning techniques. In particular, various supervised deep learning models are developed for detecting defects in the microstructure of lithium-ion batteries, ranging from binary classification models based on a sliding-window approach with limited training data, to complex defect detection and localisation models based on one- and two-stage detectors. Our final model can detect and localise multiple classes of defects in large microscopy images with high accuracy and in near real time.
However, successfully training supervised deep learning models typically requires a sufficiently large amount of labelled training samples, which are often not readily available and can be very costly to obtain. We therefore propose two approaches based on unsupervised deep learning for detecting anomalies in the microstructure of sintered NdFeB magnets, without the need for labelled training data. The models are able to detect defects by learning indicative features of only "normal" microstructure patterns from the training data. We demonstrate experimental results of the proposed defect-detection systems by performing a quality assessment on commercial samples of lithium-ion batteries and sintered NdFeB magnets.
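The unsupervised idea, modelling only "normal" patterns and flagging samples with high reconstruction error as anomalies, can be sketched in miniature; here PCA stands in for the deep models used in the thesis, and the patch data is synthetic.

```python
# Minimal sketch of reconstruction-based anomaly detection, assuming a
# linear (PCA) model in place of the thesis's deep networks. Synthetic data.
import numpy as np

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 0.1, size=(200, 16))  # "normal" patch features
normal[:, 0] += 1.0                            # shared normal structure

mean = normal.mean(axis=0)
# principal components fitted on normal training data only
_, _, vt = np.linalg.svd(normal - mean, full_matrices=False)
basis = vt[:2]                                 # low-dim "normal" subspace

def anomaly_score(x):
    """Reconstruction error of x w.r.t. the learned normal subspace."""
    centred = x - mean
    recon = centred @ basis.T @ basis
    return float(np.linalg.norm(centred - recon))

normal_patch = normal[0]
defect_patch = normal_patch + np.array([0.0] * 8 + [3.0] * 8)  # injected defect
assert anomaly_score(defect_patch) > anomaly_score(normal_patch)
```

The same recipe applies when the subspace projection is replaced by a deep autoencoder: anything the normal-only model cannot reconstruct well is flagged as a candidate defect.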
Getting the gist of it: An investigation of gist processing and the learning of novel gist categories
Gist extraction rapidly processes global structural regularities to provide access to the general meaning and global categorisations of our visual environment: the gist. Medical experts can also extract gist information from mammograms to categorise them as normal or abnormal. However, the visual properties influencing the gist of medical abnormality are largely unknown. Nor is it known how medical experts, or any observer for that matter, learn to recognise the gist of new categories. This thesis investigated the processing and acquisition of the gist of abnormality. Chapter 2 observed no significant differences in performance between 500 ms and unlimited viewing time, suggesting that the gist of abnormality is fully accessible after 500 ms and remains available during further visual processing. Next, Chapter 3 demonstrated that certain high-pass filters enhanced gist signals in mammograms at risk of future cancer, without affecting overall performance. These filters could be used to enhance mammograms for gist risk-factor scoring. Chapter 4's multi-session training showed that perceptual exposure with global feedback is sufficient to induce learning of a new gist categorisation. However, learning was affected by individual differences and was not significantly retained after 7-10 days, suggesting that prolonged perceptual exposure might be needed for consolidation. Chapter 5 observed evidence for the neural signature of gist extraction in medical experts across a network of regions, where neural activity patterns showed clear individual differences. Overall, the findings of this thesis confirm the gist extraction of medical abnormality as a rapid, global process that is sensitive to spatial structural regularities. Additionally, it was shown that a gist category can be learned via global feedback, but this learning is hard to retain and is affected by individual differences. Similarly, individual differences were observed in the neural signature of gist extraction by medical experts.
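A minimal sketch of the kind of high-pass filtering referred to above, assuming a simple FFT-based radial cutoff; the cutoff value and test image are arbitrary stand-ins, not the actual filters evaluated in the thesis.

```python
# Illustrative FFT high-pass filter: keep only spatial frequencies above
# a radial cutoff. Image and cutoff are synthetic stand-ins.
import numpy as np

def high_pass(image, cutoff):
    """Zero out spatial frequencies closer than `cutoff` to DC."""
    f = np.fft.fftshift(np.fft.fft2(image))
    h, w = image.shape
    yy, xx = np.ogrid[:h, :w]
    dist = np.hypot(yy - h // 2, xx - w // 2)   # distance from DC
    f[dist < cutoff] = 0                        # suppress low frequencies
    return np.real(np.fft.ifft2(np.fft.ifftshift(f)))

img = np.add.outer(np.linspace(0, 1, 64), np.linspace(0, 1, 64))  # smooth ramp
filtered = high_pass(img, cutoff=8)
# the smooth, low-frequency ramp carries most of the energy, so filtering
# strictly reduces the (mean-removed) image energy
assert np.linalg.norm(filtered) < np.linalg.norm(img - img.mean())
```

In a gist experiment, filtered images like this would let one test whether the high spatial frequencies alone still carry the abnormality signal.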