Endogenous measures for contextualising large-scale social phenomena: a corpus-based method for mediated public discourse
This work presents an interdisciplinary methodology for developing endogenous measures of group membership through analysis of pervasive linguistic patterns in public discourse. Focusing on political discourse, this work critiques the conventional approach to the study of political participation, which is premised on decontextualised, exogenous measures to characterise groups. Considering the theoretical and empirical weaknesses of decontextualised approaches to large-scale social phenomena, this work suggests that contextualisation using endogenous measures might provide a complementary perspective to mitigate such weaknesses.
This work develops a sociomaterial perspective on political participation in mediated discourse as affiliatory action performed through language. While the affiliatory function of language is often performed consciously (such as statements of identity), this work is concerned with unconscious features (such as patterns in lexis and grammar). This work argues that pervasive patterns in such features that emerge through socialisation are resistant to change and manipulation, and thus might serve as endogenous measures of sociopolitical contexts, and thus of groups.
In terms of method, the work takes a corpus-based approach to the analysis of data from the Twitter messaging service whereby patterns in users' speech are examined statistically in order to trace potential community membership. The method is applied in the US state of Michigan during the second half of 2018, 6 November having been the date of midterm (i.e. non-Presidential) elections in the United States. The corpus is assembled from the original posts of 5,889 users, who are nominally geolocalised to 417 municipalities. These users are clustered according to pervasive language features. Comparing the linguistic clusters according to the municipalities they represent finds that there are regular sociodemographic differentials across clusters. This is understood as an indication of social structure, suggesting that endogenous measures derived from pervasive patterns in language may indeed offer a complementary, contextualised perspective on large-scale social phenomena.
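The clustering step described above can be sketched in miniature: represent each user by the relative frequencies of a small set of function words (a stand-in for the study's far richer pervasive lexical and grammatical features) and group the resulting vectors. The feature list, the data, and the use of plain k-means are all illustrative assumptions, not the study's actual pipeline.

```python
from collections import Counter

# Hypothetical function-word feature set; the study's actual pervasive
# lexical and grammatical features are far richer than this.
FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "is"]

def feature_vector(posts):
    """Relative frequency of each function word across a user's posts."""
    tokens = " ".join(posts).lower().split()
    counts = Counter(tokens)
    total = max(len(tokens), 1)
    return [counts[w] / total for w in FUNCTION_WORDS]

def kmeans(vectors, k, iters=20):
    """Plain k-means, deterministically seeded with the first k vectors."""
    centres = [list(v) for v in vectors[:k]]
    labels = [0] * len(vectors)
    for _ in range(iters):
        # Assignment step: nearest centre by squared Euclidean distance.
        for i, v in enumerate(vectors):
            labels[i] = min(range(k),
                            key=lambda c: sum((a - b) ** 2
                                              for a, b in zip(v, centres[c])))
        # Update step: move each centre to the mean of its members.
        for c in range(k):
            members = [v for v, lab in zip(vectors, labels) if lab == c]
            if members:
                centres[c] = [sum(dim) / len(members) for dim in zip(*members)]
    return labels
```

Clusters found this way could then be cross-tabulated against the municipalities of their members, mirroring the comparison the abstract describes.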
Exact and Approximated Log Alignments for Processes with Inter-case Dependencies
The execution of different cases of a process is often restricted by
inter-case dependencies, e.g., through queueing or shared resources. Various
high-level Petri net formalisms have been proposed that are able to model and
analyze coevolving cases. In this paper, we focus on a formalism tailored to
conformance checking through alignments, which introduces challenges related to
constraints the model should put on interacting process instances and on
resource instances and their roles. We formulate requirements for modeling and
analyzing resource-constrained processes, and compare several Petri net
extensions that allow for incorporating inter-case constraints. We argue that
the Resource Constrained -net is an appropriate formalism to be used in the
context of conformance checking, which traditionally aligns cases
individually, failing to expose deviations in inter-case dependencies. We
provide formal mathematical foundations of the globally aligned event log
based on the theory of partially ordered sets, and propose an approximation
technique, based on the composition of individually aligned cases, that
resolves inter-case violations locally.
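For a single case, an alignment is essentially a minimal-cost sequence of moves relating the observed trace to a run of the model: synchronous moves when labels match (cost 0), and log-only or model-only moves otherwise (cost 1 each). The toy dynamic-programming sketch below illustrates this per-case view only; it deliberately ignores the Petri-net machinery and the inter-case constraints that are the paper's actual subject.

```python
def align(trace, model_run):
    """Cost of an optimal alignment of one observed trace against one
    model run, with synchronous moves free and log-only / model-only
    moves costing 1 each (the standard single-case alignment cost)."""
    n, m = len(trace), len(model_run)
    # cost[i][j] = cheapest alignment of trace[:i] with model_run[:j]
    cost = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        cost[i][0] = i          # only log moves remain
    for j in range(1, m + 1):
        cost[0][j] = j          # only model moves remain
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sync = cost[i - 1][j - 1] if trace[i - 1] == model_run[j - 1] \
                   else float("inf")
            cost[i][j] = min(sync,            # synchronous move
                             cost[i - 1][j] + 1,   # log-only move
                             cost[i][j - 1] + 1)   # model-only move
    return cost[n][m]
```

A globally aligned log, as proposed in the paper, would additionally have to keep such per-case alignments mutually consistent with respect to shared resources.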
The determinants of value addition: a critical analysis of the global software engineering industry in Sri Lanka
It was evident through the literature that the perceived value delivery of the global software
engineering industry is low due to various factors. Therefore, this research examines global
software product companies in Sri Lanka to explore the software engineering methods and
practices that increase value addition. The overall aim of the study is to identify the key
determinants of value addition in the global software engineering industry and to critically
evaluate their impact on software product companies, helping them maximise value
addition and ultimately assure the sustainability of the industry.
An exploratory research approach was used initially, since findings would emerge as the
study unfolded. A mixed-method design was employed, as the literature alone was inadequate to
investigate the problem effectively and to formulate the research framework. Twenty-three face-to-face online interviews were conducted with subject matter experts covering all the
disciplines in the targeted organisations; these were combined with the literature findings as
well as the outcomes of market research conducted by both government and non-government institutes. Data from the interviews were analysed using NVivo 12. The findings
of the existing literature were verified through the exploratory study, and the outcomes were
used to formulate the questionnaire for the public survey. 371 responses were considered after
cleansing the total responses received; the data were analysed in SPSS 21 with an alpha level of
0.05. An internal consistency test was conducted before the descriptive analysis. After assuring the
reliability of the dataset, correlation, multiple regression, and analysis of variance
(ANOVA) tests were carried out to meet the research objectives.
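As a minimal illustration of the kinds of tests reported (Pearson correlation and one-way ANOVA), the statistics can be computed from first principles; the data used below are invented, not the survey's, and real analyses would of course also consult the corresponding p-values.

```python
def pearson(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def anova_f(groups):
    """One-way ANOVA F statistic: mean square between groups divided
    by mean square within groups."""
    all_vals = [v for g in groups for v in g]
    grand = sum(all_vals) / len(all_vals)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum((v - sum(g) / len(g)) ** 2 for g in groups for v in g)
    df_between = len(groups) - 1
    df_within = len(all_vals) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)
```

In practice a statistics package (as SPSS was used here) reports these alongside significance levels against the chosen alpha of 0.05.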
Five determinants for value addition were identified along with the key themes for each area.
They are staffing, delivery process, use of tools, governance, and technology infrastructure.
Cross-functional and self-organised teams built around value streams, a
properly interconnected software delivery process with the right governance in the delivery
pipelines, the right selection of tools, and the provision of the right infrastructure all increase value delivery.
Moreover, the constraints on value addition are poor interconnection of internal processes,
rigid functional hierarchies, inaccurate selection and use of tools, inflexible team
arrangements, and inadequate focus on technology infrastructure. The findings add to the
existing body of knowledge on increasing value addition through effective processes,
practices, and tools, and on the impact of inaccurately applying the same in the global software
engineering industry.
Credible to Whom? The Organizational Politics of Credibility in International Relations
Why do foreign policy decision makers care about the credibility of their own state's commitments? How does organizational identity shape policymakers' concern for credibility, and in turn, their willingness to use force during crises? While much previous research examines how decision makers assess others' credibility, only recently have scholars questioned when and why leaders or their advisers prioritize their own state's credibility.
Building on classic scholarship in bureaucratic politics, I argue that organizational identity affects the dimensions of credibility that national security officials value, and ultimately, their policy advocacy around the use of force. Particular differences arise between military and diplomatic organizations; while military officials equate credibility with hard military capabilities, diplomats view credibility in terms of reputation, or demonstrating reliability and resolve to external parties.
During crises, military officials confine their advice on the use of force to what can be achieved given current capabilities, while diplomats exhibit higher willingness to use force as a signal of a strong commitment. I test these propositions using text analysis of archival records from two collections of U.S. national security policy documents, eight case studies of American, British, and French crisis decision making, and an original survey experiment involving more than 400 current or former U.S. national security officials. I demonstrate that credibility concerns affect the balance of hawkishness in advice that diplomats and military officials deliver to leaders as a function of organizational identity.
The Future of Work and Digital Skills
The theme for the events was "The Future of Work and Digital Skills". The Fourth Industrial Revolution (4IR) caused a hollowing out of middle-income jobs (Frey & Osborne, 2017), but COVID-19 exposed the digital gap, as survival depended mainly on digital infrastructure and connectivity. Almost overnight, organizations that had not invested in a digital strategy realized the need for such a strategy and the associated digital skills. The effects have been profound for those who struggled to adapt, while those who stepped up have reaped quite the reward. Therefore, there are no longer certainties about what the world will look like a few years from now. However, there are ways to anticipate the changes that are occurring and to plan how to continually adapt to an increasingly changing world. Certain jobs will soon be lost and will not come back; other new jobs will, however, be created. Using data science and other predictive sciences, it is possible to anticipate, to the extent possible, the rate at which certain jobs will be replaced and new jobs created in different industries. Accordingly, the collocated events sought to bring together government, international organizations, academia, industry, organized labour and civil society to deliberate on how these changes are occurring in South Africa, how fast they are occurring, and what needs to change in order to prepare society for them.
Deutsche Gesellschaft für Internationale Zusammenarbeit (GIZ), British High Commission (BHC), School of Computin
Machine learning for managing structured and semi-structured data
As the digitalization of private, commercial, and public sectors advances rapidly, an increasing amount of data is becoming available. In order to gain insights or knowledge from these enormous amounts of raw data, a deep analysis is essential. The immense volume requires highly automated processes with minimal manual interaction. In recent years, machine learning methods have taken on a central role in this task. In addition to the individual data points, their interrelationships often play a decisive role, e.g. whether two patients are related to each other or whether they are treated by the same physician. Hence, relational learning is an important branch of research, which studies how to harness this explicitly available structural information between different data points. Recently, graph neural networks have gained importance. These can be considered an extension of convolutional neural networks from regular grids to general (irregular) graphs.
Knowledge graphs play an essential role in representing facts about entities in a machine-readable way. While great efforts are made to store as many facts as possible in these graphs, they often remain incomplete, i.e., true facts are missing. Manual verification and expansion of the graphs is becoming increasingly difficult due to the large volume of data and must therefore be assisted or substituted by automated procedures which predict missing facts. The field of knowledge graph completion can be roughly divided into two categories: Link Prediction and Entity Alignment. In Link Prediction, machine learning models are trained to predict unknown facts between entities based on the known facts. Entity Alignment aims at identifying shared entities between graphs in order to link several such knowledge graphs based on some provided seed alignment pairs.
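As an illustration of Link Prediction scoring, a TransE-style interaction function (a standard choice in the literature, not necessarily the one used in this thesis) ranks candidate tail entities by how well head + relation approximates tail in the embedding space. The entities, relation, and embeddings below are hand-picked for illustration, not trained.

```python
def transe_score(h, r, t):
    """TransE-style score: negative L2 distance between (head + relation)
    and tail; higher means the triple is judged more plausible."""
    return -(sum((hi + ri - ti) ** 2 for hi, ri, ti in zip(h, r, t)) ** 0.5)

# Hypothetical 2-d embeddings (hand-picked, not learned).
emb = {
    "berlin":     (0.0, 0.0),
    "germany":    (1.0, 0.0),
    "paris":      (0.0, 1.0),
    "capital_of": (1.0, 0.0),
}

# Rank candidate tails for the query (berlin, capital_of, ?).
candidates = ["germany", "paris"]
ranked = sorted(candidates,
                key=lambda t: transe_score(emb["berlin"],
                                           emb["capital_of"], emb[t]),
                reverse=True)
```

The position of the true tail in such a ranked list is exactly what rank-based evaluation metrics, including the one proposed in this thesis, summarise across many test triples.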
In this thesis, we present important advances in the field of knowledge graph completion. For Entity Alignment, we show how to reduce the number of required seed alignments while maintaining performance by novel active learning techniques. We also discuss the power of textual features and show that graph-neural-network-based methods have difficulties with noisy alignment data. For Link Prediction, we demonstrate how to improve the prediction for unknown entities at training time by exploiting additional metadata on individual statements, often available in modern graphs. Supported with results from a large-scale experimental study, we present an analysis of the effect of individual components of machine learning models, e.g., the interaction function or loss criterion, on the task of link prediction. We also introduce a software library that simplifies the implementation and study of such components and makes them accessible to a wide research community, ranging from relational learning researchers to applied fields, such as life sciences. Finally, we propose a novel metric for evaluating ranking results, as used for both completion tasks. It allows for easier interpretation and comparison, especially in cases with different numbers of ranking candidates, as encountered in the de-facto standard evaluation protocols for both tasks.
Freelance subtitlers in a subtitle production network in the OTT industry in Thailand: a longitudinal study
The present study sets out to investigate a subtitle production network in the over-the-top (OTT) industry in Thailand through the perspective of freelance subtitlers. A qualitative longitudinal research design was adopted to gain insights into (1) the way the work practices of freelance subtitlers are influenced by both human and non-human actors in the network, (2) the evolution of the network, and (3) how the freelance subtitlers' perception of quality is influenced by changes occurring in the network. Eleven subtitlers were interviewed every six months over a period of two years, contributing to over 60 hours of interview data. The data analysis was informed by selected concepts from Actor-Network Theory (ANT) (Law 1992, 2009; Latour 1996, 2005; Mol 2010), and complemented by the three-dimensional quality model proposed by Abdallah (2016, 2017). Reflexive thematic analysis (Braun and Clarke 2019a, 2020b) was used to generate themes and sub-themes which address the research questions and tell compelling stories about the actor-network. It was found that from July 2017 to September 2019, the subtitle production network, which was sustained by complex interrelationships between actors, underwent a number of changes. The changes affected the work practices of freelance subtitlers in a more negative than positive way, demonstrating their precarious position in an industry that has widely adopted the vendor model (Moorkens 2017). Moreover, as perceived by the research participants, under increasingly undesirable working conditions, it became more challenging to maintain a quality process and to produce quality subtitles. Finally, translation technology and tools, including machine translation, were found to be key non-human actors that catalyse the changes in the network under study.
A productive response to legacy system petrification
Requirements change. The requirements of a legacy information system change, often in unanticipated ways, and at a more rapid pace than the rate at which the information system itself can be evolved to support them. The capabilities of a legacy system progressively fall further and further behind their evolving requirements, in a degrading process termed petrification. As systems petrify, they deliver diminishing business value, hamper business effectiveness, and drain organisational resources. To address legacy systems, the first challenge is to understand how to shed their resistance to tracking requirements change. The second challenge is to ensure that a newly adaptable system never again petrifies into a change resistant legacy system. This thesis addresses both challenges. The approach outlined herein is underpinned by an agile migration process - termed Productive Migration - that homes in upon the specific causes of petrification within each particular legacy system and provides guidance upon how to address them. That guidance comes in part from a personalised catalogue of petrifying patterns, which capture recurring themes underlying petrification. These steer us to the problems actually present in a given legacy system, and lead us to suitable antidote productive patterns via which we can deal with those problems one by one. To prevent newly adaptable systems from again degrading into legacy systems, we appeal to a follow-on process, termed Productive Evolution, which embraces and keeps pace with change rather than resisting and falling behind it. Productive Evolution teaches us to be vigilant against signs of system petrification and helps us to nip them in the bud. The aim is to nurture systems that remain supportive of the business, that are adaptable in step with ongoing requirements change, and that continue to retain their value as significant business assets
AIUCD 2022 - Proceedings
Lâundicesima edizione del Convegno Nazionale dellâAIUCD-Associazione di Informatica Umanistica ha per titolo Culture digitali. Intersezioni: filosofia, arti, media. Nel titolo Ăš presente, in maniera esplicita, la richiesta di una riflessione, metodologica e teorica, sullâinterrelazione tra tecnologie digitali, scienze dellâinformazione, discipline filosofiche, mondo delle arti e cultural studies
- âŠ