Self-Supervised Learning to Prove Equivalence Between Straight-Line Programs via Rewrite Rules
We target the problem of automatically synthesizing proofs of semantic
equivalence between two programs made of sequences of statements. We represent
programs using abstract syntax trees (AST), where a given set of
semantics-preserving rewrite rules can be applied on a specific AST pattern to
generate a transformed and semantically equivalent program. In our system, two
programs are equivalent if there exists a sequence of application of these
rewrite rules that leads to rewriting one program into the other. We propose a
neural network architecture based on a transformer model to generate proofs of
equivalence between program pairs. The system outputs a sequence of rewrites,
and the validity of the sequence is simply checked by verifying it can be
applied. If no valid sequence is produced by the neural network, the system
reports the programs as non-equivalent, ensuring by design no programs may be
incorrectly reported as equivalent. Our system is fully implemented for a given
grammar which can represent straight-line programs with function calls and
multiple types. To efficiently train the system to generate such sequences, we
develop an original incremental training technique, named self-supervised
sample selection. We extensively study the effectiveness of this novel training
approach on proofs of increasing complexity and length. Our system, S4Eq,
achieves 97% proof success on a curated dataset of 10,000 pairs of equivalent
programs.
Comment: 30 pages including appendix
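The proof-checking step described above, applying each proposed rewrite in order and accepting only if the target program is reached, can be sketched as follows. The tuple-based AST encoding, the rule representation, and the commutativity rule are illustrative assumptions, not S4Eq's actual grammar:

```python
# Minimal sketch of proof checking for rewrite-based equivalence.
# The rule set and program representation here are illustrative, not S4Eq's.

def apply_rule(expr, rule):
    """Try to apply a single rewrite rule anywhere in the expression tree.

    Expressions are nested tuples, e.g. ('add', 'x', 'y').
    A rule is a (matcher, rewriter) pair; returns the rewritten tree
    or None if the rule matches nowhere.
    """
    matcher, rewriter = rule
    if matcher(expr):
        return rewriter(expr)
    if isinstance(expr, tuple):
        for i, child in enumerate(expr):
            new_child = apply_rule(child, rule)
            if new_child is not None:
                return expr[:i] + (new_child,) + expr[i + 1:]
    return None

def check_proof(source, target, rules, sequence):
    """A proof is a list of rule indices; it is valid iff every rule
    applies in order and the final tree equals the target."""
    current = source
    for idx in sequence:
        current = apply_rule(current, rules[idx])
        if current is None:
            return False          # sequence not applicable: reject
    return current == target

# Example rule 0: commutativity of addition, add(a, b) -> add(b, a)
rules = [(
    lambda e: isinstance(e, tuple) and e[0] == 'add',
    lambda e: ('add', e[2], e[1]),
)]

src = ('mul', ('add', 'x', 'y'), 'z')
tgt = ('mul', ('add', 'y', 'x'), 'z')
print(check_proof(src, tgt, rules, [0]))   # True: one commutativity step
```

Because validity is established by replaying the sequence, an invalid or hallucinated proof is simply rejected, which is what guarantees that no pair is incorrectly reported as equivalent.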
Model Diagnostics meets Forecast Evaluation: Goodness-of-Fit, Calibration, and Related Topics
Principled forecast evaluation and model diagnostics are vital in fitting probabilistic models and forecasting outcomes of interest. A common principle is that fitted or predicted distributions ought to be calibrated, ideally in the sense that the outcome is indistinguishable from a random draw from the posited distribution. Much of this thesis is centered on calibration properties of various types of forecasts.
In the first part of the thesis, a simple algorithm for exact multinomial goodness-of-fit tests is proposed. The algorithm computes exact p-values based on various test statistics, such as the log-likelihood ratio and Pearson's chi-square. A thorough analysis shows improvement on extant methods. However, the runtime of the algorithm grows exponentially in the number of categories, and hence its use is limited.
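The notion of an exact multinomial p-value can be illustrated by brute-force enumeration: sum the null probabilities of all outcomes whose test statistic is at least as large as the observed one. This naive sketch is not the thesis's algorithm, but it makes both the definition and the exponential blow-up in the number of categories tangible:

```python
# Exact multinomial goodness-of-fit p-value by full enumeration.
# Illustrative only: feasible for small n and few categories, which is
# exactly the runtime limitation noted above.
from math import comb, prod

def compositions(n, k):
    """All k-tuples of non-negative integers summing to n."""
    if k == 1:
        yield (n,)
        return
    for first in range(n + 1):
        for rest in compositions(n - first, k - 1):
            yield (first,) + rest

def multinomial_pmf(counts, probs):
    n = sum(counts)
    coef, rem = 1, n
    for c in counts:
        coef *= comb(rem, c)
        rem -= c
    return coef * prod(p ** c for p, c in zip(probs, counts))

def pearson_stat(counts, probs):
    n = sum(counts)
    return sum((c - n * p) ** 2 / (n * p) for c, p in zip(counts, probs))

def exact_pvalue(observed, probs):
    """P(T(X) >= T(observed)) under the null, by enumerating all outcomes."""
    n, k = sum(observed), len(observed)
    t_obs = pearson_stat(observed, probs)
    return sum(multinomial_pmf(x, probs)
               for x in compositions(n, k)
               if pearson_stat(x, probs) >= t_obs - 1e-12)

# Fair three-sided null with 6 trials
p = exact_pvalue((4, 1, 1), (1/3, 1/3, 1/3))
print(round(p, 4))
```

The number of outcomes to enumerate is C(n + k - 1, k - 1), which grows rapidly in the number of categories k; this is the cost that the proposed algorithm improves upon.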
In the second part, a framework rooted in probability theory is developed, which gives rise to hierarchies of calibration and applies to both predictive distributions and stand-alone point forecasts. Based on a general notion of conditional T-calibration, the thesis introduces population versions of T-reliability diagrams and revisits a score decomposition into measures of miscalibration, discrimination, and uncertainty. Stable and efficient estimators of T-reliability diagrams and score components arise via nonparametric isotonic regression and the pool-adjacent-violators algorithm. For in-sample model diagnostics, a universal coefficient of determination is introduced that nests and reinterprets the classical R² in least squares regression.
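The pool-adjacent-violators algorithm mentioned above can be sketched in a few lines: given outcomes sorted by forecast value, it produces the non-decreasing (isotonic) fit that underlies such reliability diagrams. The implementation below is a generic textbook version, not the thesis's estimator:

```python
# Pool-adjacent-violators (PAV) sketch: isotonic regression of outcomes
# on forecasts. Variable names are illustrative.

def pav(values, weights=None):
    """Return the isotonic (non-decreasing) weighted least-squares fit."""
    if weights is None:
        weights = [1.0] * len(values)
    blocks = []  # each block: [mean, weight, count]
    for v, w in zip(values, weights):
        blocks.append([v, w, 1])
        # Merge adjacent blocks while monotonicity is violated
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, w2, c2 = blocks.pop()
            m1, w1, c1 = blocks.pop()
            w = w1 + w2
            blocks.append([(m1 * w1 + m2 * w2) / w, w, c1 + c2])
    fit = []
    for mean, _, count in blocks:
        fit.extend([mean] * count)
    return fit

# Binary outcomes sorted by forecast value (forecasts: 0.1 0.2 0.3 0.8 0.9)
outcomes = [0, 1, 0, 1, 1]
print(pav(outcomes))  # [0, 0.5, 0.5, 1, 1]
```

The fitted values are the recalibrated conditional event frequencies; plotting them against the forecasts yields the (empirical) reliability diagram.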
In the third part, probabilistic top lists are proposed as a novel type of prediction in classification, bridging the gap between single-class predictions and predictive distributions. The probabilistic top list functional is elicited by strictly consistent evaluation metrics, based on symmetric proper scoring rules, which admit comparison of various types of predictions.
Bridging technology and educational psychology: an exploration of individual differences in technology-assisted language learning within an Algerian EFL setting
The implementation of technology in language learning and teaching has a great influence on the teaching and learning process as a whole, and its impact on learners' psychological state seems of paramount significance, since it could be either an aid or a barrier to students' academic performance. This thesis therefore explores individual learner differences in technology-assisted language learning (TALL) and in the use of educational technologies in higher education within an Algerian English as a Foreign Language (EFL) setting.
Although I initially intended to investigate the relationship between TALL and certain affective variables, mainly motivation, anxiety, self-confidence, and learning styles, inside the classroom, the collection and analysis of data shifted my focus to a holistic view of individual learner differences in TALL environments and in the use of educational technologies within and beyond the classroom. In an attempt to bridge technology and educational psychology, this
ethnographic case study considers the nature of the impact of technology integration in language teaching and learning on the psychology of individual language learners inside and
outside the classroom. The study considers the reality constructed by participants and reveals multiple and distinctive views about the relationship between the use of educational technologies in higher education and individual learner differences. It took place in a university
in the north-west of Algeria and involved 27 main and secondary student and teacher participants. It consisted of focus-group discussions, follow-up discussions, teachers’
interviews, learners’ diaries, observation, and field notes. It was initially conducted within the classroom but gradually expanded to other settings outside the classroom depending on the availability of participants, their actions, and activities.
The study indicates that the impact of technology integration in EFL learning on individual learner differences is both complex and dynamic. It is complex in the sense that it is shown in multiple aspects and reflected on the students and their differences. In addition to various positive and different negative influences of different technology uses and the different psychological reactions among students to the same technology scenario, the study reveals the
previously unrecognised, differing manifestations of similar psychological traits in the same ELT technology scenario. It is also dynamic, since it is characterised by constant change according to contextual approaches to, and practical realities of, technology integration in language teaching and learning in the setting, including discrepancies between students' attitudes and teachers' actions, mismatches between technological experiences inside and outside the classroom, local concerns and generalised beliefs about TALL in the context, and the rapid and unplanned shift to online educational delivery during the Covid-19 pandemic.
The study may therefore be of interest not only to Algerian teachers and students but also to academics and institutions in other contexts, through its consideration of the complex and dynamic impact of TALL and technology integration in higher education on individual differences, and to academics in similar low-resource contexts, through its undertaking of a context approach to technology integration.
A Decision Support System for Economic Viability and Environmental Impact Assessment of Vertical Farms
Vertical farming (VF) is the practice of growing crops or animals using the vertical dimension via multi-tier racks or vertically inclined surfaces. In this thesis, I focus on the emerging industry of plant-specific VF. Vertical plant farming (VPF) is a promising and relatively novel practice that can be conducted in buildings with environmental control and artificial lighting. However, the nascent sector has experienced challenges in economic viability, standardisation, and environmental sustainability. Practitioners and academics call for a comprehensive financial analysis of VPF, but efforts are stifled by a lack of valid and available data.
A review of economic estimation and horticultural software identifies a need for a decision support system (DSS) that facilitates risk-empowered business planning for vertical farmers. This thesis proposes an open-source DSS framework to evaluate business sustainability through financial risk and environmental impact assessments. Data from the literature, alongside lessons learned from industry practitioners, would be centralised in the proposed DSS using imprecise data techniques. These techniques have been applied in engineering but are seldom used in financial forecasting. This could benefit complex sectors which only have scarce data to predict business viability.
To begin the execution of the DSS framework, VPF practitioners were interviewed using a mixed-methods approach. Learnings from over 19 shuttered and operational VPF projects provide insights into the barriers inhibiting scalability and identify risks, which were organised into a risk taxonomy. Labour was the most commonly reported top challenge; therefore, research was conducted to explore lean principles for improving productivity.
A probabilistic model representing a spectrum of variables and their associated uncertainty was built according to the DSS framework to evaluate the financial risk of VF projects. This enabled flexible computation without precise production or financial data, improving the accuracy of economic estimation. The model assessed two VPF cases (one in the UK and another in Japan), demonstrating the first risk and uncertainty quantification of VPF business models in the literature. The results highlighted measures to improve the economic viability of the UK and Japan cases.
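A probabilistic viability model of this kind can be sketched as a simple Monte Carlo simulation: uncertain inputs are drawn from assumed distributions and propagated to a profit distribution, from which a probability of loss follows. All distributions and figures below are invented for illustration and do not come from the thesis:

```python
# Illustrative Monte Carlo sketch of a probabilistic viability model:
# uncertain yields, prices, and costs propagate to a profit distribution
# and a probability of loss. All figures are invented assumptions.
import random

random.seed(42)

def simulate_annual_profit():
    yield_kg = random.triangular(60_000, 110_000, 90_000)    # kg/year
    price = random.triangular(6.0, 12.0, 8.0)                # GBP/kg
    energy_cost = random.triangular(250_000, 450_000, 330_000)
    labour_cost = random.triangular(200_000, 380_000, 280_000)
    other_costs = 150_000
    return yield_kg * price - energy_cost - labour_cost - other_costs

runs = [simulate_annual_profit() for _ in range(20_000)]
p_loss = sum(r < 0 for r in runs) / len(runs)
print(f"mean profit: {sum(runs) / len(runs):,.0f} GBP")
print(f"probability of loss: {p_loss:.1%}")
```

The appeal of this style of model, as the abstract notes, is that imprecise inputs (ranges and modes rather than exact figures) still yield a quantified risk estimate.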
The environmental impact assessment model was developed, allowing VPF operators to evaluate their carbon footprint compared to traditional agriculture using life-cycle assessment. I explore strategies for net-zero carbon production through sensitivity analysis. Renewable energies, especially solar, geothermal, and tidal power, show promise for reducing the carbon emissions of indoor VPF. Results show that renewably-powered VPF can reduce carbon emissions compared to field-based agriculture when considering the land-use change.
The drivers for DSS adoption have been researched, showing a pathway of compliance and design thinking to overcome the ‘problem of implementation’ and enable commercialisation. Further work is suggested to standardise VF equipment, collect benchmarking data, and characterise risks. This work will reduce risk and uncertainty and accelerate the sector’s emergence
On the Mechanism of Building Core Competencies: a Study of Chinese Multinational Port Enterprises
This study aims to explore how Chinese multinational port enterprises (MNPEs) build their core competencies. Core competencies are firms' special capabilities and sources of sustainable competitive advantage (SCA) in the marketplace, and the concept has led to extensive research and debate. However, few studies inquire into the mechanisms of building core competencies in the context of Chinese MNPEs.
Accordingly, answers were sought to three research questions:
1. What are the core competencies of the Chinese MNPEs?
2. What are the mechanisms that the Chinese MNPEs use to build their core
competencies?
3. What are the paths that the Chinese MNPEs pursue to build their resource bases?
The study adopted a multiple-case study design, focusing on the mechanism of building core competencies from the resource-based view (RBV). It purposively selected five leading Chinese MNPEs and three industry associations as Case Companies.
The study revealed three main findings. First, it identified three generic core
competencies possessed by Case Companies, i.e., innovation in business models and
operations, utilisation of technologies, and acquisition of strategic resources. Second,
it developed the conceptual framework of the Mechanism of Building Core Competencies (MBCC), a process of change in collective learning concerning the effective and efficient utilization of a firm's resources in response to critical events.
Third, it proposed three paths to build core competencies, i.e., enhancing collective
learning, selecting sustainable processes, and building resource base.
The study contributes to the knowledge of core competencies and RBV in three ways:
(1) presenting three generic core competencies of the Chinese MNPEs, (2) proposing
a new conceptual framework to explain how Chinese MNPEs build their core
competencies, (3) suggesting a solid anchor point (MBCC) to explain the links among
resources, core competencies, and SCA. The findings set benchmarks for the Chinese logistics industry and provide guidelines for building core competencies.
The determinants of value addition: a critical analysis of the global software engineering industry in Sri Lanka
It was evident from the literature that the perceived value delivery of the global software engineering industry is low due to various factors. This research therefore examines global software product companies in Sri Lanka to explore the software engineering methods and practices that increase value addition. The overall aim of the study is to identify the key determinants of value addition in the global software engineering industry and to critically evaluate their impact on software product companies, in order to help maximise value addition and ultimately assure the sustainability of the industry.
An exploratory research approach was used initially, since findings would emerge as the study unfolded. A mixed-methods design was employed, as the literature alone was inadequate to investigate the problem effectively and to formulate the research framework. Twenty-three face-to-face online interviews were conducted with subject matter experts covering all the disciplines in the targeted organisations; these were combined with the literature findings and with the outcomes of market research conducted by both government and non-government institutes. Data from the interviews were analysed using NVivo 12. The findings of the existing literature were verified through the exploratory study, and the outcomes were used to formulate the questionnaire for the public survey. After cleansing, 371 responses were retained for data analysis in SPSS 21 at an alpha level of 0.05. An internal consistency test was performed before the descriptive analysis. After assuring the reliability of the dataset, correlation, multiple regression, and analysis of variance (ANOVA) tests were carried out to meet the research objectives.
Five determinants of value addition were identified, along with the key themes for each area: staffing, delivery process, use of tools, governance, and technology infrastructure. Cross-functional, self-organised teams built around value streams, a properly interconnected software delivery process with the right governance in the delivery pipelines, sound selection of tools, and the right infrastructure increase value delivery. Conversely, the constraints on value addition are poor interconnection of internal processes, rigid functional hierarchies, inaccurate selection and use of tools, inflexible team arrangements, and inadequate focus on technology infrastructure. The findings add to the existing body of knowledge on increasing value addition through effective processes, practices, and tools, and on the impacts of applying the same inaccurately in the global software engineering industry.
Co-design As Healing: Exploring The Experiences Of Participants Facing Mental Health Problems
This thesis is an exploration of the healing role of co-design in mental health. Although co-design projects conducted within mental health settings are rising, existing literature tends to focus on the object of design and its outcomes while the experiences of participants per se remain largely unexplored. The guiding research question of this study is not how we design things that improve mental health, but how co-designing, as an act, might do so.
The thesis presents two projects that were organized in collaboration with the mental health charity Islington Mind and the Psychosis Therapy Project (PTP) in London.
The project at Islington Mind used a structured design process inviting participants to design for wellbeing. A case study analysis provides insights on how participants were impacted, summarizing key challenges and opportunities.
The design at PTP worked towards creating a collective brief in an emergent fashion, finally culminating in a board game. The experiences of participants were explored through Interpretative Phenomenological Analysis (IPA), using semi-structured interview data. The analysis served to identify key themes characterising the experience of co-design such as contributing, connecting, thinking and intentioning. In addition, a mixed-methods analysis of questionnaires and interview data exploring participants' wellbeing, showed that all participants who engaged fairly consistently in the project improved after the project ended, although some participants' scores returned to baseline six months later.
Reflecting on both projects, an approach to facilitation within mental health is outlined, detailing how the dimensions of weaving and layered participation, nurturing mattering and facilitating attitudes interlace. This contribution raises awareness of tacit dimensions in the practice of facilitation, articulating the nuances of how to encourage and sustain meaningful and ethical engagement and offering insights into a range of tools. It highlights the importance of remaining reflexive in relation to attitudes and emotions and discusses practical methodological and ethical challenges and ways to resolve them which can be of benefit to researchers embarking on a similar journey.
The thesis also offers detailed insights on how methodologies from different fields were integrated into a whole, arguing for transparency and reflexivity about epistemological assumptions, and how underlying paradigms shift in an interdisciplinary context.
Based on the overall findings, the thesis makes a case for considering design as healing (or a designerly way of healing), highlighting implications at the systems, social, and individual levels. It makes an original contribution to our understanding of design, highlighting its healing character, and proposes a new way to support mental health. The participants in this study not only increased their own wellbeing through co-designing but were also empowered and contributed towards healing the world. Hence, the thesis argues for a unique, holistic perspective on design and mental health, recognizing the interconnectedness of the individual, social, and systemic dimensions of the healing processes that are ignited.
Post-Millennial Queer Sensibility: Collaborative Authorship as Disidentification in Queer Intertextual Commodities
This dissertation examines LGBTQ+ audiences and creatives collaborating in the creation of new media texts such as web shows, podcasts, and video games. The study focuses on three main objects or media texts: Carmilla (web series), Welcome to Night Vale (podcast), and Undertale (video game). These texts are transmedia objects or intertextual commodities. I argue that queer gestures of collaborative authorship, which reach out to the audience for canonical contributions, create an emerging queer production culture that disidentifies with capitalism even as it negotiates capitalistic structures. The post-millennial queer sensibility is a constellation of aesthetics, self-representation, alternative financing, and interactivity that prioritizes community, trust, and authenticity, using new technologies for co-creation.
Within my study, several key tactics or queer gestures are explored: remediation, radical ambiguity and multi-forms as queer aesthetics, audience self-representation, alternative financing such as micropatronage and licensed fan-made merchandise, and interactivity as performance. The goal of this project is to better understand the changing conceptions of authorship/ownership, canon/fanon (official text/fan-created extensions), and community/capitalism in queer subcultures as an indicator of potential change in more mainstream cultural attitudes. The project takes into consideration a variety of intersecting identities in its analysis, including gender, race, class, and, of course, sexual orientation. By examining the legal discourse around collaborative authorship, real-life production practices, and audience-creator interactions and attitudes, this study provides insight into how media creatives work with audiences to co-create self-representative media, and into the motivations and rewards for creatives, audiences, and owners. This study aims to contribute towards a fuller understanding of queer production cultures and audience reception of these media texts, of which there is relatively little academic study. Specifically, the study mines for insights into changing attitudes towards authorship, ownership, and collaboration within queer indie media projects, especially as these objects rely on the self-representation of both audiences and creatives in the formation of the text.
A model for automated support of the recognition, extraction, customisation, and reconstruction of static charts
Data charts are widely used in our daily lives, being present in regular media such as newspapers, magazines, web pages, books, and many others. A well-constructed data chart leads to an intuitive understanding of its underlying data; conversely, when a chart embodies poor design choices, a redesign of the representation may be needed. In most cases, however, these charts are available only as static images, which means that the original data are not usually available. Automatic methods can therefore be applied to extract the underlying data from chart images to enable such changes. The task of recognizing charts and extracting data from them is complex, largely due to the variety of chart types and their visual characteristics.
Computer vision techniques for image classification and object detection are widely used for chart recognition, but mostly on images free of any disturbance. Features of real-world images that make this task difficult, such as photographic distortion, noise, and misalignment, are absent from most works in the literature. Two computer vision techniques that can assist this task, and that have been little explored in this context, are perspective detection and correction. These methods transform a distorted, noisy chart into a clean one, with its type identified and ready for data extraction or other uses. Reconstructing a visualization is straightforward as long as the data are available, but reconstructing it in the same context is complex.
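The perspective-correction step can be sketched with a direct linear transform: estimate the homography that maps a chart's four detected corners in the photo to an upright rectangle, then warp points into the rectified frame. The corner coordinates below are invented; a real pipeline would detect them automatically and warp entire images:

```python
# Sketch of perspective correction via the direct linear transform (DLT).
# Pure NumPy; corner coordinates are invented for illustration.
import numpy as np

def homography(src, dst):
    """Solve for H (3x3) with dst ~ H @ src from four point pairs."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null vector of A: last right-singular vector.
    _, _, vt = np.linalg.svd(np.array(A, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def warp_point(H, pt):
    """Map a photo point into the rectified chart frame."""
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]

# Four detected chart corners in the photo (a skewed quadrilateral) ...
detected = [(102, 80), (410, 95), (395, 330), (90, 300)]
# ... mapped to an upright 400x300 chart canvas.
upright = [(0, 0), (400, 0), (400, 300), (0, 300)]

H = homography(detected, upright)
print(warp_point(H, (102, 80)))   # close to [0, 0]
```

Once the chart is rectified this way, the downstream type-classification and data-extraction steps can operate as if the chart had been captured without distortion.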
Using a Visualization Grammar for this scenario is a key component, as these
grammars usually have extensions for interaction, chart layers, and multiple
views without requiring extra development effort.
This work presents a model for automated support for the custom recognition and reconstruction of charts in images. The model automatically performs the
process steps, such as reverse engineering, turning a static chart back into its
data table for later reconstruction, while allowing the user to make modifications
in case of uncertainties. This work also features a model-based architecture
along with prototypes for various use cases. Validation is performed step by
step, with methods inspired by the literature. This work features three use
cases providing proof of concept and validation of the model.
The first use case features chart recognition methods focused on real-world documents; the second focuses on the vocalization of charts, using a visualization grammar to reconstruct a chart in audio format; and the third presents an Augmented Reality application that recognizes and reconstructs charts in the same context (a piece of paper), overlaying the new chart and interaction widgets. The results showed that, with slight changes, chart recognition and reconstruction methods are now ready for
real-world charts when taking time, accuracy, and precision into consideration.
Programa Doutoral em Engenharia Informática
DataProVe: Fully Automated Conformance Verification Between Data Protection Policies and System Architectures
Privacy and data protection by design are relevant parts of the General Data Protection Regulation (GDPR), under which businesses and organisations are encouraged to implement measures at an early stage of the system design phase to fulfil data protection requirements. This paper addresses policy and system architecture design, proposing two language variants, a privacy policy language and an architecture description language, for specifying and verifying data protection and privacy requirements. In addition, we develop a fully automated, logic-based algorithm for verifying three types of conformance relations (privacy, data protection, and functional conformance) between a policy and an architecture specified in our languages. Compared to related work, this approach supports a more systematic and fine-grained analysis of the privacy, data protection, and functional properties of a system. Our theoretical methods are implemented as a software tool called DataProVe, and its feasibility is demonstrated on the centralised and decentralised approaches of COVID-19 contact tracing applications.
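At its core, conformance checking of this kind asks whether every action in the architecture is permitted by the policy. The toy model below illustrates only that core idea; DataProVe's actual languages and logic-based verification are far richer:

```python
# Toy sketch of policy/architecture conformance in the spirit described
# above (the real DataProVe languages and engine are far richer).
# A policy says what each component may do with each data type; the
# architecture lists what the design actually does. Conformance holds
# iff every architectural action is permitted by the policy.

def conforms(policy, architecture):
    """policy: set of (component, action, data) permissions.
    architecture: set of (component, action, data) design facts.
    Returns (True, []) or (False, violations)."""
    violations = [fact for fact in architecture if fact not in policy]
    return (len(violations) == 0, violations)

policy = {
    ("app", "collect", "proximity_id"),
    ("server", "store", "proximity_id"),
}
# A contact-tracing design that also stores location data:
architecture = {
    ("app", "collect", "proximity_id"),
    ("server", "store", "location"),      # not permitted by the policy
}

ok, bad = conforms(policy, architecture)
print(ok)    # False
print(bad)   # [('server', 'store', 'location')]
```

Returning the violating facts, rather than a bare yes/no, is what enables the fine-grained analysis the paper emphasises: the designer sees exactly which architectural action breaches which policy clause.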