Technical Dimensions of Programming Systems
Programming requires much more than just writing code in a programming language. It is usually done in the context of a stateful environment, by interacting with a system through a graphical user interface. Yet, this wide space of possibilities lacks a common structure for navigation. Work on programming systems fails to form a coherent body of research, making it hard to improve on past work and advance the state of the art.
In computer science, much has been said and done to allow comparison of programming languages, yet no similar theory exists for programming systems; we believe that programming systems deserve a theory too.
We present a framework of technical dimensions which capture the underlying characteristics of programming systems and provide a means for conceptualizing and comparing them.
We identify technical dimensions by examining past influential programming systems and reviewing their design principles, technical capabilities, and styles of user interaction. Technical dimensions capture characteristics that may be studied, compared and advanced independently. This makes it possible to talk about programming systems in a way that can be shared and constructively debated rather than relying solely on personal impressions.
Our framework is derived from a qualitative analysis of past programming systems. We outline two concrete ways of using it. First, we show how it can be used to analyze a recently developed novel programming system. Then, we use it to identify an interesting unexplored point in the design space of programming systems.
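The kind of comparison the framework enables can be sketched as a simple data structure: each system is described by its position along each dimension, and two systems can then be compared dimension by dimension. The dimension names and system profiles below are hypothetical illustrations, not the paper's actual catalogue.

```python
# Sketch: programming systems as positions along technical dimensions,
# compared dimension by dimension. Names and values are invented examples.

DIMENSIONS = ["feedback loop", "notational diversity", "self-sustainability"]

systems = {
    "Smalltalk-like": {
        "feedback loop": "immediate",
        "notational diversity": "low",
        "self-sustainability": "high",
    },
    "Batch compiler": {
        "feedback loop": "delayed",
        "notational diversity": "low",
        "self-sustainability": "low",
    },
}

def differing_dimensions(a: str, b: str) -> list:
    """Return the dimensions along which two systems differ."""
    return [d for d in DIMENSIONS if systems[a][d] != systems[b][d]]

print(differing_dimensions("Smalltalk-like", "Batch compiler"))
```

Structuring the comparison this way is what lets individual dimensions be studied and advanced independently, rather than comparing whole systems by overall impression.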
Much research effort focuses on building programming systems that are easier to use, accessible to non-experts, moldable and/or powerful, but such efforts are disconnected. They are informal, guided by the personal visions of their authors, and thus are evaluable and comparable only on the basis of individual experience of using them. By providing foundations for more systematic research, we can help programming systems researchers to stand, at last, on the shoulders of giants.
Implementing Health Impact Assessment as a Required Component of Government Policymaking: A Multi-Level Exploration of the Determinants of Healthy Public Policy
It is widely understood that the public policies of ‘non-health’ government sectors have greater impacts on population health than those of the traditional healthcare realm. Health Impact Assessment (HIA) is a decision support tool that identifies and promotes the health benefits of policies while also mitigating their unintended negative consequences. Despite numerous calls to do so, the Ontario government has yet to implement HIA as a required component of policy development. This dissertation therefore sought to identify the contexts and factors that may both enable and impede HIA use at the sub-national (i.e., provincial, territorial, or state) government level.
The three integrated articles of this dissertation provide insights into specific aspects of the policy process as they relate to HIA. Chapter one details a case study of purposive information-seeking among public servants within Ontario’s Ministry of Education (MOE). Situated within Ontario’s Ministry of Health (MOH), chapter two presents a case study of policy collaboration between health and ‘non-health’ ministries. Finally, chapter three details a framework analysis of the political factors supporting health impact tool use in two sub-national jurisdictions – namely, Québec and South Australia.
MOE respondents (N=9) identified four components of policymaking ‘due diligence’, including evidence retrieval, consultation and collaboration, referencing, and risk analysis. As prospective HIA users, they also confirmed that information is not routinely sought to mitigate the potential negative health impacts of education-based policies. MOH respondents (N=8) identified the bureaucratic hierarchy as the brokering mechanism for inter-ministerial policy development. As prospective HIA stewards, they also confirmed that the ministry does not proactively flag the potential negative health impacts of non-health sector policies. Finally, ‘lessons learned’ from case articles specific to Québec (n=12) and South Australia (n=17) identified the political factors supporting tool use at different stages of the policy cycle, including agenda setting (‘policy elites’ and ‘political culture’), implementation (‘jurisdiction’), and sustained implementation (‘institutional power’).
This work provides important insights into ‘real life’ policymaking. By highlighting existing facilitators of and barriers to HIA use, the findings offer a useful starting point from which proponents may tailor context-specific strategies to sustainably implement HIA at the sub-national government level.
TOWARDS AN UNDERSTANDING OF EFFORTFUL FUNDRAISING EXPERIENCES: USING INTERPRETATIVE PHENOMENOLOGICAL ANALYSIS IN FUNDRAISING RESEARCH
Physical-activity oriented community fundraising has experienced an exponential growth in popularity over the past 15 years. The aim of this study was to explore the value of effortful fundraising experiences from the point of view of participants, and to explore the impact that these experiences have on people’s lives. This study used an Interpretative Phenomenological Analysis (IPA) approach to interview 23 individuals, recognising the role of participants as proxy (non-professional) fundraisers for charitable organisations, and the unique organisation-donor dynamic that this creates. It also brought together relevant psychological theory related to physical-activity fundraising experiences (through a narrative literature review) and used primary interview data to substantiate it. Effortful fundraising experiences are examined in detail to understand their significance to participants, and how such experiences influence their connection with a charity or cause. This was done with an idiographic focus at first, before examining convergences and divergences across the sample. This study found that effortful fundraising experiences can have a profound positive impact upon community fundraisers in both the short and the long term. Additionally, it found that these experiences can be opportunities for charitable organisations to create lasting meaningful relationships with participants, and to foster mutually beneficial lifetime relationships with them. Further research is needed to test specific psychological theory in this context, including self-esteem theory, self-determination theory, and the martyrdom effect (among others).
How to Be a God
When it comes to questions concerning the nature of Reality, Philosophers and Theologians have the answers.
Philosophers have the answers that can’t be proven right. Theologians have the answers that can’t be proven wrong.
Today’s designers of Massively-Multiplayer Online Role-Playing Games create realities for a living. They can’t spend centuries mulling over the issues: they have to face them head-on. Their practical experiences can indicate which theoretical proposals actually work in practice.
That’s today’s designers. Tomorrow’s will have a whole new set of questions to answer.
The designers of virtual worlds are the literal gods of those realities. Suppose Artificial Intelligence comes through and allows us to create non-player characters as smart as us. What are our responsibilities as gods? How should we, as gods, conduct ourselves?
How should we be gods?
A productive response to legacy system petrification
Requirements change. The requirements of a legacy information system change, often in unanticipated ways, and at a more rapid pace than the rate at which the information system itself can be evolved to support them. The capabilities of a legacy system progressively fall further and further behind their evolving requirements, in a degrading process termed petrification. As systems petrify, they deliver diminishing business value, hamper business effectiveness, and drain organisational resources. To address legacy systems, the first challenge is to understand how to shed their resistance to tracking requirements change. The second challenge is to ensure that a newly adaptable system never again petrifies into a change-resistant legacy system. This thesis addresses both challenges. The approach outlined herein is underpinned by an agile migration process - termed Productive Migration - that homes in on the specific causes of petrification within each particular legacy system and provides guidance on how to address them. That guidance comes in part from a personalised catalogue of petrifying patterns, which capture recurring themes underlying petrification. These steer us to the problems actually present in a given legacy system, and lead us to suitable antidote productive patterns via which we can deal with those problems one by one. To prevent newly adaptable systems from again degrading into legacy systems, we appeal to a follow-on process, termed Productive Evolution, which embraces and keeps pace with change rather than resisting and falling behind it. Productive Evolution teaches us to be vigilant against signs of system petrification and helps us to nip them in the bud. The aim is to nurture systems that remain supportive of the business, that are adaptable in step with ongoing requirements change, and that continue to retain their value as significant business assets.
In search of 'The people of La Manche': A comparative study of funerary practices in the Transmanche region during the Late Neolithic and Early Bronze Age (2500 BC - 1500 BC)
This research project sets out to discover whether archaeological evidence dating between 2500 BC - 1500 BC from supposed funerary contexts in Kent, Flanders and north-eastern Transmanche France is sufficient to make valid comparisons between social and cultural structures on either side of the short-sea Channel region. Evidence from the beginning of the period primarily comes in the form of the widespread Beaker phenomenon. Chapter 5 shows that this class of data is abundant in Kent but quite sparse in the Continental zones - most probably because it has not survived well. This problem also affects the human depositional evidence catalogued in Chapter 6, particularly in Flanders but also in north-eastern Transmanche France. This constricts comparative analysis; however, the abundant data from Kent means that general trends are still discernible. The quality and volume of data relating to the distribution, location, morphology and use of circular monuments in all three zones is far better - as demonstrated in Chapter 7 - mostly due to extensive aerial surveying over several decades. When the datasets are taken as a whole, it becomes possible to successfully apply various forms of comparative analysis. Most remarkably, this has revealed that some monuments apparently have encoded within them a sophisticated and potentially symbolically charged geometric shape. This, along with other less contentious evidence, demonstrates a level of conformity that strongly suggests a stratum of cultural homogeneity existed throughout the Transmanche region during the period 2500 BC - 1500 BC. The fact that such changes as are apparent seem to have developed simultaneously in each of the zones adds additional weight to the theory that contact throughout the Transmanche region was endemic. Even so, it may not have been continuous; there may actually have been times of relative isolation - the data is simply too coarse to eliminate such a possibility.
Machine learning and large scale cancer omic data: decoding the biological mechanisms underpinning cancer
Many of the mechanisms underpinning cancer risk and tumorigenesis are still not fully understood. However, the next-generation sequencing revolution and the rapid advances in big data analytics allow us to study cells and complex phenotypes at unprecedented depth and breadth. While experimental and clinical data are still fundamental to validate findings and confirm hypotheses, computational biology is key for the analysis of system- and population-level data for the detection of hidden patterns and the generation of testable hypotheses.
In this work, I tackle two main questions regarding cancer risk and tumorigenesis that require novel computational methods for the analysis of system-level omic data. First, I focused on how frequent, low-penetrance inherited variants modulate cancer risk in the broader population. Genome-Wide Association Studies (GWAS) have shown that Single Nucleotide Polymorphisms (SNPs) contribute to cancer risk through multiple subtle effects, but they still fail to give further insight into their synergistic effects. I developed a novel hierarchical Bayesian regression model, BAGHERA, to estimate heritability at the gene level from GWAS summary statistics. I then used BAGHERA to analyse data from 38 malignancies in the UK Biobank, showing that genes with high heritable risk are involved in key processes associated with cancer and often coincide with somatically mutated driver genes.
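The abstract does not spell out BAGHERA's model, but the general flavour of hierarchical modelling of gene-level effects can be illustrated with a much simpler normal-normal empirical Bayes sketch: noisy per-gene estimates are shrunk towards a shared genome-wide mean. All numbers below are simulated and this is not BAGHERA itself.

```python
import numpy as np

# Illustration only: hierarchical (empirical Bayes) shrinkage of noisy
# per-gene heritability estimates towards a genome-wide mean. This is a
# toy stand-in for the flavour of hierarchical modelling, not BAGHERA.

rng = np.random.default_rng(0)

true_h2 = rng.gamma(shape=2.0, scale=1e-4, size=200)  # per-gene heritability
s2 = np.full(200, 1e-8)                               # known sampling variances
y = true_h2 + rng.normal(0.0, np.sqrt(s2))            # noisy raw estimates

mu = y.mean()                                 # genome-wide mean (prior centre)
tau2 = max(y.var() - s2.mean(), 1e-12)        # method-of-moments prior variance

# Posterior mean: precision-weighted blend of raw estimate and prior mean.
w = tau2 / (tau2 + s2)
posterior = w * y + (1 - w) * mu
```

Because the weight `w` lies strictly between 0 and 1, every posterior estimate sits between its raw value and the genome-wide mean: genes with noisy, extreme raw estimates are pulled in, which stabilises gene-level inference across the genome.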
Heritability analysis, like many other omics methods, studies the effects of DNA variants on single genes in isolation. However, we know that most biological processes require the interplay of multiple genes, on which we often lack a broad perspective. For the second part of this thesis, I therefore worked on the integration of Protein-Protein Interaction (PPI) graphs and omics data, which bridges this gap and recapitulates these interactions at a system level. First, I developed a modular and scalable Python package, PyGNA, that enables robust statistical testing of genesets' topological properties. PyGNA complements the literature with a tool that can be routinely introduced into automated bioinformatics pipelines. With PyGNA I processed multiple genesets obtained from genomics and transcriptomics data. However, topological properties alone have proven insufficient to fully characterise complex phenotypes.
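One common topological test of this kind asks whether a geneset is more internally connected in the network than chance would predict. The sketch below illustrates the idea with a permutation test in plain Python; the graph and gene names are invented, and PyGNA's actual API and statistics are richer than this.

```python
import random

# Toy geneset topology test: is the geneset more internally connected in a
# PPI-like graph than random genesets of the same size? Graph and geneset
# are invented; this is an illustration, not PyGNA's implementation.

random.seed(1)

nodes = [f"G{i}" for i in range(50)]
edges = set()
# A dense "module" among the first 8 genes, plus sparse background edges.
for i in range(8):
    for j in range(i + 1, 8):
        edges.add((f"G{i}", f"G{j}"))
for _ in range(60):
    a, b = random.sample(nodes, 2)
    edges.add(tuple(sorted((a, b))))

def internal_edges(geneset):
    """Count edges with both endpoints inside the geneset."""
    gs = set(geneset)
    return sum(1 for a, b in edges if a in gs and b in gs)

geneset = [f"G{i}" for i in range(8)]          # the planted module
observed = internal_edges(geneset)

# Permutation null: internal edge counts of same-size random genesets.
null = [internal_edges(random.sample(nodes, len(geneset))) for _ in range(1000)]
pvalue = (1 + sum(n >= observed for n in null)) / (1 + len(null))
print(observed, pvalue)
```

The planted module forms a clique, so its internal edge count far exceeds anything the random genesets reach and the permutation p-value is small; a geneset scattered across the network would not be distinguishable from the null.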
Therefore, I focused on a model that combines topological and functional data to detect multiple communities associated with a phenotype. Detecting cancer-specific submodules is still an open problem, but it has the potential to elucidate mechanisms detectable only by integrating multi-omics data. Building on recent advances in Graph Neural Networks (GNNs), I present a supervised geometric deep learning model that combines GNNs and Stochastic Block Models (SBMs). The model learns multiple graph-aware representations of the attributed network, as multiple joint SBMs, accounting for nodes participating in multiple processes. The simultaneous estimation of structure and function provides an interpretable picture of how genes interact in specific conditions and allows the detection of novel putative pathways associated with cancer.
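The core intuition of jointly scoring structure and function can be sketched far more simply than the thesis model: score a candidate partition by an SBM log-likelihood plus an attribute-coherence term, so a good community is both densely connected and functionally homogeneous. Everything below (graph, attributes, weights) is invented, and the actual GNN+SBM model is much richer than this two-block toy.

```python
import math
import random

# Toy: score a partition of an attributed network by combining a Bernoulli
# SBM log-likelihood (structure) with within-community attribute coherence
# (function). Illustration only; not the thesis's GNN+SBM model.

random.seed(0)
n = 20
labels = [0] * 10 + [1] * 10        # the planted two-block partition
attr = [0.9] * 10 + [0.1] * 10      # node attribute, e.g. expression level

# Planted graph: dense within blocks, sparse between them.
edges = set()
for i in range(n):
    for j in range(i + 1, n):
        same = (i < 10) == (j < 10)
        if random.random() < (0.8 if same else 0.05):
            edges.add((i, j))

def sbm_loglik(part):
    """Bernoulli SBM log-likelihood, block edge rates fit from `part`."""
    counts = {}  # (block_a, block_b) -> (edge count, pair count)
    for i in range(n):
        for j in range(i + 1, n):
            key = tuple(sorted((part[i], part[j])))
            e, p = counts.get(key, (0, 0))
            counts[key] = (e + ((i, j) in edges), p + 1)
    ll = 0.0
    for e, p in counts.values():
        rate = min(max(e / p, 1e-9), 1 - 1e-9)
        ll += e * math.log(rate) + (p - e) * math.log(1 - rate)
    return ll

def attr_coherence(part):
    """Negative within-community attribute variance (higher is better)."""
    score = 0.0
    for b in set(part):
        vals = [attr[i] for i in range(n) if part[i] == b]
        mean = sum(vals) / len(vals)
        score -= sum((v - mean) ** 2 for v in vals)
    return score

def score(part):
    return sbm_loglik(part) + 10.0 * attr_coherence(part)

shuffled = labels[:]
random.shuffle(shuffled)
print(score(labels), score(shuffled))
```

The planted partition wins on both terms: within-block edge rates are sharply separated from between-block rates, and attributes are constant within each community, whereas a shuffled partition blurs both signals.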
A Syntactical Reverse Engineering Approach to Fourth Generation Programming Languages Using Formal Methods
Fourth-generation programming languages (4GLs) feature rapid development with minimum configuration required by developers. However, 4GLs can suffer from limitations such as high maintenance cost and legacy software practices.
Reverse engineering an existing large legacy 4GL system into a currently maintainable programming language can be a cheaper and more effective solution than rewriting it from scratch. However, no tools have so far existed for reverse engineering proprietary, XML-like, model-driven 4GLs whose full language specification is not in the public domain.
This research has developed a novel method of reverse engineering some of the syntax of such 4GLs (with Uniface as an exemplar) derived from a particular system, with a view to providing a reliable method to translate/transpile that system's code and data structures into a modern object-oriented language (such as C\#).
The method was also applied, although only to a limited extent, to two other 4GLs, Informix and Apex, to show that it was in principle more broadly applicable. A novel method of testing that the syntax had been successfully translated, based on 'abstract syntax trees', was also provided.
The novel method took manually crafted grammar rules, together with Encapsulated Document Object Model based data from the source language and then used parsers to produce syntactically valid and equivalent code in the target/output language.
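The pipeline of parsing an XML-like source and emitting target-language code from grammar-rule templates can be sketched as follows. The element names, the toy fragment, and the type mapping are invented for illustration; they are loosely 4GL-flavoured rather than real Uniface syntax, and the target here is pretty-printed C# text rather than a full transpiler.

```python
import xml.etree.ElementTree as ET

# Toy sketch of the translation pipeline: parse an XML-like 4GL fragment
# and emit an equivalent C# method via hand-crafted grammar-rule templates.
# Element names and the type mapping are invented illustrations.

source = """
<proc name="GetTotal">
  <param type="numeric" name="price"/>
  <param type="numeric" name="qty"/>
  <assign target="total" expr="price * qty"/>
  <return expr="total"/>
</proc>
"""

TYPE_MAP = {"numeric": "double", "string": "string"}  # assumed mapping

def translate(xml_text: str) -> str:
    """Emit a C# method equivalent to the XML-like proc definition."""
    proc = ET.fromstring(xml_text)
    params = ", ".join(
        f"{TYPE_MAP[p.get('type')]} {p.get('name')}"
        for p in proc.findall("param")
    )
    body = []
    for stmt in proc:
        if stmt.tag == "assign":
            body.append(f"    var {stmt.get('target')} = {stmt.get('expr')};")
        elif stmt.tag == "return":
            body.append(f"    return {stmt.get('expr')};")
    header = f"public double {proc.get('name')}({params})"
    return header + " {\n" + "\n".join(body) + "\n}"

print(translate(source))
```

In the spirit of the testing method above, the translated output could then be parsed by a C# grammar and its abstract syntax tree compared against the expected structure, rather than comparing raw text.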
This proof of concept research has provided a methodology plus sample code to automate part of the process. The methodology comprised a set of manual or semi-automated steps. Further automation is left for future research.
In principle, the author's method could be extended to allow the reverse-engineering recovery of the syntax of systems developed in other proprietary 4GLs. This would reduce the time and cost of the ongoing maintenance of such systems by enabling their software engineers to work using modern object-oriented languages, methodologies, tools and techniques.