Dependently Typing R Vectors, Arrays, and Matrices
The R programming language is widely used in large-scale data analyses. It
contains especially rich built-in support for dealing with vectors, arrays, and
matrices. These operations feature prominently in the applications that form
R's raison d'être, making their behavior worth understanding. Furthermore,
ostensibly for programmer convenience, their behavior in R is a notable
extension over the corresponding operations in mathematics, thereby offering
some challenges for specification and static verification.
We report on progress towards statically typing this aspect of the R
language. The interesting aspects of typing, in this case, warn programmers
about violating bounds, so the types must necessarily be dependent. We explain
the ways in which R extends standard mathematical behavior. We then show how
R's behavior can be specified in LiquidHaskell, a dependently-typed extension
to Haskell. In the general case, actually verifying library and client code is
currently beyond LiquidHaskell's reach; therefore, this work provides
challenges and opportunities both for typing R and for progress in
dependently-typed programming languages.
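The "notable extension over the corresponding operations in mathematics" that the abstract alludes to includes, for example, R's vector recycling rule for element-wise arithmetic. A minimal Python sketch of that rule (the helper name `r_add` is ours, not from the paper):

```python
import itertools
import warnings

def r_add(x, y):
    """Emulate R's element-wise `+` with vector recycling.

    In R, c(1,2,3,4) + c(10,20) gives c(11,22,13,24): the shorter
    vector is recycled to the length of the longer one. If the longer
    length is not a multiple of the shorter, R still recycles but
    emits a warning -- behavior a dependent type could flag statically.
    """
    if not x or not y:
        return []
    n = max(len(x), len(y))
    if n % min(len(x), len(y)) != 0:
        warnings.warn("longer object length is not a multiple "
                      "of shorter object length")
    xs, ys = itertools.cycle(x), itertools.cycle(y)
    return [next(xs) + next(ys) for _ in range(n)]

print(r_add([1, 2, 3, 4], [10, 20]))  # [11, 22, 13, 24]
print(r_add([1, 2, 3], [10, 20]))     # [11, 22, 13], with a warning
```

A dependent type for `r_add` would relate the two input lengths to the output length and rule out (or at least surface) the non-multiple case at compile time.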
Exploring the Training Factors that Influence the Role of Teaching Assistants to Teach to Students With SEND in a Mainstream Classroom in England
With inclusive education having become increasingly valued over the years, the training of Teaching Assistants (TAs) is now more important than ever, given that they work alongside pupils with special educational needs and disabilities (hereinafter SEND) in mainstream education classrooms. The current study explored the training factors that influence the role of TAs when it comes to teaching SEND students in mainstream classrooms in England during their one-year training period. This work aimed to increase understanding of how the training of TAs is seen to influence the development of their personal knowledge and professional skills. The study has significance for our comprehension of the connection between the TAs' training and the quality of education in the classroom. In addition, this work investigated whether there existed a correlation between the teaching experience of TAs and their background information, such as their gender, age, grade level taught, years of teaching experience, and qualification level.
A critical realist theoretical approach was adopted for this two-phased study, which involved the mixing of adaptive and grounded theories respectively. The multi-method project featured 13 case studies, each of which involved a trainee TA, his/her college tutor, and the classroom teacher who was supervising the trainee TA. The analysis was based on semi-structured interviews, various questionnaires, and non-participant observation methods for each of these case studies during the TA's one-year training period. The primary analysis of the research was completed by comparing the various kinds of data collected from the participants in the first and second data collection stages of each case. Further analysis involved cross-case analysis using a grounded theory approach, which made it possible to draw conclusions and put forth several core propositions. Compared with previous research, the findings of the current study reveal many implications for the training and deployment conditions of TAs, while they also challenge the prevailing approaches in many aspects, in addition to offering more diversified, enriched, and comprehensive explanations of the critical pedagogical issues
Model Diagnostics meets Forecast Evaluation: Goodness-of-Fit, Calibration, and Related Topics
Principled forecast evaluation and model diagnostics are vital in fitting probabilistic models and forecasting outcomes of interest. A common principle is that fitted or predicted distributions ought to be calibrated, ideally in the sense that the outcome is indistinguishable from a random draw from the posited distribution. Much of this thesis is centered on calibration properties of various types of forecasts.
In the first part of the thesis, a simple algorithm for exact multinomial goodness-of-fit tests is proposed. The algorithm computes exact p-values based on various test statistics, such as the log-likelihood ratio and Pearson's chi-square. A thorough analysis shows improvement on extant methods. However, the runtime of the algorithm grows exponentially in the number of categories and hence its use is limited.
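As a concrete, if naive, illustration of what such an exact test computes, the brute-force version below enumerates every possible multinomial outcome and sums the null probabilities of those at least as extreme as the observed counts; its combinatorial cost mirrors the runtime limitation noted above. This is a baseline sketch, not the thesis's algorithm:

```python
from math import comb

def compositions(n, k):
    """All ways of distributing n counts over k categories."""
    if k == 1:
        yield (n,)
        return
    for first in range(n + 1):
        for rest in compositions(n - first, k - 1):
            yield (first,) + rest

def multinomial_pmf(counts, probs):
    """Null probability of an exact count vector."""
    p, rem = 1.0, sum(counts)
    for c, q in zip(counts, probs):
        p *= comb(rem, c) * q ** c
        rem -= c
    return p

def pearson_stat(counts, probs):
    """Pearson chi-square statistic against expected counts n*q."""
    n = sum(counts)
    return sum((c - n * q) ** 2 / (n * q) for c, q in zip(counts, probs))

def exact_p_value(observed, probs):
    """Exact p-value: total null probability of all outcomes whose
    statistic is at least the observed one (small tolerance for ties)."""
    t_obs = pearson_stat(observed, probs)
    return sum(multinomial_pmf(c, probs)
               for c in compositions(sum(observed), len(observed))
               if pearson_stat(c, probs) >= t_obs - 1e-12)

print(exact_p_value((5, 1, 0), (1/3, 1/3, 1/3)))
```

Enumerating `compositions(n, k)` visits C(n+k-1, k-1) outcomes, which is exactly why exact tests become infeasible as the number of categories grows.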
In the second part, a framework rooted in probability theory is developed, which gives rise to hierarchies of calibration and applies to both predictive distributions and stand-alone point forecasts. Based on a general notion of conditional T-calibration, the thesis introduces population versions of T-reliability diagrams and revisits a score decomposition into measures of miscalibration, discrimination, and uncertainty. Stable and efficient estimators of T-reliability diagrams and score components arise via nonparametric isotonic regression and the pool-adjacent-violators algorithm. For in-sample model diagnostics, a universal coefficient of determination is introduced that nests and reinterprets the classical R² of least squares regression.
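The pool-adjacent-violators algorithm mentioned above has a compact textbook form: scan the values once, and whenever a block's mean falls below that of its predecessor, merge the two. A minimal Python sketch of the unweighted least-squares case (not the thesis's T-reliability estimator itself):

```python
def pav(y):
    """Pool-adjacent-violators: least-squares nondecreasing fit to y.

    Each block stores [total, count]; a violation (previous block mean
    exceeding the new one) is detected by cross-multiplication, which
    avoids divisions, and fixed by merging the two blocks.
    """
    blocks = []
    for v in y:
        blocks.append([v, 1])
        while (len(blocks) > 1
               and blocks[-2][0] * blocks[-1][1] > blocks[-1][0] * blocks[-2][1]):
            s, c = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += c
    fitted = []
    for s, c in blocks:
        fitted.extend([s / c] * c)
    return fitted

# Recalibrating forecast values sorted by some covariate:
print(pav([0.8, 0.4, 0.6, 1.0]))  # approximately [0.6, 0.6, 0.6, 1.0]
```

In the reliability-diagram setting, `y` would be outcomes sorted by forecast value, and the fitted monotone means form the (empirical) reliability curve.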
In the third part, probabilistic top lists are proposed as a novel type of prediction in classification, which bridges the gap between single-class predictions and predictive distributions. The probabilistic top list functional is elicited by strictly consistent evaluation metrics, based on symmetric proper scoring rules, which admit comparison of various types of predictions
A Decision Support System for Economic Viability and Environmental Impact Assessment of Vertical Farms
Vertical farming (VF) is the practice of growing crops or animals using the vertical dimension via multi-tier racks or vertically inclined surfaces. In this thesis, I focus on the emerging industry of plant-specific VF. Vertical plant farming (VPF) is a promising and relatively novel practice that can be conducted in buildings with environmental control and artificial lighting. However, the nascent sector has experienced challenges in economic viability, standardisation, and environmental sustainability. Practitioners and academics call for a comprehensive financial analysis of VPF, but efforts are stifled by a lack of valid and available data.
A review of economic estimation and horticultural software identifies a need for a decision support system (DSS) that facilitates risk-empowered business planning for vertical farmers. This thesis proposes an open-source DSS framework to evaluate business sustainability through financial risk and environmental impact assessments. Data from the literature, alongside lessons learned from industry practitioners, would be centralised in the proposed DSS using imprecise data techniques. These techniques have been applied in engineering but are seldom used in financial forecasting. This could benefit complex sectors which only have scarce data to predict business viability.
To begin the execution of the DSS framework, VPF practitioners were interviewed using a mixed-methods approach. Learnings from over 19 shuttered and operational VPF projects provide insights into the barriers inhibiting scalability and identify risks, which were organised into a risk taxonomy. Labour was the most commonly reported top challenge; research was therefore conducted to explore lean principles for improving productivity.
A probabilistic model representing a spectrum of variables and their associated uncertainty was built according to the DSS framework to evaluate the financial risk of VF projects. This enabled flexible computation without precise production or financial data, improving the accuracy of economic estimation. The model assessed two VPF cases (one in the UK and another in Japan), demonstrating the first risk and uncertainty quantification of VPF business models in the literature. The results highlighted measures to improve the economic viability of the UK and Japan cases.
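The thesis's imprecise-data model is not spelled out in this abstract, but the flavour of such a risk computation can be sketched with a crude Monte Carlo simulation; every distribution and figure below is an illustrative assumption, not data from the thesis:

```python
import random

random.seed(42)

def simulate_annual_profit(n_draws=10_000):
    """Monte Carlo sketch of farm-level financial risk.

    Yield, sale price, and energy cost are drawn from wide triangular
    distributions (min, max, mode) to mimic the scarce-data setting;
    all constants here are made-up placeholders.
    """
    profits = []
    for _ in range(n_draws):
        yield_kg = random.triangular(40_000, 90_000, 60_000)    # kg/year
        price = random.triangular(6.0, 12.0, 8.0)               # GBP/kg
        energy_cost = random.triangular(150_000, 400_000, 250_000)  # GBP/year
        other_costs = 200_000                                   # GBP/year
        profits.append(yield_kg * price - energy_cost - other_costs)
    return profits

profits = simulate_annual_profit()
p_loss = sum(p < 0 for p in profits) / len(profits)
print(f"estimated probability of an annual loss: {p_loss:.1%}")
```

The output distribution, rather than a single point estimate, is what supports the risk-empowered business planning the DSS framework aims at.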
The environmental impact assessment model was developed, allowing VPF operators to evaluate their carbon footprint compared to traditional agriculture using life-cycle assessment. I explore strategies for net-zero carbon production through sensitivity analysis. Renewable energies, especially solar, geothermal, and tidal power, show promise for reducing the carbon emissions of indoor VPF. Results show that renewably-powered VPF can reduce carbon emissions compared to field-based agriculture when considering the land-use change.
The drivers for DSS adoption have been researched, showing a pathway of compliance and design thinking to overcome the "problem of implementation" and enable commercialisation. Further work is suggested to standardise VF equipment, collect benchmarking data, and characterise risks. This work will reduce risk and uncertainty and accelerate the sector's emergence
The determinants of value addition: a critical analysis of the global software engineering industry in Sri Lanka
It was evident through the literature that the perceived value delivery of the global software engineering industry is low for a variety of reasons. This research therefore examines global software product companies in Sri Lanka, exploring the software engineering methods and practices that increase value addition. The overall aim of the study is to identify the key determinants of value addition in the global software engineering industry and critically evaluate their impact on software product companies, helping to maximise value addition and ultimately assure the sustainability of the industry.
An exploratory research approach was used initially, since findings would emerge as the study unfolded. A mixed-method approach was employed, as the literature alone was inadequate to investigate the problem effectively and formulate the research framework. Twenty-three face-to-face online interviews were conducted with subject matter experts covering all the disciplines in the targeted organisations; these were combined with the literature findings as well as the outcomes of market research conducted by both government and non-government institutes. Data from the interviews were analysed using NVivo 12. The findings of the existing literature were verified through the exploratory study, and the outcomes were used to formulate the questionnaire for the public survey. After cleansing the total responses received, 371 responses were retained and analysed in SPSS 21 with an alpha level of 0.05. An internal consistency test was performed before the descriptive analysis. After assuring the reliability of the dataset, correlation, multiple regression, and analysis of variance (ANOVA) tests were carried out to meet the research objectives.
Five determinants of value addition were identified, along with the key themes for each area: staffing, delivery process, use of tools, governance, and technology infrastructure. Cross-functional, self-organised teams built around the value streams, a properly interconnected software delivery process with the right governance in the delivery pipelines, the right selection of tools, and the right infrastructure increase value delivery. Conversely, the constraints on value addition are poor interconnection of internal processes, rigid functional hierarchies, inaccurate selection and use of tools, inflexible team arrangements, and inadequate focus on technology infrastructure. The findings add to the existing body of knowledge on increasing value addition through effective processes, practices, and tools, and on the impacts of applying the same inaccurately in the global software engineering industry
DataProVe: Fully Automated Conformance Verification Between Data Protection Policies and System Architectures
Privacy and data protection by design are relevant parts of the General Data Protection Regulation (GDPR), in which businesses and organisations are encouraged to implement measures at an early stage of the system design phase to fulfil data protection requirements. This paper addresses policy and system architecture design, proposing two variants of a privacy policy language and an architecture description language, respectively, for specifying and verifying data protection and privacy requirements. In addition, we develop a fully automated algorithm based on logic for verifying three types of conformance relations (privacy, data protection, and functional conformance) between a policy and an architecture specified in the variants of our languages. Compared to related works, this approach supports a more systematic and fine-grained analysis of the privacy, data protection, and functional properties of a system. Our theoretical methods are then implemented as a software tool called DataProVe and its feasibility is demonstrated based on the centralised and decentralised approaches of COVID-19 contact tracing applications
On the Expressive Power of the Normal Form for Branching-Time Temporal Logics
With emerging applications that involve complex distributed systems, branching-time specifications are especially important, as they reflect the dynamic and non-deterministic nature of such applications.
We describe the expressive power of a simple yet powerful branching-time specification framework, the branching-time normal form (BNF), which has been developed as part of clausal resolution for branching-time temporal logics. We show an encoding of Büchi tree automata in the language of the normal form, thus representing tree automata syntactically in a high-level way, and we can therefore treat BNF as a normal form for the latter. These results enable us (1) to translate given problem specifications into the normal form and apply a deductive reasoning technique, clausal temporal resolution, as a verification method; and (2) to apply one of the core components of the resolution method, loop searching, to extract, syntactically, hidden invariants in a wide range of complex temporal specifications
DEEP REINFORCEMENT LEARNING AND MODEL PREDICTIVE CONTROL APPROACHES FOR THE SCHEDULED OPERATION OF DOMESTIC REFRIGERATORS
Excess capacity of the UK's national grid is widely quoted to be reducing to around 4% over the coming years as a consequence of increased economic growth (and hence power usage) and reductions in power generation plants. There is concern that short-term variations in power demand could lead to serious wide-scale disruption on a national scale. This is therefore spawning greater attention on augmenting traditional generation plants with renewable and localized energy storage technologies, and consideration of improved demand side response (DSR), where power consumers are incentivized to switch off assets when the grid is under pressure. It is estimated, for instance, that refrigeration/HVAC systems alone could account for ~14% of total UK energy usage, with refrigeration and water heating/cooling systems, in particular, being able to act as real-time "buffer" technologies that can be demand-managed to accommodate transient demands by being switched off for short periods without damaging their outputs. Large populations of thermostatically controlled loads (TCLs) hold significant potential for performing ancillary services in power systems since they are well-established and widely distributed around the power network. In the domestic sector, refrigerators and freezers collectively constitute a very large electrical load since they are continuously connected and are present in almost all households. The rapid proliferation of the "Internet of Things" (IoT) now affords the opportunity to monitor and visualise the performance of smart-building appliances and, specifically, to schedule the operation of widely distributed domestic refrigerators and freezers to collectively improve energy efficiency and reduce peak power consumption on the electrical grid.
To accomplish this, this research proposes the real-time estimation of the thermal mass of individual refrigerators in a network using on-line parameter identification, and the co-ordinated (ON-OFF) scheduling of the refrigerator compressors to maintain their respective temperatures within specified hysteresis bands, commensurate with accommodating food safety standards. Custom Model Predictive Control (MPC) schemes and a Machine Learning algorithm (Reinforcement Learning) are researched to realize an appropriate scheduling methodology, which is implemented through COTS IoT hardware. Benefits afforded by the proposed schemes are investigated through experimental trials, which show that the co-ordinated operation of domestic refrigerators can 1) reduce the peak power consumption as seen from the perspective of the electrical power grid (i.e. peak power shaving), 2) adaptively control the temperature hysteresis band of individual refrigerators to increase operational efficiency, and 3) contribute to a widely distributed aggregated load shed for Demand Side Response purposes in order to aid grid stability. Comparative studies of measurements from experimental trials show that the co-ordinated scheduling of refrigerators allows energy savings of between 19% and 29% compared to their traditional isolated (non-co-operative) operation. Moreover, by adaptively changing the hysteresis bands of individual fridges in response to changes in thermal behaviour, a further 20% of savings in energy are possible at the local refrigerator level, thereby providing benefits to both network supplier and individual consumer
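A toy version of the co-ordinated hysteresis scheduling described above can be sketched as follows; the first-order thermal constants and the simple shedding heuristic are our illustrative assumptions, not the thesis's identified refrigerator models or its MPC/RL schedulers:

```python
def simulate_fridges(n_fridges=5, steps=200, max_on=2):
    """Co-ordinated ON-OFF scheduling under hysteresis bands (sketch).

    Each fridge warms toward ambient when OFF and cools at a fixed
    rate when its compressor is ON. A central co-ordinator caps how
    many compressors run simultaneously (peak shaving), shedding the
    coolest cabinets first, while hysteresis keeps each cabinet near
    its [t_low, t_high] band.  All constants are illustrative.
    """
    ambient, t_low, t_high = 20.0, 2.0, 6.0
    temps = [2.0 + 0.8 * i for i in range(n_fridges)]
    on = [False] * n_fridges
    peak_demand = []
    for _ in range(steps):
        # Local hysteresis: request ON above the band, OFF below it.
        for i in range(n_fridges):
            if temps[i] >= t_high:
                on[i] = True
            elif temps[i] <= t_low:
                on[i] = False
        # Co-ordinator: shed the coolest running fridges past the cap.
        running = sorted((i for i in range(n_fridges) if on[i]),
                         key=lambda i: temps[i], reverse=True)
        for i in running[max_on:]:
            on[i] = False
        # First-order thermal update.
        for i in range(n_fridges):
            temps[i] += -0.5 if on[i] else 0.02 * (ambient - temps[i])
        peak_demand.append(sum(on))
    return max(peak_demand), temps

peak, temps = simulate_fridges()
print("peak simultaneous compressors:", peak)
```

The point of the sketch is the aggregate effect: the cap bounds simultaneous demand by construction, while staggered duty cycles keep every cabinet close to its band, which is the behaviour the experimental trials quantify.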
Industry 4.0: product digital twins for remanufacturing decision-making
Currently there is a desire to reduce natural resource consumption and expand circular business principles, whilst Industry 4.0 (I4.0) is regarded as the evolutionary and potentially disruptive movement of technology, automation, digitalisation, and data manipulation into the industrial sector. The remanufacturing industry is recognised as being vital to the circular economy (CE) as it extends the in-use life of products, but its synergy with I4.0 has had little attention thus far. This thesis documents the first investigation into I4.0 in remanufacturing for a CE, contributing the design and demonstration of a model that optimises remanufacturing planning using data from different instances in a product's life cycle.
The initial aim of this work was to identify the I4.0 technology that would enhance stability in remanufacturing with a view to reducing resource consumption. As the project progressed, it narrowed to focus on the development of a product digital twin (DT) model to support data-driven decision making for operations planning. The model's architecture was derived using a bottom-up approach, where requirements were extracted from the identified complications in production planning and control that differentiate remanufacturing from manufacturing. Simultaneously, the benefits of enabling visibility of an asset's through-life health were obtained using a DT as the modus operandi. A product simulator and DT prototype were designed to use Internet of Things (IoT) components, a neural network for remaining-life estimations, and a search algorithm for operational planning optimisation. The DT was iteratively developed using case studies to validate and examine the real opportunities that exist in deploying a business model that harnesses, and commodifies, early-life product data for end-of-life processing optimisation. Findings suggest that, using intelligent programming networks and algorithms, a DT can enhance decision-making if it has visibility of the product and access to reliable remanufacturing process information; existing IoT components provide rudimentary "smart" capabilities, but their integration is complex, and the durability of the systems over extended product life cycles needs to be further explored
Investigating illicit drug use in adolescent students in England
The Smoking Drinking Drug Use Survey of adolescents aged 11 to 15 years living in England shows that lifetime drug use by this age group increased (15% to 24%) from 2014 to 2018 (NHS Digital, 2017, 2021b). This upward trend is despite the implementation of drug policies focused on reducing the supply, possession, and manufacture of illicit drugs. Based on the premise that drug use is a socially learnt behaviour, the main objective of this research is to investigate whether social learning factors (imitation, parental reinforcement, peer association, and attitudes to drug use) mediate drug use in adolescents aged 11 to 15 years living in England. The second objective is to identify which social learning factors mediate drug use by age, region, and gender. Using the Social Structure Social Learning (SSSL) theory as a framework, this study contributes to the literature by identifying a) the strongest social learning behaviour for each age, gender, and region in England and b) the mechanism (mediation) by which social learning affects drug use.
This research employs rich data on drug use drawn from the Smoking Drinking Drug Use Survey 2016, a cross-sectional survey of adolescents aged 11-15 years across England (as of October 2021 the data for the most recent survey, 2018, was not available for analysis). Mediation analysis was used to evaluate which social learning factors mediate the association between age, gender, region, and drug use.
The results showed that there were differences in learning behaviours that were specific to age, gender, and region. For example, the most significant social learning behaviour for drug use among boys was "imitation of friends", whilst for females it was "peer association" (i.e. having a perception that peers are using drugs). In addition, having "positive attitudes to glue" (i.e. "it is ok to try glue") was the strongest learning behaviour for drug use among younger individuals (i.e. at ages 11 to 13). Furthermore, whilst in Northern England the strongest learning behaviour was having "positive attitudes to cannabis", in London peer association was found to be the strongest learning pathway to drug use. Family disapproval of drug use ("persuade me not to take drugs") was found to be a protective factor against drug use for all ages except ages 11 and 12 years and those living in the East Midlands and London; in these cases, more authoritarian parenting, i.e. strong parental disapproval ("stop me from taking drugs"), was found to be a protective factor.
This research offers two main contributions to the literature. First, it shows empirical linkages between constructs built using SSSL theory that have not been previously explored within a population of young adolescents in England. Second, it identifies the effects and degree to which social learning affects the relationship between drug use and social structure. Overall, this research also contributes to an improved theoretical rationale for existing SSSL associations; that is, social learning can behave as a mediator or a moderator depending on the context. The evidence produced by this thesis could also have potentially relevant policy implications. More specifically, the differences in social learning behaviours may suggest the need to implement more targeted prevention policies by age, gender, and regional groups of young adolescents
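The mediation logic used here can be illustrated with a generic product-of-coefficients computation on simulated data; this is a textbook sketch with made-up variables and effect sizes, not the SSSL models or estimates from the thesis:

```python
import random
from statistics import mean

random.seed(0)

def cov(u, v):
    """Population covariance of two equal-length sequences."""
    mu, mv = mean(u), mean(v)
    return mean((a - mu) * (b - mv) for a, b in zip(u, v))

def mediation(x, m, y):
    """Product-of-coefficients mediation (Baron-Kenny style).

    a: slope of the mediator M on X; b: partial slope of Y on M with
    X held fixed (from the two-predictor regression of Y on M and X).
    The indirect effect of X on Y through M is a * b; the total
    effect is the simple slope of Y on X.
    """
    a = cov(x, m) / cov(x, x)
    denom = cov(m, m) * cov(x, x) - cov(m, x) ** 2
    b = (cov(y, m) * cov(x, x) - cov(y, x) * cov(m, x)) / denom
    total = cov(x, y) / cov(x, x)
    return a, b, a * b, total

# Simulated data with a built-in indirect pathway (a=0.8, b=1.5)
# and a direct effect of 0.3, so the true total effect is 1.5:
n = 2000
x = [random.gauss(0, 1) for _ in range(n)]
m = [0.8 * xi + random.gauss(0, 1) for xi in x]
y = [1.5 * mi + 0.3 * xi + random.gauss(0, 1) for xi, mi in zip(x, m)]
a, b, indirect, total = mediation(x, m, y)
print(f"a={a:.2f}  b={b:.2f}  indirect={indirect:.2f}  total={total:.2f}")
```

In the thesis's setting, X would be a social-structure variable (age, gender, region), M a social learning factor such as peer association, and Y drug use; a substantial `indirect` relative to `total` is what supports the mediation claims.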