The impact of employees' working relations in creating and retaining trust: the case of the Bahrain Olympic Committee
Introduction: This thesis investigates the impact of employees’ working relations in creating, maintaining and retaining trust in the Bahrain Olympic Committee (BOC).
Aim: The main aim of this thesis is to determine how three groups of Organisational Trust variables, namely Social System Elements (SSE), Factors of Trustworthiness (FoT) and Third-Party Gossip (TPG), affect employees’ Organisational Trust (OTR) in the BOC and promote Organisational Citizenship Behaviour (OCB). To address this aim, a conceptual framework was created that focused on the following research objectives: (1) the interrelationship between SSE and FoT, (2) the effect of SSE on OTR, (3) the impact of TPG on OTR and (4) the effect of OTR on overall OCB.
Methodology: The study uses a mixed-methods case study design that included in-depth semi-structured interviews with 17 managers, an online questionnaire survey of 320 BOC employees and an analysis of the BOC’s Annual Reports from 2015 to 2018.
Results: The qualitative and quantitative findings indicate, firstly, that there is a significant interrelationship between SSE and FoT, establishing the SSE perception of organisational justice (OJ) and the FoT dimensions of benevolence and integrity as the most important factors in yielding employees’ trust in the BOC. Secondly, SSEs have significant direct and indirect effects on OTR. Thirdly, negative and positive TPG occurred concurrently in the BOC, and the more prevalent negative TPG has the greater impact on OTR. Finally, the findings demonstrate OTR’s effect in generating OCB: Civic Virtue was rated as the most preferred of the five OCB themes, indicating the managers’ and the employees’ strong emotional attachment to, and support of, the activities taking place at the BOC.
Contributions: Overall, this thesis substantially contributes to the OTR literature, particularly in the context of the Middle East. It also offers several recommendations for future research and practical implications for practitioners in the field of Organisational Trust.
Downstream-agnostic Adversarial Examples
Self-supervised learning usually uses a large amount of unlabeled data to pre-train an encoder that can serve as a general-purpose feature extractor, such that downstream users only need to perform fine-tuning to enjoy the benefits of "large models". Despite this promising prospect, the security of pre-trained encoders has not been thoroughly investigated yet, especially when a pre-trained encoder is publicly available for commercial use.
In this paper, we propose AdvEncoder, the first framework for generating
downstream-agnostic universal adversarial examples based on the pre-trained
encoder. AdvEncoder aims to construct a universal adversarial perturbation or
patch for a set of natural images that can fool all the downstream tasks
inheriting the victim pre-trained encoder. Unlike traditional adversarial
example works, the pre-trained encoder only outputs feature vectors rather than
classification labels. Therefore, we first exploit the high-frequency component information of the image to guide the generation of adversarial examples. Then
we design a generative attack framework to construct adversarial
perturbations/patches by learning the distribution of the attack surrogate
dataset to improve their attack success rates and transferability. Our results
show that an attacker can successfully attack downstream tasks without knowing
either the pre-training dataset or the downstream dataset. We also tailor four
defenses for pre-trained encoders, the results of which further prove the
attack ability of AdvEncoder.
Comment: This paper has been accepted by the International Conference on Computer Vision (ICCV '23, October 2-6, 2023, Paris, France).
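To make the attack setting concrete, the following is a minimal, hypothetical sketch of a universal perturbation attack against a frozen encoder; the feature-divergence loss, optimizer, and L-infinity budget are illustrative assumptions, and the paper's high-frequency guidance and patch variant are omitted.

```python
# Hypothetical sketch of a universal adversarial perturbation against a
# frozen pre-trained encoder, loosely in the spirit of AdvEncoder; the loss
# and hyperparameters here are illustrative assumptions, not the paper's.
import torch
import torch.nn.functional as F

def train_universal_perturbation(encoder, loader, eps=10 / 255,
                                 steps=100, lr=0.01, device="cpu"):
    """Learn one perturbation that shifts encoder features for all inputs."""
    encoder.eval().to(device)
    delta = torch.zeros(1, 3, 224, 224, device=device, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        for x, _ in loader:  # labels are unused: the attack is label-free
            x = x.to(device)
            with torch.no_grad():
                clean = encoder(x)              # clean feature vectors
            adv = encoder((x + delta).clamp(0, 1))
            # Minimizing cosine similarity pushes perturbed features
            # away from the clean ones, for every image in the batch.
            loss = F.cosine_similarity(adv, clean, dim=-1).mean()
            opt.zero_grad()
            loss.backward()
            opt.step()
            with torch.no_grad():               # project onto the L_inf budget
                delta.clamp_(-eps, eps)
    return delta.detach()
```

Because the objective only uses feature vectors, the same learned delta can be applied to inputs of any downstream classifier built on the victim encoder.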
Reinforcement learning in large state action spaces
Reinforcement learning (RL) is a promising framework for training intelligent agents that learn to optimize long-term utility by directly interacting with the environment. Creating RL methods that scale to large state-action spaces is a critical problem for ensuring real-world deployment of RL systems. However, several challenges limit the applicability of RL to large-scale settings. These include difficulties with exploration, low sample efficiency, computational intractability, task constraints such as decentralization, and a lack of guarantees about important properties such as performance, generalization and robustness in potentially unseen scenarios.
This thesis is motivated towards bridging the aforementioned gaps. We propose several principled algorithms and frameworks for studying and addressing the above challenges in RL. The proposed methods cover a wide range of RL settings (single- and multi-agent systems (MAS) with all the variations in the latter, prediction and control, model-based and model-free methods, value-based and policy-based methods). In this work we propose the first results on several different problems, e.g. tensorization of the Bellman equation which allows exponential sample efficiency gains (Chapter 4), provable suboptimality arising from structural constraints in MAS (Chapter 3), combinatorial generalization results in cooperative MAS (Chapter 5), generalization results on observation shifts (Chapter 7), and learning deterministic policies in a probabilistic RL framework (Chapter 6). Our algorithms exhibit provably enhanced performance and sample efficiency along with better scalability. Additionally, we shed light on generalization aspects of the agents under different frameworks. These properties have been driven by the use of several advanced tools (e.g. statistical machine learning, state abstraction, variational inference, tensor theory).
In summary, the contributions in this thesis significantly advance progress towards making RL agents ready for large-scale, real-world applications.
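For reference, the classical Bellman optimality backup underlying the thesis is V(s) <- max_a sum_{s'} P(s'|s,a) [R(s,a) + gamma V(s')]. The sketch below is plain tabular value iteration over a dense transition tensor, shown only as a baseline; it is not the tensorized variant of Chapter 4.

```python
# Minimal tabular value iteration implementing the standard Bellman backup;
# the thesis's tensorized Bellman equation is not reproduced here.
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-8):
    """P: (S, A, S) transition tensor, R: (S, A) rewards."""
    S, A, _ = P.shape
    V = np.zeros(S)
    while True:
        # Backup over all (s, a): Q[s, a] = R[s, a] + gamma * sum_t P[s, a, t] V[t]
        Q = R + gamma * np.einsum("sat,t->sa", P, V)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=1)  # optimal values, greedy policy
        V = V_new
```

The cost of each backup grows with the size of the dense tensor P, which is exactly the kind of state-action blow-up that motivates the scalability techniques studied in the thesis.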
Specificity of the innate immune responses to different classes of non-tuberculous mycobacteria
Mycobacterium avium is the most common nontuberculous mycobacterium (NTM) species causing infectious disease. Here, we characterized an M. avium infection model in zebrafish larvae and compared it to M. marinum infection, a model of tuberculosis. M. avium bacteria are efficiently phagocytosed and frequently induce granuloma-like structures in zebrafish larvae. Although macrophages can respond to both mycobacterial infections, their migration speed is faster in infections caused by M. marinum. Tlr2 is involved in a conserved manner in most aspects of the defense against both mycobacterial infections. However, Tlr2 influences the migration speed of macrophages and neutrophils towards M. marinum infection sites, an effect not observed with M. avium. Using RNAseq analysis, we found distinct transcriptome responses in cytokine-cytokine receptor interactions between M. avium and M. marinum infections. In addition, we found differences in gene expression in metabolic pathways, phagosome formation, matrix remodeling, and apoptosis in response to these mycobacterial infections. In conclusion, we characterized a new M. avium infection model in zebrafish that can be further used to study the pathological mechanisms of NTM-caused diseases.
Bio-inspired optimization in integrated river basin management
Water resources worldwide are facing severe challenges in terms of quality and quantity. It is essential to conserve, manage, and optimize water resources and their quality through integrated water resources management (IWRM). IWRM is an interdisciplinary field that works on multiple levels to maximize the socio-economic and ecological benefits of water resources. Since these benefits are directly influenced by a river’s ecological health, management should start at the basin level. The main objective of this study is to evaluate the application of bio-inspired optimization techniques in integrated river basin management (IRBM). This study demonstrates the application of versatile, flexible and yet simple metaheuristic bio-inspired algorithms in IRBM.
In a novel approach, the bio-inspired optimization algorithms Ant Colony Optimization (ACO) and Particle Swarm Optimization (PSO) are used to spatially distribute mitigation measures within a basin to reduce the long-term annual mean total nitrogen (TN) concentration at the basin outlet. The Upper Fuhse river basin, implemented in the hydrological model Hydrological Predictions for the Environment (HYPE), is used as a case study. ACO and PSO are coupled with the HYPE model to distribute a set of measures and compute the resulting TN reduction. The algorithms spatially distribute nine crop- and subbasin-level mitigation measures under four categories. Both algorithms successfully yield a discrete combination of measures that reduces the long-term annual mean TN concentration. They achieved an 18.65% reduction, and their performance was on par with each other. This study has established the applicability of these bio-inspired optimization algorithms to distributing TN mitigation measures within a river basin.
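As an illustration of how such a coupling might look, here is a minimal PSO sketch over discrete measure assignments. The objective simulate_tn is a hypothetical placeholder for a function that runs the basin model (e.g. HYPE) for a given assignment of measure indices to subbasins and returns the resulting TN concentration; all parameter values are assumptions.

```python
# Illustrative particle swarm search over discrete measure assignments.
# simulate_tn(assignment) is an assumed black-box objective wrapping the
# hydrological model; it is not part of the study's actual code.
import numpy as np

rng = np.random.default_rng(0)

def pso_measures(simulate_tn, n_subbasins, n_measures,
                 n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    pos = rng.random((n_particles, n_subbasins)) * n_measures
    vel = np.zeros_like(pos)
    def cost(p):  # truncate continuous positions to discrete measure indices
        return simulate_tn(np.clip(p, 0, n_measures - 1).astype(int))
    pbest, pcost = pos.copy(), np.array([cost(p) for p in pos])
    g = pbest[pcost.argmin()].copy()          # global best assignment
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
        pos = np.clip(pos + vel, 0, n_measures - 1)
        c = np.array([cost(p) for p in pos])
        better = c < pcost                    # keep each particle's best
        pbest[better], pcost[better] = pos[better], c[better]
        g = pbest[pcost.argmin()].copy()
    return np.clip(g, 0, n_measures - 1).astype(int), pcost.min()
```

Rounding continuous particle positions to integer measure indices is one common way to adapt PSO to discrete combinatorial problems such as this one.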
Stakeholder involvement is a crucial aspect of IRBM. It ensures that researchers and policymakers are aware of the ground reality through the large amounts of information collected from stakeholders. Including stakeholders in policy planning and decision-making legitimizes the decisions and eases their implementation. Therefore, a socio-hydrological framework is developed and tested in the Larqui river basin, Chile, based on a field survey, to explore the conditions under which farmers would implement or extend the width of vegetative filter strips (VFS) to prevent soil erosion. The framework consists of a behavioral, social model (extended Theory of Planned Behavior, TPB) and an agent-based model (ABM, developed in NetLogo) coupled with the results from the vegetative filter model (Vegetative Filter Strip Modeling System, VFSMOD-W). The results showed that the ABM corroborates the survey results: farmers are willing to extend the width of their VFS as long as their utility stays positive. This framework can be used to develop tailor-made policies based on the conditions of individual river basins and the stakeholders' requirements, motivating them to adopt sustainable practices.
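A toy version of the adoption rule just described is sketched below; the agent attributes, cost terms and subsidy values are invented for illustration and are not taken from the survey, the TPB model or the NetLogo implementation.

```python
# Toy agent-based loop: each farmer agent extends its filter strip width
# only while the perceived utility of the extension stays positive.
# All attributes and coefficients are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class Farmer:
    width_m: float          # current vegetative filter strip width
    attitude: float         # stylized behavioral weight (TPB-style)
    subsidy: float          # stylized payment per metre of strip

    def utility(self, extra_m: float) -> float:
        erosion_benefit = self.attitude * extra_m   # stylized benefit term
        lost_income = 1.0 * extra_m                 # stylized cropland cost
        return erosion_benefit + self.subsidy * extra_m - lost_income

    def step(self, extra_m: float = 0.5) -> None:
        if self.utility(extra_m) > 0:               # adopt only if it pays
            self.width_m += extra_m

farmers = [Farmer(width_m=2.0, attitude=a, subsidy=0.3) for a in (0.4, 0.9)]
for _ in range(10):
    for f in farmers:
        f.step()
print([round(f.width_m, 1) for f in farmers])  # only the second farmer extends
```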
It is vital to assess whether the proposed management plans achieve the expected results for the river basin and whether the stakeholders will accept and implement them. Assessment via simulation tools ensures effective implementation and realization of the targets stipulated by the decision-makers. In this regard, this dissertation introduces the application of bio-inspired optimization techniques to the field of IRBM. The successful discrete combinatorial optimization of the spatial distribution of mitigation measures by ACO and PSO, and the novel socio-hydrological framework using the ABM, demonstrate the strength and diverse applicability of bio-inspired optimization algorithms.
Instance-based Learning with Prototype Reduction for Real-Time Proportional Myocontrol: A Randomized User Study Demonstrating Accuracy-preserving Data Reduction for Prosthetic Embedded Systems
This work presents the design, implementation and validation of learning
techniques based on the kNN scheme for gesture detection in prosthetic control.
To cope with high computational demands in instance-based prediction, methods
of dataset reduction are evaluated considering real-time determinism to allow
for the reliable integration into battery-powered portable devices. The
influence of parameterization and varying proportionality schemes is analyzed,
utilizing an eight-channel sEMG armband. Besides offline cross-validation
accuracy, success rates in real-time pilot experiments (online target
achievement tests) are determined. Based on the assessment of specific dataset
reduction techniques' adequacy for embedded control applications regarding
accuracy and timing behaviour, Decision Surface Mapping (DSM) proves itself
promising when applying kNN on the reduced set. A randomized, double-blind user
study was conducted to evaluate the respective methods (kNN and kNN with
DSM-reduction) against Ridge Regression (RR) and RR with Random Fourier
Features (RR-RFF). The kNN-based methods performed significantly better
(p<0.0005) than the regression techniques. Between DSM-kNN and kNN, there was
no statistically significant difference (significance level 0.05). This is remarkable considering that the reduced set retains only one sample per class, a reduction rate of over 99%, while the success rate is preserved. The
same behaviour could be confirmed in an extended user study. With k=1, which
turned out to be an excellent choice, the runtime complexity of both kNN (in every prediction step) and DSM-kNN (in the training phase) becomes linear in the number of original samples, favouring dependable wearable prosthesis applications.
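To illustrate the accuracy-versus-complexity trade-off, the sketch below collapses each gesture class to a single prototype before 1-NN prediction. Class centroids are used as a simplified stand-in for the DSM procedure, which instead places prototypes so as to map the decision surface; the synthetic data stands in for sEMG feature vectors.

```python
# Simplified prototype reduction for 1-NN gesture prediction: one prototype
# per class (class centroid), a stand-in for DSM rather than DSM itself.
import numpy as np

def reduce_to_prototypes(X, y):
    """Collapse each class to a single prototype (one sample per class)."""
    classes = np.unique(y)
    protos = np.stack([X[y == c].mean(axis=0) for c in classes])
    return protos, classes

def predict_1nn(protos, classes, x):
    """O(#classes) per prediction: cost no longer grows with dataset size."""
    d = np.linalg.norm(protos - x, axis=1)
    return classes[d.argmin()]

# e.g. 8-channel sEMG feature vectors, 5 gesture classes (synthetic data)
X = np.random.randn(1000, 8)
y = np.random.randint(0, 5, size=1000)
protos, classes = reduce_to_prototypes(X, y)
print(predict_1nn(protos, classes, X[0]))
```

With one prototype per class, both memory and per-prediction runtime are bounded by the number of classes, which is what makes such reductions attractive for battery-powered embedded controllers.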
Rare-Event Estimation and Calibration for Large-Scale Stochastic Simulation Models
Stochastic simulation has been widely applied in many domains. More recently, however, the rapid surge of sophisticated problems such as the safety evaluation of intelligent systems has posed various challenges to conventional statistical methods. Motivated by these challenges, in this thesis we develop novel methodologies, with theoretical guarantees and numerical applications, to tackle them from different perspectives.
In particular, our work can be categorized into two areas: (1) rare-event estimation (Chapters 2 to 5), where we develop approaches to estimating the probabilities of rare events via simulation; and (2) model calibration (Chapters 6 and 7), where we aim to calibrate the simulation model so that it is close to reality.
In Chapter 2, we study rare-event simulation for a class of problems where the target hitting sets of interest are defined via modern machine learning tools such as neural networks and random forests. We investigate an importance sampling scheme that integrates the dominating point machinery in large deviations and sequential mixed integer programming to locate the underlying dominating points. We provide efficiency guarantees and numerical demonstration of our approach.
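As a self-contained illustration of the dominating-point idea (not the chapter's mixed-integer machinery), the sketch below recentres a Gaussian sampler at a dominating point, assumed to be found beforehand, and reweights each sample by the likelihood ratio.

```python
# Minimal mean-shift importance sampler for a Gaussian input: sample from
# N(x_star, I) instead of N(0, I) and reweight by the likelihood ratio.
# The dominating point x_star is assumed given (e.g. by an upstream search).
import numpy as np

rng = np.random.default_rng(1)

def is_estimate(indicator, x_star, dim, n=100_000):
    """Estimate P(indicator(X) = 1) for X ~ N(0, I)."""
    z = rng.standard_normal((n, dim)) + x_star
    # Ratio dN(0,I)/dN(x_star,I) at z simplifies to exp(-z.x* + |x*|^2 / 2).
    lr = np.exp(-z @ x_star + 0.5 * x_star @ x_star)
    return np.mean(indicator(z) * lr)

# Toy rare-event set {x : x[0] > 4}, whose dominating point is (4, 0, ..., 0).
dim = 5
x_star = np.zeros(dim)
x_star[0] = 4.0
p_hat = is_estimate(lambda z: z[:, 0] > 4.0, x_star, dim)
print(p_hat)  # compare with the exact tail 1 - Phi(4) ~ 3.17e-5
```

Recentring at the dominating point ensures the rare set is hit often, while the likelihood ratio keeps the estimator unbiased.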
In Chapter 3, we propose a new efficiency criterion for importance sampling, which we call probabilistic efficiency. Conventionally, an estimator is regarded as efficient if its relative error is sufficiently controlled. It is widely known that when a rare-event set contains multiple "important regions" encoded by the dominating points, importance sampling needs to account for all of them via mixing to achieve efficiency. We argue that the traditional analysis recipe could suffer from intrinsic looseness by using relative error as an efficiency criterion. Thus, we propose the new efficiency notion to tighten this gap. In particular, we show that under the standard Gartner-Ellis large deviations regime, an importance sampling that uses only the most significant dominating points is sufficient to attain this efficiency notion.
In Chapter 4, we consider the estimation of rare-event probabilities using sample proportions output by crude Monte Carlo. Due to the recent surge of sophisticated rare-event problems, efficiency-guaranteed variance reduction may face implementation challenges, which motivates looking at naive estimators. In this chapter we construct confidence intervals for the target probability from this naive estimator using various techniques, and then analyze their validity and tightness, quantified respectively by the coverage probability and relative half-width.
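One standard construction of such an interval, shown here purely as an example (the chapter compares several), is the exact Clopper-Pearson interval for k hits in n crude Monte Carlo runs.

```python
# Exact Clopper-Pearson confidence interval for a binomial proportion,
# applicable to a crude Monte Carlo rare-event estimate k/n.
from scipy.stats import beta

def clopper_pearson(k, n, alpha=0.05):
    lo = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lo, hi

print(clopper_pearson(k=3, n=1_000_000))  # e.g. 3 hits in a million samples
```

For very small k the relative half-width of such intervals is large, which is precisely the tightness issue the chapter quantifies.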
In Chapter 5, we propose the use of extreme value analysis, in particular the peak-over-threshold method which is popularly employed for extremal estimation of real datasets, in the simulation setting. More specifically, we view crude Monte Carlo samples as data to fit on a generalized Pareto distribution. We test this idea on several numerical examples. The results show that in the absence of efficient variance reduction schemes, it appears to offer potential benefits to enhance crude Monte Carlo estimates.
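A minimal sketch of the peak-over-threshold recipe follows, assuming SciPy's generalized Pareto fit and an arbitrary illustrative threshold choice; the data are synthetic stand-ins for simulation output.

```python
# Peak-over-threshold sketch: fit a generalized Pareto distribution (GPD)
# to Monte Carlo exceedances over a high threshold u, then extrapolate the
# tail probability beyond the range of most samples.
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(2)
samples = rng.standard_normal(200_000)        # stand-in for simulation output
u = np.quantile(samples, 0.99)                # threshold choice is a judgment call
exc = samples[samples > u] - u                # exceedances over the threshold
c, loc, scale = genpareto.fit(exc, floc=0.0)  # shape, (fixed) location, scale
# P(X > x) ~ P(X > u) * GPD survival function, for x above the threshold u
x = 4.5
p_tail = (exc.size / samples.size) * genpareto.sf(x - u, c, loc=0.0, scale=scale)
print(p_tail)  # compare with the exact normal tail at 4.5 ~ 3.4e-6
```

The appeal is that the GPD fit uses only ordinary Monte Carlo output, so it can complement crude estimates when no efficient variance reduction scheme is available.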
In Chapter 6, we investigate a framework for developing calibration schemes in parametric settings that satisfies rigorous frequentist statistical guarantees via a basic notion that we call the eligibility set, designed to bypass non-identifiability through set-based estimation. We investigate a feature-extraction-then-aggregation approach to construct these sets for multivariate outputs. We demonstrate our methodology on several numerical examples, including an application to the calibration of a limit order book market simulator.
In Chapter 7, we study a methodology to tackle the NASA Langley Uncertainty Quantification Challenge, a model calibration problem under both aleatory and epistemic uncertainties. Our methodology is based on an integration of distributionally robust optimization and importance sampling. The main computational machinery in this integrated methodology amounts to solving sampled linear programs. We present theoretical statistical guarantees of our approach via connections to nonparametric hypothesis testing, and numerical performance on parameter calibration and downstream decision and risk evaluation tasks.