8,333 research outputs found
What skills pay more? The changing demand and return to skills for professional workers
Technology is disrupting labor markets. We analyze the demand and reward for skills at occupation and state level across two time periods using job postings. First, we use principal components analysis to derive nine skills groups: ‘collaborative leader’, ‘interpersonal & organized’, ‘big data’, ‘cloud computing’, ‘programming’, ‘machine learning’, ‘research’, ‘math’ and ‘analytical’. Second, we comment on changes in the price and demand for skills over time. Third, we analyze non-linear returns to all skills groups and their interactions. We find that ‘collaborative leader’ skills become significant over time and that legacy data skills are replaced over time by innovative ones.
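The first step described above can be sketched with a principal components analysis over a posting-by-skill indicator matrix; the skill names, matrix, and component count below are invented for illustration and are not the paper's data:

```python
# Sketch: deriving skill groups from job postings with PCA.
# Skill names and the random posting matrix are illustrative only.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
skills = ["python", "sql", "spark", "leadership", "communication", "statistics"]

# Binary posting-by-skill matrix: 1 if the posting mentions the skill.
X = rng.integers(0, 2, size=(500, len(skills))).astype(float)

pca = PCA(n_components=3)
pca.fit(X)

# Each component is a candidate "skill group": skills that load together
# on a component tend to be demanded together in postings.
for i, comp in enumerate(pca.components_):
    top = [skills[j] for j in np.argsort(-np.abs(comp))[:2]]
    print(f"component {i}: top-loading skills {top}")
```

In practice the loadings on each retained component are inspected and named (e.g. ‘big data’, ‘collaborative leader’) before estimating returns.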
Advances in machine learning algorithms for financial risk management
In this thesis, three novel machine learning techniques are introduced to address distinct
yet interrelated challenges involved in financial risk management tasks. These approaches
collectively offer a comprehensive strategy, beginning with the precise classification of credit
risks, advancing through the nuanced forecasting of financial asset volatility, and ending
with the strategic optimisation of financial asset portfolios.
Firstly, a Hybrid Dual-Resampling and Cost-Sensitive technique has been proposed to combat the prevalent issue of class imbalance in financial datasets, particularly in credit risk
assessment. The key process involves the creation of heuristically balanced datasets to effectively address the problem. It uses a resampling technique based on Gaussian mixture
modelling to generate a synthetic minority class from the minority class data and concurrently uses k-means clustering on the majority class. Feature selection is then performed
using the Extra Tree Ensemble technique. A cost-sensitive logistic regression
model is then applied to predict the probability of default using the heuristically balanced
datasets. The results underscore the effectiveness of our proposed technique, with superior
performance observed in comparison to other imbalanced preprocessing approaches. This
advancement in credit risk classification lays a solid foundation for understanding individual
financial behaviours, a crucial first step in the broader context of financial risk management.
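A minimal sketch of the three resampling and classification steps described above, with invented data and illustrative parameters (the thesis's actual features, cluster counts, and costs are not reproduced here):

```python
# Sketch of the hybrid idea: oversample the minority class with a Gaussian
# mixture, undersample the majority class with k-means centroids, then fit
# a cost-sensitive logistic regression. All numbers are illustrative.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
X_maj = rng.normal(0.0, 1.0, size=(900, 2))   # majority: non-defaults
X_min = rng.normal(2.0, 1.0, size=(100, 2))   # minority: defaults

# 1) Fit a Gaussian mixture to the minority class and sample synthetic points.
gmm = GaussianMixture(n_components=2, random_state=0).fit(X_min)
X_syn, _ = gmm.sample(300)

# 2) Replace the majority class with k-means centroids (undersampling).
km = KMeans(n_clusters=200, n_init=10, random_state=0).fit(X_maj)
X_maj_red = km.cluster_centers_

# 3) Cost-sensitive logistic regression on the heuristically balanced data.
X_bal = np.vstack([X_maj_red, X_min, X_syn])
y_bal = np.hstack([np.zeros(len(X_maj_red)), np.ones(len(X_min) + len(X_syn))])
clf = LogisticRegression(class_weight={0: 1.0, 1: 2.0}).fit(X_bal, y_bal)
print(clf.predict_proba(X_bal[:1]))  # probability of default for one sample
```

The Extra Tree Ensemble feature-selection step is omitted here; it would sit between the resampling and the final classifier.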
Building on this foundation, the thesis then explores the forecasting of financial asset volatility, a critical aspect of understanding market dynamics. A novel model that combines a
Triple Discriminator Generative Adversarial Network with a continuous wavelet transform
is proposed. The proposed model has the ability to decompose volatility time series into
signal-like and noise-like frequency components, allowing the separate detection and monitoring of non-stationary volatility data. The network comprises a wavelet transform
component consisting of continuous wavelet transforms and inverse wavelet transform components, an auto-encoder component made up of encoder and decoder networks, and a
Generative Adversarial Network consisting of triple Discriminator and Generator networks.
The proposed Generative Adversarial Network is trained with an ensemble of losses: an unsupervised loss derived from the adversarial component, a supervised
loss, and a reconstruction loss. Data from nine financial assets are
employed to demonstrate the effectiveness of the proposed model. This approach not only
enhances our understanding of market fluctuations but also bridges the gap between individual credit risk assessment and macro-level market analysis.
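The signal/noise decomposition at the heart of the wavelet component can be illustrated with a one-level Haar transform; this is a simplified, exactly invertible stand-in for the thesis's continuous wavelet transforms, not its actual method:

```python
# Minimal stand-in for the wavelet component: a one-level Haar transform
# splits an even-length series into a low-frequency "signal-like" part and
# a high-frequency "noise-like" part; their sum recovers the input exactly.
import numpy as np

def haar_split(x: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Return (signal_like, noise_like) reconstructions, same length as x."""
    a = (x[0::2] + x[1::2]) / 2.0   # approximation (low frequency)
    d = (x[0::2] - x[1::2]) / 2.0   # detail (high frequency)
    signal_like = np.repeat(a, 2)                                # from a
    noise_like = np.repeat(d, 2) * np.tile([1.0, -1.0], len(d))  # from d
    return signal_like, noise_like

vol = np.array([1.0, 1.2, 0.9, 1.1, 1.4, 1.3, 1.0, 0.8])  # toy volatility
s, n = haar_split(vol)
assert np.allclose(s + n, vol)  # perfect reconstruction
```

The GAN then detects and monitors the two frequency bands separately rather than the raw non-stationary series.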
Finally, the thesis proposes a novel technique for portfolio optimisation. This involves a model-free reinforcement learning strategy for portfolio
optimisation using historical Low, High, and Close prices of assets as input with weights of
assets as output. A deep Capsule Network is employed to simulate the investment strategy, which involves the reallocation of the different assets to maximise the expected return
on investment based on deep reinforcement learning. To provide more learning stability in
an online training process, a Markov Differential Sharpe Ratio reward function has been
proposed as the reinforcement learning objective function. Additionally, a Multi-Memory
Weight Reservoir has also been introduced to facilitate the learning process and optimisation of computed asset weights, helping to sequentially re-balance the portfolio throughout
a specified trading period. Feeding the insights gained from volatility forecasting into
this strategy shows the interconnected nature of financial markets. Comparative experiments with other models demonstrated that our proposed technique is capable of achieving
superior results based on risk-adjusted reward performance measures.
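The reward function can be sketched with the classic differential Sharpe ratio of Moody and Saffell, which maintains exponentially weighted moment estimates online; the thesis's Markov variant may differ in its exact form:

```python
# Sketch of a differential Sharpe ratio reward in the Moody & Saffell style,
# which the thesis adapts as its "Markov Differential Sharpe Ratio". The
# adaptation rate and returns below are illustrative.
import math

class DifferentialSharpe:
    def __init__(self, eta: float = 0.01):
        self.eta = eta       # adaptation rate of the moment estimates
        self.A = 0.0         # EWMA of returns (first moment)
        self.B = 0.0         # EWMA of squared returns (second moment)

    def reward(self, r: float) -> float:
        """Return the differential Sharpe ratio for the new return r."""
        dA = r - self.A
        dB = r * r - self.B
        denom = (self.B - self.A ** 2) ** 1.5
        d = 0.0 if denom <= 0 else (self.B * dA - 0.5 * self.A * dB) / denom
        self.A += self.eta * dA      # update moments after computing reward
        self.B += self.eta * dB
        return d

dsr = DifferentialSharpe(eta=0.05)
rewards = [dsr.reward(r) for r in [0.01, -0.005, 0.02, 0.0, 0.015]]
print(rewards)
```

Because each reward depends only on the latest return and two running moments, it suits online training where a full-history Sharpe ratio would be unstable to differentiate.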
In a nutshell, this thesis not only addresses individual challenges in financial risk management but also incorporates them into a comprehensive framework; from enhancing the
accuracy of credit risk classification, through the improvement and understanding of market
volatility, to optimisation of investment strategies. These methodologies collectively show
the potential of machine learning to improve financial risk management.
Impact of Imaging and Distance Perception in VR Immersive Visual Experience
Virtual reality (VR) headsets have evolved to include unprecedented viewing quality. Meanwhile, they have become lightweight, wireless, and low-cost, which has opened them up to new applications and a much wider audience. VR headsets can now provide users with greater understanding of events and accuracy of observation, making decision-making faster and more effective. However, immersive technologies have seen slow uptake, with the adoption of virtual reality limited to a few applications, typically related to entertainment. This reluctance appears to be due to the often-necessary change of operating paradigm and some scepticism towards the "VR advantage". The need therefore arises to evaluate the contribution that a VR system can make to user performance, for example in monitoring and decision-making. This will help system designers understand when immersive technologies can be proposed to replace or complement standard display systems such as a desktop monitor.
In parallel to the evolution of VR headsets there has been that of 360° cameras, which are now capable of instantly acquiring photographs and videos in stereoscopic 3D (S3D) modality at very high resolutions. 360° images are innately suited to VR headsets, where the captured view can be observed and explored through the natural rotation of the head. Acquired views can even be experienced and navigated from the inside as they are captured.
The combination of omnidirectional images and VR headsets has opened up a new way of creating immersive visual representations. We call it: photo-based VR. This represents a new methodology that combines traditional model-based rendering with high-quality omnidirectional texture-mapping. Photo-based VR is particularly suitable for applications related to remote visits and realistic scene reconstruction, useful for monitoring and surveillance systems, control panels and operator training.
The presented PhD study investigates the potential of photo-based VR representations. It starts by evaluating the role of immersion and user’s performance in today's graphical visual experience, to then use it as a reference to develop and evaluate new photo-based VR solutions. With the current literature on photo-based VR experience and associated user performance being very limited, this study builds new knowledge from the proposed assessments.
We conduct five user studies on a few representative applications, examining how visual representations can be affected by system factors (camera- and display-related) and how they can influence human factors (such as realism, presence, and emotions). Particular attention is paid to realistic depth perception, for which we develop targeted solutions for photo-based VR. They are intended to provide users with a correct perception of space dimensions and object size. We call it: true-dimensional visualization.
The presented work contributes to unexplored fields including photo-based VR and true-dimensional visualization, offering immersive system designers a thorough comprehension of the benefits, potential, and type of applications in which these new methods can make the difference.
This thesis manuscript and its findings have been partly presented in scientific publications. In particular, five conference papers in Springer and IEEE symposia, [1], [2], [3], [4], [5], and one journal article in an IEEE periodical, [6], have been published.
The threat of ransomware in the food supply chain: a challenge for food defence
In the food industry, awareness of the need for food defence strategies has accelerated in recent years, in particular with regard to mitigating the threat of ransomware. During the Covid-19 pandemic there were a number of high-profile organised food defence attacks on the food industry using ransomware, raising urgent questions over the extent of the sector’s vulnerability to cyber-attack. This paper explores food defence through the lens of contemporary ransomware attacks in order to frame the need for an effective ransomware defence strategy at organisational and industry level. Food defence strategies have historically focused on extortion and sabotage as threats, but often in terms of physical rather than cyber-related attacks. The globalisation, digitalisation and integration of food supply chains can increase the level of vulnerability to ransomware. Ransomware is an example of an organised food defence threat that can operationalise both extortion and sabotage, but the perpetrators are remote, non-visible and often anonymous. Organisations need to adopt an effective food defence strategy that reduces the risk of a ransomware attack and enables targeted and swift action in the event that an incident occurs. Further collaboration between government and the private sector is needed for the development of effective governance structures addressing the risk of ransomware attacks. The novelty of this article lies in analysing the issue of ransomware attacks from the perspective of the food sector and food defence strategy. This study is of potential interest to academics, policy makers and those working in the industry.
Being Multicultural in the Workplace
As the workforce becomes increasingly diverse and organizations elevate their efforts to address issues of diversity, equity, and inclusion (DEI), it is critical to engage in a deeper investigation of the experiences of multicultural individuals at work. In this qualitative study, nine multicultural individuals were interviewed using a sociological lens to gain their perspective on the relationship between their identity and their work experiences. The primary research questions that guided this study were: (a) how do multicultural individuals influence the workplace? In turn, (b) how do their workplace experiences affect their identity and sense of self? Data were coded and thoroughly analyzed for emergent themes. This study provides important insight into how multicultural individuals define their multicultural identity, the personal and professional qualities they feel they bring to the workplace, and the challenges they confront due to their identity. This study also discusses the availability of resources related to diversity, equity, and inclusion and what participants feel they need for a more equitable and supportive work experience. This study clarifies the social construction of inequality that occurs as multicultural individuals interact with their colleagues and employers and the potential impact these interactions have on their well-being and the productivity of the organization for which they work. The participants’ stories suggest the need for greater cultural competence among all employees, as well as greater representation of diversity, additional DEI programs, and more effective communication.
A Critical Review Of Post-Secondary Education Writing During A 21st Century Education Revolution
Educational materials are effective instruments which provide information and report new discoveries uncovered by researchers in specific areas of academia. Higher education, like other educational institutions, relies on instructional materials to inform its practice of educating adult learners. In post-secondary education, developmental English programs are tasked with meeting the needs of dynamic populations, thus there is a continuous need for research in this area to support its changing landscape. However, the majority of scholarly thought in this area centers on K-12 reading and writing. This paucity presents a challenge for the post-secondary community. This research study uses a qualitative content analysis to examine peer-reviewed journals from 2003-2017, developmental online websites, and a government issued document directed toward reforming post-secondary developmental education programs. These highly relevant sources aid educators in discovering informational support to apply best practices for student success. Developmental education serves the purpose of addressing literacy gaps for students transitioning to college-level work. The findings here illuminate the dearth of material offered to developmental educators. This study suggests the field of literacy research is fragmented and highlights an apparent blind spot in scholarly literature with regard to English writing instruction. This poses a quandary for post-secondary literacy researchers in the 21st century and establishes the necessity for the literacy research community to commit future scholarship toward equipping college educators teaching writing instruction to underprepared adult learners.
Causal Strategic Classification: A Tale of Two Shifts
When users can benefit from certain predictive outcomes, they may be prone to
act to achieve those outcomes, e.g., by strategically modifying their features.
The goal in strategic classification is therefore to train predictive models
that are robust to such behavior. However, the conventional framework assumes
that changing features does not change actual outcomes, which depicts users as
"gaming" the system. Here we remove this assumption, and study learning in a
causal strategic setting where true outcomes do change. Focusing on accuracy as
our primary objective, we show how strategic behavior and causal effects
underlie two complementary forms of distribution shift. We characterize these
shifts, and propose a learning algorithm that balances these two forces over
time and permits end-to-end training. Experiments on synthetic and
semi-synthetic data demonstrate the utility of our approach.
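The strategic response that drives the first of the two shifts can be sketched for a linear classifier: an agent moves its features just past the decision boundary when the effort cost is below the gain. The cost model, gain, and numbers below are illustrative, not the paper's setup:

```python
# Illustrative best response to a linear classifier. In the "gaming" view
# the true label is unchanged by the move; in the causal setting studied
# here, modifying the features would also shift the true outcome.
import numpy as np

def best_respond(x, w, b, cost_per_unit=1.0, gain=2.0):
    """Move x along w to reach the boundary if the effort is worth it."""
    score = x @ w + b
    if score >= 0:
        return x                        # already classified positive
    t = -score / (w @ w)                # step along w that zeroes the score
    effort = t * np.linalg.norm(w)      # Euclidean distance moved
    if effort * cost_per_unit < gain:
        return x + t * w                # land exactly on the boundary
    return x

w, b = np.array([1.0, 0.5]), -1.0
x = np.array([0.2, 0.2])
x_new = best_respond(x, w, b)
print(x_new, x_new @ w + b)  # score moves up to (approximately) 0
```

Under this response, the feature distribution the deployed model faces differs from the training distribution; when features are causal, the label distribution shifts as well, which is the second force the learning algorithm must balance.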
The impact of enterprise social networking on knowledge sharing between academic staff in higher education
This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London.
Higher education institutions have always considered knowledge sharing critical for research excellence, and finding proper methods for sharing knowledge across academic staff has therefore been a major issue for universities and knowledge management research. Recent evidence shows that many universities have embraced enterprise social networking tools to improve communication, relationships, partnerships, and knowledge sharing. To date, there is little understanding of the critical factors for online knowledge sharing behaviour between academic staff, or of how the impact of these factors on work benefits for academic staff differs between consumptive users and contributive users in higher education. This study employed the extended unified theory of acceptance and use of technology (UTAUT) to examine factors affecting knowledge sharing in relation to the consumptive and contributive use of enterprise social networks (ESNs). The study adopts a critical realism philosophical approach and employed a grounded theory mixed methods design. The conceptual model was validated through structural equation modelling based on an online survey of 254 academic staff using enterprise social networking as part of their work in the United Kingdom. The findings have significant theoretical and practical implications for researchers and policy makers. The research has developed a cohesive ESN use model by extending and modifying the unified theory of acceptance and use of technology. The findings indicate significant differences in the factors affecting consumptive and contributive usage patterns within ESNs. Given advances in communication technologies, this research argues that the earlier model suggested by Venkatesh et al. (2003) is no longer fit for purpose and that new communication tools can lead to improved knowledge sharing in higher education.
This research also makes valuable contributions to universities from a managerial viewpoint, suggesting that universities could help their scholars find a more comprehensive range of funding sources matching scholars' ideas
Auditable and performant Byzantine consensus for permissioned ledgers
Permissioned ledgers allow users to execute transactions against a data store, and retain proof of their execution in a replicated ledger. Each replica verifies the transactions’ execution and ensures that, in perpetuity, a committed transaction cannot be removed from the ledger. Unfortunately, this is not guaranteed by today’s permissioned ledgers, which can be re-written if an arbitrary number of replicas collude. In addition, the transaction throughput of permissioned ledgers is low, hampering real-world deployments, because they do not take advantage of multi-core CPUs and hardware accelerators.
This thesis explores how permissioned ledgers and their consensus protocols can be made auditable in perpetuity; even when all replicas collude and re-write the ledger. It also addresses how Byzantine consensus protocols can be changed to increase the execution throughput of complex transactions. This thesis makes the following contributions:
1. Always auditable Byzantine consensus protocols. We present a permissioned ledger system that can assign blame to individual replicas regardless of how many of them misbehave. This is achieved by signing and storing consensus protocol messages in the ledger and providing clients with signed, universally-verifiable receipts.
2. Performant transaction execution with hardware accelerators. Next, we describe a cloud-based ML inference service that provides strong integrity guarantees, while staying compatible with current inference APIs. We change the Byzantine consensus protocol to execute machine learning (ML) inference computation on GPUs to optimize throughput and latency of ML inference computation.
3. Parallel transaction execution on multi-core CPUs. Finally, we introduce a permissioned ledger that executes transactions, in parallel, on multi-core CPUs. We separate the execution of transactions between the primary and secondary replicas. The primary replica executes transactions on multiple CPU cores and creates a dependency graph of the transactions, which the secondary replicas use to execute transactions in parallel.
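The dependency-graph idea in the third contribution can be sketched with read/write sets and a standard-library topological sorter; the transactions and conflict rule below are illustrative, not the system's actual implementation:

```python
# Sketch: a primary orders transactions and records a dependency graph based
# on overlapping read/write sets; replicas then replay independent
# transactions in parallel. Transaction contents here are made up.
from graphlib import TopologicalSorter

# Each transaction declares the keys it reads and writes.
txs = {
    "t1": {"reads": set(),      "writes": {"a"}},
    "t2": {"reads": {"a"},      "writes": {"b"}},
    "t3": {"reads": set(),      "writes": {"c"}},   # independent of t1/t2
    "t4": {"reads": {"b", "c"}, "writes": {"d"}},
}

def conflicts(t, u):
    """Two transactions conflict if either writes a key the other touches."""
    return bool(txs[t]["writes"] & (txs[u]["reads"] | txs[u]["writes"])
                or txs[u]["writes"] & txs[t]["reads"])

order = list(txs)  # the primary's serial order
deps = {t: {u for u in order[:i] if conflicts(u, t)}
        for i, t in enumerate(order)}

ts = TopologicalSorter(deps)
ts.prepare()
batches = []
while ts.is_active():
    batch = tuple(sorted(ts.get_ready()))  # these can run in parallel
    batches.append(batch)
    ts.done(*batch)
print(batches)
```

Each batch contains mutually non-conflicting transactions, so a replica can dispatch a whole batch across CPU cores while preserving the primary's serial order for conflicting pairs.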
Mapping the Focal Points of WordPress: A Software and Critical Code Analysis
Programming languages or code can be examined through numerous analytical lenses. This project is a critical analysis of WordPress, a prevalent web content management system, applying four modes of inquiry. The project draws on theoretical perspectives and areas of study in media, software, platforms, code, language, and power structures. The applied research is based on Critical Code Studies, an interdisciplinary field of study that holds the potential as a theoretical lens and methodological toolkit to understand computational code beyond its function. The project begins with a critical code analysis of WordPress, examining its origins and source code and mapping selected vulnerabilities. An examination of the influence of digital and computational thinking follows this. The work also explores the intersection of code patching and vulnerability management and how code shapes our sense of control, trust, and empathy, ultimately arguing that a rhetorical-cultural lens can be used to better understand code's controlling influence. Recurring themes throughout these analyses and observations are the connections to power and vulnerability in WordPress' code and how cultural, processual, rhetorical, and ethical implications can be expressed through its code, creating a particular worldview. Code's emergent properties help illustrate how human values and practices (e.g., empathy, aesthetics, language, and trust) become encoded in software design and how people perceive the software through its worldview. These connected analyses reveal cultural, processual, and vulnerability focal points and the influence these entanglements have concerning WordPress as code, software, and platform. WordPress is a complex sociotechnical platform worthy of further study, as is the interdisciplinary merging of theoretical perspectives and disciplines to critically examine code.
Ultimately, this project helps further enrich the field by introducing focal points in code, examining sociocultural phenomena within the code, and offering techniques to apply critical code methods