Discussing the role of TikTok sharing practices in everyday social life
A crucial element of TikTok consumption is the act of sharing TikTok videos with others, such as friends. In this article I draw on fieldwork with young adult TikTok users based in the United Kingdom to investigate this practice. I show how people use TikTok's For You Page as a resource to facilitate social relationships both at a distance and in settings of physical co-presence. I highlight how TikTok clips are shared in a phatic manner to activate social relationships, for example by communicating messages of "thinking about you" or by referencing TikTok memes in everyday conversations. Attending to sharing practices, I argue, provides a fruitful way to understand how self-identities and interpersonal relationships are articulated in social media environments increasingly organized around the logic of "personalization".
Applications of Deep Learning Models in Financial Forecasting
In financial markets, deep learning techniques have sparked a revolution, reshaping conventional approaches and amplifying predictive capabilities. This thesis explored applications of deep learning models to unravel insights and methodologies aimed at advancing financial forecasting.
The crux of the research problem lies in the application of predictive models within financial domains characterised by high volatility and uncertainty. This thesis investigated advanced deep-learning methodologies in the context of financial forecasting, addressing the challenges posed by the dynamic nature of financial markets. These challenges were tackled by exploring a range of techniques, including convolutional neural networks (CNNs), long short-term memory networks (LSTMs), autoencoders (AEs), and variational autoencoders (VAEs), along with approaches such as encoding financial time series into images. Through this analysis, methodologies such as transfer learning, CNNs, LSTMs, generative modelling, and image encoding of time-series data were examined; collectively, they offered a comprehensive toolkit for extracting meaningful insights from financial data.
The present work investigated the practicality of a deep CNN-LSTM model within the Directional Change (DC) framework to predict significant DC events, a task crucial for timely decision-making in financial markets. Furthermore, the potential of autoencoders and variational autoencoders to enhance forecasting accuracy and remove noise from financial time series was explored; leveraging their representational capacity, these models offered promising avenues for improved data representation and subsequent forecasting. To further contribute to financial prediction capabilities, a deep multi-model approach was developed that harnessed the power of pre-trained computer vision models. This approach aimed to predict the VVIX, exploiting the cross-disciplinary synergy between computer vision and financial forecasting. By integrating knowledge from these domains, it provided novel insights into the prediction of market volatility.
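As an illustration of the image-encoding idea mentioned above, the sketch below (an assumption for illustration, not the thesis's actual implementation) encodes a short price series as a Gramian Angular Summation Field, one common way of turning a time series into an image suitable for CNN input:

```python
import numpy as np

def gramian_angular_field(series):
    """Encode a 1-D time series as a Gramian Angular Summation Field image."""
    x = np.asarray(series, dtype=float)
    # Rescale to [-1, 1] so that arccos is defined
    x = 2 * (x - x.min()) / (x.max() - x.min()) - 1
    phi = np.arccos(np.clip(x, -1.0, 1.0))  # polar-coordinate angles
    # GASF: cos(phi_i + phi_j) for every pair of time steps
    return np.cos(phi[:, None] + phi[None, :])

prices = np.array([100.0, 101.5, 99.8, 102.3, 103.1, 101.0])  # toy price series
img = gramian_angular_field(prices)  # a 6x6 symmetric "image"
```

The resulting square matrix preserves temporal correlations along its diagonals, which is what makes pre-trained vision models applicable to time-series data.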
A survey on vulnerability of federated learning: A learning algorithm perspective
Federated Learning (FL) has emerged as a powerful paradigm for training Machine Learning (ML), particularly Deep Learning (DL), models on multiple devices or servers while keeping data localized at the owners' sites. Because it does not centralize data, FL holds promise for scenarios where data integrity, privacy, and security are critical. However, this decentralized training process also opens new avenues for adversaries to launch unique attacks, making it urgent to understand the vulnerabilities and corresponding defense mechanisms from a learning-algorithm perspective. This review paper takes a comprehensive look at malicious attacks against FL, categorizing them from new perspectives on attack origins and targets, and providing insights into their methodology and impact. We focus on threat models targeting the learning process of FL systems. Based on the source and target of the attack, we categorize existing threat models into four types: Data to Model (D2M), Model to Data (M2D), Model to Model (M2M), and composite attacks. For each attack type, we discuss the proposed defense strategies, highlighting their effectiveness, assumptions, and potential areas for improvement. Defense strategies have evolved from using a singular metric to exclude malicious clients towards multifaceted approaches that examine client models at various phases. Our research indicates that the to-learn data, the learning gradients, and the learned model at different stages can all be manipulated to initiate malicious attacks, ranging from undermining model performance to reconstructing private local data and inserting backdoors. These threats are also becoming more insidious: while earlier studies typically amplified malicious gradients, recent efforts subtly alter the least significant weights in local models to bypass defense measures.
This literature review provides a holistic understanding of the current FL threat landscape and highlights the importance of developing robust, efficient, and privacy-preserving defenses to ensure the safe and trusted adoption of FL in real-world applications. The categorized bibliography can be found at: https://github.com/Rand2AI/Awesome-Vulnerability-of-Federated-Learning
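To illustrate why defenses moved beyond plain averaging of client updates, the following minimal sketch (hypothetical data and attacker, not drawn from the survey) shows how a single amplified-gradient client dominates FedAvg-style mean aggregation, while a coordinate-wise median remains robust:

```python
import numpy as np

def fedavg(updates):
    # Plain averaging: a single amplified update can dominate the result
    return np.mean(updates, axis=0)

def median_aggregate(updates):
    # Coordinate-wise median: robust to a minority of outlier clients
    return np.median(updates, axis=0)

rng = np.random.default_rng(0)
honest = [rng.normal(0.0, 0.1, size=4) for _ in range(9)]  # 9 benign updates
malicious = [np.full(4, 100.0)]                            # amplified-gradient attacker
updates = honest + malicious

avg = fedavg(updates)            # pulled far from the honest consensus
med = median_aggregate(updates)  # stays close to the honest updates
```

This is exactly the "singular metric" era of defenses described above; the subtler attacks on least-significant weights mentioned in the survey are designed to evade such per-coordinate statistics.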
On the Generation of Realistic and Robust Counterfactual Explanations for Algorithmic Recourse
The recent widespread deployment of machine learning algorithms presents many new challenges. Machine learning algorithms are usually opaque and can be particularly difficult to interpret. When humans are involved, algorithmic and automated decisions can negatively impact people's lives. End users would therefore like to be protected against potential harm. One popular way to achieve this is to provide end users with access to algorithmic recourse, which gives those negatively affected by algorithmic decisions the opportunity to reverse unfavorable outcomes, e.g., from a loan denial to a loan acceptance. In this thesis, we design recourse algorithms to meet various end-user needs. First, we propose methods for the generation of realistic recourses. We use generative models to suggest recourses likely to occur under the data distribution. To this end, we shift the recourse action from the input space to the generative model's latent space, allowing us to generate counterfactuals that lie in regions with data support. Second, we observe that small changes applied to the recourses prescribed to end users are likely to invalidate the suggested recourse after it is noisily implemented in practice. Motivated by this observation, we design methods for the generation of robust recourses and for assessing the robustness of recourse algorithms to data deletion requests. Third, the lack of a commonly used codebase for counterfactual explanation and algorithmic recourse algorithms, together with the vast array of evaluation measures in the literature, makes it difficult to compare the performance of different algorithms. To solve this problem, we provide an open-source benchmarking library that streamlines the evaluation process and can be used for benchmarking, rapidly developing new methods, and setting up new experiments. In summary, our work contributes to a more reliable interaction between end users and machine-learned models by covering fundamental aspects of the recourse process, and suggests new solutions towards generating realistic and robust counterfactual explanations for algorithmic recourse.
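For a linear model, the notion of a minimal counterfactual has a closed form; the sketch below (a hypothetical loan-scoring model, not the thesis's method) computes the smallest L2 change to an input that flips a denial into an acceptance:

```python
import numpy as np

def closest_counterfactual(x, w, b):
    """Minimal L2 change moving x onto the decision boundary of w.x + b = 0,
    then a tiny nudge across it so the decision actually flips."""
    margin = w @ x + b
    x_cf = x - (margin / (w @ w)) * w  # orthogonal projection onto the boundary
    return x_cf - 1e-6 * np.sign(margin) * w / np.linalg.norm(w)

# Hypothetical loan-scoring model: features = [income, debt_ratio]
w = np.array([2.0, -3.0])
b = -1.0
x = np.array([0.2, 0.5])  # denied applicant: score = -2.1 < 0
x_cf = closest_counterfactual(x, w, b)  # nearest accepted input
```

Real recourse methods operate on non-linear models, where this projection has no closed form; the latent-space approach described above replaces it with a search constrained to regions the generative model assigns data support.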
E-learning in the Cloud Computing Environment: Features, Architecture, Challenges and Solutions
The need to constantly and consistently improve the quality and quantity of the educational system is essential. E-learning has emerged from the rapid cycle of change and the expansion of new technologies. Advances in information technology have increased network bandwidth and data access speed while reducing data storage costs. In recent years, the implementation of cloud computing in educational settings has garnered the interest of major companies, leading to substantial investments in this area. Cloud computing improves engineering education by providing an environment that can be accessed from anywhere and by allowing access to educational resources on demand. Cloud computing describes the provision of hosting services over the Internet; it is predicted to be the next generation of information technology architecture and offers great potential to enhance productivity and reduce costs. Cloud service providers offer their processing and memory resources to users, who pay for what they use and can access these resources for their computations anytime and anywhere. Cloud computing thus increases productivity, conserves information technology resources, and enhances computing power, turning processing into a constantly accessible utility. Using cloud computing in a system that supports remote education has its own set of characteristics and requires a unique strategy. Thanks to cloud computing, students can access a wide variety of instructional engineering materials at any time and from any location, and they can share their materials with other community members. The use of cloud computing in e-learning offers several advantages, such as virtually unlimited computing resources, high scalability, and reduced costs.
An improvement in the quality of teaching and learning is achieved through flexible cloud computing, which offers a variety of resources for educators and students. In light of this, the current research presents cloud computing technology as a suitable and superior option for e-learning systems.
Unleashing the power of artificial intelligence for climate action in industrial markets
Artificial Intelligence (AI) is a game-changing capability in industrial markets that can accelerate humanity's race against climate change. Focusing on a resource-hungry and pollution-intensive industry, this study explores AI-powered climate service innovation capabilities and their overall effects. The study develops and validates an AI model, identifying three primary dimensions and nine subdimensions. Based on a dataset from the fast fashion industry, the findings show that AI-powered climate service innovation capabilities significantly influence both environmental and market performance, with environmental performance acting as a partial mediator. Specifically, the results identify the key elements of an AI-informed framework for climate action and show how it can be used to develop a range of mitigation, adaptation, and resilience initiatives in response to climate change.
Graph Neural Network-based EEG Classification: A Survey
Graph neural networks (GNNs) are increasingly used to classify EEG signals for tasks such as emotion recognition, motor imagery, and neurological diseases and disorders. A wide range of methods has been proposed to design GNN-based classifiers, so there is a need for a systematic review and categorisation of these approaches. We exhaustively search the published literature on this topic and derive several categories for comparison. These categories highlight the similarities and differences among the methods. The results suggest a prevalence of spectral graph convolutional layers over spatial ones. Additionally, we identify standard forms of node features, the most popular being the raw EEG signal and differential entropy. Our results summarise the emerging trends in GNN-based approaches for EEG classification. Finally, we discuss several promising research directions, such as exploring the potential of transfer learning methods and appropriate modelling of cross-frequency interactions.
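Differential entropy, one of the node features identified above, has a simple closed form when a band-filtered EEG segment is treated as Gaussian; the sketch below (illustrative only, using a simulated segment rather than real EEG data) computes it:

```python
import numpy as np

def differential_entropy(signal):
    """Differential entropy of an approximately Gaussian EEG band segment:
    DE = 0.5 * ln(2 * pi * e * variance)."""
    var = np.var(signal)
    return 0.5 * np.log(2 * np.pi * np.e * var)

rng = np.random.default_rng(42)
segment = rng.normal(0.0, 1.0, size=1000)  # stand-in for one channel, one band
de = differential_entropy(segment)         # ~1.42 for unit variance
```

In the surveyed pipelines this value is typically computed per channel and per frequency band, and the resulting vector becomes the node feature for that electrode in the graph.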
Humans in the Loop: People at the Heart of Systems Development
Despite increased automation in the process, people are (still) at the heart of software systems development. This chapter adopts a sociotechnical perspective and explores three areas that characterize the role of humans in software systems development: people as creators, people as users, and people in partnership with systems. Software is created by specialist developers such as software engineers and by non-specialists such as "makers." Software developers build communities and operate within several cultures (e.g., professional, company, and national), all of which affect both the development process and the resulting product. Software is used by people. Users also operate within communities and cultures, which influence product use, and how systems are used feeds back into future systems development. People and systems are interdependent: they work in partnership to achieve a wide range of goals. However, software both supports what people want to do and shapes what can be done.