9,222 research outputs found
Towards Sybil Resilience in Decentralized Learning
Federated learning is a privacy-enforcing machine learning technology but
suffers from limited scalability. This limitation mostly originates from the
internet connection and memory capacity of the central parameter server, and
the complexity of the model aggregation function. Decentralized learning has
recently been emerging as a promising alternative to federated learning. This
novel technology eliminates the need for a central parameter server by
decentralizing the model aggregation across all participating nodes. Numerous
studies have been conducted on improving the resilience of federated learning
against poisoning and Sybil attacks, whereas the resilience of decentralized
learning remains largely unstudied. This research gap serves as the main
motivator for this study, in which our objective is to improve the Sybil
poisoning resilience of decentralized learning.
We present SybilWall, an innovative algorithm focused on increasing the
resilience of decentralized learning against targeted Sybil poisoning attacks.
By combining a Sybil-resistant aggregation function based on similarity between
Sybils with a novel probabilistic gossiping mechanism, we establish a new
benchmark for scalable, Sybil-resilient decentralized learning.
A comprehensive empirical evaluation demonstrated that SybilWall outperforms
existing state-of-the-art solutions designed for federated learning scenarios
and is the only algorithm to obtain consistent accuracy over a range of
adversarial attack scenarios. We also found SybilWall to diminish the utility
of creating many Sybils, as our evaluations demonstrate a higher success rate
among adversaries employing fewer Sybils. Finally, we suggest a number of
possible improvements to SybilWall and highlight promising future research
directions.
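As a rough illustration of the similarity-based aggregation idea (the function and weighting scheme below are hypothetical sketches for intuition, not SybilWall's actual algorithm), near-duplicate updates from Sybil replicas can be down-weighted before averaging, since honest nodes with non-IID data tend to submit more divergent updates:

```python
import numpy as np

def similarity_weighted_aggregate(updates):
    """Aggregate model updates, down-weighting suspiciously similar ones.

    Sybil replicas tend to submit near-identical updates, while honest
    nodes diverge due to non-IID data. Each update's weight is scaled by
    (1 - max cosine similarity to any other update), clipped to [0, 1].
    """
    n = len(updates)
    U = np.stack([u / (np.linalg.norm(u) + 1e-12) for u in updates])
    sim = U @ U.T                     # pairwise cosine similarities
    np.fill_diagonal(sim, -np.inf)    # ignore self-similarity
    max_sim = sim.max(axis=1)
    weights = np.clip(1.0 - max_sim, 0.0, 1.0)
    if weights.sum() == 0:            # degenerate case: all identical
        weights = np.ones(n)
    weights /= weights.sum()
    return (weights[:, None] * np.stack(updates)).sum(axis=0)
```

With two identical Sybil updates and two divergent honest ones, the Sybil pair receives weight zero and the aggregate reduces to the mean of the honest updates.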
Advancing Adversarial Training by Injecting Booster Signal
Recent works have demonstrated that deep neural networks (DNNs) are highly
vulnerable to adversarial attacks. To defend against adversarial attacks, many
defense strategies have been proposed, among which adversarial training has
been demonstrated to be the most effective strategy. However, adversarial
training is known to sometimes hurt natural accuracy, and many works have
therefore focused on optimizing model parameters to mitigate the problem.
Different from these previous approaches, in this paper we propose to improve
adversarial robustness by using an external signal rather than the model
parameters. In the proposed method, a well-optimized universal external
signal, called a booster signal, is injected into the outside of the image so
that it does not overlap with the original content. It then boosts both
adversarial robustness and natural accuracy. The booster signal and the model
parameters are optimized collaboratively, step by step. Experimental results
show that the booster signal can improve both natural and robust accuracies
over recent state-of-the-art adversarial training methods. Also, optimizing
the booster signal is general and flexible enough to be adopted on any
existing adversarial training method.
Comment: Accepted at IEEE Transactions on Neural Networks and Learning Systems
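To illustrate the injection step described in the abstract (the function name, padding scheme, and shapes are assumptions for illustration, not the paper's exact formulation), the booster signal can be thought of as a learnable frame around the image that never overlaps the original pixels:

```python
import numpy as np

def inject_booster(image, booster, pad):
    """Place `image` at the center of a canvas whose border is the booster.

    The booster signal occupies only the padded frame around the image,
    so the original content is never overlapped. `booster` already has
    the padded (larger) shape; during training it would be optimized
    jointly with the model parameters.
    """
    h, w, c = image.shape
    canvas = booster.copy()           # padded frame carries the signal
    canvas[pad:pad + h, pad:pad + w, :] = image
    return canvas
```

A 4x4 image with a pad of 2 thus yields an 8x8 input whose border pixels come entirely from the booster signal.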
Evaluation Methodologies in Software Protection Research
Man-at-the-end (MATE) attackers have full control over the system on which
the attacked software runs, and try to break the confidentiality or integrity
of assets embedded in the software. Both companies and malware authors want to
prevent such attacks. This has driven an arms race between attackers and
defenders, resulting in a plethora of different protection and analysis
methods. However, it remains difficult to measure the strength of protections
because MATE attackers can reach their goals in many different ways and a
universally accepted evaluation methodology does not exist. This survey
systematically reviews the evaluation methodologies of papers on obfuscation, a
major class of protections against MATE attacks. For 572 papers, we collected
113 aspects of their evaluation methodologies, ranging from sample set types
and sizes, through sample treatment, to the measurements performed. We provide
detailed insights into how the academic state of the art evaluates both the
protections and analyses thereon. In summary, there is a clear need for better
evaluation methodologies. We identify nine challenges for software protection
evaluations, which represent threats to the validity, reproducibility, and
interpretation of research results in the context of MATE attacks.
Approximate Computing Survey, Part I: Terminology and Software & Hardware Approximation Techniques
The rapid growth of demanding applications in domains applying multimedia
processing and machine learning has marked a new era for edge and cloud
computing. These applications involve massive data and compute-intensive tasks,
and thus, typical computing paradigms in embedded systems and data centers are
stressed to meet the worldwide demand for high performance. Concurrently,
developments in the semiconductor field over the last 15 years have made
power a first-class design concern. As a result, the community of computing
systems is forced to find alternative design approaches to facilitate
high-performance and/or power-efficient computing. Among the examined
solutions, Approximate Computing has attracted an ever-increasing interest,
with research works applying approximations across the entire traditional
computing stack, i.e., at software, hardware, and architectural levels. Over
the last decade, a plethora of approximation techniques has emerged in software
(programs, frameworks, compilers, runtimes, languages), hardware (circuits,
accelerators), and architectures (processors, memories). The current article is
Part I of our comprehensive survey on Approximate Computing, and it reviews its
motivation, terminology, and principles, and classifies and presents the
technical details of the state-of-the-art software and hardware approximation
techniques.
Comment: Under Review at ACM Computing Surveys
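As a concrete instance of the software-level approximation techniques such surveys cover, loop perforation skips a fraction of loop iterations to trade accuracy for speed (a minimal sketch; the function and parameters are illustrative, not drawn from the survey):

```python
def perforated_mean(xs, stride=2):
    """Approximate the mean by skipping loop iterations (loop perforation).

    Executing only every `stride`-th iteration trades a small accuracy
    loss for a roughly `stride`-fold reduction in work, a classic
    software approximation when exact results are unnecessary.
    """
    sampled = xs[::stride]
    return sum(sampled) / len(sampled)
```

On the integers 0..99, the perforated estimate (49.0) is within 1% of the exact mean (49.5) while touching half the data.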
The Globalization of Artificial Intelligence: African Imaginaries of Technoscientific Futures
Imaginaries of artificial intelligence (AI) have transcended geographies of the Global North and become increasingly entangled with narratives of economic growth, progress, and modernity in Africa. This raises several issues such as the entanglement of AI with global technoscientific capitalism and its impact on the dissemination of AI in Africa. The lack of African perspectives on the development of AI exacerbates concerns of raciality and inclusion in the scientific research, circulation, and adoption of AI. My argument in this dissertation is that innovation in AI, in both its sociotechnical imaginaries and political economies, excludes marginalized countries, nations and communities in ways that not only bar their participation in the reception of AI, but also bar them from being part and parcel of its creation.
Underpinned by decolonial thinking, and perspectives from science and technology studies and African studies, this dissertation looks at how AI is reconfiguring the debate about development and modernization in Africa and the implications for local sociotechnical practices of AI innovation and governance. I examined AI in international development and industry across Kenya, Ghana, and Nigeria, by tracing Canada’s AI4D Africa program and following AI start-ups at AfriLabs. I used multi-sited case studies and discourse analysis to examine the data collected from interviews, participant observations, and documents.
In the empirical chapters, I first examine how local actors understand the notion of decolonizing AI and show that it has become a sociotechnical imaginary. I then investigate the political economy of AI in Africa and argue that despite Western efforts to integrate the African AI ecosystem globally, the AI epistemic communities in the continent continue to be excluded from dominant AI innovation spaces. Finally, I examine the emergence of a Pan-African AI imaginary and argue that AI governance can be understood as a state-building experiment in post-colonial Africa. The main issue at stake is that the lack of African perspectives in AI leads to negative impacts on innovation and limits the fair distribution of the benefits of AI across nations, countries, and communities, while at the same time excluding globally marginalized epistemic communities from the imagination and creation of AI.
Evaluation of different segmentation-based approaches for skin disorders from dermoscopic images
Bachelor's Final Thesis in Biomedical Engineering, Facultat de Medicina i Ciències de la Salut, Universitat de Barcelona. Academic year: 2022-2023. Tutors/Directors: Roser Sala Llonch, Christian Mata Miquel, Josep Munuera.
Skin cancer is the most common type of cancer in the world, and its incidence has been increasing over the past decades. Even with the most complex and advanced technologies, current image acquisition systems do not permit a reliable identification of the skin lesion by visual examination due to the challenging structure of the malignancy. This promotes the need for automatic skin lesion segmentation methods, both to assist physicians in determining the lesion's region during diagnosis and to serve as a preliminary step for the classification of the skin lesion. Accurate and precise segmentation is crucial for rigorous screening and monitoring of the disease's progression.
To address this concern, the present project carries out a state-of-the-art review of the most predominant conventional models for skin lesion segmentation, together with a market analysis. With the rise of automatic segmentation tools, a wide number of algorithms are currently in use, but many drawbacks arise when employing them for dermatological disorders due to the high-level presence of artefacts in the acquired images.
In light of the above, three segmentation techniques have been selected for this work: the level set method, an algorithm combining the GrabCut and k-means methods, and an automatic intensity-based algorithm developed by the Hospital Sant Joan de Déu de Barcelona research group. In addition, their performance is validated with a view to their further implementation in clinical training. The proposed methods were evaluated, and the reported outcomes obtained, using a publicly available skin lesion image database.
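A minimal sketch of the k-means component applied to pixel intensities (the function and its deterministic initialization are illustrative assumptions, not the thesis's implementation) shows how a two-cluster split can yield a coarse lesion mask, since lesions are typically darker than the surrounding skin:

```python
import numpy as np

def kmeans_intensity_segment(gray, k=2, iters=20):
    """Segment a grayscale image by k-means clustering of pixel intensities."""
    pixels = gray.reshape(-1).astype(float)
    # initialize centers evenly across the intensity range
    centers = np.linspace(pixels.min(), pixels.max(), k)
    for _ in range(iters):
        # assign each pixel to its nearest intensity center
        labels = np.argmin(np.abs(pixels[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean()
    lesion = np.argmin(centers)  # darker cluster taken as the lesion
    return (labels == lesion).reshape(gray.shape)
```

In practice such an intensity-only mask would be refined (e.g. by GrabCut or artefact removal), since hairs and ruler marks in dermoscopic images also appear dark.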
Management strategies and contributory factors for resistance exercise-induced muscle damage: an exploration of dietary protein, exercise load, and sex
The World Health Organisation recommends that resistance exercise be performed at least twice per week to benefit general health and wellbeing. However, resistance exercise is associated with acute muscle damage that potentially can dampen muscle adaptations promoted by chronic resistance training. The extent to which muscle is damaged by exercise is influenced by various factors, including age, training status, exercise type, and – notable to this thesis – sex. To this end, establishing sex-specific management strategies for exercise-induced muscle damage (EIMD) is important to optimise the benefits of exercise. Two EIMD management strategies were focussed on in this thesis: dietary protein supplementation and exercise load manipulation.
It was identified in this thesis that research into the impact of both protein supplementation and exercise load on EIMD heavily underrepresents female populations (chapters 3 and 5), despite well-documented sex differences in EIMD responses. Therefore, future research priority should be placed on bridging the sex data gap by conducting high-quality studies centred on female-focussed and sex-comparative methodological designs.
Both peri-exercise protein supplementation and exercise load manipulation in favour of lighter loads were revealed to be effective management strategies for resistance EIMD in males through systematic and scoping review of the current literature (chapters 3 and 5, respectively). Due to a lack of data from females, it is only appropriate for these strategies to be recommended for males at present. To decipher whether protein supplementation and lower exercise loads are beneficial for managing EIMD in females, a randomised controlled trial (RCT) (chapter 4) and a protocol for an RCT (chapter 6) involving male and female participants are presented in this thesis.
The incorporation of ecologically-valid resistance exercise in the RCT in chapter 4 highlighted that even mild muscle damage is attenuated in females, reflected in diminished increases in post-exercise creatine kinase concentration and muscle soreness compared with males; however, the reason for this difference requires further investigation. This study, while supporting sex differences, contrasted with previous studies, as neither males nor females experienced an attenuation of EIMD during milk protein supplementation. This difference likely owed to the lower severity of muscle damage induced in the current study relative to previous studies; accordingly, future research should seek to discover alternative management strategies for mild EIMD. A protocol for an RCT examining the impact of exercise load on EIMD in untrained males and females is described in chapter 6 of this thesis and may be used as guidance for researchers developing similar, sex-comparative studies. It was hypothesised that females would experience attenuated muscle damage relative to males and that low-load exercise would induce less muscle damage than high-load exercise in both sexes.
A lack of methodological consistency among EIMD studies was a recurring finding throughout this thesis, which posed an issue when attempting to compare between-study outcomes and reach a consensus. Achieving greater uniformity in study designs by adopting comparable methods relating to EIMD markers and time-points of assessment would help improve understanding of the factors influencing the magnitude of EIMD and effective management strategies. While there are limitations with several EIMD markers – for example the variability of biomarkers and subjectivity of perceptual assessments – once the optimal markers are determined, these should be consistently used moving forward.
Overall, this thesis has contributed to the current body of knowledge by demonstrating that milk protein ingestion is not an effective management strategy for muscle damage following ecologically-valid resistance exercise; therefore, alternative strategies to mitigate mild muscle damage should be investigated. Further, this work supported previous reports of sex differences in EIMD and indicated that the attenuation of EIMD in females relative to males was not attributed to sex differences in body composition; thus, the aetiology of such differences necessitates further exploration by means of high-quality sex-comparative research. Finally, this thesis reached the consensus recommendation that lower exercise loads can be utilised to reduce muscle damage in males; nonetheless, supporting evidence for the application of this recommendation to females is required.
REMOVING THE MASK: VIDEO FINGERPRINTING ATTACKS OVER TOR
The Onion Router (Tor) is used by adversaries and warfighters alike to encrypt session information and gain anonymity on the internet. Since its creation in 2002, Tor has gained popularity among terrorist organizations, human traffickers, and illegal drug distributors who wish to use Tor services to mask their identity while engaging in illegal activities. Fingerprinting attacks assist in thwarting these attempts. Website fingerprinting (WF) attacks have proven successful at linking a user to the website they have viewed over an encrypted Tor connection. With consumer video streaming making up a large majority of internet traffic and sites like YouTube remaining among the top visited sites in the world, it is just as likely that adversaries are using videos to spread misinformation, illegal content, and terrorist propaganda. Video fingerprinting (VF) attacks use encrypted network traffic to predict the content of encrypted video sessions in closed- and open-world scenarios. This research builds upon an existing dataset of encrypted video session data and uses statistical analysis to train a machine-learning classifier, using deep fingerprinting (DF), to predict videos viewed over Tor. DF is a machine learning technique that relies on convolutional neural networks (CNN) and can be used to conduct VF attacks against Tor. By analyzing the results of these experiments, we can more accurately identify malicious video streaming activity over Tor.
Approved for public release. Distribution is unlimited.
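As a sketch of the input representation commonly used in deep fingerprinting (the function name and encoding details are illustrative assumptions, not necessarily this thesis's pipeline), a captured trace is typically reduced to a fixed-length sequence of packet directions before being fed to the CNN:

```python
def to_direction_sequence(packet_sizes, length=5000):
    """Convert a captured trace into a fixed-length +/-1 direction sequence.

    Outgoing packets (positive size) map to +1, incoming to -1; the
    sequence is truncated or zero-padded to `length`, the fixed input
    width expected by a fingerprinting CNN.
    """
    seq = [1 if s > 0 else -1 for s in packet_sizes][:length]
    seq += [0] * (length - len(seq))
    return seq
```

Discarding sizes and timing in favor of direction alone is a deliberate simplification: direction patterns survive Tor's cell-level padding better than raw sizes do.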
An empirical investigation of the relationship between integration, dynamic capabilities and performance in supply chains
This research aimed to develop an empirical understanding of the relationships between integration,
dynamic capabilities and performance in the supply chain domain, based on which, two conceptual
frameworks were constructed to advance the field. The core motivation for the
research was that, at the time of writing the thesis, the combined relationship
between the three concepts had not yet been examined, although their pairwise
interrelationships had been studied individually.
To achieve this aim, deductive and inductive reasoning logics were utilised to guide the qualitative
study, which was undertaken via multiple case studies to investigate lines of enquiry that would
address the research questions formulated. This is consistent with the author’s philosophical
adoption of the ontology of relativism and the epistemology of constructionism, which was considered
appropriate to address the research questions. Empirical data and evidence were collected, and
various triangulation techniques were employed to ensure their credibility. Some key features of
grounded theory coding techniques were drawn upon for data coding and analysis, generating two
levels of findings. These revealed that whilst integration and dynamic
capabilities were crucial in improving performance, performance in turn also
informed them, reflecting a cyclical and iterative relationship rather than a
purely linear one. Adopting a holistic approach towards
the relationship was key in producing complementary strategies that can deliver sustainable supply
chain performance.
The research makes theoretical, methodological and practical contributions to the field of supply
chain management. The theoretical contribution includes the development of two emerging
conceptual frameworks at the micro and macro levels. The former provides greater specificity, as it
allows meta-analytic evaluation of the three concepts and their dimensions, providing a detailed
insight into their correlations. The latter gives a holistic view of their relationships and how they are
connected, reflecting a middle-range theory that bridges theory and practice. The methodological
contribution lies in presenting models that address gaps associated with the inconsistent use of
terminologies in philosophical assumptions, and lack of rigor in deploying case study research
methods. In terms of its practical contribution, this research offers insights that practitioners could
adopt to enhance their performance. They can do so without necessarily having
to forgo certain desired outcomes, by using targeted integrative strategies and
drawing on their dynamic capabilities.
FedTracker: Furnishing Ownership Verification and Traceability for Federated Learning Model
Federated learning (FL) is a distributed machine learning paradigm allowing
multiple clients to collaboratively train a global model without sharing their
local data. However, FL entails exposing the model to various participants.
This poses a risk of unauthorized model distribution or resale by a malicious
client, compromising the intellectual property rights of the FL group. To deter
such misbehavior, it is essential to establish a mechanism for verifying the
ownership of the model as well as tracing its origin to the leaker among the
FL participants. In this paper, we present FedTracker, the first FL model
protection framework that provides both ownership verification and
traceability. FedTracker adopts a bi-level protection scheme consisting of a
global watermark mechanism and a local fingerprint mechanism. The former
authenticates the ownership of the global model, while the latter identifies
which client the model is derived from. FedTracker leverages Continual Learning
(CL) principles to embed the watermark in a way that preserves the utility of
the FL model on both the primitive task and the watermark task. FedTracker also
devises a novel metric to better discriminate different fingerprints.
Experimental results show that FedTracker is effective in ownership
verification and traceability, and maintains good fidelity and robustness
against various watermark removal attacks.
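As a generic illustration of how parameter-level watermarking can support ownership verification (a simplified sign-based sketch with hypothetical names, not FedTracker's actual CL-based watermark or fingerprint mechanism), a bit-string can be embedded into the signs of a few selected weights and later checked:

```python
import numpy as np

def embed_watermark(weights, bits, idx, strength=0.01):
    """Embed a bit-string into chosen weights by forcing their signs.

    Bit 1 -> positive weight, bit 0 -> negative; `strength` bounds the
    perturbation so the model's primary task is barely disturbed.
    """
    w = weights.copy()
    for b, i in zip(bits, idx):
        if (w[i] > 0) != bool(b):
            w[i] = strength if b else -strength
    return w

def verify_watermark(weights, bits, idx):
    """Return the fraction of embedded bits recovered from weight signs."""
    extracted = [int(weights[i] > 0) for i in idx]
    return sum(e == b for e, b in zip(extracted, bits)) / len(bits)
```

A verification score near 1.0 on a suspect model then serves as ownership evidence, since an unmarked model matches the bit-string only by chance.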