Approximate Computing Survey, Part I: Terminology and Software & Hardware Approximation Techniques
The rapid growth of demanding applications in domains applying multimedia
processing and machine learning has marked a new era for edge and cloud
computing. These applications involve massive data and compute-intensive tasks,
and thus, typical computing paradigms in embedded systems and data centers are
stressed to meet the worldwide demand for high performance. Concurrently, the
semiconductor landscape of the last 15 years has established power as a first-class design concern. As a result, the computing-systems community has been forced to seek alternative design approaches that facilitate high-performance and/or power-efficient computing. Among the examined
solutions, Approximate Computing has attracted an ever-increasing interest,
with research works applying approximations across the entire traditional
computing stack, i.e., at the software, hardware, and architectural levels. Over the last decade, a plethora of approximation techniques has emerged in software (programs, frameworks, compilers, runtimes, languages), hardware (circuits, accelerators), and architectures (processors, memories). This article is Part I of our comprehensive survey on Approximate Computing: it reviews the field's motivation, terminology, and principles, and it classifies and presents the technical details of state-of-the-art software and hardware approximation techniques.
Comment: Under review at ACM Computing Surveys
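As a concrete illustration of one software-level technique covered by surveys in this area, the sketch below shows loop perforation in Python: skipping a fraction of loop iterations trades a small amount of accuracy for proportionally less work. The example is ours, not code from the surveyed works.

```python
# Loop perforation, a classic software approximation technique:
# visit only every `skip`-th element and accept a small error in
# exchange for proportionally less work. Illustrative sketch only.

def mean_exact(xs):
    return sum(xs) / len(xs)

def mean_perforated(xs, skip=4):
    # The perforated loop touches ~1/skip of the data; for
    # well-mixed inputs the sample mean tracks the exact mean.
    sample = xs[::skip]
    return sum(sample) / len(sample)

data = [float(i % 97) for i in range(1_000_000)]
print(mean_exact(data))        # exact result
print(mean_perforated(data))   # ~4x less work, approximate result
```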
MAS: Towards Resource-Efficient Federated Multiple-Task Learning
Federated learning (FL) is an emerging distributed machine learning method
that empowers in-situ model training on decentralized edge devices. However,
multiple simultaneous FL tasks could overload resource-constrained devices. In
this work, we propose the first FL system to effectively coordinate and train
multiple simultaneous FL tasks. We first formalize the problem of training
simultaneous FL tasks. Then, we present our new approach, MAS (Merge and
Split), to optimize the performance of training multiple simultaneous FL tasks.
MAS starts by merging FL tasks into an all-in-one FL task with a multi-task
architecture. After training for a few rounds, MAS splits the all-in-one FL
task into two or more FL tasks by using the affinities among tasks measured
during the all-in-one training. It then continues training each split of FL
tasks based on model parameters from the all-in-one training. Extensive
experiments demonstrate that MAS outperforms other methods while reducing
training time by 2x and reducing energy consumption by 40%. We hope this work
will inspire the community to further study and optimize training simultaneous
FL tasks.
Comment: ICCV'23
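A hypothetical sketch of the merge-then-split schedule described above; the task names, affinity function, and grouping rule are illustrative stand-ins, not the MAS implementation.

```python
# Hypothetical sketch of the merge-then-split schedule; the task
# names, affinity function, and grouping rule are stand-ins, not
# the MAS implementation.
from itertools import combinations

def pair_key(a, b):
    return tuple(sorted((a, b)))

def train_all_in_one(tasks, rounds, affinity_fn):
    """Train one merged multi-task model for a few rounds while
    accumulating pairwise task affinities."""
    scores = {pair_key(a, b): 0.0 for a, b in combinations(tasks, 2)}
    for _ in range(rounds):
        # ... one round of federated training on the merged model ...
        for a, b in combinations(tasks, 2):
            scores[pair_key(a, b)] += affinity_fn(a, b)
    return scores

def split_by_affinity(tasks, scores, threshold):
    """Greedily group tasks whose accumulated affinity clears the
    threshold; each group then continues as a separate FL task,
    initialized from the all-in-one model's parameters."""
    groups = []
    for t in tasks:
        for g in groups:
            if all(scores[pair_key(t, u)] >= threshold for u in g):
                g.append(t)
                break
        else:
            groups.append([t])
    return groups

# Toy usage: three tasks, affinities from a fixed table.
table = {("cls", "seg"): 0.9, ("cls", "det"): 0.2, ("det", "seg"): 0.1}
scores = train_all_in_one(["cls", "det", "seg"], rounds=3,
                          affinity_fn=lambda a, b: table[pair_key(a, b)])
print(split_by_affinity(["cls", "det", "seg"], scores, threshold=1.0))
# -> [['cls', 'seg'], ['det']]
```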
Evaluation of different segmentation-based approaches for skin disorders from dermoscopic images
Final degree project (Treballs Finals de Grau) in Biomedical Engineering, Facultat de Medicina i Ciències de la Salut, Universitat de Barcelona, 2022-2023. Tutor/Director: Sala Llonch, Roser; Mata Miquel, Christian; Munuera, Josep.
Skin cancer is the most common type of cancer in the world, and its incidence has been increasing over the past decades. Even with the most complex and advanced technologies, current image acquisition systems do not permit reliable identification of the skin lesion by visual examination, owing to the challenging structure of the malignancy. This motivates the implementation of automatic skin lesion segmentation methods to assist physicians' diagnoses when determining the lesion's region, and to serve as a preliminary step for classification of the skin lesion. Accurate and precise segmentation is crucial for rigorous screening and monitoring of the disease's progression.
To address this concern, the present project carries out a state-of-the-art review of the most predominant conventional models for skin lesion segmentation, along with a market analysis. With the rise of automatic segmentation tools, a wide range of algorithms is currently in use, but they exhibit many drawbacks when applied to dermatological disorders due to the high presence of artefacts in the acquired images.
In light of the above, three segmentation techniques have been selected for this work: the level set method, an algorithm combining the GrabCut and k-means methods, and an intensity-based automatic algorithm developed by the Hospital Sant Joan de Déu de Barcelona research group. In addition, their performance is validated with a view to their further implementation in clinical practice. The proposed methods were developed and evaluated using a publicly available skin lesion image database.
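One of the selected techniques combines GrabCut with k-means; below is a minimal sketch of such a combination using OpenCV. The input path, cluster count, and seeding strategy are our assumptions, not the project's exact pipeline.

```python
# Minimal sketch of a GrabCut + k-means pipeline for lesion
# segmentation, assuming a BGR dermoscopic image; parameter choices
# here are ours, not those of the reviewed project.
import cv2
import numpy as np

img = cv2.imread("lesion.jpg")  # hypothetical input path

# Stage 1: k-means on pixel colors for a coarse dark/light split,
# since lesions are typically darker than the surrounding skin.
pixels = img.reshape(-1, 3).astype(np.float32)
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
_, labels, centers = cv2.kmeans(pixels, 2, None, criteria, 3,
                                cv2.KMEANS_PP_CENTERS)
dark_cluster = int(np.argmin(centers.sum(axis=1)))
coarse = labels.reshape(img.shape[:2]) == dark_cluster

# Stage 2: seed GrabCut's probable foreground with the coarse mask.
mask = np.where(coarse, cv2.GC_PR_FGD, cv2.GC_PR_BGD).astype(np.uint8)
bgd, fgd = np.zeros((1, 65), np.float64), np.zeros((1, 65), np.float64)
cv2.grabCut(img, mask, None, bgd, fgd, 5, cv2.GC_INIT_WITH_MASK)
lesion = np.isin(mask, (cv2.GC_FGD, cv2.GC_PR_FGD)).astype(np.uint8) * 255
cv2.imwrite("lesion_mask.png", lesion)
```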
Using machine learning to predict pathogenicity of genomic variants throughout the human genome
More than 6,000 diseases are estimated to be caused by genomic variants. This can happen in many possible ways: a variant may stop the translation of a protein, interfere with gene regulation, or alter splicing of the transcribed mRNA into an unwanted isoform. It is necessary to investigate all of these processes in order to evaluate which variant may be causal for the deleterious phenotype. A great help in this regard are variant effect scores. Implemented as machine learning classifiers, they integrate annotations from different resources to rank genomic variants in terms of pathogenicity.
Developing a variant effect score requires multiple steps: annotation of the training data, feature selection, model training, benchmarking, and finally deployment for the model's application. Here, I present a generalized workflow of this process. It makes it simple to configure how information is converted into model features, enabling the rapid exploration of different annotations. The workflow further implements hyperparameter optimization, model validation and ultimately deployment of a selected model via genome-wide scoring of genomic variants.
The workflow is applied to train Combined Annotation Dependent Depletion (CADD), a variant effect model that scores SNVs and InDels genome-wide. I show that the workflow can be quickly adapted to novel annotations by porting CADD to the genome reference GRCh38. Further, I demonstrate the integration of deep-neural-network scores as features into a new CADD model, improving the annotation of RNA splicing events. Finally, I apply the workflow to train multiple variant effect models from training data based on variants selected by allele frequency.
In conclusion, the developed workflow presents a flexible and scalable method to train variant effect scores. All software and developed scores are freely available from cadd.gs.washington.edu and cadd.bihealth.org.
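A generic sketch of the annotate, train, validate, and score loop the workflow automates, using scikit-learn; the toy data and feature meanings are ours, not CADD's actual annotations or model.

```python
# Generic sketch of the annotate -> train -> validate -> score loop;
# features and toy data are hypothetical, not CADD's.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)

# Toy annotated variants: columns could stand for conservation,
# splice-impact, and regulatory annotations (hypothetical features).
X = rng.normal(size=(1000, 3))
y = (X @ np.array([1.5, -0.7, 0.9]) + rng.normal(size=1000) > 0).astype(int)

# Hyperparameter optimization and validation via cross-validated search.
search = GridSearchCV(LogisticRegression(max_iter=1000),
                      {"C": [0.01, 0.1, 1.0, 10.0]}, cv=5)
search.fit(X, y)

# "Deployment": score every (toy) variant with the selected model;
# a higher score suggests higher pathogenicity.
scores = search.best_estimator_.predict_proba(X)[:, 1]
print(search.best_params_, scores[:5])
```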
Dynamic Adversarial Resource Allocation: the dDAB Game
This work proposes a dynamic and adversarial resource allocation problem in a
graph environment, which is referred to as the dynamic Defender-Attacker Blotto
(dDAB) game. A team of defender robots is tasked to ensure numerical advantage
at every node in the graph against a team of attacker robots. The engagement is
formulated as a discrete-time dynamic game, where the two teams reallocate
their robots in sequence and each robot can move at most one hop at each time
step. The game terminates with the attacker's victory if any node has more
attacker robots than defender robots. Our goal is to identify the necessary and
sufficient number of defender robots to guarantee defense. Through a
reachability analysis, we first solve the problem for the case where the
attacker team stays as a single group. The results are then generalized to the
case where the attacker team can freely split and merge into subteams.
Crucially, our analysis indicates that there is no incentive for the attacker
team to split, which significantly reduces the search space for the attacker's
winning strategies and also enables us to design defender counter-strategies
using superposition. We also present an efficient numerical algorithm to
identify the necessary and sufficient number of defender robots to defend a
given graph. Finally, we present illustrative examples to verify the efficacy
of the proposed framework.
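At the core of the reachability analysis is the question of where a team can be after one time step, given that each robot moves at most one hop. Below is a minimal sketch on a toy graph, together with the attacker's winning condition as stated above; the graph and counts are our illustrations.

```python
# One-hop reachability on a toy graph: from a set of occupied nodes,
# where can a team be after one time step if each robot moves at most
# one hop? Plus the attacker's winning condition from the abstract.

def one_hop_reachable(occupied, adjacency):
    """Nodes a team can occupy next step: current nodes plus neighbors."""
    reach = set(occupied)
    for node in occupied:
        reach.update(adjacency[node])
    return reach

def attacker_wins(defender_counts, attacker_counts):
    """The attacker wins if any node holds more attackers than defenders."""
    return any(attacker_counts.get(v, 0) > defender_counts.get(v, 0)
               for v in attacker_counts)

adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}  # 4-node cycle
print(one_hop_reachable({0}, adj))                   # {0, 1, 3}
print(attacker_wins({0: 2, 1: 1}, {1: 2}))           # True: node 1 falls
```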
Bio-inspired optimization in integrated river basin management
Water resources worldwide are facing severe challenges in terms of quality and quantity. It is essential to conserve, manage, and optimize water resources and their quality through integrated water resources management (IWRM). IWRM is an interdisciplinary field that works on multiple levels to maximize the socio-economic and ecological benefits of water resources. Since these benefits are directly influenced by the river's ecological health, the point of interest should start at the basin level. The main objective of this study is to evaluate the application of bio-inspired optimization techniques in integrated river basin management (IRBM). This study demonstrates the application of versatile, flexible, and yet simple metaheuristic bio-inspired algorithms in IRBM.
In a novel approach, the bio-inspired optimization algorithms Ant Colony Optimization (ACO) and Particle Swarm Optimization (PSO) are used to spatially distribute mitigation measures within a basin to reduce the long-term annual mean total nitrogen (TN) concentration at the basin outlet. The Upper Fuhse river basin, set up in the hydrological model Hydrological Predictions for the Environment (HYPE), is used as a case study. ACO and PSO are coupled with the HYPE model to distribute a set of measures and compute the resulting TN reduction. The algorithms spatially distribute nine crop- and subbasin-level mitigation measures under four categories. Both algorithms successfully yield a discrete combination of measures that reduces the long-term annual mean TN concentration. They achieved an 18.65% reduction, and their performance was on par with each other. This study has established the applicability of these bio-inspired optimization algorithms in successfully distributing TN mitigation measures within the river basin.
Stakeholder involvement is a crucial aspect of IRBM. It ensures that researchers and policymakers are aware of the ground reality through the large amounts of information collected from stakeholders. Including stakeholders in policy planning and decision-making legitimizes the decisions and eases their implementation. Therefore, a socio-hydrological framework is developed and tested in the Larqui river basin, Chile, based on a field survey, to explore the conditions under which farmers would implement or extend the width of vegetative filter strips (VFS) to prevent soil erosion. The framework consists of a behavioral, social model (extended Theory of Planned Behavior, TPB) and an agent-based model (ABM, developed in NetLogo) coupled with the results from the vegetative filter model (Vegetative Filter Strip Modeling System, VFSMOD-W). The results showed that the ABM corroborates the survey results and that the farmers are willing to extend the width of VFS as long as their utility stays positive. This framework can be used to develop tailor-made policies for river basins, based on the conditions of the river basins and the stakeholders' requirements, to motivate them to adopt sustainable practices.
It is vital to assess whether the proposed management plans achieve the expected results for the river basin and whether the stakeholders will accept and implement them. Assessment via simulation tools ensures effective implementation and realization of the targets stipulated by the decision-makers. In this regard, this dissertation introduces the application of bio-inspired optimization techniques to the field of IRBM. The successful discrete combinatorial optimization of the spatial distribution of mitigation measures by ACO and PSO, and the novel socio-hydrological framework using the ABM, demonstrate the strength and diverse applicability of bio-inspired optimization algorithms.
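To make the discrete combinatorial setting concrete, below is a toy sketch of a PSO variant assigning one of several mitigation measures to each subbasin; the HYPE simulation is replaced by a stand-in cost table, and all parameters are our choices rather than the study's.

```python
# Toy discrete PSO: assign one of N_MEASURES mitigation measures to
# each subbasin. A stand-in cost table replaces the coupled HYPE run.
import random

N_SUBBASINS, N_MEASURES = 6, 4
random.seed(1)
COST = [[random.random() for _ in range(N_MEASURES)]
        for _ in range(N_SUBBASINS)]

def objective(assign):
    # Lower is better: surrogate for the simulated long-term mean TN
    # concentration at the basin outlet (HYPE would compute this).
    return sum(COST[s][m] for s, m in enumerate(assign))

def discrete_pso(n_particles=20, iters=100, w=0.05, c1=0.3, c2=0.4):
    swarm = [[random.randrange(N_MEASURES) for _ in range(N_SUBBASINS)]
             for _ in range(n_particles)]
    pbest = [p[:] for p in swarm]
    gbest = min(pbest, key=objective)[:]
    for _ in range(iters):
        for i, p in enumerate(swarm):
            for s in range(N_SUBBASINS):
                r = random.random()
                if r < c2:
                    p[s] = gbest[s]                # pull toward global best
                elif r < c2 + c1:
                    p[s] = pbest[i][s]             # pull toward personal best
                elif r < c2 + c1 + w:
                    p[s] = random.randrange(N_MEASURES)  # exploration
            if objective(p) < objective(pbest[i]):
                pbest[i] = p[:]
        gbest = min(pbest, key=objective)[:]
    return gbest, objective(gbest)

print(discrete_pso())  # best measure per subbasin, surrogate TN cost
```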
Effects of Particle Swarm Optimisation on a Hybrid Load Balancing Approach for Resource Optimisation in Internet of Things
The internet of things (IoT), a collection of diversified distributed nodes, implies a varying choice of activities, ranging from sleep monitoring and activity tracking to more complex tasks such as data analytics and management. With an increase in scale comes even greater complexity, leading to significant challenges such as excess energy dissipation, which can shorten IoT devices' lifespan. The IoT's multiple, variable activities and ample data management greatly influence device lifespan, making resource optimisation a necessity. Existing resource management and optimisation methods pay limited attention to device energy dissipation. This paper therefore proposes a decentralised approach that amalgamates efficient clustering techniques, edge computing paradigms, and a hybrid algorithm, targeted at curbing the resource optimisation problems and lifespan issues associated with IoT devices. The decentralised topology aimed at the resource optimisation of IoT places equal importance on resource allocation and resource scheduling, as opposed to existing methods, by incorporating aspects of static (round robin), dynamic (resource-based), and clustering (particle swarm optimisation) algorithms to provide a solid foundation for an optimised and secure IoT. The simulation constructs five test-case scenarios and uses performance indicators to evaluate the effects the proposed model has on resource optimisation in IoT. The simulation results indicate the superiority of PSOR2B over the ant colony approach, the current centralised optimisation approach, LEACH, and C-LBCA.
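A minimal sketch of the hybrid dispatch idea: tasks rotate round-robin across clusters (static), and within the chosen cluster the node with the most remaining energy is picked (dynamic, resource-based). PSO-driven cluster formation is abstracted away, and all names and numbers are ours, not the paper's.

```python
# Hybrid dispatch sketch: static round-robin over clusters, dynamic
# resource-based node selection inside each cluster. Cluster formation
# via PSO is abstracted away; values are illustrative.
from itertools import cycle

clusters = {
    "c0": {"n1": 0.9, "n2": 0.4},   # node -> remaining energy (toy)
    "c1": {"n3": 0.7, "n4": 0.8},
}

rr = cycle(clusters)                 # static round-robin over clusters

def dispatch(task_cost):
    cluster = next(rr)
    # Dynamic, resource-based pick: node with most remaining energy.
    node = max(clusters[cluster], key=clusters[cluster].get)
    clusters[cluster][node] -= task_cost   # energy dissipated by the task
    return cluster, node

for _ in range(4):
    print(dispatch(0.1))
```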
Multilink and AUV-Assisted Energy-Efficient Underwater Emergency Communications
Recent development in wireless communications has provided many reliable
solutions to emergency response issues, especially in scenarios with
dysfunctional or congested base stations. Underwater emergency communications, however, remain under-studied, which poses a need for combining
the merits of different underwater communication links (UCLs) and the
manipulability of unmanned vehicles. To realize energy-efficient underwater
emergency communications, we develop a novel underwater emergency communication
network (UECN) assisted by multiple links, including underwater light,
acoustic, and radio frequency links, and autonomous underwater vehicles (AUVs)
for collecting and transmitting underwater emergency data. First, we determine
the optimal emergency response mode for an underwater sensor node (USN) using
greedy search and reinforcement learning (RL), so that isolated USNs (I-USNs)
can be identified. Second, according to the distribution of I-USNs, we dispatch
AUVs to assist I-USNs in data transmission, i.e., jointly optimizing the
locations and controls of AUVs to minimize the time for data collection and
underwater movement. Finally, an adaptive clustering-based multi-objective
evolutionary algorithm is proposed to jointly optimize the number of AUVs and
the transmit power of I-USNs, subject to a given set of constraints on transmit
power, signal-to-interference-plus-noise ratios (SINRs), outage probabilities,
and energy, which achieves the best tradeoff between the maximum emergency
response time (ERT) and the total energy consumption (EC). Simulation results
indicate that our proposed approach outperforms benchmark schemes in terms of
energy efficiency (EE), contributing to underwater emergency communications.
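As a toy illustration of the first step, the sketch below has each underwater sensor node greedily pick its most energy-efficient viable link, flagging nodes with no viable link as isolated (I-USNs) for AUV assistance. The link data, SINR threshold, and greedy rule are our simplifying assumptions; the paper's mode selection also uses reinforcement learning.

```python
# Greedy link selection per underwater sensor node (USN): pick the
# most energy-efficient link that still meets a minimum SINR; nodes
# with no viable link become isolated USNs (I-USNs) that need an AUV.
SINR_MIN = 3.0  # dB, assumed threshold

# Per-node candidate links: (link type, SINR in dB, energy per bit).
links = {
    "usn1": [("acoustic", 8.0, 5.0), ("optical", 12.0, 1.0)],
    "usn2": [("rf", 2.0, 0.8)],            # below the SINR threshold
    "usn3": [],                            # no reachable relay at all
}

isolated = []
for node, candidates in links.items():
    viable = [l for l in candidates if l[1] >= SINR_MIN]
    if viable:
        best = min(viable, key=lambda l: l[2])  # least energy per bit
        print(node, "->", best[0])
    else:
        isolated.append(node)                   # needs an AUV visit

print("dispatch AUVs to:", isolated)
```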
Decentralized Machine Learning based Energy Efficient Routing and Intrusion Detection in Unmanned Aerial Network (UAV)
This work builds on federated learning (FL), a decentralized machine learning method that enables multiple clients to work together on conventional distributed ML problems, coordinated by a central server, without disclosing locally stored sensitive information. The research relies heavily on machine learning and deep learning techniques. The next generation of wireless networks is anticipated to incorporate unmanned aerial vehicles (UAVs), such as drones, into both civilian and military applications, and the use of artificial intelligence (AI), and more specifically machine learning (ML) methods, to enhance the intelligence of UAV networks is desirable and necessary for these uses. Unfortunately, most existing FL paradigms are still centralized, with a single entity accountable for network-wide ML model aggregation and fusion. This is inappropriate for UAV networks, which frequently feature unreliable nodes and connections, and it introduces a possible single point of failure. The high mobility of UAVs, frequent packet loss, and weak links between UAVs pose many challenges that affect the reliability of data delivery. Moreover, unbalanced energy consumption can cause premature UAV failures, shortening the network's lifetime and accelerating the degradation of the overall network. In this paper, we focus mainly on securing a UAV network in a surveillance context, where information is collected from different kinds of sources. Trust policies are based on peer-to-peer information that is confirmed by the UAV network, and the proposed system uses a pre-shared UAV list together with asymmetric encryption. With this technique, false information can be identified when a UAV in the network is physically hijacked. A secure routing path is provided by the Secure Location with Intrusion Detection System (SLIDS), and energy-conserving prediction of link breakage is performed by location-based energy-efficient routing (LEER), which discovers paths by degree of connectivity. The proposed novel architecture, named Decentralized Federated Learning with Secure Location and Intrusion Detection System (DFL-SLIDS), achieves 98% routing overhead, 93% end-to-end delay, 92% energy efficiency, 86.4% PDR, and 97% throughput.
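A minimal sketch of the pre-shared-list idea described above: each UAV signs its reports with a private key, and peers accept a report only if it verifies against a public key from the pre-shared list. This uses the `cryptography` package's Ed25519 API; the message format and names are ours, not the paper's.

```python
# Pre-shared-list trust sketch: accept a peer report only if its
# signature verifies against a public key on the pre-shared UAV list.
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
)
from cryptography.exceptions import InvalidSignature

# Enrollment: a key pair per UAV; public keys are pre-shared fleet-wide.
sk_uav7 = Ed25519PrivateKey.generate()
preshared = {"uav7": sk_uav7.public_key()}

def verify_report(sender, report: bytes, signature: bytes) -> bool:
    """Accept only reports signed by a UAV on the pre-shared list."""
    if sender not in preshared:
        return False           # unknown node: possible hijack/injection
    try:
        preshared[sender].verify(signature, report)
        return True
    except InvalidSignature:
        return False           # tampered report

msg = b"pos=12.3,45.6;battery=71"
sig = sk_uav7.sign(msg)
print(verify_report("uav7", msg, sig))          # True
print(verify_report("uav7", msg + b"x", sig))   # False: tampered
```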
A Human-Centric Metaverse Enabled by Brain-Computer Interface: A Survey
The growing interest in the Metaverse has generated momentum for members of
academia and industry to innovate toward realizing the Metaverse world. The
Metaverse is a unique, continuous, and shared virtual world where humans embody
a digital form within an online platform. Through a digital avatar, Metaverse
users should have a perceptual presence within the environment and be able to interact with and control the virtual world around them. Thus, a human-centric design is a
crucial element of the Metaverse. The human users are not only the central
entity but also the source of multi-sensory data that can be used to enrich the
Metaverse ecosystem. In this survey, we study the potential applications of
Brain-Computer Interface (BCI) technologies that can enhance the experience of
Metaverse users. By directly communicating with the human brain, the most
complex organ in the human body, BCI technologies hold the potential for the
most intuitive human-machine system operating at the speed of thought. BCI
technologies can enable various innovative applications for the Metaverse
through this neural pathway, such as user cognitive state monitoring, digital
avatar control, virtual interactions, and imagined speech communications. This
survey first outlines the fundamental background of the Metaverse and BCI
technologies. We then discuss the current challenges of the Metaverse that can
potentially be addressed by BCI, such as motion sickness when users experience
virtual environments or the negative emotional states of users in immersive
virtual applications. After that, we propose and discuss a new research
direction called Human Digital Twin, in which digital twins can create an
intelligent and interactable avatar from the user's brain signals. We also
present the challenges and potential solutions in synchronizing and
communicating between virtual and physical entities in the Metaverse.
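As a toy illustration of one BCI application named above, user cognitive state monitoring, the sketch below estimates EEG alpha-band power with Welch's method, a common first step in such pipelines; the synthetic signal and band edges are our assumptions, not taken from the survey.

```python
# Toy first step behind cognitive-state monitoring: estimate EEG
# alpha-band (8-12 Hz) power with Welch's method on a synthetic signal.
import numpy as np
from scipy.signal import welch

fs = 256                        # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)
# Synthetic "EEG": a 10 Hz alpha rhythm buried in noise.
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)

freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)
alpha = (freqs >= 8) & (freqs <= 12)
alpha_power = np.trapz(psd[alpha], freqs[alpha])   # integrate the band
print(f"alpha-band power: {alpha_power:.3f}")
```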