Finding Favourite Tuples on Data Streams with Provably Few Comparisons
One of the most fundamental tasks in data science is to assist a user with
unknown preferences in finding high-utility tuples within a large database. To
accurately elicit the unknown user preferences, a widely adopted approach is to
ask the user to compare pairs of tuples. In this paper, we study the problem
of identifying one or more high-utility tuples by adaptively receiving user
input on a minimum number of pairwise comparisons. We devise a single-pass
streaming algorithm, which processes each tuple in the stream at most once,
while ensuring that the memory size and the number of requested comparisons are
in the worst case logarithmic in n, where n is the number of all tuples. An
important variant of the problem, which can help to reduce human error in
comparisons, is to allow users to declare ties when confronted with pairs of
tuples of nearly equal utility. We show that the theoretical guarantees of our
method can be maintained for this important problem variant. In addition, we
show how to enhance existing pruning techniques in the literature by leveraging
powerful tools from mathematical programming. Finally, we systematically
evaluate all proposed algorithms over both synthetic and real-life datasets,
examine their scalability, and demonstrate their superior performance over
existing methods.
Comment: To appear in KDD 2023
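To make the setting concrete, here is a minimal single-pass sketch in Python. It is not the paper's algorithm: it keeps one candidate in memory and asks the user only about pairs that coordinate-wise dominance cannot resolve. The `ask_user` oracle, the hidden linear utility, and the dominance-based pruning are illustrative assumptions; the paper's method achieves worst-case logarithmic memory and comparison bounds via a more careful candidate maintenance.

```python
import random

def dominates(a, b):
    """True if a is at least as good as b in every attribute and strictly
    better in at least one (assuming larger values are preferred)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def stream_favourite(stream, ask_user):
    """Single pass with O(1) memory: tuples resolved by dominance are
    pruned for free; only incomparable pairs cost a user comparison."""
    favourite, comparisons = None, 0
    for t in stream:
        if favourite is None or dominates(t, favourite):
            favourite = t              # better under any monotone utility
        elif dominates(favourite, t):
            continue                   # pruned without asking the user
        else:
            comparisons += 1
            if ask_user(t, favourite):
                favourite = t
    return favourite, comparisons

# Toy user whose hidden utility is a weighted sum of the attributes.
weights = (0.7, 0.3)
utility = lambda t: sum(w * x for w, x in zip(weights, t))

stream = [(random.random(), random.random()) for _ in range(1000)]
best, asked = stream_favourite(stream, lambda a, b: utility(a) > utility(b))
print(best, "found with", asked, "comparisons")
```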
Challenges in the Design and Implementation of IoT Testbeds in Smart-Cities: A Systematic Review
Advancements in wireless communication and the increased accessibility of low-cost sensing and data-processing IoT technologies have increased the research and development of urban monitoring systems. Most smart city research projects rely on deploying proprietary IoT testbeds for indoor and outdoor data collection. Such testbeds typically rely on a three-tier architecture composed of the Endpoint, the Edge, and the Cloud. Managing the system's operation whilst addressing the security and privacy challenges that emerge, such as data privacy controls, network security, and security updates on the devices, is demanding. This work presents a systematic study of the challenges of developing, deploying, and managing urban monitoring testbeds, as experienced in a series of urban monitoring research projects, followed by an analysis of the relevant literature. By identifying the challenges in the various projects and organising them under the V-model development lifecycle levels, we provide a reference guide for future projects. Understanding the challenges early on will help current and future smart-city IoT research projects reduce implementation time and deliver secure and resilient testbeds.
GeoYCSB: A Benchmark Framework for the Performance and Scalability Evaluation of Geospatial NoSQL Databases
The proliferation of geospatial applications has tremendously increased the variety, velocity, and volume of spatial data that data stores have to manage. Traditional relational databases reveal limitations in handling such big geospatial data, mainly due to their rigid schema requirements and limited scalability. Numerous NoSQL databases have emerged and actively serve as alternative data stores for big spatial data. This study presents a framework, called GeoYCSB, developed for benchmarking NoSQL databases with geospatial workloads. To develop GeoYCSB, we extend YCSB, a de facto benchmark framework for NoSQL systems, by integrating into its design architecture the new components necessary to support geospatial workloads. GeoYCSB supports both microbenchmarks and macrobenchmarks and facilitates the use of real datasets in both. It is extensible to evaluate any NoSQL database that supports spatial queries, using geospatial workloads performed on datasets of any geometric complexity. We use GeoYCSB to benchmark two leading document stores, MongoDB and Couchbase, and present the experimental results and analysis. Finally, we demonstrate the extensibility of GeoYCSB by including a new dataset consisting of complex geometries and using it to benchmark a system with a wide variety of geospatial queries: Apache Accumulo, a wide-column store, with the GeoMesa framework applied on top.
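For flavor, below is a minimal sketch of the kind of spatial micro-benchmark operation such a framework issues against MongoDB. The database and collection names, the indexed field, and the query window are illustrative assumptions, not GeoYCSB's actual workload code.

```python
# Time a single geospatial range query against MongoDB (sketch).
import time
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
coll = client["benchmark"]["places"]
coll.create_index([("geometry", "2dsphere")])  # required for spatial queries

# GeoJSON polygon approximating a rectangular query window (lon/lat order).
window = {
    "type": "Polygon",
    "coordinates": [[[-97.4, 35.4], [-97.4, 35.6],
                     [-97.2, 35.6], [-97.2, 35.4], [-97.4, 35.4]]],
}

start = time.perf_counter()
hits = list(coll.find({"geometry": {"$geoWithin": {"$geometry": window}}}))
elapsed_ms = (time.perf_counter() - start) * 1000
print(f"spatial range query returned {len(hits)} docs in {elapsed_ms:.1f} ms")
```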
Evaluation Methodologies in Software Protection Research
Man-at-the-end (MATE) attackers have full control over the system on which
the attacked software runs, and try to break the confidentiality or integrity
of assets embedded in the software. Both companies and malware authors want to
prevent such attacks. This has driven an arms race between attackers and
defenders, resulting in a plethora of different protection and analysis
methods. However, it remains difficult to measure the strength of protections
because MATE attackers can reach their goals in many different ways and a
universally accepted evaluation methodology does not exist. This survey
systematically reviews the evaluation methodologies of papers on obfuscation, a
major class of protections against MATE attacks. For 572 papers, we collected
113 aspects of their evaluation methodologies, ranging from sample set types
and sizes, through sample treatment, to the measurements performed. We provide
detailed insights into how the academic state of the art evaluates both the
protections and analyses thereon. In summary, there is a clear need for better
evaluation methodologies. We identify nine challenges for software protection
evaluations, which represent threats to the validity, reproducibility, and
interpretation of research results in the context of MATE attacks.
Approximate Computing Survey, Part I: Terminology and Software & Hardware Approximation Techniques
The rapid growth of demanding applications in domains applying multimedia
processing and machine learning has marked a new era for edge and cloud
computing. These applications involve massive data and compute-intensive tasks,
and thus, typical computing paradigms in embedded systems and data centers are
stressed to meet the worldwide demand for high performance. Concurrently, the
landscape of the semiconductor field over the last 15 years has established
power as a first-class design concern. As a result, the computing-systems
community has been forced to find alternative design approaches that facilitate
high-performance and/or power-efficient computing. Among the examined
solutions, Approximate Computing has attracted an ever-increasing interest,
with research works applying approximations across the entire traditional
computing stack, i.e., at the software, hardware, and architectural levels.
Over the last decade, a plethora of approximation techniques has emerged in
software (programs, frameworks, compilers, runtimes, languages), hardware
(circuits, accelerators), and architectures (processors, memories). The current
article is Part I of our comprehensive survey on Approximate Computing: it
reviews its motivation, terminology, and principles, and it classifies and
presents the technical details of the state-of-the-art software and hardware
approximation techniques.
Comment: Under Review at ACM Computing Surveys
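As one example of the software-level techniques in this space, loop perforation is a classic program-level approximation: skip a fraction of loop iterations to trade a bounded accuracy loss for speed. A minimal sketch (our own example, not taken from the survey):

```python
def mean_exact(xs):
    """Baseline: visit every element."""
    return sum(xs) / len(xs)

def mean_perforated(xs, skip=4):
    """Loop perforation: process only every `skip`-th element,
    trading a small accuracy loss for a ~skip-fold speedup."""
    sampled = xs[::skip]
    return sum(sampled) / len(sampled)

data = [float(i % 97) for i in range(1_000_000)]
print(mean_exact(data), mean_perforated(data))  # nearly identical results
```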
Using machine learning to predict pathogenicity of genomic variants throughout the human genome
More than 6,000 diseases are estimated to be caused by genomic variants. This can happen in many possible ways: a variant may stop the translation of a protein, interfere with gene regulation, or alter splicing of the transcribed mRNA into an unwanted isoform. It is necessary to investigate all of these processes in order to evaluate which variant may be causal for the deleterious phenotype. A great help in this regard are variant effect scores. Implemented as machine learning classifiers, they integrate annotations from different resources to rank genomic variants in terms of pathogenicity.
Developing a variant effect score requires multiple steps: annotation of the training data, feature selection, model training, benchmarking, and finally deployment for the model's application. Here, I present a generalized workflow of this process. It makes it simple to configure how information is converted into model features, enabling the rapid exploration of different annotations. The workflow further implements hyperparameter optimization, model validation and ultimately deployment of a selected model via genome-wide scoring of genomic variants.
The workflow is applied to train Combined Annotation Dependent Depletion (CADD), a variant effect model that scores SNVs and InDels genome-wide. I show that the workflow can be quickly adapted to novel annotations by porting CADD to the genome reference GRCh38. Further, I demonstrate the integration of deep neural network scores as features into a new CADD model, improving the annotation of RNA splicing events. Finally, I apply the workflow to train multiple variant effect models from training data based on variants selected by allele frequency.
In conclusion, the developed workflow provides a flexible and scalable method to train variant effect scores. All software and developed scores are freely available from cadd.gs.washington.edu and cadd.bihealth.org.
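To illustrate the shape of such a workflow, here is a drastically simplified sketch using scikit-learn; the synthetic annotation features, labels, and model choice are placeholders, not CADD's actual training data or pipeline.

```python
# Sketch: annotate -> train (with hyperparameter tuning) -> validate -> score.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 4))  # stand-ins for conservation/splice/regulatory features
y = (X @ np.array([1.5, -0.8, 0.6, 0.0]) + rng.normal(size=5000) > 0).astype(int)

X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

# Hyperparameter optimization, as in the workflow's tuning step.
model = GridSearchCV(LogisticRegression(max_iter=1000),
                     {"C": [0.01, 0.1, 1.0, 10.0]}, cv=5)
model.fit(X_train, y_train)

# Validation before deployment.
print("validation AUC:", roc_auc_score(y_val, model.predict_proba(X_val)[:, 1]))

# "Genome-wide scoring": rank unseen variants by predicted pathogenicity.
print(model.predict_proba(rng.normal(size=(10, 4)))[:, 1])
```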
Fairness Testing: A Comprehensive Survey and Analysis of Trends
Unfair behaviors of Machine Learning (ML) software have garnered increasing
attention and concern among software engineers. To tackle this issue, extensive
research has been dedicated to conducting fairness testing of ML software, and
this paper offers a comprehensive survey of existing studies in this field. We
collect 100 papers and organize them based on the testing workflow (i.e., how
to test) and testing components (i.e., what to test). Furthermore, we analyze
the research focus, trends, and promising directions in the realm of fairness
testing. We also identify widely-adopted datasets and open-source tools for
fairness testing.
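As a taste of what fairness testing can look like in practice, the sketch below checks statistical parity, one widely used group-fairness criterion; the toy predictions, protected attribute, and 0.2 threshold are illustrative assumptions.

```python
# Statistical parity test: compare positive-prediction rates across groups.
import numpy as np

preds = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])  # model decisions
group = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])  # protected attribute

rate_a = preds[group == 0].mean()
rate_b = preds[group == 1].mean()
spd = abs(rate_a - rate_b)  # statistical parity difference

print(f"SPD = {spd:.2f}")
assert spd <= 0.2, "fairness test failed: disparate positive-prediction rates"
```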
Latest Advances on Security Architecture for 5G Technology and Services
The rollout of 5G technology has been ongoing globally. The deployment of the technologies associated with 5G has met mixed reactions regarding its prospects for improving communication services in all spheres of life amid its security concerns. The security concerns of the 5G network lie in its architecture and in the other technologies that optimize the performance of that architecture. The literature offers many fragmentary accounts of the 5G security architecture, and a holistic security architectural structure would go a long way toward tackling the security challenges. In this paper, a review of the security challenges of 5G technology based on its architecture is presented along with the proposed solutions. The review was carried out using keywords related to 5G security and architecture to retrieve literature fit for the purpose. The 5G security architectures are mainly centered around the seven network security layers, making each of the layers a potential source of security concern on the 5G network. Many of the 5G security challenges relate to authentication and authorization, such as denial-of-service attacks, man-in-the-middle attacks, and eavesdropping. Different methods, both hardware-based (unmanned aerial vehicles, field-programmable logic arrays) and software-based (artificial intelligence, machine learning, blockchain, statistical process control), have been proposed for mitigating the threats. Other technologies applicable to 5G security concerns include multi-radio access technology, smart-grid networks, and light fidelity. The implementation of these solutions should be reviewed on a timely basis because of the dynamic nature of threats; doing so will greatly reduce the occurrence of security attacks on the 5G network.
AlerTiger: Deep Learning for AI Model Health Monitoring at LinkedIn
Data-driven companies use AI models extensively to develop products and
intelligent business solutions, making the health of these models crucial for
business success. Model monitoring and alerting in industry pose unique
challenges, including a lack of clearly defined model health metrics, label
sparsity, and fast model iterations that result in short-lived models and
features. As a product, the system must also meet requirements for scalability,
generalizability, and explainability. To tackle these challenges, we propose
AlerTiger, a deep-learning-based MLOps model monitoring system that helps AI
teams across the company monitor their AI models' health by detecting anomalies
in models' input features and output scores over time. The system consists of
four major steps: model statistics generation, deep-learning-based anomaly
detection, anomaly post-processing, and user alerting. Our solution generates
three categories of statistics to indicate AI model health, offers a two-stage
deep anomaly detection solution to address label sparsity and attain the
generalizability of monitoring new models, and provides holistic reports for
actionable alerts. This approach has been deployed to most of LinkedIn's
production AI models for over a year and has identified several model issues
whose fixes later led to significant business metric gains.
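The four-step pipeline can be pictured with a minimal sketch in which a rolling z-score stands in for AlerTiger's deep anomaly detector; the window size, threshold, and consecutive-anomaly rule are illustrative assumptions, not LinkedIn's production settings.

```python
# Sketch of the monitoring loop: statistics -> anomaly score -> post-process -> alert.
import numpy as np

def monitor(feature_series, window=30, z_thresh=4.0, min_run=2):
    alerts, run = [], 0
    for t in range(window, len(feature_series)):
        hist = feature_series[t - window:t]          # 1) statistics window
        mu, sigma = hist.mean(), hist.std() + 1e-9
        z = abs(feature_series[t] - mu) / sigma      # 2) anomaly score
        run = run + 1 if z > z_thresh else 0         # 3) post-process: require
        if run >= min_run:                           #    consecutive anomalies
            alerts.append(t)                         # 4) alert the owning team
    return alerts

rng = np.random.default_rng(1)
series = rng.normal(0.0, 1.0, 200)
series[150:] += 8.0   # simulated drift in a model input feature
print(monitor(series))
```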