Evaluation Methodologies in Software Protection Research
Man-at-the-end (MATE) attackers have full control over the system on which
the attacked software runs, and try to break the confidentiality or integrity
of assets embedded in the software. Both companies and malware authors want to
prevent such attacks. This has driven an arms race between attackers and
defenders, resulting in a plethora of different protection and analysis
methods. However, it remains difficult to measure the strength of protections
because MATE attackers can reach their goals in many different ways and a
universally accepted evaluation methodology does not exist. This survey
systematically reviews the evaluation methodologies of papers on obfuscation, a
major class of protections against MATE attacks. For 572 papers, we collected
113 aspects of their evaluation methodologies, ranging from sample set types
and sizes, over sample treatment, to performed measurements. We provide
detailed insights into how the academic state of the art evaluates both the
protections and analyses thereon. In summary, there is a clear need for better
evaluation methodologies. We identify nine challenges for software protection
evaluations, which represent threats to the validity, reproducibility, and
interpretation of research results in the context of MATE attacks.
Approximate Computing Survey, Part I: Terminology and Software & Hardware Approximation Techniques
The rapid growth of demanding applications in domains applying multimedia
processing and machine learning has marked a new era for edge and cloud
computing. These applications involve massive data and compute-intensive tasks,
and thus, typical computing paradigms in embedded systems and data centers are
stressed to meet the worldwide demand for high performance. Concurrently, the
landscape of the semiconductor field in the last 15 years has constituted power
as a first-class design concern. As a result, the community of computing
systems is forced to find alternative design approaches to facilitate
high-performance and/or power-efficient computing. Among the examined
solutions, Approximate Computing has attracted an ever-increasing interest,
with research works applying approximations across the entire traditional
computing stack, i.e., at the software, hardware, and architectural levels. Over
the last decade, a plethora of approximation techniques has emerged in software
(programs, frameworks, compilers, runtimes, languages), hardware (circuits,
accelerators), and architectures (processors, memories). The current article is
Part I of our comprehensive survey on Approximate Computing; it reviews the
field's motivation, terminology, and principles, and classifies and presents the
technical details of the state-of-the-art software and hardware approximation
techniques.
Comment: Under review at ACM Computing Surveys
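Among the software approximation techniques such a survey classifies, loop perforation is one of the simplest to illustrate: skip a fraction of loop iterations and rescale the result, trading output accuracy for reduced work. A minimal sketch of the idea (illustrative only, not an example taken from the survey):

```python
def perforated_mean(xs, stride=4):
    """Approximate the mean by visiting every `stride`-th element.

    Executes roughly 1/stride of the loop iterations -- the classic
    loop-perforation trade-off between accuracy and compute.
    """
    sampled = xs[::stride]
    return sum(sampled) / len(sampled)

data = list(range(1000))              # exact mean is 499.5
approx = perforated_mean(data, 4)     # touches 250 of 1000 elements
exact = sum(data) / len(data)
rel_error = abs(approx - exact) / exact
```

For this uniformly distributed input the approximation visits a quarter of the elements yet stays well under 1% relative error; skewed inputs would, of course, degrade more.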
2P-BFT-Log: 2-Phase Single-Author Append-Only Log for Adversarial Environments
Replicated append-only logs sequentially order messages from the same author
such that their ordering can be eventually recovered even with out-of-order and
unreliable dissemination of individual messages. They are widely used for
implementing replicated services in both clouds and peer-to-peer environments
because they provide simple and efficient incremental reconciliation. However,
existing designs of replicated append-only logs assume replicas faithfully
maintain the sequential properties of logs and do not provide eventual
consistency when malicious participants fork their logs by disseminating
different messages to different replicas for the same index, which may result
in partitioning of replicas according to which branch was first replicated.
In this paper, we present 2P-BFT-Log, a two-phase replicated append-only log
that provides eventual consistency in the presence of forks from malicious
participants such that all correct replicas will eventually agree either on the
most recent message of a valid log (first phase) or on the earliest point at
which a fork occurred as well as on an irrefutable proof that it happened
(second phase). We provide definitions, algorithms, and proofs of the key
properties of the design, and explain one way to implement the design on top of Git,
an eventually consistent replicated database originally designed for
distributed version control.
Our design enables correct replicas to faithfully implement the
happens-before relationship first introduced by Lamport that underpins most
existing distributed algorithms, with eventual detection of forks from
malicious participants to exclude the latter from further progress. This opens
the door to adaptations of existing distributed algorithms to a cheaper detect
and repair paradigm, rather than the more common and expensive systematic
prevention of incorrect behaviour.
Comment: Fixed 'two-phase' typo
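The fork condition described above (a malicious author disseminating different messages for the same log index) can be sketched compactly: a replica stores entries keyed by index, and the first conflicting pair it observes is itself the irrefutable proof. A minimal sketch under stated assumptions (signatures and the two-phase agreement protocol are abstracted away; this is not the paper's algorithm):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Entry:
    author: str   # in a real log, an author public key
    index: int    # position in the author's log claimed by this entry
    payload: str  # message content (signature omitted in this sketch)

class LogReplica:
    """Single-author append-only log replica that detects forks.

    A fork is two *distinct* entries claiming the same index; the
    conflicting pair is retained as proof. Receiving the same entry
    twice is harmless (out-of-order, unreliable dissemination).
    """
    def __init__(self):
        self.entries = {}        # index -> Entry
        self.fork_proof = None   # (Entry, Entry) once a fork is observed

    def receive(self, entry):
        seen = self.entries.get(entry.index)
        if seen is not None and seen != entry:
            self.fork_proof = (seen, entry)   # irrefutable: both claim one index
            return False
        self.entries[entry.index] = entry
        return True

r = LogReplica()
r.receive(Entry("alice", 0, "a"))
r.receive(Entry("alice", 1, "b"))
forked = not r.receive(Entry("alice", 1, "b'"))  # same index, different payload
```

With signed entries, such a pair convinces any third party of misbehaviour without trusting the reporting replica, which is what makes exclusion of the forking author safe.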
Seamless Multimodal Biometrics for Continuous Personalised Wellbeing Monitoring
Artificially intelligent perception is increasingly present in the lives of
every one of us. Vehicles are no exception, (...) In the near future, pattern
recognition will have an even stronger role in vehicles, as self-driving cars
will require automated ways to understand what is happening around (and within)
them and act accordingly. (...) This doctoral work focused on advancing
in-vehicle sensing through the research of novel computer vision and pattern
recognition methodologies for both biometrics and wellbeing monitoring. The
main focus has been on electrocardiogram (ECG) biometrics, a trait well-known
for its potential for seamless driver monitoring. Major efforts were devoted to
achieving improved performance in identification and identity verification in
off-the-person scenarios, well-known for increased noise and variability. Here,
end-to-end deep learning ECG biometric solutions were proposed and important
topics were addressed such as cross-database and long-term performance,
waveform relevance through explainability, and interlead conversion. Face
biometrics, a natural complement to the ECG in seamless unconstrained
scenarios, was also studied in this work. The open challenges of masked face
recognition and interpretability in biometrics were tackled in an effort to
evolve towards algorithms that are more transparent, trustworthy, and robust to
significant occlusions. Within the topic of wellbeing monitoring, improved
solutions to multimodal emotion recognition in groups of people and
activity/violence recognition in in-vehicle scenarios were proposed. Finally,
we also proposed a novel way to learn template security within end-to-end
models, dismissing additional separate encryption processes, and a
self-supervised learning approach tailored to sequential data, in order to
ensure data security and optimal performance. (...)
Comment: Doctoral thesis presented and approved on the 21st of December 2022
to the University of Porto
Detection of Malware in Large Networks using Deep Auto Encoders
Data mining and machine learning have been heavily studied in recent years for the purpose of detecting sophisticated malware. Although these approaches have yielded excellent results, the majority rely on architectures that do not delve deeply enough into the learning process. Deep learning, by contrast, is finding increasing application in both industry and academia thanks to its strengths in feature learning. In this paper, we develop a Deep Auto Encoder (DAE) based detection mechanism to detect malware propagating in large-scale networks. The DAE acts as an unsupervised deep learning model for malware detection. The simulation is conducted on two different datasets to test the robustness of the model. The results show that the proposed method detects attacks with higher accuracy than other methods
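The core mechanism behind autoencoder-based detectors like the one described is reconstruction error: fit a low-dimensional model of benign behaviour, then flag samples the model cannot reconstruct well. The sketch below uses a linear subspace model (PCA via SVD) as a deterministic stand-in for the deep autoencoder; the data and dimensions are illustrative assumptions, not the paper's datasets or architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Benign" samples live near a 2-D subspace of a 6-D feature space.
basis = rng.normal(size=(2, 6))
benign = rng.normal(size=(500, 2)) @ basis + 0.01 * rng.normal(size=(500, 6))

# Fit: top-2 principal directions of benign data (the "bottleneck").
mean = benign.mean(axis=0)
_, _, vt = np.linalg.svd(benign - mean, full_matrices=False)
components = vt[:2]

def reconstruction_error(x):
    """Encode onto the learned subspace, decode, and measure the loss."""
    code = (x - mean) @ components.T        # encode
    recon = code @ components + mean        # decode
    return float(np.sum((x - recon) ** 2))

# Threshold from benign data alone -- no labelled attacks needed.
threshold = max(reconstruction_error(s) for s in benign)
anomaly = rng.normal(size=6) * 5.0          # off-subspace sample
is_malicious = reconstruction_error(anomaly) > threshold
```

A deep autoencoder replaces the linear encode/decode with nonlinear layers, but the unsupervised detection criterion (reconstruction error above a threshold learned from normal traffic) is the same.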
An IoT architecture for decision support system in precision livestock
Sustainable animal production is a primary goal of technological development in
the livestock industry. However, it is crucial to master the livestock environment due
to the susceptibility of animals to variables such as temperature and humidity, which
can cause illness, production losses, and discomfort. Thus, livestock production systems
require monitoring, reasoning, and mitigating unwanted conditions with automated actions.
The principal contribution of this study is the introduction of a self-adaptive architecture
named e-Livestock to handle animal production decisions. Two case studies were conducted
involving a system derived from the e-Livestock architecture, encompassing a Compost
Barn production system - an environment and technology where bovine milk production
occurs. The outcomes demonstrate the effectiveness of e-Livestock in three key aspects: (i)
abstraction of disruptive technologies based on the Internet of Things (IoT) and Artificial
Intelligence and their incorporation into a single architecture specific to the livestock
domain, (ii) support for the reuse and derivation of an adaptive self-architecture to
support the engineering of a decision support system for the livestock subdomain, and (iii)
support for empirical studies in a real smart farm to facilitate future technology transfer
to the industry. Therefore, our research’s main contribution is developing an architecture
combining machine learning techniques and ontology to support more complex decisions
when considering a large volume of data generated on farms. The results revealed that the
e-Livestock architecture could support monitoring, reasoning, forecasting, and automated
actions in a milk production/Compost Barn environment.
CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
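The monitoring, reasoning, and automated-action loop described above can be sketched as a small rule-based controller. The sensor names, comfort thresholds, and actions below are illustrative assumptions, not values taken from the e-Livestock architecture:

```python
# Minimal monitor -> reason -> act loop in the spirit of a precision
# livestock decision support system. All thresholds and action names
# are hypothetical.

COMFORT = {"temperature_c": (4.0, 24.0), "humidity_pct": (40.0, 80.0)}

def reason(reading):
    """Map one sensor reading to mitigation actions for the barn."""
    actions = []
    t_low, t_high = COMFORT["temperature_c"]
    if reading["temperature_c"] > t_high:
        actions.append("activate_fans")
    elif reading["temperature_c"] < t_low:
        actions.append("close_curtains")
    _, h_high = COMFORT["humidity_pct"]
    if reading["humidity_pct"] > h_high:
        actions.append("run_dehumidifier")
    return actions or ["none"]

telemetry = [
    {"temperature_c": 29.5, "humidity_pct": 85.0},  # hot and humid
    {"temperature_c": 18.0, "humidity_pct": 55.0},  # within comfort zone
]
decisions = [reason(r) for r in telemetry]
```

In the full architecture this rule layer would sit behind IoT ingestion and alongside machine-learning forecasting, with an ontology supplying the domain vocabulary; the sketch shows only the reactive core.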
Towards Scalable Real-time Analytics: An Architecture for Scale-out of OLxP Workloads
We present an overview of our work on the SAP HANA Scale-out Extension, a novel distributed database architecture designed to support large scale analytics over real-time data. This platform permits high performance OLAP with massive scale-out capabilities, while concurrently allowing OLTP workloads. This dual capability enables analytics over real-time changing data and allows fine grained user-specified service level agreements (SLAs) on data freshness. We advocate the decoupling of core database components such as query processing, concurrency control, and persistence, a design choice made possible by advances in high-throughput low-latency networks and storage devices. We provide full ACID guarantees and build on a logical timestamp mechanism to provide MVCC-based snapshot isolation, while not requiring synchronous updates of replicas. Instead, we use asynchronous update propagation guaranteeing consistency with timestamp validation. We provide a view into the design and development of a large scale data management platform for real-time analytics, driven by the needs of modern enterprise customers
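The logical-timestamp MVCC mechanism the abstract refers to can be sketched minimally: every committed write creates a new version tagged with a commit timestamp, and a reader pinned to a snapshot timestamp sees only versions committed at or before it. This is a sketch of the general technique, not SAP HANA's actual implementation:

```python
class MVCCStore:
    """Minimal multi-version store with snapshot reads.

    Writers never overwrite in place; readers at an old snapshot are
    isolated from later commits without any locking.
    """
    def __init__(self):
        self.versions = {}   # key -> [(commit_ts, value), ...] ascending
        self.clock = 0       # logical commit timestamp

    def commit(self, writes):
        """Atomically commit a set of writes at the next timestamp."""
        self.clock += 1
        for key, value in writes.items():
            self.versions.setdefault(key, []).append((self.clock, value))
        return self.clock

    def read(self, key, snapshot_ts):
        """Return the newest value committed at or before snapshot_ts."""
        visible = None
        for ts, value in self.versions.get(key, []):
            if ts <= snapshot_ts:
                visible = value   # lists are timestamp-ordered
        return visible

db = MVCCStore()
t1 = db.commit({"x": 1})
t2 = db.commit({"x": 2, "y": 9})
# A snapshot taken at t1 never observes t2's writes, even afterwards.
old_x, old_y = db.read("x", t1), db.read("y", t1)
new_x = db.read("x", t2)
```

In a distributed setting like the one described, the same visibility rule lets replicas apply asynchronously propagated updates while timestamp validation keeps every snapshot read consistent.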