Evaluation Methodologies in Software Protection Research
Man-at-the-end (MATE) attackers have full control over the system on which
the attacked software runs, and try to break the confidentiality or integrity
of assets embedded in the software. Both companies and malware authors want to
prevent such attacks. This has driven an arms race between attackers and
defenders, resulting in a plethora of different protection and analysis
methods. However, it remains difficult to measure the strength of protections
because MATE attackers can reach their goals in many different ways and a
universally accepted evaluation methodology does not exist. This survey
systematically reviews the evaluation methodologies of papers on obfuscation, a
major class of protections against MATE attacks. For 572 papers, we collected
113 aspects of their evaluation methodologies, ranging from sample set types
and sizes, over sample treatment, to performed measurements. We provide
detailed insights into how the academic state of the art evaluates both the
protections and analyses thereon. In summary, there is a clear need for better
evaluation methodologies. We identify nine challenges for software protection
evaluations, which represent threats to the validity, reproducibility, and
interpretation of research results in the context of MATE attacks.
Contextual Pre-Planning on Reward Machine Abstractions for Enhanced Transfer in Deep Reinforcement Learning
Recent studies show that deep reinforcement learning (DRL) agents tend to
overfit to the task on which they were trained and fail to adapt to minor
environment changes. To expedite learning when transferring to unseen tasks, we
propose a novel approach to representing the current task using reward machines
(RM), state machine abstractions that induce subtasks based on the current
task's rewards and dynamics. Our method provides agents with symbolic
representations of optimal transitions from their current abstract state and
rewards them for achieving these transitions. These representations are shared
across tasks, allowing agents to exploit knowledge of previously encountered
symbols and transitions, thus enhancing transfer. Our empirical evaluation
shows that our representations improve sample efficiency and few-shot transfer
in a variety of domains.
Comment: IJCAI Workshop on Planning and Reinforcement Learning, 202
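A reward machine of the kind described above can be sketched as a small finite-state machine whose transitions are triggered by high-level events and emit rewards. This is an illustrative toy only, not the paper's implementation; the state names, events, and reward values are invented:

```python
# Minimal reward machine: a finite-state machine over abstract states,
# where transitions fire on high-level events and emit rewards.
class RewardMachine:
    def __init__(self, transitions, initial):
        # transitions: {(state, event): (next_state, reward)}
        self.transitions = transitions
        self.state = initial

    def step(self, event):
        # Events with no listed transition leave the abstract state
        # unchanged and yield zero reward.
        next_state, reward = self.transitions.get(
            (self.state, event), (self.state, 0.0))
        self.state = next_state
        return reward

# Toy task: pick up a key, then open a door.
rm = RewardMachine(
    {("start", "got_key"): ("has_key", 0.1),
     ("has_key", "opened_door"): ("done", 1.0)},
    initial="start")

rewards = [rm.step(e) for e in ["moved", "got_key", "opened_door"]]
print(rm.state, rewards)  # done [0.0, 0.1, 1.0]
```

Because the machine's states and transitions are symbolic and task-agnostic, the same structure can be shared across tasks, which is the property the transfer approach above exploits.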
It's about Time: Analytical Time Periodization
This paper presents a novel approach to the problem of time periodization, which involves dividing the time span of a complex dynamic phenomenon into periods that enclose different, relatively stable states or development trends. The challenge lies in finding a division of time that takes into account the diverse behaviours of the phenomenon's multiple components while remaining simple and easy to interpret. Despite its importance, this problem has not received sufficient attention in the fields of visual analytics and data science. We use a real-world example from aviation and an additional usage scenario on analysing mobility trends during the COVID-19 pandemic to develop and test an analytical workflow that combines computational and interactive visual techniques. We highlight the differences between the two cases and show how they affect the use of different techniques. Through our investigation of possible variations of the time periodization problem, we discuss the potential of our approach for use in various applications. Our contributions include defining and investigating a previously neglected problem type, developing a practical and reproducible approach to solving problems of this type, and uncovering potential for formalization and the development of computational methods.
Writing Facts: Interdisciplinary Discussions of a Key Concept in Modernity
"Fact" is one of the most crucial inventions of modern times. Susanne Knaller discusses the functions of this powerful notion in the arts and the sciences, and its impact on aesthetic models and systems of knowledge. The practice of writing provides an effective procedure for realizing and understanding facts. This concerns preparatory procedures, formal choices, models of argumentation, and narrative patterns. By considering "writing facts", the volume shows why and how "facts" are a result of knowledge, rules, and norms as well as of description, argumentation, and narration. This approach allows new perspectives on "fact" and its impact on modernity.
5G RAN/MEC Slicing and Admission Control using Deep Reinforcement Learning
The 5G RAN functions can be virtualized and distributed across the radio unit (RU), distributed unit (DU), and centralized unit (CU) to facilitate flexible resource management. Complemented by multi-access edge computing (MEC), these components create network slices tailored for applications with diverse quality of service (QoS) requirements. However, as requests for various slices arrive dynamically over time and network resources are limited, it is non-trivial for an infrastructure provider (InP) to optimize its long-term revenue from real-time admission and embedding of slice requests. Prior works have leveraged Deep Reinforcement Learning (DRL) to address this problem; however, these solutions either do not scale to realistic topologies, require re-training of the DRL agents when facing topology changes, or do not consider the slice admission and embedding problems jointly. In this thesis, we use multi-agent DRL and Graph Attention Networks (GATs) to address these limitations. Specifically, we propose novel topology-independent admission and slicing agents that are scalable and generalizable to large and different metropolitan networks. Results show that the proposed approach converges faster and achieves up to 35.2% and 20% gains in revenue compared to heuristics and other DRL-based approaches, respectively. Additionally, we demonstrate that our approach generalizes to scenarios and substrate networks unseen during training, as it maintains superior performance without re-training or re-tuning. Finally, we extract the attention maps of the GAT and analyze them to detect potential bottlenecks and to improve network performance and InP revenue by eliminating them.
Augmented Behavioral Annotation Tools, with Application to Multimodal Datasets and Models: A Systematic Review
Annotation tools are an essential component in the creation of datasets for machine learning purposes. Annotation tools have evolved greatly since the turn of the century, and now commonly include collaborative features to divide labor efficiently, as well as automation employed to amplify human efforts. Recent developments in machine learning models, such as Transformers, allow for training on very large and sophisticated multimodal datasets and enable generalization across domains of knowledge. These models also herald an increasing emphasis on prompt engineering to provide qualitative fine-tuning of the model itself, adding a novel emerging layer of direct machine learning annotation. These capabilities enable machine intelligence to recognize, predict, and emulate human behavior with much greater accuracy and nuance, a noted shortfall that has contributed to algorithmic injustice in previous techniques. However, the scale and complexity of the training data required for multimodal models present engineering challenges. Best practices for conducting annotation for large multimodal models in a manner that is safe, ethical, and yet efficient have not been established. This paper presents a systematic literature review of crowd- and machine-learning-augmented behavioral annotation methods to distill practices that may have value in multimodal implementations, cross-correlated across disciplines. Research questions were defined to provide an overview of the evolution of augmented behavioral annotation tools up to the present state of the art. (Contains five figures and four tables.)
Say That Again: The role of multimodal redundancy in communication and context
With several modes of expression, such as facial expressions, body language, and speech, working together to convey meaning, social communication is rich in redundancy. While redundancy is typically viewed as mere signal preservation, this study investigates the role of cross-modal redundancies in establishing performance context, focusing on unaided, solo performances. Drawing on information theory, I operationalize redundancy as predictability and use an array of machine learning models to featurize speakers' facial expressions, body poses, movement speeds, acoustic features, and spoken language from 24 TED Talks and 16 episodes of Comedy Central Stand-Up Presents. This analysis demonstrates that it is possible to distinguish between these performance types based on cross-modal predictions, while also highlighting the significant amount of prediction supported by the signals' synchrony across modalities. Further research is needed to unravel the complexities of redundancy's place in social communication, paving the way for more effective and engaging communication strategies.
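Operationalizing redundancy as predictability can be sketched as follows: fit a model that predicts one modality's features from another's and measure the variance explained. This is a minimal least-squares toy on synthetic data, not the study's actual pipeline; the feature names and dimensions are invented stand-ins:

```python
import numpy as np

# Redundancy as cross-modal predictability: fit a linear map from one
# modality's features to another's and report variance explained (R^2).
rng = np.random.default_rng(0)
pose = rng.normal(size=(200, 5))                        # "body pose" features
mix = rng.normal(size=(5, 3))
speech = pose @ mix + 0.1 * rng.normal(size=(200, 3))   # partly redundant

coef, *_ = np.linalg.lstsq(pose, speech, rcond=None)    # least-squares fit
pred = pose @ coef
r2 = 1 - ((speech - pred) ** 2).sum() / ((speech - speech.mean(0)) ** 2).sum()
print(f"cross-modal R^2 ~ {r2:.2f}")
```

A high R^2 indicates that one modality largely predicts the other, i.e. high cross-modal redundancy; comparing such scores across performance types is the flavor of analysis described above.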
Offene-Welt-Strukturen: Architektur, Stadt- und Naturlandschaft im Computerspiel
What role do algorithms play in image construction and in the depiction of world and weather in computer games? How does the design of spaces, levels, and topographies influence players' decisions and behavior? Is Brutalism the first genuine architectural style of computer games? What significance do landscape gardens and national parks have in structuring game worlds? How is nature depicted in times of climate change? Particularly over the last 20 years, digital game worlds have adapted features of the physical, real world more meticulously than ever. Through elaborate production processes and complex visualization strategies, this approximation to our everyday world is always produced in dependence on game mechanics and worldliness. As the example of open-world games makes clear, the adoption of particular world views and pictorial traditions leads to ideological implications that go far beyond the narrative conventions transferred from other media formats on which research has focused so far. With his theory of architecture as a medial hinge, the author reveals that digital game worlds exhibit medium-specific properties that could not previously be grasped and awaited investigation. By interweaving concepts from media studies, game studies, philosophy, architectural theory, human geography, landscape theory, and art history, among others, Bonner develops a transdisciplinary theoretical model and, with the analytical methods derived from it, makes it possible for the first time to understand and name the complex structure of today's computer games, from indie games to AAA open worlds. With "Offene-Welt-Strukturen", the architectonics of digital game worlds becomes comprehensively accessible.
On Making Fiction: Frankenstein and the Life of Stories
Fiction is generally understood to be a fascinating, yet somehow deficient affair, merely derivative of reality. What if we could, instead, come up with an affirmative approach that takes stories seriously in their capacity to bring forth a substance of their own? Iconic texts such as Mary Shelley's Frankenstein and its numerous adaptations stubbornly resist our attempts to classify them as mere representations of reality. The author shows how these texts insist that we take them seriously as agents and interlocutors in our world- and culture-making activities. Drawing on this analysis, she develops a theory of narrative fiction as a generative practice.
Production Optimization Indexed to the Market Demand Through Neural Networks
Connectivity, mobility, and real-time data analytics are the prerequisites for a new model of intelligent production management that facilitates communication between machines, people, and processes, and uses technology as the main driver.
Many works in the literature treat maintenance and production management separately, but the two areas are linked: maintenance and its actions aim to ensure the smooth operation of equipment and avoid unnecessary downtime in production.
With the advent of technology, companies are rushing to solve their problems by adopting technologies in order to fit the most advanced technological concepts, such as Industry 4.0 and 5.0, which are based on the principle of process automation. This approach brings together database technologies, making it possible to monitor the operation of equipment and to study patterns in data behavior that can warn of possible failures.
The present thesis aims to forecast pulp production indexed to stock market values. The forecast is made from the pulp production variables of the presses and the stock exchange variables, supported by artificial intelligence (AI) technologies, with the aim of achieving effective planning. To support efficient production management decisions, algorithms were developed in this thesis and validated with data from five pulp presses, as well as data from other sources, such as steel production and stock exchanges, which were relevant for validating the robustness of the models.
This thesis demonstrates the importance of data-preprocessing methods, which have great relevance at the model input since they facilitate the process of training and testing the models. The chosen technologies showed good efficiency and versatility in predicting the values of the equipment variables, and also demonstrated robustness and optimized computational processing. The thesis also presents proposals for future developments, namely the further exploration of these technologies, so that market variables can calibrate production through forecasts supported on these same variables.