LIPIcs, Volume 251, ITCS 2023, Complete Volume
Understanding Hackers' Work: An Empirical Study of Offensive Security Practitioners
Offensive security tests are a common way to proactively discover potential
vulnerabilities. They are performed by specialists, often called
penetration testers or white-hat hackers. The chronic shortage of available
white-hat hackers prevents sufficient security-test coverage of software.
Research into automation tries to alleviate this problem by improving the
efficiency of security testing. To achieve this, researchers and tool builders
need a solid understanding of how hackers work, their assumptions, and their
pain points.
In this paper, we present a first data-driven, exploratory qualitative study
of twelve security professionals, their work, and the problems that occur
therein. We perform a thematic analysis to gain insights into the execution of
security assignments, hackers' thought processes, and the challenges they
encounter.
This analysis allows us to conclude with recommendations for researchers and
tool builders to increase the efficiency of their automation and to identify
novel areas for research.
Comment: 11 pages; we have chosen the category "Software Engineering" rather
than "Cryptography and Security" because, while this is a paper about security
practices, we target software engineering researchers
Mapping the Focal Points of WordPress: A Software and Critical Code Analysis
Programming languages and code can be examined through numerous analytical lenses. This project is a critical analysis of WordPress, a prevalent web content management system, applying four modes of inquiry. The project draws on theoretical perspectives and areas of study in media, software, platforms, code, language, and power structures. The applied research is based on Critical Code Studies, an interdisciplinary field of study that holds potential as a theoretical lens and methodological toolkit for understanding computational code beyond its function. The project begins with a critical code analysis of WordPress, examining its origins and source code and mapping selected vulnerabilities. An examination of the influence of digital and computational thinking follows. The work also explores the intersection of code patching and vulnerability management and how code shapes our sense of control, trust, and empathy, ultimately arguing that a rhetorical-cultural lens can be used to better understand code's controlling influence.
Recurring themes throughout these analyses and observations are the connections to power and vulnerability in WordPress' code and how cultural, processual, rhetorical, and ethical implications can be expressed through its code, creating a particular worldview. Code's emergent properties help illustrate how human values and practices (e.g., empathy, aesthetics, language, and trust) become encoded in software design and how people perceive the software through its worldview. These connected analyses reveal cultural, processual, and vulnerability focal points and the influence these entanglements have on WordPress as code, software, and platform. WordPress is a complex sociotechnical platform worthy of further study, as is the interdisciplinary merging of theoretical perspectives and disciplines to critically examine code.
Ultimately, this project helps further enrich the field by introducing focal points in code, examining sociocultural phenomena within the code, and offering techniques for applying critical code methods.
Compact monotone tall complexity one T-spaces
Motivated by a conjecture of Fine and Panov, we study compact monotone tall
complexity one T-spaces in any dimension. We use the classification of
Karshon and Tolman, and the monotone condition, to prove that any two such
spaces are isomorphic if and only if they have equal Duistermaat-Heckman
measures. Moreover, we provide a complete description of the possible
Duistermaat-Heckman measures that can arise for a given moment polytope. Hence,
we prove a finiteness result for compact monotone tall complexity one
T-spaces that is analogous to that for compact monotone symplectic toric
manifolds. Furthermore, we prove that any such T-action can be extended to a
toric action. Finally, we use this last result to show that
any compact monotone tall complexity one T-space is equivariantly
symplectomorphic to a Fano manifold endowed with a complexity one T-action.
Comment: 70 pages, 12 figures, comments are welcome
Evaluating Architectural Safeguards for Uncertain AI Black-Box Components
Although tremendous progress has been made in Artificial Intelligence (AI), this progress entails new challenges. The growing complexity of learning tasks requires more complex AI components, which increasingly exhibit unreliable behaviour. In this book, we present a model-driven approach to model architectural safeguards for AI components and analyse their effect on the overall system reliability.
FederatedScope-LLM: A Comprehensive Package for Fine-tuning Large Language Models in Federated Learning
LLMs have demonstrated great capabilities in various NLP tasks. Different
entities can further improve the performance of those LLMs on their specific
downstream tasks by fine-tuning LLMs. When several entities have similar
tasks of interest but their data cannot be shared because of privacy concerns
and regulations, federated learning (FL) is a mainstream solution for
leveraging the data of different entities. However, fine-tuning LLMs in FL
settings still lacks adequate support from existing FL frameworks because it
has to deal with optimizing the consumption of significant communication and
computational resources, data preparation for different tasks, and distinct
information protection demands. This paper first discusses these challenges of
federated fine-tuning LLMs, and introduces our package FS-LLM as a main
contribution, which consists of the following components: (1) we build an
end-to-end benchmarking pipeline, automating the processes of dataset
preprocessing, federated fine-tuning execution, and performance evaluation on
federated LLM fine-tuning; (2) we provide comprehensive federated
parameter-efficient fine-tuning algorithm implementations and versatile
programming interfaces for future extension in FL scenarios with low
communication and computation costs, even without accessing the full model; (3)
we adopt several accelerating and resource-efficient operators for fine-tuning
LLMs with limited resources and the flexible pluggable sub-routines for
interdisciplinary study. We conduct extensive experiments to validate the
effectiveness of FS-LLM and benchmark advanced LLMs with state-of-the-art
parameter-efficient fine-tuning algorithms in FL settings, which also yields
valuable insights into federated fine-tuning LLMs for the research community.
To facilitate further research and adoption, we release FS-LLM at
https://github.com/alibaba/FederatedScope/tree/llm.
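The package itself is linked above; as an illustration of the federated parameter-efficient fine-tuning pattern the abstract describes, where clients update only small adapter weights locally and a server averages just those adapters, here is a minimal numpy sketch. All names, sizes, and numbers are illustrative stand-ins, not FS-LLM's API.

```python
import numpy as np

def local_adapter_update(adapter, client_grads, lr=0.1):
    # Each client updates only its small adapter weights (a stand-in for
    # LoRA-style parameter-efficient fine-tuning), never the full model.
    return adapter - lr * client_grads

def fedavg(adapters, client_sizes):
    # Server averages the adapter weights, weighted by client data sizes.
    weights = np.asarray(client_sizes, dtype=float)
    weights = weights / weights.sum()
    return sum(w * a for w, a in zip(weights, adapters))

# Toy round: three clients, a 4-parameter adapter.
global_adapter = np.zeros(4)
client_grads = [np.ones(4), 2 * np.ones(4), 3 * np.ones(4)]
client_sizes = [100, 100, 200]

locals_ = [local_adapter_update(global_adapter, g) for g in client_grads]
global_adapter = fedavg(locals_, client_sizes)
print(global_adapter)  # weighted average of the local adapter updates
```

Only the tiny adapter vectors cross the network, which is the communication saving the abstract refers to; the frozen base model never leaves the clients.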
The Black Box of Enrollment Management: The Influence of Academic Capitalism and Values of the Public Good
The study addresses the widening income and racial access gap in higher education resulting from enrollment management teams’ operationalization of academic capitalism. The study focuses on the local, micro level, emphasizing how enrollment management leadership teams (EMLTs) make sense of enrollment management, recognizing that enrollment management and the work of enrollment management stakeholders exist within an organizational space encompassing the values of both the public good and academic capitalism. Using a case study methodology and critical sensemaking theory, the research explored how academic capitalism and values of the public good shaped EMLTs’ sensemaking and sensegiving as they enacted decisions, actions, and practices to recruit and admit students. The main conclusions include the critical role of EMLTs and their members’ agency in public good enactments, especially in driving the sensemaking process, and a more nuanced and complicated picture of the relationship between academic capitalism and values of the public good in enrollment management. The study is the first to demonstrate that academic capitalism and the public good can coexist and overlap, in various ways, within the field of enrollment management, despite existing literature’s overwhelming characterization of enrollment management as firmly existing within the space of academic capitalism. Recommendations for colleges and universities include leveraging capitalist tools to drive a public good agenda; using predictive data analytics to take a measured approach to increasing access; balancing the use of tuition discounting; investing in hiring organizational actors who can operate with contradictory logics and share public good values; developing key public good metrics; diversifying revenue streams; and, for wealthy institutions, being bold in their public good enactments.
The Role of Spatial Scale in Electricity System Optimisation Models
To investigate possible pathways to reduce greenhouse gas emissions in the electricity sector, researchers build optimisation models that typically minimise the total system costs such that all technical and physical constraints are met. For systems based on renewable energy, whose greatest expansion potentials are found in wind and solar generation, the chief challenge is dealing with their variability. To tackle this challenge, the optimisation models typically include large transmission networks to smooth renewable feed-in in space, or storage technologies to smooth the variability in time. However, not all aspects of the energy system at all levels of detail can currently be contained in a single model because of computational constraints. Instead, one must make simplifications and compromises that affect the optimality of the result from the point of view of the complete system. While reductions on the temporal scale and linearisation approaches of the model formulation have been analysed previously, in this thesis we focus on quantifying the impact of the spatial scale. This is important because it is common scientific practice to simplify models spatially, while little is known about the error introduced by the aggregation.
The spatial-scale analysis of this dissertation is three-fold, and its parts build upon one another: (i) A novel clustering methodology enables us to disentangle and quantify the error made by spatially aggregating generation sites where renewable electricity can be sourced versus the error made by aggregating transmission lines and, thus, electricity interactions between spatially distributed substations. By clustering the network on both features in tandem, we can verify the results and learn which of these two effects dominates the optimisation. (ii) Insights from (i) are used to improve existing spatial aggregation methods and to develop novel similarity measures for clustering electricity system models, such that the spatially simplified model better approximates the original, highly resolved model with respect to renewable generation sites and the transmission grid. (iii) The best-performing clustering method is applied to optimisation models with high shares of renewable generation to investigate whether the spatially clustered, low-resolution solutions are feasible with regard to the full, spatially highly resolved model. To this end, we propose novel inverse methods to spatially disaggregate the coarse optimisation solution in terms of the resulting aggregated variables across the highly resolved model.
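The basic operation underlying steps (i)-(iii), grouping network buses by a similarity feature and aggregating their attributes, can be sketched as follows. The buses, coordinates, and loads are made up for illustration, and the nearest-seed assignment is a crude stand-in for the k-means or hierarchical clustering methods developed in the thesis.

```python
import numpy as np

# Toy buses: (x, y) coordinates and loads (illustrative numbers; real models
# cluster on grid topology and renewable feed-in profiles as well).
coords = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.1], [5.2, 4.9], [5.1, 5.0]])
load_mw = np.array([100.0, 80.0, 50.0, 60.0, 40.0])

def aggregate(coords, seed_idx, *bus_attrs):
    # Assign each bus to the nearest seed bus, then sum each attribute
    # over the buses of every cluster.
    seeds = coords[seed_idx]
    dist = np.linalg.norm(coords[:, None, :] - seeds[None, :, :], axis=2)
    labels = dist.argmin(axis=1)
    sums = [np.array([a[labels == c].sum() for c in range(len(seed_idx))])
            for a in bus_attrs]
    return labels, sums

labels, (agg_load,) = aggregate(coords, [0, 2], load_mw)
print(labels)    # cluster assignment of each bus
print(agg_load)  # total load per aggregated region
```

The coarse model then optimises over the aggregated regions; the inverse methods mentioned in (iii) would map the resulting regional variables back onto the original buses.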
Deep Equilibrium Multimodal Fusion
Multimodal fusion integrates the complementary information present in
multiple modalities and has gained much attention recently. Most existing
fusion approaches either learn a fixed fusion strategy during training and
inference, or are only capable of fusing the information to a certain extent.
Such solutions may fail to fully capture the dynamics of interactions across
modalities especially when there are complex intra- and inter-modality
correlations to be considered for informative multimodal fusion. In this paper,
we propose a novel deep equilibrium (DEQ) method for multimodal fusion that
seeks a fixed point of the dynamic multimodal fusion process and models the
feature correlations in an adaptive and recursive manner. This approach
thoroughly encodes the rich information within and across modalities, from low
level to high level, for efficacious downstream multimodal learning and is readily
pluggable to various multimodal frameworks. Extensive experiments on BRCA,
MM-IMDB, CMU-MOSI, SUN RGB-D, and VQA-v2 demonstrate the superiority of our DEQ
fusion. More remarkably, DEQ fusion consistently achieves state-of-the-art
performance on multiple multimodal benchmarks. The code will be released.
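The abstract's central idea, obtaining the fused representation as a fixed point z* = f(z*, x) of a fusion operator, can be sketched with plain fixed-point iteration. The operator, modality features, and numbers below are illustrative stand-ins, not the paper's architecture; DEQ models typically use faster root solvers such as Anderson acceleration or Broyden's method.

```python
import numpy as np

def fusion_step(z, x_img, x_txt, w=0.5):
    # One application of an illustrative fusion operator: a contraction toward
    # a nonlinear mix of the two modality features (w < 1 keeps it contractive).
    return w * np.tanh(z + x_img) + (1 - w) * np.tanh(z + x_txt)

def deq_fuse(x_img, x_txt, tol=1e-8, max_iter=500):
    # Solve z = f(z, x) by iterating the fusion step until it stops moving.
    z = np.zeros_like(x_img)
    for _ in range(max_iter):
        z_next = fusion_step(z, x_img, x_txt)
        if np.max(np.abs(z_next - z)) < tol:
            return z_next
        z = z_next
    return z

x_img = np.array([0.2, -0.4])  # toy "image" features
x_txt = np.array([0.1, 0.3])   # toy "text" features
z_star = deq_fuse(x_img, x_txt)
# z_star satisfies the fixed-point equation up to the tolerance:
print(np.max(np.abs(fusion_step(z_star, x_img, x_txt) - z_star)))
```

Because the fused state is defined implicitly by the equilibrium rather than by a fixed number of layers, the effective fusion depth adapts to the inputs, which is the property the abstract highlights.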