The Pragmatic Development of a Carbon Management Framework for UK SMEs
The UK's commitment to net-zero emissions by 2050 is challenged by critics who argue that current government strategies are inadequate, relying on aspirational guidelines rather than concrete action. Businesses, including small and medium-sized enterprises (SMEs), which account for about half of all business emissions, are pivotal to this goal. Yet existing policies and standards often neglect the significant role of SMEs, which face barriers such as limited knowledge and resources when implementing carbon management practices.
This thesis explores the development of a novel carbon management framework specifically designed for medium-sized organisations in the UK to address these problems. The research adopts a practical approach through collaboration with an industry partner, facilitating a case study for real-world application.
Adopting a mixed-methods research design grounded in pragmatism, the study commenced with a qualitative study in the form of a focus group. This exploratory phase, critical for understanding SME challenges, yielded rich data revealing key management themes in strategy, energy, and data. The framework design was supported by a materiality assessment and input from key stakeholders on three major iterations. The final framework comprises three phases: establishing a baseline carbon footprint, creating a carbon reduction plan, and strategically implementing this plan. The validation process, conducted at Knowsley Safari, successfully tested the initial two phases but faced constraints in fully assessing the third phase due to time limitations.
While the research achieved its primary aim of developing a novel carbon management framework for SMEs, it encountered limitations, notably in time and in the generalisability of findings due to reliance on a single case study. Future research could test the framework across diverse SME settings to establish its broader applicability and effectiveness in aiding the UK's net-zero emission goals.
Contract-Wrapped Property
For nearly two centuries, the law has allowed servitudes that "run with" real property while consistently refusing to permit servitudes attached to personal property. That is, owners of land can establish new, specific requirements for the property that bind all future owners, but owners of chattels cannot. In recent decades, however, firms have increasingly begun relying on contract provisions that purport to bind future owners of chattels. These developments began in the context of software licensing, but they have started to migrate to chattels not encumbered by software. Courts encountering these provisions have mostly missed their significance, focusing instead on questions of contract doctrine, such as whether opening shrink wrap constitutes assent to be bound. Property concepts never enter their analysis. The result of this oversight is that courts have de facto recognized equitable servitudes on chattels, a category that our legal system has long forbidden. Yet because courts are often unfamiliar with property-law principles, and because lawyers have failed to make property-based arguments, individual contracts cases are remodeling the architecture of property rights without anyone realizing it. This Article identifies the unexpected emergence of servitudes on chattels via contract law. It explores the consequences of that development and argues that we should see it as deeply troubling. By unwittingly reestablishing equitable servitudes on chattels, something our legal system rejected long ago for good reason, this change in law threatens to undo longstanding precedent, disrupt settled expectations, and effectively recognize a new form of property. More generally, elevating contract over other private law doctrines disrupts the private law's equilibrium, in which a complementary suite of doctrines developed to promote economic liberty while curtailing opportunistic impulses.
While the pathologies that have flourished internally in modern contract doctrine have been well studied by scholars, the way in which contract law is threatening to consume property and other areas of private law has received less attention. Using servitudes on personal property as a window into the larger problem of contract-dominated private law, this Article explores the private law's role in shaping environmental conservation, autonomy, innovation, and the legitimacy of the law itself. Those values are all in jeopardy if contract law is allowed to encroach on property and to erode the very concept of ownership.
Secure storage systems for untrusted cloud environments
The cloud has become established for applications that need to be scalable and highly
available. However, moving data to data centers owned and operated by a third party,
i.e., the cloud provider, raises security concerns because a cloud provider could easily
access and manipulate the data or program flow, preventing the cloud from being
used for certain sensitive applications, such as medical or financial ones.
Hardware vendors are addressing these concerns by developing Trusted Execution
Environments (TEEs) that make the CPU state and parts of memory inaccessible from
the host software. While TEEs protect the current execution state, they do not provide
security guarantees for data that does not fit or reside in the protected memory
area, such as network traffic and persistent storage.
In this work, we address TEEs' limitations in three ways: first, we extend the
trust of TEEs to persistent storage; second, we extend that trust to multiple
nodes in a network; and third, we propose a compiler-based solution for accessing
heterogeneous memory regions. More specifically,
• SPEICHER extends the trust provided by TEEs to persistent storage. SPEICHER
implements a key-value interface. Its design is based on LSM data structures, but
extends them to provide confidentiality, integrity, and freshness for the stored
data. Thus, SPEICHER can prove to the client that the data has not been tampered
with by an attacker.
• AVOCADO is a distributed in-memory key-value store (KVS) that extends the
trust that TEEs provide across the network to multiple nodes, allowing KVSs to
scale beyond the boundaries of a single node. On each node, AVOCADO carefully
divides data between trusted memory and untrusted host memory, to maximize
the amount of data that can be stored on each node. AVOCADO leverages the
fact that we can model network attacks as crash-faults to trust other nodes with
a hardened ABD replication protocol.
• TOAST is based on the observation that modern high-performance systems
often use several different heterogeneous memory regions that are not easily
distinguishable by the programmer. The number of regions is increased by the
fact that TEEs divide memory into trusted and untrusted regions. TOAST is a
compiler-based approach to unify access to different heterogeneous memory
regions and provides programmability and portability. TOAST uses a
load/store interface to abstract most library interfaces for different memory
regions.
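As a loose illustration of the integrity and freshness guarantees that SPEICHER targets, the sketch below shows one common way such checks can work: a MAC over (key, value, version) detects tampering, and a version counter kept in trusted memory detects rollback. This is not the thesis's actual design; the key handling, record layout, and function names are invented for the example.

```python
import hmac
import hashlib

SECRET = b"tee-sealed-key"  # assumed to be sealed inside the TEE

def seal(key: str, value: bytes, version: int) -> dict:
    # Store the value alongside a MAC binding key, value, and version.
    msg = key.encode() + value + version.to_bytes(8, "big")
    tag = hmac.new(SECRET, msg, hashlib.sha256).digest()
    return {"value": value, "version": version, "tag": tag}

def unseal(key: str, record: dict, expected_version: int) -> bytes:
    # Integrity: recompute the MAC over the stored fields.
    msg = key.encode() + record["value"] + record["version"].to_bytes(8, "big")
    tag = hmac.new(SECRET, msg, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, record["tag"]):
        raise ValueError("integrity check failed")
    # Freshness: the version must match the monotonic counter the
    # client keeps in protected memory, preventing rollback attacks.
    if record["version"] != expected_version:
        raise ValueError("stale record (possible rollback)")
    return record["value"]

rec = seal("k", b"v1", version=3)
assert unseal("k", rec, expected_version=3) == b"v1"
```

A real system additionally has to persist and recover the trusted counters themselves, which is where much of SPEICHER's design effort lies.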
Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding
This work aims at decreasing the end-to-end generation latency of large
language models (LLMs). One of the major causes of the high generation latency
is the sequential decoding approach adopted by almost all state-of-the-art
LLMs. In this work, motivated by the thinking and writing process of humans, we
propose "Skeleton-of-Thought" (SoT), which guides LLMs to first generate the
skeleton of the answer and then to fill in the contents of each skeleton point
in parallel, via parallel API calls or batched decoding. Not only
does SoT provide considerable speed-up (up to 2.39x across 11 different LLMs),
but it can also potentially improve the answer quality on several question
categories in terms of diversity and relevance. SoT is an initial attempt at
data-centric optimization for efficiency, and it reveals the potential of pushing
LLMs to think more like a human for better answer quality.
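The two-stage pattern the abstract describes can be sketched as follows; `ask_llm` is a stand-in stub for a real model or API call, and the canned skeleton is invented for the example, so this is an illustration of the decoding pattern, not the paper's implementation.

```python
from concurrent.futures import ThreadPoolExecutor

def ask_llm(prompt: str) -> str:
    # Stub standing in for a real LLM/API call.
    canned = {"skeleton": "1. Define terms\n2. Give example\n3. Conclude"}
    return canned.get(prompt, f"[expansion of: {prompt}]")

def skeleton_of_thought(question: str) -> str:
    # Stage 1: one sequential call produces a short skeleton of the answer.
    skeleton = ask_llm("skeleton")
    points = [line.split(". ", 1)[1] for line in skeleton.splitlines()]
    # Stage 2: expand every skeleton point concurrently (parallel API
    # calls here; batched decoding would serve the same role on-device).
    with ThreadPoolExecutor() as pool:
        bodies = list(pool.map(ask_llm, points))
    return "\n".join(f"{p}: {b}" for p, b in zip(points, bodies))

answer = skeleton_of_thought("What is X?")
assert "Define terms" in answer
```

The latency win comes from stage 2: instead of decoding the full answer token by token, the per-point expansions proceed concurrently, bounded by the longest single point.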
Adapting Distributed Embedded Applications During Operation
The availability of third-party apps is among the key success factors for software ecosystems: The users benefit from more features and innovation speed, while third-party solution vendors can leverage the platform to create successful offerings.
However, this requires a certain decoupling of the engineering activities of the different parties that has not yet been achieved for distributed control systems.
While late and dynamic integration of third-party components would be required, the resulting control systems must provide high reliability with respect to real-time requirements, which leads to integration complexity.
Closing this gap would particularly contribute to the vision of software-defined manufacturing, where an ecosystem of modern IT-based control system components could lead to faster innovations due to their higher abstraction and availability of various frameworks.
Therefore, this thesis addresses the research question:
How can we use modern IT technologies to enable independent evolution and easy third-party integration of software components in distributed control systems where deterministic end-to-end reactivity is required, and, especially, how can we apply distributed changes to such systems consistently and reactively during operation?
This thesis describes the challenges and related approaches in detail and points out that existing approaches do not fully address our research question.
To tackle this gap, a formal specification of a runtime platform concept is presented in conjunction with a model-based engineering approach.
The engineering approach decouples the engineering steps of component definition, integration, and deployment.
The runtime platform supports this approach by isolating the components, while still offering predictable end-to-end real-time behavior.
Independent evolution of software components is supported through a concept for synchronous reconfiguration during full operation, i.e., dynamic orchestration of components.
Time-critical state transfer is supported as well, and leads to at most a bounded quality degradation.
The reconfiguration planning is supported by analysis concepts, including simulation of a formally specified system and reconfiguration, and analyzing potential quality degradation with the evolving dataflow graph (EDFG) method.
A platform-specific realization of the concepts, the real-time container architecture, is described as a reference implementation.
The model and the prototype are evaluated regarding their feasibility and applicability of the concepts by two case studies.
The first case study is a minimalistic distributed control system used in different setups with different component variants and reconfiguration plans to compare the model and the prototype and to gather runtime statistics.
The second case study is a smart factory showcase system with more challenging application components and interface technologies.
The conclusion is that the concepts are feasible and applicable, even though both the concepts and the prototype still need further work, for example to reach shorter cycle times.
Security and Privacy of Resource Constrained Devices
The thesis aims to present a comprehensive and holistic overview of cybersecurity and privacy & data protection aspects related to IoT resource-constrained devices. Chapter 1 introduces the current technical landscape by providing a working definition and architecture taxonomy of "Internet of Things" and "resource-constrained devices", coupled with a threat landscape in which each specific attack is linked to a layer of the taxonomy. Chapter 2 lays down the theoretical foundations for an interdisciplinary approach and a unified, holistic vision of cybersecurity, safety, and privacy justified by the "IoT revolution", through the so-called infraethical perspective. Chapter 3 investigates whether and to what extent the fast-evolving European cybersecurity regulatory framework addresses the security challenges brought about by the IoT by allocating legal responsibilities to the right parties. Chapters 4 and 5 focus, on the other hand, on "privacy", understood by proxy to include EU data protection. In particular, Chapter 4 addresses three legal challenges that ubiquitous IoT data and metadata processing poses to the EU privacy and data protection legal frameworks, i.e., the ePrivacy Directive and the GDPR. Chapter 5 casts light on the risk management tool enshrined in EU data protection law, that is, the Data Protection Impact Assessment (DPIA), and proposes an original DPIA methodology for connected devices, building on the CNIL (French data protection authority) model.
Generalising weighted model counting
Given a formula in propositional or (finite-domain) first-order logic and some non-negative weights, weighted model counting (WMC) is a function problem that asks to compute the sum of the weights of the models of the formula. Originally used as a flexible way of performing probabilistic inference on graphical models, WMC has found many applications across artificial intelligence (AI), machine learning, and other domains. Areas of AI that rely on WMC include explainable AI, neural-symbolic AI, probabilistic programming, and statistical relational AI. WMC also has applications in bioinformatics, data mining, natural language processing, prognostics, and robotics.
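The WMC definition above can be made concrete with a tiny brute-force example: sum, over the satisfying assignments of a propositional formula, the product of per-literal weights. The variable names and weights below are made up for illustration, and real solvers avoid this exponential enumeration.

```python
from itertools import product

def wmc(variables, clauses, weight):
    """Brute-force weighted model count.

    clauses: CNF as a list of clauses, each a list of (var, polarity)
    literals; weight(var, value) gives the non-negative literal weight.
    """
    total = 0.0
    for values in product([False, True], repeat=len(variables)):
        model = dict(zip(variables, values))
        # Keep only models: every clause needs a satisfied literal.
        if all(any(model[v] == pol for v, pol in clause) for clause in clauses):
            w = 1.0
            for v in variables:
                w *= weight(v, model[v])
            total += w
    return total

# With weights chosen as probabilities of independent variables,
# WMC computes the probability of the formula: here P(a or b)
# with P(a)=0.3, P(b)=0.6, i.e. 1 - 0.7*0.4 = 0.72.
probs = {"a": 0.3, "b": 0.6}
w = lambda v, val: probs[v] if val else 1 - probs[v]
result = wmc(["a", "b"], [[("a", True), ("b", True)]], w)
assert abs(result - 0.72) < 1e-9
```

This probability reading is exactly why WMC serves as a backend for probabilistic inference: encode the distribution and the query as a weighted formula, then count.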
In this work, we are interested in revisiting the foundations of WMC and considering generalisations of some of the key definitions in the interest of conceptual clarity and practical efficiency. We begin by developing a measure-theoretic perspective on WMC, which suggests a new and more general way of defining the weights of an instance. This new representation can be as succinct as standard WMC but can also expand as needed to represent less-structured probability distributions. We demonstrate the performance benefits of the new format by developing a novel WMC encoding for Bayesian networks. We then show how existing WMC encodings for Bayesian networks can be transformed into this more general format and what conditions ensure that the transformation is correct (i.e., preserves the answer). Combining the strengths of the more flexible representation with the tricks used in existing encodings yields further efficiency improvements in Bayesian network probabilistic inference.
Next, we turn our attention to the first-order setting. Here, we argue that the capabilities of practical model counting algorithms are severely limited by their inability to perform arbitrary recursive computations. To enable arbitrary recursion, we relax the restrictions that typically accompany domain recursion and generalise circuits (used to express a solution to a model counting problem) to graphs that are allowed to have cycles. These improvements enable us to find efficient solutions to counting fundamental structures such as injections and bijections that were previously unsolvable by any available algorithm.
The second strand of this work is concerned with synthetic data generation. Testing algorithms across a wide range of problem instances is crucial to ensure the validity of any claim about one algorithm's superiority over another. However, benchmarks are often limited and fail to reveal differences among the algorithms. First, we show how random instances of probabilistic logic programs (which typically use WMC algorithms for inference) can be generated using constraint programming. We also introduce a new constraint to control the independence structure of the underlying probability distribution and provide a combinatorial argument for the correctness of the constraint model. This model allows us to, for the first time, experimentally investigate inference algorithms on more than just a handful of instances. Second, we introduce a random model for WMC instances with a parameter that influences primal treewidth, the parameter most commonly used to characterise the difficulty of an instance. We show that the easy-hard-easy pattern with respect to clause density is different for algorithms based on dynamic programming and algebraic decision diagrams than for all other solvers. We also demonstrate that all WMC algorithms scale exponentially with respect to primal treewidth, although at differing rates.
Improving low latency applications for reconfigurable devices
This thesis seeks to improve low latency application performance via architectural improvements in reconfigurable devices. This is achieved by improving resource utilisation and access, and by exploiting the different environments within which reconfigurable devices are deployed.
Our first contribution leverages devices deployed at the network level to enable the low latency processing of financial market data feeds. Financial exchanges transmit messages via two identical data feeds to reduce the chance of message loss. We present an approach to arbitrate these redundant feeds at the network level using a Field-Programmable Gate Array (FPGA). With support for any messaging protocol, we evaluate our design using the NASDAQ TotalView-ITCH, OPRA, and ARCA data feed protocols, and provide two simultaneous outputs: one prioritising low latency, and one prioritising high reliability with three dynamically configurable windowing methods.
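The arbitration idea, taking whichever copy of each sequence-numbered message is seen first and suppressing duplicates, can be sketched in software as below. The thesis implements this in FPGA hardware with protocol-specific framing, so this batch-oriented Python version is only a simplified model of the logic.

```python
def arbitrate(feed_a, feed_b):
    """Merge two redundant, sequence-numbered feeds, emitting each
    message exactly once, in sequence order.

    Each feed is a list of (seq, payload) tuples; either feed may have
    dropped messages, which the other feed fills in.
    """
    next_seq = 1
    merged = []
    # Process in sequence order; in hardware this is event-driven,
    # per-message, rather than a batch sort.
    for seq, payload in sorted(feed_a + feed_b, key=lambda m: m[0]):
        if seq >= next_seq:          # not a duplicate of an emitted message
            merged.append((seq, payload))
            next_seq = seq + 1
    return merged

a = [(1, "x"), (3, "z")]             # feed A dropped message 2
b = [(1, "x"), (2, "y"), (3, "z")]   # feed B is complete
assert arbitrate(a, b) == [(1, "x"), (2, "y"), (3, "z")]
```

The low-latency/high-reliability split mentioned above corresponds to how long the arbiter is willing to wait for a gap to be filled by the other feed before emitting.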
Our second contribution is a new ring-based architecture for low latency, parallel access to FPGA memory. Traditional FPGA memory is formed by grouping block memories (BRAMs) together and accessing them as a single device. Our architecture accesses these BRAMs independently and in parallel. Targeting memory-based computing, which stores pre-computed function results in memory, we benefit low latency applications that rely on: highly-complex functions; iterative computation; or many parallel accesses to a shared resource. We assess square root, power, trigonometric, and hyperbolic functions within the FPGA, and provide a tool to convert Python functions to our new architecture.
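Memory-based computing, as described above, trades computation for a table lookup: the function is evaluated ahead of time over a quantised input range, and a runtime call becomes a single memory read, which the ring architecture can serve in parallel. The fixed-point format and table size below are illustrative choices, not the thesis's actual parameters.

```python
import math

BITS = 10                  # 10-bit input -> 1024-entry table
SCALE = 2 ** BITS

# Build time: tabulate sqrt(x) for x in [0, 1) at 1/SCALE resolution.
# In the thesis this table would live across independently accessed BRAMs.
SQRT_TABLE = [math.sqrt(i / SCALE) for i in range(SCALE)]

def sqrt_lookup(x: float) -> float:
    # Runtime: quantise the input and read the precomputed result.
    index = min(int(x * SCALE), SCALE - 1)
    return SQRT_TABLE[index]

assert abs(sqrt_lookup(0.25) - 0.5) < 1e-3
```

The accuracy/area trade-off is explicit: doubling BITS halves the quantisation step but doubles the memory footprint, which is exactly the kind of choice the mentioned Python-to-architecture tool has to make.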
Our third contribution extends the ring-based architecture to support any FPGA processing element. We unify E heterogeneous processing elements within compute pools, with each element implementing the same function and the pool serving D parallel function calls. Our implementation-agnostic approach supports processing elements with different latencies, implementations, and pipeline lengths, as well as non-deterministic latencies. Compute pools evenly balance access to processing elements across the entire application, and are evaluated by implementing eight different neural network activation functions within an FPGA.
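A toy model of the compute-pool idea: E interchangeable implementations of one function serve callers, with the pool deciding which element handles each call. Round-robin dispatch is an assumption made for this sketch; the thesis's hardware arbitration is more involved, since it must cope with differing and non-deterministic element latencies.

```python
import itertools

def make_pool(elements):
    """Wrap E interchangeable processing elements behind one callable."""
    ring = itertools.cycle(range(len(elements)))
    def call(x):
        # Round-robin: each call goes to the next element in turn,
        # spreading load evenly across the pool.
        return elements[next(ring)](x)
    return call

# Three "processing elements" all implementing the same activation function.
relu_variants = [lambda x: max(0.0, x)] * 3
pool = make_pool(relu_variants)
assert [pool(v) for v in (-1.0, 2.5, 0.0)] == [0.0, 2.5, 0.0]
```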
Behavior quantification as the missing link between fields: Tools for digital psychiatry and their role in the future of neurobiology
The great behavioral heterogeneity observed between individuals with the same
psychiatric disorder and even within one individual over time complicates both
clinical practice and biomedical research. However, modern technologies are an
exciting opportunity to improve behavioral characterization. Existing
psychiatry data sources that are qualitative or unscalable, such as patient surveys
or clinical interviews, can now be collected at greater capacity and analyzed
to produce new quantitative measures. Furthermore, recent capabilities for
continuous collection of passive sensor streams, such as phone GPS or
smartwatch accelerometer, open avenues of novel questioning that were
previously entirely unrealistic. Their temporally dense nature enables a
cohesive study of real-time neural and behavioral signals.
To develop comprehensive neurobiological models of psychiatric disease, it
will be critical to first develop strong methods for behavioral quantification.
There is huge potential in what can theoretically be captured by current
technologies, but this in itself presents a large computational challenge --
one that will necessitate new data processing tools, new machine learning
techniques, and ultimately a shift in how interdisciplinary work is conducted.
In my thesis, I detail research projects that take different perspectives on
digital psychiatry, subsequently tying ideas together with a concluding
discussion on the future of the field. I also provide software infrastructure
where relevant, with extensive documentation.
Major contributions include scientific arguments and proof of concept results
for daily free-form audio journals as an underappreciated psychiatry research
datatype, as well as novel stability theorems and pilot empirical success for a
proposed multi-area recurrent neural network architecture.