On the definition of a theoretical concept of an operating system
We consider what a definition of a theoretical concept of an operating system, suitable for incorporation into a mathematical theory of operating systems, might look like. This is regarded as valuable preparation for the development of such a theory.
FHPM: Fine-grained Huge Page Management For Virtualization
As more data-intensive tasks with large footprints are deployed in virtual
machines (VMs), huge pages are widely used to eliminate the increasing address
translation overhead. However, once the huge page mapping is established, all
the base page regions in the huge page share a single extended page table (EPT)
entry, so that the hypervisor loses awareness of accesses to base page regions.
None of the state-of-the-art solutions can obtain access information at base
page granularity for huge pages. We observe that this can lead to incorrect
decisions by the hypervisor, such as incorrect data placement in a tiered
memory system and unshared base page regions when sharing pages.
This paper proposes FHPM, a fine-grained huge page management for
virtualization without hardware and guest OS modification. FHPM can identify
access information at base page granularity, and dynamically promote and demote
pages. A key insight of FHPM is to redirect the EPT huge page directory entries
(PDEs) to new companion pages so that the MMU can track access information
within huge pages. Then, FHPM can promote and demote pages according to the
current hot page pressure to balance address translation overhead and memory
usage. At the same time, FHPM proposes a VM-friendly page splitting and
collapsing mechanism to avoid extra VM-exits. In combination, FHPM minimizes
the monitoring and management overhead and ensures that the hypervisor gets
fine-grained VM memory accesses to make the proper decision. We apply FHPM to
improve tiered memory management (FHPM-TMM) and to promote page sharing
(FHPM-Share). FHPM-TMM achieves a performance improvement of up to 33% over pure
huge page management and up to 61% over pure base page management. FHPM-Share
can save 41% more memory than Ingens, a state-of-the-art page sharing solution,
with comparable performance.
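The promote/demote policy described above can be illustrated with a small sketch. The code below simulates the kind of decision a hypervisor could make once per-base-page access counts inside a huge page are visible; the function names, thresholds, and pure-Python setting are illustrative assumptions, not FHPM's actual implementation.

```python
# Hypothetical sketch of fine-grained promote/demote decisions, assuming
# per-base-page access counts are available (as FHPM's companion pages make
# possible). Thresholds are illustrative, not taken from the paper.

HUGE_PAGE_BASE_PAGES = 512  # one 2 MiB huge page = 512 x 4 KiB base pages on x86-64

def should_demote(access_counts, hot_threshold=1, min_hot_fraction=0.25):
    """Split a huge page into base pages when only a small fraction of its
    base-page regions are hot, so cold regions can be placed in slow-tier
    memory or shared independently."""
    hot = sum(1 for c in access_counts if c >= hot_threshold)
    return hot / len(access_counts) < min_hot_fraction

def should_promote(access_counts, hot_threshold=1, min_hot_fraction=0.75):
    """Collapse base pages back into a huge page when most of the region is
    hot, trading placement granularity for cheaper address translation."""
    hot = sum(1 for c in access_counts if c >= hot_threshold)
    return hot / len(access_counts) >= min_hot_fraction

# A huge page where only 16 of 512 base-page regions were touched:
sparse = [3] * 16 + [0] * (HUGE_PAGE_BASE_PAGES - 16)
# A region where almost every base page is accessed:
dense = [2] * 500 + [0] * 12

print(should_demote(sparse))   # True: mostly cold, split it
print(should_promote(dense))   # True: mostly hot, collapse it
```

The point of the sketch is the trade-off the abstract names: demotion exposes cold base-page regions to tiering and sharing decisions, while promotion restores the TLB and EPT-walk benefits of a single huge mapping.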
Design of reconfigurable embedded operating system kernels (Conception de noyaux de systèmes embarqués reconfigurables)
The prospect of the emergence of a global information-processing environment, in which most of the physical objects around us will be equipped with processors, given communication capabilities, and interconnected through various networks, forces us to redesign computing systems. Instead of heavy, monolithic, and hard-to-evolve systems, we must design light, flexible, and reconfigurable ones. This thesis presents a new architecture enabling the design and development of flexible and reconfigurable operating system kernels for embedded systems.
Putting Instruction Sequences into Effect
An attempt is made to define the concept of execution of an instruction
sequence. Execution is found to be a special case of directly putting an
instruction sequence into effect; directly putting into effect comprises
interpretation as well as execution. Directly putting into effect is itself a
special case of putting into effect, whose other special cases are classified
as indirectly putting into effect.
Harnessing Explanations: LLM-to-LM Interpreter for Enhanced Text-Attributed Graph Representation Learning
Representation learning on text-attributed graphs (TAGs) has become a
critical research problem in recent years. A typical example of a TAG is a
paper citation graph, where the text of each paper serves as node attributes.
Initial graph neural network (GNN) pipelines handled these text attributes by
transforming them into shallow or hand-crafted features, such as skip-gram or
bag-of-words features. Recent efforts have focused on enhancing these pipelines
with language models (LMs), which typically demand intricate designs and
substantial computational resources. With the advent of powerful large language
models (LLMs) such as GPT or Llama2, which demonstrate an ability to reason and
to utilize general knowledge, there is a growing need for techniques which
combine the textual modelling abilities of LLMs with the structural learning
capabilities of GNNs. Hence, in this work, we focus on leveraging LLMs to
capture textual information as features, which can be used to boost GNN
performance on downstream tasks. A key innovation is our use of explanations as
features: we prompt an LLM to perform zero-shot classification, request textual
explanations for its decision-making process, and design an LLM-to-LM
interpreter to translate these explanations into informative features that
enhance downstream GNNs. Our experiments demonstrate that our method achieves
state-of-the-art results on well-established TAG datasets, including Cora,
PubMed, ogbn-arxiv, as well as our newly introduced dataset, arXiv-2023.
Furthermore, our method significantly speeds up training, achieving a 2.88
times improvement over the closest baseline on ogbn-arxiv. Lastly, we believe
the versatility of the proposed method extends beyond TAGs and holds the
potential to enhance other tasks involving graph-text data~\footnote{Our codes
and datasets are available at: \url{https://github.com/XiaoxinHe/TAPE}}
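The pipeline this abstract describes (prompt an LLM for a prediction plus a textual explanation, encode the explanation into node features, feed those features to a GNN) can be sketched in miniature. In the sketch below, `fake_llm_explain` stands in for a real LLM call, a tiny bag-of-words encoder stands in for the LM interpreter, and a one-step mean-aggregation layer stands in for the GNN; all of these are stand-in assumptions, not the authors' TAPE code.

```python
# Hypothetical end-to-end sketch of explanations-as-features, with stubs in
# place of the real LLM, LM interpreter, and GNN.
from collections import Counter

def fake_llm_explain(title):
    # Stand-in for prompting an LLM to classify a node's text and explain
    # its decision; a real pipeline would call an LLM API here.
    return f"This paper about {title} likely belongs to the systems category."

def encode(text, vocab):
    # Stand-in for the LLM-to-LM interpreter: turn explanation text into a
    # fixed-size feature vector (here, term counts over a tiny vocabulary).
    counts = Counter(text.lower().replace(".", "").split())
    return [float(counts[w]) for w in vocab]

def gnn_layer(features, edges):
    # One mean-aggregation message-passing step: each node averages its own
    # feature vector with those of its neighbours (the structural half).
    n = len(features)
    out = []
    for i in range(n):
        neigh = [features[j] for j in range(n) if (i, j) in edges or (j, i) in edges]
        stack = [features[i]] + neigh
        out.append([sum(col) / len(stack) for col in zip(*stack)])
    return out

vocab = ["paper", "systems", "category"]
titles = ["huge page management", "operating system theory"]
feats = [encode(fake_llm_explain(t), vocab) for t in titles]
print(gnn_layer(feats, {(0, 1)}))  # -> [[1.0, 1.0, 1.0], [1.0, 1.0, 1.0]]
```

The design point the sketch isolates is the division of labour: the (stubbed) LLM contributes textual reasoning, the encoder compresses it into features, and the GNN layer mixes those features along the citation structure.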