L4 Pointer: An efficient pointer extension for spatial memory safety support without hardware extension
Since buffer overflow has long been a frequently occurring, high-risk
vulnerability, various methods have been developed to support spatial memory
safety and prevent buffer overflows. However, every proposed method, although
partly effective, has its limitations. Software-only support for spatial
memory safety inherently entails runtime overhead, due to expensive bounds
checking or the large memory footprint of metadata. In contrast,
hardware-assisted methods are unavailable without specific hardware
support. To mitigate these limitations, we propose L4 Pointer, a 128-bit
pointer extended from a normal 64-bit virtual address. By using the extra
bits and widely available SIMD operations, L4 Pointer achieves lower slowdown
and higher performance than existing methods, without requiring hardware
extensions.
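The fat-pointer idea can be sketched as follows. This is an illustrative model only: the field layout, field widths, and function names are assumptions for exposition, not the paper's actual L4 Pointer encoding.

```python
# Illustrative sketch: the 64 bits above a normal virtual address hold bounds
# metadata, so a spatial safety check is two comparisons on values already in
# the pointer, with no separate metadata-table lookup (in hardware-free form,
# a single wide SIMD compare). Field layout is a hypothetical choice, and toy
# addresses are kept small enough to fit the 32-bit bound fields.

ADDR_MASK = (1 << 64) - 1
FIELD_MASK = (1 << 32) - 1
LOWER_SHIFT = 96   # hypothetical position of the lower-bound field
UPPER_SHIFT = 64   # hypothetical position of the upper-bound field

def make_l4_pointer(addr, base, size):
    """Pack a virtual address and its object's bounds into one 128-bit word."""
    lower = base & FIELD_MASK
    upper = (base + size) & FIELD_MASK
    return (lower << LOWER_SHIFT) | (upper << UPPER_SHIFT) | (addr & ADDR_MASK)

def checked_access(fat_ptr, offset):
    """Bounds-check an access before 'dereferencing'; raises on violation."""
    addr = fat_ptr & ADDR_MASK
    lower = (fat_ptr >> LOWER_SHIFT) & FIELD_MASK
    upper = (fat_ptr >> UPPER_SHIFT) & FIELD_MASK
    target = addr + offset
    if not lower <= target < upper:
        raise MemoryError("spatial memory safety violation")
    return target
```

Because the bounds travel inside the pointer itself, no per-object shadow table is consulted on each access, which is where the sketch's (and, per the abstract, L4 Pointer's) overhead savings come from.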
Screening with Disadvantaged Agents
Motivated by school admissions, this paper studies screening in a population
with both advantaged and disadvantaged agents. A school is interested in
admitting the most skilled students, but relies on imperfect test scores that
reflect both skill and effort. Students are limited by a budget on effort, with
disadvantaged students having tighter budgets. This raises a challenge for the
principal: among agents with similar test scores, it is difficult to
distinguish between students with high skills and students with large budgets.
Our main result is an optimal stochastic mechanism that maximizes the gains
achieved from admitting "high-skill" students minus the costs incurred from
admitting "low-skill" students, when considering two skill types and two
budget types. Our mechanism makes it possible to give a higher probability of
admission to a high-skill student than to a low-skill student, even when the
low-skill student can potentially obtain a higher test score due to a higher
budget. Further, we extend our admission problem to a setting in which
students uniformly receive an exogenous subsidy that increases their budget
for effort. This extension can only help the school's admission objective,
and we show that the optimal mechanism with exogenous subsidies has the same
characterization as optimal mechanisms for the original problem.
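The school's objective described above can be written compactly. The notation here is ours, chosen for exposition; the paper's own formulation may differ.

```latex
% Hedged sketch of the principal's objective (notation assumed, not the
% paper's). p(t) is the (possibly stochastic) probability of admitting a
% student with observed test score t; u is the gain from admitting a
% high-skill student and c the cost of admitting a low-skill one.
\max_{p(\cdot)\,:\,p(t)\in[0,1]} \quad
  u \,\Pr[\text{admit} \mid \text{high skill}]
  \;-\;
  c \,\Pr[\text{admit} \mid \text{low skill}]
```

The difficulty the abstract identifies is that the score distribution conditional on each skill type also depends on the (unobserved) budget type, since a larger effort budget shifts scores upward.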
Latency-driven performance in data centres
Data centre based cloud computing has revolutionised the way businesses use computing infrastructure. Instead of building their own data centres, companies rent computing resources
and deploy their applications on cloud hardware. Providing customers with well-defined application performance guarantees is of paramount importance to ensure transparency and to build
a lasting collaboration between users and cloud operators. A user’s application performance is
subject to the constraints of the resources it has been allocated and to the impact of the network
conditions in the data centre.
In this dissertation, I argue that application performance in data centres can be improved through
cluster scheduling of applications informed by predictions of application performance for given
network latency, and measurements of current network latency in data centres between hosts.
Firstly, I show how to use the Precision Time Protocol (PTP), through an open-source software
implementation, PTPd, to measure network latency and packet loss in data centres. I propose
PTPmesh, which uses PTPd, as a cloud network monitoring tool for tenants. Furthermore, I
conduct a measurement study using PTPmesh across different cloud providers, finding that network
latency variability in data centres is still common. Normal latency values in data centres are
on the order of tens or hundreds of microseconds, while unexpected events, such as network
congestion or packet loss, can lead to latency spikes on the order of milliseconds.
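PTP's delay request-response exchange, on which PTPd builds, derives path delay and clock offset from four timestamps. The arithmetic below is the standard IEEE 1588 calculation, not anything specific to PTPmesh; the function name and the symmetric-path assumption are ours.

```python
# Standard IEEE 1588 delay request-response arithmetic:
#   t1: master sends Sync          t2: slave receives Sync
#   t3: slave sends Delay_Req      t4: master receives Delay_Req
# Assumes the forward and reverse paths have equal delay.

def ptp_estimates(t1, t2, t3, t4):
    """Return (mean_path_delay, clock_offset) from one PTP exchange."""
    mean_path_delay = ((t2 - t1) + (t4 - t3)) / 2
    clock_offset = ((t2 - t1) - (t4 - t3)) / 2
    return mean_path_delay, clock_offset

# Toy exchange (microseconds): true one-way delay 100, slave clock 50 ahead.
delay, offset = ptp_estimates(0, 150, 200, 250)
```

Asymmetric paths, e.g. congestion in only one direction, violate the symmetry assumption and show up as offset error, which is one reason latency spikes are visible in PTP-based monitoring.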
Secondly, I show that network latency matters for certain distributed applications: even small
amounts, on the order of tens or hundreds of microseconds, significantly reduce their performance. I propose a methodology to determine the impact of network latency on distributed applications'
performance by injecting artificial delay into the network of an experimental setup. Based on
the experimental results, I build functions that predict the performance of an application for a
given network latency.
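The delay-injection methodology can be sketched as a simple fitting exercise: measure application performance at several injected delay levels, then fit a function used for prediction. The linear model and all sample numbers below are illustrative assumptions, not results or models from the dissertation.

```python
# Hedged sketch: inject artificial delay, measure performance at each level,
# fit a predictive function. A linear model is assumed purely for
# illustration; real applications may degrade non-linearly.

def fit_linear(xs, ys):
    """Ordinary least-squares fit y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    return a, mean_y - a * mean_x

# Hypothetical measurements: injected one-way delay (us) vs. throughput (req/s)
injected_delay_us = [0, 50, 100, 200, 400]
throughput = [10000, 9000, 8000, 6000, 2000]

a, b = fit_linear(injected_delay_us, throughput)

def predict_throughput(delay_us):
    """Predict application performance for a given network latency."""
    return a * delay_us + b
```

A scheduler can then invert such functions: given the measured latency between two candidate hosts, it predicts the performance the application would achieve if placed there.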
Given the network latency variability observed in data centres, applications' performance is
determined by their placement within the data centre. Thirdly, I propose latency-driven, application-performance-aware cluster scheduling as a way to provide performance guarantees
to applications. I introduce NoMora, a cluster scheduling architecture that leverages predictions of application performance as a function of network latency, combined with dynamic
network latency measurements taken between pairs of hosts in data centres, to place applications. Moreover, I show that NoMora improves application performance by choosing better
placements than other scheduling policies.
MEASUREMENT FOR EUROPE: TRAINING AND RESEARCH FOR INTERNET COMMUNICATIONS SCIENCE, European Commission FP7 Marie Curie Innovative Training Networks (ITN)
ENDEAVOUR, European Commission Horizon 2020 (H2020) Industrial Leadership (IL)
Interim research assessment 2003-2005 - Computer Science
This report primarily serves as a source of information for the 2007 Interim Research Assessment Committee for Computer Science at the three technical universities in the Netherlands. The report also provides information for others interested in our research activities.
Virtualization of Micro-architectural Components Using Software Solutions
Cloud computing has become a dominant computing paradigm in the information technology industry due to its flexibility and efficiency in resource sharing and management. The key technology that enables cloud computing is virtualization. Essential requirements in a virtualized system, where several virtual machines (VMs) run on the same physical machine, include performance isolation and predictability. To enforce these properties, the virtualization software (called the hypervisor) must find a way to divide the physical resources (e.g., physical memory, processor time) of the system and allocate them to VMs according to the amount of virtual resources defined for each VM. However, modern hardware has complex architectures, and some micro-architectural resources such as processor caches, memory controllers, and interconnects cannot be divided and allocated to VMs. They are globally shared among all VMs, which compete for their use, leading to contention. Therefore, performance isolation and predictability are compromised. In this thesis, we propose software solutions for preventing unpredictability in performance due to micro-architectural components. The first contribution is called Kyoto, a solution to the cache contention issue inspired by the polluter-pays principle. A VM is said to pollute the cache if it provokes significant cache replacements that impact the performance of other VMs. Hence, using the Kyoto system, the provider can encourage cloud users to book pollution permits for their VMs. The second contribution addresses the problem of efficiently virtualizing NUMA machines. The major challenge comes from the fact that the hypervisor regularly reconfigures the placement of a VM over the NUMA topology. However, neither guest operating systems (OSs) nor system runtime libraries (e.g., HotSpot) are designed to take NUMA topology changes into account at runtime, leading end-user applications to unpredictable performance.
We present eXtended Para-Virtualization (XPV), a new principle to efficiently virtualize a NUMA architecture. XPV consists of revisiting the interface between the hypervisor and the guest OS, and between the guest OS and system runtime libraries, so that they can dynamically take NUMA topology changes into account.
Proceedings of JCC&BD 2018: VI Jornadas de Cloud Computing & Big Data
This volume compiles the papers presented at the VI Jornadas de Cloud Computing & Big Data (JCC&BD), held from 25 to 29 June 2018 at the Facultad de Informática of the Universidad Nacional de La Plata.
Universidad Nacional de La Plata (UNLP) - Facultad de Informática
High Frequency Physiological Data Quality Modelling in the Intensive Care Unit
Intensive care medicine is a resource-intensive environment in which technical and clinical decision making relies on rapidly assimilating a huge amount of categorical and time-series physiologic data. These signals arrive at variable frequencies and with variable quality. Intensive care clinicians rely on high-frequency measurements of the patient's physiologic state to assess critical illness and the response to therapies. Physiological waveforms have the potential to reveal details about the patient state in very fine resolution, and can assist, augment, or even automate decision making in intensive care. However, these high-frequency time-series physiologic signals pose many challenges for modelling. These signals contain noise, artefacts, and systematic timing errors, all of which can impact the quality and accuracy of models being developed and the reproducibility of results. In this context, the central theme of this thesis is to model the process of data collection in an intensive care environment from a statistical, metrological, and biosignals engineering perspective, with the aim of identifying, quantifying, and, where possible, correcting errors introduced by the data collection systems. Three different aspects of physiological measurement were explored in detail, namely measurement of blood oxygenation, measurement of blood pressure, and measurement of time. A literature review of sources of errors and uncertainty in timing systems used in intensive care units was undertaken. A signal alignment algorithm was developed and applied to approximately 34,000 patient-hours of simultaneously recorded electroencephalography and physiological waveforms collected at the bedside using two different medical devices.
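A common building block for aligning simultaneously recorded signals is estimating the timing offset that maximises their cross-correlation. The sketch below illustrates that idea only; it is an assumed, generic approach, not necessarily the alignment algorithm developed in the thesis.

```python
# Hedged sketch: estimate the sample offset between two waveforms recorded by
# two devices (e.g. with a clock offset between them) by brute-force search
# over candidate lags, scoring each lag by the signals' cross-correlation.

def estimate_delay(ref, sig, max_lag):
    """Estimate how many samples `sig` lags behind `ref` (positive = later)."""
    def corr(lag):
        return sum(ref[i] * sig[i + lag]
                   for i in range(len(ref))
                   if 0 <= i + lag < len(sig))
    return max(range(-max_lag, max_lag + 1), key=corr)

# Toy example: `sig` is `ref` delayed by 3 samples.
ref = [0, 0, 1, 4, 9, 4, 1, 0, 0, 0]
sig = [0, 0, 0, 0, 0, 1, 4, 9, 4, 1]
lag = estimate_delay(ref, sig, 5)
```

At ICU waveform rates this brute-force search is only practical over short windows; production alignment would use FFT-based correlation and handle drifting (not just constant) clock offsets, which is part of what makes the problem hard at 34,000 patient-hours of data.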