
    L4 Pointer: An efficient pointer extension for spatial memory safety support without hardware extension

    Since buffer overflow has long been a frequently occurring, high-risk vulnerability, various methods have been developed to support spatial memory safety and prevent buffer overflows. However, every proposed method, although partly effective, has its limitations. Software-only support for spatial memory safety inherently entails runtime overhead, due to expensive bounds checking or the large memory footprint of metadata. Hardware-assisted methods, in contrast, are unavailable without the specific hardware they depend on. To mitigate these limitations, we propose L4 Pointer, a 128-bit pointer that extends the normal 64-bit virtual address. By using the extra bits and widespread SIMD operations, L4 Pointer achieves less slowdown and higher performance than existing methods, without requiring hardware extensions.
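
    The C sketch below illustrates the general idea of such a 128-bit fat pointer. The field layout, function names, and scalar bounds checks are illustrative assumptions, not the paper's actual encoding; per the abstract, L4 Pointer packs metadata into the upper 64 bits precisely so that checks and updates can be carried out with widespread SIMD operations rather than the scalar code shown here.

        #include <stdint.h>
        #include <stdio.h>
        #include <stdlib.h>

        /* Illustrative 128-bit "fat pointer": the low 64 bits keep the
         * ordinary virtual address; the high 64 bits carry hypothetical
         * bounds metadata (the pointer's offset into its allocation plus
         * the allocation size). */
        typedef struct {
            uint64_t addr;    /* normal 64-bit virtual address    */
            uint32_t offset;  /* addr - offset == allocation base */
            uint32_t size;    /* allocation size in bytes         */
        } l4ptr_t;

        static l4ptr_t l4_malloc(uint32_t size) {
            l4ptr_t p = { (uint64_t)(uintptr_t)malloc(size), 0, size };
            return p;
        }

        /* Pointer arithmetic updates address and offset in lockstep; moving
         * below the base wraps `offset` to a huge value, so checks fail. */
        static l4ptr_t l4_add(l4ptr_t p, int64_t d) {
            p.addr   += (uint64_t)d;
            p.offset += (uint32_t)d;
            return p;
        }

        /* Spatial safety check before an n-byte access. */
        static void l4_check(l4ptr_t p, uint32_t n) {
            if ((uint64_t)p.offset + n > p.size) {
                fprintf(stderr, "bounds violation at offset %u\n", p.offset);
                abort();
            }
        }

        static void l4_store8(l4ptr_t p, uint8_t v) {
            l4_check(p, 1);
            *(uint8_t *)(uintptr_t)p.addr = v;
        }

        int main(void) {
            l4ptr_t buf = l4_malloc(16);
            l4_store8(l4_add(buf, 15), 42);  /* last valid byte: ok      */
            l4_store8(l4_add(buf, 16), 42);  /* one past the end: aborts */
            return 0;
        }

    Keeping address and metadata side by side in one 128-bit value is what makes a vectorized implementation attractive: both halves can be updated or compared in a single wide operation.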

    Screening with Disadvantaged Agents

    Motivated by school admissions, this paper studies screening in a population with both advantaged and disadvantaged agents. A school is interested in admitting the most skilled students, but relies on imperfect test scores that reflect both skill and effort. Students are limited by a budget on effort, with disadvantaged students having tighter budgets. This raises a challenge for the principal: among agents with similar test scores, it is difficult to distinguish students with high skill from students with large budgets. Our main result is an optimal stochastic mechanism that maximizes the gains achieved from admitting "high-skill" students minus the costs incurred from admitting "low-skill" students, for two skill types and n budget types. Our mechanism makes it possible to give a higher probability of admission to a high-skill student than to a low-skill one, even when the low-skill student can obtain a higher test score thanks to a larger budget. Further, we extend the admission problem to a setting in which students uniformly receive an exogenous subsidy that increases their effort budget. This extension can only help the school's admission objective, and we show that the optimal mechanism with exogenous subsidies has the same characterization as optimal mechanisms for the original problem.
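
    One way to formalize the school's objective as stated in the abstract (the notation $\pi$, $S$, $G$, and $C$ is ours, not the paper's): the principal chooses a stochastic admission rule $\pi$ mapping observed test scores to admission probabilities and solves

        \max_{\pi : S \to [0,1]} \; G \cdot \Pr[\text{admit} \mid \text{high skill}] \;-\; C \cdot \Pr[\text{admit} \mid \text{low skill}]

    where scores in $S$ reflect both skill and budget-limited effort, so the optimum must trade off admitting low-skill students who can afford high scores against rejecting high-skill students who cannot.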

    Interim research assessment 2003-2005 - Computer Science

    This report primarily serves as a source of information for the 2007 Interim Research Assessment Committee for Computer Science at the three technical universities in the Netherlands. The report also provides information for others interested in our research activities.

    Virtualization of Micro-architectural Components Using Software Solutions

    Cloud computing has become a dominant computing paradigm in the information technology industry due to its flexibility and efficiency in resource sharing and management. The key technology that enables cloud computing is virtualization. Essential requirements in a virtualized system, where several virtual machines (VMs) run on the same physical machine, include performance isolation and predictability. To enforce these properties, the virtualization software (called the hypervisor) must find a way to divide the physical resources of the system (e.g., physical memory, processor time) and allocate them to VMs according to the amount of virtual resources defined for each VM. However, modern hardware has complex architectures, and some micro-architectural resources such as processor caches, memory controllers, and interconnects cannot be divided and allocated to VMs. They are globally shared among all VMs, which compete for their use, leading to contention. Performance isolation and predictability are therefore compromised. In this thesis, we propose software solutions for preventing the performance unpredictability caused by micro-architectural components. The first contribution, called Kyoto, is a solution to the cache contention issue inspired by the polluter-pays principle. A VM is said to pollute the cache if it provokes significant cache replacements that impact the performance of other VMs. With the Kyoto system, the provider can encourage cloud users to book pollution permits for their VMs. The second contribution addresses the problem of efficiently virtualizing NUMA machines. The major challenge comes from the fact that the hypervisor regularly reconfigures the placement of a VM over the NUMA topology. However, neither guest operating systems (OSs) nor system runtime libraries (e.g., HotSpot) are designed to take NUMA topology changes into account at runtime, which exposes end-user applications to unpredictable performance. We present eXtended Para-Virtualization (XPV), a new principle to efficiently virtualize a NUMA architecture. XPV consists in revisiting the interface between the hypervisor and the guest OS, and between the guest OS and system runtime libraries, so that they can dynamically adapt to NUMA topology changes.
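
    A minimal C sketch of the XPV idea: the hypervisor notifies the guest when the virtual-to-physical NUMA mapping changes, and the guest forwards the event to runtime libraries that cache placement decisions. All interface and function names here are hypothetical; the thesis defines the actual hypervisor/guest/runtime interfaces.

        #include <stdio.h>

        #define MAX_VNODES 8

        /* Current mapping of guest (virtual) NUMA nodes to host (physical)
         * nodes, maintained by the (simulated) hypervisor. */
        static int vnode_to_pnode[MAX_VNODES];

        /* Callback registered by a guest-side runtime library. */
        typedef void (*topology_cb)(int vnode, int new_pnode);
        static topology_cb registered_cb;

        void xpv_register_topology_callback(topology_cb cb) {
            registered_cb = cb;
        }

        /* Hypervisor side (simulated): migrate a virtual node and notify
         * the guest; in a real system this would be an upcall or a
         * virtual interrupt rather than a direct function call. */
        void hypervisor_migrate_vnode(int vnode, int new_pnode) {
            vnode_to_pnode[vnode] = new_pnode;
            if (registered_cb)
                registered_cb(vnode, new_pnode);
        }

        /* Example runtime reaction: invalidate cached placement decisions. */
        static void allocator_on_topology_change(int vnode, int new_pnode) {
            printf("runtime: vnode %d now backed by pnode %d; "
                   "re-deriving memory placement\n", vnode, new_pnode);
        }

        int main(void) {
            xpv_register_topology_callback(allocator_on_topology_change);
            hypervisor_migrate_vnode(0, 3);  /* simulate a reconfiguration */
            return 0;
        }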

    Libro de Actas JCC&BD 2018 : VI Jornadas de Cloud Computing & Big Data

    This volume collects the papers presented at the VI Jornadas de Cloud Computing & Big Data (JCC&BD), held from June 25 to 29, 2018, at the Facultad de Informática of the Universidad Nacional de La Plata (UNLP).

    Building the knowledge base for environmental action and sustainability

    High Frequency Physiological Data Quality Modelling in the Intensive Care Unit

    Intensive care medicine is a resource-intensive environment in which technical and clinical decision making relies on rapidly assimilating a huge amount of categorical and time-series physiologic data. These signals are presented at variable frequencies and with variable quality. Intensive care clinicians rely on high-frequency measurements of the patient's physiologic state to assess critical illness and the response to therapies. Physiological waveforms have the potential to reveal details about the patient state at very fine resolution, and can assist, augment, or even automate decision making in intensive care. However, these high-frequency time-series physiologic signals pose many challenges for modelling. They contain noise, artefacts, and systematic timing errors, all of which can impact the quality and accuracy of the models being developed and the reproducibility of results. In this context, the central theme of this thesis is to model the process of data collection in an intensive care environment from a statistical, metrological, and biosignals-engineering perspective, with the aim of identifying, quantifying, and, where possible, correcting errors introduced by the data collection systems. Three aspects of physiological measurement were explored in detail: measurement of blood oxygenation, measurement of blood pressure, and measurement of time. A literature review of sources of errors and uncertainty in the timing systems used in intensive care units was undertaken. A signal alignment algorithm was developed and applied to approximately 34,000 patient-hours of electroencephalography and physiological waveforms collected simultaneously at the bedside by two different medical devices.
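
    As a rough illustration of signal alignment, the C sketch below estimates the lag between two simultaneously recorded signals by maximizing their cross-correlation over a window of candidate lags. This is one standard technique, assumed here for illustration; the thesis's alignment algorithm additionally has to cope with clock drift, artefacts, and missing data.

        #include <stdio.h>

        /* Correlation of x against y shifted by `lag` samples,
         * computed over the valid overlap only. */
        static double xcorr_at_lag(const double *x, const double *y,
                                   int n, int lag) {
            double s = 0.0;
            for (int i = 0; i < n; i++) {
                int j = i + lag;
                if (j >= 0 && j < n)
                    s += x[i] * y[j];
            }
            return s;
        }

        /* Return the lag in [-max_lag, max_lag] that best aligns y to x. */
        static int best_lag(const double *x, const double *y,
                            int n, int max_lag) {
            int best = 0;
            double best_s = xcorr_at_lag(x, y, n, 0);
            for (int lag = -max_lag; lag <= max_lag; lag++) {
                double s = xcorr_at_lag(x, y, n, lag);
                if (s > best_s) { best_s = s; best = lag; }
            }
            return best;
        }

        int main(void) {
            /* y is x delayed by 3 samples; the estimator reports lag 3. */
            double x[16] = {0,0,1,4,9,4,1,0,0,0,0,0,0,0,0,0};
            double y[16] = {0,0,0,0,0,1,4,9,4,1,0,0,0,0,0,0};
            printf("estimated lag: %d samples\n", best_lag(x, y, 16, 5));
            return 0;
        }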