Mapping traditional knowledge: Digital cartography in the Canadian north
Digital cartography offers exciting opportunities for recording indigenous knowledge, particularly in contexts where a people's relationship to the land has high cultural significance. Canada's north offers a useful case study of both the opportunities and challenges of such projects. Through the Geomatics and Cartographic Research Centre (GCRC), Inuit peoples have been invited to become partners in innovative digital mapping projects, including creating atlases of traditional place names, recording the patterns and movement of sea ice, and recording previously uncharted and often shifting traditional routes over ice and tundra. Such projects have generated interest in local communities because of their potential to record and preserve traditional knowledge and because they offer an attractive visual and multimedia interface that can address linguistic and cultural concerns. But given corporations' growing interest in the natural resources …
Legal issues in mapping traditional knowledge: Digital cartography in the Canadian north
Digital cartography offers great potential for mapping the traditional knowledge of indigenous communities. This is particularly so because of the close relationship between such knowledge and traditional lands. Yet the mapping of traditional knowledge also raises distinct legal and ethical considerations which should be at the forefront in the design and implementation of indigenous digital cartography projects. This paper examines these considerations through the lens of digital atlases jointly created by Inuit communities and Carleton University's Geomatics and Cartographic Research Centre (GCRC).
An Empirical Study of Operating Systems Errors
We present a study of operating system errors found by automatic, static, compiler analysis applied to the Linux and OpenBSD kernels. Our approach differs from previous studies that consider errors found by manual inspection of logs, testing, and surveys because static analysis is applied uniformly to the entire kernel source, though our approach necessarily considers a less comprehensive variety of errors than previous studies. In addition, automation allows us to track errors over multiple versions of the kernel source to estimate how long errors remain in the system before they are fixed. We found that device drivers have error rates up to three to seven times higher than the rest of the kernel. We found that the largest quartile of functions have error rates two to six times higher than the smallest quartile. We found that the newest quartile of files have error rates up to twice that of the oldest quartile, which provides evidence that code "hardens" over time. Finally, we found that bugs remain in the Linux kernel an average of 1.8 years before being fixed.
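As a loose illustration of what "applying a static check uniformly to an entire source tree" looks like, the toy C scanner below flags every call to one risky API and tallies hits per file. This is only a hedged sketch: the paper's actual system is a far more capable compiler-based analysis, and the rule, messages, and output format here are invented for illustration.

    /* Toy uniform static check: flag calls to a risky API (here strcpy)
     * in every file passed on the command line.  NOT the analysis system
     * the paper uses; it only shows the flavor of running one automated
     * rule over an entire source tree. */
    #include <stdio.h>
    #include <string.h>

    static int check_file(const char *path)
    {
        FILE *f = fopen(path, "r");
        char line[1024];
        int lineno = 0, hits = 0;

        if (!f) { perror(path); return 0; }
        while (fgets(line, sizeof line, f)) {
            lineno++;
            if (strstr(line, "strcpy(")) {     /* the single "rule" */
                printf("%s:%d: call to strcpy\n", path, lineno);
                hits++;
            }
        }
        fclose(f);
        return hits;
    }

    int main(int argc, char **argv)
    {
        int total = 0;
        for (int i = 1; i < argc; i++)
            total += check_file(argv[i]);
        printf("%d potential errors in %d files\n", total, argc - 1);
        return 0;
    }

Because the same rule runs over every file, per-directory hit counts can be compared directly, which is what makes claims like "drivers have higher error rates than the rest of the kernel" meaningful.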
Enforcing performance isolation across virtual machines in Xen
Virtual machines (VMs) have recently emerged as the basis for allocating resources in enterprise settings and hosting centers. One benefit of VMs in these environments is the ability to multiplex several operating systems on hardware based on dynamically changing system characteristics. However, such multiplexing must often be done while observing per-VM performance guarantees or service level agreements. Thus, one important requirement in this environment is effective performance isolation among VMs. In this paper, we address performance isolation across virtual machines in Xen [1]. For instance, while Xen can allocate fixed shares of CPU among competing VMs, it does not currently account for work done on behalf of individual VMs in device drivers. Thus, the behavior of one VM can negatively impact resources available to other VMs even if appropriate per-VM resource limits are in place. In this paper, we present the design and evaluation of a set of primitives implemented in Xen to address this issue. First, XenMon accurately measures per-VM resource consumption, including work done on behalf of a particular VM in Xen's driver domains. Next, our SEDF-DC scheduler accounts for aggregate VM resource consumption in allocating CPU. Finally, ShareGuard limits the total amount of resources consumed in privileged and driver domains based on administrator-specified limits. Our performance evaluation indicates that our mechanisms effectively enforce performance isolation for a variety of workloads and configurations.
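The C fragment below sketches the bookkeeping idea shared by XenMon, SEDF-DC, and ShareGuard: CPU time consumed in a driver domain on a guest's behalf is charged back to that guest and counted against its limit. All names (vm_account, charge_cpu, over_limit) and the flat per-period cap are hypothetical; this is not Xen code, only the accounting concept.

    /* Hedged sketch of driver-domain charge-back accounting.
     * Invented names; not Xen's actual SEDF-DC/ShareGuard code. */
    #include <stdio.h>
    #include <stdbool.h>

    struct vm_account {
        int id;
        long guest_us;    /* CPU time used directly by the guest          */
        long driver_us;   /* CPU time used in driver domains on its behalf */
        long limit_us;    /* administrator-specified cap per period        */
    };

    static void charge_cpu(struct vm_account *vm, long us, bool in_driver)
    {
        if (in_driver) vm->driver_us += us;  /* the key step: driver work */
        else           vm->guest_us  += us;  /* is not "free" to the VM   */
    }

    static bool over_limit(const struct vm_account *vm)
    {
        return vm->guest_us + vm->driver_us > vm->limit_us;
    }

    int main(void)
    {
        struct vm_account vm = { .id = 1, .limit_us = 10000 };
        charge_cpu(&vm, 6000, false);  /* guest computation        */
        charge_cpu(&vm, 5000, true);   /* network I/O done in dom0 */
        printf("VM%d over limit: %s\n", vm.id, over_limit(&vm) ? "yes" : "no");
        return 0;
    }

Without the in_driver branch, the second charge would be invisible to the scheduler, which is exactly how one I/O-heavy VM can degrade its neighbors despite per-VM CPU shares.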
Applying code specialization to FFT libraries for integral parameters
Code specialization is an approach that can be used to improve the sequence of optimizations performed by the compiler. The performance of code after specialization may vary, depending upon the structure of the application. For FFT libraries, specializing code for different parameter values may increase code size, thereby impacting the overall behavior of applications executing in environments with small instruction caches. In this article, we propose a new approach for specializing FFT code that can be effectively used to improve performance while limiting the growth in code size by incorporating dynamic specialization. Our approach makes use of static compile-time analysis and adapts a single version of code to multiple values through runtime specialization. This technique has been applied to different FFT libraries on the Itanium IA-64 platform using the icc compiler v9.0. For highly efficient libraries, we are able to achieve speedups of more than 80% with a small increase in code size.
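A minimal C sketch of specialization on an integral parameter follows: a generic routine plus a clone whose trip count is a compile-time constant, selected by a runtime guard. The paper's contribution goes further, adapting a single specialized version to multiple runtime values; the function names and the hot size 1024 here are assumptions chosen for illustration only.

    /* Hedged sketch of integral-parameter specialization: one generic
     * routine, one clone with a constant trip count, and a runtime
     * guard to dispatch between them.  Illustrative names and sizes. */
    #include <stddef.h>

    static void scale_generic(float *x, size_t n, float s)
    {
        for (size_t i = 0; i < n; i++)
            x[i] *= s;
    }

    /* Specialized clone: n fixed at 1024, a hot FFT size. */
    static void scale_1024(float *x, float s)
    {
        for (size_t i = 0; i < 1024; i++)  /* trip count known: unrollable */
            x[i] *= s;
    }

    void scale(float *x, size_t n, float s)
    {
        if (n == 1024)                 /* runtime guard selects the clone */
            scale_1024(x, s);
        else
            scale_generic(x, n, s);
    }

Because the trip count in scale_1024 is a literal, the compiler can fully unroll and vectorize it, which is the kind of optimization specialization is meant to unlock; the cost is one extra compare per call plus one clone's worth of code size, which is why limiting the number of clones matters on small instruction caches.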