
    KALwEN: a new practical and interoperable key management scheme for body sensor networks

    Key management is the pillar of a security architecture. Body sensor networks (BSNs) pose several challenges, some inherited from wireless sensor networks (WSNs) and some unique to BSNs, that require a tailor-made key management scheme. The challenge is taken on, and the result is KALwEN, a new parameterized key management scheme that combines the best-suited cryptographic techniques in a seamless framework. KALwEN is user-friendly in the sense that it requires no expert knowledge from the user, who only has to follow a simple set of instructions when bootstrapping or extending a network. One of KALwEN's key features is that it allows sensor devices from different manufacturers, which typically share no pre-established secret, to establish secure communications with each other. KALwEN is decentralized, in that it does not rely on the availability of a local processing unit (LPU). KALwEN supports secure global broadcast, local broadcast, and local (neighbor-to-neighbor) unicast, while preserving past key secrecy and future key secrecy (FKS). The cryptographic protocols of KALwEN have also been formally verified. With both formal verification and experimental evaluation, our results should appeal to theorists and practitioners alike.
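
    The key-secrecy properties mentioned in this abstract can be pictured with a generic one-way key-update chain. The sketch below is only an illustration of past and future key secrecy in general, not KALwEN's actual protocol; the update rule, key sizes and nonce delivery are assumptions.

```python
# Generic illustration (not KALwEN): a one-way key update gives past key
# secrecy (a compromised current key does not reveal earlier keys), and mixing
# in fresh randomness at each update gives future key secrecy (an earlier
# compromise does not reveal later keys, provided the nonce stays secret).
import hashlib
import os

def update_key(current_key: bytes, fresh_nonce: bytes) -> bytes:
    # One-way derivation: earlier keys cannot be recovered from the output,
    # and without the fresh nonce a past compromise does not yield the next key.
    return hashlib.sha256(current_key + fresh_nonce).digest()

key = os.urandom(32)                       # hypothetical initial group key
for epoch in range(3):
    nonce = os.urandom(16)                 # assumed to reach group members securely
    key = update_key(key, nonce)
    print(f"epoch {epoch}: key fingerprint {key.hex()[:16]}")
```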

    Decent work in construction and the role of local authorities: the case of Bulawayo City, Zimbabwe.

    The role of local authorities in promoting decent work is little understood and has been absent from both policy and practice (GIAN, 2005). The purpose of this interdisciplinary study was to identify and describe the existing and potential roles of Bulawayo City in fostering decent work in the construction sector, urban development and related services through policy making, strategic planning and project activities. The study outcomes will contribute to the shared knowledge among local authorities and other stakeholders at the local and international levels. Bulawayo is Zimbabwe’s second largest urban settlement, with a 2002 population close to 700 000, i.e. 6% of the national population or 20% of the urban population (CSO, 2002:21), and a budget of Z$619 million in 1993/94 (Ndubiwa and Hamilton, 1994), Z$2.5 billion in 2000 and Z$797 billion in 2005. The research team collected national and local level secondary data on decent work variables with a view to compiling decent work indicators to help compare Bulawayo City against national and global conditions. Such data was sought from the Central Statistical Office (CSO), the National Social Security Authority (NSSA), employer and worker organisations, construction firms, research institutions and Bulawayo City itself. Key informants in all these institutions were interviewed using a semi-structured questionnaire, and grey literature related to decent work was identified and collected where feasible. While Zimbabwe is not ‘statistics poor’, the statistics collected from the institutions cited above are not in formats suitable for answering decent work questions. The political-economic crisis in the country, and in particular the government’s frosty relations with the UK, the EU, the USA and the white Commonwealth (GoZ, 2005: 25c), have compounded conditions of insecurity for most institutions and individuals, making even the release to outsiders of routine administrative information for research purposes a sensitive affair. Increasingly, key informants were not prepared to release information unless there was a direct financial benefit to themselves or their organisations. It is in this context of economic crisis and tense relations that some in the west have expressed doubts regarding the accuracy of employment, economic and population statistics, alleging that these are manipulated to suit the ruling party. Further, high population movements and the ‘informalisation’ of the economy since the mid-1990s have left significant socio-economic activities outside the data frameworks of institutions such as the CSO and NSSA. The lack of informal sector data is thus the main limitation of this study. The above obstacles notwithstanding, the study compiled reasonable information, with detailed data on social security, social dialogue, health and safety, and Bulawayo City’s efforts at strategic planning and local economic development. The term ‘decent work’ was neither known nor used by a majority of the key informants in this study. In general, while the statutory provisions for decent work promotion are sound, in practice the economic crisis has compromised efforts to create and stabilise employment, poisoned the climate of social dialogue, eroded the value of pensions and benefits and heightened the risks of accidents at work.
Except in its areas of direct jurisdiction, Bulawayo City has not played significant roles in promoting social dialogue and social security, issues that are the domain of national authorities. But it has been exemplary in its strategic planning efforts, partnerships, promotion of equality and indigenisation, employment creation, training and education. Employment conditions in Bulawayo are characterised by an acute economic crisis that has led to decreasing numbers of jobs since the 1990s in many sectors, including construction. The informal sector, which had created many jobs during this period, is struggling to survive and was disrupted by the 2005 government operation to clear informal enterprises and settlements.

    Scalable Reliable SD Erlang Design

    This technical report presents the design of Scalable Distributed (SD) Erlang: a set of language-level changes that aims to enable Distributed Erlang to scale for server applications on commodity hardware with at most 100,000 cores. We cover a number of aspects, specifically the anticipated architecture, anticipated failures, scalable data structures, and scalable computation. Two other components that guided the design of SD Erlang are the design principles and typical Erlang applications. The design principles summarise the types of modification through which we aim to make Erlang scalable. The Erlang exemplars help us to identify the main Erlang scalability issues and to hypothetically validate the SD Erlang design.
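
    As a back-of-the-envelope illustration of the scaling pressure behind such a design (this calculation is not from the report): Distributed Erlang by default keeps its nodes fully connected, so the total number of connections grows quadratically with the node count, whereas partitioning nodes into smaller fully connected groups keeps it roughly linear. The group size below is an arbitrary assumption.

```python
# Connection counts for a fully connected node network vs. a grouped one.
def full_mesh_connections(nodes: int) -> int:
    # Every node connects to every other node.
    return nodes * (nodes - 1) // 2

def grouped_connections(nodes: int, group_size: int) -> int:
    # Nodes partitioned into fully connected groups; cross-group links omitted.
    groups = nodes // group_size
    return groups * full_mesh_connections(group_size)

for n in (100, 1_000, 10_000):
    print(n, full_mesh_connections(n), grouped_connections(n, group_size=50))
```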

    A study of distributed clustering of vector time series on the grid by task farming

    Traditional data mining methods were limited by the availability of computing resources such as network bandwidth, storage space and processing power. These algorithms were developed to work around this problem by looking at a small cross-section of the whole data available. However, since a major chunk of the data is left out, the predictions were generally inaccurate and missed significant features that were part of the data. Today, with resources growing at almost the same pace as data, it is possible to rethink mining algorithms to work on distributed resources and essentially distributed data. Distributed data mining thus holds great promise. Using grid technologies, data mining can be extended to areas which were previously out of reach because of the volume of data being generated, such as climate modeling and web usage. An important characteristic of data today is that it is highly decentralized and mostly redundant. Data mining algorithms which can make efficient use of distributed data therefore have to be devised. Though it is possible to bring all the data together and run traditional algorithms, this has a high overhead, both in bandwidth usage for transmission and in preprocessing steps that must handle every format of the received data. By processing the data locally, the preprocessing stage can be made less bulky, and traditional data mining techniques would also be able to work on the data efficiently. The focus of this project is to use an existing data mining technique, fuzzy c-means clustering, on distributed data in a simulated grid environment and to review the performance of this approach against the traditional approach.
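
    As an illustration of the local-processing idea described above, the sketch below runs fuzzy c-means in a distributed style: each simulated worker computes partial sums over its own data chunk and a master combines them to update the cluster centres. It is a minimal NumPy sketch of the general technique, not the project's actual grid implementation; the data, chunking and parameters are assumptions.

```python
# Task-farm style fuzzy c-means: workers compute partial sums on local chunks,
# the master combines them into new cluster centres.
import numpy as np

def local_partial_sums(chunk, centres, m=2.0):
    # Distances from each local point to each centre (n_local x c).
    d = np.linalg.norm(chunk[:, None, :] - centres[None, :, :], axis=2)
    d = np.fmax(d, 1e-12)                          # guard against zero distances
    # Fuzzy memberships u[i, k] = 1 / sum_j (d[i, k] / d[i, j]) ** (2 / (m - 1)).
    u = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0)), axis=2)
    um = u ** m
    # This worker's contribution to the centre update (numerator, denominator).
    return um.T @ chunk, um.sum(axis=0)

def combine(partials):
    # Master step: sum the partial results and form the new centres.
    num = sum(p[0] for p in partials)
    den = sum(p[1] for p in partials)
    return num / den[:, None]

rng = np.random.default_rng(0)
data = rng.normal(size=(1000, 3))                  # toy stand-in for distributed data
chunks = np.array_split(data, 4)                   # each chunk plays the role of a grid node
centres = data[rng.choice(len(data), size=3, replace=False)]
for _ in range(20):                                # fixed iteration count for brevity
    centres = combine([local_partial_sums(c, centres) for c in chunks])
print(centres)
```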

    Dispatch: distributed peer-to-peer simulations

    Recently there has been an increasing demand for efficient mechanisms for carrying out computations that exhibit coarse-grained parallelism. Examples of this class of problems include simulations involving Monte Carlo methods, computations where numerous similar but independent tasks are performed to solve a large problem, and any solution that relies on ensemble averages, where a simulation is run under a variety of initial conditions whose results are then combined. With the ever-increasing complexity of such applications, large amounts of computational power are required over a long period of time, and satisfying this demand by deploying specialized hardware is subject to tight economic constraints. We address this issue in Dispatch, a peer-to-peer framework for sharing computational power. In contrast to grid computing and other institution-based CPU-sharing systems, Dispatch targets an open environment, one that is accessible to all users and does not require any sort of membership or accounts; any machine connected to the Internet can be part of the framework. Dispatch allows dynamic and decentralized organization of these computational resources. It empowers users to utilize heterogeneous computational resources spread across geographic and administrative boundaries to run their tasks in parallel. As a first step, we address a number of challenging issues involved in designing such distributed systems: forming a decentralized and scalable network of computational resources, finding a sufficient number of idle CPUs in the network for participants, allocating simulation tasks optimally so as to reduce computation time, allowing new participants to join the system and run their tasks irrespective of their geographical location, letting users interact with their tasks (pausing, resuming, stopping) in real time, and implementing security features to prevent malicious users from compromising the network and remote machines. As a second step, we evaluate the performance of Dispatch on a large-scale network consisting of 10 to 130 machines. For one particular simulation, we were able to achieve up to 1,500 million iterations per second, as compared to 10 million iterations per second on one machine. We also test Dispatch over a wide-area network where it is deployed on machines that are geographically apart and belong to different domains.
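
    The coarse-grained workload shape described above (many independent tasks whose results are combined) can be sketched with a toy Monte Carlo estimate of pi run in a local process pool. This is only an illustration of the pattern Dispatch targets, not Dispatch itself; the process pool, task sizes and seeds are assumptions standing in for remote peers.

```python
# Embarrassingly parallel task farm: independent Monte Carlo tasks, combined
# at the end into a single estimate. A local process pool stands in for the
# remote peers a peer-to-peer framework would use.
import random
from concurrent.futures import ProcessPoolExecutor

def monte_carlo_task(seed, samples=200_000):
    # One independent task: count random points inside the unit quarter-circle.
    rng = random.Random(seed)
    hits = 0
    for _ in range(samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            hits += 1
    return hits, samples

if __name__ == "__main__":
    seeds = range(8)                               # one task per set of initial conditions
    with ProcessPoolExecutor() as pool:            # local stand-in for remote peers
        results = list(pool.map(monte_carlo_task, seeds))
    hits = sum(h for h, _ in results)
    total = sum(n for _, n in results)
    print("estimated pi:", 4.0 * hits / total)     # combine the ensemble of task results
```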