96 research outputs found

    Optimising lower layers of the protocol stack to improve communication performance in a wireless temperature sensor network

    The function of a wireless sensor network is to monitor events or gather information and report it to a sink node, a central location or a base station. The information must be transmitted through the network efficiently. Wireless communication is the main consumer of energy in wireless sensor networks, through idle listening, overhearing, interference and collisions. Because nodes die once their batteries are exhausted, it is essential to limit energy usage while maintaining communication between the sensor nodes and the sink node. Thus, conserving energy in a wireless sensor network is of utmost importance. Numerous methods to decrease energy expenditure and extend the lifetime of the network have been proposed. Researchers have devised methods to efficiently utilise the limited energy available to wireless sensor networks by optimising design parameters and protocols. Cross-layer optimisation is one approach that has been employed to improve wireless communication: the essence of a cross-layer scheme is to optimise the exchange and control of data between two or more layers to improve efficiency. The number of transmissions is therefore a vital element in evaluating overall energy usage. In this dissertation, a Markov Chain model was employed to analyse the tuning of two layers of the protocol stack, namely the Physical Layer (PHY) and the Media Access Control (MAC) layer, to find possible energy gains. The study was conducted using the IEEE 802.11 channel with the SensorMAC (SMAC) and Slotted-Aloha (S-Aloha) medium access protocols in a star-topology Wireless Temperature Sensor Network (WTSN). The research explored the prospective energy gains that could be realised by optimising the Forward Error Correction (FEC) rate. Different Reed-Solomon codes were analysed to explore the effect of protocol tuning, namely transmission power, modulation method and channel access, on energy efficiency. The case where no FEC code was used was analysed as the control condition. A MATLAB simulation model was used to gather statistics on collisions, the total number of packets transmitted and the total number of slots used during the transmission phase. The bit error probabilities computed analytically were used in the simulation model to determine the probability of successfully transmitting data in the physical layer. The analytical values and the simulation results were compared to corroborate the correctness of the models. The results indicate that energy gains can be achieved by the suggested layer tuning approach. Electrical and Mining Engineering, M. Tech. (Electrical Engineering)
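    The abstract does not give the exact code parameters, modulation settings or link budget, so the following sketch only illustrates the kind of physical-layer calculation described above: starting from an assumed BPSK bit error probability over an AWGN channel, it estimates the probability that a Reed-Solomon RS(n, k) codeword is decoded correctly and compares it with uncoded transmission. The Eb/N0 values, payload size and RS(255, 223) parameters are hypothetical, not taken from the dissertation.

```python
# Hedged sketch: packet success probability with and without Reed-Solomon FEC,
# assuming BPSK over AWGN. All parameters are illustrative.
import math

def bpsk_bit_error_prob(ebn0_db: float) -> float:
    """Bit error probability for BPSK over AWGN: Q(sqrt(2*Eb/N0))."""
    ebn0 = 10 ** (ebn0_db / 10)
    return 0.5 * math.erfc(math.sqrt(ebn0))

def rs_block_success_prob(p_bit: float, n: int, k: int, m: int = 8) -> float:
    """Probability that an RS(n, k) codeword over GF(2^m) decodes correctly.

    The code corrects up to t = (n - k) // 2 symbol errors; a symbol is in
    error if any of its m bits is flipped (independent bit errors assumed).
    """
    p_sym = 1 - (1 - p_bit) ** m
    t = (n - k) // 2
    return sum(
        math.comb(n, i) * p_sym**i * (1 - p_sym) ** (n - i)
        for i in range(t + 1)
    )

if __name__ == "__main__":
    for ebn0_db in (2.0, 4.0, 6.0):
        p_bit = bpsk_bit_error_prob(ebn0_db)
        uncoded = (1 - p_bit) ** (223 * 8)        # 223-byte payload, no FEC
        coded = rs_block_success_prob(p_bit, n=255, k=223)
        print(f"Eb/N0={ebn0_db} dB  p_bit={p_bit:.2e}  "
              f"uncoded={uncoded:.4f}  RS(255,223)={coded:.4f}")
```

    Weighing the improved success probability against the energy spent transmitting the extra parity symbols and retransmitting failed packets is, in essence, the trade-off the dissertation evaluates for each tuning configuration.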

    Digital Library Services for Three-Dimensional Models

    With the growth in computing, storage and networking infrastructure, it is becoming increasingly feasible for multimedia professionals, such as graphic designers in commercial, manufacturing, scientific and entertainment areas, to work with 3D digital models of the objects they deal with in their domain. Unfortunately, most of these models exist in individual repositories and are not accessible to geographically distributed professionals who need them. Building an efficient digital library system presents a number of challenges. In particular, the following issues need to be addressed: (1) What is the best way of representing 3D models in a digital library so that searches can be done faster? (2) How can 3D models be compressed and delivered to reduce storage and bandwidth requirements? (3) How can we represent the user's view of the similarity between two objects? (4) What search types can be used to enhance the usability of the digital library, how can these searches be implemented, and what are the trade-offs? In this research, we have developed a digital library architecture for 3D models that addresses the above issues as well as other technical issues. We have developed a prototype for our 3D digital library (3DLIB) that supports compressed storage along with retrieval of 3D models. The prototype also supports search and discovery services targeted at 3D models. The key to 3DLIB is a representation of a 3D model based on “surface signatures”. This representation captures the shape information of any free-form surface and encodes it into a set of 2D images. We have developed a shape similarity search technique that uses the signature images to compare 3D models. One advantage of the proposed technique is that it works in the compressed domain, thus eliminating the need for decompression in content-based search. Moreover, we have developed an efficient discovery service consisting of a multi-level hierarchical browsing service that enables users to navigate large sets of 3D models. To implement this targeted browsing (finding an object similar to a given object in a large collection through browsing), we abstract a large set of 3D models into a small set of representative models (key models). The abstraction is based on shape similarity and uses specially tailored clustering techniques. The browsing service applies clustering recursively to limit the number of key models that a user views at any time. We have evaluated the performance of our digital library services using the Princeton Shape Benchmark (PSB), and they show significantly better precision and recall than other approaches.
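    The abstract describes the hierarchical browsing service only at a high level, so the sketch below is a hypothetical illustration of the recursive-clustering idea rather than 3DLIB's actual algorithm: feature vectors (random stand-ins for surface-signature descriptors) are clustered with k-means, the model closest to each centroid is presented as a key model, and drilling into a key model re-clusters only its own cluster. The descriptor dimensionality and branching factor are assumptions.

```python
# Hedged sketch of hierarchical "key model" browsing via recursive k-means.
# Feature vectors stand in for surface-signature descriptors; this is not the
# actual 3DLIB pipeline, only the clustering idea described in the abstract.
import numpy as np
from sklearn.cluster import KMeans

def key_models(features: np.ndarray, ids: list, branching: int = 5):
    """Cluster the collection and return (key_id, member_ids) per cluster."""
    k = min(branching, len(ids))
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(features)
    groups = []
    for c in range(k):
        members = [i for i, lbl in enumerate(km.labels_) if lbl == c]
        # Key model = member closest to the cluster centroid.
        dists = np.linalg.norm(features[members] - km.cluster_centers_[c], axis=1)
        key = members[int(np.argmin(dists))]
        groups.append((ids[key], [ids[m] for m in members]))
    return groups

# Example: 200 hypothetical models with 64-dimensional signature descriptors.
rng = np.random.default_rng(0)
feats = rng.normal(size=(200, 64))
names = [f"model_{i:03d}" for i in range(200)]

top_level = key_models(feats, names)           # first browsing screen
key_id, members = top_level[0]
idx = [names.index(m) for m in members]
next_level = key_models(feats[idx], members)   # drill into the first cluster
print(key_id, len(members), [k for k, _ in next_level])
```

    Recursing in this way keeps the number of key models shown at each browsing level bounded by the branching factor, which is the property the abstract attributes to the discovery service.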

    Reducing the Overhead of Memory Space, Network Communication and Disk I/O for Analytic Frameworks in Big Data Ecosystem

    To facilitate big data processing, many distributed analytic frameworks and storage systems, such as Apache Hadoop, Apache Hama, Apache Spark and the Hadoop Distributed File System (HDFS), have been developed. Much current research aims either to make them more scalable or to enable them to support more analysis applications. My PhD study comprises three main works in this area: minimizing communication delay in Apache Hama, minimizing memory space and computational overhead in HDFS, and minimizing disk I/O overhead for approximation applications in the Hadoop ecosystem. Specifically, in Apache Hama, communication delay makes up a large percentage of the overall graph processing time. While most recent research has focused on reducing the number of network messages, we add a runtime communication and computation scheduler to overlap them as much as possible; as a result, communication delay can be mitigated. In HDFS, the block location table and its corresponding maintenance can occupy more than half of the memory space and 30% of the processing capacity of the master node, severely limiting the master node's scalability and performance. We propose Deister, which uses deterministic mathematical calculations to eliminate the huge block location table and its corresponding maintenance. My third work enables both efficient and accurate approximations on arbitrary sub-datasets of a large dataset. Existing offline-sampling-based approximation systems are not adaptive to dynamic query workloads, while online-sampling-based systems suffer from low I/O efficiency and poor estimation accuracy. We therefore develop a distribution-aware method called Sapprox. Our idea is to collect, at very small cost, the occurrences of a sub-dataset in each logical partition of a dataset (its storage distribution) across the distributed system, and to use this information to facilitate online sampling.
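    The abstract sketches the Sapprox idea (use per-partition occurrence counts of a sub-dataset to drive online sampling) without giving the estimator, so the following is a hypothetical illustration using cluster sampling with probability proportional to occurrences and a Hansen-Hurwitz estimate. The synthetic data, partition count and aggregate are assumptions, not the actual Sapprox implementation.

```python
# Hedged sketch of distribution-aware sampling in the spirit of the Sapprox
# description: partitions are sampled with probability proportional to how
# many records of the target sub-dataset they contain, and a Hansen-Hurwitz
# estimator recovers an unbiased total from the sampled partitions only.
import random

random.seed(7)

# Synthetic dataset: 50 logical partitions, each a list of (category, value).
partitions = [
    [("A" if random.random() < 0.2 else "B", random.uniform(0, 10))
     for _ in range(1000)]
    for _ in range(50)
]
target = "A"  # the sub-dataset we want to approximate over

# Cheap to collect and maintain: per-partition occurrence counts of the sub-dataset.
occurrences = [sum(1 for cat, _ in p if cat == target) for p in partitions]
total_occ = sum(occurrences)
probs = [o / total_occ for o in occurrences]

def approx_sum(sample_size: int) -> float:
    """Estimate sum(value) over the sub-dataset by reading only sampled partitions."""
    chosen = random.choices(range(len(partitions)), weights=probs, k=sample_size)
    estimates = []
    for j in chosen:
        partial = sum(v for cat, v in partitions[j] if cat == target)
        estimates.append(partial / probs[j])      # Hansen-Hurwitz term
    return sum(estimates) / sample_size

exact = sum(v for p in partitions for cat, v in p if cat == target)
print(f"exact={exact:.1f}  approx(10 partitions)={approx_sum(10):.1f}")
```

    Because partitions rich in the target sub-dataset are sampled more often and each sampled contribution is divided by its selection probability, the estimate stays unbiased while only a handful of partitions are actually read, which mirrors the I/O saving the abstract aims for.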

    Research on Improving Reliability, Energy Efficiency and Scalability in Distributed and Parallel File Systems

    With the increasing popularity of cloud computing and Big Data applications, current data centers are often required to manage petabytes or exabytes of data. To store this huge amount of data, thousands or tens of thousands of storage nodes are required at a single site. This imposes three major challenges on storage system designers: (1) Reliability: node failure in these data centers is a normal occurrence rather than a rare situation, which makes data reliability a great concern. (2) Energy efficiency: a data center can consume up to 100 times more energy than a standard office building, and more than 10% of this consumption can be attributed to storage systems, so reducing the energy consumption of the storage system is key to reducing the overall consumption of the data center. (3) Scalability: with the continuously increasing size of data, maintaining the scalability of the storage system is essential; that is, expansion should be completed efficiently and without limits on the total number of storage nodes or on performance. This thesis proposes three ways to improve these three key features of current large-scale storage systems. Firstly, we define the problem of reverse lookup, namely finding the list of objects (blocks) stored on a failed node. As the first step of failure recovery, this process is directly related to the recovery/reconstruction time. Existing solutions use metadata traversal or data-distribution reversing methods for reverse lookup, which are either time-consuming or expensive, whereas a deterministic block placement can achieve fast and efficient reverse lookup. However, existing deterministic placement solutions are designed for centralized, small-scale storage architectures such as RAID; due to their lack of scalability, they cannot be directly applied to large-scale storage systems. In this thesis, we propose Group-Shifted Declustering (G-SD), a deterministic data layout for multi-way replication. G-SD addresses the scalability issue of our previous Shifted Declustering layout and supports fast and efficient reverse lookup. Secondly, we define the problem of balancing performance, energy and recovery in degradation mode for an energy-efficient storage system. While extensive research has proposed trading off performance for energy efficiency in normal mode, the system enters degradation mode when a node failure occurs and node reconstruction is initiated. This process requires a number of disks to be spun up and consumes a substantial amount of I/O bandwidth, which compromises not only energy efficiency but also performance. Because they do not consider the I/O bandwidth contention between recovery and performance, current energy-proportional solutions cannot answer this question accurately. This thesis presents PERP, a mathematical model that minimizes the energy consumption of a storage system with respect to performance and recovery constraints; PERP answers the question by providing the required number of active nodes and the recovery bandwidth to assign at each time frame. Thirdly, current distributed file systems, such as the Google File System (GFS) and the Hadoop Distributed File System (HDFS), employ a pseudo-random method for replica distribution and a centralized lookup table (block map) to record all replica locations. This lookup table requires a large amount of memory and consumes a considerable amount of CPU/network resources on the metadata server. With the booming size of Big Data, the metadata server becomes a scalability and performance bottleneck. While current approaches such as HDFS Federation attempt to extend scalability horizontally by allowing multiple metadata servers, we believe a more promising optimization option is to vertically scale up each metadata server. We propose Deister, a novel block management scheme built on top of a deterministic declustering distribution method, Intersected Shifted Declustering (ISD), so that both replica distribution and location lookup can be achieved without a centralized lookup table.
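    The thesis names Shifted Declustering, G-SD and ISD, but the abstract does not spell out their placement formulas, so the sketch below uses a generic shift-based modular placement purely to illustrate the property being exploited: when each block's replica locations are a deterministic function of its ID, the set of blocks on a failed node (the reverse lookup) can be recomputed on demand instead of being kept in a centralized block map. The node count, replication factor and placement formula are hypothetical.

```python
# Hedged sketch: a toy deterministic, shift-based placement and its reverse
# lookup. This is NOT the actual (Group-)Shifted Declustering or ISD layout,
# only an illustration of computing block locations instead of storing them.
NUM_NODES = 11       # hypothetical cluster size (odd, so 3 shifted replicas stay distinct)
REPLICATION = 3      # replicas per block

def place(block_id: int) -> list[int]:
    """Replica locations as a pure function of the block ID."""
    base = block_id % NUM_NODES
    shift = 1 + (block_id // NUM_NODES) % (NUM_NODES - 1)   # never shift by 0
    return [(base + r * shift) % NUM_NODES for r in range(REPLICATION)]

def reverse_lookup(failed_node: int, num_blocks: int) -> list[int]:
    """Blocks hosted on a failed node, recomputed without any block map."""
    return [b for b in range(num_blocks) if failed_node in place(b)]

if __name__ == "__main__":
    num_blocks = 100_000
    lost = reverse_lookup(failed_node=5, num_blocks=num_blocks)
    # Sanity check: every block keeps REPLICATION distinct replica locations.
    assert all(len(set(place(b))) == REPLICATION for b in range(num_blocks))
    print(f"node 5 held {len(lost)} block replicas; first few: {lost[:5]}")
```

    A real layout such as G-SD or ISD must additionally preserve load balance and remain valid as the cluster grows, which is the scalability gap the thesis addresses; the sketch only shows why no per-block location table is needed.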

    NASA space station automation: AI-based technology review

    Research and Development projects in automation for the Space Station are discussed. Artificial Intelligence (AI) based automation technologies are planned to enhance crew safety through a reduced need for EVA, increase crew productivity through the reduction of routine operations, increase Space Station autonomy, and augment Space Station capability through the use of teleoperation and robotics. AI technology will also be developed for the servicing of satellites at the Space Station, system monitoring and diagnosis, space manufacturing, and the assembly of large space structures.

    Computer Science Principles with Python

    This textbook is intended to be used for a first course in computer science, such as the College Board’s Advanced Placement course known as AP Computer Science Principles (CSP). This book includes all the topics on the CSP exam, plus some additional topics. It takes a breadth-first approach, with an emphasis on the principles which form the foundation for hardware and software. No prior experience with programming should be required to use this book. This version of the book uses the Python programming language.

    Computer Science Principles with C++

    This textbook is intended to be used for a first course in computer science, such as the College Board’s Advanced Placement course known as AP Computer Science Principles (CSP). This book includes all the topics on the CSP exam, plus some additional topics. It takes a breadth-first approach, with an emphasis on the principles which form the foundation for hardware and software. No prior experience with programming should be required to use this book. This version of the book uses the C++ programming language.