
    Self-organising an indoor location system using a paintable amorphous computer

    No full text
    This thesis investigates new methods for self-organising a precisely defined pattern of intertwined number sequences which may be used in the rapid deployment of a passive indoor positioning system's infrastructure. A future hypothetical scenario is used where computing particles are suspended in paint and applied over a ceiling. A spatial pattern is then formed over the covered ceiling. Any small portion of the spatial pattern may be decoded by a simple camera-equipped device to provide a unique location, supporting location-aware pervasive computing applications. Such a pattern is established from the interactions of many thousands of locally connected computing particles that are disseminated randomly and densely over a surface, such as a ceiling. Each particle initially has no knowledge of its location or network topology and shares no synchronous clock or memory with any other particle. The challenge addressed within this thesis is how such a network of computing particles that begin in such an initial state of disarray and ignorance can, without outside intervention or expensive equipment, collaborate to create a relative coordinate system. It shows how the coordinate system can be created to be coherent, even in the face of obstacles, and to closely represent the actual shape of the networked surface itself. The precision errors incurred during the propagation of the coordinate system are identified, and the distributed algorithms used to avoid this error are explained and demonstrated through simulation. A new perimeter detection algorithm is proposed that discovers network edges and other obstacles without the use of any existing location knowledge. A new distributed localisation algorithm is demonstrated to propagate a relative coordinate system throughout the network and remain free of the error introduced by the network perimeter that is normally seen in non-convex networks. This localisation algorithm operates without prior configuration or calibration, allowing the coordinate system to be deployed without expert manual intervention or on networks that are otherwise inaccessible. The painted ceiling's spatial pattern, when based on the proposed localisation algorithm, is discussed in the context of an indoor positioning system
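    The thesis's own localisation and perimeter-detection algorithms are not reproduced here, but the general idea of deriving relative coordinates from purely local interactions can be illustrated with a minimal hop-count-gradient sketch in Python. Everything below (particle count, communication radius, choice of reference particles) is an assumption for illustration only.

        # Illustrative sketch: randomly scattered particles that only know their
        # immediate neighbours flood hop-count gradients from a few reference
        # particles; hop count * radio range approximates distance, from which a
        # relative coordinate system could be multilaterated.
        import random
        from collections import deque

        random.seed(1)
        R = 0.08                                   # assumed communication radius (unit square)
        pts = [(random.random(), random.random()) for _ in range(2000)]

        # neighbour lists: each particle only knows who it can hear locally
        nbrs = [[] for _ in pts]
        for i, (xi, yi) in enumerate(pts):
            for j in range(i + 1, len(pts)):
                xj, yj = pts[j]
                if (xi - xj) ** 2 + (yi - yj) ** 2 <= R * R:
                    nbrs[i].append(j)
                    nbrs[j].append(i)

        def hop_gradient(seed):
            """Breadth-first flood: every particle records its hop count from the seed."""
            hops = [None] * len(pts)
            hops[seed] = 0
            q = deque([seed])
            while q:
                u = q.popleft()
                for v in nbrs[u]:
                    if hops[v] is None:
                        hops[v] = hops[u] + 1
                        q.append(v)
            return hops

        seeds = [0, 1, 2]                          # hypothetical reference particles
        grads = [hop_gradient(s) for s in seeds]
        i = 1234                                   # any particle: rough distances to each seed
        print([None if g[i] is None else round(g[i] * R, 2) for g in grads])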

    Multifunctional wearable epidermal device for physiological signal monitoring in sleep study

    Get PDF
    Sleep is an essential part of life, and thousands of people suffer from different kinds of sleep disorders. Clinical diagnosis and treatment of such disorders are costly, painful and quite slow. To meet this demand, many commercial products have entered the market to encourage home-based sleep studies using portable devices. These portable devices are limited in use, cannot be handled easily and are quite costly. Advancements in technology have miniaturized these portable devices into wearable devices to make them convenient and economical. An elastic, soft and thin silicone membrane with physical properties well matched to those of the epidermis provides conformal and robust contact with the skin. Integrating elastic and flexible electronics onto such a membrane provides an epidermal electronic system (EES) that can enhance robustness in electrophysiological signal measurement. Biocompatibility and non-invasive contact with the skin are advantages of this class of technology that lie beyond those available with conventional, point-contact electrode interfaces to the skin. Recording of various long-term physiological signals relevant to sleep studies can be performed using this multifunctional device. Optimized designs of the EES for monitoring various physiological signals such as surface electroencephalography (EEG), electrooculography (EOG) and electromyography (EMG) are presented in this project --Abstract, page iii

    RF Coverage Planning And Analysis With Adaptive Cell Sectorization In Millimeter Wave 5G Networks

    Get PDF
    The advancement of Fifth Generation Network (5G) technology is well underway, with Mobile Network Operators (MNOs) globally commencing the deployment of 5G networks within the mid-frequency spectrum range (3GHz–6GHz). Nevertheless, the escalating demands for data traffic are compelling MNOs to explore the high-frequency spectrum (24GHz–100GHz), which offers significantly larger bandwidth (400MHz–800MHz) compared to the mid-frequency spectrum (3GHz–6GHz), which typically provides 50MHz–100MHz of bandwidth. However, it is crucial to note that the higher-frequency spectrum imposes substantial challenges due to exceptionally high free space propagation loss, resulting in 5G cell site coverage being limited to several hundred meters, in contrast to the several kilometers achievable with 4G. Consequently, MNOs are faced with the formidable task of accurately planning and deploying hundreds of new 5G cells to cover the same areas served by a single 4G cell. This dissertation embarks on a comprehensive exploration of Radio Frequency (RF) coverage planning for 5G networks, initially utilizing a conventional three-sector cell architecture. The coverage planning phase reveals potential challenges, including coverage gaps and poor Signal-to-Interference-plus-Noise Ratio (SINR). In response to these issues, the dissertation introduces an innovative cell site architecture that embraces both nine and twelve sector cells, enhancing RF coverage through the adoption of an advanced antenna system designed with subarrays, offering adaptive beamforming and beam steering capabilities. To further enhance energy efficiency, the dissertation introduces adaptive higher-order cell-sectorization (e.g., nine sector cells and twelve sector cells). In this proposed method, all sectors within a twelve-sector cell remain active during peak hours (e.g., daytime) and are reduced to fewer sectors (e.g., nine sectors or six sectors per cell) during off-peak hours (e.g., nighttime). This dynamic adjustment is facilitated by an advanced antenna system utilizing sub-array architecture, which employs adaptive beamforming and beam steering to tailor the beamwidth and radiation angle of each active sector. Simulation results unequivocally demonstrate significant enhancements in RF coverage and SINR with the implementation of higher-order cell-sectorization. Furthermore, the proposed adaptive cell-sectorization method significantly reduces energy consumption during off-peak hours. In addition to addressing RF coverage planning, this dissertation delves into the numerous challenges associated with deploying 5G networks in the higher frequency spectrum (30GHz–300GHz). It encompasses issues such as precise cell site planning, location acquisition, propagation modeling, energy efficiency, backhauling, and more. Furthermore, the dissertation offers valuable insights into future research directions aimed at effectively surmounting these challenges and optimizing the deployment of 5G networks in the high-frequency spectrum
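    As a rough, back-of-the-envelope illustration of why millimetre-wave cells cover only hundreds of metres, the sketch below compares free-space path loss (Friis model only, ignoring blockage, foliage and rain margins) at an assumed 3.5GHz mid-band carrier and a 28GHz millimetre-wave carrier; the chosen frequencies and distances are illustrative, not values from the dissertation.

        # Free-space path loss: FSPL(dB) = 20*log10(d) + 20*log10(f) + 20*log10(4*pi/c)
        import math

        def fspl_db(distance_m, freq_hz):
            return 20 * math.log10(distance_m) + 20 * math.log10(freq_hz) - 147.55

        for f_ghz in (3.5, 28.0):                      # assumed mid-band vs mmWave carriers
            for d in (200, 1000, 3000):                # metres
                print(f"{f_ghz:4.1f} GHz @ {d:4d} m : {fspl_db(d, f_ghz * 1e9):5.1f} dB")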

    Localization Of Sensors In Presence Of Fading And Mobility

    Get PDF
    The objective of this dissertation is to estimate the location of a sensor through analysis of the signal strengths of messages received from a collection of mobile anchors. In particular, a sensor node determines its location from distance measurements to mobile anchors of known locations. We take into account the uncertainty and fluctuation of the received signal strength (RSS) as a result of fading, and the decay of the RSS, which is proportional to the transmitter-receiver distance raised to the power of the path loss exponent (PLE). The objective is to characterize the channel in order to derive accurate distance estimates from RSS measurements and then utilize the distance estimates in locating the sensors. To characterize the channel, two techniques are presented for the mobile anchors to periodically estimate the channel's PLE and fading parameter. Both techniques estimate the PLE by solving an equation via successive approximations. The formula in the first is stated directly from maximum likelihood estimation (MLE) analysis, whereas that in the second is derived from a simple probability analysis. Two distance estimates are then proposed, one based on a derived formula and the other based on the MLE analysis. A location technique is then proposed in which two anchors are sufficient to uniquely locate a sensor: the sensor narrows down its possible locations to two when it collects RSS measurements transmitted by a mobile anchor, then uniquely determines its location when given a distance to the second anchor. Analysis shows the PLE has no effect on the accuracy of the channel characterization, the normalized error in the distance estimation is invariant to the estimated distance, and accurate location estimates can be achieved from a moderate sample of RSS measurements
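    A minimal sketch of the log-distance channel model that underlies this kind of RSS ranging is given below. It fits the PLE by a simple least-squares fit rather than the MLE and successive-approximation estimators developed in the dissertation, and all channel parameters (reference power, reference distance, true PLE, shadowing spread) are assumed values.

        # Log-distance model: RSS(d) = P0 - 10*n*log10(d/d0) + X_sigma,
        # where n is the path loss exponent (PLE) and X_sigma is shadow fading.
        import math, random

        random.seed(0)
        P0, d0, n_true, sigma = -40.0, 1.0, 3.0, 4.0     # assumed channel parameters

        def rss_sample(d):
            return P0 - 10 * n_true * math.log10(d / d0) + random.gauss(0, sigma)

        # anchor broadcasts from known distances; the sensor fits the PLE
        obs = [(d, rss_sample(d)) for d in (2, 4, 8, 16, 32) for _ in range(50)]
        xs = [-10 * math.log10(d / d0) for d, _ in obs]
        ys = [r - P0 for _, r in obs]
        n_hat = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

        def dist_estimate(rss):
            # invert the mean model using the fitted exponent
            return d0 * 10 ** ((P0 - rss) / (10 * n_hat))

        print(f"estimated PLE = {n_hat:.2f}, distance at -70 dBm = {dist_estimate(-70):.1f} m")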

    Reverse Geographic Location of a Computer Node

    Get PDF
    The determination of methods by which a user is able to locate his computer when that user does not know his current location, termed "homestation", will provide the Air Force an advantage over its adversaries. The methods are a combination of different mathematical techniques that enable the user to manipulate data to minimize the effects of delay caused by various factors on the network. The techniques use the smallest round-trip time obtained from the ping utility. This time is then converted into miles and plotted on a map of the United States. The methods used to solve this problem are trilateration, a trilateration variant, the slope-intercept method, and the reverse traceroute combined with Euclidean distance. The results from the methods described in this research provide insight into fundamental problems that need to be resolved to achieve this capability
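    The basic trilateration step described above can be sketched as follows: the smallest round-trip time to each landmark is converted into a distance and the unknown position is solved by linearised least squares. The landmark coordinates, RTT values and RTT-to-miles factor below are illustrative assumptions, not figures from this research.

        # Trilateration from minimum ping RTTs (planar, illustrative units of miles)
        import numpy as np

        MILES_PER_MS = 186.3 * (2.0 / 3.0) / 2.0       # ~2/3 c in fibre, halved for one-way

        def trilaterate(landmarks, dists):
            """Linearise |x - p_i|^2 = d_i^2 against the first landmark and solve."""
            p0, d0 = landmarks[0], dists[0]
            A, b = [], []
            for p, d in zip(landmarks[1:], dists[1:]):
                A.append(2 * (p - p0))
                b.append(d0 ** 2 - d ** 2 + np.dot(p, p) - np.dot(p0, p0))
            return np.linalg.lstsq(np.array(A), np.array(b), rcond=None)[0]

        landmarks = [np.array(p, float) for p in [(0, 0), (500, 0), (0, 700)]]
        rtts_ms = [3.2, 5.4, 6.1]                      # smallest observed RTT per landmark
        print(trilaterate(landmarks, [MILES_PER_MS * t for t in rtts_ms]))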

    Exploring clinical phenotypes of open-angle glaucoma and their significance in practice

    Full text link
    There are several enduring questions regarding the differentiation of clinical phenotypes of glaucoma from which clinicians may derive clinical meaning directed towards patient management and prognostication. This thesis seeks to address the following issues relating to distinguishing clinical phenotypes of glaucoma: "Evaluating the impact of changing visual field test density on macular structure-function relationships to identify centrally-involving glaucoma phenotypes"; and "Identifying quantitative structural and functional clinical parameters that may distinguish between intraocular pressure (IOP) defined glaucoma phenotypes". Two studies were undertaken to examine clinical phenotypes of glaucoma. The first study utilised a systematic approach to assessing the impact of test point density in macular visual field (VF) testing on structure-function concordance for identifying centrally-involving glaucoma phenotypes. The second study used multivariate regression analysis and principal component analysis (PCA) to examine quantitative structural (using optical coherence tomography) and functional (VF) clinical data of newly-diagnosed glaucoma patients to determine if there are clinically meaningful distinctions between IOP-defined phenotypes (i.e. low-tension vs high-tension glaucoma). Study 1: Using a systematic approach of test point addition and subtraction, we identified a critical number of test locations (8-14) in macular VF testing where binarised structure-function concordance is maximised and discordance minimised. This methodology provides a framework for optimising macular VF test patterns for the detection of centrally-involving glaucoma phenotypes. Study 2: Despite statistically significant differences between low- and high-tension glaucoma, PCA applied to quantitative clinical structural and functional parameters returned no groups of clinical parameters that reliably distinguished between patients in IOP-defined glaucoma phenotypes. The present work provides a framework to identify phenotypic groups of glaucoma, the clinical significance of which may vary. We identified the minimum number of test points required to detect centrally-involving glaucoma in visual field testing. We also demonstrate that IOP-defined phenotypes are not clinically distinguishable at the point of diagnosis, suggesting that these phenotypes form part of a continuum of open-angle glaucoma. These findings have implications for disease staging and preferred treatment modality
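    As a small illustration of the Study 2 approach (not the thesis code or data), the sketch below standardises a matrix of per-patient parameters and applies PCA; in practice the columns would be OCT-derived structural measures and VF indices rather than the random placeholders used here.

        # PCA over standardised clinical parameters (placeholder data)
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(0)
        X = rng.normal(size=(120, 6))            # rows = patients, columns = parameters
        X_std = StandardScaler().fit_transform(X)

        pca = PCA(n_components=3)
        scores = pca.fit_transform(X_std)
        print(pca.explained_variance_ratio_)     # variance carried by each component
        # If no component separates low- from high-tension patients, the IOP-defined
        # phenotypes are not distinguishable in these parameters, as Study 2 concluded.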

    Debris Flow Fan Evolution, Chalk Creek Natural Debris Flow Laboratory, Colorado

    Get PDF
    Terrestrial laser scanning (TLS) is a surveying technique used to gather dense point cloud data that can be converted to high-resolution digital elevation models (DEMs). TLS techniques are employed in the current study to monitor changes to a debris flow fan following five separate debris flows over twenty-five months (May 2009 to July 2011). This thesis represents a combination of two peer-reviewed journal articles. The first focuses on a new critical review of the six predominant themes dominating the last 40 years of alluvial fan dynamism studies. The themes include the development of conceptual models, field experiments, physical models, numerical models, high-resolution morphometric analyses, and climate change scenarios. Each theme is presented independently, but as highlighted in the concluding statements, there should be greater effort placed on integrating scientists from these disparate approaches to provide a greater understanding of alluvial fan evolution. A case study is also presented in support of the review and contains pilot results from the first debris flow recorded for this study at the Colorado Natural Debris Flow Laboratory near Buena Vista, CO, USA.  M.A

    Improving visual field tests for populations with advanced glaucoma and visual field loss in the periphery

    Get PDF
    The visual field can extend up to 100° in the temporal visual region; however, in patients with glaucoma and other diseases that affect peripheral vision, only the central 30° of the visual field is monitored regularly in clinical practice using static perimetry. These static tests are rapid and robust against human errors due to their testing strategies. However, approximately 80% of the rest of the visual field is examined less regularly due to the length of time it takes to measure, using both static and kinetic stimuli. Currently, there is no established automated kinetic test that can measure the visual field with the same duration and precision as a central static perimetry test. The peripheral visual field is important for aspects such as attention, balance, and mobility, thus examination of this visual region may provide important information. This thesis focuses on the development and clinical application of automated kinetic peripheral visual field tests, designed to rapidly measure the peripheral visual field. In the first study, the outer limits of the far peripheral visual field were examined using kinetic stimuli by adapting a commercial Octopus 900 perimeter (Haag-Streit, Koniz, Switzerland) with an extended fixation device. The results confirmed research from a century ago, and the distribution of responses provided the framework to develop kinetic perimetry strategies. With this perimeter adaptation, we investigated the effect of cataract surgery on the extent of the peripheral visual field and whether negative dysphotopsia can be detected. This was undertaken in 30 post-cataract surgery patients, using a stimulus that moved both inwards towards the fixation point and outwards from the fixation point. The results suggested that implantation of intraocular lenses reduces the extent of the peripheral visual field. Negative dysphotopsia was detected in a patient, with shrinkage of the capsular bag being identified as the possible cause. Simulations of responses to kinetic stimuli formed a kinetic test that was used to measure the outer visual boundary in participants with advanced glaucoma. Simulation results showed good precision and a test duration similar to a static central test. Clinical application of this kinetic strategy test in a group of 12 participants with advanced glaucoma showed faster results than simulation estimates, and isopter estimates were precise to within ±4°. I investigated the effect of vision loss from glaucoma on postural sway stability. Postural stability was measured in 11 participants with glaucoma and 12 age-matched controls, using accelerometers (Xsens MTw, Awinda, Holland). Participants viewed different visual scenes to compare the roles of the central and peripheral visual fields in stability. The impact of proprioceptive feedback on stability and the contribution of vision were measured by using different standing surfaces. The results of this study confirmed a decrease in postural stability with vision loss, an increased reliance on proprioceptive feedback in glaucoma participants, and a lack of input from the peripheral visual field outside of 60° to standing balance

    An approach to understand network challenges of wireless sensor network in real-world environments

    Get PDF
    The demand for large-scale sensing capabilities and scalable communication networks to monitor and control entities within smart buildings has fuelled the exponential growth of Wireless Sensor Networks (WSN). WSN proves to be an attractive enabler because of its accurate sensing, low installation cost and flexibility in sensor placement. While WSN offers numerous benefits, it has yet to realise its full potential due to its susceptibility to network challenges in the environment in which it is deployed. In particular, spatial challenges in the indoor environment are known to degrade WSN communication reliability and have led to poor estimations of link quality. Existing WSN solutions often generalise all link failures and tackle them as a single entity. However, under the persistent influence of spatial challenges, failing to provide precise solutions may cause further link failures and higher energy consumption of battery-powered devices. Therefore, it is crucial to identify the causes of spatial-related link failures in order to improve WSN communication reliability. This thesis investigates WSN link failures under the influence of spatial challenges in real-world indoor environments. Novel and effective strategies are developed to evaluate WSN communication reliability. By distinguishing between spatial challenges such as a poorly deployed environment and human movements, solutions are devised to reduce link failures and improve the lifespans of energy-constrained WSN nodes. In this thesis, WSN test beds using proprietary wireless sensor nodes are developed and deployed in both controlled and uncontrolled office environments. These test beds provide diverse platforms for investigation into WSN link quality. In addition, a new data extraction feature called Network Instrumentation (NI) is developed and implemented onto the communication stacks of wireless sensor nodes to collect ZigBee PRO parameters that are under the influence of environmental dynamics. To understand the relationship between WSN and Wi-Fi device communications, an investigation of frequency spectrum sharing is conducted between the IEEE 802.15.4 and IEEE 802.11b/g/n standards. It is discovered that the transmission failure of WSN nodes under persistent Wi-Fi interference is largely due to channel access failure rather than corrupted packets. The findings conclude that both technologies can co-exist as long as there is sufficient frequency spacing between Wi-Fi and WSN communication and adequate operating distance between the WSN nodes, and between the WSN nodes and the Wi-Fi interference source. Adaptive Network-based Fuzzy Inference System (ANFIS) models are developed to predict spatial challenges in an indoor environment, namely "no failure", "failure due to poorly deployed environment" and "failure due to human movement". A comparison of models found that the best-performing model represents the properties of signal strength, channel fluctuations, and communication success rates. It is recognised that the interpretability of the ANFIS models is reduced due to the "curse of dimensionality". Hence, the Non-Dominated Sorting Genetic Algorithm (NSGA-II) technique is implemented to reduce the complexity of these ANFIS models. This is followed by a Fuzzy rule sensitivity analysis, where the impacts of Fuzzy rules on model accuracy are found to be dependent on factors such as communication range and controlled or uncontrolled environment.
    Long-term WSN routing stability is measured, taking into account the adaptability and robustness of routing paths in real-world environments. It is found that routing stability is subject to the implemented routing protocol, the deployed environment and the routing options available. More importantly, the probability of link failure can be as high as 29.9% when a next hop's usage rate falls below 10%. This suggests that a less dominant next hop is subject to more link failures and is short-lived. Overall, this thesis brings together diverse WSN test beds in real-world indoor environments and a new data extraction platform to extract link quality parameters from the ZigBee PRO stack for a representative assessment of WSN link quality. This produces realistic perspectives of the interactions between WSN communication reliability and environmental dynamics, particularly spatial challenges. The outcomes of this work include an in-depth system-level understanding of real-world deployed applications and an insightful measure of large-scale WSN communication performance. These findings can be used as building blocks for a reliable and sustainable network architecture built on top of resource-constrained WSN
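    The routing-stability observation above (higher failure rates through rarely used next hops) can be illustrated with a short sketch over a synthetic log of next-hop selections and transmission outcomes; the log format, node names and numbers are assumptions, not data from the test beds.

        # Per-next-hop usage rate and link failure rate from a (synthetic) routing log
        from collections import defaultdict

        # (node, chosen next hop, delivered?) records, e.g. parsed from NI traces
        log = [("n1", "a", True), ("n1", "a", True), ("n1", "b", False),
               ("n1", "a", True), ("n1", "b", True), ("n1", "a", False)] * 50

        stats = defaultdict(lambda: [0, 0])            # next hop -> [uses, failures]
        for _, hop, ok in log:
            stats[hop][0] += 1
            stats[hop][1] += 0 if ok else 1

        total = sum(uses for uses, _ in stats.values())
        for hop, (uses, fails) in sorted(stats.items()):
            print(f"next hop {hop}: usage {uses / total:6.1%}, failure rate {fails / uses:6.1%}")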

    A wireless sensor network system for border security and crossing detection

    Get PDF
    The protection of long stretches of countries’ borders has posed a number of challenges. Effective and continuous monitoring of a border requires the implementation of multi-surveillance technologies, such as Wireless Sensor Networks (WSN), that work as an integrated unit to meet the desired goals. The research presented in this thesis investigates the application of topologically Linear WSNs (LWSNs) to international border monitoring and surveillance. The main research questions studied here are: What is the best form of node deployment and hierarchy? What is the minimum number of sensor nodes to achieve k-barrier coverage in a given belt region? Given an appropriate network density, how do we determine if a region is indeed k-barrier covered? What are the factors that affect barrier coverage? How do we organise nodes into logical segments to perform in-network processing of data? How do we transfer information from the networks to the end users while maintaining critical QoS measures such as timeliness and accuracy? To address these questions, we propose an architecture that specifies a mechanism to assign nodes to various network levels depending on their location. These levels are used by a cross-layer communication protocol to achieve data delivery at the lowest possible cost and minimal delivery delay. Building on this levelled architecture, we study the formation of weak and strong barriers and how they determine border crossing detection probability. We propose a new method to calculate the required node density to provide a higher intruder detection rate. We then study the effect of people movement models on the border crossing detection probability. At the data link layer, a new energy balancing scheme along with a shifted MAC protocol are introduced to further increase the network lifetime and delivery speed. In addition, at the network layer, a routing protocol called Level Division Graph (LDG) is developed. LDG utilises a complex link cost measurement to ensure best-QoS data delivery to the sink node at the lowest possible cost. The proposed system has the ability to work independently or cooperatively with other monitoring technologies, such as drones and mobile monitoring stations. The performance of the proposed work is extensively evaluated analytically and in simulation using real-life conditions and parameters. The simulation results show significant performance gains when comparing LDG to its best rival in the literature, Dynamic Source Routing (DSR). Compared to DSR, LDG achieves higher performance in terms of average end-to-end delay by up to 95%, packet delivery ratio by up to 20%, and throughput by up to 60%, while maintaining similar performance in terms of normalised routing load and energy consumption
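    A crude Monte-Carlo sketch of the barrier-coverage question is shown below (an illustrative model, not the analysis in the thesis): sensors are scattered uniformly over a belt region, overlapping sensing disks are chained together, and a single barrier exists when a chain of disks links the two ends of the belt. The belt dimensions, sensing radius and node counts are assumed values.

        # Does a random deployment form a sensing barrier across the belt region?
        import random
        from collections import deque

        def has_barrier(length, width, n_sensors, r, seed=0):
            rnd = random.Random(seed)
            pts = [(rnd.uniform(0, length), rnd.uniform(0, width)) for _ in range(n_sensors)]
            left = [i for i, (x, _) in enumerate(pts) if x <= r]            # disks touching left end
            right = {i for i, (x, _) in enumerate(pts) if x >= length - r}  # disks touching right end
            seen, q = set(left), deque(left)
            while q:                                   # chain overlapping disks left to right
                i = q.popleft()
                if i in right:
                    return True
                xi, yi = pts[i]
                for j, (xj, yj) in enumerate(pts):
                    if j not in seen and (xi - xj) ** 2 + (yi - yj) ** 2 <= (2 * r) ** 2:
                        seen.add(j)
                        q.append(j)
            return False

        # barrier probability vs. node count for an assumed 1000 m x 50 m belt, 15 m sensing range
        for n in (50, 100, 200):
            hits = sum(has_barrier(1000.0, 50.0, n, 15.0, seed=s) for s in range(20))
            print(n, "sensors -> barrier probability", hits / 20)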
