
    Development of a high temporal resolution electronic sun journal for monitoring sun exposure patterns

    Excessive exposure to UV radiation can significantly damage human health, causing both acute and long-term effects. Acute effects include sunburn (erythema), immunosuppression and photokeratitis; long-term effects include melanoma and other skin cancers, and ocular diseases such as pterygium and cataracts. Measuring personal solar UV exposure and determining sun exposure patterns is important for public health, as more knowledge is needed to define the causes of diseases related to sun exposure. Many studies have employed paper-based sun diaries (journals), or expensive electronic dosimeters that limit the size of the sample population, to estimate periods of exposure. A cost-effective personal electronic sun journal (ESJ) was developed in this project, introducing a novel methodology for sensing outdoor exposure patterns that has not previously been employed for personal solar exposure monitoring. The ESJ was built around an infrared photodiode, tested in this project to determine whether it could be used in a personal ESJ for characterising personal UV exposure patterns. The development of the ESJ began by testing a group of photodiodes for their physical response; these photodiodes were chosen for their low cost, their sensitivity to infrared radiation and their cosine response as specified by the manufacturer. The photodiode with the best cosine response was selected as one of the ESJ circuit elements; the other elements are a 20 kΩ resistor, a 3 V battery and a Tinytag TK-4703 voltage data logger. Preliminary environmental tests were conducted on the ESJ to ensure that it operated correctly and was sensitive to the environment. Further tests were then performed, including cosine response, temperature stability and sky view tests. Environmental characterisation tests were then carried out by placing the ESJ in different types of static environments, and the ESJ was used in conjunction with ambient UV meters to estimate the erythemally effective UV exposure. Five individual walking tests, or field trials, were performed; in each trial the researcher held a wooden board carrying the ESJ and a PMA2100 meter while walking around and through different types of environments with a variety of shade protection. The preliminary environmental tests showed that the ESJ is sensitive to the environment. The temperature stability test showed that the ESJ can be employed at normal summer and winter temperatures. The sky view tests showed that a decrease in sky view leads to an increase in output voltage. The environmental characterisation tests demonstrated the ability of the ESJ to classify the type of environment typically occupied by users: the ESJ output voltage increased with increasing shade density (reduced sky view). Results of the individual walking tests confirmed the ability of the ESJ to detect individual exposure patterns, and the greater detail thereby obtained regarding behavioural exposure patterns cannot be obtained using paper-based sun diaries. Based on the results of this research, the ESJ could replace paper-based sun journals, which depend on self-reported volunteer recall that is subjective and possibly affected by social desirability bias. The ESJ data offer greater objectivity and could complement existing exposure monitoring in UV research studies for estimating exposure patterns. Using the ESJ further improves the accuracy of long-term epidemiological cumulative exposure studies, as high sampling rates can be obtained with this more affordable tool.
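    As an illustration of how ESJ logger output and an ambient erythemal UV record might be combined to estimate personal exposure, the sketch below (Python) flags low-voltage samples as open-sky periods and integrates the ambient erythemal irradiance over them. The column names, the voltage threshold and the open-sky weighting are illustrative assumptions, not values or logic taken from the thesis.

    # Hypothetical sketch: combine ESJ voltages with ambient erythemal UV to
    # estimate dose received during open-sky periods. Thresholds and column
    # names are assumptions for illustration only.
    import pandas as pd

    def estimate_dose(esj_csv, ambient_csv, shade_voltage=1.5):
        """Return erythemally effective UV dose (J/m^2) accumulated under open sky."""
        esj = pd.read_csv(esj_csv, parse_dates=["time"])     # columns: time, voltage (V)
        uv = pd.read_csv(ambient_csv, parse_dates=["time"])  # columns: time, uv_ery (W/m^2)
        merged = pd.merge_asof(esj.sort_values("time"), uv.sort_values("time"), on="time")
        # The thesis reports higher ESJ voltage under denser shade (reduced sky view),
        # so low-voltage samples are treated here as open-sky exposure.
        open_sky = merged["voltage"] < shade_voltage
        dt = merged["time"].diff().dt.total_seconds().fillna(0)
        return float((merged["uv_ery"] * open_sky * dt).sum())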

    Photocatalytic oxidation of ethanol using macroporous titania

    Photocatalytic oxidation (PCO) using TiO2 is a potential means of remediating poor indoor air quality attributed to low levels of volatile organic compounds (VOCs). In this work, ethanol was chosen as a simple compound representative of VOCs. The aim of this research was to establish a baseline for the photocatalytic activity of TiO2 in ethanol PCO as well as the photonic efficiency of the photoreactor; the PCO conversion could then be enhanced by using a photocatalyst with a macroporous structure. A flat plate photoreactor, UV light delivery and a flow system were designed in this work to accomplish ethanol PCO. Three kinds of photocatalyst were evaluated: 1) commercial Degussa P25 (in powder and slurry form), 2) unstructured sol-gel TiO2 and 3) macroporous TiO2 deposited on two substrates (optic fiber and glass slide). Titania from sol-gel hydrolysis was found to be a better photocatalyst than the commercial Degussa P25. The maximum PCO conversion found was 61%, using an optimum TiO2 surface loading of 0.403 mg/cm2, and a quantum efficiency of 2.3% was obtained for the photoreactor. Kinetic analysis of the experimental rate data gave an apparent reaction order of 0.45 and an approximate rate constant of 0.00144 (mol/cm3)^0.55 cm3/(gcat·s) for ethanol PCO. The photocatalyst samples were characterized using XRD, which showed that only the anatase crystalline phase was formed during sol-gel hydrolysis. SEM images confirmed that the dip-coating method at low TiO2 weights resulted in a macroporous structure, but only short-range ordering was apparent. Colloidal crystals made by convective assembly were found to have very good long-range order, with a clearly visible (111) symmetry plane. Available surface areas were measured from adsorption isotherms; the unstructured sol-gel TiO2 had a surface area of 50 m2/g, comparable to Degussa P25. Pore size distributions were generated from desorption isotherms; the unstructured sol-gel TiO2 had an average pore size of 3.9 nm. A porosity of 0.21 and a bulk density of 3.07 g/cm3 were also found, indicating a much denser structure than the Degussa P25 slurry. Lastly, an effort was made to attain higher PCO conversion for the macroporous TiO2 through higher TiO2 weights at the ideal TiO2:PS weight ratio, using three different colloidal crystal templating methods and four variations of the sol-gel infiltration technique. However, there was no evidence that a macroporous structure was formed, and PCO conversion values comparable to unstructured sol-gel TiO2 were obtained. Additional work is needed to improve the methodology used in the fabrication of the macroporous structure.
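    The reported kinetics correspond to a power-law rate expression r = k·C^0.45. A minimal sketch of how such an apparent order and rate constant can be recovered from rate-versus-concentration data by a log-log fit is shown below; the concentration points are placeholders rather than measurements from the thesis.

    # Minimal sketch: recover an apparent order n and rate constant k for the
    # power-law model r = k * C**n by linear regression in log-log space.
    # The synthetic data simply reproduce the reported values (n = 0.45,
    # k = 1.44e-3); they are not experimental measurements.
    import numpy as np

    C = np.array([1e-7, 5e-7, 1e-6, 5e-6, 1e-5])   # ethanol concentration, mol/cm^3
    r = 0.00144 * C**0.45                           # rate, mol/(g_cat*s)

    n, ln_k = np.polyfit(np.log(C), np.log(r), 1)   # slope = order, intercept = ln(k)
    print(f"apparent order n = {n:.2f}, k = {np.exp(ln_k):.2e}")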

    Assessment of personal exposure to radio frequency radiation in realistic environments


    RAD - Research and Education 2010


    Analysis and evaluation of Wi-Fi indoor positioning systems using smartphones

    This paper analyzes the main Machine Learning algorithms applied to indoor location. New technologies face new challenges: satellite positioning has become a typical application of mobile phones, but it stops working satisfactorily in enclosed spaces, and indoor positioning remains an unresolved problem. This circumstance motivates research into new methods. After the introduction, the first chapter presents current positioning methods and the problem of positioning indoors, giving a global view of the current state of the art; it includes a taxonomy that helps classify the different types of indoor positioning and a selection of current commercial solutions. The second chapter focuses on the algorithms to be analyzed and explains how the most widely used Machine Learning algorithms work; the aim of this section is to present the algorithms theoretically. These algorithms were not designed for indoor location but can be used for countless other applications. The third chapter introduces the tools used in this work, Weka and Python, and presents the results obtained after thousands of executions with different algorithms and parameters, highlighting the main problems of Machine Learning. In the fourth chapter the results are collected and the conclusions drawn from them are presented.
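    As a concrete illustration of one algorithm commonly applied to this problem, the sketch below classifies a Wi-Fi RSSI fingerprint with a k-nearest-neighbours classifier in Python. The access-point layout, signal values and zone labels are invented for illustration and are not the dataset or configuration used in the paper.

    # Illustrative k-NN fingerprinting sketch: each training row is an RSSI
    # vector (dBm) from three access points, labelled with the zone where it
    # was recorded. Values and labels are placeholders.
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    X_train = np.array([[-45, -70, -80],
                        [-47, -68, -82],
                        [-80, -50, -60],
                        [-78, -52, -58]])
    y_train = ["room_A", "room_A", "room_B", "room_B"]

    knn = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)
    print(knn.predict([[-46, -69, -81]]))   # -> ['room_A']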

    Low computational SLAM for an autonomous indoor aerial inspection vehicle

    The past decade has seen an increase in the capability of small-scale Unmanned Aerial Vehicle (UAV) systems, made possible through technological advancements in battery, computing and sensor miniaturisation. This has opened a new and rapidly growing branch of robotic research and has sparked the imagination of industry, leading to new UAV-based services, from the inspection of power lines to remote police surveillance. Miniaturisation has also made UAVs small enough to be practically flown indoors, for example to inspect elevated areas in hazardous or damaged structures where conventional ground-based robots are unsuitable. Sellafield Ltd, a nuclear reprocessing facility in the U.K., has many buildings that require frequent safety inspections. UAV inspections eliminate the current risk to personnel of radiation exposure and other hazards in tall structures where scaffolding or hoists are required. This project focused on the development of a UAV for the novel application of semi-autonomously navigating and inspecting these structures without the need for personnel to enter the building. Development exposed a significant gap in knowledge concerning indoor localisation, specifically Simultaneous Localisation and Mapping (SLAM) for use on board UAVs. To lower the on-board processing requirements of SLAM, other UAV research groups have employed techniques such as off-board processing, reduced dimensionality or prior knowledge of the structure; these techniques are not suitable for this application given the unknown nature of the structures and the risk of radio shadows. In this thesis a novel localisation algorithm is proposed that enables real-time, three-dimensional SLAM running solely on board a computationally constrained UAV in heavily cluttered and unknown environments. The algorithm, based on the Iterative Closest Point (ICP) method and utilising approximate nearest neighbour searches and point-cloud decimation to reduce the processing requirements, has been successfully tested in environments similar to that specified by Sellafield Ltd.
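    A minimal sketch of the two cost-saving ideas named above, point-cloud decimation and approximate nearest-neighbour association within an ICP step, is given below. It is a generic textbook-style ICP iteration (closed-form rigid alignment via SVD) written for illustration, not the on-board implementation developed in the thesis; the decimation factor and approximation tolerance are arbitrary.

    # Generic sketch of one ICP iteration with decimation and approximate
    # nearest-neighbour matching (not the thesis implementation).
    import numpy as np
    from scipy.spatial import cKDTree

    def icp_step(source, target, decimate=4, eps=0.1):
        """One ICP iteration: rotation R and translation t aligning source to target."""
        src = source[::decimate]                         # decimation: keep every n-th point
        _, idx = cKDTree(target).query(src, eps=eps)     # eps > 0 allows approximate matches
        tgt = target[idx]
        # Closed-form rigid alignment of the matched pairs (Kabsch / SVD).
        src_c, tgt_c = src - src.mean(0), tgt - tgt.mean(0)
        U, _, Vt = np.linalg.svd(src_c.T @ tgt_c)
        R = (U @ Vt).T
        if np.linalg.det(R) < 0:                         # guard against a reflection solution
            Vt[-1] *= -1
            R = (U @ Vt).T
        t = tgt.mean(0) - R @ src.mean(0)
        return R, t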

    Automated inverse-rendering techniques for realistic 3D artefact compositing in 2D photographs

    The process of acquiring images of a scene and modifying the defining structural features of the scene through the insertion of artefacts is known in the literature as compositing. The process can take effect in the 2D domain (where the artefact originates from a 2D image and is inserted into a 2D image) or in the 3D domain (the artefact is defined as a dense 3D triangulated mesh, with textures describing its material properties). Compositing originated in the film industry as a post-production means of enhancing, repairing and, more broadly, editing photographs and video data alike. This is generally thought of as carrying out operations in a 2D domain (a single image with a known width, height and colour data). The operations involved are sequential and entail separating the foreground from the background (matting), or identifying features from contours (feature matching and segmentation), with the purpose of introducing new data into the original. Since then, compositing techniques have gained more traction in the emerging fields of Mixed Reality (MR), Augmented Reality (AR), robotics and machine vision (scene understanding, scene reconstruction, autonomous navigation). When focusing on the 3D domain, compositing can be translated into a pipeline: the incipient stage acquires the scene data, which then undergoes a number of processing steps aimed at inferring structural properties that ultimately allow for the placement of 3D artefacts anywhere within the scene, rendering a plausible and consistent result with regard to the physical properties of the initial input. (Here, the term pipeline refers to a software solution formed of stand-alone modules or stages; the flow of execution runs in a single direction, each module can be used on its own as part of other solutions, and each module takes an input set, addresses a single type of problem and outputs data for the following stage.) This generic approach becomes challenging in the absence of user annotation and labelling of scene geometry, light sources and their respective magnitude and orientation, as well as a clear object segmentation and knowledge of surface properties. A single image, a stereo pair, or even a short image stream may not hold enough information regarding the shape or illumination of the scene; however, increasing the input data incurs an extensive time penalty, which is an established challenge in the field. Recent state-of-the-art methods address the difficulty of inference in the absence of data; nonetheless, they do not attempt to solve the challenge of compositing artefacts between existing scene geometry, or cater for the inclusion of new geometry behind complex surface materials such as translucent glass or in front of reflective surfaces. The present work focuses on compositing in the 3D domain and brings forth a software framework that contributes solutions to a number of challenges encountered in the field, including the ability to render physically accurate soft shadows in the absence of user-annotated scene properties or RGB-D data. Another contribution consists in the timely manner in which the framework achieves a believable result compared with other compositing methods, which rely on offline rendering. Neither proprietary hardware nor user expertise is required in order to achieve fast and reliable results within the current framework.
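    The single-direction, stand-alone-module pipeline described above can be pictured with a short sketch; the stage names and the data they pass along are illustrative placeholders, not the modules of the actual framework.

    # Toy sketch of a single-direction modular pipeline: each stage consumes the
    # previous stage's output, addresses one problem, and passes its result on.
    # Stage names and payloads are placeholders for illustration.
    def acquire(image_path):
        return {"image": image_path}

    def infer_geometry(scene):
        return {**scene, "geometry": "estimated depth and normals"}

    def infer_lighting(scene):
        return {**scene, "lights": "estimated light sources"}

    def composite(scene):
        return {**scene, "result": "artefact rendered into " + scene["image"]}

    scene = acquire("photo.jpg")
    for stage in (infer_geometry, infer_lighting, composite):
        scene = stage(scene)            # data flows in one direction, stage by stage
    print(scene["result"])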

    Investigations of 5G localization with positioning reference signals

    TDOA is a user-assisted or network-assisted technique in which the user equipment measures the time of arrival of precise positioning reference signals (PRS) transmitted by mobile base stations and reports the measured time-of-arrival estimates to the position server. Using multilateration based on the TDOA measurements of the PRS received from at least three base stations, together with the known locations of those base stations, the location server determines the position of the user equipment. Several factors influence the positioning accuracy of the TDOA method, such as the sampling rate, the bandwidth, the network deployment, the properties of the PRS and the signal propagation conditions. A positioning accuracy of about 50 meters is adequate for 4G/LTE users, whereas 5G requires an accuracy of less than a meter for outdoor and indoor users. Noteworthy improvements in positioning accuracy can be achieved by redesigning the PRS in 5G technology. Localization accuracy has previously been studied for different sampling rates and different algorithms, but the impact of sample rate and bandwidth on high-accuracy TDOA with the 5G PRS has not yet been taken into consideration. The key goal of the thesis is therefore to compare and assess the impact of different sampling rates and different PRS bandwidths on 5G positioning accuracy. Analyses with varying PRS bandwidths, expressed in resource blocks, show a meaningful decrease in the RMSE and a significant increase in the SNR: a higher PRS bandwidth brings a higher SNR while the RMSE of the positioning errors decreases, and a smaller number of PRS resource blocks gives a lower SNR with higher RMSE values. Across all of the tested PRS bandwidths the RMSE remained below one meter, which is a positive outcome of the research. Positioning accuracy was also analyzed with different sample sizes; with an increased sample size, a decrease in the root mean square error and a substantial increase in the SNR were observed. The investigation shows that the two analyses (sample size and bandwidth), although performed differently, both reach the targeted outcome. A bandwidth of 38.4 MHz and a sample size of N = 700 were required to achieve below 1 m accuracy with an SNR of 47.04 dB.
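    As a worked illustration of the multilateration step described above, the sketch below solves for a 2D user-equipment position from TDOA measurements against four base stations by non-linear least squares. The geometry and measurements are synthetic and noise-free, chosen only to show the technique; they do not come from the thesis.

    # Synthetic TDOA multilateration sketch: estimate the UE position from the
    # arrival-time differences of a reference signal at known base stations.
    import numpy as np
    from scipy.optimize import least_squares

    c = 3.0e8                                           # speed of light, m/s
    bs = np.array([[0.0, 0.0], [500.0, 0.0],
                   [0.0, 500.0], [500.0, 500.0]])       # base station positions, m
    ue_true = np.array([120.0, 330.0])                  # ground-truth UE position, m

    dist = np.linalg.norm(bs - ue_true, axis=1)
    tdoa = (dist[1:] - dist[0]) / c                     # measured TDOAs vs. reference BS 0

    def residuals(p):
        r = np.linalg.norm(bs - p, axis=1)
        return (r[1:] - r[0]) / c - tdoa

    estimate = least_squares(residuals, x0=np.array([250.0, 250.0])).x
    print(estimate)                                     # approximately [120, 330]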