
    Monitoring of a Veneer Lathe Knife by the use of an Industrial Internet of Things Platform

    The number of devices connected to the Internet grows constantly. This entity of connected devices has been labeled the Internet of Things (IoT). One important aspect of this is its industrial applications, sometimes labeled the Industrial Internet of Things (IIoT). Collecting and analyzing the massive amounts of data that industry generates will only become more important as technology and the need for efficiency increase. Novotek is a company with long and extensive experience in industrial IT and automation. Together with their customer Quant Service, they are launching a project for predictive maintenance. The project aims to monitor several industrial sites using an industrial platform and the IIoT framework, allowing machine status and maintenance needs to be tracked from both near and afar. One of the sites for this project is a veneer production line for composite wood products. As part of the monitoring and predictive maintenance project, this report looks at the possibility of using the ThingWorx IIoT platform’s analytics functionality to determine the need for maintenance of the cutting knife on a veneer lathe. The goal is to examine its uses for monitoring and predictive maintenance in this particular case, but also as a general method. The process is twofold. Since the project uses the IIoT framework, one part concerns collecting the data from the site and passing it through the platform to the analytics program. The second part concerns the machine learning and statistical methods and algorithms used to analyze the data for predictions. For benchmarking, the platform is compared to another analytics product. The results of the project are not conclusive concerning the knife predictions; further development of the measurement setup is needed.
The IIoT platform does, however, show potential for the intended purpose.

Predictive Maintenance with the Industrial Internet of Things: The Industrial Internet of Things is growing every day. When machines talk to each other, they will revolutionize industry as we know it. The Industrial Internet of Things (IIoT), meaning real-time interconnectedness of industrial devices, is said to play a big part in the next industrial revolution, Industry 4.0. Pretty much all industrial devices, or Things, generate data. But data is not information. If it is to be valuable, it must be analyzed with the right tools so that the right decisions can be made. Ultimately, this will lead to complete automation of the industrial process, with smart machines talking and giving advice to each other. Novotek, a company with long experience in industrial IT and automation, is launching an IIoT project together with a customer. As part of this, an MSc thesis study was done on using an IIoT platform for predictive maintenance. The object of study was a veneer peeling lathe used in the manufacturing of composite wood products. Wood cutting constantly dulls the tools involved, and they need to be sharpened or exchanged several times during a workday. If it is possible for the machine to “know” the sharpness of its knife, it can decide the optimal point of maintenance. One possible method to predict this is to monitor overall vibrations in the lathe and look for patterns. To handle all the communication, storage and analysis of the data, specialized tools are needed. One such tool is the IIoT platform ThingWorx. ThingWorx has functionality for a multitude of applications. It can keep track of all your Things and handle the communication between them. It also has components for advanced analysis of data, using machine learning and statistical algorithms.
The results of the study are not conclusive, but tests of the process imply the usefulness of the IIoT framework. The application implemented creates a well-defined path for the data to follow. This works both for modeling the problem and for facilitating predictive process monitoring in actual operation. Once an IIoT solution has been implemented, a company has a complete structure for connecting and monitoring all parts of its business. This goes beyond just reading production parameters from afar. This kind of connected industry can monitor itself. It can make predictions and take the right manufacturing decisions autonomously, involving humans only when needed. The possibilities for optimization and efficiency go far beyond what was thought possible only a decade ago.
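The vibration-monitoring idea described above can be sketched as follows. This is a minimal illustration, not the thesis implementation or the ThingWorx analytics pipeline; the sample values, window size, baseline and threshold factor are all invented.

```python
# Sketch: summarize lathe vibration into windowed RMS features and flag drift
# that may indicate knife dulling. All numbers are invented for illustration.
import math

def rms(window):
    """Root-mean-square energy of one window of samples."""
    return math.sqrt(sum(x * x for x in window) / len(window))

def rms_features(signal, window_size=4):
    """Non-overlapping windowed RMS of a raw vibration signal."""
    return [rms(signal[i:i + window_size])
            for i in range(0, len(signal) - window_size + 1, window_size)]

def needs_maintenance(features, baseline, factor=1.5):
    """Flag when recent vibration energy drifts above the sharp-knife baseline."""
    return features[-1] > factor * baseline

vibration = [0.1, -0.2, 0.15, -0.1, 0.4, -0.5, 0.45, -0.42]  # invented samples
feats = rms_features(vibration)
print(needs_maintenance(feats, baseline=0.15))  # True: the last window is louder
```

In a real deployment the baseline would be learned from recordings taken just after sharpening, rather than set by hand.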

    Evaluating interventions to make healthcare safer: methodological analysis and case study

    This thesis describes study designs for the robust evaluation of complex patient safety interventions. Fundamentally, study designs available to measure the effectiveness of patient safety interventions fall into two categories – those that use contemporaneous controls, and those that do not. A review of the recent literature (245 citations) revealed that most studies were single-centre (63%), and the majority of these did not use contemporaneous controls (84%); whilst in multi-centre studies (37%) the number of studies using contemporaneous controls (49%) roughly equalled the number that did not (51%). Studies that do not use contemporaneous controls dominate the literature, but they are weak and subject to bias. The thesis further discusses a case study, as an exemplar for the evaluation of a highly complex patient safety intervention – the Safer Patients Initiative (SPI), which sought to generically strengthen hospitals whilst improving frontline activities. The evaluation was a before and after study, with contemporaneous controls. It used mixed methods, so that one type of research finding could be reinforced through triangulation when corroborated by a finding of another type. Uniquely, it also compared the rates of change across control and SPI hospitals – an approach referred to as the “difference-in-difference” method.
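The “difference-in-difference” method mentioned above can be illustrated with a small sketch: the change observed at intervention (SPI) hospitals is compared against the change at contemporaneous control hospitals over the same period. The hospital figures below are invented for illustration and are not from the SPI evaluation.

```python
# Difference-in-difference: subtract the control group's change from the
# intervention group's change, so that background trends are netted out.

def difference_in_difference(treat_before, treat_after, ctrl_before, ctrl_after):
    """DiD estimate: (treated change) minus (control change)."""
    return (treat_after - treat_before) - (ctrl_after - ctrl_before)

# Adverse events per 1000 bed-days (invented figures)
effect = difference_in_difference(
    treat_before=12.0, treat_after=9.0,    # SPI hospitals improved by 3.0
    ctrl_before=11.5, ctrl_after=10.5,     # controls improved by 1.0
)
print(effect)  # -2.0: the extra improvement attributable to the intervention
```

The control change captures whatever would have happened anyway, which is exactly why designs without contemporaneous controls are weaker.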

    A comprehensive comparative study of several data-driven methods for water resources management in Mediterranean environments across different time scales

    Since the beginning of time, there has been innovation in the knowledge and technology of water and hydraulic systems, in pursuit of their efficient and up-to-date management. In this project, as an opening hypothesis, we apply computational techniques and Artificial Intelligence concepts. Given that the primary asset of these studies is data, we prefer the term "Data-Driven" (DD), as the term Artificial Intelligence can cause confusion among non-experts. This is an expanding field in all aspects of science and life: as computing and processing power increase, so does the generation of information. Consider 5G technology, or the Internet of Things, where the exponential build-up in the volume of data pushes us to set up frameworks for its treatment and analysis. Data-Driven techniques offer enormous potential to transform how we understand, monitor and predict the states of hydro-meteorological variables. Their application provides benefits; however, performing these exercises requires practice and explicit knowledge. Therefore, a deeper understanding of the capabilities and limitations of novel computational techniques within our field of knowledge is needed, and it is essential to carry out "hydro-informatics" experiences under this assumption. For the development of these models, we identify which points are the most relevant and need to be taken into account under regional conditions or frameworks. Consequently, we work with the time series collected in the different monitoring networks, selecting the hydrological points of interest, in order to develop hydrological frameworks useful for water management and optimisation.
Here, we are interested in the practical applicability to hydro-meteorology under Mediterranean conditions, where data are sometimes scarce, by selecting two hydrographic basins in south-east Andalusia: the Guadalhorce river (Málaga) and the Guadalfeo river (Granada). In chapter 1, an introduction to the doctoral thesis is made. Likewise, we establish the general and specific objectives, and the motivation of the thesis. Afterwards, we describe the three fundamental exercises carried out in the research work: Regression, Classification and Optimisation. Finally, we carry out a brief review of previous works under Mediterranean climatic conditions and similar assumptions. Chapter 2 presents the study areas, analysing the spatial and temporal characteristics of two Andalusian Mediterranean basins in south-east Spain: Guadalhorce (GH) and Guadalfeo (GF). These are hydrographic basins with highly variable and heterogeneous space-time patterns. The first hydrological system, GH, contains an area of socio-economic importance, the city of Málaga. The second, GF, has the Sierra Nevada National Park to the north, crowned by the Mulhacén peak, and flows within a few kilometres into the area of Motril. In this particular water system, we find large gradients in the geophysical agents. Both systems have regulation structures of great interest for the development and study of their optimisation. We also review the monitoring networks available in these basins, and which environmental agents and/or processes should be taken into account to meet the objectives of this work. We carry out a bibliographic review of the most relevant historical floods, listing the factors associated with these extreme events. In the data analysis stage of this chapter, we focus on the spatial-temporal evolution of flood risk at the mouths of the Guadalhorce and Guadalfeo rivers into the Alborán Sea.
We quantify how this risk has stepped up in recent years, noting that dangerous practices, namely the intrusion of high-cost land uses into flood-prone areas, have increased the risk of flooding. This chapter also analyses data collected within the monitoring networks to understand the occurrence of floods in the river GH in relation to upstream discharges. We found that this basin has limitations in regulation and cannot mitigate costs downstream. The results obtained were part of the work presented in Egüen et al. (2015). These analyses allow us to identify which parts of the flood management of this hydrological system need more precise optimisation. Finally, a summary of another important hydrological risk, droughts, is carried out, covering how these water deficits can be represented by standardised indices, both for rainfall and for flow rates. The various approaches and methodologies for hydro-meteorological time series modelling are discussed in chapter 3. The contrasting concepts are exposed antagonistically, to focus on the different design choices that we need to make: black box vs. grey box vs. white box, parametric vs. non-parametric, static vs. dynamic, linear vs. non-linear, frequentist vs. Bayesian, single vs. multiple, among others, detailing the advantages and disadvantages of each approach. We presented some of the ideas that emerged in this part of the research in Herrero et al. (2014). The partition, management and data transformation steps for the correct application of these experimental methods are also discussed. This is of great importance, since part of the hard work in applying these methods comes from transforming the data so that the algorithms and transfer functions work correctly.
Finally, we focus on how to test and validate deterministic and probabilistic behaviours through evaluative coefficients, avoiding coefficients that mask the results and therefore focusing on the behaviours of our interest, in our case precision and predictability. We have also taken parsimony into account in models based on neural networks, since they can easily fall into over-parameterisation. In chapter 4, we present the experimental work, where seven short-term rainfall-runoff regressions are performed, six daily and one hourly. The case studies correspond to various points of interest within the study areas with important implications for hydrological management. On an hourly scale, we analyse the efficiency and predictive capacities of Multiple Linear Regression (MLR) and Bayesian Neural Networks (BNN) at ten time horizons for the level of the Guadalhorce River at Cártama. We found that, for closer predictive horizons, a simpler approach such as the linear one (MLR) can outperform one with a priori higher capabilities, such as the non-linear one (BNN). This finding could greatly simplify development and application. At a daily scale, we establish a comparative framework between the two previous models and a complete Bayesian method, Gaussian Processes (GP). This DD computational technique allows us to apply different transfer functions under a single model. This is an advantage over the other two DD models, since the results show that they work well in one domain but not in the other. During the construction of the models, we select the input variables progressively, through a trial-and-error method, in which significant improvements with respect to the last predictor structure are taken into account while preserving the principle of parsimony. Here, we have used different types of data: real data collected in the monitoring networks, and data generated in parallel from physically based hydrological modelling (WiMMed).
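The progressive trial-and-error input selection described above can be sketched as a greedy forward search: candidate predictors are added one at a time and kept only if they significantly improve the fit, preserving parsimony. This is a simplified illustration using ordinary least squares in place of the thesis models; the rainfall-runoff data and the improvement threshold are invented.

```python
# Greedy forward selection of predictors for a linear (MLR-style) regression.
import numpy as np

def fit_rmse(X, y):
    """RMSE of an ordinary least-squares fit."""
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(np.sqrt(np.mean((X @ coef - y) ** 2)))

def forward_select(candidates, y, min_gain=0.05):
    """Add the candidate that most reduces RMSE; stop when gains are small."""
    chosen = [np.ones(len(y))]                # intercept column
    remaining = dict(candidates)
    best = fit_rmse(np.column_stack(chosen), y)
    selected, improved = [], True
    while improved and remaining:
        improved = False
        for name, col in list(remaining.items()):
            rmse = fit_rmse(np.column_stack(chosen + [col]), y)
            if best - rmse > min_gain:        # keep only significant improvements
                chosen.append(col); selected.append(name)
                del remaining[name]; best = rmse; improved = True
                break
    return selected, best

rng = np.random.default_rng(0)
rain = rng.random(50)                          # invented rainfall series
noise = rng.random(50)                         # an irrelevant candidate
flow = 2.0 * rain + 0.1 * rng.standard_normal(50)   # invented runoff response
selected, rmse = forward_select({"rain": rain, "noise": noise}, flow)
print(selected)  # only 'rain' survives; 'noise' adds no significant gain
```

The `min_gain` guard is what enforces parsimony: a predictor that cannot beat the current structure by a meaningful margin is never admitted.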
The results are robust; the major limitation is the high computational cost of the recurrent and iterative method used. Some results of this chapter were presented in Gulliver et al. (2014). In chapter 5, three medium-term prediction experiments are performed. We base the first modelling experiment on a quarterly scale, where a hydrological time scheme determines the cumulative flow for specific time horizons. We start the scheme according to the relevant dates on which hydrological planning takes place. We validate that the forecasts improve once the first six months of the hydrological year have elapsed, rather than at the three-month point at which we carry out the evaluations. The observed input variables quantified in the water system are: cumulative stream flow, cumulative rainfall, cumulative snowfall values and atmospheric oscillations (AO). At the level of modelling with DD, this experience has shown the importance of combining mixed regression-classification models instead of regression-only models within static frameworks. In this manner, we reduce and narrow the space of possible solutions and, therefore, optimise the predictive behaviour of the DD model. During the development of this exercise, we have also carried out a classification practice comparing three DD classifiers: Probabilistic Neural Network (PNN), K-Nearest Neighbour (KNN) and Support Vector Machine (SVM). We see that the SVM behaves better than the others with our data. However, more research is still needed on classifiers in hydro-meteorological frameworks like ours, because of their variability. We presented this part of the doctoral thesis in Gulliver et al. (2016). In the second section of this chapter (Sec. 5.3), we carry out a rain forecast exercise on a monthly scale. To do so, we use BNN following the same construction method as the SVI model exposed in the previous chapter (Chapter 4), thus validating it on another time scale.
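The classifier comparison described above can be sketched with scikit-learn. This is an illustration on invented data, not the thesis experiment: the Probabilistic Neural Network is omitted because scikit-learn does not provide one, leaving KNN and SVM under cross-validation.

```python
# Compare KNN and SVM classifiers by mean cross-validated accuracy on
# invented hydro-meteorological-style features (wet/dry class labels).
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.standard_normal((120, 3))    # e.g. cumulative rain, snow, stream flow
y = (X[:, 0] + 0.3 * rng.standard_normal(120) > 0).astype(int)  # wet/dry label

for name, model in [("KNN", KNeighborsClassifier(n_neighbors=5)),
                    ("SVM", SVC(kernel="rbf"))]:
    score = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: {score:.2f}")
```

With real, highly variable series the ranking can change between basins, which is exactly why the text calls for more research on classifiers in such frameworks.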
However, the results in predictive terms are poor for this hydro-meteorological variable. This confirms the difficulty of predicting this variable from historical data alone, without the incorporation of dynamic tools, and thus the need for complex hydrodynamic modelling for the prediction of this important variable. On the other hand, this case serves to empirically infer the causality of the most relevant atmospheric oscillations at the points of study. From multiple simulations with the model-based approach, it has been possible to establish which indices have a greater influence. In the last section of this chapter (Section 5.4), an exercise was carried out to predict the deviation or anomaly of rainfall and runoff indices for four time series representative of different locations within the Guadalfeo BR. In this case, we verified the suitability of seven statistical distributions to characterise the anomalies/deviations under Mediterranean conditions. Under this hypothesis, the indices that passed the Shapiro-Wilk normality test were modelled to analyse the capabilities of BNN to predict these indices at various time horizons. Here, predictions of negative phases (droughts or deficit periods) have been poor, while the behaviour of the models for positive phases (wet periods) has been more successful. Regarding the causal inference of the climatic indices and their possible influence on the study area, we found that NAO and WEMO help forecasts for shorter time horizons, while MOI helps for longer cumulative time horizons. We have analysed the relevance of these atmospheric variables in each case; sometimes their introduction was convenient and sometimes not, following the construction rules and detailing them in each case study.
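The standardised-anomaly indices and the Shapiro-Wilk screen described above can be sketched briefly. This is a minimal illustration with invented data, not the thesis's seven-distribution fitting: here rainfall is simply z-scored against its own record, and the Shapiro-Wilk test checks whether the resulting index is close enough to normal to be modelled.

```python
# Standardise a monthly rainfall record into an anomaly index, screen it for
# normality, and identify negative phases (deficit periods). Data are invented.
import numpy as np
from scipy.stats import shapiro

rng = np.random.default_rng(2)
monthly_rain = rng.gamma(shape=2.0, scale=30.0, size=240)  # 20 invented years

index = (monthly_rain - monthly_rain.mean()) / monthly_rain.std()

stat, p_value = shapiro(index)
is_usable = p_value > 0.05     # passes the normality screen used in the text
droughts = index < -1.0        # negative phases (droughts or deficit periods)
print(is_usable, int(droughts.sum()))
```

Skewed Mediterranean rainfall usually fails this screen when z-scored directly, which is why the thesis first fits candidate distributions before standardising.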
Throughout the work, the usefulness of mixed modelling approaches has been verified, combining models based on observed data from the different monitoring networks with physical modelling for the reproduction of essential hydrological processes. With the proposed methodology, a positive influence of atmospheric oscillations has been observed for medium-term prediction within the study regions, with no evidence found for short-term (daily scale) predictions. The final conclusions and the most important points for future work are presented in chapter 6. Applications of this type of method are currently necessary. They help us to establish relationships based on measured hydro-meteorological data, and thus "based on real data", without hypothesising assumptions. These data-based experiences are very useful for limiting future uncertainty and optimising water resources. The establishment of temporal relationships between different environmental agents allows us, through supervised methods, to establish causal relationships. From here, a physical inference exercise is necessary to add coherence and establish a robust scientific exercise. The results obtained in this work reaffirm the practicality of implementing these Data-Driven frameworks in both the public and private spheres, being a good starting point for technology transfer. Most of the routines and models provided in this thesis could be directly applied in hydro-meteorological services or decision support systems for water officials. This includes potential users as varied as public administrations and basin organisations, reservoir managers, energy companies that manage hydroelectric generation, irrigation communities, water bottling plants, etc.
The establishment of iterative and automatic frameworks for data processing and modelling needs to be implemented to make the most of the data collected in the water systems.

    Development of a high spatial selectivity tri-polar concentric ring electrode for Laplacian electroencephalography (LEEG) system

    Brain activity generates electrical potentials that are spatio-temporal in nature. Electroencephalography (EEG) is the least costly and most widely used non-invasive technique for diagnosing many brain problems. It has high temporal resolution but lacks high spatial resolution. The surface Laplacian enhances the spatial resolution of EEG, as it performs the second spatial derivative of the surface potentials. In an attempt to increase spatial selectivity, researchers introduced a bipolar electrode configuration using the five-point finite difference method (FPM), and others applied a quasi-bipolar (tri-polar with two elements shorted) concentric electrode configuration. To further increase the spatial resolution, the nine-point finite difference method (NPM) was generalized to tri-polar concentric ring electrodes. A computer model was developed to evaluate and compare the properties of concentric bipolar, quasi-bipolar, and tri-polar electrode configurations, and the results were verified with tank experiments. The tri-polar configuration was found to have significantly improved spatial localization. Movement-related potential (MRP) signals were recorded from the left pre-frontal lobes on the scalp of human subjects while they performed fast repetitive movements. Disc, bipolar, quasi-bipolar, and tri-polar electrodes were used, and MRP signals were plotted for all four electrode configurations. The SNR of the four electrode configurations was studied and statistically analyzed using Bonferroni tests. MRP signals were also recorded from a 5×7 array on the left hemisphere of the head. The SNR, spatial selectivity, and mutual information (MI) were compared among conventional disc electrodes and bipolar and tri-polar concentric ring electrodes. The tri-polar concentric electrodes showed more significant improvement in SNR than all the other electrode systems tested.
Tri-polar concentric electrodes also had significantly higher spatial selectivity and spatial attenuation for global signals. The increased spatial selectivity significantly decreased the MI between different channels, which will be useful in different BCI systems. The tri-polar and bipolar concentric ring electrode configurations were also shown to be appropriate for recording seizure electrographic activity. The higher spatial selectivity of tri-polar concentric electrodes may be useful for seizure foci detection and seizure stage determination.
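The finite-difference idea behind the Laplacian electrodes described above can be illustrated with the five-point method (FPM): the second spatial derivative of the surface potential is approximated from a centre sample and four equally spaced neighbours. The grid values below are invented; this is a sketch of the discretisation, not of the electrode hardware.

```python
# Five-point finite difference (FPM) estimate of the surface Laplacian at a
# centre point, with r the spacing between the centre and its four neighbours.

def five_point_laplacian(center, north, south, east, west, r=1.0):
    """FPM estimate: (sum of neighbours - 4 * centre) / r^2."""
    return (north + south + east + west - 4.0 * center) / (r * r)

# Sanity check on v(x, y) = x^2 + y^2, whose true Laplacian is exactly 4:
# at the origin with r = 1, each neighbour potential is 1 and the centre is 0.
print(five_point_laplacian(center=0.0, north=1.0, south=1.0, east=1.0, west=1.0))
```

The nine-point method (NPM) mentioned in the abstract extends the same stencil with four diagonal neighbours to cancel higher-order error terms, which is what the tri-polar ring geometry exploits.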

    Public accountability: The case of government guarantee scheme in PFI/PPP projects

    Government guarantee schemes have become a well-known policy strategy for encouraging public-private infrastructure delivery. However, a major concern with government guarantees in PFI/PPP is weak public accountability scrutiny. This study therefore investigates the accountability mechanisms necessary for evaluating the PFI/PPP government guarantee scheme within the UK context. Using an exploratory sequential mixed-methods approach, constructs from accountability theory (Process-Based Accountability Mechanisms, Ethics-Based Accountability Mechanisms, Democratic Accountability Mechanisms, and Outcome-Based Accountability Mechanisms) were examined. Sixteen (16) accountability mechanisms (value for money, parliamentary scrutiny, rule of law, etc.) useful for evaluating the PFI/PPP government guarantee scheme were identified and used to formulate theoretical hypotheses. Through literature review, documentation and case-study interviews with experts in the public and private sectors, 78 indicators contributing towards each accountability mechanism were uncovered. After confirming the relevance of each indicator with experts in the qualitative study, a final questionnaire survey was developed and distributed to wider audiences. A series of statistical tests was performed on the collected questionnaire data, including Descriptive Mean Rating, Reliability Analysis, the Mann-Whitney U Test of Significant Differences in Perceptions, and Structural Equation Modelling. The results revealed that fourteen of the sixteen tested hypotheses were validated, with two rejected (Benchmarking and Budgetary Reporting). The findings also identified the top five accountability mechanisms critical for evaluating the PFI/PPP government guarantee scheme: Value for Money, Competition, Social and Political Impact, Risk Management, and Parliamentary Scrutiny. The study culminated in a multidimensional framework for public accountability in the PFI/PPP government guarantee scheme.
Contributing towards existing accountability theory, the study confirmed that a combination of multiple accountabilities, as against a single-dimensional accountability, is necessary for strengthening public accountability in the PFI/PPP government guarantee scheme. For UK policy formulators, the results suggested the need for future re-dimensioning of accountability frameworks for infrastructure government guarantee schemes, especially as the nation faces new geopolitical and economic complexities in the years to come.
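The reliability analysis mentioned above is typically an internal-consistency check on the questionnaire items behind each mechanism. As a minimal sketch, the helper below computes Cronbach's alpha, a standard reliability coefficient; the rating data shown is entirely hypothetical and this is only an illustration of the kind of check described, not the study's actual analysis pipeline.

```python
from statistics import pvariance

def cronbach_alpha(item_scores):
    """Cronbach's alpha for internal-consistency reliability.

    item_scores: one inner list of ratings per questionnaire item,
    aligned by respondent. Alpha near 1 means the items measure the
    same underlying construct consistently.
    """
    k = len(item_scores)
    # Sum of per-item variances versus variance of respondents' totals.
    item_var_sum = sum(pvariance(scores) for scores in item_scores)
    totals = [sum(col) for col in zip(*item_scores)]
    return (k / (k - 1)) * (1 - item_var_sum / pvariance(totals))

# Hypothetical Likert ratings (3 indicator items, 5 respondents):
ratings = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 5, 2, 4, 3],
]
alpha = cronbach_alpha(ratings)
```

A common rule of thumb treats alpha above roughly 0.7 as acceptable before retaining a construct for further modelling.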

    Feasibility study for a numerical aerodynamic simulation facility. Volume 1

    Get PDF
    A Numerical Aerodynamic Simulation Facility (NASF) was designed for the simulation of fluid flow around three-dimensional bodies, both in wind-tunnel environments and in free space. The application of numerical simulation to this field of endeavor promised to yield economies in aerodynamic and aircraft body design. A model for a NASF/FMP (Flow Model Processor) ensemble using a possible approach to meeting NASF goals is presented. The computer hardware and software are presented, along with the entire design and a performance analysis and evaluation.

    Automatic generation of sound synthesis techniques

    Get PDF
    Thesis (S.M.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2001. Includes bibliographical references (p. 97-98). Digital sound synthesizers, ubiquitous today in sound cards, software and dedicated hardware, use algorithms (Sound Synthesis Techniques, SSTs) capable of generating sounds similar to those of acoustic instruments and even totally novel sounds. The design of SSTs is a very hard problem. It is usually assumed that it requires human ingenuity to design an algorithm suitable for synthesizing a sound with certain characteristics. Many of the SSTs commonly used are the fruit of experimentation and long refinement processes. An SST is determined by its "functional form" and "internal parameters". Design of SSTs is usually done by selecting a fixed functional form from a handful of commonly used SSTs and applying a parameter estimation technique to find the set of internal parameters that best emulates the target sound. A new approach for automating the design of SSTs is proposed. It uses a set of examples of the desired behavior of the SST in the form of "inputs + target sound". The approach is capable of suggesting novel functional forms and their internal parameters, suited to follow the given examples closely. Design of an SST is stated as a search problem in the SST space (the space spanned by all possible valid functional forms and internal parameters, within certain limits to make it practical). This search is done using evolutionary methods; specifically, Genetic Programming (GP). A custom language for representing and manipulating SSTs as topology graphs and expression trees is proposed, as well as the mapping rules between both representations. Fitness functions that use analytical and perceptual distance metrics between the target and produced sounds are discussed.
The AGeSS system (Automatic Generation of Sound Synthesizers) developed in the Media Lab is outlined, and some SSTs and their evolution are shown.
by Ricardo A. García. S.M.
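The evolutionary search the abstract describes can be sketched in miniature. The toy below fixes the functional form to simple FM synthesis and evolves only the internal parameters against an analytical (mean-squared-error) fitness; this is a deliberate simplification, since the thesis evolves the functional form itself with Genetic Programming and also uses perceptual distance metrics. All parameter ranges, population sizes, and the mutation scheme are illustrative assumptions.

```python
import math
import random

def fm_synth(params, n=256, sr=8000):
    """One-carrier, one-modulator FM synthesis (a fixed functional form)."""
    fc, fm, index = params
    return [math.sin(2 * math.pi * fc * t / sr
                     + index * math.sin(2 * math.pi * fm * t / sr))
            for t in range(n)]

def fitness(params, target):
    """Analytical distance metric: mean squared error to the target sound."""
    cand = fm_synth(params)
    return sum((a - b) ** 2 for a, b in zip(cand, target)) / len(target)

def evolve(target, pop_size=30, gens=40):
    """Tiny truncation-selection evolutionary loop over internal parameters."""
    random.seed(1)
    pop = [(random.uniform(100, 1000), random.uniform(10, 500),
            random.uniform(0, 5)) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda p: fitness(p, target))
        parents = pop[:pop_size // 3]          # keep the fittest third
        pop = parents + [tuple(g + random.gauss(0, 5)  # Gaussian mutation
                               for g in random.choice(parents))
                         for _ in range(pop_size - len(parents))]
    return min(pop, key=lambda p: fitness(p, target))

# Target generated with known parameters, so a perfect solution exists.
target = fm_synth((440.0, 110.0, 2.0))
best = evolve(target)
```

With the functional form itself in the genome, as in the thesis, the individuals would be expression trees rather than fixed-length parameter tuples.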

    Wireless modular multi-sensor systems for the analysis of mechanical coupling between respiration and locomotion in mammals

    Get PDF
    Locomotor-respiratory coupling (LRC) is, according to current models, a mechanical and neuromuscular link between respiration and locomotion in mammals. In the last several decades many researchers have studied LRC in different mammals, yet easily adaptable, modular measurement systems for cross-species analysis have been lacking. It was therefore not precisely established how locomotion cycles contribute to the respiratory flow or how respiratory muscle contractions affect locomotion. Most of these studies focused on LRC during running, but some also analyzed other activities such as cycling, flying (birds), or diving. In this work a novel method was developed, based on a modular multi-sensor wireless system, which allows the interaction between locomotion and respiration in mammals to be analyzed.
The developed system consists of four modules for the LRC analysis: (1) a thoracic pressure measurement module based on an implantable device, (2) a volumetric flow module to measure the locomotor-driven air volume (LDV) during the breathing cycle, (3) a step identification module to calculate the LRC ratio (stride/breath), and (4) a muscular activity module to analyze the behavior of the respiratory muscles during the coupling. These modules can be freely combined. The wireless communication allows studies to be performed in the open field, where the animal can move freely at a self-selected running pace, in contrast to previous studies where the subject moved at a steady speed on a treadmill. These characteristics could significantly reduce the stress level of animals during the experiments. As an experimental test of the system, the method was applied to humans. The Respiratory Flow Module (RFM) was designed based on an ergonomic mask and a flow sensor. The Respiratory Muscles Module (RMM) had four surface electromyography (sEMG) sensors located at the abdominal and thoracic respiratory muscles, and two accelerometers were located at the ankle of each leg to detect foot-ground contact. Fifteen participants were evaluated in a sprint running test at a sports center (50 m x 30 m) of Technische Universität Ilmenau. The obtained results confirmed a variable LRC ratio of 2:1, 3:1, 4:1, as was shown in previous studies. However, in the case of LDV, breathing reached almost the maximum amplitude of the vital capacity. The performed experiment showed that this novel method can also be used to study other mammals.
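The LRC ratio computed by the step identification module reduces to counting foot-ground contacts within each breath cycle. The sketch below shows that idea on hypothetical event timestamps; the function name and the data are illustrative assumptions, not the system's actual signal processing.

```python
def lrc_ratio(step_times, breath_onsets):
    """Steps per breath: count foot-ground contact events falling inside
    each breath cycle, i.e. between consecutive breath onsets (seconds)."""
    ratios = []
    for start, end in zip(breath_onsets, breath_onsets[1:]):
        ratios.append(sum(1 for t in step_times if start <= t < end))
    return ratios

# Hypothetical timestamps: two foot contacts per breath gives an LRC of 2:1.
steps = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
breaths = [0.0, 1.1, 2.1, 3.1]
print(lrc_ratio(steps, breaths))  # → [2, 2, 2]
```

In practice the step events would come from the ankle accelerometers and the breath onsets from the flow module, with a variable ratio (2:1, 3:1, 4:1) expected across the run.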

    Methods for monitoring the human circadian rhythm in free-living

    Get PDF
    Our internal clock, the circadian clock, determines when we have our best cognitive abilities, when we are physically strongest, and when we are tired. Circadian clock phase is influenced primarily through exposure to light. A direct pathway from the eyes to the suprachiasmatic nucleus, where the circadian clock resides, is used to synchronise the circadian clock to external light-dark cycles. In modern society, with the ability to work anywhere at any time and a full social agenda, many struggle to keep their internal and external clocks synchronised. Living against our circadian clock makes us less efficient and poses serious health risks, especially when sustained over a long period of time, e.g. in shift workers. Assessing circadian clock phase is a cumbersome and uncomfortable task. A common method, dim-light melatonin onset testing, requires a series of eight saliva samples taken at hourly intervals while the subject stays in dim-light conditions from 5 hours before until 2 hours past their habitual bedtime. At the same time, sensor-rich smartphones have become widely available and wearable computing is on the rise. The hypothesis of this thesis is that smartphones and wearables can be used to record sensor data to monitor human circadian rhythms in free-living. To test this hypothesis, we conducted research on specialised wearable hardware and smartphones to record relevant data, and developed algorithms to monitor circadian clock phase in free-living. We first introduce our smart eyeglasses concept, which can be personalised to the wearer's head and 3D-printed. Furthermore, hardware was integrated into the eyewear to recognise typical activities of daily living (ADLs). A light sensor integrated into the eyeglasses bridge was used to detect screen use. In addition to wearables, we also investigate whether sleep-wake patterns can be revealed from smartphone context information.
We introduce novel methods to detect sleep opportunity, which incorporate expert knowledge to filter and fuse classifier outputs. Furthermore, we estimate light exposure from smartphone sensor and weather information. We applied the Kronauer model to compare the phase shift resulting from head light measurements, wrist measurements, and smartphone estimations. We found it was possible to monitor circadian phase shift from light estimation based on smartphone sensor and weather information with a weekly error of 32±17 min, which outperformed wrist measurements in 11 out of 12 participants. Sleep could be detected from smartphone use with an onset error of 40±48 min and a wake error of 42±57 min. Screen use could be detected with smart eyeglasses with 0.9 ROC AUC for ambient light intensities below 200 lux. Nine clusters of ADLs were distinguished using Gaussian mixture models with an average accuracy of 77%. In conclusion, a combination of the proposed smartphone and smart-eyeglasses applications could support users in synchronising their circadian clock to external clocks, and thus in living a healthier lifestyle.
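The intuition behind detecting sleep opportunity from smartphone context can be sketched as a gap search over screen-use timestamps: the longest nightly stretch without interaction is a candidate sleep window. The heuristic and its threshold below are illustrative assumptions only; the thesis's actual method filters and fuses classifier outputs with expert knowledge rather than using a single gap rule.

```python
def sleep_opportunity(screen_on_times, min_gap_h=4.0):
    """Infer a sleep opportunity window as the longest gap between
    successive screen-use events (times in hours since midnight).
    Returns (onset, wake) or None if no gap is long enough."""
    times = sorted(screen_on_times)
    gaps = [(b - a, a, b) for a, b in zip(times, times[1:])]
    length, onset, wake = max(gaps)  # tuples compare by gap length first
    return (onset, wake) if length >= min_gap_h else None

# Hypothetical screen-on timestamps over one day (hours):
events = [0.2, 0.6, 7.4, 8.0, 12.5, 18.0, 22.9, 23.5]
print(sleep_opportunity(events))  # → (0.6, 7.4)
```

Against such a baseline one can see why classifier fusion helps: a single long daytime gap (a cinema visit, a hike) would fool the pure gap rule.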

    Energy Accounting and Optimization for Mobile Systems

    Get PDF
    Energy accounting determines how much a software process contributes to the total system energy consumption. It is the foundation for evaluating software and has been widely used by operating-system-based energy management. While various energy accounting policies have been tried, there is no known way to evaluate them directly, simply because it is hard to track every hardware use by software in a heterogeneous multicore system like modern smartphones and tablets. This work provides a ground truth for energy accounting based on multi-player game theory and offers the first evaluation of existing energy accounting policies, revealing their important flaws. The proposed ground truth is based on the Shapley value, a single-value solution to multi-player games whose four axiomatic properties are natural and self-evident for energy accounting. This work further provides a utility optimization formulation of energy management and shows, surprisingly, that energy accounting does not matter for existing energy management solutions that control the energy use of a process by giving it an energy budget, or budget-based energy management (BEM). This work shows that an optimal energy management (OEM) framework can always outperform BEM. While OEM does not require any form of energy accounting, it is related to the Shapley value in that both require the system energy consumption for all possible combinations of the processes in question. This work reports a prototype implementation of both Shapley-value-based energy accounting and OEM-based scheduling. Using this prototype and smartphone workloads, this work experimentally demonstrates how erroneous existing energy accounting policies can be, and shows that existing BEM solutions are unnecessarily complicated yet underperform OEM by 20%.
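The Shapley-value ground truth can be sketched directly from its definition: each process is charged its marginal energy contribution, averaged over every order in which the running processes could have joined the system. The measurements below are hypothetical numbers for two processes; as the abstract notes, a real deployment needs measured system energy for every combination of processes.

```python
from itertools import permutations

def shapley(players, energy):
    """Shapley value: average marginal energy each process adds across all
    join orders. `energy` maps a frozenset of processes to the measured
    system energy (J) when exactly that combination runs."""
    phi = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            phi[p] += energy[coalition | {p}] - energy[coalition]
            coalition |= {p}
    return {p: v / len(orders) for p, v in phi.items()}

# Hypothetical measurements for processes A and B; the 0.5 J overlap
# models shared hardware (e.g. a radio both keep awake).
e = {frozenset(): 0.0, frozenset("A"): 2.0,
     frozenset("B"): 1.0, frozenset("AB"): 2.5}
print(shapley("AB", e))  # → {'A': 1.75, 'B': 0.75}
```

The per-process charges always sum to the measured total (the efficiency axiom), which is one of the axiomatic properties that makes the Shapley value a natural ground truth here; the factorial number of join orders is also why it serves as an offline reference rather than an online policy.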