
    A proof of concept study of respiratory physiology in preterm neonates during high flow nasal cannula therapy

PhD Thesis. Introduction and rationale: High flow nasal cannula therapy is increasingly used as a form of respiratory support across the world. Its adoption has been rapid, but little is known about its key mechanism of action even after more than a decade of use. I conducted a proof of concept study of respiratory physiology during high flow therapy in preterm neonates. Methods: The study protocol involved measurement of nasopharyngeal airway pressures and gas concentrations as well as tidal breathing indices. A detailed descriptive review of the clinical efficacy of high flow nasal cannula therapy in preterm infants was performed. To identify the optimum measuring techniques for this proof of concept study, three pressure measuring techniques, a gas analyser device and a non-invasive tidal breathing indices device were studied, and the results are presented in this thesis. In addition, a detailed protocol was designed for a larger randomised crossover study of respiratory physiology during continuous positive airway pressure of 6 cm H2O and high flow nasal cannula therapy at flows ranging from 2 to 8 litres per minute. Results: The results of measurements performed in six babies of varying gestational age (less than 37 weeks of gestation) and birth weight are presented. Valid tidal volumes were measured in all babies, and nasopharyngeal gas concentrations and pressure measurements were obtained in five and two babies respectively. There were no adverse events. Conclusions: It is feasible to safely measure nasopharyngeal airway pressures and gas concentrations, as well as non-invasive tidal breathing indices, in babies on high flow nasal cannula therapy. This study was successfully followed up by a larger randomised crossover study of 45 infants using the same protocol.
    Funding: Special Trustees at Newcastle Hospital

    Relative Importance of Different Body Channels to the Believability of a Stylized 3D Character Animation in a Full Body Shot

Character animation involves movement in different channels of the character's body. There is a norm in the animation industry that each of these body channels contributes differently to the believability of an acting performance: body movement is held to be the highest contributor and lip sync the lowest. This thesis investigated that norm using statistical analysis. While reducing body motion caused the biggest drop in believability, similar reductions in facial animation and lip sync did not have a significant effect on believability. The only exception was the animation depicting sadness, where no significant effect was found for any of the body channels. The emotion anger also showed an interaction between participant background and body channel: participants with a Computer Graphics background gave lower believability ratings than those without one. In general, for a stylized character animation in a full body shot, the biggest contributor to believability appears to be body motion, while reductions in facial and lip sync animation do not impact believability as much as a reduction in body motion.

    On Robust Machine Learning in the Presence of Adversaries

In today's highly connected world, the number of smart devices worldwide has increased exponentially. These devices generate huge amounts of real-time data, perform complicated computational tasks, and provide actionable information. Over the past decade, numerous machine learning approaches have been widely adopted to infer hidden information from this massive and complex data. Accuracy is not enough when developing machine learning systems for some crucial application domains; safety and reliability guarantees on the underlying learning models are critical requirements as well. This in turn necessitates that the learned models be robust to corrupted data. Data can be corrupted by adversarial attacks, in which the data take arbitrary values that adversely affect the performance of the algorithm. An adversary can replace samples with erroneous or malicious samples such as false labels or arbitrary inputs. In this dissertation, we refer to this type of attack as an attack on data. Moreover, with the rapid increase in the volume of the data, storing and processing all of it at a central location becomes computationally expensive, so a distributed system is warranted to spread tasks across multiple machines (known as distributed learning). Improving the efficiency of the optimization algorithms with respect to computational and communication costs, while maintaining a high level of accuracy, is critical in distributed learning. However, an attack can occur by replacing the transmitted data of the machines in the system with arbitrary values that may negatively impact the performance of the learning task. We refer to this as an attack on devices. Both attack scenarios can significantly degrade the accuracy of the results, negatively impacting the expected model outcome.
Hence, the development of a new generation of systems that are robust to such adversarial attacks and provide provable performance guarantees is warranted. The goal of this dissertation is to develop learning algorithms that are robust to such attacks under two frameworks: 1) supervised learning, where the true labels of the samples are known; and 2) unsupervised learning, where the labels are not known. Although neural networks have gained widespread success, theoretical understanding of their performance is lacking. Therefore, in the first part of the dissertation (Chapter 2), we try to understand the inner workings of a neural network by learning its parameters. In fact, we generalize the estimation procedure by considering robustness alongside parameter estimation in the presence of adversarial attacks (attacks on data). We devise a learning algorithm to estimate the parameters (weight matrix and bias vector) of a single-layer neural network with rectified linear unit activation in the unsupervised learning framework, where each output sample can potentially be an arbitrary outlier with a fixed probability. Our estimation algorithm uses gradient descent along with a median-based filter to mitigate the effect of the outliers. We further determine the number of samples required to estimate the parameters of the network in the presence of outliers. Combining the use of distributed systems to solve large-scale problems with the recent success of deep learning, there has been a surge of development in the field of distributed learning, further catalyzed by the development of federated learning.
Despite extensive research in this area, distributed learning faces the challenge of training a high-dimensional model in a distributed manner while maintaining robustness against adversarial attacks. Hence, in the second part of the dissertation (Chapters 3 and 4), we study the problem of distributed learning in the presence of adversarial nodes (attacks on nodes). Specifically, we consider the worker-server architecture to minimize a global loss function under both learning frameworks in the presence of adversarial nodes (Byzantines). Each honest node performs some computation based only on its own local data, then communicates with the central server, which performs aggregation. However, an adversarial node may send arbitrary information to the central server. In Chapter 3, we consider robust distributed learning under the supervised learning framework. We propose a novel algorithm that combines the idea of variance reduction with a filtering technique based on the vector median to mitigate the effect of the Byzantines, and we prove convergence of the approach to a first-order stationary point. Further, in Chapter 4, we consider robust distributed learning under the unsupervised learning framework (robust clustering). We propose a novel algorithm that combines redundant data assignment with the paradigm of distributed clustering, and we show that our proposed approaches obtain constant-factor approximate solutions in the presence of adversarial nodes.
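The intuition behind median-based filtering of worker updates can be illustrated with a minimal NumPy sketch. This is a generic coordinate-wise median aggregator written for this summary, not the dissertation's exact vector-median-plus-variance-reduction algorithm: unlike the mean, the median is insensitive to a minority of arbitrary (Byzantine) vectors.

```python
import numpy as np

def robust_aggregate(gradients):
    """Coordinate-wise median of worker gradient vectors.

    A simple stand-in for the vector-median filtering described above:
    a minority of arbitrary (Byzantine) vectors cannot move the median
    far from the honest majority's values.
    """
    G = np.asarray(gradients)     # shape: (n_workers, dim)
    return np.median(G, axis=0)

# Four honest workers report gradients near the true value [1, 1];
# one Byzantine worker sends an arbitrary adversarial vector.
honest = [np.array([0.99, 1.01]), np.array([1.0, 1.0]),
          np.array([1.01, 0.99]), np.array([1.0, 1.0])]
byzantine = [np.array([1e6, -1e6])]
agg = robust_aggregate(honest + byzantine)
# The mean would be pulled to ~2e5 in magnitude; the median stays near [1, 1].
```

Coordinate-wise medians tolerate up to (but not including) half the workers being adversarial, which is why they are a common building block in Byzantine-robust aggregation schemes.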

    Object tracking using log-polar transformation

In this thesis, we use the log-polar transform for object tracking. Object tracking in video sequences is a fundamental problem in computer vision. Even though object tracking has been studied extensively, several challenges still need to be addressed, such as appearance variations, large scale and rotation variations, and occlusion. We implemented a novel tracking algorithm which works robustly in the presence of large scale changes, rotation, occlusion, illumination changes, perspective transformations and some appearance changes. The log-polar transformation is used to achieve robustness to scale and rotation. Our object tracking approach is based on template matching: a template image of the object is extracted in the first frame, and the region which best suits this template is found in the subsequent frames. We implemented both a fixed template algorithm and a template update algorithm. The fixed template algorithm uses the same template for the entire image sequence, whereas the template update algorithm updates the template according to changes in the object's appearance. The fixed template algorithm is faster; the template update algorithm is more robust to appearance changes in the object being tracked. The proposed tracker is highly robust to scale, rotation, illumination changes and occlusion, with good implementation speed.
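The property that motivates the log-polar representation can be shown with a minimal NumPy sketch (written for this summary, not taken from the thesis): in log-polar coordinates a rotation of the image becomes a circular shift along the angle axis, and a uniform scaling becomes a shift along the log-radius axis, so ordinary template matching in that domain becomes scale- and rotation-tolerant.

```python
import numpy as np

def log_polar(img, n_rho=64, n_theta=64):
    """Resample a square grayscale image onto a log-polar grid.

    Rows index the angle theta, columns index log-spaced radii, so a
    rotation of the input shifts rows and a scaling shifts columns.
    Nearest-neighbour sampling keeps the sketch dependency-free.
    """
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    max_log_rho = np.log(min(cx, cy))
    rho = np.exp(np.linspace(0.0, max_log_rho, n_rho))        # log-spaced radii
    theta = np.linspace(0.0, 2 * np.pi, n_theta, endpoint=False)
    ys = cy + rho[None, :] * np.sin(theta[:, None])           # (n_theta, n_rho)
    xs = cx + rho[None, :] * np.cos(theta[:, None])
    return img[ys.round().astype(int), xs.round().astype(int)]

# Demo: a bright patch off-centre in a 65x65 frame.
img = np.zeros((65, 65))
img[20:30, 40:50] = 1.0
lp = log_polar(img)   # shape (n_theta, n_rho) = (64, 64)
```

A real tracker would follow this with standard template matching (e.g. normalized cross-correlation) on the log-polar images, reading off scale and rotation from the location of the correlation peak.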

    Seismic imaging of a mid-lithospheric discontinuity beneath Ontong Java Plateau

Ontong Java Plateau (OJP) is a huge, completely submerged volcanic edifice that is hypothesized to have formed during large plume melting events ~90 and 120 My ago. It is currently resisting subduction into the North Solomon trench. The size and buoyancy of the plateau, along with its history of plume melting and current interaction with a subduction zone, are all similar to the characteristics and hypothesized mechanisms of continent formation. However, the plateau is remote and enigmatic, and its proto-continent potential is debated. We use SS precursors to image seismic discontinuity structure beneath Ontong Java Plateau. We image a velocity increase with depth at 28±4 km, consistent with the Moho. In addition, we image velocity decreases at 80±5 km and 282±7 km depth. Discontinuities at 60–100 km depth are frequently observed beneath both the oceans and the continents. However, the discontinuity at 282 km is anomalous in comparison to surrounding oceanic regions; in the context of previous results it may suggest a thick viscous root beneath OJP. If such a root exists, then the discontinuity at 80 km bears some similarity to the mid-lithospheric discontinuities (MLDs) observed beneath continents. One possibility is that plume melting events, similar to that which formed OJP, may cause discontinuities in the MLD depth range. Plume-plate interaction could thus be a mechanism for MLD formation in some continents in the Archean, prior to the onset of subduction.

    Traffic Characteristics of Non-Motorized Vehicles in Mixed Traffic

In the present-day scenario, countries like India exhibit mixed traffic conditions, i.e. traffic flow consisting of all sorts of vehicles such as cycles, rickshaws, autos and so on. During peak hours the flow of non-motorized vehicles (NMVs) is high, and their presence in the traffic stream affects stream characteristics such as speed, density and flow. To design a traffic facility, traffic behaviour has to be understood, and for mixed traffic conditions the behaviour of the stream is difficult to characterize. This thesis endeavours to study the traffic characteristics of NMVs in a mixed stream. The work consists of two parts: an experimental part and a statistical testing part. The experimental part covers the fundamental diagrams and determines the capacity and lateral occupancy of road sections from data obtained in various parts of Rourkela City. It was observed that changes in the NMV percentage adversely affect parameters such as speed, density and flow. In the study of lateral occupancy it was observed that, in one-way divided traffic flow, most NMVs occupy the left two strips while motorized vehicles (MVs) occupy the rightmost strips, since Indian traffic drives on the left and it is easier for MVs to overtake slow-moving vehicles there. In undivided two-way traffic, most vehicles are found in the middle strips and fewest on the right and left strips, because vehicles travel in opposite directions.
In the statistical analysis part, traffic parameters in Rourkela City are compared between 2011 and 2014 using hypothesis testing, which compares the means of the two observed samples. The procedure follows four steps: stating the null hypothesis, computing the test statistic, obtaining the P-value, and making a decision. The decision is based on the observed Z statistic and the obtained P-value. The results indicate that the NMV percentage decreased from 2011 to 2014, while speed and flow increased.
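The four-step procedure described above can be sketched as a two-sample Z-test in Python. The numbers in the example are hypothetical placeholders, not data from the thesis.

```python
import math

def two_sample_z(mean1, mean2, sd1, sd2, n1, n2):
    """Two-sample Z-test for a difference in means.

    Steps: (1) null hypothesis mean1 == mean2; (2) compute the Z
    statistic; (3) compute the two-sided p-value from the normal CDF
    (via math.erf); (4) the caller decides by comparing p to alpha.
    """
    se = math.sqrt(sd1**2 / n1 + sd2**2 / n2)   # standard error of the difference
    z = (mean1 - mean2) / se                    # step 2: test statistic
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))  # step 3: p-value
    return z, p

# Hypothetical figures: mean stream speeds (km/h) in 2011 vs 2014,
# 100 observations in each year.
z, p = two_sample_z(mean1=28.4, mean2=25.1, sd1=6.0, sd2=5.5, n1=100, n2=100)
# Step 4: reject the null hypothesis of equal means if p < 0.05.
```

With these placeholder values the observed Z is about 4.05, well beyond the 1.96 critical value at the 5% level, so the null hypothesis of equal mean speeds would be rejected.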

    Examining Autoexposure for Challenging Scenes

Autoexposure (AE) is a critical step applied by camera systems to ensure properly exposed images. While current AE algorithms are effective in well-lit environments with constant illumination, they still struggle in environments with bright light sources or scenes with abrupt changes in lighting. A significant hurdle in developing new AE algorithms for challenging environments, especially those with time-varying lighting, is the lack of suitable image datasets. To address this issue, we have captured a new 4D exposure dataset that provides a large solution space (i.e., shutter speeds ranging from 1/500 to 15 seconds) over a temporal sequence with moving objects, bright lights, and varying lighting. In addition, we have designed a software platform that allows AE algorithms to be used in a plug-and-play manner with the dataset. Our dataset and associated platform enable repeatable evaluation of different AE algorithms and provide a much-needed starting point for developing better AE methods. We examine several existing AE strategies using our dataset and show that most users prefer a simple saliency method for challenging lighting conditions.
    Comment: ICCV 202
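To give a flavour of what a "simple saliency method" for AE might look like, here is a generic sketch, not the paper's actual algorithm: the next exposure gain is chosen so that the saliency-weighted mean luminance of the frame hits a mid-grey target. The function name, target value and gain limits are all assumptions made for illustration.

```python
import numpy as np

def saliency_ae_update(img, saliency, target=0.18, gain_limits=(0.25, 4.0)):
    """One step of a hypothetical saliency-weighted autoexposure rule.

    img      : linear luminance in [0, 1]
    saliency : non-negative per-pixel importance weights
    Returns the multiplicative exposure gain that maps the
    saliency-weighted mean luminance to the mid-grey target,
    clamped to plausible hardware limits.
    """
    w = saliency / (saliency.sum() + 1e-12)   # normalize weights
    mean_lum = float((img * w).sum())         # saliency-weighted mean
    gain = target / max(mean_lum, 1e-6)
    return float(np.clip(gain, *gain_limits))

# Demo: a uniformly underexposed frame with uniform saliency, which
# reduces to a plain global-mean metering rule.
img = np.full((4, 4), 0.09)
sal = np.ones((4, 4))
gain = saliency_ae_update(img, sal)
# Mid-grey target 0.18 over mean luminance 0.09 gives a gain of 2.0.
```

In practice the gain would be realized by adjusting shutter speed or ISO between frames, with temporal smoothing to avoid oscillation under the abrupt lighting changes the dataset targets.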

    The Neo Energy Industry

The main idea of the project is to develop a new web application that regulates the gas and electricity market: promoting competition wherever appropriate, regulating the monopoly companies that run the gas and electricity networks, helping the gas and electricity industries to achieve environmental improvements as efficiently as possible, and taking account of the needs of vulnerable customers, particularly older people, those with disabilities and those on low incomes. The project has three modules: GUI and Database Design, the User Module and the Administrator Module.