3,504 research outputs found

    Optimal control of the heave motion of marine cable subsea-unit systems

    Get PDF
    One of the key problems associated with subsea operations involving tethered subsea units is that the motion of the support vessel on the ocean surface can be transmitted to the subsea unit through the cable and increase the cable tension. In this paper, a theoretical approach to heave compensation is developed. After modelling each element of the system, including the cable/subsea unit and the onboard winch, optimal control theory is applied to design a control law. Numerical simulations are carried out, and the proposed active control scheme appears to be a promising solution to the problem of heave compensation.
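    The abstract does not state which control formulation the authors use, so the following is only a generic sketch: a linear-quadratic regulator (LQR) computed for an assumed linearised mass-spring-damper model of the cable/subsea unit, with the winch command as the control input. The model and all parameter values are illustrative assumptions, not the paper's.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical linearised heave model: state x = [relative displacement, velocity],
# input u = winch force command. Parameter values are illustrative only.
m, c, k = 5.0e3, 2.0e3, 4.0e4   # assumed subsea-unit mass, cable damping, cable stiffness
A = np.array([[0.0, 1.0],
              [-k / m, -c / m]])
B = np.array([[0.0],
              [1.0 / m]])

# Weighting matrices trade off heave attenuation against winch effort.
Q = np.diag([1.0e5, 1.0e2])
R = np.array([[1.0e-2]])

# Solve the continuous-time algebraic Riccati equation and form the LQR gain.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)   # optimal state feedback u = -K x
print("LQR gain:", K)
```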

    Pathological Evidence Exploration in Deep Retinal Image Diagnosis

    Full text link
    Though deep learning has shown strong performance in classifying the label and severity stage of certain diseases, most models provide little evidence of how they reach a prediction. Here, we propose to exploit the interpretability of deep learning applications in medical diagnosis. Inspired by Koch's Postulates, a well-known strategy in medical research for identifying the properties of a pathogen, we define a pathological descriptor that can be extracted from the activated neurons of a diabetic retinopathy detector. To visualize the symptoms and features encoded in this descriptor, we propose a GAN-based method to synthesize a pathological retinal image given the descriptor and a binary vessel segmentation. Moreover, with this descriptor we can arbitrarily manipulate the position and quantity of lesions. As verified by a panel of 5 licensed ophthalmologists, our synthesized images carry the symptoms that are directly related to diabetic retinopathy diagnosis. The panel survey also shows that our generated images are both qualitatively and quantitatively superior to those of existing methods. Comment: to appear in AAAI (2019). The first two authors contributed equally to the paper. Corresponding Author: Feng L
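    The paper's exact descriptor construction is not given in the abstract; the sketch below only illustrates the general idea of reading activations from a trained detector and keeping the most strongly activated units as a crude descriptor. The network, layer choice, and top-k selection are assumptions made for illustration, not the authors' method.

```python
import torch
import torchvision.models as models

# Stand-in for a trained diabetic-retinopathy detector (an untrained ResNet here).
model = models.resnet18()
model.eval()

# Capture the activations of one convolutional stage with a forward hook.
activations = {}
def hook(_module, _inputs, output):
    activations["feat"] = output.detach()

model.layer4.register_forward_hook(hook)

image = torch.randn(1, 3, 224, 224)   # placeholder for a preprocessed fundus image
with torch.no_grad():
    model(image)

feat = activations["feat"]            # shape (1, C, H, W)
# Keep the top-k most activated channels (global average pooled) as a toy descriptor.
channel_scores = feat.mean(dim=(2, 3)).squeeze(0)
topk = torch.topk(channel_scores, k=16)
descriptor = {"channels": topk.indices.tolist(), "scores": topk.values.tolist()}
print(descriptor)
```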

    TimeClassifier - A Visual Analytic System for the Classification of Multi-Dimensional Time-Series Data

    Get PDF
    Biologists studying animals in their natural environment are increasingly using sensors such as accelerometers in animal-attached ‘smart’ tags because it is widely acknowledged that this approach can enhance the understanding of ecological and behavioural processes. The potential of such tags is tempered by the difficulty of extracting animal behaviour from the sensor data, which currently depends primarily on the manual inspection of multiple time-series graphs. This is time-consuming and error-prone for the domain expert and is now the limiting factor for realising the value of tags in this area. We introduce TimeClassifier, a visual analytic system for the classification of time-series data for movement ecologists. We deploy our system with biologists and report two real-world case studies of its use.

    Automatic Ultrasound Scanning

    Get PDF

    Análise colaborativa de grandes conjuntos de séries temporais

    Get PDF
    The recent expansion of metrification on a daily basis has led to the production of massive quantities of data, and in many cases these collected metrics are only useful for knowledge building when seen as a full sequence of data ordered by time, which constitutes a time series. To find and interpret meaningful behavioral patterns in time series, a multitude of analysis software tools have been developed. Many of the existing solutions use annotations to enable the curation of a knowledge base that is shared among a group of researchers over a network. However, these tools lack appropriate mechanisms to handle a high number of concurrent requests and to properly store massive data sets and ontologies, as well as suitable representations for annotated data that are visually interpretable by humans and explorable by automated systems. The goal of the work presented in this dissertation is to iterate on existing time series analysis software and build a platform for the collaborative analysis of massive time series data sets, leveraging state-of-the-art technologies for querying, storing and displaying time series and annotations. A theoretical and domain-agnostic model was proposed to enable the implementation of a distributed, extensible, secure and high-performance architecture that handles multiple annotation proposals simultaneously and avoids data loss from overlapping contributions or unsanctioned changes. Analysts can share annotation projects with peers, restricting a set of collaborators to a smaller scope of analysis and to a limited catalog of annotation semantics. Annotations can express meaning not only over a segment of time, but also over a subset of the series that coexist in the same segment. A novel visual encoding for annotations is proposed, in which annotations are rendered as arcs traced only over the affected series’ curves in order to reduce visual clutter. Moreover, a full-stack prototype with a reactive web interface was implemented and described, directly following the proposed architectural and visualization model as applied to the HVAC domain. The performance of the prototype under different architectural approaches was benchmarked, and the interface was tested for usability. Overall, the work described in this dissertation contributes a more versatile, intuitive and scalable time series annotation platform that streamlines the knowledge-discovery workflow.
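    As a rough illustration of the annotation model described above (an annotation spanning a time segment and only a subset of the series coexisting in that segment, within a shared project and a limited semantic catalog), here is a minimal Python sketch. All field names and example values are invented for illustration and are not taken from the dissertation.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Annotation:
    project_id: str          # annotation project shared with a set of collaborators
    series_ids: List[str]    # subset of the coexisting series the annotation applies to
    start_ms: int            # segment start (epoch milliseconds)
    end_ms: int              # segment end (epoch milliseconds)
    label: str               # entry from the project's limited semantic catalog
    author: str
    version: int = 1         # bumped on each revision to detect overlapping contributions

# Hypothetical example from an HVAC-like setting.
example = Annotation(
    project_id="hvac-plant-7",
    series_ids=["supply_air_temp", "return_air_temp"],
    start_ms=1_672_531_200_000,
    end_ms=1_672_534_800_000,
    label="defrost-cycle",
    author="analyst-01",
)
print(example)
```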

    A Predictive Model for Scaffolding Manhours in Heavy Industrial Construction Projects: An application of machine learning

    Get PDF
    In cold countries like Canada, modular construction is widely adopted in heavy industrial construction projects due to weather uncertainties. To facilitate the construction process, temporary structures, especially scaffolding, are essential: scaffolding gives workers easy access to construction activities at different heights and also ensures labourer safety. As an indirect project cost, scaffolding is estimated at 15-40% of project costs. Furthermore, as project size increases, scaffolding consumes more resources than estimated, which may cause budget overruns and schedule delays. However, due to the lack of systematic and scientific models for estimating scaffolding productivity, heavy industrial companies have difficulty planning and allocating resources for scaffold activities before construction. To overcome these challenges, this paper proposes a predictive model to estimate scaffolding productivity based on the historical scaffolding data of a heavy industrial project. The proposed model is developed in the following steps: (i) identifying the key parameters (e.g. specific trades, work type, different scaffold methods, task times spent using scaffolds, and weights of the scaffolds) that influence scaffolding manhours and project productivity; and (ii) developing predictive models for scaffold manhours using machine learning algorithms, including multiple linear regression, decision tree regression, random forest regression and artificial neural networks (ANN). The accuracy of the models has been measured with evaluation metrics, namely mean absolute error (MAE), root mean squared error (RMSE) and the R-squared value. The findings reveal up to 90% accuracy for the ANN models.
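    The paper's data and features are not public, so the following is only an illustrative sketch of step (ii): fitting and comparing the four listed model families with scikit-learn and reporting MAE, RMSE and R-squared on a synthetic stand-in dataset. The feature names and the data-generating process are assumptions, not the study's.

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

# Synthetic stand-in for the historical scaffolding records
# (encoded trade, work type, scaffold method, task hours, scaffold weight -> manhours).
rng = np.random.default_rng(0)
n = 500
X = pd.DataFrame({
    "trade": rng.integers(0, 5, n),
    "work_type": rng.integers(0, 3, n),
    "scaffold_method": rng.integers(0, 4, n),
    "task_hours": rng.uniform(1, 40, n),
    "scaffold_weight": rng.uniform(0.1, 5.0, n),
})
y = 3.0 * X["task_hours"] + 8.0 * X["scaffold_weight"] + rng.normal(0, 5, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    "multiple linear regression": LinearRegression(),
    "decision tree regression": DecisionTreeRegressor(random_state=0),
    "random forest regression": RandomForestRegressor(random_state=0),
    "ANN": MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0),
}

# Fit each model and report the three evaluation metrics named in the abstract.
for name, model in models.items():
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    mae = mean_absolute_error(y_test, pred)
    rmse = mean_squared_error(y_test, pred) ** 0.5
    r2 = r2_score(y_test, pred)
    print(f"{name}: MAE={mae:.2f}  RMSE={rmse:.2f}  R^2={r2:.3f}")
```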

    Plasma Thallium Concentration, Kidney Function, Nephrotoxicity and Graft Failure in Kidney Transplant Recipients

    Get PDF
    The nephrotoxic effects of heavy metals have gained increasing scientific attention in recent years. Recent studies suggest that heavy metals, including cadmium, lead, and arsenic, are detrimental to kidney transplant recipients (KTR) even at circulating concentrations within the normal range, posing an increased risk of graft failure. Thallium is another highly toxic heavy metal, yet the potential consequences of circulating thallium concentrations in KTR are unclear. We measured plasma thallium concentrations in 672 stable KTR enrolled in the prospective TransplantLines Food and Nutrition Biobank and Cohort Study using inductively coupled plasma mass spectrometry. In cross-sectional analyses, plasma thallium concentrations were positively associated with kidney function measures and hemoglobin. We observed no associations of thallium concentration with proteinuria or markers of tubular damage. In prospective analyses, we observed no association of plasma thallium with graft failure and mortality during a median follow-up of 5.4 [interquartile range: 4.8 to 6.1] years. In conclusion, in contrast to other heavy metals such as lead, cadmium, and arsenic, there is no evidence of tubular damage or thallium nephrotoxicity for the range of circulating thallium concentrations observed in this study. This is further evidenced by the absence of associations of plasma thallium with graft failure and mortality in KTR.