
    Deep Learning for Pipeline Damage Detection: An Overview of the Concepts and a Survey of the State-of-the-Art

    Pipelines are widely used to transport oil and gas products over long distances because they are safe and economical. However, pipelines are subject to many kinds of damage, such as erosion, cracks, and dents. If these faults are not repaired promptly, they can lead to pipeline failures such as leaks or ruptures, which pose severe environmental risks. Deep learning methods help operators recognize the earliest stages of threats to a pipeline, giving them the time and information needed to handle the problem efficiently. This paper explains the fundamental concepts of deep learning, including convolutional neural networks, and surveys how deep learning approaches can prevent pipeline damage through early diagnosis of threats
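The convolutional building blocks the survey covers can be illustrated with a minimal sketch (not taken from the surveyed paper; the toy "inspection image", the edge-detecting kernel, and all values below are illustrative assumptions): a convolution filter responds to crack-like intensity steps, ReLU keeps the positive responses, and max pooling summarises them into a smaller feature map.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation: slide the kernel over the image."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.empty((ih - kh + 1, iw - kw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = np.sum(image[r:r + kh, c:c + kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0.0)

def max_pool(x, size=2):
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

# Toy 8x8 "scan" with a sharp vertical step, standing in for a crack edge.
scan = np.zeros((8, 8))
scan[:, 4:] = 1.0
edge_kernel = np.array([[-1.0, 1.0], [-1.0, 1.0]])  # responds to left-to-right steps
feature_map = max_pool(relu(conv2d(scan, edge_kernel)))
```

In a real network the kernels are learned from labelled inspection data rather than hand-chosen, and many such layers are stacked before a classifier.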

    An ambient agent model for reading companion robot

    Reading is essentially a problem-solving task. Like problem solving, it requires effort, planning, self-monitoring, strategy selection, and reflection. As readers tackle more difficult material, reading demands more effort and places greater load on cognition. To address this issue, companion robots can be deployed to assist readers with difficult reading tasks by making the reading process more enjoyable and meaningful. Such robots require an ambient agent model that monitors a reader's cognitive demand, since reading involves complex tasks and dynamic interaction between the human and the environment. Current cognitive load models are not expressed in a form that supports reasoning, and they have not been integrated into companion robots. This study was therefore conducted to develop an ambient agent model of cognitive load and reading performance for integration into a reading companion robot. The research activities were based on the Design Science Research Process, Agent-Based Modelling, and the Ambient Agent Framework. The proposed model was evaluated through a series of verification and validation approaches. Verification included equilibria evaluation and automated trace analysis to ensure the model exhibits realistic behaviour consistent with related empirical data and literature. Validation, which involved a human experiment, showed that a reading companion robot was able to reduce cognitive load during demanding reading tasks. Moreover, the experimental results indicated that integrating the ambient agent model into the robot enabled it to be perceived as a social, intelligent, useful, and motivational digital sidekick.
The study's contribution, an ambient agent model of cognitive load and reading performance, enables new endeavours aimed at designing ambient applications around human physical and cognitive processes. It also helps in designing more realistic reading companion robots in the future

    Agent-based models for residential energy consumption and intervention simulation

    The increase in energy consumption in buildings has raised global concern due to its negative implications for the environment. A major part of this increase is attributed to behavioural energy waste, which has motivated the development of energy simulation models. These models are used to analyse energy consumption in buildings, study the effect of human behaviour, and test the effectiveness of energy interventions. However, existing models are limited in simulating realistic and detailed human dynamics, including occupants' interaction with appliances, with each other, or with energy interventions. Such detailed interaction is important when simulating and studying behavioural energy waste. To overcome these limitations, this thesis proposes a complete layered Agent-Based Model (ABM) composed of three layers, or sub-models. The daily behaviour model simulates realistic, detailed occupant behaviour by integrating a Probabilistic Model (PM) into the ABM. The peer pressure model simulates the effect of family-level peer pressure on household energy consumption; it is grounded in well-established theories of human behaviour by Leon Festinger: informal social communication theory, social comparison theory, and cognitive dissonance theory. The messaging intervention model implements and tests a novel messaging intervention proposed in the thesis alongside the complete ABM. The intervention is a middle ground between the abstract data presented by existing energy feedback systems and the fully automated approach of existing energy management systems: it detects energy waste incidents and reports them to the occupants, who retain control of their devices. The proposed intervention is tested in the messaging intervention model, which builds on the two other proposed models.
The experiments showed that the model overcomes the limitations of existing models by simulating realistic and detailed human behaviour dynamics. They also showed that the model can help policy makers decide how to target family members to achieve optimal energy savings, thus addressing global concern about rising energy consumption
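The family-level peer pressure idea can be sketched as a toy agent-based model (a minimal illustration of a social-comparison update rule, not the thesis's actual model; the `Occupant` class, the `pressure` coefficient, and the waste figures are all assumptions): each occupant's avoidable energy use drifts toward the family norm each simulated day.

```python
class Occupant:
    """Hypothetical household member; 'waste' is avoidable daily kWh."""
    def __init__(self, waste):
        self.waste = waste

    def compare(self, family_mean, pressure=0.2):
        # Social-comparison step: drift a fraction of the way toward the norm.
        self.waste += pressure * (family_mean - self.waste)

def simulate(family, days):
    """Run the daily comparison for every occupant; return final waste levels."""
    for _ in range(days):
        mean = sum(o.waste for o in family) / len(family)
        for o in family:
            o.compare(mean)
    return [o.waste for o in family]

family = [Occupant(5.0), Occupant(1.0), Occupant(3.0)]
final = simulate(family, 30)   # all members converge toward the mean of 3.0 kWh
```

Because each agent moves proportionally toward the mean, the family total is preserved while individual differences shrink geometrically; richer models would add appliance schedules and interventions on top of such a rule.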

    PGAGrid: A Parallel Genetic Algorithm of Fine-Grained implemented on GPU to find solutions near the optimum to the Quadratic Assignment Problem (QAP)

    This work implements a fine-grained parallel genetic algorithm, improved with a greedy 2-opt heuristic, to find near-optimal solutions to the Quadratic Assignment Problem (QAP). The proposed algorithm was fully implemented on Graphics Processing Units (GPUs). A two-dimensional GPU grid of size 8×8 defines the population of the genetic algorithm (a set of permutations of the QAP), and each GPU block consists of n GPU threads, where n is the size of the QAP instance. Each GPU block represents the chromosome of a single individual, and each GPU thread represents one gene of that chromosome. The proposed algorithm was tested on a subset of the standard QAPLIB data set. Results show that this implementation is able to find good solutions for large QAP instances in few parallel iterations of the evolutionary process
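The per-chromosome logic that the abstract maps onto GPU blocks and threads can be sketched sequentially (a simplified Python illustration, not the paper's CUDA implementation; the toy 4×4 distance/flow matrices, population size, and swap-mutation scheme are assumptions rather than QAPLIB data):

```python
import random

def qap_cost(perm, dist, flow):
    """QAP objective: sum over facility pairs of flow x distance under 'perm'."""
    n = len(perm)
    return sum(flow[i][j] * dist[perm[i]][perm[j]]
               for i in range(n) for j in range(n))

def greedy_2opt(perm, dist, flow):
    """Accept the first position swap that lowers the cost (one greedy pass)."""
    best = qap_cost(perm, dist, flow)
    for i in range(len(perm)):
        for j in range(i + 1, len(perm)):
            perm[i], perm[j] = perm[j], perm[i]
            c = qap_cost(perm, dist, flow)
            if c < best:
                return perm, c            # first improving swap wins
            perm[i], perm[j] = perm[j], perm[i]   # undo and keep looking
    return perm, best

def evolve(dist, flow, pop_size=16, generations=50, seed=1):
    """Tiny GA: 2-opt refinement, elitist selection, swap mutation."""
    rng = random.Random(seed)
    n = len(dist)
    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop = [greedy_2opt(p, dist, flow)[0] for p in pop]
        pop.sort(key=lambda p: qap_cost(p, dist, flow))
        children = []
        for p in pop[:pop_size // 2]:     # elite half breeds mutated copies
            child = p[:]
            i, j = rng.sample(range(n), 2)
            child[i], child[j] = child[j], child[i]
            children.append(child)
        pop = pop[:pop_size // 2] + children
    return min(pop, key=lambda p: qap_cost(p, dist, flow))

dist = [[0, 1, 2, 3], [1, 0, 1, 2], [2, 1, 0, 1], [3, 2, 1, 0]]
flow = [[0, 3, 0, 2], [3, 0, 0, 1], [0, 0, 0, 4], [2, 1, 4, 0]]
best = evolve(dist, flow)
```

In the GPU version described above, the inner per-gene work is distributed across the n threads of a block, so one whole population step runs in parallel across the 8×8 grid.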

    Enhancing Geospatial Data: Collecting and Visualising User-Generated Content Through Custom Toolkits and Cloud Computing Workflows

    This thesis tests the hypothesis that, through a set of custom toolkits built on cloud computing, online user-generated content can be extracted from emerging large-scale data sets, enabling social scientists to collect, analyse, and visualise geospatial data. Using a custom-built suite of software, the 'BigDataToolkit', we examine the need for cloud computing and custom workflows to open up access to existing online data and to set up processes for collecting new data. We examine the use of the toolkit to collect large amounts of data from various online sources, such as social media Application Programming Interfaces (APIs) and data stores, and to visualise the collected data in real time. Through these workflows, the thesis presents an implementation of a smart-collector framework that automates the collection process and significantly increases the amount of data obtainable from standard API endpoints. Through these interconnected methods and distributed collection workflows, the final system collects and visualises more data in real time than the single-system collection processes used in traditional social media analysis. Aimed at researchers without a deep background in computer science, this thesis provides a methodology that opens new data sources to academics and wider participants alike, allowing the collection of user-generated geographic and textual content en masse. A series of case studies is provided, covering applications from a single researcher collecting data through to collection driven by televised media. These are examined in terms of the tools created and the opportunities opened, allowing real-time analysis of data collected with the developed toolkit
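The core loop of a collector that drains a standard API endpoint can be sketched as follows (a generic illustration of cursor-based pagination, not the BigDataToolkit's actual code; the `fetch_page` contract and the stub pages are assumptions standing in for a real HTTP client):

```python
def collect_paginated(fetch_page, max_pages=100):
    """Drain a cursor-paginated endpoint.

    fetch_page(cursor) -> (items, next_cursor); next_cursor is None on the
    last page. max_pages bounds the loop against misbehaving endpoints.
    """
    items, cursor = [], None
    for _ in range(max_pages):
        batch, cursor = fetch_page(cursor)
        items.extend(batch)
        if cursor is None:        # API signals there are no further pages
            break
    return items

# Stand-in for a real API client: three pages of two records each.
_pages = {None: ([1, 2], "a"), "a": ([3, 4], "b"), "b": ([5, 6], None)}
def fake_fetch(cursor):
    return _pages[cursor]

records = collect_paginated(fake_fetch)   # -> [1, 2, 3, 4, 5, 6]
```

Distributing many such loops across cloud workers, each with its own credentials and rate-limit budget, is what lets a smart-collector framework exceed what a single process can pull from the same endpoints.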

    Smart Sensor Technologies for IoT

    The recent development in wireless networks and devices has led to novel services that will utilize wireless communication on a new level. Considerable effort and resources have been dedicated to establishing new communication networks that will support machine-to-machine communication and the Internet of Things (IoT). In these systems, various smart and sensory devices are deployed and connected, enabling large amounts of data to be streamed. Smart services represent new trends in mobile services, i.e., a completely new spectrum of context-aware, personalized, and intelligent services and applications. A variety of existing services utilize information about the position of the user or mobile device. The position of mobile devices is often obtained using the Global Navigation Satellite System (GNSS) chips that are integrated into all modern mobile devices (smartphones). However, GNSS is not always a reliable source of position estimates due to multipath propagation and signal blockage. Moreover, integrating GNSS chips into all devices might have a negative impact on the battery life of future IoT applications. Therefore, alternative solutions to position estimation should be investigated and implemented in IoT applications. This Special Issue, "Smart Sensor Technologies for IoT", aims to report on some of the recent research efforts on this increasingly important topic. The twelve accepted papers in this issue cover various aspects of Smart Sensor Technologies for IoT
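One common GNSS alternative hinted at above is range-based positioning from fixed anchors (e.g., radio beacons). A minimal sketch of a linearized least-squares 2-D fix follows (a textbook technique offered for illustration, not a method from any specific paper in the issue; the anchor coordinates and ranges are made up):

```python
import numpy as np

def trilaterate(anchors, dists):
    """2-D position from >= 3 range measurements to known anchors.

    Subtracting the last range equation from the others removes the
    quadratic |x|^2 term, leaving a linear system solved by least squares.
    """
    anchors = np.asarray(anchors, float)
    dists = np.asarray(dists, float)
    ref = anchors[-1]
    a = 2.0 * (ref - anchors[:-1])                       # (n-1, 2) system matrix
    b = (dists[:-1] ** 2 - dists[-1] ** 2
         - np.sum(anchors[:-1] ** 2, axis=1)
         + np.sum(ref ** 2))
    pos, *_ = np.linalg.lstsq(a, b, rcond=None)
    return pos

anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_pos = np.array([3.0, 4.0])
dists = [np.linalg.norm(true_pos - np.array(p)) for p in anchors]
est = trilaterate(anchors, dists)                        # recovers (3, 4)
```

Real deployments derive the ranges from noisy RSSI or time-of-flight measurements, so more anchors and filtering are used to keep the fix stable.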

    Heterogeneous data to knowledge graphs matching

    Many applications rely on the existence of reusable data. The FAIR principles (Findability, Accessibility, Interoperability, and Reusability) identify detailed descriptions of data and metadata as the core ingredients for achieving reusability. However, creating descriptive data requires massive manual effort. One way to ensure that data is reusable is to integrate it into Knowledge Graphs (KGs); the semantic foundation of these graphs provides the description necessary for reuse. The Open Research KG proposes to model the artifacts of scientific endeavours, including publications and their key messages. Datasets supporting these publications are essential carriers of scientific knowledge and should also be included in KGs. We focus on biodiversity research as an example domain in which to develop and evaluate our approach. Biodiversity is the variety of life on earth, covering evolutionary, ecological, biological, and social forms. Understanding this domain and its mechanisms is essential to preserving a vital foundation of human well-being. It is imperative to monitor the current state of biodiversity and its change over time, and to understand the forces driving and preserving life in all its variety and richness. This need has resulted in numerous works being published in the field, generating large amounts of tabular data (datasets), textual data (publications), and metadata (e.g., dataset descriptions). It is thus a data-rich domain with an exceptionally high need for data reuse, yet managing and integrating these heterogeneous data remains a major challenge. Our core research problem is how to enable the reusability of tabular data, which is one aspect of the FAIR data principles. This thesis provides an answer to that research problem
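The basic step of lifting tabular data into a knowledge graph can be sketched as row-to-triple mapping (a deliberately simplified illustration, not the thesis's matching approach; the `base_uri`, column names, and sample biodiversity row are invented for the example):

```python
def table_to_triples(rows, subject_col, base_uri="http://example.org/"):
    """Map each table row to RDF-style (subject, predicate, object) triples.

    The value in 'subject_col' identifies the row's entity; every other
    column becomes a predicate linking that entity to the cell value.
    """
    triples = []
    for row in rows:
        subj = base_uri + str(row[subject_col])
        for col, value in row.items():
            if col != subject_col:
                triples.append((subj, base_uri + col, value))
    return triples

rows = [{"taxon_id": "QR1", "habitat": "temperate forest", "realm": "terrestrial"}]
kg = table_to_triples(rows, "taxon_id")
```

The hard research problem sits above this mechanical step: deciding which existing KG concepts the columns and values should be matched to, rather than minting fresh URIs as this sketch does.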

    Cloud Technologies in Education: Proceedings of the 7th Workshop CTE 2019

    This volume represents the proceedings of the 7th Workshop on Cloud Technologies in Education (CTE 2019), held in Kryvyi Rih, Ukraine, on December 20, 2019. It comprises 42 contributed papers that were carefully peer-reviewed and selected from 66 submissions. The accepted papers present a state-of-the-art overview of successful cases and provide guidelines for future research. The volume is structured in four parts, each presenting the contributions for a particular workshop track