98 research outputs found

    Application of Artificial Intelligence Approaches in the Flood Management Process for Assessing Blockage at Cross-Drainage Hydraulic Structures

    Get PDF
    Floods are the most recurrent, widespread and damaging natural disasters, and are expected to become further devastating because of global warming. Blockage of cross-drainage hydraulic structures (e.g., culverts, bridges) by flood-borne debris is an influential factor which usually results in reduced hydraulic capacity, diverted flows, damaged structures and downstream scouring. Australia is among the countries adversely impacted by blockage issues (e.g., 1998 floods in Wollongong, 2007 floods in Newcastle). In this context, Wollongong City Council (WCC), under the Australian Rainfall and Runoff (ARR), investigated the impact of blockage on floods and proposed guidelines to consider blockage in the design process for the first time. However, existing WCC guidelines are based on various assumptions (i.e., visual inspections as representative of hydraulic behaviour, post-flood blockage as representative of peak floods, blockage remaining constant during the whole flooding event) that are not supported by scientific research and have been criticised by hydraulic design engineers. This suggests the need to perform detailed investigations of blockage from both visual and hydraulic perspectives, in order to develop quantifiable relationships and incorporate blockage into the design guidelines of hydraulic structures. However, because of the complex nature of blockage as a process and the lack of blockage-related data from actual floods, conventional numerical modelling-based approaches have not achieved much success. The research in this thesis applies artificial intelligence (AI) approaches to assess blockage at cross-drainage hydraulic structures, motivated by the recent success achieved by AI in addressing complex real-world problems (e.g., scour depth estimation and flood inundation monitoring). The research has been carried out in three phases: (a) literature review, (b) hydraulic blockage assessment, and (c) visual blockage assessment.
The first phase investigates the use of computer vision in the flood management domain and provides context for blockage. The second phase investigates hydraulic blockage using lab-scale experiments and the implementation of multiple machine learning approaches on datasets collected from those experiments (i.e., Hydraulics-Lab Dataset (HD), Visual Hydraulics-Lab Dataset (VHD)). The artificial neural network (ANN) and end-to-end deep learning approaches were the top performers among those implemented and demonstrated the potential of learning-based approaches in addressing blockage issues. The third phase assesses visual blockage at culverts using deep learning classification, detection and segmentation approaches for two types of visual assessment (i.e., blockage status classification, percentage visual blockage estimation). Firstly, a range of existing convolutional neural network (CNN) image classification models are implemented and compared using visual datasets (i.e., Images of Culvert Openings and Blockage (ICOB), VHD, Synthetic Images of Culverts (SIC)), with the aim of automating the manual visual blockage classification of culverts. The Neural Architecture Search Network (NASNet) model achieved the best classification results among those implemented. Furthermore, the study identified background noise and simplified labelling criteria as two factors contributing to the degraded performance of existing CNN models for blockage classification. To address the background clutter issue, a detection-classification pipeline is proposed, which achieved improved visual blockage classification performance. The proposed pipeline has been deployed using edge computing hardware for blockage monitoring of actual culverts. The role of synthetic data (i.e., SIC) in the performance of culvert opening detection is also investigated.
Secondly, an automated segmentation-classification deep learning pipeline is proposed to estimate the percentage of visual blockage at circular culverts to better prioritise culvert maintenance. The AI solutions proposed in this thesis are integrated into a blockage assessment framework, designed to be deployed through edge computing to monitor, record and assess blockage at cross-drainage hydraulic structures.
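The final step of the percentage visual blockage estimate reduces to comparing two segmentation masks. A minimal pure-Python sketch, assuming binary masks and hand-made toy grids in place of the thesis's deep-learning segmentation outputs (the function name and masks are illustrative, not from the thesis):

```python
# Hypothetical sketch: percentage visual blockage of a culvert opening,
# given binary masks (1 = pixel belongs to the class, 0 = not).
def blockage_percentage(opening_mask, debris_mask):
    """Percentage of the culvert opening overlapped by segmented debris."""
    opening_px = sum(v for row in opening_mask for v in row)
    if opening_px == 0:
        raise ValueError("segmentation found no culvert opening")
    blocked_px = sum(
        o & d
        for o_row, d_row in zip(opening_mask, debris_mask)
        for o, d in zip(o_row, d_row)
    )
    return 100.0 * blocked_px / opening_px

# Toy 4x4 masks: the opening fills the grid, debris covers the bottom half.
opening = [[1] * 4 for _ in range(4)]
debris = [[0, 0, 0, 0], [0, 0, 0, 0], [1, 1, 1, 1], [1, 1, 1, 1]]
print(blockage_percentage(opening, debris))  # 50.0
```

In the deployed pipeline the masks would come from the segmentation model rather than hand-written grids; only the final ratio is sketched here.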

    Hardware design and CAD for processor-based logic emulation systems.


    (VANET IR-CAS): Utilizing IR Techniques in Building Context Aware Systems for VANET

    Most of the available context-aware dissemination systems for the Vehicular Ad hoc Network (VANET) are centralized systems with a low level of user privacy and precision. In addition, the absence of common assessment models deprives researchers of a fair evaluation of their proposed systems and an unbiased comparison with other systems. Given the importance of commercial, safety and convenience services, three IR-CAS systems are developed to improve three applications of these services: the safety Automatic Crash Notification (ACN), the convenience Congested Road Notification (CRN) and the commercial Service Announcement (SA). The proposed systems are context-aware systems that apply information retrieval (IR) techniques to context-aware information dissemination. The dispatched information is improved by deploying the vector space model, estimating relevance or severity as the Manhattan distance between the current situation's context vector and the severest context vector. The IR-CAS systems outperform current systems based on machine learning, fuzzy logic and binary models in several respects: decentralization; effectiveness under both binary and non-binary measures; exploitation of vehicle processing power; dissemination of informative notifications with certainty degrees, as partial rather than binary or graded notifications (which are insensitive to differences in severity within a grade); and protection of privacy, which improves user satisfaction. In addition, a visual-manual and speech-visual dual-mode user interface is designed to improve user safety by minimizing distraction. An evaluation model containing ACN and CRN test collections, each with around 500,000 North American test cases, is created to enable fair effectiveness comparisons among VANET context-aware systems. Hence, the novelty of the VANET IR-CAS systems is threefold: First, providing a scalable abstract context model with IR-based processing that raises notification relevance and precision.
Second, increasing decentralization, user privacy, and safety with the least distracting user interface. Third, designing an unbiased performance evaluation as a basis for distinguishing significantly effective VANET context-aware systems.
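The severity estimate described above can be sketched in a few lines: the Manhattan (L1) distance between the current context vector and the severest context vector, mapped to a degree in [0, 1]. The feature names and the max-distance normalisation below are illustrative assumptions, not details taken from the thesis:

```python
# Hedged sketch of vector-space severity scoring via Manhattan distance.
def manhattan(a, b):
    """L1 distance between two equal-length context vectors."""
    return sum(abs(x - y) for x, y in zip(a, b))

def severity(current, severest, feature_ranges):
    """1.0 = identical to the severest context, 0.0 = maximally distant.
    feature_ranges holds each feature's full span, giving the worst-case
    L1 distance used for normalisation (an assumed scheme)."""
    max_dist = sum(feature_ranges)
    return 1.0 - manhattan(current, severest) / max_dist

# Toy ACN-style context: [impact speed km/h, airbag deployed, rollover].
severest = [120.0, 1.0, 1.0]
ranges = [120.0, 1.0, 1.0]
print(severity([60.0, 1.0, 0.0], severest, ranges))  # 0.5
```

A graded scheme would bucket this degree into a handful of severity classes; the partial notification keeps the continuous value instead, which is what makes it sensitive to differences within a grade.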

    Implementing decision tree-based algorithms in medical diagnostic decision support systems

    As a branch of healthcare, medical diagnosis can be defined as identifying the disease based on the signs and symptoms of the patient. To this end, the required information is gathered from different sources such as physical examination, medical history and general information about the patient. The development of smart classification models for medical diagnosis is of great interest among researchers, mainly because machine learning and data mining algorithms are capable of detecting hidden trends among the features of a database. Hence, classifying medical datasets using smart techniques paves the way for designing more efficient medical diagnostic decision support systems. Several databases have been provided in the literature to investigate different aspects of diseases. As an alternative to the available diagnosis tools/methods, this research applies machine learning algorithms called Classification and Regression Tree (CART), Random Forest (RF) and Extremely Randomized Trees or Extra Trees (ET) to develop classification models that can be implemented in computer-aided diagnosis systems. As a decision tree (DT), CART is fast to create and applies to both quantitative and qualitative data. For classification problems, RF and ET employ a number of weak learners such as CART to develop their models. We employed the Wisconsin Breast Cancer Database (WBCD), the Z-Alizadeh Sani dataset for coronary artery disease (CAD) and the databanks gathered in Ghaem Hospital's dermatology clinic on the response of patients with common and/or plantar warts to cryotherapy and/or immunotherapy. To classify the breast cancer type based on the WBCD, the RF and ET methods were employed. It was found that the developed RF and ET models forecast the WBCD type with 100% accuracy in all cases. To choose the proper treatment approach for warts, as well as for the CAD diagnosis, the CART methodology was employed.
The findings of the error analysis revealed that the proposed CART models attain the highest precision for the applications of interest, unmatched by models in the literature. The outcome of this study supports the idea that methods like CART, RF and ET not only improve diagnosis precision, but also reduce the time and expense needed to reach a diagnosis. However, since these strategies are highly sensitive to the quality and quantity of the input data, more extensive databases with a greater number of independent parameters might be required before the developed models find wider practical application.
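At the heart of CART (and of the RF/ET ensembles built from CART-like learners) is the choice of the split that minimises weighted Gini impurity. A self-contained toy sketch of that criterion, on made-up data rather than WBCD:

```python
# Minimal illustration of the CART split criterion: pick the threshold
# on one feature that minimises weighted Gini impurity of the two halves.
def gini(labels):
    """Gini impurity of a list of 0/1 class labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    p1 = sum(labels) / n
    return 1.0 - p1 ** 2 - (1.0 - p1) ** 2

def best_split(xs, ys):
    """Return (threshold, weighted Gini) of the best binary split on xs."""
    best = (None, float("inf"))
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(ys)
        if score < best[1]:
            best = (t, score)
    return best

# One toy feature (e.g. a tumour measurement) with 0 = benign, 1 = malignant.
sizes = [1.0, 1.5, 2.0, 4.0, 4.5, 5.0]
labels = [0, 0, 0, 1, 1, 1]
print(best_split(sizes, labels))  # (2.0, 0.0): a perfect split at x <= 2.0
```

A full CART recursively applies this search to each resulting partition; RF and ET repeat it over many randomised trees and vote, which is what the thesis's models do at scale.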

    Translational cell based therapies to repair the heart

    Cardiovascular disease, comprising coronary artery disease (CAD) and valvular heart disease (VHD), is the leading disease in Western societies, accounting for the death of numerous patients. CAD may lead to heart failure (HF), and despite the therapeutic options for HF that have evolved over the past years, the incidence of HF is continuously increasing as the population ages. Similarly, an increase in VHD can be observed, and although valve replacement represents the most common therapy for VHD, approximately 30% of treated patients are affected by prosthesis-related problems within 10 years. While mechanical valves require lifelong anticoagulation treatment, bioprosthetic valves undergo continuous degeneration without the ability to grow, repair or remodel. The concept of regenerative medicine, comprising cell-based therapies, bioengineering technologies and hybrid solutions, has been proposed as a promising next-generation approach to address CAD and VHD. While myocardial cell therapy has been suggested to have a beneficial effect on the failing myocardium, heart valve tissue engineering has been demonstrated to be a promising concept for generating living, autologous heart valves with the capability to grow and remodel, which may be particularly beneficial for children. Although these regenerative strategies have shown great potential in experimental studies, their translation into a clinical setting has either been limited or has been too rapid and premature, leaving many key questions unanswered. The aim of this thesis was the systematic development of translational, cell-based bioengineering concepts addressing CAD (part A) and VHD (part B), with a particular focus on minimally invasive, transcatheter-based implantation techniques. In the setting of myocardial regeneration, the second chapter investigates the intrinsic regenerative potential of the heart.
Myocardial samples were harvested from all four chambers of the human heart and assessed for resident stem/progenitor cell populations. The results demonstrated that BCRP+ cells can be detected within the human heart and that they were more abundant than their c-kit+ counterparts. In the non-ischemic heart they were preferentially located in the atria, while following ischemia their numbers increased significantly in the left ventricle. There were no c-kit+/BCRP+ co-expressing stem/progenitor cell populations, suggesting that these two markers are expressed by two distinct cell populations in the human heart. Although these results provided a valuable snapshot of cardiac progenitor cells after acute ischemia, the data also indicated that the absolute numbers of cells acquiring a myocardial phenotype are rather low, and further effort is needed to upscale such cells to clinically relevant numbers. Chapter three demonstrates that human bone marrow and adipose tissue derived mesenchymal stem cells can be efficiently isolated via minimally invasive procedures and expanded to clinically relevant numbers for myocardial cell therapy. Thereafter, these cells were tested in a uniquely developed intrauterine, fetal, preimmune ovine myocardial infarction model for the evaluation of human cell fate in vivo. After the successful intrauterine induction of acute myocardial infarction, the cells were transplanted intramyocardially and tracked using a multimodal imaging approach comprising MRI and micro-CT as well as in vitro analysis tools. The principal feasibility of intramyocardial stem cell transplantation following intrauterine induction of myocardial infarction in the preimmune fetal sheep was demonstrated, suggesting this as a unique platform to evaluate human cell fate in a relevant large-animal model without the necessity of immunosuppressive therapy.
In chapter four, adipose tissue derived mesenchymal stem cells (ATMSCs) were processed to generate three-dimensional microtissues (3D-MTs) prior to transplantation, to address the important issue of cell retention and survival. Thereafter, the ATMSC-based 3D-MTs were transplanted into the healthy and infarcted porcine myocardium using a catheter-based, 3D electromechanical mapping guided approach. The previously used MRI-based tracking concept was successfully translated into this preclinical model, allowing for the in vivo monitoring of 3D-MTs. To address valvular heart disease (part B), in chapter five marrow stromal derived cells were used to develop a unique autologous, cell-based engineered heart valve in situ tissue engineering concept comprising minimally invasive techniques for both cell harvest and valve implantation. Autologous marrow stromal derived cells were harvested, seeded onto biodegradable scaffolds and integrated into self-expanding nitinol stents, before being transapically delivered into the pulmonary position of non-human primates within the same intervention, avoiding any in vitro bioreactor period. The results of these experiments demonstrated the principal feasibility of generating marrow stromal cell-based, autologous, living tissue engineered heart valves (TEHV) and their transapical implantation in a one-step intervention. In chapter six, this concept was then successfully applied to the high-pressure system of the systemic circulation. After detailed adaptation of the TEHV and stent design to the anatomic conditions of an orthotopic aortic valve, marrow stromal cell-based TEHV were implanted into the orthotopic aortic position. The implantation was successful and valve functionality was confirmed using fluoroscopy and trans-esophageal echocardiography. While displaying ideal opening and closing behaviour with sufficient coaptation and a low pressure gradient, there were no signs of coronary occlusion or malperfusion.
In conclusion, the results of this thesis represent a promising portfolio of translational concepts for cardiovascular regenerative medicine addressing CAD and VHD. In particular, it was demonstrated that mesenchymal stem cells / multipotent stromal derived cells represent a clinically relevant cell source for both myocardial regeneration and heart valve tissue engineering. It was shown that the preimmune fetal sheep myocardial infarction model represents a unique platform for the in vivo evaluation of human stem cells without the necessity of immunosuppressive therapy. Moreover, the concept of transcatheter-based intramyocardial transplantation of mesenchymal stem cell-based 3D-MTs was introduced to enhance cellular retention and survival. Finally, in the setting of VHD, it was shown that marrow stromal cell-based tissue engineered heart valves can be successfully generated and transapically implanted into the pulmonary and aortic positions within a one-step intervention.

    Assessing the evidential value of artefacts recovered from the cloud

    Cloud computing offers users low-cost access to computing resources that are scalable and flexible. However, it is not without its challenges, especially in relation to security. Cloud resources can be leveraged for criminal activities, and the architecture of the ecosystem makes digital investigation difficult in terms of evidence identification, acquisition and examination. However, these same resources can be leveraged for the purposes of digital forensics, providing facilities for evidence acquisition, analysis and storage. Alternatively, existing forensic capabilities can be used in the Cloud as a step towards achieving forensic readiness, by adding tools to the Cloud that can recover artefacts of evidential value. This research investigates whether artefacts that have been recovered from the Xen Cloud Platform (XCP) using existing tools have evidential value. To determine this, the research is broken into three distinct areas: adding existing tools to a Cloud ecosystem, recovering artefacts from that system using those tools, and determining the evidential value of the recovered artefacts. From these experiments, three key steps for adding existing tools to the Cloud were determined: identification of the specific Cloud technology being used, identification of existing tools, and the building of a testbed. Stemming from this, three key components of artefact recovery are identified: the user, the audit log and the Virtual Machine (VM), along with two methodologies for artefact recovery in XCP. In terms of evidential value, this research proposes a set of criteria for the evaluation of digital evidence, stating that it should be authentic, accurate, reliable and complete. In conclusion, this research demonstrates the use of these criteria in the context of digital investigations in the Cloud and how each is met, and shows that it is possible to recover artefacts of evidential value from XCP.
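One common way the "authentic" criterion is supported in practice is hashing an artefact at acquisition time so later working copies can be verified against the original digest. This mechanism is a general forensic convention, sketched here as an illustration; the file contents and names are hypothetical, and the thesis itself defines the criteria rather than this particular implementation:

```python
# Hedged sketch: integrity hashing of an acquired artefact (standard
# forensic practice, not a mechanism claimed by the thesis).
import hashlib

def sha256_of(data: bytes) -> str:
    """Hex digest used to fingerprint an artefact at acquisition time."""
    return hashlib.sha256(data).hexdigest()

# At acquisition: record the digest alongside the artefact.
acquired = b"xen-audit-log contents"          # hypothetical artefact bytes
acquisition_hash = sha256_of(acquired)

# At examination: re-hash the working copy and compare digests.
working_copy = b"xen-audit-log contents"
print(sha256_of(working_copy) == acquisition_hash)  # True
```

A mismatch between the two digests would indicate the working copy no longer matches what was acquired, undermining the authenticity claim.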

    Simulation of nonverbal social interaction and small groups dynamics in virtual environments

    How can the behaviour of humans who interact with other humans be simulated in virtual environments? This thesis investigates the issue by proposing a number of dedicated models, computer languages, software architectures, and specifications of computational components. It relies on a large knowledge base from the social sciences, which offers concepts, descriptions, and classifications that guided the research process. The simulation of nonverbal social interaction and group dynamics in virtual environments can be divided into two main research problems: (1) an action selection problem, where autonomous agents must be made capable of deciding when, with whom, and how they interact according to individual characteristics of themselves and others; and (2) a behavioural animation problem, where, on the basis of the selected interaction, 3D characters must behave realistically in their virtual environment and communicate nonverbally with others by automatically triggering appropriate actions such as facial expressions, gestures, and postural shifts. In order to introduce the problem of action selection in social environments, a high-level architecture for social agents, based on the sociological concepts of role, norm, and value, is first discussed. A model of action selection for members of small groups, based on proactive and reactive motivational components, is then presented. This model relies on a new tag-based language called the Social Identity Markup Language (SIML), allowing the rich specification of agents' social identities and relationships. A complementary model controls the simulation of interpersonal relationship development within small groups. The interactions of these two models create a complex system exhibiting emergent properties for the generation of meaningful sequences of social interactions in the temporal dimension.
To address the issues related to the visualization of nonverbal interactions, the results of an evaluation experiment aimed at identifying the application requirements, through an analysis of how real people interact nonverbally in virtual environments, are presented. Based on these results, a number of components for MPEG-4 body animation, AML — a tag-based language for the seamless integration and synchronization of facial animation, body animation, and speech — and a high-level interaction visualization service for the VHD++ platform are described. This service simulates the proxemic and kinesic aspects of nonverbal social interactions, and comprises such functionalities as parametric postures, adapters and observation behaviours, the social avoidance of collisions, intelligent approach behaviours, and the calculation of suitable interaction distances and angles.
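The proxemics calculation mentioned above can be illustrated geometrically: given two agents' 2D positions, derive the point at which an approaching agent should stop and the direction it should face. The 1.2 m personal-space radius below is an assumption drawn from Hall's proxemic zones, and the function is a toy sketch rather than the VHD++ service's actual code:

```python
# Illustrative sketch of an interaction-distance/angle calculation.
import math

PERSONAL_DISTANCE = 1.2  # metres; assumed personal-space radius

def approach_target(agent, other):
    """Point at PERSONAL_DISTANCE from `other`, on the line towards
    `agent`, plus the facing angle (radians) towards the interlocutor."""
    dx, dy = agent[0] - other[0], agent[1] - other[1]
    dist = math.hypot(dx, dy)
    tx = other[0] + PERSONAL_DISTANCE * dx / dist
    ty = other[1] + PERSONAL_DISTANCE * dy / dist
    facing = math.atan2(other[1] - ty, other[0] - tx)  # face the other agent
    return (tx, ty), facing

target, angle = approach_target(agent=(4.0, 0.0), other=(0.0, 0.0))
print(target)  # (1.2, 0.0): stop 1.2 m away, facing the interlocutor
```

A fuller service would also steer around obstacles and other agents on the way to this target, which is where the avoidance and approach behaviours listed above come in.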

    Auto-Pipe and the X Language: A Toolset and Language for the Simulation, Analysis, and Synthesis of Heterogeneous Pipelined Architectures, Master's Thesis, August 2006

    Pipelining an algorithm is a popular method of increasing the performance of many computation-intensive applications. Often, one wants to form pipelines composed mostly of commonly used simple building blocks such as DSP components, simple math operations, encryption, or pattern matching stages. Additionally, one may desire to map these processing tasks to different computational resources based on their relative performance attributes (e.g., DSP operations on an FPGA). Auto-Pipe is composed of the X Language, a flexible interface language that aids the description of complex dataflow topologies (including pipelines); X-Com, a compiler for the X Language; X-Sim, a tool for modeling pipelined architectures based on measured, simulated, or derived task and communications behavior; X-Opt, a tool to optimize X applications under various metrics; and X-Dep, a tool for the automatic deployment of X-Com- or X-Sim-generated applications to real or simulated devices. This thesis presents an overview of the Auto-Pipe system, the design and use of the X Language, and an implementation of X-Com. Applications developed using the X Language are presented which demonstrate the effectiveness of describing algorithms using X, and the effectiveness of the Auto-Pipe development flow in analyzing and improving the performance of an application.
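The dataflow idea behind the X Language — an application as a topology of reusable blocks, each mappable to a different resource — can be sketched as a chain of stages. This pure-Python analogy is not X syntax; real X descriptions compile through X-Com and can target FPGAs and processors, whereas the blocks below are hypothetical stand-ins:

```python
# Hedged analogy for a linear dataflow pipeline of simple building blocks.
def compose(*stages):
    """Connect blocks into a pipeline: each stage's output feeds the next."""
    def pipeline(x):
        for stage in stages:
            x = stage(x)
        return x
    return pipeline

# Commonly used simple building blocks, as named in the abstract.
scale = lambda samples: [2 * s for s in samples]    # DSP-style gain stage
clip = lambda samples: [min(s, 10) for s in samples]  # saturate at 10
total = lambda samples: sum(samples)                # reduction stage

app = compose(scale, clip, total)
print(app([1, 4, 9]))  # 2 + 8 + 10 = 20
```

Tools like X-Sim then reason about where each stage should run, using measured or modelled per-stage and per-link costs; the composition itself stays fixed while the mapping changes.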

    Mental vision: a computer graphics platform for virtual reality, science and education

    Despite the wide range of computer graphics frameworks and solutions available for virtual reality, it is still difficult to find one that simultaneously fits the many constraints of research and educational contexts. Advanced functionality and user-friendliness, rendering speed and portability, or scalability and image quality are opposing characteristics rarely found in a single approach. Furthermore, access to virtual reality specific devices like CAVEs or wearable systems is limited by their cost and accessibility, with most of these innovations reserved for institutions and specialists able to afford them and manage them through strong background knowledge in programming. Finally, computer graphics and virtual reality are complex and difficult matters to learn, owing to the heterogeneity of notions a developer needs to practise before attempting to implement a full virtual environment. In this thesis we describe our contributions to these topics, assembled in what we call the Mental Vision platform. Mental Vision is a framework composed of three main entities. First, a teaching/research oriented graphics engine, simplifying access to 2D/3D real-time rendering on mobile devices, personal computers and CAVE systems. Second, a series of pedagogical modules to introduce and practise computer graphics and virtual reality techniques. Third, two advanced VR systems: a wearable, lightweight and hands-free mixed reality setup, and a four-sided CAVE designed with off-the-shelf hardware. In this dissertation we explain our conceptual, architectural and technical approach, pointing out how we managed to create a robust and coherent solution that reduces the complexity of cross-platform and multi-device 3D rendering while simultaneously answering the often contradictory needs of computer graphics and virtual reality researchers and students.
A series of case studies evaluates how Mental Vision concretely satisfies these needs and achieves its goals, both on in vitro benchmarks and in in vivo scientific and educational projects.

    Intelligent Circuits and Systems

    ICICS-2020 is the third conference initiated by the School of Electronics and Electrical Engineering at Lovely Professional University, exploring recent innovations by researchers working on the development of smart and green technologies in the fields of Energy, Electronics, Communications, Computers, and Control. ICICS enables innovators to identify new opportunities for the social and economic benefit of society. The conference bridges the gap between academia, R&D institutions, social visionaries, and experts from all strata of society, allowing them to present their ongoing research and fostering research relations between them. It provides opportunities for the exchange of new ideas, applications, and experiences in the field of smart technologies, and for finding global partners for future collaboration. ICICS-2020 was conducted in two broad tracks: Intelligent Circuits & Intelligent Systems, and Emerging Technologies in Electrical Engineering.