25 research outputs found

    Deep Learning Approaches in Pavement Distress Identification: A Review

    Full text link
    This paper presents a comprehensive review of recent advancements in image processing and deep learning techniques for pavement distress detection and classification, a critical aspect of modern pavement management systems. The conventional manual inspection process conducted by human experts is gradually being superseded by automated solutions that leverage machine learning and deep learning algorithms to enhance efficiency and accuracy. The ability of these algorithms to discern patterns and make predictions from extensive datasets has transformed the domain of pavement distress identification. The paper investigates the integration of unmanned aerial vehicles (UAVs) for data collection, which offer unique advantages such as aerial perspectives and efficient coverage of large areas. By capturing high-resolution images, UAVs provide valuable data that can be processed with deep learning algorithms to detect and classify various pavement distresses effectively. While the primary focus is on 2D image processing, the paper also acknowledges the challenges associated with 3D images, such as sensor limitations and computational requirements; understanding these challenges is crucial for further advances in the field. The findings of this review contribute to the evolution of pavement distress detection, fostering the development of efficient pavement management systems. As automated approaches continue to mature, the implementation of deep learning techniques holds great promise for safer and more durable road infrastructure.

    Dynamic Detection of Topological Information from Grid-Based Generalized Voronoi Diagrams

    Get PDF
    In the context of robotics, grid-based Generalized Voronoi Diagrams (GVDs) are widely used by mobile robots to represent their surrounding area. Current approaches for incrementally constructing GVDs mainly focus on providing metric skeletons of the underlying grids, while the connectivity among GVD vertices and edges remains implicit, which makes high-level spatial reasoning tasks impractical. In this paper, we present an algorithm named Dynamic Topology Detector (DTD) for extracting a GVD with topological information from a grid map. Beyond the construction and reconstruction of a GVD on grids, DTD further extracts the connectivity among GVD edges and vertices. DTD also provides an efficient repair mechanism to handle local changes, making it work well in dynamic environments. Simulation tests in representative scenarios demonstrate that (1) compared with static algorithms, DTD generally achieves an order-of-magnitude improvement in computation time when working in dynamic environments; and (2) with negligible extra computation, DTD detects topologies not computed by existing incremental algorithms. We also demonstrate the usefulness of the resulting topological information for high-level path planning tasks.

    Semi-multifractal optimization algorithm

    Get PDF
    Observations of living organism systems inspire the creation of modern computational techniques. The article presents an algorithm that divides the solution space during the optimization process. A method for controlling the algorithm's operation demonstrates its wide range of possible applications. The article examines the fractal-dimension properties of the subareas created during optimization and shows how the method can be used to determine function extremes.
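The idea of dividing the solution space during optimization can be sketched as follows (a generic illustration with assumed parameters, not the paper's semi-multifractal scheme): each subarea is scored by sampling, the most promising subareas are kept, and each survivor is split further, concentrating samples around function extremes.

```python
import random

def subdivision_minimize(f, lo, hi, depth=12, splits=2, keep=3, samples=8, seed=0):
    """Illustrative space-subdivision search for a 1-D minimum.
    Each round: score every interval by its best sampled value,
    keep the `keep` best intervals, split each into `splits` parts."""
    rng = random.Random(seed)
    regions = [(lo, hi)]
    best_x, best_y = lo, f(lo)
    for _ in range(depth):
        scored = []
        for a, b in regions:
            xs = [rng.uniform(a, b) for _ in range(samples)]
            ys = [f(x) for x in xs]
            i = min(range(len(ys)), key=ys.__getitem__)
            if ys[i] < best_y:
                best_x, best_y = xs[i], ys[i]   # track global best
            scored.append((ys[i], a, b))
        scored.sort()                            # most promising first
        regions = []
        for _, a, b in scored[:keep]:            # split survivors into subareas
            w = (b - a) / splits
            regions += [(a + k * w, a + (k + 1) * w) for k in range(splits)]
    return best_x, best_y
```

The fractal aspect studied in the article arises from exactly this kind of recursive subdivision: the surviving subareas form a nested, self-similar covering of the promising parts of the search space.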

    Incremental Construction of Generalized Voronoi Diagrams on Pointerless Quadtrees

    Get PDF
    In robotics, Generalized Voronoi Diagrams (GVDs) are widely used by mobile robots to represent the spatial topologies of their surrounding area. In this paper we consider the problem of constructing GVDs on discrete environments. Several algorithms that solve this problem exist in the literature, notably the Brushfire algorithm and its improved versions, which possess a local repair mechanism. However, when the area to be processed is very large or of high resolution, the size of the metric matrices used by these algorithms to compute GVDs can be prohibitive. To address this issue, we propose an improvement on the current algorithms, using pointerless quadtrees in place of metric matrices to compute and maintain GVDs. Beyond the construction and reconstruction of a GVD, our algorithm further provides a method to approximate roadmaps at multiple granularities from the quadtree-based GVD. Simulation tests in representative scenarios demonstrate that, compared with the current algorithms, our algorithm generally makes an order-of-magnitude improvement in memory cost when the area is larger than 2¹⁰ × 2¹⁰. We also demonstrate the usefulness of the approximated roadmaps for coarse-to-fine pathfinding tasks.
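The pointerless representation can be sketched with Morton location codes (a common encoding for pointerless quadtrees; the paper's exact layout may differ): a node is addressed by interleaving the bits of its cell coordinates, so parents and children are reached by bit arithmetic rather than stored pointers, and the whole tree fits in a flat hash map.

```python
def morton(x, y, depth):
    """Interleave the bits of (x, y) into a Morton location code:
    bit 2i holds bit i of x, bit 2i+1 holds bit i of y."""
    code = 0
    for i in range(depth):
        code |= ((x >> i) & 1) << (2 * i)
        code |= ((y >> i) & 1) << (2 * i + 1)
    return code

class PointerlessQuadtree:
    """Nodes live in a dict keyed by (level, code); no child pointers.
    Navigation is pure arithmetic on the location code."""
    def __init__(self, depth):
        self.depth = depth
        self.cells = {}                 # (level, code) -> payload

    def insert(self, x, y, payload):
        self.cells[(self.depth, morton(x, y, self.depth))] = payload

    def parent(self, level, code):
        return (level - 1, code >> 2)   # drop the two lowest bits

    def child(self, level, code, quadrant):
        return (level + 1, (code << 2) | quadrant)  # quadrant in 0..3
```

Because node lookup is a hash access and coarsening is a right shift, roadmaps at multiple granularities fall out naturally: truncating a leaf's code by 2k bits yields its ancestor cell k levels up.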

    Performance Evaluation of Pathfinding Algorithms

    Get PDF
    Pathfinding is the search for an optimal path from a start location to a goal location in a given environment. In Artificial Intelligence, pathfinding algorithms are typically designed as a kind of graph search. These algorithms are applicable in a wide variety of applications such as computer games, robotics, networks, and navigation systems. The performance of these algorithms is affected by several factors, such as the problem size, path length, the number and distribution of obstacles, data structures, and heuristics. When new pathfinding algorithms are proposed in the literature, their performance is often investigated empirically (if at all). Proper experimental design and analysis are crucial to provide an informative and non-misleading evaluation. In this research, we survey many papers and classify them according to their methodology, experimental design, and analytical techniques. We identify weaknesses in these areas that are all too frequently found in reported approaches. We first illustrate the pitfalls in pathfinding research through example problems, and then show how accounting for spurious effects and using control conditions can avoid these pitfalls.
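The kind of controlled comparison such an evaluation calls for can be sketched with a small instrumented A* (an illustrative harness, not any specific surveyed implementation): running the same instance with the Manhattan heuristic and with a zero heuristic (which degrades A* to Dijkstra) acts as a control condition, isolating the heuristic's effect on the number of node expansions.

```python
import heapq

def astar(grid, start, goal,
          h=lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])):
    """A* on a 4-connected grid (0 = free, 1 = blocked). Returns
    (path cost, nodes expanded); pass h=lambda a, b: 0 for Dijkstra."""
    rows, cols = len(grid), len(grid[0])
    open_heap = [(h(start, goal), 0, start)]
    g = {start: 0}
    expanded = 0
    while open_heap:
        f, gc, node = heapq.heappop(open_heap)
        if gc > g.get(node, float("inf")):
            continue                        # stale queue entry
        expanded += 1
        if node == goal:
            return gc, expanded             # cost and work done
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = gc + 1
                if ng < g.get((nr, nc), float("inf")):
                    g[(nr, nc)] = ng
                    heapq.heappush(open_heap,
                                   (ng + h((nr, nc), goal), ng, (nr, nc)))
    return None, expanded                   # goal unreachable
```

Reporting the expansion counts of both runs on identical instances (rather than wall-clock time on different maps) is one example of the controlled experimental design the survey advocates.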

    Uncertainty quantification and numerical methods in charged particle radiation therapy

    Get PDF
    Radiation therapy is applied in approximately 50% of all cancer treatments. To eliminate the tumor without damaging organs in the vicinity, optimized treatment plans are determined. This requires the calculation of three-dimensional dose distributions in a heterogeneous volume with a spatial resolution of 2-3 mm. Current planning techniques use multiple beams with optimized directions and energies to achieve the best possible dose distribution. Each dose calculation, however, requires the discretization of the six-dimensional phase space of the linear Boltzmann transport equation describing complex particle dynamics. Despite the complexity of the problem, dose calculation errors of less than 2% are clinically recommended and computation times cannot exceed a few minutes. Additionally, the treatment reality often differs from the computed plan due to various uncertainties, for example in patient positioning, the acquired CT image or the delineation of tumor and organs at risk. Therefore, it is essential to include uncertainties in the planning process to determine a robust treatment plan. This entails a realistic mathematical model of uncertainties, quantification of their effect on the dose distribution using appropriate propagation methods, as well as a robust or probabilistic optimization of treatment parameters to account for these effects. Fast and accurate calculations of the dose distribution, including predictions of uncertainties in the computed dose, are thus crucial for the determination of robust treatment plans in radiation therapy. Monte Carlo methods are often used to solve transport problems, especially for applications that require high accuracy. In these cases, common non-intrusive uncertainty propagation strategies that involve repeated simulations of the problem at different points in the parameter space quickly become infeasible due to their long run-times. 
Quicker deterministic dose calculation methods allow for better incorporation of uncertainties, but often use strong simplifications or admit non-physical solutions and therefore cannot provide the required accuracy. This work is concerned with finding efficient mathematical solutions for three aspects of (robust) radiation therapy planning: (1) efficient particle transport and dose calculations, (2) uncertainty modeling and propagation for radiation therapy, and (3) robust optimization of the treatment set-up.
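The non-intrusive strategy mentioned above can be sketched with a toy model (an assumed Gaussian beam profile with a hypothetical positioning uncertainty, in no way a transport solver): the deterministic model is simply rerun at sampled parameter values and the output statistics are collected. The cost scales with the number of reruns, which is exactly why the approach becomes infeasible when each run is an expensive Monte Carlo dose calculation.

```python
import math
import random
import statistics

def dose_profile(x, shift=0.0):
    """Toy 1-D 'dose' model (illustrative only): a Gaussian beam
    profile of width 0.5 whose lateral position is uncertain."""
    return math.exp(-0.5 * ((x - shift) / 0.5) ** 2)

def propagate(x, sigma_shift=0.2, n=10_000, seed=1):
    """Non-intrusive Monte Carlo propagation: rerun the deterministic
    model at sampled shifts and summarize the output distribution."""
    rng = random.Random(seed)
    doses = [dose_profile(x, rng.gauss(0.0, sigma_shift)) for _ in range(n)]
    return statistics.mean(doses), statistics.stdev(doses)
```

At the nominal beam center the expected dose drops below the unperturbed value (any positioning shift only reduces it), and the standard deviation quantifies how sensitive the plan is to the assumed uncertainty.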

    Tree tensor networks for high-dimensional quantum systems and beyond

    Get PDF
    This thesis presents the development of a numerical simulation technique, the Tree Tensor Network (TTN), aiming to overcome current limitations in the simulation of two- and higher-dimensional quantum many-body systems. The development and application of methods based on Tensor Networks (TNs) for such systems are one of the most relevant challenges of the current decade, with the potential to advance research and technologies in a broad range of fields ranging from condensed matter physics, high-energy physics, and quantum chemistry to quantum computation and quantum simulation. The particular challenge for TNs is the combination of accuracy and scalability, which to date is met only for one-dimensional systems by other established TN techniques. The thesis first describes the interdisciplinary field of TNs, combining mathematical modelling, computational science, and quantum information, before illustrating the limitations of standard TN techniques in higher-dimensional cases. Following a description of the newly developed Tree Tensor Network, the thesis presents its application to the study of a lattice gauge theory approximating the low-energy behaviour of quantum electrodynamics, demonstrating the successful applicability of TTNs to high-dimensional gauge theories. Subsequently, a novel TN is introduced that augments the TTN for efficient simulations of high-dimensional systems. Along the way, the TTN is applied to problems from fields ranging from low-energy and high-energy physics to medical physics.

    A multi-technique hierarchical X-ray phase-based approach for the characterization and quantification of the effects of novel radiotherapies

    Get PDF
    Cancer is the first or second leading cause of premature death worldwide, with a rapidly growing overall burden. Standard cancer therapies include surgery, chemotherapy and radiotherapy (RT), and a combination of the three is often applied to improve the probability of tumour control. Standard therapy protocols have been established for many types of cancer, and new approaches are under study, especially for treating radio-resistant tumours associated with an overall poor prognosis, such as brain and lung cancers. Follow-up techniques able to monitor and investigate the effects of therapies are important for surveying the efficacy of conventionally applied treatments and are key to assessing the curative capabilities, and the onset of acute and late adverse effects, of new therapies. In this framework, this doctoral Thesis proposes the X-ray Phase Contrast Imaging - Computed Tomography (XPCI-CT) technique as an imaging-based tool to study and quantify the effects of novel RTs, namely Microbeam and Minibeam Radiation Therapy (MRT and MB), and to compare them to the standard Broad Beam (BB) induced effects on brain and lungs. MRT and MB are novel radiotherapies that deliver an array of spatially fractionated X-ray beamlets issued from a synchrotron radiation source, with widths of tens or hundreds of micrometres, respectively. MRT and MB exploit the so-called dose-volume effect: hundreds of Grays are well tolerated by healthy tissues and show a preferential effect on tumour cells and vasculature when delivered in micrometre-sized planes, while inducing lethal effects if applied over larger uniform irradiation fields. Such highly collimated X-ray beams require a high-resolution, full-organ approach that can visualize, with high sensitivity, the effects of the treatment along and outside the beamlet paths. 
XPCI-CT is here suggested and proven as a powerful imaging technique able to determine and quantify the effects of the radiation on normal and tumour-bearing tissues. Moreover, it is shown to be an effective technique to complement, with 3D information, the histology findings in the follow-up of RT treatments. Using a multi-scale and multi-technique X-ray-based approach, I have visualized and analysed the effects of RT delivery on healthy and glioblastoma multiforme (GBM)-bearing rat brains as well as on healthy rat lungs. Ex-vivo XPCI-CT datasets acquired with isotropic voxel sizes in the range 3.25³ to 0.65³ μm³ could distinguish, with high sensitivity, the idiopathic effects of MRT, MB and BB therapies. Histology, immunohistochemistry, Small- and Wide-Angle X-ray Scattering and X-ray Fluorescence experiments were also carried out to accurately interpret and complement the XPCI-CT findings, as well as to obtain a detailed structural and chemical characterization of the detected pathological features. Overall, this multi-technique approach could detect: i) a different radio-sensitivity of the MRT-treated brain areas; ii) Ca and Fe deposits and hydroxyapatite crystal formation; iii) extended and isolated fibrotic content. Full-organ XPCI-CT datasets allowed for the quantification of tumour and microcalcification volumes in treated brains and of the amount of scarring tissue in irradiated lungs. Herein, the role of XPCI-CT as a 3D virtual histology technique for the follow-up of ex-vivo RT effects has been assessed as a complementary method for an accurate volumetric investigation of normal and pathological states in brains and lungs in a small animal model. Moreover, the technique is proposed as a guidance and auxiliary tool for conventional histology, which is the gold standard for pathological evaluations, owing to its 3D capabilities and the possibility of virtually navigating within samples. 
This marks a milestone for the inclusion of XPCI-CT in the pre-clinical studies pipeline and for advancing towards in-vivo XPCI-CT imaging of treated organs.