81 research outputs found

    DARP: Divide Areas Algorithm for Optimal Multi-Robot Coverage Path Planning

    This paper addresses the path-planning problem for a team of mobile robots that must cover an area of interest containing pre-defined obstacles. For the single-robot case, known as single-robot coverage path planning (CPP), an O(n) optimal methodology has already been proposed and evaluated in the literature, where n is the grid size. The majority of existing algorithms for the multi-robot case (mCPP) build on this algorithm. Due to the combinatorial complexity of mCPP, however, the best existing mCPP algorithms can achieve is at most 16 times the optimal solution, in terms of the time needed for the robot team to accomplish the coverage task, while the time required for calculating the solution is polynomial. In the present paper, we propose a new algorithm which converges to the optimal solution, at least in cases where one exists. The proposed technique transforms the original integer programming problem (mCPP) into several single-robot problems (CPP), whose solutions constitute the optimal mCPP solution, alleviating the explosive combinatorial complexity of the original mCPP. Although it is not possible to analytically derive bounds on the complexity of the proposed algorithm, extensive numerical analysis indicates that the complexity is bounded by polynomial curves for practically sized inputs. At the heart of the proposed approach lies the DARP algorithm, which divides the terrain into a number of equal areas, each corresponding to a specific robot, so as to guarantee complete coverage, a non-backtracking solution, and minimum coverage paths, while requiring no preparatory stage (a video demonstration and standalone application are available online: http://tinyurl.com/DARP-app).
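
    The abstract's core cyclic idea — assign each cell to the robot with the smallest weighted distance, then nudge the per-robot weights until all areas are equal — can be sketched as below. This is a minimal illustration in Python with assumed names (darp_like_division, gain, iters); it deliberately omits the connectivity-enforcement step of the published DARP algorithm, so it is a sketch of the balancing loop only, not the authors' method.

        import numpy as np

        def darp_like_division(grid, robots, iters=2000, gain=1e-4):
            """Toy DARP-style division of free cells among robots.

            grid   : 2-D bool array, True = free cell, False = obstacle.
            robots : list of (row, col) initial robot positions.
            Returns an int array holding the owning robot of each free cell
            (-1 for obstacles). The connectivity-repair step of the real
            DARP algorithm is deliberately omitted.
            """
            free = np.argwhere(grid)                     # coords of free cells
            fair_share = len(free) / len(robots)         # target cells/robot
            # distance from every robot to every free cell: (robots, cells)
            dists = np.stack([np.linalg.norm(free - np.array(r), axis=1)
                              for r in robots])
            weights = np.ones(len(robots))               # per-robot scale factors
            for _ in range(iters):
                owner = np.argmin(weights[:, None] * dists, axis=0)
                sizes = np.bincount(owner, minlength=len(robots))
                if np.all(np.abs(sizes - fair_share) < 1.0):
                    break                                # areas are balanced
                # robots owning too many cells get heavier (less attractive)
                weights *= 1.0 + gain * (sizes - fair_share)
            assignment = -np.ones(grid.shape, dtype=int)
            assignment[tuple(free.T)] = owner
            return assignment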

    A general framework of high-performance machine learning algorithms : application in structural mechanics

    Data-driven models utilizing powerful artificial intelligence (AI) algorithms have been implemented over the past two decades in different fields of simulation-based engineering science. Most numerical procedures involve processing data sets developed from physical or numerical experiments to create closed-form formulae that predict the corresponding systems' mechanical response. Efficient AI methodologies that allow the development and use of accurate predictive models for solving computationally intensive engineering problems remain an open issue. In this research work, high-performance machine learning (ML) algorithms are proposed for modeling structural-mechanics problems; they are implemented in parallel and distributed computing environments to address extremely computationally demanding problems. Four machine learning algorithms are proposed and their performance is investigated in three different structural engineering problems. According to the parametric investigation of prediction accuracy, extreme gradient boosting with extended hyper-parameter optimization (XGBoost-HYT-CV) was found to be the most efficient with respect to generalization error, yielding a 4.54% residual error across all test cases considered. Furthermore, a comprehensive statistical analysis of the residual errors and a sensitivity analysis of the predictors with respect to the target variable are reported. Overall, the proposed models were found to outperform existing ML methods; in one case the residual error was reduced 3-fold. These results demonstrate the generic character of the proposed ML framework for structural mechanics problems. Funding: the EuroCC Project (GA 951732) and EuroCC 2 Project (GA 101101903) of the European Commission. Open access funding provided by the University of Pretoria.
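
    The abstract names gradient boosting with hyper-parameter optimization and cross-validation (XGBoost-HYT-CV) as the best performer. The sketch below shows that general recipe using the open-source xgboost and scikit-learn APIs; the data, search grid, and scoring metric are placeholders, not the paper's actual setup.

        import numpy as np
        from sklearn.model_selection import GridSearchCV, train_test_split
        from xgboost import XGBRegressor

        # Placeholder data standing in for a structural-mechanics data set
        # (inputs: design parameters, target: mechanical response).
        rng = np.random.default_rng(0)
        X = rng.uniform(size=(500, 6))
        y = X @ rng.uniform(size=6) + 0.1 * rng.normal(size=500)
        X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

        # Cross-validated search over a small, illustrative hyper-parameter grid.
        search = GridSearchCV(
            XGBRegressor(objective="reg:squarederror"),
            param_grid={
                "n_estimators": [200, 500],
                "max_depth": [3, 6],
                "learning_rate": [0.05, 0.1],
            },
            cv=5,
            scoring="neg_mean_absolute_percentage_error",
        )
        search.fit(X_train, y_train)
        print("best parameters:", search.best_params_)
        print("held-out score :", search.score(X_test, y_test))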

    Autonomous trajectory design system for mapping of unknown sea-floors using a team of AUVs

    This research develops a new on-line trajectory planning algorithm for a team of Autonomous Underwater Vehicles (AUVs). The goal of the AUVs is to cooperatively explore and map the ocean seafloor. As the morphology of the seabed is unknown and complex, standard non-convex algorithms perform inadequately. To tackle this, a new simulation-based approach is proposed and numerically evaluated, adapting the Parametrized Cognitive-based Adaptive Optimization (PCAO) algorithm. The algorithm transforms the exploration problem into a parametrized decision-making mechanism whose real-time implementation is feasible. Building on that transformation, the scheme calculates off-line a set of parameters for the decision-making mechanism that approximate the (practically infeasible) optimal solution. The advantages of the algorithm are its significant computational simplicity and scalability, and the fact that it can straightforwardly embed any type of physical constraints and system limitations. To train the PCAO controller, two morphologically different seafloors are used; during this training, the algorithm outperforms an unrealistic optimal one-step-ahead search algorithm. To demonstrate the universality of the controller, the most effective controller is then used to map three new, morphologically different seafloors. In this latter mapping experiment, the PCAO algorithm outperforms several gradient-descent-like approaches.
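
    The abstract describes an off-line, simulation-driven tuning of a parametrized decision-making mechanism. PCAO's actual update rule is not given here, so the sketch below substitutes a generic gradient-free random-search loop; tune_controller and simulate_coverage are assumed, illustrative names (the latter standing in for a user-supplied seafloor simulation).

        import numpy as np

        def tune_controller(simulate_coverage, dim, rounds=200, pop=16, sigma=0.1):
            """Gradient-free, simulation-based tuning of controller parameters.

            `simulate_coverage(theta)` is an assumed user-supplied function
            that runs the AUV team on a simulated seafloor with controller
            parameters `theta` and returns the mapped-area score.
            """
            rng = np.random.default_rng(0)
            theta = np.zeros(dim)                    # initial parameters
            best = simulate_coverage(theta)
            for _ in range(rounds):
                # sample perturbed candidates around the current parameters
                candidates = theta + sigma * rng.normal(size=(pop, dim))
                scores = np.array([simulate_coverage(c) for c in candidates])
                if scores.max() > best:              # keep the best candidate
                    best = scores.max()
                    theta = candidates[scores.argmax()]
            return theta, best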

    Description and retrieval of visual multimedia content based on intelligent techniques

    The goals set at the beginning of this work, and adjusted along the way, were:
    • The creation of a new image-retrieval evaluation method.
    • The creation of a new family of descriptors that combine more than one low-level feature in a compact vector and that can be incorporated into the pre-existing MPEG-7 standard; the descriptors are constructed via intelligent techniques.
    • The creation of a method for accelerating the search procedure.
    • The investigation of several late-fusion methods for image retrieval.
    • The creation of methods that allow the use of the proposed descriptors in distributed image databases.
    • The development of software containing a large number of descriptors proposed in the literature.
    • The development of open-source libraries that utilize the proposed descriptors as well as the MPEG-7 descriptors.
    • The creation of a new method for encrypting images that utilizes features and parameters from the image-retrieval field.
    • The creation of a new method and system implementation that employs the proposed descriptors to achieve video summarization.
    • The creation of a new method and system implementation for image retrieval based on keywords that are automatically generated via the proposed descriptors.
    • Finally, the creation of a new method and system implementation for multi-modal search, utilizing both low-level elements (originating from the proposed descriptors) and high-level elements (originating from keywords accompanying the images).
    In the past few years there has been a rapid increase in multimedia data, mostly due to the evolution of information technology. One of the main components of multimedia data is visual data, which includes digital images and video. While the production, compression, and propagation of such media have long been subjects of scientific interest, in recent years, precisely because of the growth in the volume of data, a large part of the research has turned towards the management and retrieval of such material. Even though a large number of scientists work in this field, no satisfactory and widely accredited solution to the problem has been proposed. In the course of this thesis, a study was carried out that describes the most commonly used methods for retrieval evaluation and notes their weaknesses. The thesis also proposes a new method of measuring the performance of retrieval systems, together with an extension of this method so that the evaluation of retrieval results takes into account both the size of the database in which the search is executed and the size of the ground truth of each query. The proposed method is generic and can be used for evaluating the retrieval performance of any type of information. The core of the method proposed in this thesis is incorporated into the second thematic unit. This unit includes a number of low-level descriptors whose features originate from the content of the multimedia data they describe. In contrast to MPEG-7, each type of multimedia data is described by a specific group of descriptors, the type of material being determined by the content it describes. The descriptors are derived from fuzzy methods and are characterized by their low storage requirements (23-72 bytes per image).
    Moreover, each descriptor combines more than one feature (i.e., color and texture) in a single structure. This attribute classifies them as composite descriptors. Collectively, the descriptors incorporated into the second thematic unit of the thesis can be described by the general term Compact Composite Descriptors. In its entirety, the second thematic unit contains descriptors for the following types of multimedia material: Category 1: images/video with natural content; Category 2: images/video with artificially generated content; Category 3: images with medical content. For the description and retrieval of multimedia material with natural content, four descriptors were developed. The CEDD includes texture information produced by the six-bin histogram of a fuzzy system that uses the five digital filters proposed by the MPEG-7 EHD. Additionally, for color information the CEDD uses a 24-bin color histogram produced by the 24-bin fuzzy-linking system. Overall, the final histogram has 6 × 24 = 144 regions. The FCTH descriptor includes texture information produced by the eight-bin histogram of a fuzzy system that uses the high-frequency bands of the Haar wavelet transform. For color information, the descriptor uses the same 24-bin color histogram produced by the 24-bin fuzzy-linking system; overall, the final histogram includes 8 × 24 = 192 regions. The method for producing the C.CEDD differs from the CEDD method only in the color unit: the C.CEDD uses a fuzzy ten-bin linking system instead of the fuzzy 24-bin linking system, so the final histogram has only 6 × 10 = 60 regions. Compact CEDD is the smallest descriptor of the proposed set, requiring less than 23 bytes per image. The method for producing C.FCTH differs from the FCTH method only in the color unit; like its C.CEDD counterpart, it uses a fuzzy ten-bin linking system instead of the 24-bin one, so the final histogram includes only 8 × 10 = 80 regions. To restrict the proposed descriptors' length, the normalized bin values are quantized for binary representation at three bits per bin. Experiments conducted on several benchmark image databases demonstrate the effectiveness of the proposed descriptors, which outperform the MPEG-7 descriptors as well as other state-of-the-art descriptors from the literature. The Spatial Color Distribution Descriptor (SpCD) combines color and spatial color-distribution information. Since it captures the layout of color features, it can be used for image retrieval with hand-drawn sketch queries. In addition, descriptors of this structure are considered suitable for colored graphics, since such images contain a relatively small number of colors and fewer texture regions than natural color images. This descriptor uses a new fuzzy-linking system that maps the colors of the image to a custom eight-color palette.
    While the production, compression, and dissemination of multimedia data have long been matters of exceptional scientific interest, in recent years, precisely because of the increase in the volume of data, a large part of the research has turned to the organization and retrieval of this material. The beginnings of the field of automatic organization, archiving, and retrieval of visual multimedia go back to 1992, when the term Content-Based Image Retrieval was first used.
    Since then, a new research field has emerged which, almost 20 years later, remains active. And while the subject initially appeared to belong to the field of Information Retrieval, over the years it has attracted scientists from various disciplines. The Moving Picture Experts Group (MPEG) defined a standard for the description, archiving, and retrieval of audiovisual material: MPEG-7. The standard comprises a set of descriptors, as well as a structure used to store information about the media it archives. As far as visual multimedia material is concerned, the standard uses a group of descriptors for each kind of information it describes; for example, one set of descriptors captures color information, another captures texture characteristics, and so on. The goal of the doctoral research, at its start in 2005, was the development, by analogy with MPEG-7, of a scheme for the description and retrieval of visual multimedia content based on intelligent techniques, with emphasis on fuzzy-logic techniques. Based on its results, the research can be divided into six closely interconnected thematic units. The study of the results a retrieval system produces on a given benchmark image database constitutes the evaluation criterion of the system and the first thematic unit of the thesis. Many of the methods used in the research area of
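
    To make the layout of the compact composite descriptors above concrete, here is a minimal Python sketch of a joint texture-color histogram with 3-bit/bin quantization (the 144-region CEDD arrangement by default). The fuzzy classifiers that assign each image region a texture and color bin are assumed to exist elsewhere, and the uniform quantizer below merely stands in for the trained quantization tables of the actual descriptors.

        import numpy as np

        def compact_composite_descriptor(texture_bins, color_bins, weights,
                                         n_texture=6, n_color=24, bits=3):
            """Joint texture-color histogram in the CEDD-like layout.

            texture_bins / color_bins hold, per image region, the winning
            bins of the (omitted) fuzzy texture and color classifiers.
            Returns n_texture * n_color bins quantized to `bits` bits each,
            i.e. the 144-bin, 3-bit/bin CEDD arrangement by default.
            """
            hist = np.zeros(n_texture * n_color)
            for t, c, w in zip(texture_bins, color_bins, weights):
                hist[t * n_color + c] += w           # joint texture-color bin
            if hist.max() > 0:
                hist /= hist.max()                   # normalize to [0, 1]
            levels = 2 ** bits                       # 3 bits -> 8 levels/bin
            return np.minimum((hist * levels).astype(int), levels - 1)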

    Text localization using standard deviation analysis of structure elements and support vector machines

    <p>Abstract</p> <p>A text localization technique is required to successfully exploit document images such as technical articles and letters. The proposed method detects and extracts text areas from document images. Initially a connected components analysis technique detects blocks of foreground objects. Then, a descriptor that consists of a set of suitable document structure elements is extracted from the blocks. This is achieved by incorporating an algorithm called Standard Deviation Analysis of Structure Elements (SDASE) which maximizes the separability between the blocks. Another feature of the SDASE is that its length adapts according to the requirements of the application. Finally, the descriptor of each block is used as input to a trained support vector machines that classify the block as text or not. The proposed technique is also capable of adjusting to the text structure of the documents. Experimental results on benchmarking databases demonstrate the effectiveness of the proposed method.</p

    Loki+Lire
