
    Feature Extraction and Grouping for Robot Vision Tasks


    Construction of 3D maps and extraction of geometric primitives from the environment

    This work focuses on the construction of 3D maps from range data obtained with a stereo camera. We have divided the reconstruction process into two phases. In the first, using data captured at regular intervals by a robot moving through an environment, we partially solve the odometry error problem by means of a novel 3D matching method. In the second, building on the result of the previous phase, we extract geometric primitives (planes, cylinders, etc.) in order to reconstruct the environment from these primitives. Secretaría de Estado de Educación y Universidades
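
    The abstract only outlines the primitive-extraction phase. As an illustration of the general idea rather than the authors' actual algorithm, the following minimal Python sketch fits a single planar primitive to a point cloud with RANSAC; the input array, distance threshold, and iteration count are assumptions.

        import numpy as np

        def ransac_plane(points, iters=200, dist_thresh=0.02, rng=None):
            """Fit one plane (n, d) with n.p + d = 0 to an (N, 3) point cloud via RANSAC."""
            rng = np.random.default_rng() if rng is None else rng
            best_inliers, best_model = np.array([], dtype=int), None
            for _ in range(iters):
                sample = points[rng.choice(len(points), 3, replace=False)]
                n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
                norm = np.linalg.norm(n)
                if norm < 1e-9:              # degenerate (collinear) sample, skip it
                    continue
                n = n / norm
                d = -n.dot(sample[0])
                dist = np.abs(points @ n + d)      # point-to-plane distances
                inliers = np.where(dist < dist_thresh)[0]
                if len(inliers) > len(best_inliers):
                    best_inliers, best_model = inliers, (n, d)
            return best_model, best_inliers

    Repeating this on the points not yet explained by a plane is one common way to pull out several planar primitives from the same cloud.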

    A robust and fast method for 6DoF motion estimation from generalized 3D data

    Nowadays, an increasing number of robotic applications need to act in real three-dimensional (3D) scenarios. In this paper we present a new 3D registration method, oriented to mobile robotics, that improves on previous Iterative Closest Point based solutions in both speed and accuracy. As an initial step, we apply a computationally cheap method to obtain descriptions of the planar surfaces in a 3D scene. Then, from these descriptions, we apply a force system in order to compute a six-degrees-of-freedom egomotion accurately and efficiently. We describe the basis of our approach and demonstrate its validity with several experiments using different kinds of 3D sensors and different real 3D environments. This work has been supported by project DPI2009-07144 from Ministerio de Educación y Ciencia (Spain) and GRE10-35 from Universidad de Alicante (Spain)
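
    The force-system egomotion computation is only summarized here. Purely as a point of reference, not the authors' method, the sketch below shows the standard SVD-based closed-form estimate of a 6DoF rigid transform from matched 3D points, the step that ICP-style pipelines typically iterate; the matched src/dst arrays are assumptions.

        import numpy as np

        def rigid_transform_3d(src, dst):
            """Least-squares rigid (6DoF) alignment of matched (N, 3) point sets
            via the SVD-based Kabsch method: returns R, t with dst ~ R @ src + t."""
            src_c = src - src.mean(axis=0)
            dst_c = dst - dst.mean(axis=0)
            H = src_c.T @ dst_c                    # cross-covariance of centered sets
            U, _, Vt = np.linalg.svd(H)
            R = Vt.T @ U.T
            if np.linalg.det(R) < 0:               # fix an improper (reflection) solution
                Vt[-1, :] *= -1
                R = Vt.T @ U.T
            t = dst.mean(axis=0) - R @ src.mean(axis=0)
            return R, t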

    JavaVis: An Integrated Computer Vision Library for Teaching Computer Vision

    In this article, we present JavaVis, a new framework oriented to teaching Computer Vision related subjects. It is a computer vision library divided into three main areas: the 2D package covers classical computer vision processing; the 3D package, which includes a complete 3D geometric toolset, is used for 3D vision computing; and the Desktop package provides a tool for graphically designing and testing new algorithms. JavaVis is designed to be easy to use, both for launching and testing existing algorithms and for developing new ones. This work was supported by project GRE10-35 from Universidad de Alicante (Spain) and grant GITE-09017-UA of University of Alicante

    Special issue about advances in Physical Agents

    Nowadays, many Spanish groups are doing research in areas related to physical agents: they use agent-based technologies and concepts, especially in industrial applications, robotics and domotics (physical agents), and in applications related to the information society (software agents), highlighting the similarities and synergies between physical and software agents. In this special issue we present several works from those groups, focusing on recent advances in Physical Agents

    Experiences Using an Open Source Software Library to Teach Computer Vision Subjects

    Machine vision is an important subject in computer science and engineering degrees. For laboratory experimentation, it is desirable to have a complete and easy-to-use tool. In this work we present a Java library oriented to teaching computer vision. We have designed and built the library from scratch with emphasis on readability and understanding rather than on efficiency; however, the library can also be used for research purposes. JavaVis is an open source Java library oriented to the teaching of Computer Vision. It consists of a framework with several features that meet the demands of such courses. It has been designed to be easy to use: the user does not have to deal with internal structures or the graphical interface, and should the student need to add a new algorithm, it can be done simply enough. After sketching the library, we focus on the experience students gain using it in several computer vision courses. Our main goal is to find out whether the students understand what they are doing, that is, how much the library helps them grasp the basic concepts of computer vision. Over the last four years we have conducted surveys to assess how much the students have improved their skills by using this library

    Exploring Transferability on Adversarial Attacks

    In spite of the progress that has been made in the field, the problem of adversarial attacks remains unresolved. The most up-to-date models are still vulnerable, and there is no simple way to defend against these kinds of attacks; even transformers can be affected by this problem, although they have not been extensively studied yet. In this paper, we study transferability, a property of adversarial attacks whereby images generated for one architecture can be transferred to another and still be effective. In real-world scenarios such as self-driving cars, malware detection, and face recognition authentication systems, transferability can lead to security issues. In order to conduct a behavioral analysis, we select a diverse set of networks and measure how effectively the images produced by various attacks transfer among them. We generate adversarial samples for each network and then evaluate them with the other networks to determine the corresponding transferability performance. We observe that all networks are susceptible to transferability attacks, albeit in some cases at the expense of severely distorted images. This study has been funded by the "Methodology for Emotion-Aware Education Based on Artificial Intelligence" project (Programa PROMETEO 2022, CIPROM/2021/017, Conselleria de Innovación, Universidades, Ciencia y Sociedad Digital de la Generalitat Valenciana, Spain)
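
    The abstract does not name the specific attacks used. As a hedged sketch of how such a transferability measurement can be set up, the PyTorch code below crafts FGSM samples on a hypothetical source_model and scores how often they also fool a target_model; the models, epsilon, and image/label tensors are assumptions, and FGSM is only one example of the kind of attack such a study might evaluate.

        import torch
        import torch.nn.functional as F

        def fgsm(model, x, y, eps=8 / 255):
            """Craft FGSM adversarial examples on the source model (inputs in [0, 1])."""
            x_adv = x.clone().detach().requires_grad_(True)
            loss = F.cross_entropy(model(x_adv), y)
            loss.backward()
            return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()

        @torch.no_grad()
        def transfer_success_rate(target_model, x_adv, y):
            """Fraction of adversarial samples that also fool the target model."""
            preds = target_model(x_adv).argmax(dim=1)
            return (preds != y).float().mean().item()

        # Usage sketch (source_model, target_model, images, labels are assumed):
        # x_adv = fgsm(source_model, images, labels)
        # rate = transfer_success_rate(target_model, x_adv, labels)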

    A Malware Detection Approach Based on Feature Engineering and Behavior Analysis

    Cybercriminals are constantly developing new techniques to circumvent the security measures implemented by experts and researchers, so malware is able to evolve very rapidly. In addition, detecting malware across multiple systems is a challenging problem because each computing environment has its own unique characteristics. Traditional techniques, such as signature-based malware detection, have become less effective and have been largely replaced by more modern approaches, such as machine learning and robust cross-platform behavior-based threat detection. Researchers employ these techniques across a variety of data sources, including network traffic, binaries, and behavioral data, to extract relevant features and feed them to models for accurate prediction. The aim of this research is to provide a novel dataset comprising a substantial number of high-quality samples based on software behavior. Because current research lacks a standard representational format for malware behavior, we also present an innovative method for representing malware behavior by converting API calls into 2D images, which builds on previous work. Additionally, we propose and describe the implementation of a new machine learning model based on binary classification (malware or benign software) using the aforementioned novel dataset as its data source, thereby establishing an evaluation baseline. We have conducted extensive experimentation, validating the proposed model with both our novel dataset and real-world data. In terms of metrics, our proposed model outperforms a well-known model that is also based on behavior analysis and has a similar architecture.
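
    The exact API-call-to-image encoding is not given in the abstract. Purely as an illustration of the general idea, the sketch below maps each call in a hypothetical trace to an integer id from a vocabulary and packs the sequence into a fixed-size grayscale grid that a CNN could consume; the vocabulary, image side, and scaling are assumptions.

        import numpy as np

        def api_calls_to_image(call_sequence, vocab, side=64):
            """Encode a sequence of API call names as a side x side grayscale image:
            each call becomes an integer id scaled to [0, 255]; the sequence is
            truncated or zero-padded to fill the grid row by row."""
            ids = np.array([vocab.get(name, 0) for name in call_sequence], dtype=np.float32)
            ids = ids[: side * side]
            padded = np.zeros(side * side, dtype=np.float32)
            padded[: len(ids)] = ids
            img = padded.reshape(side, side)
            return (255 * img / max(len(vocab), 1)).astype(np.uint8)

        # vocab = {"CreateFileW": 1, "RegOpenKeyExA": 2, ...}   # hypothetical mapping
        # image = api_calls_to_image(trace, vocab)              # feed to a CNN classifier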

    Compression and registration of 3D point clouds using GMMs

    3D data sensors provide an enormous amount of information. It is necessary to develop efficient methods to manage this information under certain time, bandwidth or storage space requirements. In this work, we propose a 3D compression and decompression method that also allows the compressed data to be used in a registration process. First, points are selected and grouped using a 3D model based on planar surfaces. Next, we use a fast variant of Gaussian Mixture Models and an Expectation-Maximization algorithm to replace the points grouped in the previous step with a set of Gaussian distributions. These learned models can be used as features to find matches between two consecutive poses and to apply 3D pose registration using RANSAC. Finally, the 3D map can be obtained by decompressing the models. This work has been supported by Spanish Government grant TIN2016-76515-R, with FEDER funds
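
    As a minimal sketch of the compression idea only, the code below fits a stock scikit-learn Gaussian Mixture Model (standard EM, not the fast variant described here, and without the planar grouping step) to a set of 3D points and reconstructs an approximation by sampling; the component count and sample sizes are assumptions.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        def compress_points(points, n_components=16, seed=0):
            """Replace an (N, 3) group of points with a small GMM fitted by EM;
            weights_, means_ and covariances_ form the compressed representation."""
            return GaussianMixture(n_components=n_components, covariance_type="full",
                                   random_state=seed).fit(points)

        def decompress_points(gmm, n_samples=2000):
            """Approximate the original points by sampling from the learned mixture."""
            samples, _ = gmm.sample(n_samples)
            return samples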

    Automatic Understanding and Mapping of Regions in Cities Using Google Street View Images

    The use of semantic representations to achieve place understanding has been widely studied using indoor information. This kind of data can then be used for navigation, localization, and place identification on mobile devices. Nevertheless, applying this approach to outdoor data involves certain non-trivial procedures, such as gathering the information. This problem can be solved by using map APIs that give access to the images captured to build the map of a city. In this paper, we leverage such APIs, which collect images of city streets, to generate a semantic representation of the city, built using a clustering algorithm and semantic descriptors. The main contribution of this work is a new approach to generating a map with semantic information for each area of the city. The proposed method automatically assigns a semantic label to each cluster on the map. This method can be useful in smart cities and autonomous driving approaches thanks to the categorization of the zones in a city. The results show the robustness of the proposed pipeline and the advantages of using Google Street View images, semantic descriptors, and machine learning algorithms to generate semantic maps of outdoor places. These maps properly encode the zones existing in the selected city and are able to reveal new zones between the current ones. This work has been supported by the Spanish Grant PID2019-104818RB-I00 funded by MCIN/AEI/10.13039/501100011033 and by "ERDF A way of making Europe". José Carlos Rangel and Edmanuel Cruz were supported by the Sistema Nacional de Investigación (SNI) of SENACYT, Panama
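
    The abstract does not specify the clustering algorithm or the descriptor format. As a hedged sketch of this kind of pipeline, the code below clusters per-image semantic descriptors (for instance, category scores from a scene classifier) with k-means and names each zone by the majority label among its images; the descriptor layout, number of zones, and labeling rule are assumptions.

        import numpy as np
        from collections import Counter
        from sklearn.cluster import KMeans

        def cluster_city_images(descriptors, top_labels, n_zones=8, seed=0):
            """Group street-level image descriptors into city zones and name each zone
            by the most frequent semantic label among its member images.
            descriptors: (N, D) array of per-image semantic descriptors.
            top_labels:  list of N strings, the top predicted category per image."""
            km = KMeans(n_clusters=n_zones, random_state=seed, n_init=10).fit(descriptors)
            zone_names = {}
            for zone in range(n_zones):
                members = [top_labels[i] for i in np.where(km.labels_ == zone)[0]]
                zone_names[zone] = Counter(members).most_common(1)[0][0] if members else "unknown"
            return km.labels_, zone_names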