205 research outputs found

    Similarity-based data transmission reduction solution for edge-cloud collaborative AI

    Edge-cloud collaborative processing for IoT data is a relatively new approach that tries to solve processing and network issues in IoT systems. It consists of splitting the processing done by a neural network model into an edge part and a cloud part in order to address network, privacy and load issues. However, it also has its shortcomings, such as the large size of the edge part's output that has to be transmitted to the cloud. In this paper, we propose a data transmission reduction method for edge-cloud collaborative solutions based on data similarities in stationary objects. The experiments performed show that we were able to reduce the data sent by 62%.
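A minimal sketch of the idea behind such a filter, assuming a simple mean-absolute-difference similarity test and an illustrative threshold (the paper's actual similarity measure and interface are not detailed here):

```python
# Hypothetical edge-side filter: transmit the edge model's output only when
# it differs enough from the previously transmitted one. For stationary
# objects, consecutive outputs are similar and most transmissions are skipped.

def should_transmit(current, previous, threshold=0.05):
    """Decide whether the current feature vector must be sent to the cloud."""
    if previous is None:
        return True
    # Mean absolute difference as a simple (illustrative) similarity measure.
    diff = sum(abs(a - b) for a, b in zip(current, previous)) / len(current)
    return diff > threshold

sent = 0
last_sent = None
stream = [[0.10, 0.20], [0.10, 0.21], [0.80, 0.10], [0.80, 0.10]]
for features in stream:
    if should_transmit(features, last_sent):
        sent += 1          # here the edge would transmit `features` to the cloud
        last_sent = features
print(sent)  # 2 of the 4 frames are transmitted
```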

    Real-Time Arithmetic Operations Library for Floating-Point Numbers

    High-level language used: Java. The IEEE 754 standard is widely used for the numerical representation of real numbers and is currently followed by manufacturers in many CPU implementations. This standard defines a series of formats for representing floating-point numbers, their special cases and error situations. Many languages specify which IEEE formats and arithmetic they implement; for example, in C/C++ and Java the float type represents single-precision numbers and the double type represents double-precision numbers, and these languages define the basic arithmetic operators (addition, subtraction, multiplication and division) for operating on such numbers. However, these high-level languages do not allow bit-level operations on floating-point numbers, so it is not possible to obtain the value of a specific bit or to apply operators such as bit shifts. This library contains a high-level implementation of a collection of arithmetic functions that operate on numbers encoded in the standard. Its purpose is to provide a functional base for experiments analysing the cost and precision of the different variants of the format. The library also contains an implementation of the basic operations under time constraints, that is, with the ability to fix in advance the moment and the precision of the result.
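The bit-level access that the abstract says high-level languages lack can be emulated by reinterpreting the encoding; a minimal Python sketch (the library itself is in Java, and these helper names are illustrative):

```python
import struct

def double_bits(x):
    """Return the 64-bit IEEE 754 encoding of a double as an integer."""
    return struct.unpack('>Q', struct.pack('>d', x))[0]

def field(bits, lo, width):
    """Extract `width` bits starting at bit position `lo` (0 = least significant)."""
    return (bits >> lo) & ((1 << width) - 1)

b = double_bits(1.0)
sign = field(b, 63, 1)       # sign bit
exponent = field(b, 52, 11)  # biased exponent (bias 1023 for doubles)
mantissa = field(b, 0, 52)   # fraction field
print(sign, exponent, mantissa)  # 0 1023 0 for the value 1.0
```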

    Time-Precision Flexible Adder

    Paper submitted to the 10th IEEE International Conference on Electronics, Circuits and Systems (ICECS), Sharjah, United Arab Emirates, 2003. A new concept of flexible calculation is presented that allows a sum to be adjusted to the available computation time. More specifically, the objective is to obtain a calculation model that makes the processing time/precision trade-off more flexible. The addition method is based on a carry-select adder scheme, and the proposed design uses precalculated data stored in look-up tables, which provide, above all, quality results and systematization in the implementation of the low-level primitives that parameterize the processing time. We report an evaluation of the architecture in terms of area, delay and computation error, as well as a suitable FPGA implementation to validate the design. This work is backed by grant DPI2002-04434-C04-01 from the Ministerio de Ciencia y Tecnología of the Spanish Government.
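A rough software analogue of the time/precision trade-off, assuming a simple bit-truncation scheme rather than the paper's carry-select/look-up-table hardware design:

```python
def flexible_add(a, b, precise_bits, width=16):
    """Add two `width`-bit integers while computing only the top `precise_bits`
    exactly; the low bits are dropped, trading precision for computation time.
    This is a stand-in for the hardware scheme described in the paper."""
    drop = width - precise_bits
    return ((a >> drop) + (b >> drop)) << drop

exact = 0x1234 + 0x0FF1
approx = flexible_add(0x1234, 0x0FF1, precise_bits=8)
# Each operand truncation loses less than 2**drop, so the error is bounded
# by 2**(drop + 1); here drop = 8, so the error is below 512.
print(hex(exact), hex(approx), exact - approx)  # 0x2225 0x2100 293
```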

    Exact Numerical Processing

    Paper submitted to the Euromicro Symposium on Digital Systems Design (DSD), Belek-Antalya, Turkey, 2003. A model of exact arithmetic processing is presented. We describe a representation format that gives us greater expressive capability and covers a wider numerical set. Rational numbers are represented by means of fractional notation and explicit codification of their periodic part. We also give a brief description of the exact arithmetic operations on the proposed format. This model constitutes a good alternative to symbolic arithmetic, especially when exact numerical values are required. As an example, we show an application of exact numerical processing to calculating the vector perpendicular to another one for aerospace purposes. This work is backed by grant DPI2002-04434-C04-01 from the Ministerio de Ciencia y Tecnología of the Spanish Government.
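The explicit codification of a rational's periodic part can be illustrated with a short sketch; the function name, output notation and digit limit are assumptions for illustration, not the paper's actual format:

```python
from fractions import Fraction

def periodic_decimal(fr, max_digits=64):
    """Expand a non-negative Fraction into decimal notation, detecting the
    periodic part (written in parentheses) by watching for repeated remainders."""
    intpart, rem = divmod(fr.numerator, fr.denominator)
    digits, seen = [], {}
    while rem and rem not in seen and len(digits) < max_digits:
        seen[rem] = len(digits)          # remember where this remainder appeared
        rem *= 10
        d, rem = divmod(rem, fr.denominator)
        digits.append(str(d))
    if rem and rem in seen:              # cycle found: mark the periodic part
        i = seen[rem]
        return f"{intpart}." + "".join(digits[:i]) + "(" + "".join(digits[i:]) + ")"
    return f"{intpart}." + "".join(digits)

print(periodic_decimal(Fraction(1, 7)))   # 0.(142857)
print(periodic_decimal(Fraction(1, 6)))   # 0.1(6)
```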

    Cryptography for databases in cloud computing

    The IT managers of companies that are considering migrating their systems to cloud computing have reservations about the security and reliability of cloud-based services; they are not yet fully convinced that handing over sensitive data belonging to the companies or their clients is a good idea. In this context, encryption systems, and in particular homomorphic encryption schemes, are useful, since the operations at the cloud provider are carried out on the encrypted information, providing a level of reliability and security for the databases against both internal and external attacks in cloud computing. This paper proposes a scheme to protect the different attributes of information (confidentiality, integrity and authentication) stored in a database in the cloud.
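A toy illustration of the homomorphic property the abstract relies on, using textbook Paillier (additively homomorphic) with tiny, insecure parameters; the paper's actual scheme may differ, and a real deployment would use a vetted cryptographic library:

```python
import math
import random

# Toy Paillier cryptosystem: the cloud can combine ciphertexts so that the
# underlying plaintexts are added, without ever seeing those plaintexts.
p, q = 47, 59                      # deliberately tiny primes (insecure)
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)
g = n + 1

def L(u):
    return (u - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

a, b = encrypt(42), encrypt(100)
# Multiplying ciphertexts modulo n^2 adds the plaintexts:
print(decrypt((a * b) % n2))  # 142
```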

    Convergence analysis and validation of low cost distance metrics for computational cost reduction of the Iterative Closest Point algorithm

    The Iterative Closest Point (ICP) algorithm is commonly used in engineering applications to solve the rigid registration problem for partially overlapping point sets that are pre-aligned with a coarse estimate of their relative positions. This iterative algorithm is applied in many areas, such as medicine, for the volumetric reconstruction of tomography data; robotics, to reconstruct surfaces or scenes using range sensor information; industrial systems, for quality control of manufactured objects; or even biology, to study the structure and folding of proteins. One of the algorithm's main problems is its high computational complexity (quadratic in the number of points for the non-optimized original variant) in a context where high-density point sets, acquired by high-resolution scanners, are processed. Many variants have been proposed in the literature whose goal is performance improvement, either by reducing the number of points or the required iterations, or by improving the most expensive phase: the closest-neighbour search. Despite decreasing the complexity, some of these variants tend to have a negative impact on the final registration precision or on the convergence domain, thus limiting the possible application scenarios. The goal of this work is to improve the algorithm's computational cost so that a wider range of computationally demanding problems from among those described above can be addressed. For that purpose, an experimental and mathematical convergence analysis and validation of point-to-point distance metrics has been performed, taking into account those distances with a lower computational cost than the Euclidean one, which is the de facto standard for the algorithm's implementations in the literature. In that analysis, the behaviour of the algorithm in diverse topological spaces, characterized by different metrics, has been studied to check the convergence, efficacy and cost of the method, in order to determine the one that offers the best results. Given that the distance calculation represents a significant part of the whole set of computations performed by the algorithm, any reduction in the cost of that operation can be expected to affect the overall performance of the method significantly and positively. As a result, a performance improvement has been achieved by applying those reduced-cost metrics, whose quality in terms of convergence and error has been analysed and validated experimentally as comparable to that of the Euclidean distance, using a heterogeneous set of objects, scenarios and initial situations.
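A minimal sketch of the pluggable-metric idea behind this analysis, using brute-force matching for clarity (real ICP implementations use spatial indices such as k-d trees, and the function names here are illustrative):

```python
# Matching step of ICP with a swappable point-to-point distance metric.

def euclidean(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

def manhattan(p, q):          # cheaper: no squares and no square root
    return sum(abs(a - b) for a, b in zip(p, q))

def closest_pairs(source, target, metric):
    """Pair each source point with its nearest target point under `metric`."""
    return [min(target, key=lambda t: metric(s, t)) for s in source]

src = [(0.0, 0.1), (1.0, 0.9)]
tgt = [(0.0, 0.0), (1.0, 1.0), (5.0, 5.0)]
# For well pre-aligned sets, the cheaper metric often yields the same pairing:
print(closest_pairs(src, tgt, euclidean) == closest_pairs(src, tgt, manhattan))  # True
```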

    Adjustable compression method for still JPEG images

    There are a large number of image processing applications that work with different performance requirements and available resources. Recent advances in image compression focus on reducing image size and processing time, but offer no real-time solutions providing time/quality flexibility for the resulting image, as would be needed, for example, when transmitting the image content of web pages. In this paper we propose a method for encoding still images, based on the JPEG standard, that allows the compression/decompression time cost and the image quality to be adjusted to the needs of each application and to the bandwidth conditions of the network. The real-time control is based on a collection of adjustable parameters relating both to aspects of the implementation and to the hardware on which the algorithm runs. The proposed encoding system is evaluated in terms of compression ratio, processing delay and quality of the compressed image, compared with the standard method.
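A sketch of one kind of adjustable parameter involved, assuming the common libjpeg-style quality scaling of the quantization table (the paper's parameter set is broader and also covers timing):

```python
# Scaling the quantization table is the classic JPEG quality knob: higher
# quality keeps more nonzero DCT coefficients (larger, better image), lower
# quality zeroes more of them (smaller, faster to code). The table row and
# the sample coefficients below are illustrative.

BASE_Q = [16, 11, 10, 16, 24, 40, 51, 61]  # first row of the standard luminance table

def scaled_table(quality):
    """libjpeg-style scaling: quality in 1..100, with 50 giving the base table."""
    s = 5000 // quality if quality < 50 else 200 - 2 * quality
    return [max(1, (q * s + 50) // 100) for q in BASE_Q]

def quantize(coeffs, table):
    return [round(c / q) for c, q in zip(coeffs, table)]

coeffs = [312, -45, 30, 12, 6, 3, 1, 0]    # hypothetical DCT row
hi = quantize(coeffs, scaled_table(90))    # high quality: more coefficients survive
lo = quantize(coeffs, scaled_table(10))    # low quality: most become zero
print(sum(1 for c in hi if c), sum(1 for c in lo if c))  # 5 3
```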

    ServiceNet: resource-efficient architecture for topology discovery in large-scale multi-tenant clouds

    Modern computing infrastructures are evolving due to virtualisation, especially with the advent of 5G and future technologies. While this transition offers numerous benefits, it also presents challenges; consequently, understanding these complex systems, including networks, services and their interconnections, is crucial. This paper introduces ServiceNet, a novel architecture that provides an accurate understanding of a multi-tenant infrastructure by discovering its complete topology, a crucial task in the realm of high-performance distributed computing. Experiments have been carried out in different scenarios to validate our approach, demonstrating its effectiveness in comprehensive multi-tenant topology discovery. The experiments, involving up to forty tenants, highlight the adaptability of ServiceNet as a valuable tool for real-time monitoring and topology discovery, even in challenging scenarios.
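A minimal sketch of the bookkeeping such a discovery service might perform, aggregating observed service-to-service connections into one adjacency map per tenant; the record format and service names are illustrative assumptions (ServiceNet's actual data model is not detailed here):

```python
from collections import defaultdict

def build_topology(flow_records):
    """flow_records: iterable of (tenant, src_service, dst_service) observations.
    Returns {tenant: {service: set of services it talks to}}."""
    topo = defaultdict(lambda: defaultdict(set))
    for tenant, src, dst in flow_records:
        topo[tenant][src].add(dst)   # sets deduplicate repeated observations
    return topo

flows = [
    ("tenant-a", "web", "api"),
    ("tenant-a", "api", "db"),
    ("tenant-a", "web", "api"),      # duplicate flow collapses into one edge
    ("tenant-b", "cache", "db"),
]
topo = build_topology(flows)
print(sorted(topo["tenant-a"]["web"]))  # ['api']
print(len(topo))                        # 2 tenants discovered
```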

    Computational Analysis of Distance Operators for the Iterative Closest Point Algorithm

    The Iterative Closest Point (ICP) algorithm is currently one of the most popular methods for rigid registration, to the point that it has become the standard in the Robotics and Computer Vision communities. Many applications take advantage of it to align 2D/3D surfaces due to its popularity and simplicity. Nevertheless, some of its phases present a high computational cost, rendering some of its applications impossible. In this work, we propose an efficient approach for the matching phase of the Iterative Closest Point algorithm. This stage is the main bottleneck of the method, so any efficiency improvement has a great positive impact on the performance of the algorithm. The proposal consists in using low computational cost point-to-point distance metrics instead of the classic Euclidean one. The candidates analysed are the Chebyshev and Manhattan distance metrics, chosen for their simpler formulation. The experiments carried out have validated the performance, robustness and quality of the proposal. Different experimental cases and configurations have been set up, including a heterogeneous set of 3D figures and several scenarios with partial data and random noise. The results show that an average speed-up of 14% can be obtained while preserving the convergence properties of the algorithm and the quality of the final results.
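The three distance operators under comparison, written out for 3D points; Manhattan and Chebyshev avoid both multiplications and the square root, which is where the reported savings come from:

```python
def euclidean(p, q):
    """L2 distance: squares plus a square root per pair."""
    return ((p[0]-q[0])**2 + (p[1]-q[1])**2 + (p[2]-q[2])**2) ** 0.5

def manhattan(p, q):
    """L1 distance: only absolute differences and additions."""
    return abs(p[0]-q[0]) + abs(p[1]-q[1]) + abs(p[2]-q[2])

def chebyshev(p, q):
    """L-infinity distance: only absolute differences and comparisons."""
    return max(abs(p[0]-q[0]), abs(p[1]-q[1]), abs(p[2]-q[2]))

p, q = (0.0, 0.0, 0.0), (3.0, 4.0, 0.0)
# Note the ordering chebyshev <= euclidean <= manhattan always holds:
print(euclidean(p, q), manhattan(p, q), chebyshev(p, q))  # 5.0 7.0 4.0
```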