
    Programming issues for video analysis on Graphics Processing Units

    Video processing is the branch of signal processing in which the input and/or output signals are video sequences. It covers a wide variety of applications that are, in general, computationally intensive due to their algorithmic complexity. Moreover, many of these applications demand real-time operation. Meeting these requirements makes it necessary to use hardware accelerators such as Graphics Processing Units (GPUs). General-purpose computing on GPUs has been a successful trend in high-performance computing since the release of the NVIDIA CUDA architecture and programming model. This doctoral thesis deals with the efficient parallelization of video processing applications on GPUs. This goal is approached from two directions: on the one hand, programming the GPU properly for video applications; on the other hand, considering the GPU as part of a heterogeneous system. Since video sequences are composed of frames, which are regular data structures, many components of video applications are inherently parallelizable. However, other components are irregular in the sense that they perform workload-dependent computations, suffer from write contention, or contain inherently sequential or load-imbalanced parts. This thesis proposes strategies to address these aspects through several case studies. It also describes an optimized approach to histogram computation based on a memory performance model. Video sequences are continuous streams that must be transferred from the host (CPU) to the device (GPU), and the results from the device back to the host.
This doctoral thesis proposes the use of CUDA streams to implement the stream processing paradigm on the GPU, in order to control the concurrent execution of data transfers and computation. It also proposes performance models that enable optimal execution.
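The overlap of data transfers and computation that CUDA streams enable can be captured by a simple throughput model. The sketch below is a hypothetical three-stage pipeline model (host-to-device copy, kernel, device-to-host copy), not the thesis's actual model; the function names and stage times are illustrative assumptions.

```python
def serial_time(n_frames, t_h2d, t_kernel, t_d2h):
    """Time to process n_frames with no overlap: every frame pays the
    full copy-in + compute + copy-out cost sequentially."""
    return n_frames * (t_h2d + t_kernel + t_d2h)

def pipelined_time(n_frames, t_h2d, t_kernel, t_d2h):
    """Time with CUDA-stream-style overlap, modeled as a 3-stage pipeline:
    after the pipeline fills (one pass through all stages), each extra
    frame only costs the bottleneck stage."""
    stages = [t_h2d, t_kernel, t_d2h]
    return sum(stages) + (n_frames - 1) * max(stages)
```

For example, with 10 frames and stage times of 1, 2, and 1 time units, the serial version takes 40 units while the pipelined version takes 22, since the kernel stage (the bottleneck) hides the copies.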

    La política de la competencia en Colombia y la intervención de sus autoridades.

    The objective of this research was to conduct a hermeneutic philosophical analysis of competition policy in Colombia and the intervention of its authorities, examined through content analysis of legal texts such as laws, case law, and doctrinal works. The unit of analysis was to establish how the Superindustria has influenced legal processes regarding competitiveness in Colombia since the 1991 Constitution came into force. The variables were: to inquire how competition policy has developed in this Latin American country and in comparative law; to examine the conditions that create an obligation to comply with competition regulations; and to carry out a content study of competition policy and the intervention of its authorities in Colombia. The research carried out is pure, basic legal research. The method applied was analytical. Bibliographic records were used as primary sources, and the analysis was performed through content analysis of the laws, case law, and doctrine examined.
The results show that the current Competition Law offers weak protection for commercial establishments and for the income of Colombian consumers, because the national and local regulations created for this purpose have in practice facilitated anticompetitive behavior: the intensity of bad competitive practices among companies has not been reduced, which has resulted in low productivity in the country and inefficient markets, with few sophisticated companies, little productive change, and products that are expensive and of low quality. This particularly affects low-income consumers. The conclusion is that the public policy regulating anticompetitive practices needs new institutional reforms for the country to become productive.

    UCO.MIPSIM: pipelined computer simulator for teaching purposes

    Since pipelining is a very important implementation technique for processors, students in Computer Science need to achieve a good understanding of it. The UCO.MIPSIM simulator has been developed to support teaching such concepts. This paper introduces the basics of pipelining and describes UCO.MIPSIM, its main components, and its functions. The learning effectiveness of the simulator has been tested by comparison with two other learning tools: the traditional paper-and-pencil approach and another pipelined computer simulator. In this way, a tool evaluation methodology is also introduced.
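A core concept such a simulator must expose is hazard-induced stalling. The following sketch is not part of UCO.MIPSIM; the function name and the tuple-based instruction encoding are illustrative assumptions. It counts cycles for a classic 5-stage MIPS pipeline with full forwarding, where only a load-use dependence forces a one-cycle stall.

```python
def pipeline_cycles(instrs):
    """Cycle count for a classic 5-stage MIPS pipeline (IF ID EX MEM WB)
    with full forwarding: 4 cycles of pipeline fill, one cycle per
    instruction, plus one stall per load-use hazard.

    instrs: list of (opcode, dest_reg, src_regs) tuples.
    """
    stalls = 0
    for prev, curr in zip(instrs, instrs[1:]):
        prev_op, prev_dest, _ = prev
        _, _, srcs = curr
        # A loaded value is available only after MEM, so an immediately
        # dependent instruction must stall one cycle even with forwarding.
        if prev_op == "lw" and prev_dest in srcs:
            stalls += 1
    return 4 + len(instrs) + stalls
```

For instance, `lw $1, 0($2)` followed by `add $3, $1, $4` takes 7 cycles (4 fill + 2 instructions + 1 stall), whereas two independent ALU instructions take 6.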

    A Modern Primer on Processing in Memory

    Modern computing systems are overwhelmingly designed to move data to computation. This design choice goes directly against at least three key trends in computing that cause performance, scalability, and energy bottlenecks: (1) data access is a key bottleneck, as many important applications are increasingly data-intensive while memory bandwidth and energy do not scale well; (2) energy consumption is a key limiter in almost all computing platforms, especially server and mobile systems; (3) data movement, especially off-chip to on-chip, is very expensive in terms of bandwidth, energy, and latency, much more so than computation. These trends are especially severely felt in the data-intensive server and energy-constrained mobile systems of today. At the same time, conventional memory technology is facing many technology scaling challenges in terms of reliability, energy, and performance. As a result, memory system architects are open to organizing memory in different ways and making it more intelligent, at the expense of higher cost. The emergence of 3D-stacked memory plus logic, the adoption of error correcting codes inside the latest DRAM chips, the proliferation of different main memory standards and chips specialized for different purposes (e.g., graphics, low-power, high bandwidth, low latency), and the necessity of designing new solutions to serious reliability and security issues, such as the RowHammer phenomenon, are evidence of this trend. This chapter discusses recent research that aims to practically enable computation close to data, an approach we call processing-in-memory (PIM). PIM places computation mechanisms in or near where the data is stored (i.e., inside the memory chips, in the logic layer of 3D-stacked memory, or in the memory controllers), so that data movement between the computation units and memory is reduced or eliminated.
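The claim that moving data costs far more than computing on it can be made concrete with a back-of-the-envelope energy model. The per-operation energies below are illustrative placeholders in the rough orders of magnitude commonly cited for modern process nodes, not figures from this chapter.

```python
# Hypothetical per-operation energies in picojoules (illustrative only).
ENERGY_PJ = {
    "add32": 0.1,        # 32-bit integer add, on-chip
    "dram_word": 640.0,  # fetching one 64-bit word from off-chip DRAM
}

def movement_vs_compute(n_words):
    """Energy (pJ) spent fetching n_words from DRAM vs. summing them."""
    move = n_words * ENERGY_PJ["dram_word"]
    compute = n_words * ENERGY_PJ["add32"]
    return move, compute

move, compute = movement_vs_compute(1_000)
ratio = move / compute  # data movement dominates by orders of magnitude
```

Under these assumed constants, moving the data costs thousands of times more energy than the additions performed on it, which is exactly the imbalance PIM attacks.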

    Homogenization of very thin elastic reticulated structures

    This work is devoted to the homogenization of the anisotropic, linearized elasticity system posed on thin reticulated structures involving several parameters. We show that the result depends on the relative size of the parameters. In every case, we obtain a limit problem where both the microscopic and macroscopic scales appear together. From this problem, we get an asymptotic development which gives an approximation in L2 of the displacements and the linearized strain tensor.
Ministerio de Ciencia y Tecnología
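The asymptotic development mentioned above is, in the spirit of classical two-scale homogenization, an expansion of the displacement in powers of the small parameter. The sketch below shows the generic form of such an expansion; the notation is generic, not necessarily the one used in the paper.

```latex
% Two-scale asymptotic expansion of the displacement field u_\varepsilon,
% with x the macroscopic variable, y = x/\varepsilon the microscopic one,
% and u_1, u_2, ... periodic correctors in y:
u_\varepsilon(x) \approx u_0(x)
  + \varepsilon\, u_1\!\left(x, \tfrac{x}{\varepsilon}\right)
  + \varepsilon^2\, u_2\!\left(x, \tfrac{x}{\varepsilon}\right)
  + \cdots
```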

    Professional certification as a teaching tool in Computer Science Degree at the Higher Polytechnic School of Cordoba

    There are many companies that issue certifications of professional competences in the Engineering field. These certifications are in high demand among employers because they demonstrate the qualification of applicants from a very demanding theoretical and practical point of view. As a result, many students and graduates have enrolled in training courses for the certification exams. One of the most relevant certifying organizations is CISCO, which provides certifications in the field of Computer Networking. A group of professors from the Higher Polytechnic School of Cordoba at the University of Cordoba (Spain) has proposed adapting the contents of several subjects of the official Computer Science Degree so that students cover all the educational requirements (both theoretical and practical) demanded by CISCO for its various Computer Networking certifications.
Moreover, this adaptation must be carried out without modifying the VERIFICA document approved by the Spanish National Agency for Quality Assessment and Accreditation (ANECA).

    PUMA: Efficient and Low-Cost Memory Allocation and Alignment Support for Processing-Using-Memory Architectures

    Processing-using-DRAM (PUD) architectures impose a restrictive data layout and alignment for their operands, where source and destination operands (i) must reside in the same DRAM subarray (i.e., a group of DRAM rows sharing the same row buffer and row decoder) and (ii) must be aligned to the boundaries of a DRAM row. However, standard memory allocation routines (i.e., malloc, posix_memalign, and huge-pages-based memory allocation) fail to meet the data layout and alignment requirements for PUD architectures to operate successfully. To allow the memory allocation API to influence the OS memory allocator and ensure that memory objects are placed within specific DRAM subarrays, we propose a new lazy data allocation routine (in the kernel) for PUD memory objects called PUMA. The key idea of PUMA is to use the internal DRAM mapping information together with huge pages, and then split huge pages into finer-grained allocation units that are (i) aligned to the page address and size and (ii) virtually contiguous. We implement PUMA as a kernel module using QEMU and emulate a RISC-V machine running Fedora 33 with the v5.9.0 Linux kernel. We emulate a PUD system capable of executing row-copy operations (as in RowClone) and Boolean AND/OR/NOT operations (as in Ambit). In our experiments, an operation falls back to the host CPU if it cannot be executed in our PUD substrate (due to data misalignment). PUMA significantly outperforms the baseline memory allocators for all evaluated microbenchmarks and allocation sizes.
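The Boolean AND/OR operations that the PUD substrate supports (as in Ambit) come from triple-row activation, which resolves each bitline to the bitwise majority of three rows; fixing the third (control) row to all zeros or all ones yields AND or OR. A minimal sketch of that logic in plain Python, modeling one bit per cell (the function names are illustrative, not Ambit's interface):

```python
def majority(a, b, c):
    # Triple-row activation resolves each bitline to the majority
    # value of the three simultaneously activated cells.
    return (a & b) | (a & c) | (b & c)

def bulk_and(row_a, row_b):
    # MAJ(A, B, 0) = A AND B: activate with a control row of zeros.
    return [majority(x, y, 0) for x, y in zip(row_a, row_b)]

def bulk_or(row_a, row_b):
    # MAJ(A, B, 1) = A OR B: activate with a control row of ones.
    return [majority(x, y, 1) for x, y in zip(row_a, row_b)]
```

Because both operands and the control row must sit in the same subarray and be row-aligned for this activation trick to work, an allocator such as PUMA is what makes the operation applicable in practice.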