
    The XBOX 360 and Steganography: How Criminals and Terrorists Could Be Going Dark

    Video game consoles have evolved from single-player embedded systems with rudimentary processing and graphics capabilities to multipurpose devices that provide users with functionality parallel to that of contemporary desktop and laptop computers. Besides offering video games with rich graphics and multiuser network play, today's gaming consoles give users the ability to communicate via email, video and text chat; transfer pictures, videos, and files; and surf the World Wide Web. These communication capabilities have, unfortunately, been exploited by people to plan and commit a variety of criminal activities. In an attempt to cover the digital tracks of these unlawful undertakings, anti-forensic techniques, such as steganography, may be utilized to hide or alter evidence. This paper will explore how criminals and terrorists might be using the Xbox 360 to convey messages and files using steganographic techniques. Specific attention will be paid to the going dark problem and the disjoint between forensic capabilities for analyzing traditional computers and forensic capabilities for analyzing video game consoles. Forensic approaches for examining Microsoft's Xbox 360 will be detailed and the resulting evidentiary capabilities will be discussed. Keywords: Digital Forensics, Xbox Gaming Console, Steganography, Terrorism, Cyber Crime.
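    The abstract does not give a concrete scheme, but the steganographic techniques it alludes to are typified by least-significant-bit (LSB) embedding. The sketch below is a minimal, hypothetical illustration over raw bytes; an actual console scenario would target real media formats (images, game saves), and the helper names are our own.

    ```python
    def embed(cover: bytes, message: bytes) -> bytes:
        """Hide each bit of `message` in the LSB of successive cover bytes."""
        bits = [(byte >> i) & 1 for byte in message for i in range(8)]
        if len(bits) > len(cover):
            raise ValueError("cover too small for message")
        out = bytearray(cover)
        for pos, bit in enumerate(bits):
            out[pos] = (out[pos] & 0xFE) | bit  # overwrite only the low bit
        return bytes(out)

    def extract(stego: bytes, n_bytes: int) -> bytes:
        """Recover `n_bytes` of hidden data from the low bits."""
        out = bytearray()
        for i in range(n_bytes):
            byte = 0
            for j in range(8):
                byte |= (stego[i * 8 + j] & 1) << j  # reassemble LSB-first
            out.append(byte)
        return bytes(out)
    ```

    Because only the lowest bit of each carrier byte changes, the cover data remains visually and statistically close to the original, which is exactly what makes such channels hard to detect forensically.
    
    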

    Using the PlayStation3 for speeding up metaheuristic optimization

    Traditional computer software is written for serial computation. To solve an optimization problem, an algorithm or metaheuristic is constructed and implemented as a serial stream of instructions. These instructions are executed on a central processing unit (CPU) on one computer. Parallel computing uses multiple processing elements simultaneously to solve a problem. This is accomplished by breaking the problem into independent parts so that each processing element can execute its part of the algorithm simultaneously with the others. The processing elements can be diverse and include resources such as a single computer with multiple processors, several networked computers, specialized hardware, or any combination of the above. Today most commodity CPU designs include single instructions for some vector processing on multiple (vectorized) data sets, typically known as SIMD (Single Instruction, Multiple Data). Modern video game consoles and consumer computer-graphics hardware rely heavily on vector processing in their architecture. In 2000, IBM, Toshiba and Sony collaborated to create the Cell Broadband Engine (Cell BE), consisting of one traditional microprocessor (called the Power Processing Element or PPE) and eight SIMD co-processing units, the so-called Synergistic Processor Elements (SPEs), which found use in the Sony PlayStation3 among other applications. The computational power of the Cell BE or PlayStation3 can also be used for scientific computing. Examples and applications have been reported in e.g. Kurzak et al. (2008), Bader et al. (2008), Olivier et al. (2007), and Petrini et al. (2007). In this work, the potential of using the PlayStation3 for speeding up metaheuristic optimization is investigated. More specifically, we propose an adaptation of an evolutionary algorithm with embedded simulation for inspection optimization, developed in Van Volsem et al. (2007), Van Volsem (2009a) and Van Volsem (2009b).
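    The core pattern here, farming independent fitness evaluations out to parallel processing elements (the SPEs, in the Cell BE case), can be sketched generically. The objective function and evolutionary-algorithm parameters below are stand-ins, not those of Van Volsem et al.; a thread pool plays the role of the SPEs (real SPE offloading, or multi-process execution, would be needed for genuine parallel speed-up).

    ```python
    import random
    from concurrent.futures import ThreadPoolExecutor

    def fitness(x):
        # Stand-in objective: the sphere function (to be minimized).
        return sum(xi * xi for xi in x)

    def evolve(pop_size=16, dim=4, generations=20, seed=1):
        rng = random.Random(seed)
        pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
        with ThreadPoolExecutor() as pool:
            for _ in range(generations):
                # The evaluations are independent, so they can be dispatched
                # to parallel workers -- the same structure as SPE offloading.
                scores = list(pool.map(fitness, pop))
                ranked = [p for _, p in sorted(zip(scores, pop))]
                parents = ranked[: pop_size // 2]          # truncation selection
                pop = parents + [
                    [xi + rng.gauss(0, 0.5) for xi in rng.choice(parents)]
                    for _ in range(pop_size - len(parents))
                ]
        return min(pop, key=fitness)
    ```

    Because metaheuristics spend most of their time in fitness evaluation (here, the paper's embedded inspection simulation), parallelizing that single step captures most of the available speed-up.
    
    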

    Design, implementation and evaluation of an rb-pi cluster for parallel vision processing

    This project presents the creation of a micro-PC cluster for image analysis. The cluster is built from scratch: an analysis of the tools needed to achieve the goal, an analysis of the expected performance, and the steps for its construction. Single-processor performance scaling reached its limit several generations ago, which has driven a strong shift toward parallel processing. Applications with large computational needs (mathematical calculations, video processing, training, etc.) rely on clusters; however, clusters typically have a very high cost, which restricts their use to supercomputing settings. This work presents the design and implementation of a low-cost cluster based on Raspberry Pi embedded computers, describes the configuration of the cluster for communication by message passing, and, finally, tests the system using a parallel video-processing application.
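    The message-passing organization described above follows a coordinator/worker pattern: a coordinator distributes frame indices to workers, which send results back. The abstract does not name the library (on a Raspberry Pi cluster this would typically be an MPI implementation); as a single-machine illustration, Python threads and queues stand in for the cluster nodes and MPI messages, and the "vision" workload is a placeholder.

    ```python
    import threading
    import queue

    def worker(tasks, results):
        while True:
            frame_id = tasks.get()
            if frame_id is None:          # poison pill: shut this worker down
                break
            # Placeholder "vision" workload standing in for real frame analysis.
            results.put((frame_id, frame_id * frame_id))

    def run_cluster(n_workers=2, n_frames=8):
        tasks, results = queue.Queue(), queue.Queue()
        threads = [threading.Thread(target=worker, args=(tasks, results))
                   for _ in range(n_workers)]
        for t in threads:
            t.start()
        for i in range(n_frames):         # coordinator: send one message per frame
            tasks.put(i)
        for _ in threads:
            tasks.put(None)
        out = dict(results.get() for _ in range(n_frames))
        for t in threads:
            t.join()
        return out
    ```

    On the actual cluster, each worker would be a separate Raspberry Pi node and the two queues would be replaced by MPI send/receive calls, but the task-distribution logic is the same.
    
    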

    A Parallel Histogram-based Particle Filter for Object Tracking on SIMD-based Smart Cameras

    We present a parallel implementation of a histogram-based particle filter for object tracking on smart cameras based on SIMD processors. We specifically focus on parallel computation of the particle weights and parallel construction of the feature histograms, since these are the major bottlenecks in standard implementations of histogram-based particle filters. The proposed algorithm can be applied with any histogram-based feature set: we show in detail how the parallel particle filter can employ simple color histograms as well as more complex histograms of oriented gradients (HOG). The algorithm was successfully implemented on a SIMD processor and performs robust object tracking at up to 30 frames per second, a performance difficult to achieve even on a modern desktop computer.
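    The weight-update step that the paper parallelizes can be sketched serially. A standard choice for histogram-based tracking (assumed here; the paper's exact likelihood model is not given in the abstract) scores each particle by the Bhattacharyya similarity between its patch histogram and a reference histogram; the histograms below are plain lists, whereas a real implementation would extract them from image patches, with a SIMD processor building many in parallel.

    ```python
    import math

    def normalize(hist):
        total = sum(hist)
        return [h / total for h in hist]

    def bhattacharyya(p, q):
        # Similarity of two discrete distributions; 1.0 means identical.
        return sum(math.sqrt(pi * qi) for pi, qi in zip(p, q))

    def particle_weights(reference, particle_hists, sigma=0.1):
        ref = normalize(reference)
        weights = []
        for hist in particle_hists:
            rho = bhattacharyya(ref, normalize(hist))
            d = math.sqrt(max(0.0, 1.0 - rho))        # Bhattacharyya distance
            weights.append(math.exp(-d * d / (2 * sigma ** 2)))
        total = sum(weights)
        return [w / total for w in weights]           # normalized weights
    ```

    Each particle's weight is independent of the others, which is exactly why this step maps so well onto SIMD lanes.
    
    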

    A Survey of Techniques For Improving Energy Efficiency in Embedded Computing Systems

    Recent technological advances have greatly improved the performance and features of embedded systems. With the number of mobile devices alone now approaching the population of Earth, embedded systems have truly become ubiquitous. These trends, however, have also made the task of managing their power consumption extremely challenging. In recent years, several techniques have been proposed to address this issue. In this paper, we survey the techniques for managing power consumption of embedded systems. We discuss the need for power management and provide a classification of the techniques along several important parameters to highlight their similarities and differences. This paper is intended to help researchers and application developers gain insight into the working of power management techniques and design even more efficient high-performance embedded systems of tomorrow.

    Fuzzy memoization for floating-point multimedia applications

    Instruction memoization is a promising technique to reduce the power consumption and increase the performance of future low-end/mobile multimedia systems. Power and performance efficiency can be improved by reusing instances of an already executed operation. Unfortunately, this technique may not always be worth the effort due to the power consumption and area impact of the tables required to leverage an adequate level of reuse. In this paper, we introduce and evaluate a novel way of understanding multimedia floating-point operations based on the fuzzy computation paradigm: performance and power consumption can be improved at the cost of small precision losses in computation. By exploiting this implicit characteristic of multimedia applications, we propose a new technique called tolerant memoization. This technique expands the capabilities of classic memoization by associating entries with similar inputs to the same output. We evaluate this new technique by measuring the effect of tolerant memoization for floating-point operations in a low-power multimedia processor and discuss the trade-offs between performance and quality of the media outputs. We report energy improvements of 12 percent for a set of key multimedia applications with a small LUT of 6 Kbytes, compared to 3 percent obtained using previously proposed techniques.
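    The key idea, mapping similar inputs to the same table entry, can be sketched in software. Rounding each operand to a fixed number of fractional bits before the lookup is our simplification of the hardware scheme; the actual paper operates on floating-point operand bit patterns inside the processor, and the parameter choices below are illustrative only.

    ```python
    def make_tolerant_mul(bits=8):
        """Memoized multiply that tolerates small input differences."""
        table = {}
        scale = 1 << bits
        def quantize(x):
            # Keep only `bits` fractional bits, so nearby inputs collide.
            return round(x * scale) / scale
        def mul(a, b):
            key = (quantize(a), quantize(b))
            if key not in table:              # miss: compute and cache
                table[key] = key[0] * key[1]
            return table[key]                 # hit: reuse the earlier result
        return mul, table

    mul, table = make_tolerant_mul()
    y1 = mul(0.123456, 2.0)
    y2 = mul(0.123457, 2.0)   # nearly identical inputs hit the same entry
    ```

    Coarser quantization (fewer bits) raises the hit rate, and thus the energy savings, at the cost of larger output error, which is precisely the performance/quality trade-off the paper measures on media workloads.
    
    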

    Challenging the Computational Metaphor: Implications for How We Think

    This paper explores the role of the traditional computational metaphor in our thinking as computer scientists, its influence on epistemological styles, and its implications for our understanding of cognition. It proposes to replace the conventional metaphor (a sequence of steps) with the notion of a community of interacting entities, and examines the ramifications of such a shift on these various ways in which we think.