
    Will the exploratory behavior of lobsters decrease as they become familiar with their environment?

    Previous studies have shown that most lobsters have a home range in which they reside on a daily basis. The tendency of lobsters to reside in a particular area suggests that they are able to learn the characteristics of an area through exploratory behavior. We hypothesize that the exploratory behavior of juvenile lobsters will decrease as time spent in a novel environment increases; specifically, exploratory behavior will decline as the lobsters progressively learn the environment. Exploratory activity was monitored in six juvenile lobsters using two mazes of different complexity. The lobsters were video recorded, and activity was measured as the distance traveled each day. Each lobster was kept in its maze for ten days; three lobsters were tested in the simple maze and three in the complex maze. One lobster tested in the simple maze behaved as hypothesized, showing a continuous decline in activity over several days (from 260.55 cm/day to 45.8 cm/day by Day 7) before reaching a constant baseline level. Another lobster tested in the simple maze was active only at night and showed a steady decline in nighttime activity. Only one of the lobsters tested in the complex maze showed any decline in activity. Overall, these results suggest that lobsters can learn at least some features of a simple maze within seven days, and that they need far more than ten days to learn the environment of a more complex maze.

    CU2CL: A CUDA-to-OpenCL Translator for Multi- and Many-core Architectures

    The use of graphics processing units (GPUs) in high-performance parallel computing continues to become more prevalent, often as part of a heterogeneous system. For years, CUDA has been the de facto programming environment for nearly all general-purpose GPU (GPGPU) applications. In spite of this, the framework is available only on NVIDIA GPUs, traditionally requiring reimplementation in other frameworks in order to utilize additional multi- or many-core devices. On the other hand, OpenCL provides an open and vendor-neutral programming environment and runtime system. With implementations available for CPUs, GPUs, and other types of accelerators, OpenCL therefore holds the promise of a “write once, run anywhere” ecosystem for heterogeneous computing. Given the many similarities between CUDA and OpenCL, manually porting a CUDA application to OpenCL is typically straightforward, albeit tedious and error-prone. In response to this issue, we created CU2CL, an automated CUDA-to-OpenCL source-to-source translator with a novel design that makes clever reuse of the Clang compiler framework. Currently, the CU2CL translator covers the primary constructs found in the CUDA runtime API, and we have successfully translated many applications from the CUDA SDK and the Rodinia benchmark suite. The performance of applications automatically translated by CU2CL is on par with that of their manually ported counterparts.
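
    To illustrate the kind of mapping a CUDA-to-OpenCL translator must perform, the minimal sketch below is a CUDA vector-add program whose comments note rough OpenCL counterparts of each runtime-API construct. The kernel name, sizes, and the specific pairings are illustrative assumptions and are not taken from CU2CL's actual translated output.

    // Minimal CUDA vector-add program. Comments note rough OpenCL
    // counterparts of each CUDA construct; the pairings are illustrative,
    // not CU2CL output.
    #include <cuda_runtime.h>
    #include <cstdio>

    // __global__ kernel  ->  OpenCL: a __kernel function compiled from .cl source
    __global__ void vecAdd(const float *a, const float *b, float *c, int n) {
        // blockIdx/blockDim/threadIdx  ->  OpenCL: get_global_id(0)
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) c[i] = a[i] + b[i];
    }

    int main() {
        const int n = 1 << 20;
        const size_t bytes = n * sizeof(float);
        float *a, *b, *c;

        // cudaMalloc  ->  OpenCL: clCreateBuffer
        cudaMalloc((void **)&a, bytes);
        cudaMalloc((void **)&b, bytes);
        cudaMalloc((void **)&c, bytes);
        cudaMemset(a, 0, bytes);   // cudaMemset  ->  OpenCL: clEnqueueFillBuffer
        cudaMemset(b, 0, bytes);

        // <<<grid, block>>> launch  ->  OpenCL: clSetKernelArg + clEnqueueNDRangeKernel
        vecAdd<<<(n + 255) / 256, 256>>>(a, b, c, n);

        // cudaDeviceSynchronize  ->  OpenCL: clFinish
        cudaDeviceSynchronize();

        // cudaFree  ->  OpenCL: clReleaseMemObject
        cudaFree(a); cudaFree(b); cudaFree(c);
        printf("vector add completed\n");
        return 0;
    }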

    An Introduction to CUDA Programming

    Graphics boards have become so powerful that they are now used for mathematical computations, such as matrix multiplication and transposition, which are required for complex visual and physics simulations in computer games. NVIDIA has supported this trend by releasing the CUDA (Compute Unified Device Architecture) interface library, which allows application developers to write code that can be uploaded to an NVIDIA-based card for execution by NVIDIA's massively parallel GPUs. This paper is an introduction to CUDA programming based on the documentation in [2] and [4].
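
    As a minimal sketch of the programming model the abstract refers to, the program below performs one of the computations it mentions, a naive matrix transpose, on the GPU: each thread copies one element of an n x n matrix to its transposed position. The kernel name, matrix size, and launch configuration are illustrative assumptions, not taken from the paper.

    // Naive CUDA matrix transpose: each thread copies one element of an
    // n x n matrix from d_in into its transposed position in d_out.
    #include <cuda_runtime.h>
    #include <cstdio>
    #include <cstdlib>

    __global__ void transpose(const float *in, float *out, int n) {
        int x = blockIdx.x * blockDim.x + threadIdx.x;  // column index
        int y = blockIdx.y * blockDim.y + threadIdx.y;  // row index
        if (x < n && y < n)
            out[x * n + y] = in[y * n + x];
    }

    int main() {
        const int n = 1024;
        const size_t bytes = (size_t)n * n * sizeof(float);

        // Host buffers, filled with a simple test pattern.
        float *h_in = (float *)malloc(bytes);
        float *h_out = (float *)malloc(bytes);
        for (int i = 0; i < n * n; ++i) h_in[i] = (float)i;

        // Device buffers and host-to-device copy.
        float *d_in, *d_out;
        cudaMalloc((void **)&d_in, bytes);
        cudaMalloc((void **)&d_out, bytes);
        cudaMemcpy(d_in, h_in, bytes, cudaMemcpyHostToDevice);

        // One thread per element, in 16 x 16 blocks.
        dim3 block(16, 16);
        dim3 grid((n + block.x - 1) / block.x, (n + block.y - 1) / block.y);
        transpose<<<grid, block>>>(d_in, d_out, n);
        cudaMemcpy(h_out, d_out, bytes, cudaMemcpyDeviceToHost);

        // Element (0,1) of the output should equal element (1,0) of the input.
        printf("out[0][1] = %.0f, expected %.0f\n", h_out[1], h_in[n]);

        cudaFree(d_in); cudaFree(d_out);
        free(h_in); free(h_out);
        return 0;
    }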