
    DeLTA: GPU Performance Model for Deep Learning Applications with In-depth Memory System Traffic Analysis

    Training convolutional neural networks (CNNs) requires intense compute throughput and high memory bandwidth. In particular, convolution layers account for the majority of the execution time of CNN training, and GPUs are commonly used to accelerate these workloads. Optimizing GPU designs for efficient CNN training acceleration requires accurately modeling how performance improves as compute and memory resources are scaled. We present DeLTA, the first analytical model that accurately estimates the traffic at each level of the GPU memory hierarchy while accounting for the complex reuse patterns of a parallel convolution algorithm. We demonstrate that our model is both accurate and robust across different CNNs and GPU architectures. We then show how the model can be used to carefully balance the scaling of different GPU resources for efficient CNN performance improvement.
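The kind of first-order analysis such a model refines can be sketched as follows. This is a hypothetical roofline-style estimate assuming perfect on-chip reuse of inputs, weights and outputs — not DeLTA's actual per-level traffic model, which accounts for finite cache capacities and the reuse pattern of the parallel convolution algorithm:

```python
# Hypothetical sketch (not DeLTA): lower-bound DRAM traffic and arithmetic
# intensity of one forward conv layer, assuming every tensor is read or
# written off-chip exactly once (i.e. perfect on-chip reuse).

def conv_layer_stats(n, c, k, h, w, r, s, bytes_per_elem=4):
    """N x C x H x W input, K filters of size C x R x S,
    'same' padding so the output is N x K x H x W."""
    flops = 2 * n * k * h * w * c * r * s          # each MAC counts as 2 ops
    traffic = bytes_per_elem * (
        n * c * h * w      # read input activations once
        + k * c * r * s    # read weights once
        + n * k * h * w    # write output activations once
    )
    intensity = flops / traffic                    # FLOPs per DRAM byte
    return flops, traffic, intensity

# Illustrative layer: batch 32, 64 -> 128 channels, 56x56 maps, 3x3 filters
flops, traffic, ai = conv_layer_stats(n=32, c=64, k=128, h=56, w=56, r=3, s=3)
print(f"{flops/1e9:.1f} GFLOP, {traffic/1e6:.1f} MB, {ai:.0f} FLOP/byte")
```

Comparing this arithmetic intensity against a GPU's FLOP-to-bandwidth ratio indicates whether the layer is compute- or memory-bound under ideal reuse; a model like DeLTA tightens the traffic estimate at each cache level.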

    Use of human pluripotent stem cells to define initiating molecular mechanisms of cataract for anti-cataract drug discovery

    Cataract is a leading cause of blindness worldwide. Currently, restoration of vision in cataract patients requires surgical removal of the cataract. Due to the large and increasing number of cataract patients, the annual cost of surgical cataract treatment amounts to billions of dollars. Limited access to functional human lens tissue during the early stages of cataract formation has hampered efforts to develop effective anti-cataract drugs. The ability of human pluripotent stem (PS) cells to generate large numbers of normal or diseased human cell types raises the possibility that human PS cells may provide a new avenue for defining the molecular mechanisms responsible for different types of human cataract. Towards this end, methods have been established to differentiate human PS cells into both lens cells and transparent, light-focusing human micro-lenses. Sensitive and quantitative assays to measure the light transmittance and focusing ability of human PS cell-derived micro-lenses have also been developed. This review will, therefore, examine how human PS cell-derived lens cells and micro-lenses might provide a new avenue for the development of much-needed drugs to treat human cataract.

    Cameras and settings for aerial surveys in the geosciences: Optimising image data

    Aerial image capture has become very common within the geosciences due to the increasing affordability of low-payload (<20 kg) unmanned aerial vehicles (UAVs) for consumer markets. Their application to surveying has subsequently led to many studies being undertaken using UAV imagery and derived products as primary data sources. However, image quality and the principles of image capture are seldom given rigorous discussion. In this contribution we first revisit the underpinning concepts of image capture, from which the requirements for acquiring sharp, well-exposed and suitable image data are derived. Second, the platform, camera, lens and imaging settings relevant to image-quality planning are discussed, with worked examples to guide users through the factors to consider when capturing high-quality imagery for geoscience investigations. Given a target feature size and a ground sample distance based on mission objectives, the flight height and velocity should be calculated to keep motion blur to a minimum. We recommend using a camera with as large a sensor as the aerial platform permits (to maximise sensor sensitivity), effective focal lengths of 24–35 mm (to minimise errors due to lens distortion) and optimising ISO (to ensure the shutter speed is fast enough to minimise motion blur). Finally, we give recommendations for the reporting of results, in order to help improve the confidence in, and reusability of, surveys: providing open-access imagery where possible, presenting example images and excerpts, and detailing appropriate metadata to rigorously describe the image capture process.
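The flight-planning calculation described above — choosing a height for a target ground sample distance (GSD), then bounding velocity so motion blur stays within a pixel budget — follows from the pinhole camera model. A minimal sketch, with illustrative parameter values that are assumptions rather than figures from the paper:

```python
# Illustrative flight-planning helper: pinhole model gives
#   GSD = pixel_pitch * height / focal_length,
# and motion blur in pixels = velocity * shutter_time / GSD.

def flight_plan(gsd_m, focal_len_mm, pixel_pitch_um,
                shutter_s, max_blur_px=1.0):
    pixel_pitch_m = pixel_pitch_um * 1e-6
    focal_len_m = focal_len_mm * 1e-3
    # Height that yields the requested ground sample distance
    height_m = gsd_m * focal_len_m / pixel_pitch_m
    # Fastest velocity that keeps blur within the pixel budget
    max_velocity = max_blur_px * gsd_m / shutter_s
    return height_m, max_velocity

# e.g. a 2 cm GSD with a 24 mm lens, 4.5 um pixels and a 1/1000 s shutter
height, velocity = flight_plan(gsd_m=0.02, focal_len_mm=24,
                               pixel_pitch_um=4.5, shutter_s=1/1000)
```

With these assumed numbers the survey would fly at roughly 107 m, and staying below the returned velocity keeps blur under one pixel at the chosen shutter speed.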

    TRECVID 2004 experiments in Dublin City University

    In this paper, we describe our experiments for the TRECVID 2004 Search task. In the interactive search task, we developed two versions of a video search/browse system based on the Físchlár Digital Video System: one with text- and image-based searching (System A); the other with image searching only (System B). These two systems produced eight interactive runs. In addition, we submitted ten fully automatic supplemental runs and two manual runs.
    A.1, Submitted runs:
    • DCUTREC13a_{1,3,5,7} for System A: four interactive runs based on text and image evidence.
    • DCUTREC13b_{2,4,6,8} for System B: four interactive runs based on image evidence alone.
    • DCUTV2004_9: a manual run based on filtering faces from an underlying text search engine for certain queries.
    • DCUTV2004_10: a manual run based on manually generated queries processed automatically.
    • DCU_AUTOLM{1,2,3,4,5,6,7}: seven fully automatic runs based on language models operating over ASR text transcripts and visual features.
    • DCUauto_{01,02,03}: three fully automatic runs exploring the benefits of multiple sources of text evidence and automatic query expansion.
    A.2, The interactive experiment confirmed that text- and image-based retrieval outperforms an image-only system. In the fully automatic runs DCUauto_{01,02,03}, we found that integrating ASR, CC and OCR text into the text ranking outperforms using ASR text alone. Furthermore, applying automatic query expansion to the initial ASR, CC and OCR results further increases performance (MAP), though not at high rank positions. For the language model-based fully automatic runs, DCU_AUTOLM{1,2,3,4,5,6,7}, we found that interpolated language models perform marginally better than the other tested language models, and that combining image and textual (ASR) evidence marginally increased performance (MAP) over textual models alone. For our two manual runs, we found that employing a face filter reduced MAP compared to employing textual evidence alone, and that manually generated textual queries improved MAP over fully automatic runs, though the improvement was marginal.
    A.3, Our conclusions from the fully automatic text-based runs are that integrating ASR, CC and OCR text into the retrieval mechanism boosts retrieval performance over ASR alone, and that a text-only language modelling approach such as DCU_AUTOLM1 outperforms our best conventional text search system. From our interactive runs we conclude that textual evidence is an important lever for locating relevant content quickly, but that image evidence, when used by experienced users, can aid retrieval performance.
    A.4, We learned that incorporating multiple text sources improves over ASR alone, and that an LM approach integrating shot text, neighbouring shots and the entire video content provides even better retrieval performance. These findings will influence how we integrate textual evidence into future video IR systems. We also found that a system based on image evidence alone can perform reasonably well and, given good query images, can aid retrieval performance.
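The interpolated language models mentioned above can be illustrated with a query-likelihood sketch in which a shot's term probabilities are smoothed across the shot's own ASR text, the whole video, and the collection. The field choices and interpolation weights here are illustrative assumptions, not the settings used in the DCU_AUTOLM runs:

```python
# Hedged sketch of Jelinek-Mercer interpolation for shot retrieval:
# P(term) = l1*P(term|shot) + l2*P(term|video) + l3*P(term|collection).
# Weights (0.6, 0.3, 0.1) are placeholders, not the authors' values.

from collections import Counter

def lm_score(query_terms, shot_text, video_text, collection_text,
             lambdas=(0.6, 0.3, 0.1)):
    models = []
    for text in (shot_text, video_text, collection_text):
        tokens = text.lower().split()
        models.append((Counter(tokens), max(len(tokens), 1)))
    score = 1.0
    for term in query_terms:
        # Smoothed probability: missing terms still get collection mass
        p = sum(lam * counts[term] / total
                for lam, (counts, total) in zip(lambdas, models))
        score *= p  # query likelihood: product over query terms
    return score

# Rank two shots for a one-word query
video = "news president election"
collection = "the a news sports president"
s1 = lm_score(["president"], "president speaks at podium", video, collection)
s2 = lm_score(["president"], "weather forecast rain", video, collection)
```

Shots are then ranked by descending score; because the video and collection models contribute mass even when a term is absent from the shot text, off-topic shots score low but never exactly zero unless the term is unseen everywhere.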