2,865 research outputs found

    Supporting real time video over ATM networks

    In this project, we propose and evaluate an approach to delimit and tag independent video slices at the ATM layer for early discard. This involves the use of a tag cell, differentiated from the rest of the data by its PTI value, and a modified tag switch to facilitate the selective discarding of affected cells within each video slice, as opposed to dropping cells at random from multiple video frames.
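
    The slice-level discard idea can be illustrated with a small sketch. This is not the authors' switch implementation; the Cell structure, the TAG_PTI code point, and the buffer_has_room predicate are illustrative assumptions.

    from dataclasses import dataclass
    from typing import Callable, Iterable, List

    TAG_PTI = 0b110  # assumed PTI code point marking a slice-boundary tag cell

    @dataclass
    class Cell:
        pti: int
        payload: bytes

    def forward_with_slice_discard(cells: Iterable[Cell],
                                   buffer_has_room: Callable[[], bool]) -> List[Cell]:
        """Forward cells; once one cell of a slice is lost, drop the remainder of
        that slice (up to the next tag cell) instead of dropping cells at random
        from multiple frames."""
        forwarded: List[Cell] = []
        dropping = False
        for cell in cells:
            if cell.pti == TAG_PTI:          # tag cell: a new slice begins
                dropping = False
                forwarded.append(cell)
                continue
            if dropping or not buffer_has_room():
                dropping = True              # discard the rest of the current slice
                continue
            forwarded.append(cell)
        return forwarded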

    CLOUD LIVE VIDEO TRANSFER

    As multimedia content continues to grow, more effective storage options, such as cloud technologies, become increasingly attractive. While video has become a mainstream media source on the web, live video streaming is growing into a prominent player in the modern marketplace for both businesses and individuals. For instance, a business owner may want to oversee operations while away, or an individual may want to monitor their property. In this work, we propose Cloud Live Video Streaming (CLVS), an efficient method for streaming live video with a pricing model distinct from that of modern video streaming services. The key component of CLVS is Amazon Simple Storage Service (S3), which is used to store video segments and metadata. By using S3, CLVS employs a "serverless" design, removing the need to stream video through an intermediary server. CLVS also removes the need for third-party accounts and license agreements. We implement a prototype of CLVS and compare it with an existing commercial video streaming product, Wowza Streaming Engine. As live video streaming becomes more common, alternative and cost-effective solutions are essential.
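
    A minimal sketch of the storage side of such a serverless design, assuming boto3 and a hypothetical bucket name; the actual segment and metadata layout used by CLVS is not specified here.

    import json
    import time

    import boto3

    s3 = boto3.client("s3")
    BUCKET = "clvs-demo"  # hypothetical bucket name

    def publish_segment(stream_id: str, seq: int, data: bytes) -> None:
        """Upload one video segment to S3 and refresh a small metadata object
        that tells viewers which segment is the latest, with no streaming
        server in between."""
        key = f"{stream_id}/segment-{seq:06d}.ts"
        s3.put_object(Bucket=BUCKET, Key=key, Body=data)
        meta = {"latest": seq, "key": key, "updated": time.time()}
        s3.put_object(Bucket=BUCKET, Key=f"{stream_id}/meta.json",
                      Body=json.dumps(meta).encode())

    Viewers would then poll meta.json and fetch the referenced segments directly from S3, which is what removes the intermediary streaming server.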

    Identification and measurement of tropical tuna species in purse seiner catches using computer vision and deep learning

    Fishery monitoring programs are essential for effective management of marine resources, as they provide scientists and managers with the data needed both for the preparation of scientific advice and for fisheries control and surveillance. The monitoring is generally done by human observers, both in port and onboard, at a high cost. Consequently, some Regional Fisheries Management Organizations (RFMO) are opting for electronic monitoring (EM) as an alternative or complement to human observers in certain fisheries. This is the case of the tropical tuna purse seine fishery operating in the Indian and Atlantic oceans, which started an EM program on a voluntary basis in 2017. However, even when the monitoring is conducted through EM, the image analysis is a tedious task performed manually by experts. In this paper, we propose a cost-effective methodology for the automatic processing of the images already being collected by cameras onboard tropical tuna purse seiners. Firstly, the images are preprocessed to homogenize them across all vessels and facilitate subsequent steps. Secondly, the fish are individually segmented using a deep neural network (Mask R-CNN). Then, all segments are passed through another deep neural network (ResNet50V2) to classify them by species and estimate their size distribution. For the classification of fish, we achieved an accuracy of over 70% across all species, i.e., about 3 out of 4 individuals are correctly classified to their corresponding species. The size distribution estimates are aligned with official port measurements but calculated using a larger number of individuals. Finally, we also propose improvements to the current image capture systems which can facilitate the work of the proposed automation methodology. This project is funded by the Basque Government and the Spanish fisheries ministry through the EU Next Generation funds. Jose A. Fernandes' work has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 869342 (SusTunTech). This work is supported in part by the University of the Basque Country UPV/EHU grant GIU19/027. We want to thank the expert analysts who helped to annotate images with incredible effort: Manuel Santos and Inigo Krug. We would also like to extend our gratitude to Marine Instruments for providing the necessary equipment to collect the data. This paper is contribution no. 1080 from AZTI, Marine Research, Basque Research and Technology Alliance (BRTA).
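
    The two-stage pipeline (instance segmentation followed by species classification) can be sketched as follows; torchvision's pretrained Mask R-CNN and ResNet-50 stand in for the paper's trained models (the paper uses ResNet50V2), and cropping by bounding box rather than by the predicted mask, plus the 0.5 score threshold, are simplifying assumptions.

    import torch
    import torchvision
    from torchvision.transforms.functional import crop, resize

    segmenter = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT").eval()
    classifier = torchvision.models.resnet50(weights="DEFAULT").eval()  # stand-in for ResNet50V2

    @torch.no_grad()
    def classify_fish(image: torch.Tensor) -> list:
        """image: float tensor (3, H, W) in [0, 1]; returns one class id per detected fish."""
        detections = segmenter([image])[0]
        results = []
        for box, score in zip(detections["boxes"], detections["scores"]):
            if score < 0.5:                                # assumed confidence threshold
                continue
            x1, y1, x2, y2 = box.int().tolist()
            patch = crop(image, y1, x1, y2 - y1, x2 - x1)  # cut out the detected fish
            patch = resize(patch, [224, 224]).unsqueeze(0)
            results.append(classifier(patch).argmax(dim=1).item())
        return results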

    Data preparation for artificial intelligence in medical imaging: A comprehensive guide to open-access platforms and tools

    The vast amount of data produced by today's medical imaging systems has led medical professionals to turn to novel technologies in order to handle their data efficiently and exploit the rich information present in them. In this context, artificial intelligence (AI) is emerging as one of the most prominent solutions, promising to revolutionise everyday clinical practice and medical research. The pillar supporting the development of reliable and robust AI algorithms is the appropriate preparation of the medical images to be used by the AI-driven solutions. Here, we provide a comprehensive guide to the steps necessary to prepare medical images prior to developing or applying AI algorithms. The main steps involved in a typical medical image preparation pipeline include: (i) image acquisition at clinical sites, (ii) image de-identification to remove personal information and protect patient privacy, (iii) data curation to control for image and associated information quality, (iv) image storage, and (v) image annotation. A plethora of open-access tools exists to perform each of the aforementioned tasks, and these are reviewed here. Furthermore, we detail medical image repositories covering different organs and diseases. Such repositories are constantly growing and being enriched with the advent of big data. Lastly, we offer directions for future work in this rapidly evolving field.
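
    As an illustration of step (ii), here is a minimal de-identification sketch using pydicom; the tag list is a small illustrative subset, not a complete de-identification profile.

    import pydicom

    # Direct identifiers to blank out (illustrative subset only)
    TAGS_TO_BLANK = ["PatientName", "PatientID", "PatientBirthDate", "InstitutionName"]

    def deidentify(src_path: str, dst_path: str) -> None:
        ds = pydicom.dcmread(src_path)
        for tag in TAGS_TO_BLANK:
            if tag in ds:
                setattr(ds, tag, "")      # blank the identifying element
        ds.remove_private_tags()          # drop vendor-specific private elements
        ds.save_as(dst_path)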

    Edge-Cloud Polarization and Collaboration: A Comprehensive Survey for AI

    Influenced by the great success of deep learning via cloud computing and the rapid development of edge chips, research in artificial intelligence (AI) has shifted to both computing paradigms, i.e., cloud computing and edge computing. In recent years, we have witnessed significant progress in developing more advanced AI models on cloud servers that surpass traditional deep learning models, owing to model innovations (e.g., Transformers, pretrained model families), the explosion of training data, and soaring computing capabilities. However, edge computing, especially edge-cloud collaborative computing, is still in its infancy, owing to resource-constrained IoT scenarios in which only very limited algorithms can be deployed. In this survey, we conduct a systematic review of both cloud and edge AI. Specifically, we are the first to set up the collaborative learning mechanism for cloud and edge modeling, with a thorough review of the architectures that enable such a mechanism. We also discuss the potential and practical experiences of some ongoing advanced edge AI topics, including pretraining models, graph neural networks, and reinforcement learning. Finally, we discuss the promising directions and challenges in this field. Comment: 20 pages, Transactions on Knowledge and Data Engineering.
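
    One common edge-cloud collaboration pattern in this area can be sketched as confidence-based offloading: a small edge model answers confident cases locally and defers the rest to a larger cloud model. The models and the 0.8 threshold below are illustrative assumptions, not a method taken from the survey.

    import torch

    def collaborative_predict(x, edge_model, cloud_model, threshold: float = 0.8):
        """x: a single input with batch dimension 1.
        Returns (predicted class, where the prediction ran)."""
        with torch.no_grad():
            probs = torch.softmax(edge_model(x), dim=-1)
            conf, pred = probs.max(dim=-1)
            if conf.item() >= threshold:      # confident enough: answer on the edge
                return pred.item(), "edge"
            return cloud_model(x).argmax(dim=-1).item(), "cloud"  # offload the hard case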

    An Integrated Network Architecture for a High Speed Distributed Multimedia System.

    Computer communication demands for higher bandwidth and smaller delays are increasing rapidly as the march into the twenty-first century gains momentum. These demands are generated by visualization applications which model complex real-time phenomena in visual form, electronic document imaging and manipulation, concurrent engineering, on-line databases, and multimedia applications which integrate audio, video and data. The convergence of the computer and video worlds is leading to the emergence of a distributed multimedia environment. This research investigates an integrated approach to the design of a high-speed computer-video local area network for a distributed multimedia environment. The initial step in providing multimedia services over computer networks is to ensure bandwidth availability for these services. The bandwidth needs, based on the traffic generated in a distributed multimedia environment, are computationally characterized by a model. This model is applied to the real-time problem of designing a backbone for a distributed multimedia environment at the NASA Classroom of the Future Program. The network incorporates legacy LANs and the latest high-speed switching technologies. Performance studies have been conducted with different network topologies for various multimedia application scenarios to establish benchmarks for the operation of the network. In these performance studies it has been observed that network topologies play an important role in ensuring that sufficient bandwidth is available for multimedia traffic. After the implementation of the network and the performance studies, it was found that, for true quality-of-service guarantees, some modifications will have to be made in the multimedia operating systems used in client workstations. These modifications would gather knowledge of the channel between source and destination and reserve resources for multimedia communication based on specified requirements. A scheme for reserving resources in a network consisting of legacy LAN and ATM segments is presented to guarantee quality of service for multimedia applications.
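
    The resource-reservation idea can be illustrated with a simplified admission-control sketch: a request is admitted only if every hop on the path, whether a legacy LAN segment or an ATM link, still has the requested bandwidth available. The link capacities and example values below are made up for illustration.

    class Link:
        """One hop on the path (legacy LAN segment or ATM link)."""
        def __init__(self, capacity_mbps: float):
            self.capacity = capacity_mbps
            self.reserved = 0.0

        def can_admit(self, mbps: float) -> bool:
            return self.reserved + mbps <= self.capacity

    def reserve_path(path, mbps: float) -> bool:
        """Admit the request only if every link on the path has room,
        then commit the reservation on each hop."""
        if all(link.can_admit(mbps) for link in path):
            for link in path:
                link.reserved += mbps
            return True
        return False  # reject: quality of service could not be guaranteed

    # Example: a 100 Mb/s legacy Ethernet hop feeding a 155 Mb/s ATM hop.
    ethernet, atm = Link(100.0), Link(155.0)
    print(reserve_path([ethernet, atm], 25.0))  # True: both hops can carry 25 Mb/s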