
    TechNews digests: Jan - Mar 2010

    TechNews is a technology news and analysis service aimed at anyone in the education sector keen to stay informed about technology developments, trends and issues. TechNews focuses on emerging technologies and other technology news. The TechNews service published digests from September 2004 to May 2010, combining analysis pieces and news items in an issue every two to three months.

    Translation of terminology in the field of electronics, electrical engineering and power engineering from English into Ukrainian

    The book presents authentic materials and exercises in written and oral translation in the field of electronics, electrical engineering and power engineering; texts for independent work and translation assessment tasks; and English-Ukrainian and Ukrainian-English glossaries of terms and concepts in these fields. Intended for students of "Translation (English)" programmes and postgraduate students of technical specialities.

    BCBU+ handbook: a guide to establishing a virtual cross-border campus for the BCBU network

    Manuscript prior to peer review (preprint).

    ACUTA Journal of Telecommunications in Higher Education

    In this issue: New Bandwidth Boosts Opportunities at the University of Idaho; Colleges Meld Data Functionality to Afford Larger, Better Facilities; Focusing on Video Demands; Wireless Optical Mesh Networking; Wireless LANs for Voice; Delivering Broadband over Power Lines; The Real Impact of Napster; ACUTA Awards Presentations; Interview; President's Message; From the Executive Director; Here's My Advice.

    Live Television in a Digital Library

    Nowadays nearly everyone has access to digital television, with a growing number of channels available for free. However, due to the nature of broadcasting, the huge mass of information that reaches us is, for the most part, not organised: it is principally a succession of images and sound transmitted in a flow of data. Compare this with digital libraries, which are powerful at organising a large but fixed set of documents. This project brings the two concepts together by concurrently capturing all the available live television channels and segmenting them into files, which are then imported into a digital video library. The system leverages the information contained in the electronic programme guide and the video recordings to generate metadata suitable for the digital library. By combining the two concepts in this way, the aim of this work is to look beyond what the digital TV set-top boxes on the market today offer and to explore, unencumbered by commercial market constraints, the full potential of what the raw technology can provide.
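    The capture-and-segment idea above can be sketched as follows. This is a hypothetical illustration, not the project's code: the `EpgEntry` type and `segment_recording` function are invented here to show how electronic programme guide entries could drive where a continuous recording is cut, with each segment inheriting the guide's fields as digital-library metadata.

    ```python
    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class EpgEntry:
        channel: str
        title: str
        start: datetime
        end: datetime

    def segment_recording(capture_start: datetime, epg: list[EpgEntry]):
        """Map each EPG entry to an offset window within one continuous capture."""
        segments = []
        for entry in epg:
            offset = (entry.start - capture_start).total_seconds()
            duration = (entry.end - entry.start).total_seconds()
            segments.append({
                "file": f"{entry.channel}_{entry.start:%Y%m%d%H%M}.ts",
                "offset_s": offset,        # where to cut the flow of data
                "duration_s": duration,
                "metadata": {"title": entry.title, "channel": entry.channel},
            })
        return segments

    epg = [
        EpgEntry("TV1", "Evening News",
                 datetime(2011, 5, 1, 18, 0), datetime(2011, 5, 1, 18, 30)),
        EpgEntry("TV1", "Weather",
                 datetime(2011, 5, 1, 18, 30), datetime(2011, 5, 1, 18, 40)),
    ]
    segments = segment_recording(datetime(2011, 5, 1, 17, 55), epg)
    ```

    A real system would additionally align these cut points to keyframes in the transport stream, but the metadata mapping is the part that feeds the digital library.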

    Inventory management of the refrigerator's produce bins using classification algorithms and hand analysis.

    Tracking the inventory of one's refrigerator has been a mission for consumers since the advent of the refrigerator. With the improvement of computer vision capabilities, automatic inventory systems are within reach. One inventory area with many potential benefits is the fresh-food produce bins. The bins are a unique storage area due to their depth: a user cannot easily see what is in the bins without opening the drawer. Produce items are also some of the quickest foods in the refrigerator to spoil, despite the bins being temperature- and humidity-controlled to make fruits and vegetables last longer. Giving the consumer a list of items in their bins could ultimately lead to a more informed consumer and less food spoilage. A single camera could identify items by making predictions when the bins are open, but it would only be able to "see" the top layer of produce. If one could combine the data from the open bins with information from the user as they place and remove items, it is hypothesized that a comprehensive produce-bin inventory could be created. This thesis addresses the challenge of building a full inventory of all items within the produce bins by observing whether the hand can provide useful information. The thesis observes that all items must go into or out of the refrigerator through the main door, and proposes that by using a single camera to observe the hand-object interactions, a more complete inventory list can be created. The work conducted for this hand analysis study consists of three main parts. The first was to create a model that could identify hands within the refrigerator. The model needed to be robust enough to detect different hand sizes, colors, orientations, and partially occluded hands. The accuracy of the model was determined by comparing ground-truth detections for 185 new images against the detections made by the model; the model was 93% accurate.
    The second was to track the hand and determine whether it was moving into or out of the refrigerator. The tracker needed to record the coordinates of the hands to provide useful information on consumer behavior and to determine where items are placed; its accuracy was determined by visual inspection. The final part was to analyze the detected hand to determine whether it is holding a type of produce or empty, and to track whether the produce is added to or removed from the refrigerator. As an initial proof of concept, two types of produce, an apple and an orange, were used as a testing ground. The accuracy of the hand analysis (e.g., hand with apple or orange vs. empty hand) was determined by comparing its output to a 301-frame video with ground-truth labels. The hand analysis system was 87% accurate at classifying an empty hand, 85% accurate on a hand holding an apple, and 74% accurate on a hand holding an orange. The system was 93% accurate at detecting what was added or removed from the refrigerator, and 100% accurate at determining where within the refrigerator the item was added or removed.
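    The inventory-update logic implied by the three parts above can be sketched minimally: since every item passes through the main door, classifying each tracked hand crossing as a (direction, contents) event is enough to maintain a produce-bin inventory. The event format and labels below are hypothetical and stand in for the thesis's detection, tracking, and classification stages.

    ```python
    from collections import Counter

    def update_inventory(events):
        """Maintain an inventory from hand-analysis events.

        events: (direction, label) pairs, where direction is 'in' or 'out'
        and label is the hand classification: 'empty', 'apple', or 'orange'.
        """
        inventory = Counter()
        for direction, label in events:
            if label == "empty":
                continue                  # an empty hand changes nothing
            if direction == "in":
                inventory[label] += 1     # item placed into the refrigerator
            elif direction == "out":
                inventory[label] -= 1     # item removed
        return +inventory                 # unary + drops zero/negative counts

    events = [("in", "apple"), ("in", "orange"), ("in", "empty"),
              ("out", "orange"), ("in", "apple")]
    print(update_inventory(events))       # 2 apples remain, the orange is gone
    ```

    The classification accuracies reported above (74-93%) mean a real system would also need smoothing over frames and perhaps user confirmation, rather than trusting each single event.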

    Television Playout Development Towards Flexible IT-based Solutions

    The purpose of this study was to update a television playout system. The playout centre of Levira, an Estonian television and radio network operator, needed its infrastructure expanded to accommodate a larger channel count. The media asset management and automation systems also had to be renewed to handle the requirements of the larger channel count and to automate processes. Television playout systems and solutions are going through big changes. These changes partly reflect new developments in the whole broadcasting industry and in the way people use video, and partly changes and developments in other areas of technology, especially in IT. The changes do not come without challenges, as they require adopting new workflows and ways of operating. The projects were carried out not only for current needs, such as file-based workflows and the capacity demands of a growing channel count, but above all to be ready for the coming years and their changing distribution channels. This was taken into consideration in system design and planning, as well as in choosing the partners for the projects. As a result of this study, an up-to-date playout centre was designed with flexible IT-based solutions that are easy to update and customise for the varying needs of different television channels. The playout centre has room to grow and is ready for future requirements without the need for major changes to the system.

    Infrastructure sharing of 5G mobile core networks on an SDN/NFV platform

    When looking towards the deployment of 5G network architectures, mobile network operators will continue to face many challenges. With the number of customers approaching maximum market penetration, the number of devices per customer increasing, and the number of non-human-operated devices estimated to approach tens of billions, network operators have a formidable task ahead of them. The proliferation of cloud computing techniques has created a multitude of applications for network service deployments, and at the forefront is the adoption of Software-Defined Networking (SDN) and Network Functions Virtualisation (NFV). Mobile network operators (MNOs) have the opportunity to leverage these technologies to deliver traditional networking functionality in cloud environments, with the benefit of reduced capital and operational expenditure on network infrastructure. When adopting NFV, how a Virtualised Network Function (VNF) is designed, implemented, and placed over physical infrastructure can play a vital role in the performance metrics the network function achieves. Neglecting this aspect could drastically reduce the performance of network functions, defeating the purpose of adopting virtualisation solutions. The success of mobile network operators in the 5G arena will depend heavily on their ability to shift from their old operational models and embrace new technologies, design principles, and innovation in both the business and technical aspects of the environment. The primary goal of this thesis is to design, implement, and evaluate the viability of a data centre and cloud network infrastructure sharing use case. More specifically, the core question addressed by this thesis is how virtualisation of network functions in a shared infrastructure environment can be achieved without adverse performance degradation.
    5G should be operational with high penetration beyond the year 2020, with data traffic rates increasing exponentially and the number of connected devices expected to surpass tens of billions. Requirements for 5G mobile networks include higher flexibility, scalability, cost-effectiveness, and energy efficiency. Towards these goals, SDN and NFV have been adopted in recent proposals for future mobile network architectures, as they are considered critical technologies for 5G. A Shared Infrastructure Management Framework was designed and implemented for this purpose, and further enhanced for performance optimisation of network functions and the underlying physical infrastructure. The objective achieved was the identification of requirements for the design and development of an experimental testbed for future 5G mobile networks. This testbed deploys high-performance virtualised network functions (VNFs) while catering for the infrastructure sharing use case of multiple network operators. The management and orchestration of the VNFs allow automation, scalability, fault recovery, and security to be evaluated. The testbed developed is readily re-creatable and based on open-source software.

    Understanding and Efficiently Servicing HTTP Streaming Video Workloads

    Live and on-demand video streaming has emerged as the most popular application for the Internet. One reason for this success is the pragmatic decision to use HTTP to deliver video content. However, while all web servers are capable of servicing HTTP streaming video workloads, web servers were not originally designed or optimized for video workloads. Web server research has concentrated on requests for small items that exhibit high locality, while video files are much larger and have a popularity distribution with a long tail of less popular content. Given the large number of servers needed to service millions of streaming video clients, there are large potential benefits from even small improvements in servicing HTTP streaming video workloads. To investigate how web server implementations can be improved, we require a benchmark to analyze existing web servers and test alternate implementations, but no such HTTP streaming video benchmark exists. One reason for the lack of a benchmark is that video delivery is undergoing rapid evolution, so we devise a flexible methodology and tools for creating benchmarks that can be readily adapted to changes in HTTP video streaming methods. Using our methodology, we characterize YouTube traffic from early 2011 using several published studies and implement a benchmark to replicate this workload. We then demonstrate that three different widely-used web servers (Apache, nginx and the userver) are all poorly suited to servicing streaming video workloads. We modify the userver to use asynchronous serialized aggressive prefetching (ASAP). Aggressive prefetching uses a single large disk access to service multiple small sequential requests, and serialization prevents the kernel from interleaving disk accesses, which together greatly increase throughput. 
Using the modified userver, we show that characteristics of the workload and server affect the best prefetch size to use, and we provide an algorithm that automatically finds a good prefetch size for a variety of workloads and server configurations. We conduct our own characterization of an HTTP streaming video workload, using server logs obtained from Netflix. We study this workload because, in 2015, Netflix alone accounted for 37% of peak-period North American Internet traffic. Netflix clients employ DASH (Dynamic Adaptive Streaming over HTTP) to switch between different bit rates based on changes in network and server conditions. We introduce the notion of chains of sequential requests to represent the spatial locality of workloads and find that, even with DASH clients, the majority of bytes are requested sequentially. We characterize rate adaptation by separating sessions into transient, stable, and inactive phases, each with distinct patterns of requests. We find that playback sessions are surprisingly stable; in aggregate, 5% of total session duration is spent in transient phases, 79% in stable phases, and 16% in inactive phases. Finally, we evaluate prefetch algorithms that exploit knowledge about workload characteristics by simulating the servicing of the Netflix workload. We show that the workload can be serviced with either 13% lower hard drive utilization or 48% less system memory than a prefetch algorithm that makes no use of workload characteristics.
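    The core of the aggressive-prefetching idea can be shown with a toy model: one large "disk read" fills a buffer that then satisfies many small sequential requests, so the number of disk accesses drops. The prefetch size, the in-memory "disk", and the request pattern below are illustrative assumptions, not the userver implementation (which also serializes accesses in the kernel).

    ```python
    PREFETCH = 1 << 20          # 1 MiB prefetch per disk access (assumed size)
    disk_reads = 0

    def make_server(file_bytes: bytes):
        """Return a request handler that prefetches PREFETCH bytes at a time."""
        cache_start, cache = -1, b""
        def serve(offset: int, length: int) -> bytes:
            nonlocal cache_start, cache
            global disk_reads
            in_cache = (cache_start <= offset
                        and offset + length <= cache_start + len(cache))
            if not in_cache:
                disk_reads += 1                    # one large access...
                cache_start = offset
                cache = file_bytes[offset:offset + PREFETCH]
            rel = offset - cache_start
            return cache[rel:rel + length]         # ...serves many small requests
        return serve

    video = bytes(4 * PREFETCH)                    # a 4 MiB "video file"
    serve = make_server(video)
    # A DASH-like chain of sequential 64 KiB requests over the whole file:
    for off in range(0, len(video), 64 * 1024):
        serve(off, 64 * 1024)
    print(disk_reads)                              # 4 large reads instead of 64 small ones
    ```

    The thesis's finding that the best prefetch size depends on the workload corresponds here to tuning `PREFETCH`: too small wastes disk accesses, too large wastes memory and read time on chains that end early.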