
    Improvement of Decision on Coding Unit Split Mode and Intra-Picture Prediction by Machine Learning

    High Efficiency Video Coding (HEVC) is the newest video coding standard of the ITU-T Video Coding Experts Group and the ISO/IEC Moving Picture Experts Group. The reference software (i.e., HM) includes implementations of the guidelines in compliance with the new standard, covering both encoder and decoder functionality. Machine learning (ML) works with data and processes it to discover patterns that can later be used to analyze new trends. ML can play a key role in a wide range of critical applications, such as data mining, natural language processing, image recognition, and expert systems. In this research project, in compliance with the H.265 standard, we focus on improving encoding/decoding performance by optimizing the partitioning of the prediction block in the coding unit with the help of supervised machine learning. We used the Keras library as the main tool to implement the experiments. Key parameters of our convolutional neural network model were tuned. The coding tree unit mode decision time produced by the model was compared with that produced by the HM software and was shown to have improved significantly. The intra-picture prediction mode decision was also investigated with a modified model and yielded satisfactory results.
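
    A minimal sketch of the kind of supervised classifier described above, assuming a small Keras CNN that takes the luma samples of one 64x64 CTU and predicts a binary split/no-split decision. The layer sizes, input shape and dummy training data are illustrative assumptions, not the tuned model from the paper.

```python
# Sketch: binary CU split/no-split classifier in Keras (assumed architecture).
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

def build_split_classifier(ctu_size: int = 64) -> keras.Model:
    model = keras.Sequential([
        layers.Input(shape=(ctu_size, ctu_size, 1)),   # luma samples of one CTU
        layers.Conv2D(16, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(2),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(2),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),          # P(split)
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    model = build_split_classifier()
    # Dummy data standing in for (CTU luma block, split decision from HM) pairs.
    x = np.random.rand(8, 64, 64, 1).astype("float32")
    y = np.random.randint(0, 2, size=(8, 1))
    model.fit(x, y, epochs=1, verbose=0)
    print(model.predict(x[:1]))
```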

    The pros and cons of using SDL for creation of distributed services

    In a competitive market for the creation of complex distributed services, time to market, development cost, maintenance and flexibility are key issues. Optimizing the development process is very much a matter of optimizing the technologies used during service creation. This paper reports on the experience gained in the Service Creation projects SCREEN and TOSCA in using the language SDL for efficient service creation.

    TINA as a virtual market place for telecommunication and information services: the VITAL experiment

    The VITAL (Validation of Integrated Telecommunication Architectures for the Long-Term) project has defined, implemented and demonstrated an open distributed telecommunication architecture (ODTA) for deploying, managing and using a set of heterogeneous multimedia, multi-party, and mobility services. The architecture was based on the latest specifications released by TINA-C. The architecture was challenged in a set of trials by means of a heterogeneous set of applications. Some of the applications were developed within the project from scratch, while others focused on integrating commercially available applications. The applications were selected so as to ensure full coverage of the architecture implementation and to reflect realistic use of it. The VITAL experience of refining and implementing TINA specifications and challenging the resulting platform with a heterogeneous set of services has proven the openness, flexibility and reusability of TINA. This paper describes the VITAL approach to choosing the different services and how they challenge and interact with the architecture, focusing especially on the service architecture and the Ret reference point definitions. The VITAL adjustments and enhancements to the TINA architecture are described. This paper contributes to proving that the TINA-based VITAL ODTA allows for easy and cost-effective development and deployment of advanced end-user and operator services, and can indeed act as the basis for a virtual market place for telecommunications services.

    Social justice and an information democracy with free and open source software

    This paper offers some thoughts on the implications of proprietary software versus free and open source software with regard to social justice, capital, and notions of an information society versus an information democracy. It outlines what free and open source software is and why it is important for social justice, and it presents three cases that highlight two salient themes: one case about preference ordering and decision-making, and two cases about knowing and knowledge.

    Everything counts in small amounts

    This paper describes an encoding tool which utilises the "data is code" principle of symbolic expressions available in Lisp-like languages to allow the scripting of tightly packed, cross-platform network protocols. This dynamic approach is particularly flexible when working on embedded systems, as it reduces the number of cross-compilation and deployment cycles required by more traditional development approaches. In addition, separating how the data is encoded from the compiled application makes the network protocol extensible without requiring special handling.
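
    The tool described above is built on Lisp s-expressions; the sketch below merely transposes the "encoding specification as data" idea into Python using the standard struct module, so the layout, field names and format codes are illustrative assumptions rather than the paper's protocol language.

```python
# Sketch: a packet layout expressed as plain data, interpreted into packed bytes.
import struct

# Layout: (field_name, struct format code) pairs; big-endian, tightly packed.
HEARTBEAT_LAYOUT = (
    ("version",   "B"),   # unsigned 8-bit
    ("device_id", "H"),   # unsigned 16-bit
    ("timestamp", "I"),   # unsigned 32-bit
    ("battery",   "B"),
)

def encode(layout, values):
    """Pack a dict of values according to a layout expressed as data."""
    fmt = ">" + "".join(code for _, code in layout)
    return struct.pack(fmt, *(values[name] for name, _ in layout))

def decode(layout, payload):
    """Unpack bytes back into a dict using the same layout."""
    fmt = ">" + "".join(code for _, code in layout)
    fields = struct.unpack(fmt, payload)
    return {name: value for (name, _), value in zip(layout, fields)}

if __name__ == "__main__":
    msg = {"version": 1, "device_id": 42, "timestamp": 1700000000, "battery": 87}
    wire = encode(HEARTBEAT_LAYOUT, msg)
    print(len(wire), decode(HEARTBEAT_LAYOUT, wire))   # 8-byte packed message
```

    Because the layout is ordinary data, it can be shipped to or updated on a device without recompiling the application, which is the extensibility property the paper highlights.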

    Emergency TeleOrthoPaedics m-health system for wireless communication links

    For the first time, a complete wireless and mobile emergency TeleOrthoPaedics system, with field trials and expert opinion, is presented. The system enables doctors in a remote area to obtain a second opinion from doctors in the hospital over secured wireless telecommunication networks. Doctors can securely exchange medical images, video and other important data, and thus perform remote consultations quickly and accurately through a user-friendly interface, via a reliable, secure and low-cost telemedicine system. The quality of the transmitted compressed (JPEG2000) images was assessed using different metrics and the doctors' opinions. The results show that all metrics were within acceptable limits. The performance of the system was evaluated successfully over different wireless communication links using real data.
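
    The abstract notes that image quality was measured with several metrics; as one common example, the sketch below computes PSNR in Python/NumPy. The choice of metric, the image sizes and the random test data are assumptions, and the trial's actual metrics and thresholds are not reproduced here.

```python
# Sketch: peak signal-to-noise ratio (PSNR) between an original and a compressed image.
import numpy as np

def psnr(original: np.ndarray, compressed: np.ndarray, max_value: float = 255.0) -> float:
    """PSNR in dB between two same-sized images."""
    mse = np.mean((original.astype(np.float64) - compressed.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")   # identical images
    return 10.0 * np.log10((max_value ** 2) / mse)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, size=(512, 512), dtype=np.uint8)        # stand-in image
    degraded = np.clip(img + rng.normal(0, 5, img.shape), 0, 255).astype(np.uint8)
    print(f"PSNR: {psnr(img, degraded):.2f} dB")
```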

    A user-oriented network forensic analyser: the design of a high-level protocol analyser

    Network forensics is becoming an increasingly important tool in the investigation of cyber and computer-assisted crimes. Unfortunately, whilst much effort has been devoted to developing computer forensic file system analysers (e.g. Encase and FTK), such focus has not been given to Network Forensic Analysis Tools (NFATs). The single biggest barrier to effective NFATs is handling large volumes of low-level traffic and being able to extract and interpret forensic artefacts and their context – for example, being able to extract and render application-level objects (such as emails, web pages and documents) from the low-level TCP/IP traffic, but also to understand how these applications/artefacts are being used. Whilst some studies and tools are beginning to achieve object extraction, results to date are limited to basic objects. No research has focused upon analysing network traffic to understand the nature of its use – not simply the fact that a person requested a webpage, but how long they spent on the application and what interactions they had whilst using the service (e.g. posting an image, or engaging in an instant message chat). This additional layer of information can provide an investigator with a far richer and more complete understanding of a suspect’s activities. To this end, this paper presents an investigation into the ability to derive high-level application usage characteristics from low-level network traffic meta-data. The paper presents three application scenarios (web surfing, communications and social networking) and demonstrates that it is possible to derive the user interactions (e.g. page loading, chatting and file sharing) within these systems. The paper goes on to present a framework that builds upon this capability to provide a robust, flexible and user-friendly NFAT giving access to a greater range of forensic information in a far more accessible format.
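
    To make the idea of deriving high-level usage from low-level meta-data concrete, the sketch below shows one hypothetical heuristic in Python that labels a sequence of flow records as a page load or an instant-messaging exchange. The record format, thresholds and labels are assumptions for illustration, not the paper's framework or results.

```python
# Sketch: classify an interaction from flow-level meta-data (timestamps and byte counts).
from dataclasses import dataclass
from typing import List

@dataclass
class FlowRecord:
    timestamp: float   # seconds since session start
    byte_count: int    # payload bytes observed in this record

def classify_interaction(records: List[FlowRecord]) -> str:
    if not records:
        return "idle"
    duration = records[-1].timestamp - records[0].timestamp
    total_bytes = sum(r.byte_count for r in records)
    rate = total_bytes / duration if duration > 0 else float("inf")
    if duration < 5 and total_bytes > 100_000:
        return "page load"          # short, heavy burst of objects
    if duration > 60 and rate < 1_000:
        return "instant messaging"  # long-lived, low-rate exchange
    return "unclassified"

if __name__ == "__main__":
    burst = [FlowRecord(t, 40_000) for t in (0.0, 0.5, 1.2)]
    chat = [FlowRecord(float(t), 200) for t in range(0, 300, 30)]
    print(classify_interaction(burst), "/", classify_interaction(chat))
```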

    Keeping the Cost of Process Change Low through Refactoring

    With the increasing adoption of process-aware information systems (PAIS), large process model repositories have emerged. Over time, the respective models have to be re-aligned with the real-world business processes through customization or adaptation. This bears the risk that model redundancies are introduced and complexity is increased. If no continuous investment is made in keeping models simple, changes become increasingly costly and error-prone. Although refactoring techniques are widely used in software engineering to address related problems, they do not yet constitute the state of the art in business process management. Consequently, process designers either have to refactor process models by hand or cannot apply such techniques at all. In this paper we propose a set of behaviour-preserving techniques for refactoring large process repositories. The proposed refactorings enable process designers to deal effectively with model complexity by making process models easier to change, less error-prone and easier to understand.
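
    One refactoring consistent with the goal described above is extracting a duplicated activity sequence into a shared sub-process; the sketch below illustrates that idea on process models reduced to flat activity lists. The representation, function name and example repository are assumptions for illustration, and real process models and the paper's refactoring catalogue are considerably richer.

```python
# Sketch: extract a duplicated activity sequence into a shared sub-process reference.
from typing import Dict, List

def extract_fragment(models: Dict[str, List[str]],
                     fragment: List[str], name: str) -> Dict[str, List[str]]:
    """Replace every occurrence of `fragment` with a reference to sub-process `name`."""
    marker = f"call:{name}"
    refactored = {}
    for model_id, activities in models.items():
        result, i, n = [], 0, len(fragment)
        while i < len(activities):
            if activities[i:i + n] == fragment:
                result.append(marker)   # same steps as before, now defined once
                i += n
            else:
                result.append(activities[i])
                i += 1
        refactored[model_id] = result
    return refactored

if __name__ == "__main__":
    repo = {
        "order":  ["receive", "check credit", "check stock", "ship"],
        "return": ["receive", "check credit", "check stock", "refund"],
    }
    print(extract_fragment(repo, ["check credit", "check stock"], "checks"))
```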