
    Contract-Based Design of Dataflow Programs

    Quality and correctness are becoming increasingly important aspects of software development, as our reliance on software systems in everyday life continues to grow. Highly complex software systems are today found in critical appliances such as medical equipment, cars, and telecommunication infrastructure. Failures in these kinds of systems may have disastrous consequences. At the same time, modern computer platforms are increasingly concurrent, as the computational capacity of modern CPUs is improved mainly by increasing the number of processor cores. Computer platforms are also becoming increasingly parallel, distributed, and heterogeneous, often involving special processing units, such as graphics processing units (GPUs) or digital signal processors (DSPs), for performing specific tasks more efficiently than is possible on general-purpose CPUs. These modern platforms allow increasingly complex functionality to be implemented in software. Cost-efficient development of software that efficiently exploits the power of such platforms while ensuring correctness is, however, a challenging task. Dataflow programming has become popular in the development of safety-critical software in many domains of the embedded community. For instance, in the automotive domain, the dataflow language Simulink has become widely used in model-based design of control software. For more complex functionality, however, this model of computation may not be expressive enough. In the signal processing domain, more expressive, dynamic models of computation have attracted much attention. These models of computation have, however, not gained as significant uptake in safety-critical domains, largely because it is challenging to provide guarantees regarding, e.g., timing or determinism under these more expressive models of computation. Contract-based design has become a widespread approach to specifying and verifying correctness properties of software components.
A contract consists of assumptions (preconditions) regarding the input data and guarantees (postconditions) regarding the output data. By verifying a component with respect to its contract, it is ensured that the output fulfils the guarantees, assuming that the input fulfils the assumptions. While contract-based verification of traditional object-oriented programs has been researched extensively, verification of asynchronous dataflow programs has received far less attention. In this thesis, a contract-based design framework tailored specifically to dataflow programs is proposed. The proposed framework supports both an extensive subset of the discrete-time synchronous language Simulink and a more general, asynchronous and dynamic, dataflow language. The proposed contract-based verification techniques are automatic, guided only by user-provided invariants, and based on encoding dataflow programs in existing, mature verification tools for sequential programs, such as the Boogie guarded command language and its associated verifier. It is shown how dataflow programs, with components implemented in an expressive programming language with support for matrix computations, can be efficiently encoded in such a verifier. Furthermore, it is shown that contract-based design can also improve the runtime performance of dataflow programs by allowing more scheduling decisions to be made at compile time. All the proposed techniques have been implemented in prototype tools and evaluated on a large number of different programs. The evaluation shows that the methods work in practice and scale to real-world programs.
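The contract idea described above can be sketched in a few lines. This is a hypothetical illustration, not the thesis's Boogie-based toolchain: a dataflow actor's firing is checked at runtime against user-supplied assumptions (preconditions) on its input tokens and guarantees (postconditions) on its output tokens.

```python
# Hypothetical sketch of contract checking for a dataflow actor
# (names and the runtime-check approach are illustrative; the thesis
# instead encodes programs for a static verifier such as Boogie).

def checked_fire(actor, inputs, pre, post):
    """Fire `actor` on `inputs`, enforcing its contract."""
    assert pre(inputs), "assumption violated: bad input tokens"
    outputs = actor(inputs)
    assert post(inputs, outputs), "guarantee violated: actor is buggy"
    return outputs

# Example actor: averages a window of input tokens into one output token.
def avg_actor(tokens):
    return [sum(tokens) / len(tokens)]

out = checked_fire(
    avg_actor,
    [1.0, 2.0, 3.0],
    pre=lambda ts: len(ts) > 0,                       # assumption
    post=lambda ts, os: min(ts) <= os[0] <= max(ts),  # guarantee
)
```

Static verification, as in the thesis, proves the postcondition for all inputs satisfying the precondition rather than checking one execution.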

    On Versatile Video Coding at UHD with Machine-Learning-Based Super-Resolution

    Coding 4K data has become of vital interest in recent years, since the amount of 4K data is increasing significantly. We propose a coding chain with spatial down- and upscaling that combines the next-generation VVC codec with machine-learning-based single-image super-resolution algorithms for 4K. The investigated coding chain, which spatially downscales the 4K data before coding, shows superior quality compared to the conventional VVC reference software in low-bitrate scenarios. Across several tests, we find that up to 12 % and 18 % Bjontegaard delta rate gains can be achieved on average when coding 4K sequences with VVC and QP values above 34 and 42, respectively. Additionally, the investigated scenario with down- and upscaling helps to reduce the loss of details and compression artifacts, as shown in a visual example. Comment: Originally published as conference paper at QoMEX 202
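The downscale-before-coding idea can be illustrated with a toy stand-in. This sketch is not the paper's pipeline: a 2x average-pool replaces the spatial downscaler, nearest-neighbour upsampling stands in for the learned super-resolution stage, and PSNR serves as the quality metric.

```python
import numpy as np

def downscale2(img):
    # 2x spatial downscale by averaging each 2x2 block
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upscale2(img):
    # toy stand-in for the super-resolution stage: nearest-neighbour 2x
    return img.repeat(2, axis=0).repeat(2, axis=1)

def psnr(ref, rec, peak=255.0):
    mse = np.mean((ref - rec) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

frame = np.arange(64, dtype=np.float64).reshape(8, 8)
rec = upscale2(downscale2(frame))   # "decode, then super-resolve"
quality = psnr(frame, rec)
```

In the paper the gain comes from spending the saved bits on the downscaled signal at low rates, where the downscaling loss is outweighed by the coding gain.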

    Investigating the Effects of Network Dynamics on Quality of Delivery Prediction and Monitoring for Video Delivery Networks

    Video streaming over the Internet requires an optimized delivery system, given the advances in network architecture such as Software-Defined Networks. Machine Learning (ML) models have been deployed in an attempt to predict the quality of video streams. Some of these efforts have considered predicting Quality of Delivery (QoD) metrics of the video stream, measuring the quality of the video stream from the network perspective. In most cases, these models have either treated the ML algorithms as black boxes or failed to capture the network dynamics of the associated video streams. This PhD investigates the effects of network dynamics on QoD prediction using ML techniques. The hypothesis that this thesis investigates is that ML techniques that model the underlying network dynamics achieve accurate QoD and video quality predictions and measurements. The results demonstrate that the proposed techniques offer performance gains over approaches that fail to consider network dynamics, and highlight that adopting the correct model, by modelling the dynamics of the network infrastructure, is crucial to the accuracy of the ML predictions. These results are significant as they demonstrate that improved performance is achieved at no additional computational or storage cost. These techniques can help network managers, data centre operators, and video service providers take proactive and corrective actions for improved network efficiency and effectiveness.
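One common way to "model the network dynamics" is to feed a predictor with sliding-window statistics of recent network behaviour rather than instantaneous values. The sketch below is illustrative only (the feature set and model are not the thesis's): rolling mean and variance of past throughput feed a least-squares model of a synthetic QoD target.

```python
import numpy as np

def dynamic_features(throughput, window=3):
    # rolling mean and variance capture the network's recent dynamics
    feats = []
    for i in range(window, len(throughput)):
        w = throughput[i - window:i]
        feats.append([1.0, np.mean(w), np.var(w)])
    return np.array(feats)

rng = np.random.default_rng(0)
tput = rng.uniform(2.0, 10.0, size=50)       # throughput samples, Mbps
X = dynamic_features(tput)
y = 0.5 * X[:, 1] - 0.1 * X[:, 2] + 1.0      # synthetic QoD target
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
pred = X @ coef
```

On this noiseless synthetic target the linear model recovers the generating coefficients exactly; real QoD data would of course require richer features and models.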

    Region-Based Template Matching Prediction for Intra Coding

    Copy prediction is a well-known category of prediction techniques in video coding in which the current block is predicted by copying the samples from a similar block present somewhere in the already decoded stream of samples; motion-compensated prediction, intra block copy, and template matching prediction are examples. While the displacement information of the similar block is transmitted to the decoder in the bit-stream in the first two approaches, in the last one it is derived at the decoder by repeating the same search algorithm that was carried out at the encoder. Region-based template matching is a recently developed prediction algorithm that is an advanced form of standard template matching. In this method, the reference area is partitioned into multiple regions, and the region to be searched for the similar block(s) is conveyed to the decoder in the bit-stream. Further, its final prediction signal is a linear combination of already decoded similar blocks from the given region. Previous publications demonstrated that region-based template matching is capable of achieving coding efficiency improvements for intra- as well as inter-picture coding with considerably lower decoder complexity than conventional template matching. In this paper, a theoretical justification for region-based template matching prediction, supported by experimental data, is presented. Additionally, the test results of the aforementioned method on the latest H.266/Versatile Video Coding (VVC) test model (version VTM-14.0) yield average Bjøntegaard-Delta (BD) bit-rate savings of −0.75% using the all-intra (AI) configuration with 130% encoder run-time and 104% decoder run-time for a particular parameter selection.
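The region-restricted search and the averaged prediction can be sketched in toy form. This is not the VTM implementation (the matching criterion, region shape, and k are invented for illustration): only the signalled region of the reference area is searched for blocks minimising the sum of squared differences against the template, and the k best matches are averaged into the prediction signal.

```python
import numpy as np

def search_region(ref, region, template, k=2):
    """Search only `region` of `ref` for blocks matching `template`
    and return the average of the k best matches (linear combination)."""
    th, tw = template.shape
    y0, y1, x0, x1 = region
    cands = []
    for y in range(y0, y1 - th + 1):
        for x in range(x0, x1 - tw + 1):
            blk = ref[y:y + th, x:x + tw]
            cands.append((np.sum((blk - template) ** 2), blk))  # SSD cost
    cands.sort(key=lambda c: c[0])
    return np.mean([b for _, b in cands[:k]], axis=0)

ref = np.arange(64, dtype=np.float64).reshape(8, 8)
template = ref[2:4, 2:4]   # pretend this is the decoded template area
pred = search_region(ref, (0, 8, 0, 8), template, k=1)
```

Signalling the region index costs a few bits but, per the paper, spares the decoder the full-area search of conventional template matching.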

    Design and Implementation of HD Wireless Video Transmission System Based on Millimeter Wave

    With the progress of optical fiber communication network construction and improvements in camera technology, the video that a terminal can receive has become clearer, with resolutions up to 4K. Although optical fiber communication has high bandwidth and fast transmission speed, it is not the best solution for indoor short-distance video transmission in terms of cost, installation difficulty, and speed. In this context, this thesis proposes to design and implement a multi-channel wireless HD video transmission system with high transmission performance using 60 GHz millimeter-wave technology, aiming to improve the bandwidth from optical nodes to wireless terminals and improve the quality of video transmission. This thesis mainly covers the following parts: (1) It implements the wireless video transmission algorithms, divided into wireless transmission algorithms and video transmission algorithms, such as the 64QAM modulation and demodulation algorithm, the H.264 video algorithm, and the YUV420P algorithm. (2) It designs the hardware of the wireless HD video transmission system, including the network processing unit (NPU) and the millimeter-wave module. The millimeter-wave module uses the RWM6050 baseband chip and the TRX-BF01 RF chip; the corresponding hardware circuits, such as the 10 Gb/s network port and PCIe, are designed around these chips. (3) It realizes the software design of the wireless HD video transmission system: FFmpeg and Nginx are selected to build the sending platform on the NPU, and multiplexed video transmission is realized with Docker. On the receiving platform, FFmpeg and Qt are selected to realize video decoding, combined with OpenGL for video playback. (4) Finally, the thesis completes the testing of the wireless HD video transmission system, including pressure tests, Web tests, and application scenario tests. It has been verified that the HD wireless video transmission system can transmit HD VR video with a three-channel bit rate of 1.2 GB/s, and its rate can reach up to 3.7 GB/s, which meets the research goal.
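The 64QAM step mentioned in part (1) packs 6 bits into one complex symbol. The sketch below is a minimal non-Gray mapper for illustration only, not the system's modem code: 3 bits each select the in-phase and quadrature amplitude from the 8 levels {-7, -5, -3, -1, 1, 3, 5, 7}.

```python
# Minimal 64QAM modulation/demodulation sketch (illustrative; real modems
# use Gray mapping and operate on noisy samples, not exact levels).

LEVELS = [-7, -5, -3, -1, 1, 3, 5, 7]

def qam64_modulate(bits):
    """Map groups of 6 bits to complex constellation points."""
    assert len(bits) % 6 == 0
    symbols = []
    for i in range(0, len(bits), 6):
        i_idx = bits[i] * 4 + bits[i + 1] * 2 + bits[i + 2]
        q_idx = bits[i + 3] * 4 + bits[i + 4] * 2 + bits[i + 5]
        symbols.append(complex(LEVELS[i_idx], LEVELS[q_idx]))
    return symbols

def qam64_demodulate(symbols):
    """Recover bits by slicing each axis to the nearest level index."""
    bits = []
    for s in symbols:
        for idx in (LEVELS.index(round(s.real)), LEVELS.index(round(s.imag))):
            bits.extend([(idx >> 2) & 1, (idx >> 1) & 1, idx & 1])
    return bits

data = [1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 1, 1]
assert qam64_demodulate(qam64_modulate(data)) == data  # lossless roundtrip
```

Six bits per symbol is what makes 64QAM attractive for the high throughput targets quoted above, at the cost of tighter SNR requirements.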

    Perceptual Video Coding for Machines via Satisfied Machine Ratio Modeling

    Video Coding for Machines (VCM) aims to compress visual signals for machine analysis. However, existing methods only consider a few machines, neglecting the majority. Moreover, machine perceptual characteristics are not effectively leveraged, leading to suboptimal compression efficiency. In this paper, we introduce the Satisfied Machine Ratio (SMR) to address these issues. SMR statistically measures the quality of compressed images and videos for machines by aggregating satisfaction scores from them. Each score is calculated based on the difference in machine perceptions between original and compressed images. Targeting image classification and object detection tasks, we build two representative machine libraries for SMR annotation and construct a large-scale SMR dataset to facilitate SMR studies. We then propose an SMR prediction model based on the correlation between deep feature differences and SMR. Furthermore, we introduce an auxiliary task that increases prediction accuracy by predicting the SMR difference between two images at different quality levels. Extensive experiments demonstrate that using the SMR models significantly improves compression performance for VCM, and that the SMR models generalize well to unseen machines, traditional and neural codecs, and datasets. In summary, SMR enables perceptual coding for machines and advances VCM from specificity to generality. Code is available at https://github.com/ywwynm/SMR
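The aggregation behind SMR can be shown with a toy example. The "machines", their outputs, and the satisfaction threshold below are all invented for illustration; the paper's machines are classifiers and detectors with perception-difference-based scores.

```python
# Toy illustration of the Satisfied Machine Ratio idea (threshold and
# stand-in "machines" are hypothetical, not the paper's libraries).

def satisfied(machine, original, compressed, tol=0.05):
    # a machine is satisfied if compression barely changes its output
    return abs(machine(original) - machine(compressed)) <= tol

def smr(machines, original, compressed):
    # SMR = fraction of machines satisfied with the compressed version
    votes = [satisfied(m, original, compressed) for m in machines]
    return sum(votes) / len(votes)

machines = [
    lambda img: sum(img) / len(img) / 255.0,  # normalized mean intensity
    lambda img: max(img) / 255.0,             # normalized peak intensity
    lambda img: img[0] / 255.0,               # first-pixel probe
]
orig = [10.0, 20.0, 30.0, 40.0]
comp = [10.0, 20.0, 30.0, 70.0]               # compression error on one pixel
ratio = smr(machines, orig, comp)
```

Here the peak-intensity "machine" is perturbed past the tolerance while the other two are not, so the ratio lands at 2/3; the paper estimates such ratios over large machine libraries and then learns to predict them from deep feature differences.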

    Computer Vision Strategies for Pose Estimation in the Context of Industrial Robotic Applications: Advances in the Use of Both Classical and Deep Learning Models on 2D Images

    Computer vision is an enabling technology that allows robots and autonomous systems to perceive their environment. Within the context of Industry 4.0 and 5.0, computer vision is essential for the automation of industrial processes. Among computer vision techniques, object detection and 6D pose estimation are two of the most important for automating industrial processes. To address these challenges, there are two main approaches: classical methods and deep learning methods. Classical methods are robust and precise but require a great deal of expert knowledge to develop. Deep learning methods, on the other hand, are easy to develop but require large amounts of training data. This thesis presents a literature review of computer vision techniques for object detection and 6D pose estimation. In addition, it addresses the following challenges: (1) pose estimation using classical vision techniques, (2) transfer learning from 2D to 3D models, (3) the use of synthetic data to train deep learning models, and (4) the combination of classical and deep learning techniques. To this end, contributions addressing these challenges have been published in high-impact journals.
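One classical building block of 6D pose estimation, offered here purely as an illustration and not as a method from the thesis, is recovering a rigid rotation and translation from matched 3D point sets with the Kabsch/SVD algorithm.

```python
import numpy as np

def rigid_pose(src, dst):
    """Return R, t minimising ||R @ src + t - dst|| over matched 3D points."""
    cs = src.mean(axis=1, keepdims=True)
    cd = dst.mean(axis=1, keepdims=True)
    H = (src - cs) @ (dst - cd).T                 # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t

# synthetic check: transform a point cloud by a known pose and recover it
theta = np.pi / 6
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([[1.0], [2.0], [3.0]])
pts = np.random.default_rng(1).normal(size=(3, 10))
R_est, t_est = rigid_pose(pts, R_true @ pts + t_true)
```

With noiseless correspondences the pose is recovered exactly; in practice classical pipelines combine such solvers with feature matching and outlier rejection, which is where the expert knowledge mentioned above comes in.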

    Encryption and Decryption of Images with Pixel Data Modification Using Hand Gesture Passcodes

    To ensure data security and safeguard sensitive information, image encryption and decryption, as well as pixel data modification, are essential. To avoid misuse and preserve trust in our digital environment, it is crucial to use these technologies responsibly and ethically. To overcome some of these issues, the authors designed a way to modify pixel data so that it holds hidden information. The objective of this work is to change pixel values in a way that can be used to store information about black-and-white image pixel data. Prior to encryption and decryption, a passcode was constructed in Python from hand gestures made in the air and then encrypted without any data loss. The method concentrates on keeping track of just two pixel values, so pixel values are changed only slightly to ensure the masked image is not misleading. For RGB values at their border values of 254 and 255, the masking test cases overcome issues with corner-value susceptibility.
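The pixel-modification idea with special handling at the border values 254/255 can be sketched as follows. The nudging scheme here is invented for illustration (the paper's exact rule is not given in the abstract): each pixel is shifted by at most 2 to encode one bit, stepping downward at the border so the result stays in range.

```python
# Illustrative pixel-modification sketch (the embedding rule is
# hypothetical, not the authors' exact scheme).

def embed_bits(pixels, bits):
    """Nudge each pixel slightly to carry one hidden bit."""
    out = []
    for p, b in zip(pixels, bits):
        if p >= 254:               # border-value case: step down instead of up
            out.append(p - 2 + b)
        else:
            out.append(p + b)
    return out

def extract_bits(original, masked):
    """Recover the hidden bits by comparing masked pixels to the original."""
    return [abs(m - o) % 2 for o, m in zip(original, masked)]

px = [100, 254, 255, 7]
bits = [1, 0, 1, 1]
masked = embed_bits(px, bits)
recovered = extract_bits(px, masked)
```

The at-most-2 change keeps the masked image visually faithful while remaining decodable, which is the property the abstract's border-value test cases are probing.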