Algorithms & implementation of advanced video coding standards
Advanced video coding standards have become widely deployed techniques used in numerous products, such as broadcasting, video conferencing, mobile television and Blu-ray discs. New compression techniques are gradually added to video coding standards, so that a 50% bit-rate reduction becomes achievable roughly every five years. However, this trend has also brought many problems, such as dramatically increased computational complexity, the coexistence of multiple standards, and steadily growing development time. To address these problems, this thesis investigates efficient algorithms for the latest video coding standard, H.264/AVC. Two aspects of the H.264/AVC standard are examined: (1) speeding up intra 4x4 prediction with a parallel architecture, and (2) applying an efficient rate control algorithm, based on a deviation measure, to intra frames. Another aim of this thesis is to develop low-complexity algorithms for an MPEG-2 to H.264/AVC transcoder. The thesis focuses on three main mapping algorithms and one computational-complexity reduction algorithm: motion vector mapping, block mapping, field-frame mapping, and an efficient mode-ranking algorithm. Finally, a new video coding framework methodology intended to reduce development time is examined. The thesis explores an implementation of the MPEG-4 Simple Profile within the RVC framework, and solves a key problem: the automatic generation of variable-length decoder tables. Moreover, another important video coding standard, DV/DVCPRO, is modelled with the RVC framework; consequently, alongside the existing MPEG-4 Simple Profile and the Chinese audio/video standard, a new member is added to the RVC framework family. Part of the research presented in this thesis therefore targets algorithms and implementations of video coding standards. Within this broad topic, three main problems are investigated, and the results show that the methodologies presented are efficient and encouraging.
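The intra 4x4 prediction mentioned above can be illustrated with a minimal sketch of three of the nine H.264 intra 4x4 luma modes (vertical, horizontal and DC) together with SAD-based mode selection. This is an illustrative reconstruction from the standard, not the thesis's parallel architecture; the function names are hypothetical.

```python
import numpy as np

def intra4x4_predict(top, left):
    """Three basic H.264 intra 4x4 predictors from the reconstructed
    neighbouring pixels: 4 above (top) and 4 to the left (left)."""
    vertical = np.tile(top, (4, 1))                    # copy top row downwards
    horizontal = np.tile(left.reshape(4, 1), (1, 4))   # copy left column rightwards
    dc = np.full((4, 4), (top.sum() + left.sum() + 4) // 8)  # rounded mean
    return {"vertical": vertical, "horizontal": horizontal, "DC": dc}

def best_mode(block, top, left):
    """Pick the predictor with the lowest sum of absolute differences (SAD)."""
    preds = intra4x4_predict(top, left)
    return min(preds, key=lambda m: np.abs(block - preds[m]).sum())
```

In the real encoder all nine modes are evaluated per block; the thesis's contribution is parallelising this search, which the sketch does not attempt.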
Motion correlation based low complexity and low power schemes for video codec
Degree system: new; report number: Kō 3750; degree type: Doctor of Engineering; date conferred: 2012/11/19; Waseda University degree number: Shin 6121. Waseda University.
Fractal-based models for internet traffic and their application to secure data transmission
This thesis studies the application of fractal geometry to covert communications systems. This involves the process of hiding information in background noise, the information being encrypted or otherwise.
Models and methods are considered with regard to two communications systems: (i) wireless communications; (ii) internet communications.
In practice, of course, communication through the Internet cannot be disassociated from wireless communications, as Internet traffic is 'piped' through a network that can include wireless links (e.g. satellite telecommunications). However, in terms of developing models and methods for covert communications in general, points (i) and (ii) above require different approaches and access to different technologies. With regard to (i), we develop two methods based on fractal modulation and multi-fractal modulation. With regard to (ii), we implement a practical method and associated software for the covert transmission of file attachments based on an analysis of Internet traffic noise. In both cases, two fractal models are considered: the first is the standard Random Scaling Fractal model, and the second is a generalisation of this model that incorporates a greater range of spectral properties than the first, a Generalised Random Scaling Fractal Model.
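The Random Scaling Fractal model referred to above is commonly realised by filtering white noise with a power-law transfer function 1/|ω|^q, so that the output has a fractal (self-affine) power spectrum. The sketch below is a generic illustration of that idea, not the thesis's implementation; the function name and normalisation are assumptions, and the generalised model would let q vary with frequency.

```python
import numpy as np

def rsf_noise(n, q, seed=0):
    """Generate a Random-Scaling-Fractal-style signal by shaping the
    spectrum of Gaussian white noise with 1/|f|**q."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(n)
    W = np.fft.rfft(w)
    freqs = np.fft.rfftfreq(n)
    freqs[0] = freqs[1]            # avoid division by zero at DC
    x = np.fft.irfft(W / freqs**q, n)
    return x / np.abs(x).max()     # normalise to [-1, 1]
```

Covert transmission then amounts to embedding (encrypted) data in such noise so that the composite signal remains statistically indistinguishable from the background traffic.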
Post Quantum Cryptography
Master's thesis summary:
Post Quantum Cryptography
Candidate: VAIRA Antonio
While writing my thesis, on which I worked during this final year of my university career, I tried to answer the question:
What is the state of the art of present-day cryptography capable of withstanding an attack by a malicious user in possession of a "large" quantum computer?
By "large" we mean a quantum computer with a register of several thousand qubits, and therefore able to run practically useful quantum algorithms.
To date, only two quantum algorithms have been devised for cryptanalytic purposes; one of them, Shor's algorithm, makes it possible to factor large numbers in a short time.
This "simple" mathematical (or rather algorithmic) problem underpins the security of the public-key cryptosystems in use today. In a present-day public-key system such as RSA, the difficulty of breaking the scheme is tied to the algorithmic difficulty of factoring large numbers, where difficulty implies not impossibility but rather an expenditure of resources and time for the attack greater than the value of the information that the attack would yield.
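The dependence of RSA on factoring can be seen in a textbook toy example: once the modulus's two primes are known, the private exponent follows immediately. The primes below are deliberately tiny and the scheme insecure; real keys use moduli of 2048 bits or more.

```python
# Toy RSA (illustration only): security rests entirely on keeping p and q secret.
p, q = 61, 53
n = p * q                 # public modulus
phi = (p - 1) * (q - 1)   # Euler's totient, computable only if p, q are known
e = 17                    # public exponent, coprime with phi
d = pow(e, -1, phi)       # private exponent: modular inverse (Python 3.8+)

def encrypt(m):
    return pow(m, e, n)

def decrypt(c):
    return pow(c, d, n)
```

Shor's algorithm recovers p and q from n in polynomial time on a quantum computer, at which point computing d (and hence decrypting) is trivial, exactly as above.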
In my work I first examined, in broad outline, both the cryptosystems in use today and the quantum algorithms that could be used to compromise them in a hypothetical future. I then focused on the "mainstream" alternatives to today's cryptosystems, namely the post-quantum algorithms proposed by the community of cryptanalysts active in the field, and tried to build a framework for evaluating their practical deployment.
Within this framework I chose to study the lattice-based family in depth (algorithms whose security rests on the difficulty of solving problems on multidimensional lattices). Within this family of cryptosystems I identified one in particular, NTRU, as especially promising for deployment in the near future inside a corporate PKI (an example close to me is the PKI developed within Airbus, where I worked during this past year).
I then studied another algorithm from the same lattice-based family, ring-LWE, which is far more interesting from an academic standpoint but still relatively little known. I subsequently devised some modifications to the algorithm with the aim of making it faster and more reliable, thereby increasing the probability of recovering a correct message from the corresponding ciphertext.
As a final step, I implemented the modified encryption algorithm in a high-level language (very close to Python) and compared both execution times and resource usage against an unmodified version implemented in the same language, making the comparisons as consistent as possible. The results show that the modified version of ring-LWE is extremely promising but requires deeper cryptanalytic scrutiny; that is, the algorithm must be stress-tested to determine whether it introduces additional vulnerabilities relative to the original version.
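The unmodified ring-LWE scheme that such a comparison starts from can be sketched as a textbook toy over the ring Z_Q[x]/(x^N + 1): decryption succeeds whenever the accumulated noise stays below Q/4 per coefficient, which is exactly the failure probability the modifications above aim to reduce. The parameters and names below are illustrative, and the scheme is deliberately tiny and insecure.

```python
import random

N, Q = 8, 3329  # toy ring parameters: Z_Q[x] / (x^N + 1)

def polymul(a, b):
    """Negacyclic convolution: multiply polynomials mod x^N + 1 and mod Q."""
    res = [0] * (2 * N)
    for i in range(N):
        for j in range(N):
            res[i + j] += a[i] * b[j]
    return [(res[i] - res[i + N]) % Q for i in range(N)]  # x^N == -1

def add(a, b):
    return [(x + y) % Q for x, y in zip(a, b)]

def small():
    """Noise/secret polynomial with coefficients in {-1, 0, 1}."""
    return [random.choice([-1, 0, 1]) for _ in range(N)]

def keygen():
    a = [random.randrange(Q) for _ in range(N)]
    s, e = small(), small()
    return (a, add(polymul(a, s), e)), s      # pk = (a, a*s + e), sk = s

def encrypt(pk, bits):
    a, b = pk
    r, e1, e2 = small(), small(), small()
    u = add(polymul(a, r), e1)
    v = add(add(polymul(b, r), e2), [bit * (Q // 2) for bit in bits])
    return u, v

def decrypt(sk, ct):
    u, v = ct
    d = [(x - y) % Q for x, y in zip(v, polymul(u, sk))]  # = m*Q/2 + noise
    return [1 if Q // 4 < x < 3 * Q // 4 else 0 for x in d]
```

Here the residual noise per coefficient is bounded by 2N + 1 = 17, far below Q/4, so this toy instance always decrypts correctly; at realistic parameters the bound is probabilistic, which is where decryption-failure rates enter.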
In conclusion, the ring-LWE algorithm, which I studied most closely, offers several interesting avenues for reflection but is far from ready to be deployed in a real infrastructure. Should an immediate replacement be needed, it is generally good cryptographic practice to fall back on well-trodden paths and older, trusted implementations; within the post-quantum algorithms, a clear example is NTRU (an IEEE standard, Std 1363.1, since 2008).
As a final personal reflection, I would add that although the physics in this thesis may appear marginal, it is only thanks to the training I acquired during my studies that I was able to carry out the work without great difficulty, gaining a truly constructive experience in a large company such as Airbus. After all, a background in physics makes us "problem-solvers" in the most disparate situations.
Vector Signal Processors in Data Compression and Image Processing
The objective is to evaluate the applicability of the Vector Signal Processor to real time signal processing for data compression or manipulation. Particular emphasis has been placed on its role as a co-processor and the contribution that it might be expected to make during joint activities with the host.
Such activities would see the combination used as the embedded computing subsystem of a fax machine, or as an image-processing unit in desktop publishing. In these cases the hypothesis is that the Vector Signal Processor would act as an accelerator for many of the computationally intensive processes involved.
After a review of current data compression techniques, and of specialised architectures which may also be appropriate, it is concluded that the Vector Signal Processor is the best option available. The operational details are then discussed. In order to compare experimental results approximately with those of other workers, a benchmarking exercise is undertaken.
Following this is the core of the study, which details schemes for the compression of data sources involving character symbols, line drawings, and grey-scale pictures. This involves pattern matching and substitution, transform coding, and quadtrees.
New encoding procedures are suggested, based on Morse code for the secondary encoding of symbols and on delta modulation for quadtrees. Image entity manipulation is discussed, followed by some speculative work on neural networks and error control coding.
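The quadtree representation that the delta-modulation encoding builds on recursively splits a square image into four quadrants until each block is uniform. The sketch below shows only this plain decomposition, not the thesis's secondary encoding; the function name and leaf format are assumptions.

```python
import numpy as np

def quadtree(img, x=0, y=0, size=None):
    """Decompose a square binary image into uniform blocks.
    Returns a list of leaves (x, y, size, value)."""
    if size is None:
        size = img.shape[0]
    block = img[y:y + size, x:x + size]
    if size == 1 or block.min() == block.max():
        return [(x, y, size, int(block[0, 0]))]   # uniform block: one leaf
    h = size // 2
    leaves = []
    for dy in (0, h):                             # recurse into 4 quadrants
        for dx in (0, h):
            leaves += quadtree(img, x + dx, y + dy, h)
    return leaves
```

Compression comes from the fact that large uniform regions collapse to single leaves; a secondary code (delta modulation in the thesis) then encodes the leaf stream compactly.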
It is concluded that some processes are well served by the Vector Signal Processor, but that the lack of conditional decision making and the difficulty of performing certain arithmetic functions make the processor unwieldy in its necessary interactions with the host.
Large Scale Kernel Methods for Fun and Profit
Kernel methods are among the most flexible classes of machine learning models with strong theoretical guarantees. Wide classes of functions can be approximated arbitrarily well with kernels, while fast convergence and learning rates have been formally shown to hold. Exact kernel methods are known to scale poorly with increasing dataset size, and we believe that one of the factors limiting their usage in modern machine learning is the lack of scalable and easy to use algorithms and software. The main goal of this thesis is to study kernel methods from the point of view of efficient learning, with particular emphasis on large-scale data, but also on low-latency training, and user efficiency. We improve the state-of-the-art for scaling kernel solvers to datasets with billions of points using the Falkon algorithm, which combines random projections with fast optimization. Running it on GPUs, we show how to fully utilize available computing power for training kernel machines. To boost the ease-of-use of approximate kernel solvers, we propose an algorithm for automated hyperparameter tuning. By minimizing a penalized loss function, a model can be learned together with its hyperparameters, reducing the time needed for user-driven experimentation. In the setting of multi-class learning, we show that – under stringent but realistic assumptions on the separation between classes – a wide set of algorithms needs much fewer data points than in the more general setting (without assumptions on class separation) to reach the same accuracy. The first part of the thesis develops a framework for efficient and scalable kernel machines. This raises the question of whether our approaches can be used successfully in real-world applications, especially compared to alternatives based on deep learning which are often deemed hard to beat. The second part aims to investigate this question on two main applications, chosen because of the paramount importance of having an efficient algorithm. 
First, we consider the problem of instance segmentation of images taken from the iCub robot. Here Falkon is used as part of a larger pipeline, but the efficiency afforded by our solver is essential to ensure smooth human-robot interactions. In the second instance, we consider time-series forecasting of wind speed, analysing the relevance of different physical variables to the predictions themselves. We investigate different schemes to adapt i.i.d. learning to the time-series setting. Overall, this work aims to demonstrate, through novel algorithms and examples, that kernel methods are up to computationally demanding tasks, and that there are concrete applications in which their use is warranted and more efficient than that of other, more complex, and less theoretically grounded models.
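The random-projection idea underlying Falkon can be sketched as plain Nyström kernel ridge regression: restrict the solution to m randomly chosen centers, reducing the n x n kernel system to an m x m one. This is a generic NumPy illustration, not the actual Falkon solver (which adds a preconditioned conjugate-gradient iteration and GPU kernels); all names and parameters are assumptions.

```python
import numpy as np

def gaussian_kernel(A, B, sigma=1.0):
    """Gaussian (RBF) kernel matrix between rows of A and rows of B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

def nystrom_krr(X, y, m=50, lam=1e-4, sigma=1.0, seed=0):
    """Nystrom kernel ridge regression: solve only for m random centers."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=m, replace=False)]
    Knm = gaussian_kernel(X, centers, sigma)          # n x m
    Kmm = gaussian_kernel(centers, centers, sigma)    # m x m
    # Normal equations: (Knm^T Knm + lam * n * Kmm) alpha = Knm^T y
    alpha = np.linalg.solve(Knm.T @ Knm + lam * len(X) * Kmm, Knm.T @ y)
    return centers, alpha

def predict(Xtest, centers, alpha, sigma=1.0):
    return gaussian_kernel(Xtest, centers, sigma) @ alpha
```

The key point is the cost: the direct m x m solve here is O(n m^2 + m^3) instead of the O(n^3) of exact kernel ridge regression, which is what makes billion-point datasets approachable when combined with iterative solvers.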