141 research outputs found
Generalized Threshold Decoding of Convolutional Codes
It is shown that any rate 1/b systematic convolutional code over GF(p) can be decoded up to its minimum distance with respect to the decoding constraint length by a one-step threshold decoder. It is further shown that this decoding method can be generalized in a natural way to allow “decoding” of a received sequence in its unquantized analog form.
A study of digital holographic filters generation. Phase 2: Digital data communication system, volume 1
An empirical study of the performance of Viterbi decoders in bursty channels was carried out, and an improved algebraic decoder for nonsystematic codes was developed. The hybrid algorithm was simulated for the (2,1), k = 7 code on a computer using 20 channels with various error statistics, ranging from purely random-error to purely bursty channels. The hybrid system outperformed both the algebraic and the Viterbi decoders in every case except the 1% random-error channel, where the Viterbi decoder produced one fewer bit of decoding error.
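For context, the Viterbi decoder studied above is a maximum-likelihood search over the code trellis. The sketch below is a minimal hard-decision decoder for a rate-1/2 code; it uses the classic (7,5) octal generator pair with constraint length 3 rather than the k = 7 code from the study, purely to keep the trellis small.

```python
def viterbi_decode(bits, n_mem=2, gens=(0b111, 0b101)):
    """Hard-decision Viterbi decoding of a rate-1/2 convolutional code.

    State = the n_mem previous input bits (most recent in the high bit);
    gens are the generator polynomials, here the classic (7,5) octal pair.
    """
    n_states = 1 << n_mem
    INF = float("inf")
    pm = [0.0] + [INF] * (n_states - 1)  # path metrics; start in state 0
    history = []
    for i in range(0, len(bits) - 1, 2):
        received = bits[i:i + 2]
        new_pm = [INF] * n_states
        back = [None] * n_states
        for s in range(n_states):
            if pm[s] == INF:
                continue
            for u in (0, 1):
                reg = (u << n_mem) | s            # input bit + register
                out = [bin(reg & g).count("1") & 1 for g in gens]
                ns = reg >> 1                     # register shifts right
                metric = pm[s] + sum(o != r for o, r in zip(out, received))
                if metric < new_pm[ns]:
                    new_pm[ns], back[ns] = metric, (s, u)
        history.append(back)
        pm = new_pm
    # trace back from the best final state
    s = pm.index(min(pm))
    decoded = []
    for back in reversed(history):
        s, u = back[s]
        decoded.append(u)
    return decoded[::-1]
```

Feeding it the error-free encoding of the bits 1, 0, 1, 1 (plus two flush bits) recovers the input exactly, and a single flipped channel bit is still corrected, which is the behaviour the burst-channel study probes at scale.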
Polynomial-Division-Based Algorithms for Computing Linear Recurrence Relations
Sparse polynomial interpolation, sparse linear system solving and modular rational reconstruction are fundamental problems in computer algebra. They come down to computing the linear recurrence relations of a sequence with the Berlekamp-Massey algorithm. Likewise, sparse multivariate polynomial interpolation and multidimensional cyclic code decoding require guessing the linear recurrence relations of a multivariate sequence.

Several algorithms solve this problem. The so-called Berlekamp-Massey-Sakata algorithm (1988) uses polynomial additions and shifts by a monomial. The Scalar-FGLM algorithm (2015) relies on linear algebra operations on a multi-Hankel matrix, a multivariate generalization of a Hankel matrix. The Artinian Gorenstein border basis algorithm (2017) uses a Gram-Schmidt process.

We propose a new algorithm for computing the Gröbner basis of the ideal of relations of a sequence based solely on multivariate polynomial arithmetic. This algorithm allows us both to revisit the Berlekamp-Massey-Sakata algorithm through the use of polynomial divisions and to completely revise the Scalar-FGLM algorithm without linear algebra operations.

A key observation in the design of this algorithm is to work on the mirror of the truncated generating series, allowing us to use polynomial arithmetic modulo a monomial ideal. The algorithm appears to have some similarities with Padé approximants of this mirror polynomial.

In addition to the material in the paper published at the ISSAC conference, we give an adaptive variant of this algorithm that takes the shape of the final Gröbner basis into account gradually as it is discovered. The main advantage of this variant is that its complexity, in terms of both operations and sequence queries, depends only on the output Gröbner basis.

All these algorithms have been implemented in Maple and we report on our comparisons.
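Since the univariate Berlekamp-Massey algorithm is the common starting point of all the methods above, a compact sketch over GF(p) may be useful. This is a textbook rendering, not the authors' Maple code: it returns the connection polynomial C(x) and the recurrence length L.

```python
def berlekamp_massey(s, p):
    """Shortest linear recurrence generating sequence s over GF(p), p prime.

    Returns (C, L): C = [c0, c1, ..., cL] is the connection polynomial,
    meaning s[n] + c1*s[n-1] + ... + cL*s[n-L] = 0 (mod p) for n >= L.
    """
    C = [1]            # current connection polynomial C(x)
    B = [1]            # copy of C before the last length change
    L, m, b = 0, 1, 1  # L = recurrence length, b = last nonzero discrepancy
    for n in range(len(s)):
        # discrepancy: how badly the current recurrence misses s[n]
        d = s[n]
        for i in range(1, L + 1):
            d = (d + C[i] * s[n - i]) % p
        if d == 0:
            m += 1
            continue
        coef = (d * pow(b, p - 2, p)) % p  # d / b in GF(p) (Fermat inverse)
        T = C[:]
        # C(x) <- C(x) - (d/b) * x^m * B(x)
        if len(C) < len(B) + m:
            C += [0] * (len(B) + m - len(C))
        for i, bi in enumerate(B):
            C[i + m] = (C[i + m] - coef * bi) % p
        if 2 * L <= n:
            L, B, b, m = n + 1 - L, T, d, 1
        else:
            m += 1
    return C, L
```

For example, the Fibonacci sequence reduced mod 7 yields the recurrence s[n] = s[n-1] + s[n-2], i.e. C(x) = 1 - x - x², encoded as [1, 6, 6] with L = 2.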
Brain-Inspired Computational Intelligence via Predictive Coding
Artificial intelligence (AI) is rapidly becoming one of the key technologies
of this century. The majority of results in AI thus far have been achieved
using deep neural networks trained with the error backpropagation learning
algorithm. However, the ubiquitous adoption of this approach has highlighted
some important limitations such as substantial computational cost, difficulty
in quantifying uncertainty, lack of robustness, unreliability, and biological
implausibility. Addressing these limitations may require
schemes that are inspired and guided by neuroscience theories. One such theory,
called predictive coding (PC), has shown promising performance in machine
intelligence tasks, exhibiting exciting properties that make it potentially
valuable for the machine learning community: PC can model information
processing in different brain areas, can be used in cognitive control and
robotics, and has a solid mathematical grounding in variational inference,
offering a powerful inversion scheme for a specific class of continuous-state
generative models. With the hope of foregrounding research in this direction,
we survey the literature that has contributed to this perspective, highlighting
the many ways that PC might play a role in the future of machine learning and
computational intelligence at large.
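As a toy illustration of the variational-inference grounding mentioned above, the sketch below runs predictive-coding inference in the simplest possible setting: a linear generative model with a unit Gaussian prior, where latent activity is updated only by local prediction errors until the energy is minimised. The model, sizes and step size are illustrative choices, not taken from the survey.

```python
import numpy as np

def pc_infer(y, W, n_steps=1000, lr=0.02):
    """Infer latent causes z for observation y under the linear generative
    model y ~ W z with a unit Gaussian prior, by gradient descent on the
    prediction-error energy E = 0.5*||y - W z||^2 + 0.5*||z||^2."""
    z = np.zeros(W.shape[1])
    for _ in range(n_steps):
        e = y - W @ z              # prediction error at the data layer
        z += lr * (W.T @ e - z)    # local, error-driven update
    return z

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 3))          # generative weights (illustrative)
y = W @ np.array([1.0, -2.0, 0.5])   # an observation with known causes
z_hat = pc_infer(y, W)
```

At convergence z_hat equals the ridge-regularised solution (WᵀW + I)⁻¹Wᵀy, i.e. the posterior mean of this Gaussian model, yet every update uses only locally available prediction errors; this locality is one reason PC is considered biologically more plausible than backpropagation.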
Terminological Methods in Lexicography: Conceptualising, Organising, and Encoding Terms in General Language Dictionaries
General language dictionaries show inconsistencies in terms of uniformity and scientificity in the treatment of specialised lexicographic content. By analysing the presence and treatment of terms in general language dictionaries, we propose a more uniform and scientifically rigorous treatment of this content, considering the necessity of compiling and aligning future lexical resources according to interoperable standards. We begin from the premise that the treatment of lexical items, whether lexical units (words in general) or terminological units (terms or words belonging to particular subject fields), must be differentiated, and resort to terminological methods to treat dictionary terms. Our approach assumes that terminology – in its dual dimension, both linguistic and conceptual – and lexicography, as interdisciplinary domains, can be complementary. Thus, we present theoretical (improvement of metalanguage and lexicographic description based on terminological assumptions) and practical (consistent representation of lexicographic data) objectives that aim to facilitate the organisation, description and consistent modelling of lexicographic components, namely the hierarchy of domain labels, as they are specialised lexicon identification markers. We also want to facilitate the drafting of definitions, which can be optimised and elaborated with greater scientific precision by following a terminological approach for the treatment of terms. We analysed the dictionaries developed by three different academic institutions: the Academia das Ciências de Lisboa, the Real Academia Española and the Académie Française, which represent a valuable legacy of the European academic lexicographic tradition.
The initial analysis includes an exhaustive survey and comparison of the domain labels used, as well as a debate on the chosen options and a comparative study of the treatment of the terms. We then developed a methodological proposal for the treatment of terms in general language dictionaries, exemplified using terms from two domains, GEOLOGY and FOOTBALL, taken from the 2001 edition of the dictionary of the Academia das Ciências de Lisboa. We revised the selected terms according to the defended terminological principles, giving rise to revised/new specialised meanings for the first digital edition of this dictionary. We represent and annotate the data using the TEI Lex-0 specifications, a TEI (Text Encoding Initiative) subset for encoding lexicographic data. We also highlight the importance of having hierarchical domain labels instead of a simple list of domains, which are beneficial to the data organisation itself, to correspondence and to possible future alignments between different lexicographic resources. Our investigation revealed the following: a) structural models of lexical resources are complex and contain information of a diverse nature; b) domain labels in general language dictionaries are flat, unbalanced, inconsistent and often outdated, making it necessary to hierarchise them in order to organise specialised knowledge; c) the criteria adopted for marking terms and the formulae used in definitions are disparate; d) the treatment of terms is heterogeneous and formulated differently, so terminological methods can help lexicographers to draft definitions; e) the application of interdisciplinary terminological and lexicographic methods, and of standards, is advantageous because it allows the construction of structured, conceptually organised, linguistically accurate and interoperable lexical databases. In short, we seek to contribute to the urgent issue of solving problems that affect the sharing, alignment and linking of lexicographic data.
Resource Management for Edge Computing in Internet of Things (IoT)
The large number of devices in the Internet of Things (IoT) and their continuous data collection lead to rapid growth in the amount of collected data. Processing all of this data on central cloud servers is inefficient and in part even impossible or unnecessary. Data processing is therefore being shifted to the edge of the network, which has led to the concepts of edge computing. Processing information close to the data source (e.g. on gateways and edge devices) not only reduces the heavy load on central servers and networks, it also lowers the latency of real-time applications, since the potentially unreliable communication with cloud servers and its unpredictable network latency is avoided. Current IoT architectures use gateways to establish application-specific connections to IoT devices. In typical configurations, several IoT edge devices share one IoT gateway. Because of a gateway's limited bandwidth and computing capacity, the quality of service (QoS) of the connected IoT edge devices must be adapted over time, not only to satisfy the requirements of each device's user, but also to accommodate the QoS needs of the other IoT edge devices sharing the same gateway.
This thesis first examines essential technologies for the IoT and existing trends. It presents characteristic properties of the IoT for the embedded domain as well as a comprehensive IoT perspective on embedded systems. Several healthcare applications are studied and implemented in order to derive a model of their data-processing software. This application model helps to identify different operating modes. IoT systems expect edge devices to support multiple operating modes so that they can adapt to changing scenarios at run time, e.g. power-saving modes that preserve critical functionality when battery reserves are low, or a mode that raises the quality of service at the user's request. These modes either use different offloading schemes (e.g. transmitting raw data, partially processed data, or only the final result) or different service qualities. Operating modes differ in their resource requirements both on the device (e.g. energy consumption) and on the gateway (e.g. communication bandwidth, computing power, memory, etc.). Selecting the best operating mode for edge devices is challenging given the limited resources at the network edge (e.g. bandwidth and computing power of the shared gateway), the diverse constraints of the IoT edge devices (e.g. battery lifetime, quality of service, etc.) and the run-time variability at the edge of the IoT infrastructure. This thesis develops and presents fast and efficient techniques for selecting operating modes.
When IoT devices are within range of several gateways, managing the shared resources and selecting operating modes for the devices becomes even more complex. This thesis presents a distributed, trade-based device-management mechanism for IoT systems with multiple gateways. The mechanism targets the combined problem of binding (i.e. assigning a gateway to each IoT device) and allocation (i.e. determining the resources assigned to each device). Starting from an initial configuration, the gateways negotiate and communicate with one another and migrate IoT devices between gateways whenever doing so increases the utility of the overall system.
This thesis also presents application-specific optimisations for IoT devices. Three healthcare applications were realised and studied for wearable IoT devices. A novel compression method is also presented that is particularly suited to IoT applications processing bio-signals for health monitoring. This technique reduces the amount of data the IoT device must transmit, thereby reducing resource utilisation on the device and on the shared gateway.
To evaluate the proposed techniques and mechanisms, several applications were studied on IoT platforms to determine their parameters, such as execution time and resource usage. These parameters were then used in a framework that models the IoT network, captures the interaction between devices and gateway, and measures the communication overhead as well as the battery lifetime and quality of service achieved by the devices. The operating-mode selection algorithms were additionally implemented on IoT platforms to measure their execution-time and memory overheads.
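To make the operating-mode selection problem concrete, the hypothetical sketch below picks one mode per device under a shared gateway bandwidth budget using a greedy utility-per-bandwidth heuristic. It does not reproduce the thesis's actual algorithms, and all device names and numbers are invented.

```python
def select_modes(devices, bandwidth_budget):
    """Pick one operating mode per device, maximising total utility while
    respecting the shared gateway's bandwidth budget.

    devices: {name: [(utility, bandwidth), ...]}  -- candidate modes.
    Greedy sketch: start every device in its cheapest mode, then repeatedly
    apply the upgrade with the best utility gain per unit of extra bandwidth.
    """
    choice, used = {}, 0.0
    for name, modes in devices.items():
        cheapest = min(modes, key=lambda m: m[1])
        choice[name] = cheapest
        used += cheapest[1]
    if used > bandwidth_budget:
        raise ValueError("even the cheapest modes exceed the budget")
    while True:
        best = None
        for name, modes in devices.items():
            u0, b0 = choice[name]
            for u, b in modes:
                if u > u0 and used - b0 + b <= bandwidth_budget:
                    gain = (u - u0) / max(b - b0, 1e-9)
                    if best is None or gain > best[0]:
                        best = (gain, name, (u, b))
        if best is None:
            return choice
        _, name, mode = best
        used += mode[1] - choice[name][1]   # commit the chosen upgrade
        choice[name] = mode
```

With two hypothetical wearables sharing a 4.0-unit gateway, say an ECG sensor with modes (utility 1, 1.0), (3, 2.0), (5, 4.0) and an SpO2 sensor with (1, 0.5), (2, 1.5), the greedy pass settles on the mid-quality modes for both; the top ECG mode would exceed the shared budget.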
The History of Lawrence University, 1847-1925
viii, 493 pages. William Francis Raney, David G. Ormsby Professor of European History, was on the faculty at Lawrence University from 1920 to 1955. He researched and wrote this extensive history of Lawrence University between his retirement in 1955 and his death in 1962. Marshall Hulbert prepared the work for printing, and a limited run was published by Lawrence University in 1984. https://lux.lawrence.edu/archives_selections/1005/thumbnail.jp
3D Medical Image Lossless Compressor Using Deep Learning Approaches
The ever-increasing importance of accelerated information processing, communication, and storage is a major requirement of the big-data era. With the extensive rise in data availability, convenient information acquisition, and growing data rates, efficient handling becomes a critical challenge. Even with advanced hardware developments and the availability of multiple Graphics Processing Units (GPUs), there is still strong demand to utilise these technologies effectively. Healthcare systems are one of the domains yielding explosive data growth, especially considering modern scanners, which annually produce higher-resolution and more densely sampled medical images with ever-growing requirements for storage capacity. The bottleneck in data transmission and storage can essentially be handled with an effective compression method. Since medical information is critical and plays an influential role in diagnostic accuracy, exact reconstruction with no loss in quality must be guaranteed, which is the main objective of any lossless compression algorithm. Given the revolutionary impact of Deep Learning (DL) methods in solving many tasks while achieving state-of-the-art results, including in data compression, tremendous opportunities for contributions open up. While considerable efforts have been made to address lossy performance using learning-based approaches, less attention has been paid to lossless compression. This PhD thesis investigates and proposes novel learning-based approaches for compressing 3D medical images losslessly.
Firstly, we formulate the lossless compression task as a supervised sequential prediction problem, whereby a model learns a projection function to predict a target voxel given a sequence of samples from its spatially surrounding voxels. Using such 3D local sampling information efficiently exploits the spatial similarities and redundancies of a volumetric medical context under this prediction paradigm. The proposed NN-based data predictor is trained to minimise the differences with the original data values, while the residual errors are encoded using arithmetic coding to allow lossless reconstruction.
Following this, we explore the effectiveness of Recurrent Neural Networks (RNNs) as 3D predictors for learning the mapping function from the spatial medical domain (16 bit depths). We analyse the generalisability and robustness of Long Short-Term Memory (LSTM) models in capturing the 3D spatial dependencies of a voxel's neighbourhood while utilising samples taken from various scanning settings. We evaluate our proposed MedZip models in losslessly compressing unseen Computerized Tomography (CT) and Magnetic Resonance Imaging (MRI) modalities, compared to other state-of-the-art lossless compression standards.
This work then investigates input configurations and sampling schemes for a many-to-one sequence prediction model, specifically for compressing 3D medical images (16 bit depths) losslessly. The main objective is to determine the optimal practice for enabling the proposed LSTM model to achieve a high compression ratio and fast encoding-decoding performance. A solution to the problem of non-deterministic environments is also proposed, allowing models to run in parallel without much loss of compression performance. Experimental evaluations against well-known lossless codecs were carried out on datasets acquired by different hospitals, representing different body segments and distinct scanning modalities (i.e. CT and MRI).
To conclude, we present a novel data-driven sampling scheme utilising weighted gradient scores for training LSTM prediction-based models. The objective is to determine whether some training samples are significantly more informative than others, specifically in medical domains where samples are available on a scale of billions. The effectiveness of models trained with the presented importance-sampling scheme was evaluated against alternative strategies such as uniform, Gaussian, and slice-based sampling.
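The predict-then-encode-residuals idea at the core of this thesis can be illustrated with a deliberately simple stand-in: a fixed causal predictor over a 3D volume whose integer residuals allow exact reconstruction. The thesis uses a learned LSTM predictor and an arithmetic coder instead; here the residual array itself plays the role of the compressed representation.

```python
import numpy as np

def encode(volume):
    """Causal prediction: each voxel is predicted from an already-seen
    neighbour (previous slice, else previous row, else previous voxel)
    and only the integer residual is kept."""
    v = volume.astype(np.int64)
    pred = np.zeros_like(v)
    pred[1:] = v[:-1]              # previous slice predicts the next
    pred[0, 1:] = v[0, :-1]        # first slice: previous row
    pred[0, 0, 1:] = v[0, 0, :-1]  # first row: previous voxel
    return v - pred                # residuals cluster near zero

def decode(residuals):
    """Replay the same predictions in scan order to reconstruct exactly."""
    v = np.zeros_like(residuals)
    for z, y, x in np.ndindex(residuals.shape):
        if z > 0:
            p = v[z - 1, y, x]
        elif y > 0:
            p = v[z, y - 1, x]
        elif x > 0:
            p = v[z, y, x - 1]
        else:
            p = 0
        v[z, y, x] = p + residuals[z, y, x]
    return v
```

Because the decoder sees exactly the causal context the encoder used, the round trip is bit-exact; a good predictor concentrates the residual distribution near zero, which is what makes the subsequent entropy coding effective.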
Structured manifolds for motion production and segmentation: a structured Kernel Regression approach
Steffen JF. Structured manifolds for motion production and segmentation: a structured Kernel Regression approach. Bielefeld (Germany): Bielefeld University; 2010.
Hidden Markov Models
Hidden Markov Models (HMMs), although known for decades, have become remarkably popular in recent years and are still under active development. This book presents theoretical issues and a variety of HMM applications in speech recognition and synthesis, medicine, neurosciences, computational biology, bioinformatics, seismology, environmental protection and engineering. I hope that readers will find this book useful and helpful for their own research.
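As a pointer to the kind of machinery the book covers, the forward algorithm, the basic HMM likelihood computation, fits in a few lines. The two-state model below is an invented toy, not taken from the book.

```python
import numpy as np

def forward(obs, pi, A, B):
    """Likelihood P(obs) of an observation sequence under an HMM.
    pi[i]: initial state probability, A[i, j]: transition i -> j,
    B[i, o]: probability of emitting symbol o from state i."""
    alpha = pi * B[:, obs[0]]          # initialise with the first symbol
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]  # propagate, then absorb emission
    return alpha.sum()

# a two-state toy model (numbers are illustrative)
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
likelihood = forward([0, 1, 0], pi, A, B)
```

The recursion sums over all state paths in O(TN²) time instead of the O(Nᵀ) brute-force enumeration, which is what makes HMMs practical in the application areas the book surveys.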