Coding and Probabilistic Inference Methods for Data-Dependent Two-Dimensional Channels
Recent advances in magnetic recording systems, optical recording devices and flash memory drives necessitate the study of two-dimensional (2-D) coding techniques for reliable storage and retrieval of information. Most channels in such systems introduce errors in response to certain data patterns, and messages containing these patterns are more prone to errors than others. For example, in a single-level cell flash memory channel, inter-cell interference (ICI) is at its maximum when 101 patterns are programmed over adjacent cells in either the horizontal or vertical direction. As another example, in two-dimensional magnetic recording channels, 2-D isolated-bits patterns have been shown empirically to be the dominant error event: during the read-back process, inter-symbol interference (ISI) and inter-track interference (ITI) arise when these patterns are recorded on the magnetic medium. Shannon, in his seminal work "A Mathematical Theory of Communication," presented two techniques for reliable transmission of messages over noisy channels, namely error correction coding and constrained coding. In the first method, messages are protected via an error correction code (ECC) from random errors that are independent of the input data. The theory of ECCs is well studied, and efficient code construction methods have been developed for simple binary channels, additive white Gaussian noise (AWGN) channels and partial response channels. Constrained coding, on the other hand, reduces the likelihood of corruption by removing problematic patterns before transmission over data-dependent channels. Prominent examples include the family of binary one-dimensional (1-D) and 2-D (d, k)-run-length-limited (RLL) constraints, which improve resilience to ISI and aid timing recovery and synchronization for bandwidth-limited partial response channels; here d and k denote the minimum and maximum numbers of admissible zeros between two successive ones in any direction of the array. In principle, the ultimate coding approach for such data-dependent channels is to design a set of sufficiently distinct error correction codewords that also satisfy the channel constraints. Designing channel codewords that satisfy both the ECC and the channel constraints is important because such a design could, in principle, achieve the channel capacity. In practice, however, this is difficult, and we rely on sub-optimal methods such as the forward concatenation method (standard concatenation), the reverse concatenation method (modified concatenation), and combinations of these approaches. This dissertation focuses on the reliable transmission of binary messages over data-dependent 2-D communication channels and addresses several challenges that arise in this setting.
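To make the (d, k)-RLL definition concrete, here is a minimal sketch, assuming a plain-Python representation of a binary sequence as a list of 0s and 1s (the function name and encoding are illustrative, not from the dissertation):

```python
def satisfies_rll(bits, d, k):
    """Check a 1-D (d, k)-RLL constraint: every run of zeros between
    two successive ones must have length at least d and at most k."""
    runs, run, seen_one = [], 0, False
    for b in bits:
        if b == 1:
            if seen_one:              # only runs *between* ones are constrained
                runs.append(run)
            run, seen_one = 0, True
        else:
            run += 1
    return all(d <= r <= k for r in runs)

# (1, 3)-RLL forbids adjacent ones and more than three zeros in a row:
print(satisfies_rll([1, 0, 0, 1, 0, 1], d=1, k=3))  # True
print(satisfies_rll([1, 1, 0, 1], d=1, k=3))        # False: adjacent ones
```

A 2-D RLL constraint applies the same test along every row and column of the array.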
Design of Two-Dimensional Magnetic Recording (TDMR) Detector and Decoder: TDMR achieves high areal densities by shrinking the bit size to dimensions comparable to the magnetic grain size, which results in 2-D ISI and very high media noise. It is therefore critical to handle the media noise jointly with 2-D ISI detection. In this work, we tune the Generalized Belief Propagation (GBP) algorithm to handle the media noise seen in TDMR. We also provide intuition into the nature of the hard decisions produced by the GBP algorithm.
Investigation into Harmful Patterns for TDMR Channels: This work investigates the Voronoi-based media model to study harmful patterns in multi-track shingled recording systems. Through realistic quasi-micromagnetic simulation studies, we identify 2-D data patterns that contribute to high media noise. We examine the generic Voronoi model and present an analysis of multi-track detection with constrained coded data. We show that 2-D constraints imposed on input patterns yield an order-of-magnitude improvement in the bit error rate of TDMR systems.
Understanding of Constraint Gain for TDMR Channels: We study the performance gains of constrained codes in TDMR channels using the notion of constraint gain. We consider Voronoi-based TDMR channels with realistic grain, bit, track and magnetic-head dimensions. Specifically, we investigate the constraint gain for the 2-D no-isolated-bits constraint over Voronoi-based TDMR channels. We focus on schemes that employ the GBP algorithm to obtain information rate estimates for TDMR channels.
Design of Novel Constrained Coding Methods: In this work, we present a deliberate bit flipping (DBF) coding scheme for binary 2-D channels in which specific patterns in the channel inputs are the significant cause of errors. The idea is to eliminate the constrained encoder and instead embed the constraint into an error correction codeword, arranged into a 2-D array, by deliberately flipping the bits that violate the constraint. The DBF method relies on the error correction capability of the code in use, which must be able to correct both the deliberate errors and the channel errors. It is therefore crucial to flip the minimum number of bits so as not to overburden the error correction decoder. We devise a constrained combinatorial formulation that minimizes the number of flipped bits for a given set of harmful patterns, and use the GBP algorithm to find an approximate solution.
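As a rough illustration of the DBF idea, the sketch below greedily flips the center bit of every horizontal or vertical 1-0-1 pattern (the harmful flash-memory pattern mentioned earlier) in a 2-D codeword array. The function name is ours, and the dissertation's actual scheme minimizes the number of flips via a constrained combinatorial formulation solved approximately with GBP, not this greedy pass:

```python
import numpy as np

def dbf_greedy(codeword_2d):
    """Greedy stand-in for deliberate bit flipping (DBF): remove every
    horizontal/vertical 1-0-1 pattern by flipping its center 0 to 1.
    Each flip is a deliberate error that the ECC decoder must correct,
    so fewer flips are better; a greedy pass is not minimal and may
    even create new patterns, which the exact formulation avoids."""
    a = codeword_2d.copy()
    rows, cols = a.shape
    flips = 0
    for i in range(rows):
        for j in range(cols):
            if j + 2 < cols and a[i, j] == 1 and a[i, j+1] == 0 and a[i, j+2] == 1:
                a[i, j+1] = 1   # deliberate flip (horizontal 101)
                flips += 1
            if i + 2 < rows and a[i, j] == 1 and a[i+1, j] == 0 and a[i+2, j] == 1:
                a[i+1, j] = 1   # deliberate flip (vertical 101)
                flips += 1
    return a, flips
```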
Devising Reduced-Complexity Probabilistic Inference Methods: We propose a reduced-complexity GBP that propagates messages in the Log-Likelihood Ratio (LLR) domain. The key novelties of the proposed LLR-GBP are: (i) reduced fixed-point precision for messages instead of the computationally complex floating-point format; (ii) operations performed in the logarithm domain, eliminating the need for multiplications and divisions; and (iii) the use of message ratios, which leads to simple hard-decision mechanisms.
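The log-domain trick underlying LLR-style message passing can be shown in a few lines; this is a generic sketch of the Jacobian logarithm (max-star) operation, not the LLR-GBP update equations themselves:

```python
import math

def max_star(a, b):
    """Jacobian logarithm: a numerically stable log(exp(a) + exp(b)).
    In the log/LLR domain, products of probabilities become sums and
    sums become max-star operations, eliminating multiplications and
    divisions from the message updates."""
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

p1, p2 = 0.8, 0.3
# A product of probabilities becomes a sum of logs:
assert abs(math.log(p1 * p2) - (math.log(p1) + math.log(p2))) < 1e-12
# A sum of probabilities becomes a max-star of logs:
assert abs(math.log(p1 + p2) - max_star(math.log(p1), math.log(p2))) < 1e-12
```

Fixed-point LLR quantization, as in novelty (i), would then replace the floats above with scaled integers.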
High-performance hardware accelerators for image processing in space applications
Mars is a hard place to reach. While there have been many notable success stories in getting probes to the Red Planet, the historical record is full of bad news. The success rate for actually landing on the Martian surface is even worse, roughly 30%. This low success rate is mainly attributable to the characteristics of the Martian environment. Strong winds frequently blow in the Martian atmosphere; this phenomenon typically perturbs a lander's descent trajectory, diverting it from the target one. Moreover, the Martian surface is not an easy place to make a safe landing: it is pitted with many closely spaced craters and large boulders, and characterized by huge mountains and hills (e.g., Olympus Mons is 648 km in diameter and 27 km tall). For these reasons, a mission failure due to landing in a large crater, on big boulders, or on a steeply sloped part of the surface is highly probable.
In recent years, all space agencies have increased their research efforts to improve the success rate of Mars missions. In particular, the two most active research topics are active debris removal and guided landing on Mars.
The former aims at finding new methods to remove space debris using unmanned spacecraft. These must be able to autonomously detect a piece of debris, analyse it to extract its characteristics in terms of weight, speed and dimensions, and eventually rendezvous with it. To perform these tasks, the spacecraft must have strong vision capabilities: it must be able to take pictures and process them with very complex image processing algorithms in order to detect, track and analyse the debris.
The latter aims at increasing the landing-point precision (i.e., shrinking the landing ellipse) on Mars. Future space missions will increasingly adopt Video-Based Navigation systems to assist the entry, descent and landing (EDL) phase of space modules (e.g., spacecraft), enhancing the precision of automatic EDL navigation systems. For instance, recent space exploration missions such as Spirit, Opportunity, and Curiosity used an EDL procedure that follows a fixed, precomputed descent trajectory to reach a precise landing point. This approach guarantees a landing-point precision of no better than about 20 km. Comparing this figure with the characteristics of the Martian environment makes it clear why the probability of mission failure remains very high.
A very challenging problem is to design an autonomously guided EDL system able to further reduce the landing ellipse while guaranteeing avoidance of dangerous areas of the Martian surface (e.g., large craters or big boulders) that could lead to mission failure. The autonomous behaviour of the system is mandatory, since a manually driven approach is not feasible given the distance between Earth and Mars. Because this distance varies from roughly 56 to 400 million km owing to orbital eccentricity, even with signals travelling at the speed of light the one-way transmission time ranges from about 3 to 22 minutes, so a command round trip would be comparable to or far longer than the entire EDL phase.
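As a quick sanity check on these figures (rounded distances, with c ≈ 3 × 10⁸ m/s):

```latex
t_{\text{one-way}} = \frac{d}{c}, \qquad
\frac{56\times 10^{9}\,\mathrm{m}}{3\times 10^{8}\,\mathrm{m/s}} \approx 187\,\mathrm{s} \approx 3.1\,\mathrm{min}, \qquad
\frac{400\times 10^{9}\,\mathrm{m}}{3\times 10^{8}\,\mathrm{m/s}} \approx 1333\,\mathrm{s} \approx 22\,\mathrm{min}.
```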
In both applications, the algorithms must be self-adaptive to environmental conditions. Since the harsh conditions of Mars (and of space in general) are difficult to predict at design time, these algorithms must be able to automatically tune their internal parameters according to the current conditions.
Moreover, real-time performance is another key factor. Since a software implementation of these computationally intensive tasks cannot reach the required performance, these algorithms must be accelerated in hardware.
For these reasons, this thesis presents my research on advanced image processing algorithms for space applications and the associated hardware accelerators. My research activity focused on both the algorithms and their hardware implementations. Concerning the former, I mainly worked on integrating self-adaptability features into existing algorithms; concerning the latter, I studied and validated a methodology to efficiently develop, verify and validate hardware components for accelerating video-based applications. This approach allowed me to develop and test high-performance hardware accelerators that significantly outperform current state-of-the-art implementations.
The thesis is organized into four main chapters.
Chapter 2 starts with a brief introduction to the history of digital image processing. The main content of this chapter is a description of space missions in which digital image processing plays a key role. Particular attention is devoted to the missions on which my research activity has a substantial impact; for these missions, the chapter analyzes and evaluates the state-of-the-art approaches and algorithms in depth.
Chapter 3 analyzes and compares the two technologies used to implement high-performance hardware accelerators, i.e., Application-Specific Integrated Circuits (ASICs) and Field-Programmable Gate Arrays (FPGAs). This information helps the reader understand the main reasons behind the decision of space agencies to adopt FPGAs instead of ASICs for high-performance hardware accelerators in space missions, even though FPGAs are more sensitive to Single Event Upsets (SEUs), i.e., transient errors induced in hardware components by alpha particles and solar radiation in space. Moreover, this chapter describes in depth the three available space-grade FPGA technologies (i.e., one-time programmable, flash-based, and SRAM-based) and the main SEU fault-mitigation techniques that are mandatory when employing space-grade FPGAs in actual missions.
Chapter 4 describes one of the main contributions of my research work: a library of high-performance hardware accelerators for image processing in space applications. The basic idea behind this library is to offer designers a set of validated hardware components that substantially speed up the basic operations commonly used in an image processing chain. In other words, these components can be used directly as elementary building blocks to assemble a complex image processing system without spending time on debugging and validation. The library groups the proposed hardware accelerators into IP-core families. Components in the same family provide the same functionality and share the same input/output interface; this harmonized I/O interface makes it possible to substitute components of the same family within a complex image processing system without modifying the system's communication infrastructure. In addition to analyzing the internal architecture of the proposed components, this chapter presents the methodology used to develop, verify and validate the proposed high-performance image processing hardware accelerators. This methodology involves different programming and hardware description languages in order to support the designer from algorithm modelling up to hardware implementation and validation.
Chapter 5 presents the proposed complex image processing systems. In particular, it uses a set of real case studies, drawn from the most recent space-agency needs, to show how the hardware accelerator components can be assembled into a complex image processing system. In addition to the hardware accelerators contained in the library, the described systems embed innovative ad hoc hardware components and software routines that provide high-performance, self-adaptive image processing functionalities. To demonstrate the benefits of the proposed methodology, each case study concludes with a comparison against current state-of-the-art implementations, highlighting the gains in performance and in self-adaptability to environmental conditions.
Hidden Markov Models
Hidden Markov Models (HMMs), although known for decades, have recently enjoyed a surge of applications and are still under active development. This book presents theoretical issues and a variety of HMM applications in speech recognition and synthesis, medicine, neuroscience, computational biology, bioinformatics, seismology, environmental protection and engineering. I hope that readers will find this book useful and helpful for their own research.
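As a minimal taste of the machinery such applications build on, here is a sketch of the forward algorithm, which computes the likelihood of an observation sequence under an HMM (the toy model and all parameter values are invented for illustration):

```python
import numpy as np

def forward(pi, A, B, obs):
    """Forward algorithm for HMM likelihood.
    pi[i]   : initial probability of hidden state i
    A[i, j] : transition probability from state i to state j
    B[i, k] : probability of emitting symbol k from state i
    obs     : observed symbol indices"""
    alpha = pi * B[:, obs[0]]          # initialize with the first observation
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]  # propagate, then absorb the next symbol
    return alpha.sum()                 # total likelihood over all end states

# Toy two-state model emitting symbol 0 or 1:
pi = np.array([0.6, 0.4])
A  = np.array([[0.7, 0.3],
               [0.4, 0.6]])
B  = np.array([[0.6, 0.4],
               [0.2, 0.8]])
print(forward(pi, A, B, [0, 1, 1]))  # P(observation sequence)
```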
Writing Illness and Identity in Seventeenth-century Britain
This thesis begins from the observation that seventeenth-century life-writing appears to have little recourse to the age's revolutionary medical developments when describing personal illness. It therefore seeks to explore the available textual frameworks for writing autobiographical accounts of illness, and the rhetorical strategies that writers of such texts used for adapting their illnesses to those frameworks.
My research is contextualised within discussions of early modern selfhood. Like a number of recent scholars, I reject the Burckhardtian assumption of a vibrant Renaissance self, born, fully formed, sometime during the Tudor age. I present examples of illnesses described both as self-obliterating and as self-invigorating; the moments of self-invigoration, I argue, are not evidence of a thoroughgoing subjectivity, but glimpses of a nascent, fragmentary and problematic selfhood, often kept forcibly in check by strict observance of religious routines and adherence to restrictive textual conventions for recording life events.
Those textual conventions, I claim, are best uncovered by attending – where possible – to the material texts of the various autobiographical sources I consult. From predominantly manuscript sources, I present examples of writers, for instance, using prescriptive methods such as that of financial accounting, or collecting and adapting non-original material to account for their illnesses, neither of which techniques suggests an introspective and sustained expression of selfhood in sickness.
I present chapters examining descriptions of personal illness in diaries, autobiography, letters and poetry, attending in each case to the ways in which illness and identity are written and rewritten. My evidence suggests that a sense of collectivity appears to dominate the life-writing of illness, one in which the subject is frequently defined by his or her participation in familial, social or religious networks, and in which material from other texts is collected and redeployed to account for events in an individual life. The textual frameworks examined in this thesis, I hold, are readily adaptable to accommodate and treat moments of personal crisis such as illness.
Role of Human and Mouse Rad54 in DNA Recombination and Repair
DNA double-strand breaks (DSBs), which can be induced by endogenously produced radicals or by ionizing radiation, are among the most genotoxic DNA lesions. Repair of DSBs is of cardinal importance for the prevention of chromosomal fragmentation, translocations, and deletions. The genetic instability resulting from persistent or incorrectly repaired DSBs can eventually result in cancer. Therefore, to understand the biological consequences of exposure to ionizing radiation, insight into the mechanisms of DSB repair in mammalian cells is essential. The pace of identification of mammalian DSB repair genes has rapidly increased over the last few years. However, the functional analysis of the encoded proteins and the analysis of the role of the different DSB repair mechanisms in mammals are far from complete. This thesis describes the generation and phenotypic characterization of cells and mice with a defect in one of the DSB repair genes, the RAD54 recombinational DNA repair gene. Furthermore, the initial characterization and cellular behavior of the mammalian Rad54 protein are described.

Chapter 1 outlines the current knowledge on the role and molecular mechanisms of the multiple pathways that have evolved for the repair of DSBs. Our main findings concerning mammalian Rad54 at the protein and cellular level are discussed and integrated into the emerging picture of DSB repair mechanisms in mammals. Chapters 2 and 3 describe the isolation of mammalian RAD54 genes and the genomic characterization of the mouse RAD54 gene. Chapters 4 and 5 describe the generation and phenotypic characterization of RAD54 knockout cells and mice. Chapters 6 and 7 describe the characterization of the in vitro activities of the purified human Rad54 protein and the cellular behavior of the mouse Rad54 protein upon induction of DNA damage.
Proceedings of the 22nd Conference on Formal Methods in Computer-Aided Design – FMCAD 2022
The Conference on Formal Methods in Computer-Aided Design (FMCAD) is an annual conference on the theory and applications of formal methods in hardware and system verification. FMCAD provides a leading forum for researchers in academia and industry to present and discuss groundbreaking methods, technologies, theoretical results, and tools for reasoning formally about computing systems. FMCAD covers formal aspects of computer-aided system design, including verification, specification, synthesis, and testing.