Homogeneous and Scalable Gene Expression Regulatory Networks with Random Layouts of Switching Parameters
We consider a model of large regulatory gene expression networks in which the thresholds activating the sigmoidal interactions between genes, and the signs of these interactions, are shuffled randomly. Such an approach allows for a qualitative understanding of network dynamics in the absence of empirical data on the large genomes of living organisms. The local dynamics of network nodes exhibit multistationarity and oscillations and depend crucially upon the global topology of a "maximal" graph (comprising all possible interactions between genes in the network). The long-time behavior observed in networks defined on homogeneous "maximal" graphs is governed by the fraction of positive interactions allowed between genes. There exists a critical value of this fraction separating two regimes: in one, oscillations persist in the system; in the other, it tends to a fixed point (whose position in phase space is determined by the initial conditions and the particular layout of switching parameters). In networks defined on inhomogeneous directed graphs depleted in cycles, no oscillations arise in the system even when negative interactions between genes are present in abundance. For such networks, bidirectional edges (if present) essentially influence the dynamics. In particular, if some edges of the "maximal" graph are bidirectional, oscillations can arise and persist in the system even at an arbitrarily low rate of negative interactions between genes. The local dynamics observed in inhomogeneous scalable regulatory networks are less sensitive to the choice of initial conditions. Scale-free networks demonstrate high error tolerance.
Comment: LaTeX, 30 pages, 20 pictures
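The model described above can be sketched numerically. The following is a minimal, hedged illustration only: it assumes Hopfield-like continuous dynamics with randomly shuffled interaction signs and thresholds, with the fraction of positive interactions exposed as a parameter `eta`; the paper's exact equations and symbol names are not given in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n=20, eta=0.5, steps=2000, dt=0.05, steepness=10.0):
    """Euler-integrate a toy regulatory network (assumed form):
    dx_i/dt = -x_i + (1/n) * sum_j s_ij * sigmoid(x_j - theta_ij),
    where the signs s_ij (+1 with probability `eta`, else -1) and the
    activation thresholds theta_ij are laid out at random."""
    signs = np.where(rng.random((n, n)) < eta, 1.0, -1.0)  # random signs
    theta = rng.uniform(-1.0, 1.0, (n, n))                 # random thresholds
    x = rng.uniform(-1.0, 1.0, n)                          # random initial state
    traj = np.empty((steps, n))
    for t in range(steps):
        act = 1.0 / (1.0 + np.exp(-steepness * (x[None, :] - theta)))
        x = x + dt * (-x + (signs * act).mean(axis=1))
        traj[t] = x
    return traj

# Crude diagnostic: late-time variance per node distinguishes persistent
# oscillations from settling onto a fixed point.
late = simulate(eta=0.2)[-500:]
oscillation_score = late.std(axis=0).max()
```

Sweeping `eta` and inspecting `oscillation_score` would reproduce, qualitatively, the transition between the oscillatory and fixed-point regimes the abstract describes.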
System-on-chip Computing and Interconnection Architectures for Telecommunications and Signal Processing
This dissertation proposes novel architectures and design techniques targeting SoC building blocks for telecommunications and signal processing applications.
Hardware implementation of Low-Density Parity-Check decoders is approached at both the algorithmic and the architectural level. Low-Density Parity-Check codes are a promising coding scheme for future communication standards due to their outstanding error correction performance.
This work proposes a methodology for analyzing the effects of finite-precision arithmetic on error correction performance and hardware complexity. The methodology is employed throughout for co-designing the decoder. First, a low-complexity check node based on the P-output decoding principle is designed and characterized on a CMOS standard-cell library. Results demonstrate an implementation loss below 0.2 dB down to a BER of 10^{-8} and a complexity saving of up to 59% with respect to other works in the recent literature. High-throughput and low-latency issues are addressed with modified single-phase decoding schedules. A new "memory-aware" schedule is proposed, requiring as little as 20% of the memory of the traditional two-phase flooding decoding. Additionally, throughput is doubled and logic complexity is reduced by 12%. These advantages are traded off against error correction performance, making the solution attractive only for long codes, such as those adopted in the DVB-S2 standard. The "layered decoding" principle is extended to codes not specifically conceived for this technique. The proposed architectures exhibit complexity savings on the order of 40% for both area and power consumption figures, while the implementation loss is smaller than 0.05 dB.
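The abstract does not detail the P-output check node, so as a hedged stand-in the sketch below shows a standard min-sum check node update combined with a uniform saturating fixed-point quantizer, the kind of finite-precision model such a complexity/performance co-design would analyze. Function names and the quantizer parameters are assumptions, not the thesis's architecture.

```python
import numpy as np

def quantize(x, n_bits=6, step=0.25):
    """Uniform saturating fixed-point quantizer (assumed precision model)."""
    lim = (2 ** (n_bits - 1) - 1) * step
    return np.clip(np.round(x / step) * step, -lim, lim)

def check_node(llrs, n_bits=6):
    """Min-sum check node: each outgoing message carries the product of the
    *other* inputs' signs times the minimum of the *other* inputs'
    magnitudes, then passes through the fixed-point quantizer."""
    llrs = np.asarray(llrs, dtype=float)
    signs = np.sign(llrs)
    total_sign = np.prod(signs)
    mags = np.abs(llrs)
    order = np.argsort(mags)
    m1, m2 = mags[order[0]], mags[order[1]]        # two smallest magnitudes
    out = np.where(np.arange(len(llrs)) == order[0], m2, m1)
    return quantize(out * total_sign * signs, n_bits)  # exclude own sign

msgs = check_node([1.3, -0.4, 2.1, 0.9])
```

Running the decoder with different `n_bits`/`step` choices and measuring BER against a floating-point reference is the essence of the finite-precision analysis the abstract describes.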
Most modern communication standards employ Orthogonal Frequency Division Multiplexing as part of their physical layer. The core of OFDM is the Fast Fourier Transform and its inverse, in charge of symbol (de)modulation. Requirements on throughput and energy efficiency call for hardware FFT implementation, while the ubiquity of the FFT suggests the design of parametric, re-configurable and re-usable IP hardware macrocells. In this context, this thesis describes an FFT/IFFT core compiler particularly suited for the implementation of OFDM communication systems. The tool employs an accuracy-driven configuration engine which automatically profiles the internal arithmetic and generates a core with minimum operand bit-widths and thus minimum circuit complexity. The engine performs a closed-loop optimization over three different internal arithmetic models (fixed-point, block floating-point and convergent block floating-point), using the numerical accuracy budget given by the user as a reference point. The flexibility and re-usability of the proposed macrocell are illustrated through several case studies which encompass all current state-of-the-art OFDM communication standards (WLAN, WMAN, xDSL, DVB-T/H, DAB and UWB). Implementation results are presented for two deep sub-micron standard-cell libraries (65 and 90 nm) and commercially available FPGA devices. Compared with other FFT core compilers, the proposed environment produces macrocells with lower circuit complexity and the same system-level performance (throughput, transform size and numerical accuracy).
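The closed-loop accuracy-driven idea can be sketched in miniature. The example below is an assumption-laden simplification: it models only fractional-bit quantization of the FFT inputs (the real engine profiles internal arithmetic across three number formats), and all function names are hypothetical.

```python
import numpy as np

def sqnr_db(ref, test):
    """Signal-to-quantization-noise ratio in dB against a float reference."""
    err = ref - test
    return 10 * np.log10(np.sum(np.abs(ref) ** 2) / np.sum(np.abs(err) ** 2))

def quantize(x, bits):
    """Round real and imaginary parts to `bits` fractional bits
    (simplified fixed-point model, an assumption)."""
    scale = 2.0 ** bits
    return np.round(x * scale) / scale

def min_bits_for_budget(signal, budget_db=60.0, max_bits=24):
    """Closed-loop search: smallest operand width whose quantized FFT
    meets the user's numerical accuracy budget."""
    ref = np.fft.fft(signal)
    for bits in range(4, max_bits + 1):
        approx = np.fft.fft(quantize(signal, bits))  # quantize inputs only
        if sqnr_db(ref, approx) >= budget_db:
            return bits
    return max_bits

rng = np.random.default_rng(1)
x = rng.standard_normal(256) + 1j * rng.standard_normal(256)
bits = min_bits_for_budget(x, budget_db=60.0)
```

A looser accuracy budget yields a narrower datapath, which is exactly the accuracy-versus-complexity trade-off the compiler automates.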
The final part of this dissertation focuses on the Network-on-Chip design paradigm, whose goal is building scalable communication infrastructures connecting hundreds of cores. A low-complexity link architecture for mesochronous on-chip communication is discussed. The link enables relaxed skew constraints in clock tree synthesis, frequency speed-up, power consumption reduction and faster back-end turnarounds. The proposed architecture reaches a maximum clock frequency of 1 GHz on a 65 nm low-leakage CMOS standard-cell library. In a complex test case with a full-blown NoC infrastructure, the link overhead is only 3% of chip area and 0.5% of leakage power consumption.
Finally, a new methodology, named metacoding, is proposed. Metacoding generates correct-by-construction, technology-independent RTL codebases for NoC building blocks. The RTL coding phase is abstracted and modeled with an Object-Oriented framework, integrated within a commercial tool for IP packaging (the Synopsys CoreTools suite). Compared with traditional coding styles based on pre-processor directives, metacoding produces 65% smaller codebases and reduces the number of configurations to verify by up to three orders of magnitude.
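The metacoding idea of generating configuration-specific RTL from an object-oriented description, instead of maintaining one file full of pre-processor directives, can be illustrated with a toy generator. This is a hypothetical stand-in API; the thesis's framework and its CoreTools integration are not modeled here.

```python
class Module:
    """Tiny stand-in for an object-oriented RTL metacoding framework:
    a module is built programmatically, then emitted as Verilog text."""

    def __init__(self, name):
        self.name, self.ports, self.body = name, [], []

    def port(self, direction, name, width=1):
        w = f"[{width - 1}:0] " if width > 1 else ""
        self.ports.append(f"{direction} {w}{name}")
        return self

    def assign(self, lhs, rhs):
        self.body.append(f"assign {lhs} = {rhs};")
        return self

    def emit(self):
        header = f"module {self.name} (\n  " + ",\n  ".join(self.ports) + "\n);"
        return "\n".join([header, *("  " + l for l in self.body), "endmodule"])

# Emit one concrete configuration of a (hypothetical) NoC link wrapper:
# only the requested ports and widths appear in the generated codebase.
rtl = (Module("noc_link")
       .port("input", "clk")
       .port("input", "data_in", width=32)
       .port("output", "data_out", width=32)
       .assign("data_out", "data_in")
       .emit())
```

Because each configuration is generated rather than conditionally compiled, only the configurations actually instantiated need to be verified.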
Optimized multi-objective design of herringbone micromixers
This paper was presented at the 2nd Micro and Nano Flows Conference (MNF2009), which was held at Brunel University, West London, UK. The conference was organised by Brunel University and supported by the Institution of Mechanical Engineers, IPEM, the Italian Union of Thermofluid Dynamics, the Process Intensification Network, HEXAG - the Heat Exchange Action Group and the Institute of Mathematics and its Applications.

A design method which systematically integrates Computational Fluid Dynamics (CFD) with an optimization scheme based on the Design of Experiments (DOE), Function Approximation (FA) and Multi-Objective Genetic Algorithm (MOGA) techniques has been applied to the shape optimization of the staggered herringbone micromixer (SHM) at different Reynolds numbers. To quantify the mixing intensity in the mixer, a mixing index is defined on the basis of the intensity of segregation of the mass concentration at the outlet section. Four geometric parameters, i.e., the aspect ratio of the mixing channel, the ratio of groove depth to channel height, the ratio of groove width to groove pitch and the asymmetry factor (offset) of the groove, are the design variables selected for optimization. The mixing index at the outlet section and the pressure drop in the mixing channel are the performance criteria used as objective functions. The Pareto front with the optimum trade-offs, maximum mixing index with minimum pressure drop, is obtained. Experiments for qualitative and quantitative validation have been implemented.

This study was supported by the Dorothy Hodgkin Postgraduate Award (DHPA) of the Engineering and Physical Sciences Research Council (EPSRC) of the United Kingdom and Ebara Research Co. Ltd. of Japan.
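A mixing index based on the intensity of segregation can be computed as sketched below. The normalization shown (variance of the outlet concentration field relative to the fully segregated variance) is a common textbook definition and an assumption here; the paper's exact formula is not given in the abstract.

```python
import numpy as np

def mixing_index(c, c_mean=0.5):
    """Mixing index from the intensity of segregation at the outlet:
    M = 1 - sqrt(var(c) / var_max), with var_max = c_mean * (1 - c_mean)
    the variance of a fully segregated stream (assumed normalization).
    M -> 0 for an unmixed stream, M -> 1 for perfect mixing."""
    var = np.mean((np.asarray(c, dtype=float) - c_mean) ** 2)
    var_max = c_mean * (1.0 - c_mean)
    return 1.0 - np.sqrt(var / var_max)

segregated = [0.0] * 50 + [1.0] * 50   # two unmixed streams side by side
mixed = [0.5] * 100                    # perfectly homogenized outlet
m_seg = mixing_index(segregated)
m_mix = mixing_index(mixed)
```

In the optimization loop, this scalar (evaluated on the CFD outlet section) and the channel pressure drop serve as the two objectives handed to the MOGA.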
Scalable Approach to Uncertainty Quantification and Robust Design of Interconnected Dynamical Systems
The development of robust dynamical systems and networks, such as autonomous aircraft systems capable of accomplishing complex missions, faces challenges due to dynamically evolving uncertainties arising from model uncertainties, the necessity to operate in hostile, cluttered urban environments, and the distributed and dynamic nature of communication and computation resources. Model-based robust design is difficult because of the complexity of hybrid dynamic models, which include continuous vehicle dynamics and discrete models of computation and communication, and because of the size of the problem. We overview recent advances in methodology and tools to model, analyze, and design robust autonomous aerospace systems operating in uncertain environments, with emphasis on efficient uncertainty quantification and robust design, using case studies of missions that include model-based target tracking and search, and trajectory planning in uncertain urban environments. To show that the methodology is generally applicable to uncertain dynamical systems, we also show examples of applying the new methods to efficient uncertainty quantification of energy usage in buildings, and to stability assessment of interconnected power networks.
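The building energy example admits a simple illustration of uncertainty quantification by forward propagation. The sketch below uses plain Monte Carlo sampling through a toy heating-load model; the model, the input distributions, and all names are assumptions, and the paper's own methods are more efficient than brute-force sampling.

```python
import numpy as np

rng = np.random.default_rng(2)

def daily_energy(u_value, setpoint, outdoor):
    """Toy heating-energy model (hypothetical): daily load proportional
    to the envelope U-value times the indoor-outdoor temperature gap."""
    return np.maximum(setpoint - outdoor, 0.0) * u_value * 24.0  # kWh/day

# Uncertain inputs (assumed distributions): envelope U-value and
# daily mean outdoor temperature.
n = 10_000
u = rng.normal(0.35, 0.05, n)      # W/(m^2 K)
t_out = rng.normal(5.0, 3.0, n)    # deg C
samples = daily_energy(u, 20.0, t_out)

mean = samples.mean()
lo, hi = np.percentile(samples, [5, 95])  # 90% uncertainty interval
```

The output is not a single energy figure but a distribution, from which robust-design quantities (percentiles, exceedance probabilities) follow directly.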
A survey on OFDM-based elastic core optical networking
Orthogonal frequency-division multiplexing (OFDM) is a modulation technology that has been widely adopted in many new and emerging broadband wireless and wireline communication systems. Due to its capability to transmit a high-speed data stream using multiple spectrally overlapped lower-speed subcarriers, OFDM technology offers the advantages of high spectral efficiency, robustness against inter-carrier and inter-symbol interference, adaptability to severe channel conditions, etc. In recent years, there have been intensive studies of optical OFDM (O-OFDM) transmission technologies, and it is considered a promising technology for future ultra-high-speed optical transmission. Based on O-OFDM technology, a novel elastic optical network architecture with immense flexibility and scalability in spectrum allocation and data rate accommodation could be built to support diverse services and the rapid growth of Internet traffic in the future. In this paper, we present a comprehensive survey of OFDM-based elastic optical network technologies, including the basic principles of OFDM, O-OFDM technologies, the architectures of OFDM-based elastic core optical networks, and related key enabling technologies. The main advantages and open issues of OFDM-based elastic core optical networks that remain under research are also discussed.
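The basic OFDM principle the survey builds on, spectrally overlapped subcarriers that stay orthogonal over one symbol period, reduces to an IFFT at the transmitter and an FFT at the receiver. The following is a minimal baseband sketch over an ideal channel (parameter choices such as 64 subcarriers and a 16-sample cyclic prefix are illustrative assumptions).

```python
import numpy as np

rng = np.random.default_rng(3)

# Map random bits to QPSK symbols, one symbol per subcarrier.
n_sc = 64
bits = rng.integers(0, 2, (n_sc, 2))
symbols = (2 * bits[:, 0] - 1 + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)

# Modulation: the IFFT superposes all subcarriers into one time-domain
# symbol; overlapped spectra remain separable thanks to orthogonality.
tx = np.fft.ifft(symbols)
cp = tx[-16:]                       # cyclic prefix guards against ISI
frame = np.concatenate([cp, tx])

# Receiver over an ideal channel: strip the prefix, demodulate via FFT.
rx = frame[16:]
recovered = np.fft.fft(rx)
```

Over an ideal channel the recovered subcarrier symbols match the transmitted ones exactly; dispersive channels reduce to one complex gain per subcarrier, which is what makes per-subcarrier elastic spectrum allocation natural.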
Contextual-based Image Inpainting: Infer, Match, and Translate
We study the task of image inpainting, which is to fill in the missing region of an incomplete image with plausible contents. To this end, we propose a learning-based approach to generate visually coherent completions given a high-resolution image with missing components. In order to overcome the difficulty of directly learning the distribution of high-dimensional image data, we divide the task into two separate steps, inference and translation, and model each step with a deep neural network. We also use simple heuristics to guide the propagation of local textures from the boundary to the hole. We show that, by using such techniques, inpainting reduces to the problem of learning two image-feature translation functions in a much smaller space, which are hence easier to train. We evaluate our method on several public datasets and show that we generate results of better visual quality than previous state-of-the-art methods.
Comment: ECCV 2018 camera ready
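The idea of propagating local content from the boundary into the hole can be illustrated with a toy heuristic. The onion-peel fill below is a generic stand-in, not the paper's texture-propagation scheme: unknown pixels are filled ring by ring from their known neighbours.

```python
import numpy as np

def onion_peel_fill(img, mask):
    """Toy boundary-to-hole propagation (illustrative stand-in): unknown
    pixels (mask == True) repeatedly take the mean of their already-known
    4-neighbours, so the fill grows inward one ring per iteration."""
    img = img.astype(float).copy()
    known = ~mask
    h, w = img.shape
    while not known.all():
        filled = known.copy()
        for y in range(h):
            for x in range(w):
                if known[y, x]:
                    continue
                vals = [img[ny, nx]
                        for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1))
                        if 0 <= ny < h and 0 <= nx < w and known[ny, nx]]
                if vals:                       # at least one known neighbour
                    img[y, x] = sum(vals) / len(vals)
                    filled[y, x] = True
        known = filled
    return img

# A flat canvas with a square hole: propagation restores the flat value.
canvas = np.ones((8, 8))
hole = np.zeros((8, 8), dtype=bool)
hole[2:6, 2:6] = True
canvas[hole] = 0.0
out = onion_peel_fill(canvas, hole)
```

Heuristics of this flavor handle low-frequency structure cheaply, leaving the two learned translation networks to supply the semantically plausible content.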