Efficient Computer Simulation of Polymer Conformation. I. Geometric Properties of the Hard-Sphere Model
A system of efficient computer programs has been developed for simulating the conformations of macromolecules. The conformation of an individual polymer is defined as a point in conformation space, whose mutually orthogonal axes represent the successive dihedral angles of the backbone chain. The statistical-mechanical average of any property is obtained as the usual configuration integral over this space. A Monte Carlo method for estimating averages is used because direct numerical integration is impossible; it corresponds to the execution of a Markoffian random walk of a representative point through the conformation space. Unlike many previous Monte Carlo studies of polymers, which sample conformation space indiscriminately, importance sampling increases efficiency because the selection of new polymers is biased to reflect their Boltzmann probabilities in the canonical ensemble, leading to a reduction of sampling variance and hence to greater accuracy in a given computing time. The simulation is illustrated in detail. Overall running time is proportional to n^(5/4), where n is the chain length. Results are presented for a hard-sphere linear polymer of n atoms with free dihedral rotation, for n = 20-298. The fraction of polymers accepted in the importance-sampling scheme, fA, is fit to a Fisher-Sykes attrition relation, giving an effective attrition constant of zero; fA is itself an upper bound on the partition function, Q, relative to the unrestricted walk. The mean-squared end-to-end distance and radius of gyration exhibit the expected power-law dependence on n, but with the exponent for the radius of gyration significantly greater than that for the end-to-end distance. The 90% confidence limits calculated for both exponents did not include either 6/5 or 4/3, the lattice and zero-order perturbation values, respectively. A self-correcting scheme for generating coordinates free of roundoff error is given in an Appendix.
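The paper's programs are described only in prose; as a rough illustration of the kind of hard-sphere chain Monte Carlo it outlines, the sketch below performs pivot-style trial rotations on a freely jointed bead chain and rejects any trial conformation containing a hard-sphere overlap, accumulating the mean-squared end-to-end distance and radius of gyration. The freely jointed geometry, the pivot move, and every parameter value are simplifying assumptions made for this sketch, not the paper's actual programs or its importance-sampling scheme.

```python
import numpy as np

def rotation_matrix(axis, theta):
    """Rotation by theta about a unit axis (Rodrigues' formula)."""
    ax, ay, az = axis
    K = np.array([[0.0, -az, ay], [az, 0.0, -ax], [-ay, ax, 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def has_overlap(coords, sigma=0.8):
    """Hard-sphere test: is any pair of non-bonded beads closer than sigma?"""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    iu = np.triu_indices(len(coords), k=2)  # skip bonded neighbours
    return np.any(d[iu] < sigma)

def simulate(n=50, steps=20000, seed=0):
    rng = np.random.default_rng(seed)
    # start from a straight chain of unit bonds, which has no overlaps
    coords = np.cumsum(np.tile([1.0, 0.0, 0.0], (n, 1)), axis=0)
    r2, rg2, accepted = [], [], 0
    for _ in range(steps):
        pivot = rng.integers(1, n - 1)
        axis = rng.normal(size=3)
        axis /= np.linalg.norm(axis)
        theta = rng.uniform(-np.pi, np.pi)
        trial = coords.copy()
        # rigidly rotate the tail of the chain about the pivot bead
        tail = trial[pivot + 1:] - trial[pivot]
        trial[pivot + 1:] = tail @ rotation_matrix(axis, theta).T + trial[pivot]
        if not has_overlap(trial):  # hard spheres: reject any overlapping state
            coords, accepted = trial, accepted + 1
        r_ee = coords[-1] - coords[0]
        r2.append(r_ee @ r_ee)
        rg2.append(np.mean(np.sum((coords - coords.mean(axis=0)) ** 2, axis=1)))
    return np.mean(r2), np.mean(rg2), accepted / steps

if __name__ == "__main__":
    mean_r2, mean_rg2, f_accept = simulate()
    print(f"<R^2> = {mean_r2:.1f}  <Rg^2> = {mean_rg2:.1f}  acceptance = {f_accept:.2f}")
```

For a hard-sphere chain every non-overlapping conformation carries the same Boltzmann weight, so simply rejecting overlapping trial states is already a valid Metropolis acceptance rule; the importance sampling described in the paper additionally biases how new conformations are generated so that less computing time is wasted on states destined to be rejected.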
Dynamic covalent chemistry in polymer networks : a mechanistic perspective
The incorporation of dynamic covalent linkages within and between polymer chains brings new properties to classical thermosetting polymer formulations, in particular in terms of thermal response, processing options and intrinsic recyclability. Thus, in recent years, there has been rapidly growing interest in the design and synthesis of monomers and cross-linkers that can be used as robust but at the same time reactive organic building blocks for dynamic polymer networks. In this perspective, a selection of such chemistries is highlighted, with a particular focus on the reaction mechanisms of molecular network rearrangements and on how various mechanistic profiles can be related to the mechanical and physicochemical properties of polymer materials, in particular in relation to vitrimers, the recently defined third category of polymer materials. The recent advances in this area are expected not only to help direct promising emerging polymer applications, but also to point towards the need for a better fundamental understanding of chemical reactivity within a macromolecular context.
Modeling, Analysis, and Optimization Issues for Large Space Structures
Topics concerning the modeling, analysis, and optimization of large space structures are discussed, including structure-control interaction, structural and structural-dynamics modeling, thermal analysis, testing, and design.
Virtual Reality Games for Motor Rehabilitation
This paper presents a fuzzy-logic-based method to track user satisfaction without the need for devices that monitor users' physiological conditions. User satisfaction is key to any product's acceptance; computer applications and video games offer a unique opportunity to tailor the environment to each user's needs. We have implemented a non-adaptive fuzzy logic model of emotion, based on the emotional component of the Fuzzy Logic Adaptive Model of Emotion (FLAME) proposed by El-Nasr, to estimate player emotion in Unreal Tournament 2004. In this paper we describe the implementation of this system and present the results of one of several play tests. Our research contradicts the current literature, which suggests that physiological measurements are needed: we show that it is possible to estimate user emotion with a software-only method.
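As a generic illustration of the approach (estimating player emotion from in-game events with fuzzy rules instead of physiological sensors), the toy sketch below fuzzifies two hypothetical gameplay signals with triangular membership functions and combines a few Mamdani-style rules into a single frustration estimate. The signals, membership ranges, and rules are invented for illustration; they are not the paper's FLAME-based model or its Unreal Tournament 2004 integration.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def estimate_frustration(damage_taken, kills_per_minute):
    """Toy Mamdani-style inference: heavy damage with little success reads as frustration."""
    # fuzzify the inputs (the 0-100 damage and 0-5 kills/min ranges are arbitrary)
    dmg_high = tri(damage_taken, 40, 100, 160)
    dmg_low = tri(damage_taken, -60, 0, 60)
    kpm_low = tri(kills_per_minute, -2, 0, 2)
    kpm_high = tri(kills_per_minute, 1, 3, 5)

    # rule strengths (min acts as fuzzy AND)
    frustrated = min(dmg_high, kpm_low)   # taking damage, not scoring
    excited = min(dmg_high, kpm_high)     # intense but successful fight
    bored = min(dmg_low, kpm_low)         # nothing happening

    # defuzzify as a weighted average of representative frustration levels
    total = frustrated + excited + bored
    if total == 0:
        return 0.0
    return (1.0 * frustrated + 0.2 * excited + 0.5 * bored) / total

print(estimate_frustration(damage_taken=80, kills_per_minute=0.5))  # ~1.0: likely frustrated
```

A full system would update such estimates continuously from the game's event stream and feed them back into difficulty or content adaptation, which is the kind of per-user tailoring the paper motivates.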
Algorithm Development and VLSI Implementation of Energy Efficient Decoders of Polar Codes
With their low error-floor performance, polar codes have attracted significant attention as a potential standard error-correction code (ECC) for future communication and data storage. However, the VLSI implementation complexity of polar code decoders is largely driven by the inherently serial nature of their decoding. This dissertation is dedicated to presenting optimal decoder architectures for polar codes, and it addresses several structural properties of polar codes and key properties of the decoding algorithms that have not been dealt with in prior research. The underlying concept of the proposed architectures is a paradigm that simplifies and schedules the computations such that hardware is simplified, latency is minimized, and bandwidth is maximized.
In pursuit of the above, throughput-centric successive cancellation (TCSC) and overlapping-path list successive cancellation (OPLSC) VLSI architectures, as well as express journey BP (XJBP) decoders, are presented for polar codes.
An arbitrary polar code can be decomposed into a set of shorter polar codes with special characteristics; these shorter codes are referred to as constituent polar codes. By exploiting the homogeneity among the decoding processes of different constituent polar codes, TCSC reduces the decoding latency of the SC decoder by 60% for codes of length n = 1024. The error-correction performance of SC decoding is inferior to that of list successive cancellation (LSC) decoding. The LSC decoding algorithm delivers the most reliable decoding results; however, it consumes the most hardware resources and decoding cycles. Instead of using multiple instances of decoding cores as in conventional LSC decoders, a single SC decoder is used in the OPLSC architecture, and the computations of each path in the LSC are arranged to occupy the decoder hardware stages serially in a streamlined fashion. This yields a significant reduction in hardware complexity: the OPLSC decoder achieves about a 1.4x improvement in hardware efficiency over traditional LSC decoders. Hardware-efficient VLSI architectures for the TCSC and OPLSC polar code decoders are also introduced.
Decoders based on SC or LSC algorithms suffer from high latency and limited throughput due to their serial decoding nature. An alternative approach to decoding polar codes is the belief propagation (BP) algorithm, in which beliefs are propagated and refined on a graph, usually referred to as a factor graph; because this can be done in parallel, BP decoding achieves much higher throughput. The XJBP decoder facilitates belief propagation by exploiting the specific constituent codes that exist in the conventional factor graph, which results in an express journey (XJ) decoder. Compared with the conventional BP decoding algorithm for polar codes, the proposed decoder reduces the computational complexity by about 40.6%, which enables an energy-efficient hardware implementation. To further explore the hardware consumption of the proposed XJBP decoder, the computation scheduling is modeled and analyzed in this dissertation, and optimal scheduling plans are developed for different hardware scenarios. A novel memory-distributed micro-architecture of the XJBP decoder is proposed and analyzed to solve the potential memory-access problems of the proposed scheduling strategy. Register-transfer level (RTL) models of the XJBP decoder are set up for comparison with other state-of-the-art BP decoders; the results show that the power efficiency of BP decoders is improved by about a factor of three.
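The dissertation targets hardware architectures, but a short software reference for the successive cancellation decoding that TCSC accelerates may make the recursive f/g structure concrete. The sketch below is a textbook LLR-domain SC decoder using the min-sum f-update and the standard g-update; the code length, the frozen set, and the noiseless test are illustrative assumptions, not the dissertation's design.

```python
import numpy as np

def polar_encode(u):
    """x = u * F^{(x)n} over GF(2), computed with the recursive butterfly."""
    if len(u) == 1:
        return u
    half = len(u) // 2
    left, right = polar_encode(u[:half]), polar_encode(u[half:])
    return np.concatenate([left ^ right, right])

def sc_decode(llr, frozen):
    """Recursive successive cancellation; returns (decoded u, partial sums)."""
    if len(llr) == 1:
        bit = 0 if (frozen[0] or llr[0] >= 0) else 1
        return np.array([bit]), np.array([bit])
    half = len(llr) // 2
    a, b = llr[:half], llr[half:]
    # f-node: min-sum approximation of the check-node (boxplus) update
    llr_left = np.sign(a) * np.sign(b) * np.minimum(np.abs(a), np.abs(b))
    u_left, x_left = sc_decode(llr_left, frozen[:half])
    # g-node: the left partial sums are now known and can be cancelled out
    llr_right = b + (1 - 2 * x_left) * a
    u_right, x_right = sc_decode(llr_right, frozen[half:])
    return np.concatenate([u_left, u_right]), np.concatenate([x_left ^ x_right, x_right])

# toy (8, 4) code; this frozen set is illustrative, not an optimized construction
frozen = np.array([1, 1, 1, 0, 1, 0, 0, 0], dtype=bool)
u = np.zeros(8, dtype=int)
u[~frozen] = [1, 0, 1, 1]                 # information bits
llr = (1 - 2 * polar_encode(u)) * 4.0     # noiseless BPSK log-likelihood ratios
u_hat, _ = sc_decode(llr, frozen)
assert np.array_equal(u_hat[~frozen], u[~frozen])
```

The serial dependence is visible in the recursion: the right half cannot be processed until the left half's decisions (partial sums) are available, which is precisely the latency bottleneck that the TCSC, OPLSC and XJBP architectures attack from different directions.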
The institutional basis of efficiency in resource-rich countries
The “resource curse” is a familiar and recurring theme in development economics. But does resource abundance also lead to resource inefficiency? And if so, what can contribute to better usage of a country's resources for development? This paper examines 130 countries from 1970 to 2011, both resource-abundant and resource-scarce, and concludes that, on average, resource-abundant countries utilize resources less efficiently. Examining the institutional factors that may explain this disparity, we find that several key institutions are necessary for increasing resource-use efficiency, with private property showing the largest economic and statistical significance. By improving basic institutions, resource-rich countries can thus achieve more environmentally sustainable growth.
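As a rough sketch of the kind of cross-country panel estimation the abstract describes, the snippet below fits a fixed-effects regression of a resource-use-efficiency measure on resource rents, an institutional-quality proxy, and their interaction. The variable names, the specification, and the synthetic placeholder data exist only so the example runs; they are assumptions and do not reproduce the paper's dataset, variables, or results.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# synthetic placeholder panel (the paper's actual data cover 130 countries, 1970-2011)
rng = np.random.default_rng(0)
countries, years = [f"c{i}" for i in range(20)], range(1990, 2011)
rows = [(c, y, rng.uniform(0, 50), rng.uniform(0, 1)) for c in countries for y in years]
df = pd.DataFrame(rows, columns=["country", "year", "resource_rents", "property_rights"])
# placeholder outcome: efficiency falls with rents unless property rights are strong
df["efficiency"] = (
    60 - 0.4 * df.resource_rents
    + 0.5 * df.resource_rents * df.property_rights
    + rng.normal(scale=5, size=len(df))
)

# country and year fixed effects absorb time-invariant and global shocks;
# the interaction term asks whether institutions offset the resource penalty
model = smf.ols(
    "efficiency ~ resource_rents * property_rights + C(country) + C(year)", data=df
).fit()
print(model.params[["resource_rents", "resource_rents:property_rights"]])
```

A positive interaction coefficient in a specification of this form would correspond to the paper's finding that stronger private-property institutions are associated with more efficient resource use.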
Roadmap on signal processing for next generation measurement systems
Signal processing is a fundamental component of almost any sensor-enabled system, with a wide range of applications across different scientific disciplines. Time series data, images, and video sequences comprise representative forms of signals that can be enhanced and analysed for information extraction and quantification. Recent advances in artificial intelligence and machine learning are shifting research attention towards intelligent, data-driven signal processing. This roadmap presents a critical overview of state-of-the-art methods and applications, aiming to highlight future challenges and research opportunities for next-generation measurement systems. It covers a broad spectrum of topics ranging from basic to industrial research, organized in concise thematic sections that reflect the trends and the impacts of current and future developments per research field. Furthermore, it offers guidance to researchers and funding agencies in identifying new prospects.
A built-in self-test technique for high speed analog-to-digital converters
Fundação para a Ciência e a Tecnologia (FCT) - PhD grant (SFRH/BD/62568/2009).