Memory Footprint Reduction for Operating System Kernels
Embedded systems often have only a limited amount of memory available. Much attention is therefore devoted to producing compact programs for these systems, and a variety of techniques have been developed that automatically reduce the memory footprint of programs. Until now, these techniques have focused mainly on the application software running on the system, while the operating system was overlooked. This dissertation describes a number of techniques that make it possible to substantially reduce the memory footprint of an operating system kernel in an automated way. First, compaction transformations are applied at link time. If the hardware and software that make up the system are known, further reductions can be obtained: the kernel is specialized for a particular hardware-software combination. Superfluous functionality is detected and removed from the kernel, while the remaining functionality is adapted to the specific usage patterns that can be derived from the hardware and software. Finally, techniques are proposed that make it possible to remove rarely or never executed code (for example, code that handles only rarely occurring error conditions) from memory. This code is then loaded only at the moment it is actually needed. For our test system, the combined techniques reduce the memory footprint of a Linux 2.4 kernel by more than 48%.
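The specialization step described above detects superfluous functionality and removes it from the kernel. A minimal sketch of the underlying idea, with a made-up call graph whose function names are hypothetical and not taken from the thesis: functions unreachable from the kernel's entry points are dead and can be dropped from the image at link time.

```python
# Sketch of link-time dead-function elimination: functions not reachable
# from the kernel's entry points can be removed from the image.
# The call graph below is an invented example, not data from the thesis.

def reachable(call_graph, entry_points):
    """Return the set of functions reachable from the entry points."""
    seen = set()
    stack = list(entry_points)
    while stack:
        fn = stack.pop()
        if fn in seen:
            continue
        seen.add(fn)
        stack.extend(call_graph.get(fn, []))
    return seen

call_graph = {
    "start_kernel": ["init_mm", "sched_init"],
    "init_mm": ["alloc_pages"],
    "sched_init": [],
    "alloc_pages": [],
    "floppy_init": ["floppy_probe"],   # no floppy hardware in this system
    "floppy_probe": [],
}

live = reachable(call_graph, ["start_kernel"])
dead = set(call_graph) - live
```

In a real toolchain this analysis runs on the linked binary's symbol and relocation information rather than on an explicit dictionary, but the reachability computation is the same.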
Search engine optimisation using past queries
World Wide Web search engines process millions of queries per day from users all over the world. Efficient query evaluation is achieved through the use of an inverted index, where, for each word in the collection, the index maintains a list of the documents in which the word occurs. Query processing may also require access to document-specific statistics, such as document length; word statistics, such as the number of unique documents in which a word occurs; and collection-specific statistics, such as the number of documents in the collection. The index maintains individual data structures for each of these sources of information, and repeatedly accesses each to process a query. A by-product of a web search engine is a list of all queries entered into the engine: a query log. Analyses of query logs have shown repetition of query terms in the requests made to the search system. In this work we explore techniques that take advantage of the repetition of user queries to improve the accuracy or efficiency of text search. We introduce an index organisation scheme that favours those documents that are most frequently requested by users and show that, in combination with early termination heuristics, query processing time can be dramatically reduced without reducing the accuracy of the search results. We examine the stability of such an ordering and show that an index based on as little as 100,000 training queries can support at least 20 million requests. We show the correlation between frequently accessed documents and relevance, and attempt to exploit the demonstrated relationship to improve search effectiveness. Finally, we deconstruct the search process to show that query-time redundancy can be exploited at various levels of the search process. We develop a model that illustrates the improvements that can be achieved in query processing time by caching different components of a search system. This model is then validated by simulation using a document collection and query log. Results on our test data show that a well-designed cache can reduce disk activity by more than 30%, with a cache that is one tenth the size of the collection.
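The frequency-ordered index organisation can be sketched as follows. The class name, toy documents, and access counts are illustrative assumptions, not the thesis' actual system; real engines combine this ordering with more sophisticated early-termination heuristics than a simple top-k cut-off.

```python
# Illustrative sketch: an inverted index whose postings lists are ordered
# by how often users retrieved each document (taken from a query log), so
# query evaluation can terminate early after examining enough candidates.

from collections import defaultdict

class FrequencyOrderedIndex:
    def __init__(self, docs, access_counts):
        # access_counts: doc_id -> retrieval frequency from the query log
        self.postings = defaultdict(list)
        for doc_id, text in docs.items():
            for word in set(text.split()):
                self.postings[word].append(doc_id)
        # Most frequently accessed documents come first in every list.
        for word in self.postings:
            self.postings[word].sort(key=lambda d: -access_counts.get(d, 0))

    def search(self, word, k):
        # Early termination: only the first k postings are examined.
        return self.postings.get(word, [])[:k]

docs = {
    "d1": "inverted index search",
    "d2": "query log analysis",
    "d3": "fast index search",
}
access_counts = {"d1": 5, "d2": 50, "d3": 100}
index = FrequencyOrderedIndex(docs, access_counts)
```

Because popular documents sit at the head of each postings list, truncating the scan at k entries tends to keep exactly the documents users request most often.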
Wavelet-DCT Based Image Coder for Video Coding Applications
This project concerns the implementation of a Wavelet-DCT intra-frame coder for video coding applications. Wavelet-DCT is a novel algorithm that uses the forward Discrete Wavelet Transform (DWT) to compute the DCT. It has been shown that the algorithm achieves better compression performance for difference images than the conventional DCT. This is possible because the algorithm allows insignificant DWT coefficients to be discarded (more popularly known as thresholding the DWT coefficients) while computing the DCT. In video coder applications, Wavelet-DCT is therefore capable of achieving greater compression. This project is a feasibility study of the performance of Wavelet-DCT in video coder applications. A SIMULINK model of a conventional intra-frame coder was developed and tested, achieving a very significant reduction in data bits. The conventional DCT block was then replaced with a Wavelet-DCT block. In the study, an experiment was conducted on a difference image with the conventional intra-frame coder on the one hand, and on the same difference image with the Wavelet-DCT based intra-frame coder on the other. The thresholding algorithm was used to remove some of the insignificant DWT coefficients from the difference image. The main objective is to achieve better compression capability for difference images within video coding applications. The project's experimental results support our claim that implementing Wavelet-DCT in an intra-frame coder within a video coding application could improve the system's performance, giving a greater compression ratio at the same Mean Squared Error.
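The thresholding idea can be illustrated on a one-level 1-D Haar transform. This is a simplification for brevity: the actual coder works on 2-D difference images and folds the thresholding into the DCT computation itself, whereas the sketch below only shows why difference images yield many discardable detail coefficients.

```python
# Minimal 1-D sketch of the thresholding idea (not the actual Wavelet-DCT
# algorithm): a one-level Haar DWT, after which detail coefficients with
# magnitude below a threshold are discarded before further coding.

import math

def haar_dwt(signal):
    """One-level Haar transform: (approximation, detail) coefficients."""
    approx = [(a + b) / math.sqrt(2) for a, b in zip(signal[::2], signal[1::2])]
    detail = [(a - b) / math.sqrt(2) for a, b in zip(signal[::2], signal[1::2])]
    return approx, detail

def threshold(coeffs, t):
    """Zero out insignificant coefficients (magnitude below t)."""
    return [c if abs(c) >= t else 0.0 for c in coeffs]

# A 'difference image' row: mostly small residuals plus one strong edge.
row = [1.0, 1.2, 0.9, 1.1, 8.0, 0.5, 1.0, 0.8]
approx, detail = haar_dwt(row)
sparse_detail = threshold(detail, 1.0)
```

Most detail coefficients of a smooth residual fall below the threshold, so only the coefficient carrying the strong edge survives; the resulting sparsity is what enables the greater compression.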
Magic-State Functional Units: Mapping and Scheduling Multi-Level Distillation Circuits for Fault-Tolerant Quantum Architectures
Quantum computers have recently made great strides and are on a long-term
path towards useful fault-tolerant computation. A dominant overhead in
fault-tolerant quantum computation is the production of high-fidelity encoded
qubits, called magic states, which enable reliable error-corrected computation.
We present the first detailed designs of hardware functional units that
implement space-time optimized magic-state factories for surface code
error-corrected machines. Interactions among distant qubits require surface
code braids (physical pathways on chip), which must be routed. Magic-state
factories are circuits comprising a complex set of braids that is more
difficult to route than the quantum circuits considered in previous work [1]. This
paper explores the impact of scheduling techniques, such as gate reordering and
qubit renaming, and we propose two novel mapping techniques: braid repulsion
and dipole moment braid rotation. We combine these techniques with graph
partitioning and community detection algorithms, and further introduce a
stitching algorithm for mapping subgraphs onto a physical machine. Our results
show a factor of 5.64 reduction in space-time volume compared to the best-known
previous designs for magic-state factories.
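As a hedged illustration of the routing subproblem, the sketch below finds a shortest pathway for one braid on a chip grid while avoiding cells claimed by other braids. The grid dimensions, coordinates, and function are hypothetical stand-ins; the paper's mapping and scheduling machinery (braid repulsion, dipole moment rotation, graph partitioning, stitching) goes far beyond a single-braid search.

```python
# Hypothetical sketch of one subproblem the paper addresses: routing a
# braid (a physical pathway on a chip grid) between two qubit sites while
# avoiding cells already occupied by other braids. Plain BFS shortest path.

from collections import deque

def route_braid(width, height, start, goal, occupied):
    """Shortest unoccupied path on a grid, or None if fully blocked."""
    frontier = deque([start])
    came_from = {start: None}
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            path = []
            while cell is not None:       # walk back to the start
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        x, y = cell
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = nxt
            if (0 <= nx < width and 0 <= ny < height
                    and nxt not in occupied and nxt not in came_from):
                came_from[nxt] = cell
                frontier.append(nxt)
    return None

# Other braids block the middle column except for one free cell at y = 1.
occupied = {(2, 0), (2, 2), (2, 3)}
path = route_braid(5, 4, (0, 1), (4, 1), occupied)
```

Congestion is the crux: once many braids compete for the same free cells, greedy per-braid routing fails, which is why the paper resorts to global mapping techniques.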
Dual-Modality (Neutron and X-ray) Imaging for Characterization of Partially Saturated Granular Materials and Flow Through Porous Media
Problems involving the mechanics of partially saturated soil and the physics of flow through porous media are complex and remain largely unresolved under the continuum approach. Recent advances in radiation-based imaging techniques provide unique access to simultaneously observe the continuum-scale response while probing the corresponding microstructure, enabling predictive science and engineering tools in place of the phenomenological approaches used to date.
Recent developments in X-ray/synchrotron and neutron imaging techniques provide tools to visualize the interior of soil specimens at the pore/grain level. X-ray and neutron radiation often present complementary contrast for a given material in the images, due to their different fundamental interaction mechanisms: while X-rays mainly interact with the electron clouds, neutrons interact directly with the nucleus of an atom. The dual-modal contrasts are well suited for probing the three phases (silica, air, and water) of partially saturated sand, since neutrons penetrate deeply through large samples and are very sensitive to water, while high-energy X-rays can penetrate moderate sample sizes and clearly show the particle and void phases.
Both neutron and X-ray imaging techniques were used to study the microstructure of partially saturated compacted sand and water flow behavior through sand with different initial structures. The water distribution in compacted sand at different water contents and for different sand grain shapes was visualized with relatively coarse-resolution neutron radiographs and tomograms. The dual-modal contrast of partially saturated sand was presented using high-spatial-resolution neutron and X-ray imaging. An advanced image registration technique was used to combine the dual-modality data for a more complete quantitative analysis. Quantities such as grain size distribution, pore size distribution, coordination number, and water saturation along the height were obtained from the image data. Predictive simulations were performed to obtain capillary pressure-saturation curves and to simulate the two-fluid-phase (water and air) distribution based on the image data. In-situ water flow experiments were performed to investigate the effect of the initial microstructure. Flow patterns for dense and loose states of Ottawa sand specimens were compared, and the flow patterns and water distribution of dense Ottawa and Q-ROK sand specimens were visualized with high-resolution neutron and X-ray image data.
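One of the quantitative analyses mentioned above, water saturation along the specimen height, can be sketched from a segmented volume. The phase labels, label values, and the tiny volume below are made up for illustration; the study's actual volumes come from registered neutron and X-ray tomograms.

```python
# Illustrative sketch (made-up labels and data, not the study's images):
# computing water saturation along the specimen height from a segmented
# volume, as saturation = water / (water + air) within the pore space of
# each horizontal slice.

GRAIN, WATER, AIR = 0, 1, 2   # hypothetical phase labels after segmentation

def saturation_profile(volume):
    """volume[z][y][x] holds a phase label; returns saturation per slice z."""
    profile = []
    for slice_ in volume:
        water = sum(row.count(WATER) for row in slice_)
        air = sum(row.count(AIR) for row in slice_)
        pores = water + air
        profile.append(water / pores if pores else 0.0)
    return profile

# Two tiny 2x2 slices: fully saturated pores at the bottom, drier on top.
volume = [
    [[GRAIN, WATER], [WATER, GRAIN]],   # z = 0
    [[GRAIN, AIR],   [WATER, AIR]],     # z = 1
]
profile = saturation_profile(volume)
```

The same per-slice counting generalizes directly to grain-phase fractions or porosity profiles once the dual-modality data have been registered into a common frame.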
Two-Phase Dynamics of Granular Materials in Newtonian Fluids
Many scientific and technical problems concerning the dynamics of complex fluids, such as multi-phase flow and realistic flow in porous and granular media, deal with the interaction between fluids and particles rather than with the dynamics of the fluid alone. Research on how the surrounding fluid affects the dynamics of particles, and on how to treat the problem computationally at the microscopic level, is still in its early stages. The aim of this study is to develop a microscopic simulation method (in which the fluid flows around the particles) so that granular particles can be simulated inside fluids to study such problems. This is done by combining a simulation method for granular particles with a simulation method for an incompressible Newtonian fluid. The granular particles are implemented via the discrete element method (DEM), where the elastic contact force between two undeformed contacting polygonal particles is proportional to their overlap area ("hard particle, soft contact"). A second-order Gear predictor-corrector (BDF2) is used as the time integrator to solve the equations of motion of the particles. For the fluid phase, the implementation of the incompressible Navier-Stokes equations via the Galerkin finite element method (FEM) is formulated as differential algebraic equations (DAE) with the pressures as Lagrange parameters. The time integration is again via BDF2, while the resulting non-linear equations are solved via the Newton-Raphson method. The spatial discretization uses Taylor-Hood elements from Delaunay triangulations, with additional post-processing by a relaxation algorithm. The coupling of the DEM for the granular particles and the FEM for the fluid is via appropriate boundary conditions and the drag force (computed by integrating the fluid stress tensor over the particle's surface). This is verified via the computation of wall correction factors for a sinking particle.
The fluid simulation is extended to a simulation of free surfaces, where the motion of the surface is integrated according to the velocity on the surface obtained from the FEM scheme. The second-order Adams-Bashforth method turns out to be the most suitable integrator for the surface motion. Compared to conventional efforts, which try to solve partial differential equations for the motion of the surface, the additional effort of our method with respect to new data structures etc. is minimal. The free-surface code is verified by simulating the collapse of a water column. For the speed of the wavefronts, excellent agreement with the lubrication approximation is obtained at large viscosity. The agreement of the results with experimental data for water is a further gratifying result. Two numerical experiments are conducted using the DEM-FEM code: one with rather slow dynamics, the other relatively more "violent". The compaction simulation has shown that the addition of fluid to a granular assembly can increase the sound velocity in the system compared to the dry case. The high viscosity slowed down the compaction, irrespective of whether the system was tapped only on the ground or on the whole boundary. The granular column simulations show that for systems immersed in fluids, rolling of particles becomes less important than for the corresponding dry systems.
University of Electro-Communications
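The overlap-proportional contact force of the "hard particle, soft contact" scheme can be sketched as follows. For brevity the particles are axis-aligned squares rather than the general polygons used in the thesis, and the stiffness value is an arbitrary placeholder.

```python
# Minimal DEM sketch under simplifying assumptions (axis-aligned square
# particles instead of general polygons): the elastic contact force is
# proportional to the overlap area of the two touching particles.

def overlap_area(p, q):
    """Overlap area of two axis-aligned squares given as (x, y, side)."""
    ax, ay, a = p
    bx, by, b = q
    dx = min(ax + a, bx + b) - max(ax, bx)
    dy = min(ay + a, by + b) - max(ay, by)
    return max(dx, 0.0) * max(dy, 0.0)

def contact_force(p, q, stiffness):
    """Repulsive force magnitude proportional to the overlap area."""
    return stiffness * overlap_area(p, q)

# Two unit squares overlapping in a 0.5 x 1.0 strip.
p = (0.0, 0.0, 1.0)
q = (0.5, 0.0, 1.0)
force = contact_force(p, q, stiffness=100.0)
```

For general polygons the overlap area is computed by polygon clipping rather than a closed-form expression, but the force law and its role in the particle equations of motion are the same.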
Automatic synthesis of analog layout: a survey
A review of recent research in the automatic synthesis of physical geometry for analog integrated circuits is presented. In the introduction, an explanation of the difficulties involved in analog layout, as opposed to digital layout, is given. A review of the literature then follows. Emphasis is placed on the exposition of general methods for addressing problems specific to analog layout, with the details of specific systems only being given when they serve to illustrate these methods well. The conclusion discusses remaining problems and offers a prediction as to how technology will evolve to solve them. It is argued that although progress has been and will continue to be made in the automation of analog IC layout, due to fundamental differences in the nature of analog IC design as opposed to digital design, it should not be expected that the level of automation of the former will reach that of the latter any time soon.