90 research outputs found

    Development of photochemical methods for the synthesis of Janus particles with specific properties and self-assembly capabilities

    Get PDF
    The abstract is provided in the attachment.

    The acquisition of Hyperspectral Digital Surface Models of crops from UAV snapshot cameras

    Get PDF
    This thesis develops a new approach to capturing information about agricultural crops by utilizing advances in the fields of robotics, sensor technology, computer vision and photogrammetry: hyperspectral digital surface models (HS DSMs) generated with UAV snapshot cameras are a representation of a surface in 3D space linked with hyperspectral information emitted and reflected by the objects covered by that surface. The overall research aim of this thesis is to evaluate whether HS DSMs are suited to supporting site-specific crop management. Based on six research studies, three research objectives are discussed for this evaluation. Firstly, the influences of environmental effects, the sensing system and data processing on the spectral data within HS DSMs are discussed. Secondly, the comparability of HS DSMs to data from other remote sensing methods is investigated, and thirdly their potential to support site-specific crop management is evaluated. Most of the data in this thesis was acquired in an experimental plot trial in Klein-Altendorf, Germany, with six different barley varieties and two different fertilizer treatments in the growing seasons of 2013 and 2014. In total, 22 measurement campaigns were carried out in the context of this thesis. HS DSMs acquired with the hyperspectral snapshot camera Cubert UHD 185-Firefly show great potential for practical applications. The combination of UAVs and the UHD allowed data to be captured at high spatial, spectral and temporal resolution. The spatial resolution allowed detection of small-scale heterogeneities within the plant population. Additionally, with the spectral and 3D information contained in HS DSMs, plant parameters such as chlorophyll, biomass and plant height could be estimated within individual growing stages and across different growing stages. The techniques developed in this thesis therefore offer a significant contribution towards increasing cropping efficiency through the support of site-specific management.
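
    As a rough illustration of how the two information layers in an HS DSM can be combined, the sketch below derives a per-pixel crop height from a surface model and a bare-soil terrain model, plus a simple spectral index from two assumed band positions. The array shapes, band indices and the combined biomass proxy are placeholders for illustration, not values or models taken from the thesis.

```python
import numpy as np

# Placeholder inputs: a hyperspectral cube (rows x cols x bands) from a UAV
# snapshot camera, plus a digital surface model (DSM) of the canopy and a
# bare-soil digital terrain model (DTM), both in metres. All values are
# synthetic; only the combination of spectral and 3D layers is illustrated.
cube = np.random.rand(100, 100, 125)      # reflectance cube (assumed shape)
dsm = 100.0 + np.random.rand(100, 100)    # canopy surface elevation
dtm = np.full((100, 100), 100.0)          # terrain elevation from a bare-soil survey

plant_height = dsm - dtm                  # per-pixel crop height estimate

red, nir = 55, 95                         # assumed band indices near 670 / 800 nm
ndvi = (cube[..., nir] - cube[..., red]) / (cube[..., nir] + cube[..., red] + 1e-9)

# Hypothetical combined indicator: spectral greenness weighted by canopy height,
# in the spirit of using both information types for biomass estimation.
biomass_proxy = ndvi * plant_height
```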

    Application of evolutionary computing in the design of high-throughput digital filters

    Get PDF

    New approaches for efficient on-the-fly FE operator assembly in a high-performance mantle convection framework

    Get PDF

    Toatie: functional hardware description with dependent types

    Get PDF
    Describing correct circuits remains a tall order, despite four decades of evolution in Hardware Description Languages (HDLs). Many enticing circuit architectures require recursive structures or complex compile-time computation — two patterns that prove difficult to capture in traditional HDLs. In a signal processing context, the Fast FIR Algorithm (FFA) structure for efficient parallel filtering proves to be naturally recursive, and most Multiple Constant Multiplication (MCM) blocks decompose multiplications into graphs of simple shifts and adds using demanding compile time computation. Generalised versions of both remain mostly in academic folklore. The implementations which do exist are often ad hoc circuit generators, written in software languages. These pose challenges for verification and are resistant to composition. Embedded functional HDLs, that represent circuits as data, allow for these descriptions at the cost of forcing the designer to work at the gate-level. A promising alternative is to use a stand-alone compiler, representing circuits as plain functions, exemplified by the CλaSH HDL. This, however, raises new challenges in capturing a circuit’s staging — which expressions in the single language should be reduced during compile-time elaboration, and which should remain in the circuit’s run-time? To better reflect the physical separation between circuit phases, this work proposes a new functional HDL (representing circuits as functions) with first-class staging constructs. Orthogonal to this, there are also long-standing challenges in the verification of parameterised circuit families. Industry surveys have consistently reported that only a slim minority of FPGA projects reach production without non-trivial bugs. While a healthy growth in the adoption of automatic formal methods is also reported, the majority of testing remains dynamic — presenting difficulties for testing entire circuit families at once. This research offers an alternative verification methodology via the combination of dependent types and automatic synthesis of user-defined data types. Given precise enough types for synthesisable data, this environment can be used to develop circuit families with full functional verification in a correct-by-construction fashion. This approach allows for verification of entire circuit families (not just one concrete member) and side-steps the state-space explosion of model checking methods. Beyond the existing work, this research offers synthesis of combinatorial circuits — not just a software model of their behaviour. This additional step requires careful consideration of staging, erasure & irrelevance, deriving bit representations of user-defined data types, and a new synthesis scheme. This thesis contributes steps towards HDLs with sufficient expressivity for awkward, combinatorial signal processing structures, allowing for a correct-by-construction approach, and a prototype compiler for netlist synthesis.
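
    As a software-level illustration of the recursive FFA structure mentioned above (not of the thesis' HDL itself), the sketch below checks the standard 2-parallel fast FIR identity, in which three half-length sub-filters replace four; the filter taps and input samples are arbitrary random values.

```python
import numpy as np

def ffa2_fir(x, h):
    """2-parallel fast FIR: three half-length sub-filters instead of four."""
    x0, x1 = x[0::2], x[1::2]            # even / odd input phases
    h0, h1 = h[0::2], h[1::2]            # even / odd tap phases
    a = np.convolve(h0, x0)              # H0*X0
    b = np.convolve(h1, x1)              # H1*X1
    c = np.convolve(h0 + h1, x0 + x1)    # (H0+H1)*(X0+X1), the shared third filter
    y0 = np.append(a, 0.0) + np.insert(b, 0, 0.0)   # Y0 = H0*X0 + z^-1 H1*X1
    y1 = c - a - b                                   # Y1 = H0*X1 + H1*X0
    y = np.zeros(len(x) + len(h) - 1)
    y[0::2], y[1::2] = y0, y1            # re-interleave the two output phases
    return y

x, h = np.random.randn(64), np.random.randn(8)       # even lengths assumed
assert np.allclose(ffa2_fir(x, h), np.convolve(x, h))
```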

    Computational methodology for modelling the dynamics of statistical arbitrage

    Get PDF
    Recent years have seen the emergence of a multi-disciplinary research area known as "Computational Finance". In many cases the data generating processes of financial and other economic time-series are at best imperfectly understood. By allowing restrictive assumptions about price dynamics to be relaxed, recent advances in computational modelling techniques offer the possibility to discover new "patterns" in market activity. This thesis describes an integrated "statistical arbitrage" framework for identifying, modelling and exploiting small but consistent regularities in asset price dynamics. The methodology developed in the thesis combines the flexibility of emerging techniques such as neural networks and genetic algorithms with the rigour and diagnostics provided by established modelling tools from the fields of statistics, econometrics and time-series forecasting. The modelling methodology described in the thesis consists of three main parts. The first part is concerned with constructing combinations of time-series which contain a significant predictable component, and is a generalisation of the econometric concept of cointegration. The second part of the methodology is concerned with building predictive models of the mispricing dynamics and consists of low-bias estimation procedures which combine elements of neural and statistical modelling. The third part of the methodology controls the risks posed by model selection and performance instability by actively encouraging diversification across a "portfolio of models". A novel population-based algorithm for the joint optimisation of a set of trading strategies is presented, inspired both by genetic and evolutionary algorithms and by modern portfolio theory. Throughout the thesis the performance and properties of the algorithms are validated by means of experimental evaluation on synthetic data sets with known characteristics. The effectiveness of the methodology is demonstrated by extensive empirical analysis of real data sets, in particular daily closing prices of FTSE 100 stocks and international equity indices.
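
    A toy version of the first two stages (not the thesis' actual procedure) is sketched below: a hedge ratio is fitted by least squares to form a mispricing series for a synthetic cointegrated pair, and a mean-reversion trading signal is derived from its standardised deviation. All series, coefficients and thresholds are invented for illustration.

```python
import numpy as np

# Synthetic cointegrated pair: two prices sharing one stochastic trend.
rng = np.random.default_rng(0)
common = np.cumsum(rng.normal(size=1000))            # shared random-walk trend
p1 = common + rng.normal(scale=0.5, size=1000)
p2 = 0.8 * common + rng.normal(scale=0.5, size=1000)

# Stage 1: construct a combination with a predictable (mean-reverting) component.
beta, alpha = np.polyfit(p2, p1, 1)                  # OLS hedge ratio and intercept
mispricing = p1 - (alpha + beta * p2)                # residual, ideally stationary

# Stage 2: a naive trading rule on the standardised mispricing.
z = (mispricing - mispricing.mean()) / mispricing.std()
position = np.where(z > 1.0, -1, np.where(z < -1.0, 1, 0))   # short rich / long cheap
```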

    Block-level test scheduling under power dissipation constraints

    Get PDF
    As device technologies such as VLSI and Multichip Module (MCM) become mature, and larger and denser memory ICs are implemented for high-performance digital systems, power dissipation becomes a critical factor and can no longer be ignored either in normal operation of the system or under test conditions. One of the major considerations in test scheduling is the fact that heat dissipated during test application is significantly higher than during normal operation (sometimes 100-200% higher). Test scheduling is strongly related to test concurrency, a design property which strongly impacts testability and power dissipation. To satisfy high fault coverage goals with reduced test application time under given power dissipation constraints, the testing of all components in the system should be performed in parallel to the greatest extent possible. Some theoretical analysis of this problem has been carried out, but only at the IC level. The problem was basically described as compatible test clustering, where the compatibility among tests is given by test resource and power dissipation conflicts at the same time. From an implementation point of view, this problem was identified as NP-complete. In this thesis, an efficient scheme for overlaying the block-tests, called the extended tree growing technique, is proposed together with classical scheduling algorithms to search for power-constrained block-test scheduling (PTS) profiles in polynomial time. Classical algorithms like list-based scheduling and distribution-graph based scheduling are employed to tackle the PTS problem at a high level. This approach exploits test parallelism under power constraints: the block-test intervals of compatible subcircuits are overlaid to test as many of them as possible concurrently, so that the maximum accumulated power dissipation is balanced and does not exceed the given limit. The test scheduling discipline assumed here is partitioned testing with run to completion. A constant additive model is employed for power dissipation analysis and estimation throughout the algorithm.
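
    For flavour only, the sketch below greedily packs block-tests under a power budget using a longest-first priority. It is a plain list-scheduling baseline under the same constant additive power model, not the thesis' extended tree growing technique, and the example tests are invented.

```python
# Greedy list scheduling under a power budget: at each step, start every
# pending test whose power still fits under p_max (longest tests first);
# if nothing fits, advance time to the next test completion.
def schedule(tests, p_max):
    """tests: list of (name, length, power); returns (name, start_time) pairs."""
    assert all(p <= p_max for _, _, p in tests), "a single test exceeds the budget"
    pending = sorted(tests, key=lambda t: -t[1])      # longest-first priority
    running, result, t = [], [], 0
    while pending or running:
        used = sum(p for _, p in running)             # accumulated power right now
        started = False
        for test in list(pending):
            name, length, power = test
            if used + power <= p_max:
                running.append((t + length, power))   # (finish time, power)
                result.append((name, t))
                pending.remove(test)
                used += power
                started = True
        if not started and running:
            t = min(end for end, _ in running)        # jump to next completion
        running = [(end, p) for end, p in running if end > t]
    return result

print(schedule([("A", 5, 3), ("B", 4, 2), ("C", 3, 4)], p_max=6))
```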

    Theory and application of the adjoint method in geodynamics and an extended review of analytical solution methods to the Stokes equation

    Get PDF
    The initial condition problem with respect to the temperature distribution in the Earth's mantle is the Pandora's box of geodynamics. Heat transport inside the Earth follows the principles of advection and conduction. But since conduction is an irreversible process, this mechanism leads to a huge amount of information being lost over time. For this reason, recovering a detailed state of the Earth's mantle some million years ago is an intrinsically unsolvable problem. In this work we present a novel mathematical method, the adjoint method in geodynamics, which does not solve but rather circumvents the initial condition problem by reformulating the task as an optimisation problem. We aim for a past state of the Earth's mantle that evolves towards the current, and thus observable, state in an optimal way. To this end, huge computational resources are needed, since the 'optimal' solution can only be found in an iterative process. In this work, we developed a new general operator formulation in order to determine the adjoint version of the governing equations of mantle flow and applied this method to the high-resolution numerical mantle circulation code TERRA. For our models, we used a global grid spacing of approx. 30 km and more than 80 million mesh elements. We found a reconstruction of the Earth's mantle at 40 Ma that is, with respect to our modelling parameters, consistent with today's observations gathered from seismic tomography. With this published fundamental work, we open the door to a variety of future applications, e.g. the possible incorporation of geological and geodetic data sets as further constraints on the model trajectory over geological time scales. While high-resolution numerical models and even the implementation of inversion schemes have become feasible over the past decades due to increasing computational resources, there is still a high demand in the community for analytical solution methods. Restricting the physical parameter space in the governing equations, e.g. by only allowing for a radially varying viscosity, it can be shown that in some cases the resulting simplified equations can even be solved in a (semi-)analytical way. In other words, in these simplified scenarios no large-scale computational resources or high-performance clusters are needed; the solution for a global flow system can be determined in minutes even on a standard computer. Besides this apparent advantage, analytical and numerical solutions can also go hand in hand, since numerical computer codes may be tested and benchmarked by means of these manufactured solutions. Hence, we devote a large portion of this work to a detailed derivation of these analytical approaches. We essentially start from scratch, with the intention of covering all possible traps and pitfalls on the way from the governing equations to their solutions and of providing a service to future scientists who are stuck somewhere in the middle of this road. Besides the derivation, we also present in detail how such an analytical approach can be used as a benchmark for a high-resolution mantle circulation code. We applied this theory to the prototype of a new high-performance mantle convection framework being developed in the Terra-Neo project and published the results along with a small portion of the derived theory.
    In an additional chapter of this work, we focus on a detailed analysis of the current state of the Earth's gravitational field, which has been measured with extraordinary accuracy by the recent satellite missions CHAMP, GRACE and GOCE. The link between our work and the gravitational field also originates in the analytical solution methods. It can be shown that, due to the effect of flow-induced dynamic topography, the Earth's gravity field is highly sensitive to the viscosity profile of the Earth's mantle. We show that, even without using any other external knowledge or data set, the gravitational field itself restricts the possible choices for the Earth's mantle viscosity to a well-defined parameter space. Furthermore, in the course of these examinations we found that mantle processes are not capable of explaining the short-wavelength signals in the observed gravity field at all, even with the best-fitting viscosity profile. To this end, we developed a simple crustal model that is based only on topographic data (ETOPO) and the principle of isostasy, and showed that even with this very basic approach we can explain the majority of short length-scale features in the observed gravity signal. Finally, in combination with a (simple, static and analytic) mantle flow model based on a density field derived from seismic tomography and mineralogy, we found a nearly perfect fit between modelled and observed gravitational data throughout all wavelengths under consideration (spherical harmonic degree and order up to l=100).
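
    To make the "optimisation instead of inversion" idea concrete, here is a toy adjoint calculation that is far simpler than mantle convection: the forward model is a fixed linear update (a crude 1-D diffusion step), the misfit is the squared distance to a "present-day" observation, and the gradient with respect to the initial state is obtained by running the transposed model backwards. Grid size, step count and step length are arbitrary choices for the sketch.

```python
import numpy as np

# Forward model: N steps of u -> M u, with M a discretised diffusion update.
# Misfit J(u0) = 0.5*||u_N - d||^2 has gradient (M^T)^N (u_N - d), which the
# adjoint (backward) sweep computes; u0 is then improved by gradient descent.
n, steps = 50, 20
lap = -2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)    # 1-D Laplacian stencil
M = np.eye(n) + 0.1 * lap                                   # explicit diffusion step

u_true = np.exp(-0.5 * ((np.arange(n) - 25) / 4.0) ** 2)    # "unknown" past state
d = np.linalg.matrix_power(M, steps) @ u_true               # "present-day" observation

u0 = np.zeros(n)                                            # first guess of the past
for _ in range(500):
    u = u0.copy()
    for _ in range(steps):
        u = M @ u                                            # forward sweep
    grad = u - d
    for _ in range(steps):
        grad = M.T @ grad                                    # adjoint (backward) sweep
    u0 -= 0.5 * grad                                         # gradient-descent update
```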

    Optimization Algorithms For The Multiple Constant Multiplications Problem

    Get PDF
    (PhD) -- İstanbul Technical University, Institute of Science and Technology, 2009. In this thesis, exact and approximate algorithms designed for the multiple constant multiplications (MCM) problem, i.e., the implementation of the multiplication of a variable by multiple constants using a minimum number of addition/subtraction operations, are introduced. In the design of an exact common subexpression elimination (CSE) algorithm, we relied on a previously proposed algorithm that models the MCM problem as a 0-1 integer linear programming problem. To handle the area and delay parameters in the exact CSE algorithm, a new exact model is proposed. To reduce the search space to be explored by the exact algorithm, problem reduction and model simplification techniques are introduced. It is shown that the use of these techniques enables the exact CSE algorithm to be applied to larger instances. Also, the exact CSE algorithm equipped with these techniques is extended to handle the constants under a general number representation, yielding better solutions than those of the exact CSE algorithm. Besides, an exact graph-based algorithm that can be applied to real-size instances is introduced. In addition to the exact algorithms, approximate CSE and graph-based algorithms are presented that find solutions close to the minimum and can be applied to instances that the exact algorithms cannot handle. It is shown that the exact and approximate algorithms proposed in this thesis give better solutions than those of the previously proposed heuristic algorithms. Furthermore, in this thesis the exact CSE algorithm is applied to the minimization of area under a delay constraint, the minimization of area at gate level, and the optimization of area in the synthesis of high-speed digital finite impulse response filters.
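
    As a minimal illustration of the underlying cost model (not of the thesis' ILP or graph-based algorithms), the sketch below converts each constant to canonical signed digit form and counts the shift-and-add/subtract operations a naive, unshared implementation would need; real MCM methods do better by sharing intermediate terms across constants. The coefficient set is arbitrary.

```python
def csd(n):
    """Canonical signed digit form of a positive integer as (digit, shift) pairs."""
    digits, shift = [], 0
    while n:
        if n & 1:
            d = 2 - (n & 3)          # +1 if n % 4 == 1, -1 if n % 4 == 3
            n -= d
            digits.append((d, shift))
        n >>= 1
        shift += 1
    return digits

constants = [7, 11, 45]              # example coefficient set (arbitrary)
for c in constants:
    terms = " ".join(f"{'+' if d > 0 else '-'}(x<<{s})" for d, s in csd(c))
    print(f"{c}*x = {terms.lstrip('+')}  ->  {len(csd(c)) - 1} add/sub ops")

# Without sharing, the whole block costs the sum of per-constant operation
# counts; MCM algorithms reduce this by reusing partial products such as
# 3*x or 5*x across several coefficients.
print("naive total:", sum(len(csd(c)) - 1 for c in constants))
```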