
    Fundamental principles of data assimilation underlying the Verdandi library: applications to biophysical model personalization within euHeart

    We present the fundamental principles of data assimilation underlying the Verdandi library, and how they are articulated with the modular architecture of the library. This translates, in particular, into the definition of standardized interfaces through which the data assimilation library interoperates with the model simulation software and the so-called observation manager. We also survey various examples of data assimilation applied to the personalization of biophysical models, in particular for cardiac modeling applications within the euHeart European project. This illustrates the power of data assimilation concepts in such novel applications, with tremendous potential in clinical diagnosis assistance.
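    Verdandi itself is a C++ library, and the abstract does not spell out its classes; the Python sketch below only illustrates the separation of concerns it describes, namely an assimilation driver that talks to the model and to the observation manager exclusively through fixed interfaces, so each part can be swapped independently. All names here (Model, ObservationManager, analysis) are illustrative assumptions, not Verdandi's actual API.

    ```python
    # Minimal sketch of the interface separation described above: the
    # assimilation loop touches the model and the observation manager
    # only through their public methods. Illustrative names, not Verdandi's API.
    import numpy as np

    class Model:
        """Forward model wrapper: here a simple linear map x_{k+1} = M x_k."""
        def __init__(self, M, x0):
            self.M = M
            self.x = x0
        def forward(self):
            self.x = self.M @ self.x

    class ObservationManager:
        """Serves observations y_k and the observation operator H."""
        def __init__(self, H, observations):
            self.H = H
            self.observations = observations
        def get_observation(self, k):
            return self.observations[k]

    def analysis(x, P, y, H, R):
        """One Kalman analysis step: correct forecast x with observation y."""
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x_a = x + K @ (y - H @ x)
        P_a = (np.eye(len(x)) - K @ H) @ P
        return x_a, P_a

    # Driver: toy 2-state system observed in its first component.
    M = np.array([[1.0, 0.1], [0.0, 1.0]])
    model = Model(M, x0=np.zeros(2))
    obs_mgr = ObservationManager(H=np.array([[1.0, 0.0]]),
                                 observations=[np.array([1.0]), np.array([1.2])])
    P, Q, R = np.eye(2), 0.01 * np.eye(2), np.array([[0.1]])
    for k in range(2):
        model.forward()                       # forecast step
        P = M @ P @ M.T + Q
        model.x, P = analysis(model.x, P, obs_mgr.get_observation(k),
                              obs_mgr.H, R)   # analysis step
    ```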

    Mathematics and Digital Signal Processing

    Modern computer technology has opened up new opportunities for the development of digital signal processing methods. The applications of digital signal processing have expanded significantly and today include audio and speech processing; sonar, radar, and other sensor array processing; spectral density estimation; statistical signal processing; digital image processing; signal processing for telecommunications; control systems; biomedical engineering; and seismology, among others. This Special Issue aims at wide coverage of the problems of digital signal processing, from mathematical modeling to the implementation of problem-oriented systems. The basis of digital signal processing is digital filtering. Wavelet analysis implements multiscale signal processing and is used to solve applied problems of de-noising and compression. Processing of visual information, including image and video processing and pattern recognition, is actively used today in robotic systems and industrial process control. Improving digital signal processing circuits and developing new signal processing systems can improve the technical characteristics of many digital devices. The development of new methods of artificial intelligence, including artificial neural networks and brain-computer interfaces, opens up new prospects for the creation of smart technology. This Special Issue contains the latest technological developments in mathematics and digital signal processing. The results presented are of interest to researchers in the field of applied mathematics and to developers of modern digital signal processing systems.
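    Digital filtering, named above as the basis of digital signal processing, is easy to demonstrate concretely. The sketch below is a minimal FIR low-pass filtering example using standard SciPy routines; the test signal, filter length, and 30 Hz cutoff are arbitrary illustrative choices.

    ```python
    # Minimal FIR low-pass filter: keep a 5 Hz tone, suppress 120 Hz interference.
    import numpy as np
    from scipy import signal

    fs = 1000.0                                 # sampling rate, Hz
    t = np.arange(0, 1.0, 1 / fs)
    x = (np.sin(2 * np.pi * 5 * t)              # 5 Hz component of interest
         + 0.5 * np.sin(2 * np.pi * 120 * t))   # 120 Hz interference

    b = signal.firwin(numtaps=101, cutoff=30, fs=fs)  # linear-phase FIR, 30 Hz cutoff
    y = signal.lfilter(b, 1.0, x)                     # filtered output: 5 Hz tone survives
    ```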

    A Kalman filter approach for exploiting bluetooth traffic data when estimating time-dependent OD matrices

    Time-dependent origin–destination (OD) matrices are essential inputs for dynamic traffic models such as microscopic and mesoscopic traffic simulators. Dynamic traffic models also support real-time traffic management decisions, and they are traditionally used in the design and evaluation of advanced traffic management and information systems (ATMS/ATIS). Time-dependent OD estimation is typically based either on Kalman filtering or on bilevel mathematical programming, which can be considered in most cases as ad hoc heuristics. The advent of new information and communication technologies (ICT) provides new types of traffic data with higher quality and accuracy, which in turn allow new modeling hypotheses that lead to more computationally efficient algorithms. This article presents ad hoc Kalman filtering procedures that explicitly exploit Bluetooth sensor traffic data, and it reports numerical results from computational experiments performed at a network test site.
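    As a rough sketch of the kind of estimator involved (not the paper's exact formulation), the code below performs one step of a generic linear Kalman filter in which the state is the vector of OD flows, the measurements are Bluetooth-derived link counts, and a random-walk transition plus an assumed assignment matrix A mapping flows to counts are my illustrative assumptions.

    ```python
    # One Kalman filter step for OD estimation: state x = OD flows,
    # measurement y = link counts, y = A x + noise. Illustrative model only.
    import numpy as np

    def kalman_step(x, P, y, A, Q, R):
        x_pred = x                 # random-walk forecast of OD flows
        P_pred = P + Q
        S = A @ P_pred @ A.T + R   # innovation covariance
        K = P_pred @ A.T @ np.linalg.inv(S)
        x_new = x_pred + K @ (y - A @ x_pred)
        P_new = (np.eye(len(x)) - K @ A) @ P_pred
        return x_new, P_new

    # Illustrative dimensions: 4 OD pairs observed via 3 Bluetooth-equipped links.
    rng = np.random.default_rng(1)
    A = rng.uniform(0, 1, (3, 4))              # assumed assignment matrix
    x, P = np.full(4, 100.0), 50.0 * np.eye(4)
    y = A @ np.array([120, 90, 110, 80]) + rng.normal(0, 5, 3)
    x, P = kalman_step(x, P, y, A, 10.0 * np.eye(4), 25.0 * np.eye(3))
    ```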

    Sensor Data Fusion for Improving Traffic Mobility in Smart Cities

    The ever-increasing urban population and vehicular traffic, without a corresponding expansion of infrastructure, have been a challenge to transportation facilities managers and commuters. While some parts of the transportation infrastructure have big data available, many other locations have sparse data. This poses a challenge for traffic state estimation and prediction in support of efficient and effective infrastructure management and route guidance. This research focuses on traffic prediction problems and aims to develop novel, robust spatio-temporal algorithms that can provide high accuracy in the presence of both big data and sparse data in a large urban road network. Intelligent transportation systems require knowledge of the current traffic state and its forecast for effective implementation. The actual traffic state has to be estimated, as the existing sensors do not capture the needed state. Sensor measurements often contain missing or incomplete data as a result of communication issues, faulty sensors, or cost, leading to incomplete monitoring of the entire road network. These missing data pose challenges to traffic estimation approaches. In this work, a robust spatio-temporal traffic imputation approach capable of withstanding high missing data rates is presented. A particle-based approach with Kriging interpolation is proposed, and its performance for different missing data ratios is investigated on a large road network. A particle-based framework for dealing with missing data is also proposed: an expression of the likelihood function is derived for the case when the missing value is calculated by Kriging interpolation. With Kriging interpolation, the missing values of the measurements are predicted and subsequently used in the computation of the likelihood terms of the particle filter algorithm. In commonly used Kriging approaches, the covariance function depends only on the separation distance, irrespective of the traffic at the considered locations. A key limitation of such an approach is its inability to capture well the traffic dynamics and transitions between different states. This thesis therefore proposes a Bayesian Kriging approach for the prediction of urban traffic that captures these dynamics and models changes via the covariance matrix. The main novelty consists in representing both stationary and non-stationary changes in traffic flows by a discriminative covariance function conditioned on the observation at each location. An advantage is that, by considering the surrounding traffic information distinctively, the proposed method is well suited to represent congested regions and interactions in both upstream and downstream areas.
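    A minimal sketch of the two ingredients the thesis combines, under simplifying assumptions that are mine rather than the thesis's (zero-mean Kriging with a squared-exponential covariance on separation distance, scalar Gaussian likelihood): the Kriging predictor fills in a missing sensor value, which then enters the particle weight computation.

    ```python
    # Kriging imputation of a missing measurement, then particle weighting
    # against the imputed value. Covariance and likelihood are illustrative.
    import numpy as np

    def kriging_predict(xs, ys, x_star, length=1.0, noise=1e-6):
        """Zero-mean Kriging with squared-exponential covariance on distance."""
        K = np.exp(-0.5 * ((xs[:, None] - xs[None, :]) / length) ** 2)
        k = np.exp(-0.5 * ((xs - x_star) / length) ** 2)
        w = np.linalg.solve(K + noise * np.eye(len(xs)), k)
        return w @ ys

    def particle_weights(particles, y_imputed, sigma):
        """Gaussian likelihood weighting of particles against the imputed value."""
        w = np.exp(-0.5 * ((particles - y_imputed) / sigma) ** 2)
        return w / w.sum()

    xs = np.array([0.0, 1.0, 3.0])      # sensor locations with data
    ys = np.array([55.0, 52.0, 40.0])   # observed speeds at those locations
    y_hat = kriging_predict(xs, ys, x_star=2.0)   # impute the missing sensor
    w = particle_weights(np.array([50.0, 45.0, 38.0]), y_hat, sigma=5.0)
    ```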

    Refined isogeometric analysis for generalized Hermitian eigenproblems

    We use refined isogeometric analysis (rIGA) to solve generalized Hermitian eigenproblems (Ku = λMu). rIGA conserves the desirable properties of maximum-continuity isogeometric analysis (IGA) while it reduces the solution cost by adding zero-continuity basis functions, which decrease the matrix connectivity. As a result, rIGA enriches the approximation space and reduces the interconnection between degrees of freedom. We compare computational costs of rIGA versus those of IGA when employing a Lanczos eigensolver with a shift-and-invert spectral transformation. When all eigenpairs within a given interval [λ_s,λ_e] are of interest, we select several shifts σ_k ∈ [λ_s,λ_e] using a spectrum slicing technique. For each shift σ_k, the factorization cost of the spectral transformation matrix K − σ_k M controls the total computational cost of the eigensolution. Several multiplications of the operator matrix (K − σ_k M)^−1 M by vectors follow this factorization. Let p be the polynomial degree of the basis functions and assume that IGA has maximum continuity of p−1. When using rIGA, we introduce C^0 separators at certain element interfaces to minimize the factorization cost. For this setup, our theoretical estimates predict computational savings of up to O(p^2) for computing a fixed number of eigenpairs in the asymptotic regime, that is, for large problem sizes. Yet, our numerical tests show that for moderate-size eigenproblems, the total observed computational cost reduction is O(p). In addition, rIGA improves the accuracy of the first N_0 eigenpairs, where N_0 is the total number of modes of the original maximum-continuity IGA discretization.
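    The shift-and-invert Lanczos workflow described above can be sketched with SciPy's sparse eigensolver. The 1D finite-difference K and identity M below merely stand in for IGA stiffness and mass matrices, and the uniform placement of the shifts σ_k is an illustrative stand-in for a real spectrum-slicing strategy; each eigsh call factorizes K − σ_k M once, which is the cost the abstract analyzes.

    ```python
    # Shift-and-invert Lanczos for Ku = lambda*Mu with spectrum slicing.
    # K, M, and the shift placement are illustrative stand-ins.
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import eigsh

    n = 200
    K = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n),
                 format='csc') * (n + 1) ** 2     # 1D Laplacian "stiffness"
    M = sp.identity(n, format='csc')              # stand-in "mass" matrix

    lam_s, lam_e = 0.0, 1e5                       # interval of interest
    shifts = np.linspace(lam_s, lam_e, 4)[1:-1]   # interior shifts sigma_k
    pairs = []
    for sigma in shifts:
        # Each call factorizes K - sigma*M once, then runs Lanczos on
        # (K - sigma*M)^{-1} M; 'LM' returns eigenvalues nearest sigma.
        vals, vecs = eigsh(K, k=10, M=M, sigma=sigma, which='LM')
        pairs.append((vals, vecs))
    ```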

    On the predictability of U.S. stock market using machine learning and deep learning techniques

    Conventional market theories are considered an inconsistent approach in modern financial analysis. This thesis focuses mainly on the application of sophisticated machine learning and deep learning techniques to stock market statistical predictability and economic significance, benchmarked against the conventional efficient market hypothesis and econometric models. Five chapters and three publishable papers are proposed altogether, and each chapter is developed to solve specific identifiable problems. Chapter one gives the general introduction of the thesis: the statement of the research problems identified in the relevant literature, the objectives of the study, and its significance. Chapter two applies a plethora of machine learning techniques to forecast the direction of the U.S. stock market. Notable sophisticated techniques such as regularization, discriminant analysis, classification trees, Bayesian classifiers, and neural networks are employed. The empirical findings reveal that the discriminant analysis classifiers, classification trees, Bayesian classifiers, and penalized binary probit models significantly outperform the binary probit models both statistically and economically, proving to be significant alternatives for portfolio managers. Chapter three focuses mainly on the application of regression training (RT) techniques to forecast the U.S. equity premium. The RT models demonstrate significant evidence of equity premium predictability, both statistically and economically, relative to the benchmark historical average, delivering significant utility gains. Chapter four investigates the statistical predictive power and economic significance of financial stock market data using deep learning techniques; these techniques prove robust both statistically and economically when forecasting the equity premium out-of-sample with a recursive window method. Chapter five gives the summary and conclusion and presents areas of further research. Overall, the deep learning techniques produced the best results in this thesis. They provide meaningful economic information on mean-variance portfolio investment for investors who are timing the market to earn future gains at minimal risk.
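    In the spirit of chapter two, the sketch below classifies next-day market direction from lagged returns with linear discriminant analysis. The data are synthetic, and the five-lag feature set, the chronological split, and the model choice are illustrative assumptions, not the thesis's specification.

    ```python
    # Direction forecasting sketch: predict sign of next-day return from
    # five lagged returns with LDA. Synthetic data, illustrative setup.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    returns = rng.normal(0, 0.01, 1000)     # stand-in daily return series
    lags = 5
    X = np.column_stack([returns[i:len(returns) - lags + i] for i in range(lags)])
    y = (returns[lags:] > 0).astype(int)    # next-day up (1) / down (0)

    # shuffle=False keeps chronological order, mimicking out-of-sample testing.
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, shuffle=False)
    clf = LinearDiscriminantAnalysis().fit(X_tr, y_tr)
    print("directional accuracy:", clf.score(X_te, y_te))
    ```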

    Solid-shell finite element models for explicit simulations of crack propagation in thin structures

    Crack propagation in thin shell structures due to cutting is conveniently simulated using explicit finite element approaches, in view of the high nonlinearity of the problem. Solid-shell elements are usually preferred for the discretization in the presence of complex material behavior and degradation phenomena such as delamination, since they allow for a correct representation of the thickness geometry. However, in solid-shell elements the small thickness leads to a very high maximum eigenfrequency, which implies very small stable time steps. A new selective mass scaling technique is proposed to increase the time step size without affecting accuracy. New "directional" cohesive interface elements are used in conjunction with selective mass scaling to account for the interaction with a sharp blade in cutting processes of thin ductile shells.
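    The mechanism is easy to see on a toy system: explicit integration is stable only for Δt ≤ 2/ω_max, and selectively adding mass that leaves rigid-body translation untouched lowers ω_max and hence enlarges the stable step. The 2-DoF system and the scaling factor alpha below are illustrative, not the paper's element formulation.

    ```python
    # Toy demonstration of selective mass scaling: the added-mass term
    # annihilates the rigid translation mode (1,1) but loads the opposed
    # "thickness" mode (1,-1), lowering omega_max and enlarging dt_crit.
    import numpy as np
    from scipy.linalg import eigh

    K = np.array([[2.0, -1.0], [-1.0, 2.0]]) * 1e6  # stiff thickness-like mode
    M = np.diag([1.0, 1.0])                         # lumped unit masses

    def critical_dt(K, M):
        omega2 = eigh(K, M, eigvals_only=True)      # generalized eigenvalues
        return 2.0 / np.sqrt(omega2.max())

    alpha = 10.0                                    # illustrative scaling factor
    M_scaled = M + alpha * (np.eye(2) - np.ones((2, 2)) / 2)  # translation preserved
    print(critical_dt(K, M), critical_dt(K, M_scaled))        # step size grows
    ```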

    Refined isogeometric analysis: a solver-based discretization method

    Isogeometric Analysis (IGA) is a computational approach frequently employed nowadays to study problems governed by partial differential equations (PDEs). This approach defines the geometry using conventional CAD functions and, in particular, NURBS. These functions represent complex geometries commonly found in engineering design and are capable of preserving exactly the geometry description under refinement, as required in the analysis. Moreover, the use of NURBS as basis functions is compatible with the isoparametric concept, allowing algebraic systems to be built directly from the computational domain representation based on spline functions, which arise from CAD. It therefore avoids defining a second space for the numerical analysis, resulting in huge reductions in the total analysis time.

    For the case of direct solvers, the performance strongly depends upon the employed discretization method. In particular, in IGA, the continuity of the solution spaces plays a significant role in their performance. Highly continuous spaces degrade the direct solver's performance, increasing the solution times by a factor of up to O(p^3) per unknown with respect to traditional finite element analysis (FEA), p being the polynomial order.

    In this work, we propose a solver-based discretization that employs highly continuous finite element spaces interconnected with low-continuity hyperplanes to maximize the performance of direct solvers. Starting from a highly continuous IGA discretization, we introduce C^0 hyperplanes, which act as separators for the direct solver, to reduce the interconnection between the degrees of freedom (DoF) in the mesh. By doing so, both the solution time and best approximation errors are simultaneously improved. We call the resulting method "refined isogeometric analysis" (rIGA). Numerical results indicate that rIGA delivers speed-up factors proportional to p^2. For instance, in a 2D mesh with four million elements and p=5, a Laplace linear system resulting from rIGA is solved 22 times faster than the one from highly continuous IGA. In a 3D mesh with one million elements and p=3, the linear rIGA system is solved 15 times faster than the IGA one.

    We have also designed and implemented a similar rIGA strategy for iterative solvers. This is a hybrid solver strategy that combines a direct solver (static condensation step) to eliminate the internal macro-element DoF with an iterative method to solve the skeleton system. The hybrid solver strategy achieves moderate savings with respect to IGA when solving a 2D Poisson problem with a structured mesh and a uniform polynomial degree of approximation. For instance, for a mesh with four million elements and polynomial degree p=3, the iterative solver is approximately 2.6 times faster (in time) when applied to the rIGA system than to the IGA one. These savings occur because the skeleton rIGA system contains fewer non-zero entries than the IGA one. The opposite situation occurs for 3D problems, and as a result, 3D rIGA discretizations provide no gains with respect to their IGA counterparts.

    Thesis director(s): David Pardo from UPV/EHU University and Victor M. Calo from Curtin University.
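    The static condensation step of the hybrid solver described above can be illustrated with a toy Schur-complement solve: a direct factorization eliminates the interior DoF, and an iterative method (CG here) solves the skeleton system. The 1D Laplacian and the every-tenth-node separators below are stand-ins for the rIGA macro-element structure, chosen for brevity.

    ```python
    # Static condensation sketch: split DoF into interior and skeleton sets,
    # eliminate interiors directly, solve the Schur complement iteratively.
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import splu, cg

    n = 99
    A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)).tocsr()
    b = np.ones(n)

    skel = np.arange(9, n, 10)                  # separator ("skeleton") DoF
    intr = np.setdiff1d(np.arange(n), skel)     # interior macro-element DoF

    A_II = A[intr][:, intr].tocsc()
    A_IS = A[intr][:, skel]
    A_SI = A[skel][:, intr]
    A_SS = A[skel][:, skel]

    lu = splu(A_II)                             # direct factorization = condensation
    # Schur complement formed densely here for brevity; real solvers keep it implicit.
    S = A_SS.toarray() - A_SI @ lu.solve(A_IS.toarray())
    rhs = b[skel] - A_SI @ lu.solve(b[intr])

    x_S, info = cg(S, rhs)                      # iterative solve on the skeleton
    x_I = lu.solve(b[intr] - A_IS @ x_S)        # back-substitute interiors

    x = np.empty(n)
    x[skel], x[intr] = x_S, x_I                 # reassemble the full solution
    ```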