
    Adaptive lifting schemes with perfect reconstruction

    In this paper, we propose a framework for constructing adaptive wavelet decompositions using the lifting scheme. A major requirement is that perfect reconstruction is possible without any overhead cost. We restrict ourselves here to the update lifting stage. It is assumed that the update filter utilises local gradient information to adapt itself to the signal, in the sense that smaller gradients `evoke' stronger update filters. As a result, sharp transitions in a signal will not be smoothed to the same extent as regions which are more homogeneous. The approach taken in this paper differs from other adaptive schemes found in the literature in that no bookkeeping is required in order to have perfect reconstruction.
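The no-bookkeeping idea can be illustrated with a toy update step (a sketch, not the paper's actual filters): if the update weight is computed only from the detail band, which the synthesis side receives unchanged, the decoder can recompute exactly the same decision and invert the step without any side information.

```python
def _weight(d, threshold=1.0):
    # Toy decision rule: smaller gradients 'evoke' stronger updates.
    # Depends only on the transmitted detail coefficient, so analysis
    # and synthesis always agree -- no bookkeeping is needed.
    return 0.5 if abs(d) < threshold else 0.1

def adaptive_update(approx, detail):
    # Adaptive update lifting step (toy variant for illustration).
    return [a + _weight(d) * d for a, d in zip(approx, detail)]

def adaptive_update_inverse(updated, detail):
    # Recompute the same weights from the detail band and subtract.
    return [u - _weight(d) * d for u, d in zip(updated, detail)]
```

Because `_weight` never looks at the (modified) approximation signal, perfect reconstruction is structural rather than something that has to be signalled.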

    Size-dependent protein-nanoparticle interactions in citrate-stabilized gold nanoparticles : the emergence of the protein corona

    Surface modifications of highly monodisperse citrate-stabilized gold nanoparticles (AuNPs) with sizes ranging from 3.5 to 150 nm after their exposure to cell culture media supplemented with fetal bovine serum were studied and characterized by the combined use of UV-vis spectroscopy, dynamic light scattering, and zeta potential measurements. In all the tested AuNPs, a dynamic process of protein adsorption was observed, evolving toward the formation of an irreversible hard protein coating known as the Protein Corona. Interestingly, the thickness and density of this protein coating were strongly dependent on the particle size, making it possible to identify different transition regimes as the size of the particles increased: (i) NP-protein complexes (or incomplete corona), (ii) the formation of a near-single dense protein corona layer, and (iii) the formation of a multilayer corona. In addition, the temporal evolution of the protein coating proceeded more quickly for small particles than for the larger ones, further revealing the significant role that size plays in the kinetics of this process. Since the biological identity of the NPs is ultimately determined by the protein corona, and different NP-biological interactions take place at different time scales, these results are relevant to biological and toxicological studies.

    Modeling the Optical Responses of Noble Metal Nanoparticles Subjected to Physicochemical Transformations in Physiological Environments : Aggregation, Dissolution and Oxidation

    Herein, we study how the optical properties of colloidal dispersions of noble metal nanoparticles (Au and Ag) are affected by processes such as aggregation and oxidative dissolution. The optical contributions of these processes to the extinction spectra in the UV-vis region often overlap, making their interpretation difficult. In this regard, modeling the UV-vis spectra of each process separately (in particular the absorbance curve, peak position, intensity, and full width at half maximum, FWHM) offers a powerful tool to identify the transformations of NPs under relevant and complex scenarios, such as in biological media. The proper identification of these transformations is crucial to understand the biological effects of the NPs.

    Computer-aided detection system for pulmonary embolism with integrated cardiac assessment based on embolic burden

    Pulmonary embolism (PE) is a cardiovascular disease resulting from occlusion(s) in the pulmonary arteries. Its definitive diagnosis relies mainly on imaging, with computed tomography pulmonary angiography being the gold standard. Recently, there has been increasing interest in automating PE detection with the use of computer-aided detection systems, aiming to reduce workloads and enhance identification. Manual semiquantitative scores of embolic burden have also been proposed to assess PE severity and reinforce management. Yet, few attempts have been made to couple both. Here, we propose a deep learning-based system for PE detection, which exploits the visual explanations from the detector network to represent and quantify embolic burden. The resulting measurements of embolic burden are used to assess cardiac function, using a univariate logistic regression model. In particular, we propose to predict a right-to-left ventricle diameter (RV/LV) ratio ≥ 1, a prognostic cardiac feature strongly associated with both embolic burden and ultimate clinical outcome. The detector network is based on a Squeeze-and-Excitation-ResNet50 and trained on a subset of the RSNA-STR Pulmonary Embolism CT dataset. For the PE detection task, we achieve an accuracy of 0.72, sensitivity of 0.73, and specificity of 0.82 on the test set, which is slightly below the performance of radiologists. As the cardiac assessment directly depends on the detector's performance, we are currently unable to successfully predict RV/LV ratio ≥ 1. Nevertheless, we believe our system is theoretically feasible and could assist in both PE detection and severity assessment in the future.
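A univariate logistic regression of the kind described can be sketched as follows. The burden scores and RV/LV labels below are hypothetical, and the gradient-descent fit is a generic stand-in for whatever estimator the authors used; only the model form, P(RV/LV ≥ 1) = sigmoid(w·burden + b), follows the abstract.

```python
import math

def fit_logistic_1d(x, y, lr=0.1, epochs=2000):
    # Univariate logistic regression fitted by batch gradient descent.
    w, b = 0.0, 0.0
    n = len(x)
    for _ in range(epochs):
        gw = gb = 0.0
        for xi, yi in zip(x, y):
            p = 1.0 / (1.0 + math.exp(-(w * xi + b)))
            gw += (p - yi) * xi
            gb += (p - yi)
        w -= lr * gw / n
        b -= lr * gb / n
    return w, b

# Hypothetical embolic-burden scores (0-1) and RV/LV >= 1 labels.
burden = [0.1, 0.2, 0.3, 0.5, 0.7, 0.8, 0.9, 1.0]
label = [0, 0, 0, 0, 1, 1, 1, 1]
w, b = fit_logistic_1d(burden, label)

def predict(score):
    # Estimated probability of RV/LV ratio >= 1 for a given burden score.
    return 1.0 / (1.0 + math.exp(-(w * score + b)))
```

With any sensible fit, higher embolic burden should map to a higher predicted probability of a dilated right ventricle.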

    A matlab toolbox for image fusion (MATIFUS).

    The MATIFUS toolbox is presented: a collection of functions, furnished with a graphical user interface, that supports a range of image fusion operations. Almost all of the toolbox functions are written in the MATLAB language. The toolbox uses implementations of multiresolution schemes that are either publicly available or can be purchased as licensed software. MATIFUS can be downloaded from a website and is available under the conditions of an agreement with the Dutch Technology Foundation ST.

    Building nonredundant adaptive wavelets by update lifting

    Adaptive wavelet decompositions appear useful in various applications in image and video processing, such as image analysis, compression, feature extraction, denoising and deconvolution, or optic flow estimation. For such tasks it may be important that the multiresolution representations take into account the characteristics of the underlying signal and leave intact important signal characteristics such as sharp transitions, edges, singularities or other regions of interest. In this paper, we propose a technique for building adaptive wavelets by means of an extension of the lifting scheme. The classical lifting scheme provides a simple yet flexible method for building new, possibly nonlinear, wavelets from existing ones. It comprises a given wavelet transform, followed by a prediction and an update step. The update step in such a scheme computes a modification of the approximation signal, using information in the detail band. It is obvious that such an operation can be inverted, and therefore the perfect reconstruction property is guaranteed. In this paper we propose a lifting scheme including an adaptive update lifting step and a fixed prediction lifting step.
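The split/predict/update structure, and why inversion is immediate, can be seen in a minimal Haar-style lifting pair (a generic illustration of classical lifting, not the adaptive scheme of the paper):

```python
def haar_lift_forward(x):
    # Classical lifting: split into even/odd samples, predict the odd
    # samples from the even ones, then update the evens with the detail.
    even, odd = x[0::2], x[1::2]
    detail = [o - e for o, e in zip(odd, even)]          # predict step
    approx = [e + d / 2 for e, d in zip(even, detail)]   # update step
    return approx, detail

def haar_lift_inverse(approx, detail):
    # Each lifting step is undone by subtracting/adding the same term,
    # so perfect reconstruction is guaranteed by construction.
    even = [a - d / 2 for a, d in zip(approx, detail)]
    odd = [e + d for e, d in zip(even, detail)]
    out = []
    for e, o in zip(even, odd):
        out += [e, o]                                     # merge step
    return out
```

Because the inverse simply replays each lifting step with the sign flipped, the update operator can in principle be replaced by a nonlinear or adaptive one without losing invertibility.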

    Temporal diffeomorphic Free Form Deformation to quantify changes induced by left and right bundle branch block and pacing

    This paper presents motion and deformation quantification results obtained from synthetic and in vitro phantom data provided by the second cardiac Motion Analysis Challenge at STACOM-MICCAI. We applied the Temporal Diffeomorphic Free Form Deformation (TDFFD) algorithm to the datasets. This algorithm builds upon a diffeomorphic version of the FFD to provide a 3D + t continuous and differentiable transform. The similarity metric includes a comparison between consecutive images, and between a reference and each of the following images. Motion and strain accuracy were evaluated on synthetic 3D ultrasound sequences with known ground truth motion. Experiments were also conducted on in vitro acquisitions.

    Analysis of uncertainty and variability in finite element computational models for biomedical engineering: characterization and propagation

    Computational modeling has become a powerful tool in biomedical engineering thanks to its potential to simulate coupled systems. However, real parameters are usually not accurately known, and variability is inherent in living organisms. To cope with this, probabilistic tools, statistical analysis and stochastic approaches have been used. This article aims to review the analysis of uncertainty and variability in the context of finite element modeling in biomedical engineering. Characterization techniques and propagation methods are presented, as well as examples of their applications in biomedical finite element simulations. Uncertainty propagation methods, both non-intrusive and intrusive, are described. Finally, the pros and cons of the different approaches and their use in the scientific community are presented. This leads us to identify future directions for research and methodological development of uncertainty modeling in biomedical engineering.
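As a minimal illustration of non-intrusive propagation, a Monte Carlo sketch with a hypothetical one-parameter "model" standing in for a finite element solve (the input distributions are assumed for the example):

```python
import random
import statistics

def model(stiffness, load):
    # Hypothetical stand-in for a finite element solve:
    # displacement of a linear spring under a point load.
    return load / stiffness

random.seed(0)
# Assumed input uncertainty: stiffness ~ N(100, 10^2), load ~ N(50, 5^2).
samples = sorted(model(random.gauss(100, 10), random.gauss(50, 5))
                 for _ in range(10_000))

mean = statistics.fmean(samples)
ci_low, ci_high = samples[250], samples[9750]   # empirical 95% interval
```

The method treats the solver as a black box (non-intrusive), which is why it is popular despite its cost: each sample is a full deterministic solve.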

    Computational evaluation of cochlear implant surgery outcomes accounting for uncertainty and parameter variability

    Cochlear implantation (CI) is a complex surgical procedure that restores hearing in patients with severe deafness. The successful outcome of the implanted device relies on a group of factors, some of them unpredictable or difficult to control. Uncertainties in the electrode array position and the electrical properties of the bone make it difficult to accurately compute the current propagation delivered by the implant and the resulting neural activation. In this context, we use uncertainty quantification methods to explore how these uncertainties propagate through all the stages of CI computational simulations. To this end, we employ an automatic framework, spanning from the finite element generation of CI models to the assessment of the neural response induced by the implant stimulation. To estimate the confidence intervals of the simulated neural response, we propose two approaches. First, we encode the variability of the cochlear morphology among the population through a statistical shape model. This allows us to generate a population of virtual patients using Monte Carlo sampling and to assign to each of them a set of parameter values according to a statistical distribution. The framework is implemented and parallelized in a High Throughput Computing environment that enables us to maximize the available computing resources. Second, we perform a patient-specific study to evaluate the computed neural response and seek the optimal post-implantation stimulus levels. Considering a single cochlear morphology, the uncertainty in tissue electrical resistivity and surgical insertion parameters is propagated using the Probabilistic Collocation method, which reduces the number of samples to evaluate. Results show that bone resistivity has the highest influence on CI outcomes. In conjunction with the variability of the cochlear length, the worst outcomes are obtained for small cochleae with high resistivity values. However, the effect of the surgical insertion length on the CI outcomes could not be clearly observed, since its impact may be concealed by the other considered parameters. Whereas the Monte Carlo approach implies a high computational cost, Probabilistic Collocation presents a suitable trade-off between precision and computational time. Results suggest that the proposed framework has great potential to help in both surgical planning decisions and in the audiological setting process.
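The collocation idea, a few deterministic solves at carefully chosen parameter values instead of thousands of random samples, can be sketched with a three-point Gauss-Hermite rule for a normally distributed resistivity; the model function here is a made-up placeholder, not the paper's simulation.

```python
import math

def model(resistivity):
    # Made-up smooth response standing in for a full CI simulation run.
    return resistivity ** 2

def collocate_normal(f, mu, sigma):
    # 3-point Gauss-Hermite rule (probabilists' convention): integrates
    # the mean of f(X), X ~ N(mu, sigma^2), exactly for polynomial f up
    # to degree 5, using only three model evaluations.
    nodes = (-math.sqrt(3.0), 0.0, math.sqrt(3.0))
    weights = (1.0 / 6.0, 2.0 / 3.0, 1.0 / 6.0)
    return sum(w * f(mu + sigma * x) for w, x in zip(weights, nodes))

# E[R^2] for R ~ N(2, 0.5^2) is mu^2 + sigma^2 = 4.25.
expected = collocate_normal(model, 2.0, 0.5)
```

Three solver runs versus the 10,000 of a Monte Carlo sweep is exactly the precision/cost trade-off the abstract points to.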

    The Normative Implication of the B Corp Movement in the Business and Human Rights Context (abstract)

    Over the past decades, issues of corporate accountability and social responsibility have risen to the forefront of international debate. The U.N. Guiding Principles on Business and Human Rights (Guiding Principles), endorsed by the U.N. HRC in June 2011, lay out authoritatively the state duty to protect and the corporate responsibility to respect human rights. In an effort to operationalize the Guiding Principles, the U.N. Working Group on Business and Human Rights has called on all states to develop a National Action Plan (NAP) regarding domestic implementation of the Guiding Principles. A key first step in the creation of a NAP is the completion of a national baseline assessment, a taking stock of the current conditions affecting the protection and promotion of human rights by the state and businesses alike. With over twenty-five countries now committed to the creation of a NAP, it is increasingly important to evaluate the existing corporate landscape, specifically structures that claim to be socially and ethically motivated. The B Corp movement began in 2006, through the work of the California-based non-profit B-Lab. A B Corp is a business certified by B-Lab as committed to creating and supporting social and environmental rights. The B Corp movement has grown in size and stature, spreading into over thirty countries and garnering a reputation for excellence. Boosts to the movement have recently come from the certification of large multinational companies, and the interest of others that followed. As the B Corp movement continues to proliferate, its normative value in the business and human rights field merits analysis. What are the normative implications of the B Corp movement? Is it a tool that should be embraced by business and human rights activists, or one that undermines the movement by enabling corporations to claim an inability to take into account ethical considerations without adoption of a special corporate form?