75 research outputs found
High-density functional diffuse optical tomography based on frequency-domain measurements improves image quality and spatial resolution
Genetic Programming + Unfolding Embryology in Automated Layout Planning
Automated layout planning aims at the implementation of computational methods for the generation and optimization of floor plans, considering the spatial configuration and the assignment of activities. Sophisticated strategies such as Genetic Algorithms have been implemented as heuristics for finding good solutions. However, the generative forces that derive from social structures have often been neglected. This research aims to illustrate that data encoding a layout's social and cultural generative forces can be incorporated into an evolutionary system for the design of residential layouts. For that purpose, a co-operative system was created, composed of a Genetic Programming algorithm and an agent-based unfolding embryology procedure that assigns activities to the spaces generated by the GP algorithm. The assignment of activities is a recursive process that follows instructions encoded as permeability graphs. Furthermore, the Ranking Sum Fitness evaluation method is proposed and applied to achieve multi-objective optimization. Its efficiency is tested against the Weighted-Sum Fitness function. The system's results, both numerical and spatial, are compared to those of a conventional evolutionary approach. This comparison showed that, in general, the proposed system can yield better solutions.
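The Ranking Sum Fitness idea described above can be sketched briefly: each candidate is ranked separately on every objective, and its fitness is the sum of its ranks, which avoids the scale-mixing problem of a weighted sum. This is an illustrative reconstruction under stated assumptions (lower score = better, ranks summed directly); the objective names and values are hypothetical, not from the paper.

```python
# Hypothetical sketch of a rank-sum multi-objective fitness: rank each
# candidate per objective, then sum its ranks (lower total = fitter).

def ranking_sum_fitness(objective_values):
    """objective_values: one tuple of objective scores per candidate
    (lower score = better on every objective).
    Returns a list of rank sums, one per candidate."""
    n_candidates = len(objective_values)
    n_objectives = len(objective_values[0])
    rank_sums = [0] * n_candidates
    for j in range(n_objectives):
        # sort candidate indices by their score on objective j
        order = sorted(range(n_candidates), key=lambda i: objective_values[i][j])
        for rank, i in enumerate(order):
            rank_sums[i] += rank
    return rank_sums

# Three candidate layouts scored on two illustrative objectives,
# e.g. (unused area, corridor length):
scores = [(4.0, 10.0), (2.0, 12.0), (3.0, 8.0)]
print(ranking_sum_fitness(scores))  # -> [3, 2, 1]
```

Because only orderings matter, the method needs no per-objective weights or normalization, which is one plausible reason to compare it against a Weighted-Sum fitness.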
Frequency domain high density diffuse optical tomography for functional brain imaging
Measurements of dynamic near-infrared (NIR) light attenuation across the human head together with model-based image reconstruction algorithms allow the recovery of three-dimensional spatial brain activation maps. Previous studies using high-density diffuse optical tomography (HD-DOT) systems have reported improved image quality over sparse arrays. Modulated NIR light, known as Frequency Domain (FD) NIR, enables measurements of phase shift along with amplitude attenuation.
It is hypothesised that utilizing these two complementary data sets (phase and amplitude) for brain activity detection will improve reconstructed image quality within HD-DOT. However, parameter recovery in DOT is computationally expensive, especially when FD-HD measurements are required over a large and complex volume, as in the case of functional brain imaging. Therefore, computational tools for modelling light propagation (the forward model) and for parameter recovery (the inverse problem) have been developed to enable FD-HD-DOT.
The forward model, within a diffusion approximation-based finite-element modelling framework, is accelerated by employing parallelization, achieving a 10-fold speed increase when GPU architectures are available while maintaining high accuracy. For a very high-resolution finite-element model of the adult human head with ∼600,000 nodes, light propagation can be calculated in ∼0.25 s per excitation source. Additionally, a framework for a sparse formulation of the inverse model, incorporating parallel computing, is proposed, achieving a 10-fold speed increase and a 100-fold improvement in memory efficiency whilst maintaining reconstruction quality.
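At its core, a diffusion-based forward model of this kind reduces to solving one large sparse linear system per excitation source, so the system matrix can be factorized once and reused across sources. The sketch below illustrates that pattern with a toy 1-D finite-difference operator standing in for the paper's finite-element head model; the mesh size, absorption value, and source placement are all illustrative assumptions.

```python
# Minimal sketch of the sparse solve at the heart of a diffusion-based
# DOT forward model: assemble a sparse operator A and solve A @ phi = q
# once per source. The 1-D operator below is a toy stand-in for a
# finite-element head mesh (the paper's mesh has ~600,000 nodes).
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 1000                       # toy mesh nodes (illustrative)
mu_a = 0.01                    # absorption term (illustrative value)

# Tridiagonal stand-in for (-div(kappa grad) + mu_a) after discretization
main = np.full(n, 2.0 + mu_a)
off = np.full(n - 1, -1.0)
A = sp.diags([off, main, off], [-1, 0, 1], format="csc")

q = np.zeros(n)                # one point source per excitation
q[n // 2] = 1.0

# Factorize once, reuse for every source: this reuse (and running the
# per-source solves in parallel) is where the speedups come from.
solve = spla.factorized(A)
phi = solve(q)
print(phi.shape)               # fluence at every mesh node
```

In frequency domain the same structure holds with a complex-valued matrix (an i*omega/c term added to the diagonal), so phase and amplitude come from one complex solve per source.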
Finally, to evaluate image reconstruction with and without the additional phase information, point spread functions have been simulated across a whole-scalp field of view in 24 subject-specific anatomical models using an experimentally derived noise model. The addition of phase information has been shown to improve image quality, reducing localization error by up to 59% and effective resolution by up to 21%, and improving depth penetration by up to 5 mm, compared to using the intensity attenuation measurements alone. In addition, experimental data collected during a retinotopic experiment reveal that the phase data contain unique information about brain activity and enable images to be resolved for deeper brain regions.
The irreducible vectors of a lattice: Some theory and applications
The main idea behind lattice sieving algorithms is to reduce a sufficiently large number of lattice vectors with each other so that a set of short enough vectors is obtained. It is therefore natural to study vectors which cannot be reduced. In this work we give a concrete definition of an irreducible vector and study the properties of the set of all such vectors. We show that the set of irreducible vectors is a subset of the set of Voronoi relevant vectors and study its properties. For extremal lattices this set may contain as many as 2^n vectors, which leads us to define the notion of a complete system of irreducible vectors, whose size can be upper-bounded by the kissing number. One of our main results shows that modified heuristic sieving algorithms heuristically approximate such a set (modulo sign). We provide experiments in low dimensions which support this theory. Finally we give some applications of this set in the study of lattice problems such as SVP, SIVP and CVPP. The introduced notions, as well as various results derived along the way, may provide further insights into lattice algorithms and motivate new research into understanding these algorithms better.
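The pairwise reduction step that sieving builds on, and that the notion of irreducibility negates, is easy to state concretely: a vector v is reduced by w when v - w or v + w is strictly shorter than v. The sketch below tests reducibility of a vector against a finite list; this list-relative check is an illustrative simplification of the paper's lattice-wide definition.

```python
# Illustrative sketch of the pairwise reduction test behind sieving:
# v is reducible if some other vector w shortens it via v -> v ± w.
import numpy as np

def is_reducible(v, vectors):
    """True if some w in `vectors` (w != v) strictly shortens v."""
    nv = np.dot(v, v)
    for w in vectors:
        if np.array_equal(w, v):
            continue
        for s in (+1, -1):
            u = v + s * w
            if np.dot(u, u) < nv:
                return True
    return False

# In Z^2, (1, 1) is reduced by (1, 0) down to (0, 1), while the unit
# vector (1, 0) cannot be shortened by the others (ties don't count).
L = [np.array([1, 0]), np.array([0, 1]), np.array([1, 1])]
print(is_reducible(np.array([1, 1]), L))  # True
print(is_reducible(np.array([1, 0]), L))  # False
```

A sieve keeps applying such reductions across a large list; vectors that survive every such test are exactly the "irreducible" survivors the abstract studies.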
On POD analysis of PIV measurements applied to mixing in a stirred vessel with a shear thinning fluid
The P.O.D. technique is applied to 2D P.I.V. data on the hydrodynamics of a mixing tank with a Rushton turbine and a shear thinning fluid. The classical eigenvalue spectrum is presented, and phase portraits of the P.O.D. coefficients are plotted and analyzed in terms of trailing vortices. A spectrum of the dissipation rate of kinetic energy is introduced and discussed. Length scales associated with each P.O.D. mode are proposed.
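The snapshot P.O.D. used on P.I.V. fields like these is commonly computed with an SVD: stack the velocity snapshots as columns, subtract the mean field, and decompose the fluctuations. A minimal sketch on synthetic data (the array shapes and random fields are assumptions, not the paper's measurements):

```python
# Minimal snapshot-POD sketch via SVD: singular values squared give the
# (energy) eigenvalue spectrum, columns of U are spatial modes, and the
# scaled rows of Vt are the temporal coefficients whose phase portraits
# can be analyzed. Synthetic data stand in for the PIV velocity fields.
import numpy as np

rng = np.random.default_rng(0)
n_points, n_snapshots = 500, 64          # illustrative sizes
snapshots = rng.standard_normal((n_points, n_snapshots))

mean_field = snapshots.mean(axis=1, keepdims=True)
fluct = snapshots - mean_field           # fluctuating part only

U, s, Vt = np.linalg.svd(fluct, full_matrices=False)
energy = s**2 / np.sum(s**2)             # normalized eigenvalue spectrum
modes = U                                # spatial POD modes (columns)
coeffs = np.diag(s) @ Vt                 # temporal coefficients a_k(t)

print(energy[:3].sum())                  # energy fraction in 3 leading modes
```

Plotting one coefficient row against another (e.g. a_1 vs a_2) yields the phase portraits the abstract analyzes in terms of trailing vortices.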
Fatal Hemoptysis due to Chronic Cavitary Pulmonary Aspergillosis Complicated by Nontuberculous Mycobacterial Tuberculosis
A 51-year-old man, with a history of severe COPD and bilateral pneumothorax, who was under treatment for a nontuberculous mycobacterial pulmonary infection due to Mycobacterium avium, was admitted due to high-grade fever, weight loss, cough, and production of purulent sputum for almost one month, without any notable improvement despite adequate antibiotic treatment in the outpatient setting. A CT scan revealed multiple consolidations, fibrosis, scarring, and cavitary lesions in both upper lobes, with new shadows inside them that proved to be fungus balls. Aspergillus flavus was isolated in three sputum samples, a diagnosis of chronic cavitary pulmonary aspergillosis was made, and treatment with intravenous amphotericin B was started. An initial clinical improvement was noted, and a first episode of minor hemoptysis was treated with conservative measures. Unfortunately, a second, major episode of hemoptysis occurred and he died almost immediately. Aspergilloma is defined as the presence of a fungus ball inside a preexisting pulmonary cavity or dilated airway and is one of the clinical conditions associated with the clinical spectrum of pulmonary colonization. Tuberculosis is the most common underlying disease, and hemoptysis is the most common symptom. Antifungal antibiotics, surgical interventions, bronchial artery embolization, and intracavitary infusion of antibiotics have been proposed, though not always with adequate success.
Sieve, Enumerate, Slice, and Lift: Hybrid Lattice Algorithms for SVP via CVPP
Motivated by recent results on solving large batches of closest vector problem (CVP) instances, we study how these techniques can be combined with lattice enumeration to obtain faster methods for solving the shortest vector problem (SVP) on high-dimensional lattices.
Theoretically, under common heuristic assumptions we show how to solve SVP in dimension with a cost proportional to running a sieve in dimension , resulting in a speedup and memory reduction compared to running a full sieve. Combined with techniques from [Ducas, Eurocrypt 2018] we can asymptotically get a total of dimensions for free for solving SVP.
Practically, the main obstacles to observing a speedup in moderate dimensions appear to be that the leading constant in the term is rather small; that the overhead of the (batched) slicer may be large; and that competitive enumeration algorithms rely heavily on aggressive pruning techniques, which appear to be incompatible with our algorithms. These obstacles prevented this asymptotic speedup (compared to full sieving) from being observed in our experiments. However, it could be expected to become visible once optimized CVPP techniques are used in higher-dimensional experiments.
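The slicer primitive these hybrids rely on can be sketched in its simplest form: given a preprocessed list of short lattice vectors, greedily subtract whichever list vector brings the target closer, until none helps. This is only the basic, non-batched, non-randomized variant, an illustrative assumption standing in for the paper's optimized batched slicer.

```python
# Illustrative sketch of a basic (iterative) slicer for CVPP: reduce the
# target t modulo the lattice using a list of short lattice vectors until
# no list vector shortens it further; the residual then lies in (an
# approximation of) the Voronoi cell, and t - residual is a close vector.
import numpy as np

def slice_target(t, short_vectors):
    """Greedily reduce target t using `short_vectors`; returns residual."""
    t = np.array(t, dtype=float)
    improved = True
    while improved:
        improved = False
        for w in short_vectors:
            for s in (+1.0, -1.0):
                u = t + s * w
                if np.dot(u, u) < np.dot(t, t):
                    t, improved = u, True
    return t

# Toy example on Z^2 with its two unit vectors as the "short list":
short = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
res = slice_target([3.4, -2.2], short)
print(res)  # residual; the close lattice vector is [3.4, -2.2] - res
```

Batching many targets through one fixed list amortizes the preprocessing cost, which is the regime the paper's SVP-via-CVPP hybrids exploit.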
The irreducible vectors of a lattice: Some theory and applications
The main idea behind lattice sieving algorithms is to reduce a sufficiently large number of lattice vectors with each other so that a set of short enough vectors is obtained, including a basis of the lattice. It is therefore natural to study vectors which cannot be reduced. In this work we give a concrete definition of an irreducible vector and study the properties of the set of all such vectors. We show that the set of irreducible vectors is a subset of the set of relevant vectors and study its properties. For extremal lattices this set may contain as many as vectors, which leads us to define the notion of a complete system of irreducible vectors, whose size can be upper-bounded by the kissing number. We study properties of this set and observe a close relation to heuristic sieving algorithms. Finally we briefly examine the use of this set in the study of lattice problems such as SVP, SIVP and CVPP. The introduced notions, as well as various results derived along the way, may provide further insights into lattice algorithms and motivate new research into understanding these algorithms better.