Typomorphism of Chlorites of the Sukharinskoe Ore Field
Chlorites from the ores and metasomatites of the skarn-magnetite Sukharinskoe ore field (Gornaya Shoria), which carries superimposed gold-sulfide mineralization, were studied. Two varieties of chlorite were distinguished, metasomatic and veinlet, and data on their typomorphic features are presented. The iron content of metasomatic chlorite was found to depend on the composition of the replaced minerals, and the iron content of all chlorite types was found to increase with distance from the Telbes granitoid massif, which points to a paragenetic link between the hydrothermal mineralization and granitoid magmatism.
The Problem of Social Communications in the Context of Contemporary International Relations
The role of the technological aspect of social communications in contemporary international relations is examined. It is shown that information technologies, by removing communication barriers in both domestic and international life, have made the foreign policy of major states more restrained and responsible.
Influence of Highly Dispersed Fillers on the Thermal and Mechanical Characteristics of Epoxy Composites
Investigation of the thermal stability, flammability, and mechanical strength of epoxy composites when flame retardants are added to the epoxy resin as fillers in a highly dispersed state.
High and Low Molecular Weight Fluorescein Isothiocyanate (FITC)–Dextrans to Assess Blood-Brain Barrier Disruption: Technical Considerations
This note reports how histological preparation techniques influence the extravasation pattern of different molecular sizes of fluorescein isothiocyanate (FITC)–dextrans, which are typically used as markers for blood-brain barrier leakage. By using appropriate preparation methods, false negative results can be minimized. Wistar rats underwent a 2-h middle cerebral artery occlusion and magnetic resonance imaging. After the last imaging scan, Evans blue and FITC–dextrans of 4, 40, and 70 kDa molecular weight were injected. Different histological preparation methods were used. Sites of blood-brain barrier leakage were analyzed by fluorescence microscopy. Extravasation of Evans blue and of the high molecular weight FITC–dextrans (40 and 70 kDa) in the infarcted region could be detected with all preparation methods used. If exposed directly to saline, the signal intensity of these FITC–dextrans decreased. Extravasation of the 4-kDa low molecular weight FITC–dextran could only be detected using freshly frozen tissue sections. Preparations involving paraformaldehyde and sucrose resulted in the 4-kDa FITC–dextran dissolving in these reagents and being washed out, giving the false negative result of no extravasation. FITC–dextrans represent a valuable tool for characterizing altered blood-brain barrier permeability in animal models. Diffusion and washout of low molecular weight FITC–dextran can be avoided by direct immobilization through immediate freezing of the tissue. This pitfall needs to be known to avoid the false impression that there was no extravasation of low molecular weight FITC–dextrans.
Höherdimensionale Modelle zur Segmentierung biologischer Strukturen (Higher-Dimensional Models for the Segmentation of Biological Structures)
Many tasks in medical image processing require the robust segmentation of images. Information on the position and contour of objects allows the subsequent extraction of relevant quantitative information. This task is difficult because current imaging modalities provide multi-dimensional (volumetric, time-variable, and multichannel) images. A newly formulated model is able to segment arbitrarily occurring objects in images of any dimension. Model-based segmentation methods are categorized, which makes it possible to formulate specifications that a model must meet for the robust segmentation of medical images. According to these specifications, a balloon model is introduced. Objects are represented by a simplicial complex. Using mechanical simulations, this model is deformed to adapt to significant structures in an image. For the computation of image influences in single- and multichannel images, subsets of the same dimension as the image space itself are taken into account. The balloon model is combined with a shape-based model: shape knowledge from an automatically generated point distribution model is used to compute directed shape forces. The combination of all forces yields a segmentation even if no initial contour is given. Intersecting simplexes make the contour inconsistent, a frequent problem for active contours; it is solved by methods that detect and correct such intersections, adaptively changing the topology of objects where necessary. Further methods were developed to allow the transfer into clinical routine. The required parameter setting can be trained from an exemplary segmentation. For heterogeneous image sets, more than one exemplary segmentation can be given; an individual parameter set is then computed for each image using global texture features and their similarity to prototype images.
Non-contextual experiments on synthetic image material quantify the quality of segmentations for varying image properties and the dependency of the model on parameter choices. For contextual tests on medical images, usually no valid reference segmentation is known; therefore, a silver-standard method to create synthetic images with realistic textures and contours was developed. As examples, the model was applied to immunohistochemically stained micrographs of neurons, CTs of vertebrae after intervertebral disc prolapse, an MRI sequence of the beating heart, and laryngoscopic color video sequences. The robustness of the segmentations was quantified in all applications.
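The abstract above combines internal tension, inflation pressure, and image forces on the vertices of a deformable contour. A minimal 2D sketch of one such deformation step is given below; the function name, the force weights, and the simple midpoint tension force are assumptions for illustration (the dissertation's balloon model works on simplicial complexes of arbitrary dimension).

```python
import numpy as np

def balloon_step(contour, image_force, pressure=0.1, alpha=0.2, dt=1.0):
    """One deformation step of a 2D active contour (balloon model sketch).

    contour:     (N, 2) array of vertex positions of a closed polygon,
                 ordered counterclockwise.
    image_force: callable mapping (N, 2) positions -> (N, 2) image forces.
    pressure:    inflation force along outward normals.
    alpha:       weight of the internal tension (smoothing) force.
    """
    prev_v = np.roll(contour, 1, axis=0)
    next_v = np.roll(contour, -1, axis=0)

    # Internal force: pull each vertex toward the midpoint of its neighbours.
    internal = 0.5 * (prev_v + next_v) - contour

    # Outward unit normals: rotate the tangent (next - prev) by -90 degrees.
    tangent = next_v - prev_v
    normals = np.stack([tangent[:, 1], -tangent[:, 0]], axis=1)
    normals /= np.maximum(np.linalg.norm(normals, axis=1, keepdims=True), 1e-12)

    # External (image) force sampled at the vertices.
    external = image_force(contour)

    # Combined update: tension + inflation pressure + image influence.
    return contour + dt * (alpha * internal + pressure * normals + external)
```

Iterating this step from a small closed polygon inflates the contour until image forces at object borders balance the pressure; the shape forces from the point distribution model would enter the sum as one more term.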
<title>Automatic parameter setting for balloon models</title>
We describe a "learning from examples" method to automatically adjust the parameters of a balloon model. Our goal is to segment arbitrarily shaped objects in medical images with as little human interaction as possible. For our model, we identified six significant parameters that are adjusted with respect to certain applications. These parameters are computed from one manual segmentation drawn by a physician. (1) The maximal edge length is derived from a polygon approximation of the manual segmentation. (2) The size of the image subset that exerts external influences on edges is set according to the scale of gradients normal to the contour. (3) The offset of the assignment from grey levels to image potentials is adjusted such that the propulsive pressure overcomes image potentials in homogeneous parts of the image. (4) The gain of this assignment is tuned to stop the contour at the border of objects of interest. (5) The strength of the deformation force is computed to balance the contour at edges with ambiguous image information. (6) These parameters are computed for both positive and negative pressure, and the variation that gives the best segmentation result is chosen. The analytically derived adjustments are optimized with a genetic algorithm that evolutionarily reduces the number of misdetected pixels. The method was used on a series of histochemically stained cells; similar segmentation quality was obtained with both manual and automatic parameter setting. We further used the method on laryngoscopic color image sequences, where the manual adjustment of parameters is not feasible even for experts.
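The genetic optimization stage mentioned above can be sketched as a small evolutionary loop over a parameter vector. This is a hedged illustration, not the authors' code: the truncation selection, mutation width, and population size are assumptions, and the fitness callable standing in for the misdetected-pixel count is hypothetical.

```python
import random

def evolve_parameters(fitness, bounds, pop_size=20, generations=30, seed=0):
    """Minimal genetic algorithm tuning a parameter vector (sketch).

    fitness: callable mapping a parameter list -> number of misdetected
             pixels (lower is better).
    bounds:  list of (low, high) ranges, one per parameter.
    """
    rng = random.Random(seed)

    def random_individual():
        return [rng.uniform(lo, hi) for lo, hi in bounds]

    def mutate(ind):
        # Gaussian perturbation, clamped back into the allowed range.
        return [min(hi, max(lo, g + rng.gauss(0.0, 0.1 * (hi - lo))))
                for g, (lo, hi) in zip(ind, bounds)]

    def crossover(a, b):
        # Uniform crossover: each gene taken from either parent.
        return [ga if rng.random() < 0.5 else gb for ga, gb in zip(a, b)]

    population = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness)
        parents = population[: pop_size // 2]  # truncation selection
        children = [mutate(crossover(rng.choice(parents), rng.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return min(population, key=fitness)
```

In the paper's setting, the fitness callable would run the balloon segmentation with the candidate parameter vector and count the pixels that differ from the physician's manual segmentation.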