
    Characterization of a preclinical tumor xenograft model using multiparametric MRI

    Introduction: In small animal studies, multiple imaging modalities can be combined to complement each other in providing information on anatomical structure and function. Non-invasive imaging studies on animal models are used to monitor progressive tumor development, which helps to better understand the efficacy of new medicines and to predict the clinical outcome. The aim was to construct a framework based on a longitudinal, multi-modal, parametric in vivo imaging approach to perform tumor tissue characterization in mice. Materials and Methods: The multi-parametric in vivo MRI dataset consisted of T1-, T2-, diffusion- and perfusion-weighted images. An image set of mice (n=3) imaged weekly for six weeks was used. Multimodal image registration was performed by maximizing mutual information. The tumor region of interest was delineated in weeks 2 to 6. These regions were stacked together, and all modalities combined were used in unsupervised segmentation. Clustering methods such as K-means and fuzzy C-means, together with the blind source separation technique of non-negative matrix factorization (NMF), were tested. Results were visually compared with histopathological findings. Results: Clusters obtained with the K-means and fuzzy C-means algorithms coincided with the observed intensity levels of the T2 and ADC maps. Fuzzy C-means clusters and NMF abundance maps gave the most promising results when compared with histological findings and appear to offer a complementary way to assess the tumor microenvironment. Conclusions: A workflow for multimodal MR parametric map generation, image registration and unsupervised tumor segmentation was constructed. Good segmentation results were achieved, but they require further extensive histological validation.

    Introduction: Animal experiments within preclinical studies are one of the important pillars of scientific research in medical diagnostics. In these studies, experiments are carried out to discover and test new therapeutic methods for treating human diseases. Ovarian cancer is one of the leading causes of cancer-related death, and new, more effective methods are needed to combat this disease more successfully. The time window in which a new therapeutic is administered is a key factor in the success of the investigated therapy, since tumor physiology evolves as the disease progresses. One of the goals of preclinical studies is therefore to monitor the development of the tumor microenvironment and thus determine the optimal time window for administering the developed therapeutic in order to achieve maximum efficacy. Imaging modalities have become an extremely popular research tool in biomedical and pharmacological research owing to their non-invasive nature, and preclinical imaging has several advantages over the traditional approach. In line with research regulations, animals need not be sacrificed at intermediate time points in order to monitor tumor development over a longer period. At the same time, being non-destructive and non-invasive, imaging can provide a molecular and functional description of the studied subject in addition to anatomical information. To achieve the latter, several imaging modalities are typically combined, as they complement one another in providing the desired information. This thesis presents a framework for processing different magnetic resonance modalities of preclinical models with the aim of characterizing tumor tissue.
    Methodology: In the study by Belderbos, Govaerts, Croitor Sava et al. [1], magnetic resonance imaging was used to determine the optimal time window for the successful administration of a newly developed therapeutic. In addition to conventional MR imaging methods (T1- and T2-weighted imaging), perfusion- and diffusion-weighted techniques were also used. Images were acquired weekly over a period of six weeks. The datasets used in the presented work were obtained as part of that study. The processing framework was built in Matlab (MathWorks, version R2019b) and supports both automatic and manual processing of the image data. In the first step, before the parametric maps of the used modalities are generated, the protocol parameters must be extracted from the accompanying text files and the acquired images correctly sorted according to the given anatomy. At this stage the images are also filtered and masked. Filtering improves the ratio between the useful signal (the imaged animal model) and the background, since the scanner used for acquisition is typically subject to various sources of image noise; the non-local means filter from the Matlab image processing library was used. The benefit of masking becomes apparent in the next step, the generation of parametric maps: with a suitably masked subject, the procedure is substantially accelerated because fitting is performed only within the desired region. The parametric maps are produced with the method of nonlinear least squares. By modeling the physical phenomena of the used modalities, the studied animal model is thus described with biological parameters, which complement one another in describing the physiological properties of the model at the level of individual voxels. A key building block in successfully combining the information of the individual modalities is proper registration of the parametric maps. The individual modalities are acquired sequentially, at different times, and scanning all modalities of a single animal takes more than an hour in total. Despite the use of anesthetics, small movements of the animal therefore occur during acquisition; if these movements are not properly accounted for, the joint information of the modalities is misinterpreted. Animal movements within a modality were modeled as rigid transformations, and those between modalities as affine transformations. Image registration is performed with custom Matlab functions or with functions from the open-source image processing framework Elastix. Unsupervised segmentation methods were used to characterize the tumor tissue. The essence of segmentation is the grouping of individual image elements into segments: by the chosen criterion, elements must be sufficiently similar to one another while differing from the elements of other segments. Three methods were chosen: K-means, as one of the simplest; fuzzy C-means, with the advantage of soft clustering; and, finally, non-negative matrix factorization. The latter views the tissue decomposition as a product of typical multi-modal signatures and their abundances for each individual image element. To validate the segmentations produced by these methods, a visual comparison with the results of histopathological analysis was performed.
    Results: Image registration within the individual modalities had a large effect on the generated parametric maps. Because of the long acquisition time of T1-weighted images, animal movement is frequent, and without proper registration it negatively affects the parametric mapping and the subsequent segmentation. The generated maps deviate only slightly from those produced with standard open-source tools. Clusters obtained with the K-means and fuzzy C-means methods coincide well with the intensity-based decompositions of the T2 and ADC maps. Compared with the histological findings, fuzzy C-means and non-negative matrix factorization give the most promising results, and their segmentations complement each other in explaining the tumor microenvironment.
    Conclusion: With the construction of a framework for MR image processing and tumor tissue segmentation, the goal of the master's thesis was achieved. The framework is designed so that other modalities and other animal models can be added at will. The tumor tissue decomposition results are promising, but further comparison with the results of histopathological analysis is needed. A possible upgrade is to improve the robustness of the image registration by using a non-rigid (elastic) transformation model. It would also be worthwhile to test additional unsupervised segmentation methods and compare the results with those presented here.
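
    The thesis implements parametric mapping in Matlab; as a rough, hedged illustration of the nonlinear least-squares step, the Python sketch below fits a mono-exponential T2 decay model per voxel with scipy.optimize.curve_fit. The echo times, image size, noise level, and mask are synthetic assumptions, not the thesis protocol.

```python
# Hedged sketch: per-voxel T2 mapping by nonlinear least squares.
# All data here are synthetic placeholders, not the thesis acquisition.
import numpy as np
from scipy.optimize import curve_fit

def t2_decay(te, s0, t2):
    """Mono-exponential decay S(TE) = S0 * exp(-TE / T2)."""
    return s0 * np.exp(-te / t2)

te = np.array([10.0, 20.0, 40.0, 60.0, 80.0])   # echo times in ms (assumed)
h, w = 32, 32
rng = np.random.default_rng(0)
signal = t2_decay(te, 1000.0, 45.0)[:, None, None] \
         + rng.normal(0, 5, (te.size, h, w))    # multi-echo stack with noise

mask = np.ones((h, w), dtype=bool)              # stand-in for the animal/ROI mask
t2_map = np.full((h, w), np.nan)
for i in range(h):
    for j in range(w):
        if not mask[i, j]:
            continue   # masking skips background voxels and speeds up fitting
        popt, _ = curve_fit(t2_decay, te, signal[:, i, j], p0=(1000.0, 50.0))
        t2_map[i, j] = popt[1]                  # fitted T2 per voxel

print("median fitted T2 (ms):", round(float(np.nanmedian(t2_map)), 1))
```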
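
    The segmentation step stacks the registered parametric maps into a voxel-by-feature matrix and runs the three unsupervised methods on it. Below is a minimal sketch under assumed shapes (random placeholder data, three clusters): K-means and NMF via scikit-learn, and a hand-written fuzzy C-means loop, since scikit-learn does not ship one.

```python
# Sketch of the unsupervised segmentation step: ROI voxels described by
# stacked parametric maps (e.g. T1, T2, ADC, perfusion) are clustered with
# K-means, fuzzy C-means, and NMF. Data are random stand-ins.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
X = np.abs(rng.normal(size=(5000, 4)))   # (n_voxels, n_maps), assumed shape

# --- K-means: hard cluster label per voxel ---
kmeans_labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# --- Fuzzy C-means: soft memberships U of shape (n_clusters, n_voxels) ---
def fuzzy_cmeans(X, c=3, m=2.0, n_iter=100, tol=1e-5, seed=0):
    rng = np.random.default_rng(seed)
    u = rng.random((c, X.shape[0]))
    u /= u.sum(axis=0)                                   # columns sum to 1
    for _ in range(n_iter):
        um = u ** m                                      # fuzzified memberships
        centers = um @ X / um.sum(axis=1, keepdims=True)
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-12
        u_new = 1.0 / d ** (2.0 / (m - 1.0))             # standard FCM update
        u_new /= u_new.sum(axis=0)
        if np.abs(u_new - u).max() < tol:
            return centers, u_new
        u = u_new
    return centers, u

centers, memberships = fuzzy_cmeans(X)

# --- NMF: voxels as nonnegative mixtures of typical multimodal signatures ---
nmf = NMF(n_components=3, init="nndsvda", max_iter=500, random_state=0)
abundances = nmf.fit_transform(X)        # per-voxel abundance maps
signatures = nmf.components_             # typical multimodal profiles

print(kmeans_labels[:5], memberships[:, 0].round(2), abundances[0].round(2))
```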

    Laplacian Mixture Modeling for Network Analysis and Unsupervised Learning on Graphs

    Laplacian mixture models identify overlapping regions of influence in unlabeled graph and network data in a scalable and computationally efficient way, yielding useful low-dimensional representations. By combining Laplacian eigenspace and finite mixture modeling methods, they provide probabilistic or fuzzy dimensionality reductions or domain decompositions for a variety of input data types, including mixture distributions, feature vectors, and graphs or networks. Provable optimal recovery using the algorithm is analytically shown for a nontrivial class of cluster graphs. Heuristic approximations for scalable high-performance implementations are described and empirically tested. Connections to PageRank and community detection in network analysis demonstrate the wide applicability of this approach. The origins of fuzzy spectral methods, beginning with generalized heat or diffusion equations in physics, are reviewed and summarized. Comparisons to other dimensionality reduction and clustering methods for challenging unsupervised machine learning problems are also discussed.
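
    As a hedged sketch of the general idea, not the paper's algorithm: embed graph nodes with the low eigenvectors of the graph Laplacian, then fit a finite mixture model in that eigenspace to obtain soft (fuzzy) node memberships. The planted-partition toy graph and the choice of a Gaussian mixture are illustrative assumptions.

```python
# Hedged sketch: Laplacian eigenspace embedding + finite mixture model
# for fuzzy node memberships on a toy planted-partition graph.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
n, k = 90, 3
blocks = np.repeat(np.arange(k), n // k)   # three ground-truth communities
p = np.where(blocks[:, None] == blocks[None, :], 0.3, 0.02)
A = (rng.random((n, n)) < p).astype(float)
A = np.triu(A, 1); A = A + A.T             # symmetric adjacency, no self-loops

deg = A.sum(axis=1)
L = np.diag(deg) - A                       # combinatorial graph Laplacian
eigvals, eigvecs = np.linalg.eigh(L)       # ascending eigenvalues
embedding = eigvecs[:, 1:k]                # low eigenvectors, skip the constant one

gmm = GaussianMixture(n_components=k, random_state=0).fit(embedding)
soft = gmm.predict_proba(embedding)        # fuzzy membership of each node
print("node 0 memberships:", soft[0].round(3))
```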

    Acoustic data optimisation for seabed mapping with visual and computational data mining

    Oceans cover 70% of Earth’s surface, but little is known about their waters. While the echosounders often used to explore our oceans have developed at a tremendous rate since World War II, the methods used to analyse and interpret the data remain largely the same. These methods are inefficient, time consuming, and often costly when dealing with the large data volumes that modern echosounders produce. This PhD project examines the complexity of the de facto seabed mapping technique by exploring and analysing acoustic data with a combination of data mining and visual analytic methods. First, we test the redundancy issues in multibeam echosounder (MBES) data using the component plane visualisation of a Self-Organising Map (SOM). A total of 16 visual groups were identified among the 132 statistical data descriptors. The optimised MBES dataset had 35 attributes from the 16 visual groups, representing a 73% reduction in data dimensionality. A combined Principal Component Analysis (PCA) + k-means approach was used to cluster both datasets. The cluster results were compared visually and internally validated using four different internal validation methods. Next, we tested two novel approaches in singlebeam echosounder (SBES) data processing and clustering: using visual exploration for outlier detection, and directly clustering the time series of echo returns. Visual exploration identified further outliers that the automatic procedure was unable to find. The SBES data were then clustered directly; the internal validation indices suggested an optimal number of three clusters, consistent with the assumption that the SBES time series represent the subsurface classes of the seabed. Next, the SBES data were joined with the corresponding MBES data by identifying the closest locations between MBES and SBES. Two algorithms, PCA + k-means and fuzzy c-means, were tested and the results visualised. From visual comparison, the cluster boundaries appeared better defined than those obtained from the MBES data alone, indicating that adding SBES did in fact improve the boundary definitions. The cluster results from the analysis chapters were then validated against ground truth data using a confusion matrix and kappa coefficients. For MBES, the classes derived from the optimised data yielded better accuracy than those from the original data. For SBES, direct clustering provided a relatively reliable overview of the underlying classes in the survey area. The combined MBES + SBES data provided by far the best accuracy for mapping, with almost a 10% increase in overall accuracy over the original MBES data. The results are promising for optimising acoustic data and improving the quality of seabed mapping, and these approaches have the potential for significant time and cost savings in the seabed mapping process. Finally, some future directions are recommended, with the consideration that this research could contribute to addressing seabed mapping problems at mapping agencies worldwide.
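
    The thesis repeatedly applies a PCA + k-means pipeline and validates cluster results against ground truth with a confusion matrix and kappa coefficients. A minimal sketch of that pipeline, with a random stand-in for the 35-attribute optimised MBES feature matrix and placeholder ground-truth labels, might look like this:

```python
# Minimal sketch of the PCA + k-means pipeline and kappa validation;
# the feature matrix and ground-truth labels are random stand-ins.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.metrics import confusion_matrix, cohen_kappa_score

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 35))                # 35 optimised attributes, assumed
ground_truth = rng.integers(0, 3, size=2000)   # placeholder sediment classes

Z = StandardScaler().fit_transform(X)          # standardise descriptors
scores = PCA(n_components=5).fit_transform(Z)  # keep leading components
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scores)

# In practice cluster labels must first be matched to ground-truth classes
# (e.g. by majority vote); that step is skipped here for brevity.
print(confusion_matrix(ground_truth, labels))
print("kappa:", cohen_kappa_score(ground_truth, labels))
```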

    Extracting Takagi-Sugeno fuzzy rules with interpretable submodels via regularization of linguistic modifiers

    In this paper, a method for constructing Takagi-Sugeno (TS) fuzzy systems from data is proposed with the objective of preserving TS submodel comprehensibility, in which linguistic modifiers are suggested to characterize the fuzzy sets. A useful property of the proposed linguistic modifiers is that they can broaden the cores of fuzzy sets while contracting the overlaps of adjoining membership functions (MFs) during the identification of fuzzy systems from data. As a result, the identified TS submodels tend to dominate the system behaviors by automatically matching the global model (GM) in the corresponding subareas, which leads to good TS model interpretability while producing distinguishable input space partitioning. However, GM accuracy and model interpretability are conflicting modeling objectives: improving the interpretability of a fuzzy model generally degrades its GM performance, and vice versa. Hence, one challenging problem is how to construct a TS fuzzy model with not only good global performance but also good submodel interpretability. In order to achieve a good tradeoff between GM performance and submodel interpretability, a regularization learning algorithm is presented in which the GM objective function is combined with a local model objective function defined in terms of an extended index of fuzziness of the identified MFs. Moreover, a parsimonious rule base is obtained by adopting a QR decomposition method to select the important fuzzy rules and remove the redundant ones. Experimental studies have shown that the TS models identified by the suggested method possess good submodel interpretability and satisfactory GM performance with parsimonious rule bases. © 2006 IEEE
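
    For readers unfamiliar with TS systems, the sketch below shows plain first-order TS inference: Gaussian membership functions produce rule firing strengths that weight local linear submodels. The paper's specific linguistic modifiers, regularized learning, and QR-based rule selection are not reproduced; all parameters here are illustrative.

```python
# Hedged sketch of first-order Takagi-Sugeno inference: rule firing
# strengths from Gaussian MFs weight local linear submodels y = a*x + b.
# MF and submodel parameters are made-up examples.
import numpy as np

def gauss_mf(x, center, sigma):
    """Gaussian membership function."""
    return np.exp(-0.5 * ((x - center) / sigma) ** 2)

# Two rules on a scalar input, each with its own linear submodel.
centers, sigmas = np.array([-1.0, 1.0]), np.array([1.0, 1.0])
a, b = np.array([0.5, -0.3]), np.array([0.0, 1.0])

def ts_output(x):
    w = gauss_mf(x, centers, sigmas)        # rule firing strengths
    y_local = a * x + b                     # submodel outputs
    return np.sum(w * y_local) / np.sum(w)  # normalized weighted average

print(ts_output(0.2))
```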

    Robust techniques and applications in fuzzy clustering

    This dissertation addresses issues central to fuzzy classification. The sensitivity to noise and outliers of least-squares-minimization-based clustering techniques, such as Fuzzy c-Means (FCM) and its variants, is addressed. In this work, two novel and robust clustering schemes are presented and analyzed in detail. They approach the problem of robustness from different perspectives. The first scheme scales down the FCM memberships of data points based on the distance of the points from the cluster centers. The scaling applied to outliers reduces their membership in true clusters. This scheme, known as mega-clustering, defines a conceptual mega-cluster, which is a collective cluster of all data points but views outliers and good points differently (as opposed to the concept of Dave's Noise Cluster). The scheme is presented and validated with experiments, and similarities with Noise Clustering (NC) are also presented. The other scheme is based on the feasible solution algorithm that implements the Least Trimmed Squares (LTS) estimator. The LTS estimator is known to be resistant to noise and has a high breakdown point. The feasible solution approach also guarantees convergence of the solution set to a global optimum. Experiments show the practicability of the proposed schemes in terms of computational requirements and the attractiveness of their simple frameworks. The issue of validating clustering results has often received less attention than clustering itself. Fuzzy and non-fuzzy cluster validation schemes are reviewed, and a novel methodology for cluster validity using a test of the random position hypothesis is developed. The random position hypothesis is tested against an alternative, clustered hypothesis on every cluster produced by the partitioning algorithm. The Hopkins statistic is used as the basis for accepting or rejecting the random position hypothesis, which is also the null hypothesis in this case. The Hopkins statistic is known to be a fair estimator of randomness in a data set. The concept is borrowed from the clustering tendency domain, and its applicability to validating clusters is shown here. A unique feature selection procedure for use with large, high-dimensional molecular conformational datasets is also developed. The intelligent feature extraction scheme not only helps reduce the dimensionality of the feature space but also helps eliminate contentious issues such as those associated with the labeling of symmetric atoms in the molecule. The feature vector is converted to a proximity matrix and used as input to the fuzzy relational clustering (FRC) algorithm, with very promising results. Results are also validated using several cluster validity measures from the literature. Another application of fuzzy clustering considered here is image segmentation. Image analysis on extremely noisy images is carried out as a precursor to the development of an automated real-time condition state monitoring system for underground pipelines. A two-stage FCM with intelligent feature selection is implemented as the segmentation procedure, and results on a test image are presented. A conceptual framework for automated condition state assessment is also developed.
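
    The Hopkins statistic at the core of the dissertation's validity test admits a short implementation. A hedged sketch of one common formulation (nearest-neighbour distances from uniform random probes versus from sampled data points; the synthetic dataset is an assumption):

```python
# Sketch of the Hopkins statistic, the randomness estimator used in the
# dissertation's cluster-validity test. Values near 0.5 suggest spatial
# randomness; values near 1 suggest clustering tendency.
import numpy as np
from scipy.spatial import cKDTree

def hopkins(X, n_probes=50, seed=0):
    rng = np.random.default_rng(seed)
    tree = cKDTree(X)
    lo, hi = X.min(axis=0), X.max(axis=0)
    # u: distances from uniform random probes to their nearest data point
    probes = rng.uniform(lo, hi, size=(n_probes, X.shape[1]))
    u = tree.query(probes, k=1)[0]
    # w: distances from sampled data points to their nearest *other* point
    idx = rng.choice(len(X), n_probes, replace=False)
    w = tree.query(X[idx], k=2)[0][:, 1]   # k=2: first hit is the point itself
    return u.sum() / (u.sum() + w.sum())

X = np.random.default_rng(1).normal(size=(300, 2))   # synthetic stand-in data
print("Hopkins statistic:", round(hopkins(X), 3))
```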

    The Fuzziness in Molecular, Supramolecular, and Systems Chemistry

    Fuzzy logic is a good model for the human ability to compute with words. It is based on the theory of fuzzy sets. A fuzzy set differs from a classical set because it breaks the Law of the Excluded Middle: an item may belong to a fuzzy set and its complement at the same time, with the same or different degrees of membership. The degree of membership of an item in a fuzzy set can be any real number between 0 and 1. This property enables us to deal with all those statements whose truth is a matter of degree. Fuzzy logic plays a relevant role in the field of Artificial Intelligence because it enables decision-making in complex situations with many intertwined variables. Traditionally, fuzzy logic is implemented through software on a computer or, even better, through analog electronic circuits. Recently, the idea of using molecules and chemical reactions to process fuzzy logic has been promoted. In fact, the molecular world is fuzzy in its essence. The overlapping of quantum states, on the one hand, and the conformational heterogeneity of large molecules, on the other, enable context-specific functions to emerge in response to changing environmental conditions. Moreover, analog input–output relationships, involving not only electrical but also other physical and chemical variables, can be exploited to build fuzzy logic systems. The development of “fuzzy chemical systems” is tracing a new path in the field of artificial intelligence, showing that artificially intelligent systems can be implemented not only through software and electronic circuits but also through solutions of properly chosen chemical compounds. The design of chemical artificial intelligence systems and chemical robots promises to have a significant impact on science, medicine, the economy, security, and wellbeing. Therefore, it is my great pleasure to announce a Special Issue of Molecules entitled “The Fuzziness in Molecular, Supramolecular, and Systems Chemistry.” All researchers who experience the fuzziness of the molecular world or use fuzzy logic to understand chemical complex systems will be interested in this book.
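
    As a minimal numeric illustration of the property described above (not taken from the editorial itself): with the standard fuzzy complement 1 − μ, an item can belong to a set and its complement simultaneously.

```python
# Tiny illustration of the fuzzy-set property described above; the
# membership value is an assumed example.
mu_hot = 0.7                 # degree to which, say, 28 °C belongs to "hot"
mu_not_hot = 1 - mu_hot      # standard fuzzy complement
# Both memberships are nonzero, breaking the Law of the Excluded Middle
# as classically stated: min(mu, 1 - mu) > 0 whenever 0 < mu < 1.
print(mu_hot, mu_not_hot, min(mu_hot, mu_not_hot))
```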

    The Second Hungarian Workshop on Image Analysis: Budapest, June 7-9, 1988.
