Annual SHOT Report 2018
SHOT is affiliated to the Royal College of Pathologists.

All NHS organisations must move away from a blame culture towards a just and learning culture. All clinical and laboratory staff should be encouraged to become familiar with human factors and ergonomics concepts. All transfusion decisions must be made after carefully assessing the risks and benefits of transfusion therapy. Collaboration and co-ordination among staff are vital.
Computed tomography-based assessment of aortic valve calcification and its association with complications after transcatheter aortic valve implantation (TAVI)
Background: Severe aortic valve calcification (AVC) has generally been recognized as a key factor in the occurrence of adverse events after transcatheter aortic valve implantation (TAVI). To date, however, a consensus on a standardized calcium detection threshold for aortic valve calcium quantification in contrast-enhanced computed tomography angiography (CTA) is still lacking. The present thesis aimed at comparing two different approaches for quantifying AVC in CTA scans based on their predictive power for adverse events and survival after a TAVI procedure.
Methods: The extensive dataset of this study included 198 characteristics for each of the 965 prospectively included patients who had undergone TAVI between November 2012 and December 2019 at the German Heart Center Berlin (DHZB). AVC quantification in CTA scans was performed at a fixed Hounsfield Unit (HU) threshold of 850 HU (HU 850 approach) and at a patient-specific threshold, where the HU threshold was set by multiplying the mean luminal attenuation of the ascending aorta by 2 (+100 % HUAorta approach). The primary endpoint of this study consisted of a combination of post-TAVI outcomes (paravalvular leak ≥ mild, implant-related conduction disturbances, 30-day mortality, post-procedural stroke, annulus rupture, and device migration). The Akaike information criterion was used to select variables for the multivariable regression model. Multivariable analysis was carried out to determine the predictive power of the investigated approaches.
Results: Multivariable analyses showed that the fixed 850 HU threshold (calcium volume cut-off 146 mm³) was unable to predict the composite clinical endpoint post-TAVI (OR=1.13, 95 % CI 0.87 to 1.48, p=0.35). In contrast, the +100 % HUAorta approach (calcium volume cut-off 1421 mm³) enabled independent prediction of the composite clinical endpoint post-TAVI (OR=2.0, 95 % CI 1.52 to 2.64, p=9.2×10⁻⁷). No significant difference in the Kaplan-Meier survival analysis was observed for either approach.
Conclusions: The patient-specific calcium detection threshold +100 % HUAorta is more predictive of the post-TAVI adverse events included in the combined clinical endpoint than the fixed HU 850 approach. For the +100 % HUAorta approach, a calcium volume cut-off of 1421 mm³ of the aortic valve had the highest predictive value.
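The two thresholding schemes being compared can be sketched in a few lines. This is a minimal illustration, not the thesis's actual pipeline: the function name, the assumption of a pre-segmented valve region, and the synthetic voxel data are hypothetical; only the two threshold rules (a fixed 850 HU cut-off versus twice the mean luminal attenuation of the ascending aorta) come from the abstract.

```python
import numpy as np

def calcium_volume_mm3(cta_hu, voxel_volume_mm3, mean_aorta_hu=None, fixed_hu=850.0):
    """Calcium volume above a Hounsfield-unit threshold.

    cta_hu          : 3-D array of HU values covering the (pre-segmented) valve region
    voxel_volume_mm3: volume of one voxel in mm^3
    mean_aorta_hu   : mean luminal attenuation of the ascending aorta; if given,
                      the patient-specific '+100 % HUAorta' threshold (2 x mean
                      attenuation) is applied, otherwise the fixed 850 HU threshold
    """
    threshold = 2.0 * mean_aorta_hu if mean_aorta_hu is not None else fixed_hu
    return float(np.count_nonzero(cta_hu >= threshold)) * voxel_volume_mm3
```

For a patient whose ascending aorta averages 450 HU, the patient-specific threshold would be 900 HU; the resulting calcium volume is then compared against the reported cut-off (1421 mm³ for the +100 % HUAorta approach).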
Examples of works for practising staccato technique on the clarinet
The stages of strengthening staccato technique on the clarinet were practised through repertoire studies, including rhythm and nuance exercises that accelerate staccato transitions. The central aim of the study is not staccato practice alone but also an emphasis on the precision of simultaneous finger-tongue coordination. To make staccato practice more productive, etude work was incorporated into the repertoire studies. Careful attention to these exercises, together with the inspiring effect of staccato practice, has added a new dimension to musical identity. Each stage of eight original repertoire studies is described, with each stage intended to reinforce the next performance and technique. The study reports in which areas staccato technique is used and what results were obtained, and plans how notes are shaped through finger and tongue coordination and within what practice discipline this takes place. Reed, notation, diaphragm, fingers, tongue, nuance and discipline were found to form an inseparable whole in staccato technique. A literature review of studies on staccato was carried out; it found that repertoire studies using staccato in clarinet technique are scarce, while the survey of methods found that etudes predominate. Accordingly, exercises for accelerating and strengthening clarinet staccato technique are presented. It was observed that interspersing repertoire work among staccato etudes relaxes the mind and increases willingness. The choice of a suitable reed for staccato practice is also emphasised: a suitable reed was found to increase tongue speed, and choosing the right reed depends on the reed speaking comfortably; if the reed does not support the power of tonguing, a more suitable reed should be selected. Interpreting a piece from beginning to end in staccato can be difficult; in this respect, the study showed that observing the given musical nuances eases tonguing performance. Passing the acquired knowledge and experience on to future generations in a formative way is encouraged, and the study explains how forthcoming works can be worked out and how staccato technique can be mastered, with the aim of resolving staccato technique in a shorter time. It is as important to commit the exercises to memory as it is to teach the fingers their places; the work that emerges as the result of such determination and patience will raise achievement to still higher levels.
Machine Learning Research Trends in Africa: A 30 Years Overview with Bibliometric Analysis Review
In this paper, a critical bibliometric analysis study is conducted, coupled
with an extensive literature survey on recent developments and associated
applications in machine learning research with a perspective on Africa. The
presented bibliometric analysis study consists of 2761 machine learning-related
documents, of which 98% were articles with at least 482 citations published in
903 journals during the past 30 years. Furthermore, the collated documents were
retrieved from the Science Citation Index EXPANDED, comprising research
publications from 54 African countries between 1993 and 2021. The bibliometric
study shows the visualization of the current landscape and future trends in
machine learning research and its application to facilitate future
collaborative research and knowledge exchange among authors from different
research institutions scattered across the African continent.
Identifying and responding to people with mild learning disabilities in the probation service
It has long been recognised that, like many other individuals, people with learning disabilities find their way into the criminal justice system. This fact is not disputed. What has been disputed, however, is the extent to which those with learning disabilities are represented within the various agencies of the criminal justice system and the ways in which the criminal justice system (and society) should address this. Recently, social and legislative confusion over the best way to deal with offenders with learning disabilities and mental health problems has meant that the waters have become even more muddied. Despite current government uncertainty concerning the best way to support offenders with learning disabilities, the probation service is likely to continue to play a key role in the supervision of such offenders. The three studies contained herein aim to clarify the extent to which those with learning disabilities are represented in the probation service, to examine the effectiveness of probation for them and to explore some of the ways in which probation could be adapted to fit their needs.

Study 1 and study 2 showed that around 10% of offenders on probation in Kent appeared to have an IQ below 75, putting them in the bottom 5% of the general population. Study 3 was designed to assess some of the support needs of those with learning disabilities in the probation service, finding that many of the materials used by the probation service are likely to be too complex for those with learning disabilities to use effectively. To address this, a model for service provision is tentatively suggested. This is based on the findings of the three studies and a pragmatic assessment of what the probation service is likely to be capable of achieving in the near future.
Learning disentangled speech representations
A variety of informational factors are contained within the speech signal, and a single short recording of speech reveals much more than the spoken words. The best method to extract and represent informational factors from the speech signal ultimately depends on which informational factors are desired and how they will be used. In addition, some methods will capture more than one informational factor at the same time, such as speaker identity, spoken content, and speaker prosody.
The goal of this dissertation is to explore different ways to deconstruct the speech signal into abstract representations that can be learned and later reused in various speech technology tasks. This task of deconstructing, also known as disentanglement, is a form of distributed representation learning. As a general approach to disentanglement, there are some guiding principles that elaborate what a learned representation should contain as well as how it should function. In particular, learned representations should contain all of the requisite information in a more compact manner, be interpretable, remove nuisance factors of irrelevant information, be useful in downstream tasks, and independent of the task at hand. The learned representations should also be able to answer counter-factual questions.
In some cases, learned speech representations can be re-assembled in different ways according to the requirements of downstream applications. For example, in a voice conversion task, the speech content is retained while the speaker identity is changed. And in a content-privacy task, some targeted content may be concealed without affecting how surrounding words sound. While there is no single-best method to disentangle all types of factors, some end-to-end approaches demonstrate a promising degree of generalization to diverse speech tasks.
This thesis explores a variety of use-cases for disentangled representations including phone recognition, speaker diarization, linguistic code-switching, voice conversion, and content-based privacy masking. Speech representations can also be utilised for automatically assessing the quality and authenticity of speech, such as automatic MOS ratings or detecting deep fakes. The meaning of the term "disentanglement" is not well defined in previous work, and it has acquired several meanings depending on the domain (e.g. image vs. speech). Sometimes the term "disentanglement" is used interchangeably with the term "factorization". This thesis proposes that disentanglement of speech is distinct, and offers a viewpoint of disentanglement that can be considered both theoretically and practically.
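The recombination idea behind the voice-conversion use-case can be illustrated schematically. The encoder and decoder below are placeholders, not the thesis's models (a real system would use trained neural networks); the sketch only shows how separately held content and speaker codes can be re-paired so that one utterance's words are rendered in another speaker's voice.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(utterance):
    # Hypothetical encoder: here we simply split a feature vector into a
    # "content" half and a "speaker" half to stand in for disentangled codes.
    return {"content": utterance[:64], "speaker": utterance[64:]}

def decode(content, speaker):
    # Hypothetical decoder: re-assemble the two factors into one representation.
    return np.concatenate([content, speaker])

utt_a = rng.normal(size=128)  # source utterance (content to keep)
utt_b = rng.normal(size=128)  # reference utterance (target speaker)

a, b = encode(utt_a), encode(utt_b)
converted = decode(a["content"], b["speaker"])  # A's words, B's voice
```

The same re-pairing view covers content-privacy masking: a targeted content code is replaced while the surrounding codes are left untouched.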
Study of myocardial reverse remodelling through proteomic analysis of the myocardium and pericardial fluid
Valve replacement remains as the standard therapeutic option for aortic
stenosis patients, aiming at abolishing pressure overload and triggering
myocardial reverse remodeling. However, despite the instant hemodynamic
benefit, not all patients show complete regression of myocardial hypertrophy,
being at higher risk for adverse outcomes, such as heart failure. The current
comprehension of the biological mechanisms underlying an incomplete reverse
remodeling is far from complete. Furthermore, definitive prognostic tools and
ancillary therapies to improve the outcome of the patients undergoing valve
replacement are missing. To help abridge these gaps, a combined myocardial
(phospho)proteomics and pericardial fluid proteomics approach was followed,
taking advantage of human biopsies and pericardial fluid collected during
surgery and whose origin anticipated a wealth of molecular information
contained therein.
From over 1800 and 750 proteins identified, respectively, in the myocardium
and in the pericardial fluid of aortic stenosis patients, a total of 90 dysregulated
proteins were detected. Gene annotation and pathway enrichment analyses,
together with discriminant analysis, are compatible with a scenario of increased
pro-hypertrophic gene expression and protein synthesis, defective ubiquitin-proteasome system activity, proclivity to cell death (potentially fed by
complement activity and other extrinsic factors, such as death receptor
activators), acute-phase response, immune system activation and fibrosis.
Specific validation of some targets through immunoblot techniques and
correlation with clinical data pointed to complement C3 β chain, Muscle Ring
Finger protein 1 (MuRF1) and the dual-specificity Tyr-phosphorylation
regulated kinase 1A (DYRK1A) as potential markers of an incomplete
response. In addition, kinase prediction from phosphoproteome data suggests
that the modulation of casein kinase 2, the family of IκB kinases, glycogen
synthase kinase 3 and DYRK1A may help improve the outcome of patients
undergoing valve replacement. Particularly, functional studies with DYRK1A+/-
cardiomyocytes show that this kinase may be an important target to treat
cardiac dysfunction, given that the mutant cells presented a different response to stretch and a reduced ability to develop force (active tension).
This study opens many avenues in post-aortic valve replacement reverse
remodeling research. In the future, gain-of-function and/or loss-of-function
studies with isolated cardiomyocytes or with animal models of aortic banding-debanding will help disclose the efficacy of targeting the surrogate therapeutic
targets. Besides, clinical studies in larger cohorts will bring definitive proof of
complement C3, MuRF1 and DYRK1A prognostic value.
Modeling Uncertainty for Reliable Probabilistic Modeling in Deep Learning and Beyond
This thesis is framed at the intersection between modern Machine Learning techniques, such as Deep Neural Networks, and reliable probabilistic modeling. In many machine learning applications, we care not only about the prediction made by a model (e.g. this lung image presents cancer) but also about how confident the model is in making this prediction (e.g. this lung image presents cancer with 67% probability). In such applications, the model assists the decision-maker (in this case a doctor) in making the final decision. As a consequence, the probabilities provided by a model must reflect the true proportions present in the set to which those probabilities are assigned; otherwise, the model is useless in practice. When this happens, we say that a model is perfectly calibrated.

This thesis explores three ways to provide more calibrated models. First, it is shown how to implicitly calibrate models that are decalibrated by data augmentation techniques, by introducing a cost function that resolves this decalibration, taking as a starting point ideas derived from decision making with Bayes' rule. Second, it shows how to calibrate models using a post-calibration stage implemented with a Bayesian neural network. Finally, based on the limitations observed in the Bayesian neural network, which we hypothesize stem from a misspecified prior, a new stochastic process is introduced that serves as a prior distribution in a Bayesian inference problem.

Maroñas Molano, J. (2022). Modeling Uncertainty for Reliable Probabilistic Modeling in Deep Learning and Beyond [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/181582
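Calibration in the sense described above is commonly quantified with a binned expected calibration error (ECE): the confidence-vs-accuracy gap averaged over confidence bins. The sketch below is a generic illustration of that standard definition, not code from the thesis.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Binned ECE: bin-size-weighted average gap between mean confidence
    and empirical accuracy within each confidence bin."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(confidences[mask].mean() - correct[mask].mean())
            ece += mask.mean() * gap  # weight by fraction of samples in bin
    return ece
```

A model that says "80% cancer" and is right 80% of the time in that bin contributes zero gap; a model that says "100%" but is right half the time contributes a gap of 0.5.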
Immunoinformatics and Computer-Aided Drug Design as New Approaches against Emerging and Re-Emerging Infectious Diseases
Infectious diseases are caused by small pathogenic organisms transmitted from person to person by direct or indirect contact. In recent years, newly emerging and re-emerging viral infectious diseases have become greater threats to human health and global stability. The advent of numerous mathematical and computational tools now allows investigators to anticipate epidemics, characterize specific pathogens, and identify potential targets for vaccine and drug design. Such computational approaches have opened the way to a better understanding of the pathogenesis, diagnosis, and treatment options of newly originated emerging and re-emerging infectious diseases; in particular, immunoinformatics plays a crucial role in the rapid discovery of novel peptide and vaccine candidates against different viruses. Immunoinformatics and computer-aided drug design (CADD)-based models of biomolecules offer reasonable and rapid approaches for the modern discovery of effective viral therapies. The essence of this review is to give insight into these multiple approaches, not only for the detection of infectious diseases but also for how researchers can select appropriate computational models for the discovery of viral therapeutics.
Development of in-vitro in-silico technologies for modelling and analysis of haematological malignancies
Worldwide, haematological malignancies are responsible for roughly 6% of all the cancer-related deaths. Leukaemias are one of the most severe types of cancer, as only about 40% of the patients have an overall survival of 10 years or more. Myelodysplastic Syndrome (MDS), a pre-leukaemic condition, is a blood disorder characterized by the presence of dysplastic, irregular, immature cells, or blasts, in the peripheral blood (PB) and in the bone marrow (BM), as well as multi-lineage cytopenias.
We have created a detailed, lineage-specific, high-fidelity in-silico erythroid model that incorporates known biological stimuli (cytokines and hormones) and a competing diseased haematopoietic population, correctly capturing crucial biological checkpoints (EPO-dependent CFU-E differentiation) and replicating the in-vivo erythroid differentiation dynamics. In parallel, we have also proposed a long-term, cytokine-free 3D cell culture system for primary MDS cells, which was firstly optimized using easily-accessible healthy controls. This system enabled long-term (24-day) maintenance in culture with high (>75%) cell viability, promoting spontaneous expansion of erythroid phenotypes (CD71+/CD235a+) without the addition of any exogenous cytokines. Lastly, we have proposed a novel in-vitro in-silico framework using GC-MS metabolomics for the metabolic profiling of BM and PB plasma, aiming not only to discretize between haematological conditions but also to sub-classify MDS patients, potentially based on candidate biomarkers. Unsupervised multivariate statistical analysis showed clear intra- and inter-disease separation of samples of 5 distinct haematological malignancies, demonstrating the potential of this approach for disease characterization.
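The unsupervised multivariate separation of metabolomic profiles described above is typically done with projections such as principal component analysis. As a hedged sketch under stated assumptions (a synthetic samples-by-metabolites matrix, not the thesis's GC-MS data or its actual statistical workflow), a minimal SVD-based PCA:

```python
import numpy as np

def pca_scores(X, n_components=2):
    """Project samples onto their leading principal components.

    X: samples x features matrix (e.g. metabolite intensities per plasma sample).
    Returns the score matrix (samples x n_components).
    """
    Xc = X - X.mean(axis=0)                # centre each metabolite column
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T        # scores on the first components
```

On a toy matrix with two groups of samples whose metabolite levels differ, the two groups land on opposite sides of the first component, which is the kind of intra- and inter-disease separation the abstract reports.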
The work herein presented paves the way for the development of in-vitro in-silico technologies to better characterize, diagnose, model and target haematological malignancies such as MDS and AML.