Characterization of Multiple Groups of Data
In this paper we propose a new approach for computing characterizations of sets of data by means of partially defined Boolean functions. The main objective is to provide minimal sets of characters that allow the user to discriminate between groups of Boolean data representing individuals described by the presence or absence of characters. Compared to previous approaches, our algorithms are more efficient and are able to compute complete sets of solutions, which may be useful in our underlying application domain of plant biology.
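As a rough illustration of the kind of procedure this abstract describes, the sketch below greedily builds a small set of discriminating attributes for two groups of Boolean vectors. The greedy set-cover formulation and the toy data are assumptions for illustration, not the paper's actual algorithm.

```python
from itertools import product

def greedy_characterization(group_a, group_b):
    """Greedily pick a small set of attribute indices such that every pair
    (a, b), a in group_a and b in group_b, differs on at least one chosen
    attribute. A classic greedy set-cover heuristic, not the paper's method."""
    n_attrs = len(group_a[0])
    pairs = list(product(group_a, group_b))
    uncovered = set(range(len(pairs)))
    chosen = []
    while uncovered:
        # attribute separating the most still-unseparated pairs
        best = max(range(n_attrs),
                   key=lambda j: sum(pairs[i][0][j] != pairs[i][1][j]
                                     for i in uncovered))
        newly = {i for i in uncovered if pairs[i][0][best] != pairs[i][1][best]}
        if not newly:
            raise ValueError("groups contain identical vectors; not separable")
        chosen.append(best)
        uncovered -= newly
    return sorted(chosen)

# toy example: presence/absence of 4 characters for two hypothetical plant groups
g1 = [(1, 0, 1, 0), (1, 1, 1, 0)]
g2 = [(0, 0, 1, 1), (0, 1, 0, 1)]
print(greedy_characterization(g1, g2))  # attribute 0 alone separates the groups
```

Because the greedy heuristic does not guarantee minimality or enumerate all solutions, it only hints at the harder "complete sets of solutions" problem the paper addresses.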
Automated analysis of 16-color polychromatic flow cytometry data maps immune cell populations and reveals a distinct inhibitory receptor signature in systemic sclerosis
Background. The phenotypic profiles of both peripheral blood and tissue-resident immune cells have been linked to the health status of individuals with infectious and autoimmune diseases, as well as cancer. In light of the promising clinical trial results of agents that block the Inhibitory Receptor (IR) Programmed Death 1 (PD-1) axis, novel flow cytometric panels that simultaneously measure multiple IRs on several immune cell subsets could identify the distinct IR signatures to target in combinational therapies for many disease states. Moreover, given the paucity of human samples, larger (14+ color) ‘1-tube’ panels for ex vivo immune cell characterization are of high value in translational studies. Development of fluorescence-based panels offers several advantages over analogous mass cytometric methods, including the ability to sort multiple populations of interest from the sample for further study. However, automated platforms for multi-dimensional single-cell analysis that allow objective and comprehensive population characterization are severely underutilized on data generated from large polychromatic panels. Methods. A 16-color flow cytometry (FCM) panel was developed and optimized for the simultaneous characterization and purification of multiple human immune cell populations on a 4-laser BD FACSAria II cell sorter. FCM data from samples obtained from healthy subjects and individuals with systemic sclerosis (SSc) were loaded into the Cytobank cloud, then compensated and analyzed with the SPADE clustering algorithm. The viSNE algorithm was also employed to compress the data into a 2D map of phenotypic space that was subsequently clustered using SPADE. For comparison, the FCM data were also analyzed manually using FlowJo software. Results.
Our novel 16-color panel recognizes CD3, CD4, CD8, CD45RO, CD25, CD127, CD16, CD56, γδTCR, Vα24, PD-1, LAG-3, CTLA-4, and TIM-3; it also contains a CD1d tetramer and a live/dead dye (with CD19 and CD14 included as a combined dump channel). This panel allows combinational IR signatures to be determined for CD4+ T, CD8+ T, Natural Killer (NK), invariant Natural Killer T (iNKT), and gamma delta (γδ) immune cell subsets within one sample. We successfully identified all subsets of interest using the automated SPADE and viSNE algorithms integrated into Cytobank services, and demonstrated a distinctive phenotype of IR distribution in healthy versus systemic sclerosis subject groups. Conclusions. Methods of automated analysis originally developed for processing multi-dimensional mass cytometry data can be applied to polychromatic FCM datasets and provide robust results, including subset identification and distinct IR signatures in healthy compared with diseased subject groups.
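The pipeline the abstract describes (transform the raw events, then cluster the high-dimensional data without manual gating) can be sketched in miniature. Everything below is a stand-in: synthetic events instead of real FCM data, an arcsinh transform with cofactor 5 (a common cytometry preprocessing choice), and plain k-means in place of SPADE/viSNE.

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical stand-in data: 300 cells x 4 markers of raw fluorescence intensity
events = np.abs(rng.normal(loc=[100.0, 5000.0, 50.0, 2000.0],
                           scale=200.0, size=(300, 4)))

# arcsinh transform with cofactor 5, commonly used for cytometry data
X = np.arcsinh(events / 5.0)

def kmeans(data, k, iters=50, seed=0):
    """Plain k-means as a toy stand-in for SPADE-style clustering."""
    local_rng = np.random.default_rng(seed)
    centers = data[local_rng.choice(len(data), k, replace=False)]
    for _ in range(iters):
        # assign each event to its nearest center
        labels = np.argmin(((data[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = data[labels == j].mean(axis=0)
    return labels, centers

labels, centers = kmeans(X, k=3)
print(np.bincount(labels, minlength=3))  # events per cluster
```

Real SPADE additionally performs density-dependent downsampling and builds a minimum spanning tree over the clusters; this sketch shows only the transform-then-cluster core.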
Archiving and disseminating integrative structure models.
Limitations in the applicability, accuracy, and precision of individual structure characterization methods can sometimes be overcome via an integrative modeling approach that relies on information from all available sources, including all available experimental data and prior models. The open-source Integrative Modeling Platform (IMP) is one piece of software that implements all computational aspects of integrative modeling. To maximize the impact of integrative structures, the coordinates should be made publicly available, as is already the case for structures based on X-ray crystallography, NMR spectroscopy, and electron microscopy. Moreover, the associated experimental data and modeling protocols should also be archived, such that the original results can easily be reproduced. Finally, it is essential that integrative structures are validated as part of their publication and deposition. A number of research groups have already developed software to implement integrative modeling and have generated a number of structures, prompting the formation of an Integrative/Hybrid Methods Task Force. Following the recommendations of this task force, the existing PDBx/mmCIF data representation used for atomic PDB structures has been extended to address the requirements for archiving integrative structural models. This IHM dictionary adds a flexible model representation, including coarse graining, models in multiple states and/or related by time or other order, and multiple input experimental information sources. A prototype archiving system called PDB-Dev (https://pdb-dev.wwpdb.org) has also been created to archive integrative structural models, together with a Python library to facilitate handling of integrative models in PDBx/mmCIF format.
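To give a flavor of the flexible, coarse-grained representation the IHM dictionary adds, the sketch below emits a minimal mmCIF-style loop of sphere beads. The `_ihm_sphere_obj_site` item names are modeled on the IHM dictionary's sphere category but should be treated as approximate; a real PDB-Dev deposition requires many more categories and is best produced with the official Python library rather than hand-built strings.

```python
def coarse_grained_mmcif(spheres):
    """Emit a minimal mmCIF-style loop of coarse-grained sphere beads.
    spheres: iterable of (x, y, z, radius) tuples in angstroms.
    Illustrative fragment only, not a valid deposition file."""
    lines = [
        "data_integrative_model",
        "loop_",
        "_ihm_sphere_obj_site.id",
        "_ihm_sphere_obj_site.Cartn_x",
        "_ihm_sphere_obj_site.Cartn_y",
        "_ihm_sphere_obj_site.Cartn_z",
        "_ihm_sphere_obj_site.object_radius",
    ]
    for i, (x, y, z, r) in enumerate(spheres, 1):
        lines.append(f"{i} {x:.3f} {y:.3f} {z:.3f} {r:.3f}")
    return "\n".join(lines) + "\n"

cif = coarse_grained_mmcif([(0.0, 0.0, 0.0, 5.0), (12.5, 3.0, -1.0, 8.0)])
print(cif)
```

Representing a flexible region as a few spheres of varying radius, rather than atoms, is exactly the kind of multi-scale model the atomic PDB format could not express.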
Achievement of multiple therapeutic targets for cardiovascular disease prevention. Retrospective analysis of real practice in Italy
Background: Pharmacological therapy in patients at high cardiovascular (CV) risk should be tailored to achieve recommended therapeutic targets. Hypothesis: To evaluate the individual global CV risk profile and to estimate the control rates of multiple therapeutic targets in adult outpatients followed in real practice in Italy. Methods: Data extracted from a cross-sectional, national medical database of adult outpatients in real practice in Italy were analyzed for global CV risk assessment and for rates of control of major CV risk factors, including hypertension, dyslipidemia, diabetes, and obesity. CV risk characterization was based on the European SCORE equation, and the study population was stratified into 3 groups: low risk ( 40 (males)/>50 (females) mg/dL (OR: 0.926, 95% CI: 0.895–0.958), triglycerides <160 mg/dL (OR: 0.925, 95% CI: 0.895–0.957), and BMI <25 kg/m2 (OR: 0.888, 95% CI: 0.851–0.926), even after correction for diabetes, renal function, pharmacological therapy, and referring physicians (P < 0.001). Conclusions: Despite low prevalence and optimal medical therapy, individuals with high to very high SCORE risk did not achieve recommended therapeutic targets in real-world practice.
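Odds ratios with 95% confidence intervals like those reported above are conventionally derived from a 2×2 table with the standard Wald formula. The sketch below shows the arithmetic; the counts are invented for illustration and bear no relation to the study's data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
                    target met   target missed
        exposed         a             b
        unexposed       c             d
    All four cells must be nonzero for the log-based CI."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log odds ratio
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# made-up counts purely to exercise the formula
or_, lo, hi = odds_ratio_ci(40, 60, 55, 45)
print(f"OR {or_:.3f} (95% CI {lo:.3f}-{hi:.3f})")
```

An OR below 1 with a CI excluding 1, as in the abstract's figures, indicates the factor is associated with lower odds of the outcome after the listed adjustments.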
Identification of Sensory Processing and Integration Symptom Clusters: A Preliminary Study
Rationale. This study explored subtypes of sensory processing disorder (SPD) by examining the clinical presentations of cluster groups that emerged from the scores of children with SPD on the Sensory Processing 3-Dimension (SP-3D) Inventory. Method. A nonexperimental design was used, involving data extraction from the records of 252 children with SPD. Exploratory cluster analyses were conducted with scores from the SP-3D Inventory, which measures sensory overresponsivity (SOR), sensory underresponsivity (SUR), sensory craving (SC), postural disorder, dyspraxia, and sensory discrimination. Scores related to adaptive behavior, social-emotional functioning, and attention among children with different sensory modulation patterns were then examined and compared. Results. Three distinct cluster groups emerged from the data: High SOR only, High SUR with SOR, and High SC with SOR. All groups showed low performance within multiple domains of adaptive behavior. Atypical behaviors associated with social-emotional functioning and attention varied among the groups. Implications. The SP-3D Inventory shows promise as a tool for identifying patterns of sensory dysfunction and for guiding intervention. Better characterization can improve intervention precision and facilitate homogeneous samples for research.
Logical characterization of groups of data: a comparative study
This paper presents an approach for characterizing groups of data represented by Boolean vectors. The purpose is to find minimal sets of attributes that allow one to distinguish data from different groups. In this work, we precisely define the multiple characterization problem and the algorithms that can be used to solve its different variants. Our data characterization approach is related to Logical Analysis of Data, and we thus propose a comparison between the two methodologies. This paper also precisely studies the properties of the computed solutions with regard to the topological properties of the instances. Experiments are conducted on real biological data.
Uncertainty in Signals of Large-Scale Climate Variations in Radiosonde and Satellite Upper-Air Temperature Datasets
There is no single reference dataset of long-term global upper-air temperature observations, although several groups have developed datasets from radiosonde and satellite observations for climate-monitoring purposes. The existence of multiple data products allows for exploration of the uncertainty in signals of climate variations and change. This paper examines eight upper-air temperature datasets and quantifies the magnitude and uncertainty of various climate signals, including the stratospheric quasi-biennial oscillation (QBO) and tropospheric ENSO signals, stratospheric warming following three major volcanic eruptions, the abrupt tropospheric warming of 1976–77, and multidecadal temperature trends. Uncertainty estimates are based both on the spread of signal estimates from the different observational datasets and on the inherent statistical uncertainties of the signal in any individual dataset.

The large spread among trend estimates suggests that using multiple datasets to characterize large-scale upper-air temperature trends gives a more complete characterization of their uncertainty than reliance on a single dataset. For other climate signals, there is value in using more than one dataset, because signal strengths vary. However, the purely statistical uncertainty of the signal in individual datasets is large enough to effectively encompass the spread among datasets. This result supports the notion of an 11th climate-monitoring principle, augmenting the 10 principles that have now been generally accepted (although not generally implemented) by the climate community. This 11th principle calls for monitoring key climate variables with multiple, independent observing systems for measuring the variable, and multiple, independent groups analyzing the data.
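The two uncertainty measures this abstract contrasts, the spread of trend estimates across datasets and the statistical uncertainty of the trend within any one dataset, can be sketched on synthetic data. The eight "datasets", the trend size, and the noise level below are all made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
years = np.arange(1979, 2005, dtype=float)

# hypothetical stand-ins for eight upper-air temperature anomaly records:
# a shared 0.1 K/decade trend plus independent dataset noise
true_slope = 0.01  # K per year
datasets = [true_slope * (years - years[0]) + rng.normal(0, 0.15, len(years))
            for _ in range(8)]

def ols_trend(y, t):
    """Least-squares trend and its standard error for one dataset."""
    A = np.vstack([t - t.mean(), np.ones_like(t)]).T
    coef, res, *_ = np.linalg.lstsq(A, y, rcond=None)
    sigma2 = res[0] / (len(t) - 2)              # residual variance
    se = np.sqrt(sigma2 / ((t - t.mean()) ** 2).sum())
    return coef[0], se

trends, ses = zip(*(ols_trend(y, years) for y in datasets))
spread = np.std(trends, ddof=1)  # inter-dataset spread of trend estimates
print(f"mean trend {np.mean(trends) * 10:.3f} K/decade, "
      f"spread {spread * 10:.3f}, median statistical SE {np.median(ses) * 10:.3f}")
```

Comparing `spread` with the per-dataset `se` values mirrors the paper's question of whether the statistical uncertainty in one record already encompasses the disagreement among records.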