
    High efficiency transfection of thymic epithelial cell lines and primary thymic epithelial cells by Nucleofection

    Thymic epithelial cells (TECs) are required for the development and differentiation of T cells and are sufficient for the positive and negative selection of developing T cells. Although TECs play a critical role in T cell biology, simple, efficient and readily scalable methods for the transfection of TEC lines and primary TECs have not been described. We tested the efficiency of Nucleofection for the transfection of four different mouse thymic epithelial cell lines derived from cortical or medullary epithelium. We also tested primary mouse thymic epithelial cells isolated at fetal and postnatal stages. We found that Nucleofection was highly efficient for the transfection of thymic epithelial cells, with transfection efficiencies of 30-70% for the cell lines and 15-35% for primary TECs, with low levels of cell death. Efficient transfection by Nucleofection can be performed with established cortical and medullary thymic epithelial cell lines as well as with primary TECs isolated from E15.5 fetal thymus or from postnatal day 3 or day 30 thymus tissue. The high efficiency of Nucleofection for TEC transfection will enable the use of TEC lines in high-throughput transfection studies and simplifies the transfection of primary TECs for in vitro or in vivo analysis.

    Database Learning: Toward a Database that Becomes Smarter Every Time

    In today's databases, previous query answers rarely benefit the answering of future queries. For the first time, to the best of our knowledge, we change this paradigm in an approximate query processing (AQP) context. We make the following observation: the answer to each query reveals some degree of knowledge about the answer to another query, because both answers stem from the same underlying distribution that produced the entire dataset. Exploiting and refining this knowledge should allow us to answer queries more analytically, rather than by reading enormous amounts of raw data. Also, processing more queries should continuously enhance our knowledge of the underlying distribution, and hence lead to increasingly faster response times for future queries. We call this novel idea---learning from past query answers---Database Learning. We exploit the principle of maximum entropy to produce answers that are, in expectation, guaranteed to be more accurate than existing sample-based approximations. Empowered by this idea, we build a query engine on top of Spark SQL, called Verdict. We conduct extensive experiments on real-world query traces from a large customer of a major database vendor. Our results demonstrate that Verdict supports 73.7% of these queries, speeding them up by up to 23.0x for the same accuracy level compared to existing AQP systems.
    Comment: This manuscript is an extended report of the work published in the ACM SIGMOD conference 201
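The core intuition above---that two answers drawn from the same underlying distribution can sharpen each other---can be illustrated with a much simpler mechanism than the paper's maximum-entropy machinery. The sketch below (a toy illustration, not Verdict's actual algorithm; all names are hypothetical) combines a fresh sample-based estimate with a prior derived from a past, overlapping query via inverse-variance weighting, which always yields a tighter estimate than either input alone:

```python
def combine(est1, var1, est2, var2):
    """Inverse-variance weighted combination of two unbiased estimates
    of the same quantity; the combined variance is never larger than
    either input variance."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    est = (w1 * est1 + w2 * est2) / (w1 + w2)
    var = 1.0 / (w1 + w2)
    return est, var

# A fresh sample-based answer, as (estimate, variance) ...
fresh = (41.2, 4.0)
# ... and a prior derived from a past, overlapping query's answer.
past = (39.8, 1.0)

est, var = combine(*fresh, *past)
print(round(est, 2), round(var, 2))  # → 40.08 0.8
```

The combined variance (0.8) is smaller than either input's, which is the sense in which each processed query leaves the system "smarter" for the next one.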

    VerdictDB: Universalizing Approximate Query Processing

    Despite 25 years of research in academia, approximate query processing (AQP) has had little industrial adoption. One of the major causes of this slow adoption is the reluctance of traditional vendors to make radical changes to their legacy codebases, and the preoccupation of newer vendors (e.g., SQL-on-Hadoop products) with implementing standard features. Additionally, the few AQP engines that are available are each tied to a specific platform and require users to completely abandon their existing databases---an unrealistic expectation given the infancy of the AQP technology. Therefore, we argue that a universal solution is needed: a database-agnostic approximation engine that will widen the reach of this emerging technology across various platforms. Our proposal, called VerdictDB, uses a middleware architecture that requires no changes to the backend database, and thus, can work with all off-the-shelf engines. Operating at the driver level, VerdictDB intercepts analytical queries issued to the database and rewrites them into another query that, if executed by any standard relational engine, will yield sufficient information for computing an approximate answer. VerdictDB uses the returned result set to compute an approximate answer and error estimates, which are then passed on to the user or application. However, lack of access to the query execution layer introduces significant challenges in terms of generality, correctness, and efficiency. This paper shows how VerdictDB overcomes these challenges and delivers up to 171× speedup (18.45× on average) for a variety of existing engines, such as Impala, Spark SQL, and Amazon Redshift, while incurring less than 2.6% relative error. VerdictDB is open-sourced under the Apache License.
    Comment: Extended technical report of the paper that appeared in Proceedings of the 2018 International Conference on Management of Data, pp. 1461-1476. ACM, 201
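The driver-level rewrite-then-reconstruct idea can be sketched in a few lines. The following is a hypothetical, minimal illustration of the general pattern (not VerdictDB's actual rewriter; the query shapes, table names, and helper functions are all assumptions): an AVG over the full table is rewritten into raw moments over a sample table, from which the middleware reconstructs the answer and a simple error bound:

```python
import math
import re

def rewrite(query, sample_table):
    """Hypothetical sketch of driver-level rewriting: an AVG over the
    full table becomes sum, sum of squares, and row count over a
    pre-built sample table; those moments suffice to reconstruct the
    answer and an error estimate on the middleware side."""
    m = re.fullmatch(r"SELECT AVG\((\w+)\) FROM (\w+)", query)
    col = m.group(1)
    return f"SELECT SUM({col}), SUM({col}*{col}), COUNT(*) FROM {sample_table}"

def approximate_answer(s, ss, n):
    """Approximate mean plus a ~95% normal-approximation error bound,
    computed from the sample's sum, sum of squares, and row count."""
    mean = s / n
    variance = ss / n - mean * mean
    error = 1.96 * math.sqrt(variance / n)
    return mean, error

rewritten = rewrite("SELECT AVG(price) FROM sales", "sales_sample")
print(rewritten)  # → SELECT SUM(price), SUM(price*price), COUNT(*) FROM sales_sample
```

Because the rewritten query is plain SQL, any standard relational engine can execute it unchanged, which is what makes the middleware approach database-agnostic.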

    Antineutrinos from Earth: A reference model and its uncertainties

    We predict geoneutrino fluxes in a reference model based on a detailed description of Earth's crust and mantle and using the best available information on the abundances of uranium, thorium, and potassium inside Earth's layers. We estimate the uncertainties of the fluxes corresponding to the uncertainties of the element abundances. In addition to distance-integrated fluxes, we also provide the differential fluxes as a function of distance from several sites of experimental interest. Event yields at several locations are estimated and their dependence on the neutrino oscillation parameters is discussed. At Kamioka we predict N(U+Th) = 35 ± 6 events per 10^32 proton yr at 100% efficiency, assuming sin²(2θ) = 0.863 and Δm² = 7.3 × 10⁻⁵ eV². The maximal prediction is 55 events, obtained in a model with fully radiogenic production of the terrestrial heat flow.
    Comment: 24 pages, ReVTeX4, plus 7 postscript figures; minor formal changes to match version to be published in PR
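The dependence of the event yield on the oscillation parameters enters through the standard two-flavour survival probability. A quick sketch using the abstract's quoted parameter values (the function and variable names here are illustrative, not from the paper):

```python
import math

def p_ee(L_m, E_MeV, sin2_2theta=0.863, dm2_eV2=7.3e-5):
    """Two-flavour electron-antineutrino survival probability,
    P = 1 - sin^2(2θ) · sin^2(1.27 Δm² L / E),
    with Δm² in eV², baseline L in metres, and energy E in MeV."""
    return 1.0 - sin2_2theta * math.sin(1.27 * dm2_eV2 * L_m / E_MeV) ** 2

# Geoneutrinos arrive from sources spread over many oscillation lengths,
# so sin^2(...) averages to 1/2 and the mean survival probability is
# <P_ee> = 1 - 0.5 * sin^2(2θ), independent of Δm² and distance:
p_avg = 1.0 - 0.5 * 0.863
print(round(p_avg, 4))  # → 0.5685
```

This distance-averaged suppression factor of roughly 0.57 is why the predicted event yield is sensitive chiefly to sin²(2θ) rather than to Δm².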

    Developing core sets for persons following amputation based on the International Classification of Functioning, Disability and Health as a way to specify functioning

    Amputation is a common late-stage sequela of peripheral vascular disease and diabetes, or a sequela of accidental trauma, civil unrest and landmines. The functional impairments affect many facets of life including, but not limited to: mobility; activities of daily living; body image and sexuality. Classification, measurement and comparison of the consequences of amputations have been impeded by the limited availability of internationally, multiculturally standardized instruments in the amputee setting. The introduction of the International Classification of Functioning, Disability and Health (ICF) by the World Health Assembly in May 2001 provides a globally accepted framework and classification system to describe, assess and compare function and disability. To facilitate the use of the ICF in everyday clinical practice and research, ICF core sets have been developed that focus on specific aspects of function typically associated with a particular disability. The objective of this paper is to outline the development process for the ICF core sets for persons following amputation. The ICF core sets are designed to translate the benefits of the ICF into clinical routine. The ICF core sets will be defined at a consensus conference which will integrate evidence from preparatory studies, namely: (a) a systematic literature review of the outcome measures used in clinical trials and observational studies, (b) semi-structured patient interviews, (c) an internet-based survey of international experts, and (d) cross-sectional, multi-center studies of clinical applicability. To validate the ICF core sets, field testing will follow. Invitation for participation: the development of ICF core sets is an inclusive and open process. Anyone who wishes to actively participate in this process is invited to do so.

    Homozygosity for a missense mutation in the 67 kDa isoform of glutamate decarboxylase in a family with autosomal recessive spastic cerebral palsy: parallels with Stiff-Person Syndrome and other movement disorders

    Background: Cerebral palsy (CP) is a heterogeneous group of neurological disorders of movement and/or posture, with an estimated incidence of 1 in 1000 live births. Non-progressive forms of symmetrical, spastic CP have been identified that show a Mendelian autosomal recessive pattern of inheritance. We recently described the mapping of a recessive spastic CP locus to a 5 cM chromosomal region at 2q24-31.1 in rare consanguineous families. Methods: Here we present data that refine this locus to a 0.5 cM region flanked by the microsatellite markers D2S2345 and D2S326. The minimal region contains the candidate gene GAD1, which encodes a glutamate decarboxylase isoform (GAD67) involved in the conversion of the amino acid and excitatory neurotransmitter glutamate to the inhibitory neurotransmitter γ-aminobutyric acid (GABA). Results: A novel missense mutation in GAD67 was detected, which segregated with CP in affected individuals. Conclusions: This result is interesting because auto-antibodies to GAD67, and to the more widely studied GAD65 homologue encoded by the GAD2 gene, are described in patients with Stiff-Person Syndrome (SPS), epilepsy, cerebellar ataxia and Batten disease. Given the presence of anti-GAD antibodies in SPS and the recognised excitotoxicity of glutamate in various contexts, further investigation seems merited of the possibility that variation in the GAD1 sequence, potentially affecting glutamate/GABA ratios, may underlie this form of spastic CP.