Hes1 Notch Response Elements and their Roles in RBPJ-K Binding
The Notch signaling pathway is one of a few fundamentally conserved signal transduction pathways critical for metazoan cellular development. Upon ligand activation, the Notch intracellular domain (NICD) translocates to the nucleus and forms a transcription complex with C promoter-binding factor 1 (RBPJ-k/CBF-1/Suppressor of Hairless) and Mastermind-like protein (MAM). The DNA-binding factor, RBPJ-k, binds to a response element containing a consensus sequence of RTGRGAR (where R is G or A). When RBPJ-k interacts with the NICD and MAM, Notch target genes are activated. The best-characterized Notch target gene is Hes1. Hes1 contains four Notch response elements (NREs), labeled NRE 1-4. Of the four, NRE 2 and NRE 4 form what has been termed a sequence-paired site (SPS), identified as critical for transcription of Notch-dependent genes. Not all NREs form an SPS, and it is hypothesized that a different transcriptional cofactor is recruited to the single NREs versus the SPS to stabilize protein-DNA complexes. Base pair mutations in the Hes1 promoter were tested for binding to nuclear extract and purified RBPJ-k using an electrophoretic mobility shift assay (EMSA). Results were insufficient to determine Notch complex binding; however, the internal guanines were determined to be critical for RBPJ-k binding to NRE 2 and NRE 4. Additionally, despite its context in Hes1, NRE 3 also showed binding to RBPJ-k. Taken together, these results confirm that the NREs in the SPS are required for RBPJ-k complex formation and raise questions about the roles of the other single NREs.
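To make the degenerate consensus concrete, the sketch below shows how RTGRGAR translates into a simple one-strand pattern search, assuming the standard IUPAC expansion of R to [AG]; the helper function and toy sequence are illustrative, and the fragment shown is not the actual Hes1 promoter.

```python
import re

# IUPAC code R (purine) expands to [AG], so the RTGRGAR consensus
# matches sites such as GTGGGAA or ATGAGAG.
CONSENSUS = re.compile(r"[AG]TG[AG]GA[AG]")

def find_nres(promoter_seq: str):
    """Return (start, site) pairs for candidate NREs on one strand."""
    return [(m.start(), m.group()) for m in CONSENSUS.finditer(promoter_seq.upper())]

# Toy fragment (not the real Hes1 promoter):
print(find_nres("ccacGTGGGAAagtt"))  # -> [(4, 'GTGGGAA')]
```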
Computational Electromagnetics (CEM) Laboratory: Simulation Planning Guide
The simulation process, milestones, and required inputs are unknowns to first-time users of the CEM Laboratory. The Simulation Planning Guide helps establish expectations for both NASA and non-NASA facility customers. The potential audience for this guide includes both internal and commercial spaceflight hardware/software developers, and it is intended to assist their engineering personnel in simulation planning and execution. Material covered includes a roadmap of the simulation process, the roles and responsibilities of the facility and the user, major milestones, facility capabilities, and inputs required by the facility. Samples of deliverables, facility interfaces, and inputs necessary to define scope, cost, and schedule are included as an appendix to the guide.
Simple and Efficient Numerical Evaluation of Near-Hypersingular Integrals
Recently, significant progress has been made in the handling of singular and nearly singular potential integrals that commonly arise in the Boundary Element Method (BEM). To facilitate object-oriented programming and the handling of higher-order basis functions, cancellation techniques are favored over techniques involving singularity subtraction. However, gradients of the Newton-type potentials, which produce hypersingular kernels, are also frequently required in BEM formulations. As is the case with the potentials, treatment of the near-hypersingular integrals has proven more challenging than treating the limiting case in which the observation point approaches the surface. Historically, numerical evaluation of these near-hypersingularities has often involved a two-step procedure: a singularity subtraction to reduce the order of the singularity, followed by a boundary contour integral evaluation of the extracted part. Since this evaluation necessarily links the basis function, the Green's function, and the integration domain (element shape), the approach fits poorly with object-oriented programming concepts. Thus, there is a need for cancellation-type techniques for efficient numerical evaluation of the gradient of the potential. Progress in the development of efficient cancellation-type procedures for the gradient potentials was recently presented. To the extent possible, a change of variables is chosen such that the Jacobian of the transformation cancels the singularity. However, since the gradient kernel involves singularities of different orders, we also require that the transformation leave the remaining terms analytic. The terms "normal" and "tangential" are used herein with reference to the source element. Also, since computational formulations often involve the numerical evaluation of both potentials and their gradients, it is highly desirable that a single integration procedure efficiently handle both.
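As a minimal illustration of the cancellation idea for the scalar 1/R kernel (the gradient kernels treated here require more elaborate transformations), consider an observation point at height \(z\) above a planar source element, with polar coordinates centered on its normal projection:

\[
\int_T \frac{f(\mathbf{r}')}{R}\, dA' \;=\; \int_0^{2\pi}\!\!\int_0^{\rho_{\max}(\phi)} \frac{f(\rho,\phi)}{\sqrt{\rho^2 + z^2}}\,\rho\, d\rho\, d\phi .
\]

The substitution \(u = \sqrt{\rho^2 + z^2}\), for which \(u\,du = \rho\,d\rho\), reduces the radial integral to \(\int f\, du\) with an integrand that remains bounded as \(z \to 0\): the Jacobian of the change of variables has cancelled the near-singularity.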
Refinement of Methods for Evaluation of Near-Hypersingular Integrals in BEM Formulations
In this paper, we present advances in singularity cancellation techniques applied to integrals in BEM formulations that are nearly hypersingular. Significant advances have been made recently in singularity cancellation techniques applied to 1/R-type kernels [M. Khayat, D. Wilton, IEEE Trans. Antennas and Prop., 53, pp. 3180-3190, 2005], as well as to the gradients of these kernels [P. Fink, D. Wilton, and M. Khayat, Proc. ICEAA, pp. 861-864, Torino, Italy, 2005] on curved subdomains. In these approaches, the source triangle is divided into three tangent subtriangles with a common vertex at the normal projection of the observation point onto the source element or the extended surface containing it. The geometry of a typical tangent subtriangle and its local rectangular coordinate system with origin at the projected observation point is shown in Fig. 1. Whereas singularity cancellation techniques for 1/R-type kernels are now nearing maturity, the efficient handling of near-hypersingular kernels still needs attention. For example, in the gradient reference above, techniques are presented for computing the normal component of the gradient relative to the plane containing the tangent subtriangle. These techniques, summarized in the transformations in Table 1, are applied at the subtriangle level and correspond particularly to the case in which the normal projection of the observation point lies within the boundary of the source element. They are found to be highly efficient as z approaches zero. Here, we extend the approach to cover two instances not previously addressed. First, we consider the case in which the normal projection of the observation point lies external to the source element. For such cases, we find that simple modifications to the transformations of Table 1 permit significant savings in computational cost. Second, we present techniques that permit accurate computation of the tangential components of the gradient; i.e., tangent to the plane containing the source element.
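A sketch of the geometric setup is given below; it is illustrative only, constructing the subdivision described above without implementing the transformations of Table 1, and the function names are ours:

```python
import numpy as np

def project_to_plane(obs, v0, v1, v2):
    """Normal projection of the observation point onto the plane of
    triangle (v0, v1, v2); returns the projected point and the height z.
    Inputs are float numpy arrays of shape (3,)."""
    n = np.cross(v1 - v0, v2 - v0)
    n = n / np.linalg.norm(n)
    z = np.dot(obs - v0, n)
    return obs - z * n, z

def tangent_subtriangles(p, v0, v1, v2):
    """Three subtriangles sharing the projected point p as a common vertex."""
    return [(p, v0, v1), (p, v1, v2), (p, v2, v0)]
```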
Data Preservation, Information Preservation, and Lifecycle of Information Management at NASA GES DISC
Data lifecycle management awareness is common today; planners are more likely to consider lifecycle issues at mission start. NASA remote sensing missions are typically subject to the lifecycle management plans of a Distributed Active Archive Center (DAAC), and NASA invests in these national centers for the long-term safeguarding and benefit of future generations. As stewards of older missions, it is incumbent upon us to ensure that a sufficiently comprehensive set of information is being preserved to mitigate the risk of information loss. This risk is greater when the original data experts have moved on or are no longer available. Documentation related to processing algorithms, pre-flight calibration data, and input-output configuration parameters used in product generation are examples of digital artifacts that are sometimes not fully preserved. This is the grey area of information preservation; the importance of these items is not always clear and requires careful consideration. Missing important metadata about intermediate steps used to derive a product could lead to serious challenges in the reproducibility of results or conclusions. Organizations are rapidly recognizing that the focus of lifecycle preservation needs to be enlarged from strictly the raw data to the more encompassing arena of information lifecycle management. By understanding what constitutes information, and the complexities involved, we are better equipped to deliver longer-lasting value from the original data and the knowledge (information) derived from them. The NASA Earth Science Data Preservation Content Specification is an attempt to define the content necessary for long-term preservation. It requires a new lifecycle infrastructure approach, along with content repositories that accommodate artifacts other than just raw data. The NASA Goddard Earth Sciences Data and Information Services Center (GES DISC) set up an open-source Preservation System capable of long-term archiving of digital content to augment its raw data holdings. This repository is being used for missions such as HIRDLS, UARS, TOMS, and OMI, among others. We will provide a status of this implementation; report on challenges and lessons learned; and detail our plans for future evolution to include other missions and services.
Earth-observation-based estimation and forecasting of particulate matter impact on solar energy in Egypt
This study estimates the impact of dust aerosols on surface solar radiation and solar energy in Egypt based on Earth Observation (EO) related techniques. For this purpose, we exploited the synergy of monthly mean and daily post-processed satellite remote sensing observations from the MODerate resolution Imaging Spectroradiometer (MODIS) and radiative transfer model (RTM) simulations utilizing machine learning, in conjunction with 1-day forecasts from the Copernicus Atmosphere Monitoring Service (CAMS). As cloudy conditions in this region are rare, aerosols, in particular dust, are the most common source of solar irradiance attenuation, causing performance issues in photovoltaic (PV) and concentrated solar power (CSP) plant installations. The proposed EO-based methodology is based on the solar energy nowcasting system (SENSE), which quantifies the impact of aerosol and dust on solar energy potential by using the aerosol optical depth (AOD) in terms of climatological values and of day-to-day monitoring and forecasting variability from MODIS and CAMS, respectively. The forecast accuracy was evaluated at various locations in Egypt with substantial PV and CSP capacity installed and was found to be within 5–12% of that obtained from the satellite observations, highlighting the ability to use such modelling approaches for solar energy management and planning (M&P). Particulate matter resulted in attenuation of up to 64–107 kWh/m2 for global horizontal irradiance (GHI) and 192–329 kWh/m2 for direct normal irradiance (DNI) annually. This energy reduction is climatologically distributed between 0.7% and 12.9% for GHI and between 2.9% and 41% for DNI, with the maximum values observed in spring following the frequent dust activity of the Khamaseen. Under extreme dust conditions the AOD can exceed 3.5, resulting in daily energy losses of more than 4 kWh/m2 for a 10 MW system. Such reductions can cause financial losses that exceed the daily revenue values. This work aims to show EO capabilities and techniques that can be incorporated and utilized in solar energy studies and applications in sun-privileged locations with permanent aerosol sources, such as Egypt.
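As a back-of-the-envelope illustration of how an irradiance deficit maps to plant-level losses, a short sketch follows; the module efficiency, performance ratio, and array area are assumptions for illustration, not values from the study:

```python
def pv_daily_loss_kwh(ghi_loss_kwh_m2, area_m2, efficiency=0.18, performance_ratio=0.8):
    """Rough daily PV energy loss given a GHI deficit in kWh/m2."""
    return ghi_loss_kwh_m2 * area_m2 * efficiency * performance_ratio

# ~10 MWp of modules at 18% efficiency occupy roughly 55,000 m2 of panel area;
# a 4 kWh/m2 dust-induced GHI deficit then costs on the order of:
print(f"{pv_daily_loss_kwh(4.0, 55_000):,.0f} kWh/day")  # ~31,680 kWh/day
```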
Imaging of Glioma Tumor with Endogenous Fluorescence Tomography
Tomographic imaging of a glioma tumor with endogenous fluorescence is demonstrated using a noncontact single-photon counting fan-beam acquisition system interfaced with microCT imaging. The fluorescence from protoporphyrin IX (PpIX) was found to be detectable and allowed imaging of the tumor from within the cranium, even though the tumor was not visible in the microCT image. The combination of single-photon counting detection and normalization of fluorescence to transmission at each channel allowed robust imaging of the signal. This demonstrates the use of endogenous fluorescence stimulated by aminolevulinic acid (ALA) and provides the first in vivo demonstration of deep-tissue tomographic imaging with protoporphyrin IX. Fluorescence tomography provides a tool for preclinical molecular contrast agent assessment in oncology.1, 2, 3, 4 Systems have advanced in complexity to where noncontact imaging,5 automated boundary recovery,6 and sophisticated internal tissue shapes can be included in the recovered images. The translation of this work to humans will require molecular contrast agents that are amenable to regulatory approval and maintain tumor specificity in humans, where nonspecific uptake of molecular imaging agents can often decrease their utility. In this study, a new fluorescence tomography system coupled to microCT7 was used to illustrate diagnostic detection of orthotopic glioma tumors that were not apparent in the microCT images, using endogenous fluorescent contrast from protoporphyrin IX (PpIX). Glioma tumors provide significant endogenous fluorescence from PpIX,8, 9, 10, 11 and this is enhanced when the subject imaged has been administered aminolevulinic acid (ALA). The endogenous production of PpIX is known to stem from the administered ALA bypassing the regulatory inhibition of ALA synthase, allowing the heme synthesis pathway to proceed uninhibited. Since there is a limited supply of iron in the body, this process produces an overabundance of PpIX rather than heme, and many tumors have been shown to have high yields of PpIX. Clinical trials with PpIX fluorescence-guided resection of tumors have shown significant promise,12 and yet deep-tissue imaging with PpIX fluorescence has not been exploited in clinical use. Early studies have shown that detection of these tumors with PpIX is feasible,13, 14 but no tomographic imaging has been used. This limitation in development has largely been caused by problems in wavelength filtering and low signal intensity, as well as background fluorescence from the skin limiting sensitivity to deeper structures. In the system developed and used here, this feasibility is demonstrated by imaging a human xenograft glioma model. To solve the sensitivity problem and study the ability to diagnostically image PpIX in vivo, time-correlated single-photon counting was used in the fluorescence tomography system, which provides maximum sensitivity. Figure 1a shows the system designed to match up with a microCT, allowing both x-ray structural and optical functional imaging sequentially. Lens-coupled detection of signals is acquired from the mouse using five time-resolved photomultiplier tubes (H7422P-50, Hamamatsu, Japan) with single-photon counting electronics (SPC-134 modules, Becker and Hickl GmbH, Germany). The system has a fan-beam transmission geometry similar to a standard CT scanner, with single-source delivery of 1-mW pulsed diode laser light at 635 nm, collimated to a 1-mm effective area on the animal. The five detection lenses were arranged in an arc, each with 22.5-deg angular separation, centered directly on the opposite side of the animal with long-working-distance pickup,7 allowing noncontact measurement of the diffuse light through the animal. The diffuse intensity signals collected at each of the five channels were then translated via 400-μm fibers and split using beamsplitters to be directed toward the fluorescence (95%) and transmission (5%) channel detectors. A 650-nm long-pass filter was used in the fluorescence channels to isolate the signal, and in the transmission channels, a neutral density filter (2 OD) was used to attenuate the signals. This latter filtering was necessary to ensure that the fluorescence and transmission intensity signals fell within the same dynamic range, allowing a single 1-s acquisition for each detector. Scans were then performed by rotating the fan beam around the specimen to 32 locations. A GE eXplore Locus SP scanner (GE Healthcare, London, Ontario, Canada), incorporating a detector with 94-micron pixel resolution, an 80-kV peak voltage, and a tube current of 450 μA, was used to acquire the microCT data, as displayed in Fig. 2. In this example, since soft tissue was being imaged, the CT data was largely used to image the exterior of the animal, although in future studies it could be used to isolate the cranium region as well.
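A minimal sketch of the per-channel normalization described above (the Born-type ratio of fluorescence to transmitted excitation) is given below, assuming the fluorescence and transmission counts have already been extracted for each source-detector pair; the function is illustrative, not the system's actual processing code:

```python
import numpy as np

def born_ratio(fluorescence_counts, transmission_counts, floor=1.0):
    """Normalize fluorescence to transmission at each channel; coupling,
    lens, and geometry factors common to both channels divide out."""
    f = np.asarray(fluorescence_counts, dtype=float)
    t = np.asarray(transmission_counts, dtype=float)
    return f / np.maximum(t, floor)  # floor guards against empty channels
```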
Treatment of autosomal dominant hypocalcemia Type 1 with the calcilytic NPSP795 (SHP635)
Autosomal dominant hypocalcemia type 1 (ADH1) is a rare form of hypoparathyroidism caused by heterozygous, gain-of-function mutations of the calcium-sensing receptor gene (CAR). Individuals are hypocalcemic with inappropriately low parathyroid hormone (PTH) secretion and relative hypercalciuria. Calcilytics are negative allosteric modulators of the extracellular calcium receptor (CaR) and therefore may have therapeutic benefit in ADH1. Five adults with ADH1 due to four distinct CAR mutations received escalating doses of the calcilytic compound NPSP795 (SHP635) on 3 consecutive days. Pharmacokinetics, pharmacodynamics, efficacy, and safety were assessed. Parallel in vitro testing with the subjects' CaR mutations assessed the effects of NPSP795 on cytoplasmic calcium concentrations (Ca2+i) and on ERK and p38 MAPK phosphorylation. These effects were correlated with clinical responses to administration of NPSP795. NPSP795 increased plasma PTH levels in a concentration-dependent manner, up to 129% above baseline (p=0.013) at the highest exposure levels. Fractional excretion of calcium (FECa) trended down, but not significantly so. Blood ionized calcium levels remained stable during NPSP795 infusion despite fasting, no calcitriol, and little calcium supplementation. NPSP795 was generally safe and well tolerated. There was significant clinical variability in response across genotypes. In vitro, all mutant CaRs were half-maximally activated (EC50) at lower concentrations of extracellular calcium (Ca2+o) compared to the wild-type (WT) CaR; NPSP795 exposure increased the EC50 for all CaR activity readouts. However, the in vitro responses to NPSP795 did not correlate with any clinical parameters. NPSP795 increased plasma PTH levels in subjects with ADH1 in a dose-dependent manner and thus serves as proof of concept that calcilytics could be an effective treatment for ADH1. Although all mutations appear to be activating at the CaR, the in vitro observations were not predictive of the in vivo phenotype or of the response to calcilytics, suggesting that other parameters impact the response to the drug.
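For reference, the EC50 readout can be pictured with the standard Hill-type concentration-response relation (a textbook form, not a model fitted in this study):

\[
E\bigl([\mathrm{Ca}^{2+}]_o\bigr) \;=\; E_{\max}\,
\frac{[\mathrm{Ca}^{2+}]_o^{\,n}}{\mathrm{EC}_{50}^{\,n} + [\mathrm{Ca}^{2+}]_o^{\,n}} .
\]

A gain-of-function CaR mutation shifts EC50 toward lower extracellular calcium, whereas a negative allosteric modulator such as NPSP795 shifts it back toward higher concentrations, consistent with the increased EC50 reported above.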
A Universal Next-Generation Sequencing Protocol To Generate Noninfectious Barcoded cDNA Libraries from High-Containment RNA Viruses
ABSTRACT Several biosafety level 3 and/or 4 (BSL-3/4) pathogens are high-consequence, single-stranded RNA viruses, and their genomes, when introduced into permissive cells, are infectious. Moreover, many of these viruses are select agents (SAs), and their genomes are also considered SAs. For this reason, cDNAs and/or their derivatives must be tested to ensure the absence of infectious virus and/or viral RNA before transfer out of the BSL-3/4 and/or SA laboratory. This tremendously limits the capacity to conduct viral genomic research, particularly the application of next-generation sequencing (NGS). Here, we present a sequence-independent method to rapidly amplify viral genomic RNA while simultaneously abolishing both viral and genomic RNA infectivity across multiple single-stranded positive-sense RNA (ssRNA+) virus families. The process generates barcoded DNA amplicons that range in length from 300 to 1,000 bp, cannot be used to rescue a virus, and are stable for transport at room temperature. Our barcoding approach allows up to 288 barcoded samples to be pooled into a single library and run across various NGS platforms without potential reconstitution of the viral genome. Our data demonstrate that this approach not only provides full-length genomic sequence information from high-titer virion preparations but can also recover specific viral sequences from samples with limited starting material against a background of cellular RNA, and it can be used to identify pathogens in unknown samples. In summary, we describe a rapid, universal standard operating procedure that generates high-quality NGS libraries free of infectious virus and infectious viral RNA. IMPORTANCE This report establishes and validates a standard operating procedure (SOP) for select agents (SAs) and other biosafety level 3 and/or 4 (BSL-3/4) RNA viruses to rapidly generate noninfectious, barcoded cDNA amenable to next-generation sequencing (NGS). This eliminates the burden of testing all processed samples derived from high-consequence pathogens prior to transfer from high-containment laboratories to lower-containment facilities for sequencing. Our established protocol can be scaled up for high-throughput sequencing of hundreds of samples simultaneously, which can dramatically reduce the cost and effort required for NGS library construction. NGS data from this SOP can provide complete genome coverage from viral stocks and can also detect virus-specific reads from limited starting material. Our data suggest that the procedure can be implemented and easily validated by institutional biosafety committees across research laboratories.
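To illustrate how pooled barcoded reads can be assigned back to samples after sequencing, a minimal demultiplexing sketch is shown below; the barcode sequences, barcode length, and read layout are hypothetical, not those of the published SOP:

```python
# Hypothetical barcode-to-sample map (the SOP supports up to 288 barcodes).
BARCODES = {"ACGTACGT": "sample_01", "TGCATGCA": "sample_02"}
BC_LEN = 8

def demultiplex(reads):
    """Bin reads by their leading barcode and trim the barcode off."""
    binned = {}
    for read in reads:
        sample = BARCODES.get(read[:BC_LEN])
        if sample is not None:  # reads with unmatched barcodes are discarded
            binned.setdefault(sample, []).append(read[BC_LEN:])
    return binned

print(demultiplex(["ACGTACGTTTAGGC", "TGCATGCAGGATCC"]))
```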