
    Future challenges in biologics cell culture engineering

    The biotechnology industry is at an important transition point. Over the last 30 years, much has been learned about the development and industrialization of cell culture processes, largely from broad-spectrum development of monoclonal antibodies in CHO cells. However, important lessons have also been taken from the expression and culture of complex proteins. We now look to a future where designer molecules will replace the standard monoclonal modalities in new product development, and where competition and changing regulatory and economic paradigms will drive the need for unprecedented titers, product quality control, and speed to market. Fortunately, we face these new challenges armed not only with historical knowledge but with a new spectrum of molecular engineering, process modeling, and analytical tools that promise unprecedented productivity combined with metabolic and product quality control. This talk will outline the opportunities of the future and highlight the technology developments that position the industry to meet these challenges.

    Learning faces from variability

    Research on face learning has tended to use sets of images that vary systematically on dimensions such as pose and illumination. In contrast, we have proposed that exposure to naturally varying images of a person may be a critical part of the familiarization process. Here, we present two experiments investigating face learning with “ambient images”—relatively unconstrained photos taken from internet searches. Participants learned name and face associations for unfamiliar identities presented in high or low within-person variability—that is, images of the same person returned by internet search on their name (high variability) versus different images of the same person taken from the same event (low variability). In Experiment 1, we show more accurate performance on a speeded name verification task for identities learned in high than in low variability, when the test images are completely novel photos. In Experiment 2, we show more accurate performance on a face matching task for identities previously learned in high than in low variability. The results show that exposure to a large range of within-person variability leads to enhanced learning of new identities.

    10^{-7} contrast ratio at 4.5λ/D: New results obtained in laboratory experiments using nano-fabricated coronagraph and multi-Gaussian shaped pupil masks

    We present here new experimental results on high-contrast imaging of 10^{-7} at 4.5λ/D (λ = 0.820 microns), obtained by combining a circular focal plane mask (coronagraph) of 2.5λ/D diameter with a multi-Gaussian pupil plane mask. Both masks were fabricated on very high surface quality (λ/30) BK7 optical substrates using the nano-fabrication techniques of photolithography and metal lift-off. This process ensured that the shaped masks have a usable edge roughness better than λ/4 (rms error better than 0.2 microns), a specification that is necessary to realize the predicted theoretical limits of any mask design. Though a theoretical model predicts a contrast level of 10^{-12}, the background noise of the observed images was speckle dominated, which reduced the contrast level to 4×10^{-7} at 4.5λ/D. The optical setup was built on the University of Illinois Seeing Improvement System (UnISIS) optics table, which is at the Coude focus of the 2.5-m telescope of the Mt. Wilson Observatory. We used a 0.820 micron laser source coupled with a 5 micron single-mode fiber to simulate an artificial star on the optical test bench of UnISIS. Comment: 9 pages including figures, published in the Optics Express journal (see http://www.opticsexpress.org).
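
    As a quick sanity check on the scales quoted above, the minimal Python sketch below converts λ/D for λ = 0.820 microns on the 2.5-m aperture into milliarcseconds, and expresses the 4×10^{-7} contrast as a magnitude difference. The on-sky conversion is illustrative only, since the reported measurements were made on an optical bench.

        import math

        lam = 0.820e-6   # laser wavelength in metres (0.820 microns, from the abstract)
        D = 2.5          # Mt. Wilson telescope aperture in metres

        # Diffraction scale lambda/D, converted from radians to arcseconds
        lam_over_D = lam / D * 206265.0
        print(f"lambda/D        = {lam_over_D * 1e3:.1f} mas")
        print(f"mask diameter   = {2.5 * lam_over_D * 1e3:.1f} mas (2.5 lambda/D)")
        print(f"contrast radius = {4.5 * lam_over_D * 1e3:.1f} mas (4.5 lambda/D)")

        # The achieved contrast of 4e-7 expressed as a magnitude difference
        contrast = 4e-7
        print(f"delta_mag = {-2.5 * math.log10(contrast):.1f} mag")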

    NDM-523: USE OF AN UNMANNED AERIAL VEHICLE (UAV) TO ASSESS TRANSPORTATION INFRASTRUCTURE, IMMEDIATELY AFTER A CATASTROPHIC STORM EVENT

    From September 29 to October 1, 2015, over 200 mm of rain deluged parts of southern New Brunswick. The catastrophic rain event washed away bridge-size culverts and conventional bridges, including the surrounding soil and asphaltic concrete pavement. Erosion also encroached on the driving lane of road and highway embankments at over 100 locations. Several homes and businesses were left stranded. A fast and efficient means was required to assess the impact on infrastructure after the storm. This paper presents the procedure and outcomes of using digital imagery captured with Unmanned Aerial Vehicles (UAVs) for post-disaster assessment. The use of a UAV to gather site images at hard-to-access locations allowed for the timely prioritization of needs and allocation of limited resources to the areas most urgently in need of emergency repairs. High-quality aerial images were processed using commercial software specifically designed for the creation of 3D models and orthomosaics from aerial photos. This information, along with ground-level panoramas, communicated the current condition of assets and roads. It provided engineers with the ability to complete initial assessments, create 3D models for design, and produce high-quality evaluation records. The successful use of a UAV for this storm event was preceded by other uses of UAVs for asset management within the New Brunswick Department of Transportation and Infrastructure.

    A Tentative Detection of a Starspot During Consecutive Transits of an Extrasolar Planet from the Ground: No Evidence of a Double Transiting Planet System Around TrES-1

    There have been numerous reports of anomalies during transits of the planet TrES-1b. Recently, Rabus and coworkers' analysis of HST observations led them to claim that brightening anomalies during transit might be caused by either a second transiting planet or a cool starspot. Observations of two consecutive transits are presented here from the University of Arizona's 61-inch Kuiper Telescope on May 12 and May 15, 2008 UT. A 5.4 +/- 1.7 mmag (0.54 +/- 0.17%) brightening anomaly was detected during the first half of the transit on May 12 and again in the second half of the transit on May 15. We conclude that this is a tentative detection of a starspot with radius greater than or equal to 6 Earth radii rotating on the surface of the star. We suggest that all evidence to date indicates TrES-1 has a spotty surface and that there is no need to introduce a second transiting planet into this system to explain these anomalies. We are only able to constrain the rotational period of the star to 40.2 +22.9 -14.6 days, owing to previous errors in measuring the alignment of the stellar spin axis with the planetary orbital axis. This is consistent with the previously observed P_obs = 33.2 +22.3 -14.3 day period. We note that this technique could be applied to other transiting systems in which starspots lie in the transit path of the planet, in order to constrain the rotation rate of the star. (abridged) Comment: 21 pages, 3 tables, 6 figures, Accepted to Ap
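
    The quoted numbers can be loosely reproduced with the Python sketch below. The conversion from 5.4 mmag to a fractional flux change follows directly; the minimum spot size assumes a completely dark spot and a stellar radius of ~0.81 solar radii for TrES-1, which is an assumed literature value rather than a figure stated in this abstract.

        import math

        # Reported brightening anomaly (from the abstract): 5.4 +/- 1.7 mmag
        dmag = 5.4e-3
        dflux = 1.0 - 10 ** (-dmag / 2.5)   # fractional flux change
        print(f"fractional flux change = {dflux * 100:.2f}%")  # ~0.50%, close to the quoted 0.54%

        # Minimum spot size: a completely dark spot re-brightens the light
        # curve by its covered area fraction when the planet occults it.
        # ASSUMPTION: stellar radius 0.81 R_sun (literature value, not from the abstract).
        R_star_earth = 0.81 * 109.1         # stellar radius in Earth radii
        r_spot = math.sqrt(dflux) * R_star_earth
        print(f"minimum spot radius ~ {r_spot:.1f} Earth radii")  # ~6, as reported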

    Developing the host for targeted integration cell line development

    Unlike conventional random integration (RI) cell line development (CLD), targeted integration (TI) CLD introduces the transgene at a predetermined “hot-spot” in the CHO genome with a defined copy number (1-2 copies). Given the low copy number and the pre-tested integration site, TI cell lines likely exhibit better stability than RI cell lines. In this study, we performed a genome-wide screen using transposon-based cassette integration and established a TI host (255-3) that has a single landing cassette inserted in its genome. Host 255-3 was able to support CLD for three test molecules with product titers similar to those of the corresponding RI cell lines. For two regular antibody test cases, the top four TI cell lines achieved ~4-5 g/L. For a proven difficult-to-express antibody, the top four TI lines achieved ~1-1.2 g/L. The product titer for this hard-to-express molecule was increased 3-fold with additional vector improvement. Moreover, the timeline for CLD was shortened by ~2 weeks, and the resources required per cell line were substantially reduced using the TI method. Together these data indicate that the TI host we developed is a suitable host to support our clinical/commercial CLD.

    Performance of the Near-infrared coronagraphic imager on Gemini-South

    We present the coronagraphic and adaptive optics performance of the Gemini-South Near-Infrared Coronagraphic Imager (NICI). NICI includes a dual-channel imager for simultaneous spectral difference imaging, a dedicated 85-element curvature adaptive optics system, and a built-in Lyot coronagraph. It is specifically designed to survey for and image large extra-solar gaseous planets on the Gemini Observatory 8-meter telescope in Chile. We present the on-sky performance of the individual subsystems along with the end-to-end contrast curve. These are compared to our model predictions for the adaptive optics system, the coronagraph, and the spectral difference imaging. Comment: Proc. SPIE, Vol. 7015, 70151V (2008).
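
    For readers unfamiliar with spectral difference imaging, the minimal numpy/scipy sketch below illustrates the core idea: stellar speckles scale radially with wavelength while a genuine companion does not, so magnifying the off-band frame by the wavelength ratio about the star and subtracting suppresses the speckle halo. The function and wavelengths are illustrative placeholders, not NICI's actual pipeline or filter set.

        import numpy as np
        from scipy.ndimage import affine_transform

        def sdi_subtract(img_on, img_off, lam_on, lam_off):
            """Minimal spectral difference imaging sketch (placeholder, not NICI code).

            Speckles sit at fixed multiples of lambda/D, so magnifying the
            off-band frame by lam_on/lam_off about the image center aligns
            its speckle pattern with the on-band frame before subtraction.
            (Flux scaling between the two channels is omitted for brevity.)
            """
            s = lam_on / lam_off                         # radial magnification
            center = (np.array(img_off.shape) - 1) / 2.0
            # affine_transform maps output coords o -> input coords M @ o + offset,
            # so M = I/s with this offset magnifies by s about the center.
            matrix = np.eye(2) / s
            offset = center - matrix @ center
            rescaled = affine_transform(img_off, matrix, offset=offset, order=1)
            return img_on - rescaled

        # Illustrative use with placeholder wavelengths (microns):
        # diff = sdi_subtract(frame_on, frame_off, 1.65, 1.58)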

    The new paradigm and mental models

    In a recent article in this journal, Johnson-Laird and colleagues argue that mental models theory (MMT) can integrate logical and probabilistic reasoning [1]. We argue that Johnson-Laird and colleagues make a radical revision of MMT, but to ill effect. This can best be seen in what they say about truth and validity (Box 1). Formerly ([2], p. 651), in MMT p ∨ q (p or q) ‘... is true provided that at least one of its two disjuncts is true; otherwise, it is false.’ Thus p ∨ q is true provided that one of three possibilities is true: p & not-q, not-p & q, p & q. However, Johnson-Laird et al. now claim, ‘The disjunction is true provided that each of these three cases [p & not-q, not-p & q, p & q] is possible.’ But these three cases are always possible for jointly contingent statements: that is why they are rows of the truth table for p ∨ q. This new definition therefore makes almost every disjunction true. An example of a disjunction that it does not make true is p ∨ not-p. This tautology fails to be true on their account because p & not-p is not possible.
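
    The argument can be checked mechanically. The short Python sketch below (an illustration of the point, not code from the article) enumerates the truth table and applies the quoted "each case is possible" definition: it returns true for the jointly contingent p ∨ q but false for the tautology p ∨ not-p.

        from itertools import product

        ATOMS = ("p", "q")

        def possible(case):
            """A case is possible iff some assignment of the atoms makes it true."""
            return any(case(dict(zip(ATOMS, vals)))
                       for vals in product([True, False], repeat=len(ATOMS)))

        def mmt_true(a, b):
            """Revised MMT truth for 'a or b' (as quoted): true provided each
            of the cases a & not-b, not-a & b, a & b is possible."""
            cases = [lambda v: a(v) and not b(v),
                     lambda v: not a(v) and b(v),
                     lambda v: a(v) and b(v)]
            return all(possible(c) for c in cases)

        p = lambda v: v["p"]
        q = lambda v: v["q"]
        not_p = lambda v: not v["p"]

        print(mmt_true(p, q))      # True: a jointly contingent disjunction passes
        print(mmt_true(p, not_p))  # False: p & not-p is impossible, so the tautology fails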

    Tools and methods for providing assurance of clonality for legacy cell lines

    Over the last several years, demonstration of cell line clonality has been a topic of many industry and regulatory presentations and papers. Guidance has been provided by the regulatory authorities, especially the FDA, on a path forward for providing evidence of clonality with high probability. It has been recommended that two rounds of limiting dilution cloning (LDC) at sufficiently low seeding densities (≤0.5 cells/well) provide sufficient evidence that a cell line is clonal. Furthermore, one round of LDC may also suffice if supplemental data from a characterized FACS or plate-imaging workflow are included in the package. Cell lines generated by methods that do not demonstrate a high probability of clonal derivation, including legacy cell lines, may require additional studies to provide assurance, and/or process control strategies, to satisfy regulatory expectations. Within the Biologics function of the IQ Consortium, the “Clonality” Working Group is focusing on methods and tools that could be utilized to provide high assurance of clonality for legacy cell lines. The presentation will outline a three-tier approach to address legacy cell line clonality assurance: standard practices already used in industry to support limit of in vitro cell age studies, enhanced control strategies to ensure process consistency, and emerging technologies that could be used to further support cell line clonality.
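
    The probability argument behind the ≤0.5 cells/well recommendation can be sketched with a simple Poisson model, as in the Python snippet below. Poisson-distributed seeding and independence of the two rounds are simplifying assumptions made here for illustration, not the working group's actual calculation.

        import math

        mu = 0.5  # seeding density, cells/well (the abstract's <=0.5 cells/well limit)

        # Poisson model: probability that a well showing outgrowth was
        # seeded with exactly one cell, given it received at least one.
        p_one = mu * math.exp(-mu)
        p_occupied = 1 - math.exp(-mu)
        p_clonal_round = p_one / p_occupied
        print(f"P(clonal | outgrowth), one round: {p_clonal_round:.3f}")  # ~0.77

        # Two independent rounds: non-clonality must survive both rounds.
        p_clonal_two = 1 - (1 - p_clonal_round) ** 2
        print(f"P(clonal), two rounds           : {p_clonal_two:.3f}")    # ~0.95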

    Probabilistic single function dual process theory and logic programming as approaches to non-monotonicity in human vs. artificial reasoning

    In this paper, it is argued that single function dual process theory is a more credible psychological account of non-monotonicity in human conditional reasoning than recent attempts to apply logic programming (LP) approaches from artificial intelligence to these data. LP is introduced and, among other critiques, it is argued to be psychologically unrealistic in a way similar to hash coding in the classicism vs. connectionism debate. Second, it is argued that causal Bayes nets provide a framework for modelling probabilistic conditional inference in System 2 that can deal with patterns of inference LP cannot. Third, we offer some speculations on how the cognitive system may avoid the problems for System 1 identified by Fodor in 1983. We conclude that while many problems remain, the probabilistic single function dual process theory is to be preferred over LP as an account of the non-monotonicity of human reasoning.
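
    As a toy illustration of the non-monotonicity at issue (hypothetical numbers, not a model from the paper), the Python sketch below encodes a conditional as a small causal model with a disabling condition: the conclusion follows from the antecedent with high probability, but is withdrawn once the disabler is learned.

        # Hypothetical numbers showing how a small causal model makes
        # conditional inference non-monotonic: q follows from p with high
        # probability, yet the inference is retracted when a disabler d is learned.

        def p_q_given(p=True, d=None):
            """P(q | p, d) for a noisy causal model: q occurs when its cause p
            is present and no disabler d blocks it. d=None means unobserved."""
            p_d = 0.1 if d is None else (1.0 if d else 0.0)  # prior P(disabler) = 0.1
            return (1 - p_d) * 0.95 if p else 0.0            # causal power 0.95

        print(p_q_given(p=True))          # ~0.86: 'if p then q' licenses modus ponens
        print(p_q_given(p=True, d=True))  # 0.0 : the conclusion is withdrawn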