
    Distinguishing discrete and gradient category structure in language: Insights from verb-particle constructions

    The current work uses memory errors to examine the mental representation of verb-particle constructions (VPCs; e.g., make up the story, cut up the meat). Some evidence suggests that VPCs are represented by a cline in which the relationship between the VPC and its component elements ranges from highly transparent (cut up) to highly idiosyncratic (make up). Other evidence supports a multiple-class representation, characterizing VPCs as belonging to discretely separated classes differing in semantic and syntactic structure. We outline a novel paradigm to investigate the representation of VPCs in which we elicit illusory conjunctions, or memory errors sensitive to syntactic structure. We then use a novel application of piecewise regression to demonstrate that the resulting error pattern follows a cline rather than discrete classes. A preregistered replication verifies these findings, and a final preregistered study verifies that these errors reflect syntactic structure. This provides evidence for gradient rather than discrete representations across levels of representation in language processing.
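
    A minimal sketch of the piecewise-regression logic described above, in Python with synthetic data and illustrative variable names (not the authors' analysis code): fit a broken-stick model and a single straight line to an error-rate-by-transparency pattern and compare residuals. A discrete-class structure would favour the broken-stick fit; a cline would not.

    import numpy as np
    from scipy import optimize, stats

    rng = np.random.default_rng(1)
    transparency = np.sort(rng.uniform(0, 1, 80))                        # hypothetical VPC transparency scores
    error_rate = 0.30 - 0.20 * transparency + rng.normal(0, 0.03, 80)    # toy cline-like error pattern

    def piecewise(x, x0, y0, k1, k2):
        # Two line segments joined at a breakpoint x0 (a discrete-class signature).
        return np.where(x < x0, y0 + k1 * (x - x0), y0 + k2 * (x - x0))

    # Fit the piecewise model and a plain linear model, then compare fit quality.
    p, _ = optimize.curve_fit(piecewise, transparency, error_rate,
                              p0=[0.5, 0.2, -0.2, -0.2])
    slope, intercept, *_ = stats.linregress(transparency, error_rate)

    rss_piecewise = np.sum((error_rate - piecewise(transparency, *p)) ** 2)
    rss_linear = np.sum((error_rate - (intercept + slope * transparency)) ** 2)
    print(f"RSS piecewise: {rss_piecewise:.4f}  RSS linear: {rss_linear:.4f}")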

    PIPS: A parallel planning model of sentence production

    Subject–verb agreement errors are common in sentence production. Many studies have used experimental paradigms targeting the production of subject–verb agreement from a sentence preamble (The key to the cabinets) and eliciting verb errors (… *were shiny). Through reanalysis of previous data (50 experiments; 102,369 observations), we show that this paradigm also results in many errors in preamble repetition, particularly of local noun number (The key to the *cabinet). We explore the mechanisms of both errors in parallelism in producing syntax (PIPS), a model in the Gradient Symbolic Computation framework. PIPS models sentence production using a continuous-state stochastic dynamical system that optimizes grammatical constraints (shaped by previous experience) over vector representations of symbolic structures. At intermediate stages in the computation, grammatical constraints allow multiple competing parses to be partially activated, resulting in stable but transient conjunctive blend states. In the context of the preamble completion task, memory constraints reduce the strength of the target structure, allowing for co-activation of non-target parses where the local noun controls the verb (notional agreement and locally agreeing relative clauses) and non-target parses that include structural constituents with contrasting number specifications (e.g., plural instead of singular local noun). Simulations of the preamble completion task reveal that these partially activated non-target parses, as well as the need to balance accurate encoding of lexical and syntactic aspects of the prompt, result in errors. In other words: Because sentence processing is embedded in a processor with finite memory and prior experience with production, interference from non-target production plans causes errors.
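
    To make the dynamical-systems idea concrete, the following is a minimal sketch in Python (the one-dimensional "blend" coordinate, the harmony function and all parameter values are illustrative assumptions, not the PIPS implementation): noisy gradient ascent on a harmony surface with two parse attractors and a grammatical bias, where noise occasionally tips the system out of the blend state into the non-target parse, producing an agreement error.

    import numpy as np

    rng = np.random.default_rng(0)

    def harmony(x, bias=0.3):
        # Hypothetical harmony surface: x = 0 is the target (singular-verb) parse,
        # x = 1 the non-target (plural-verb) parse; intermediate x values are
        # partially activated blend states, and bias favours the target parse.
        return -4 * (x ** 2) * ((1 - x) ** 2) - bias * x

    def d_harmony(x, eps=1e-5):
        # Numerical gradient of the harmony surface.
        return (harmony(x + eps) - harmony(x - eps)) / (2 * eps)

    def run_trial(noise=0.15, steps=500, dt=0.01):
        x = 0.5                                   # start in a fully blended state
        for _ in range(steps):
            x += d_harmony(x) * dt + noise * np.sqrt(dt) * rng.normal()
            x = min(max(x, 0.0), 1.0)             # keep the state in [0, 1]
        return x > 0.5                            # True -> settled on the non-target parse

    error_rate = np.mean([run_trial() for _ in range(2000)])
    print(f"simulated agreement-error rate: {error_rate:.3f}")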

    In My View


    A decade in review: use of data analytics within the biopharmaceutical sector

    There are large amounts of data generated within the biopharmaceutical sector. Traditionally, data analysis methods labelled as multivariate data analysis have been the standard statistical techniques applied to interrogate these complex data sets. However, more recently there has been a surge in the utilisation of a broader set of machine learning algorithms to further exploit these data. In this article, the adoption of data analysis techniques within the biopharmaceutical sector is evaluated through a review of journal articles and patents published within the last ten years. The paper's objectives are to identify the most dominant algorithms applied across different application areas within the biopharmaceutical sector and to explore whether there is a trend between the size of the data set and the algorithm adopted.

    Addressing Food Insecurity in College: Mapping a Shared Conceptual Framework for Campus Pantries in Michigan

    The first known university food pantry started at Michigan State University in 1993. Since then, campus food pantries have become more widespread, although little is known about them. The current study examined how college pantries best serve students and foster their success. Twenty-eight food pantry directors and staff from across sixteen Michigan college campuses engaged in concept mapping, a technique used to examine the interrelationships among concepts understood by stakeholders. Analyses identified six concepts, examined the importance of each concept as assigned by participants, and evaluated variation among institutions. Implications of the findings and future research directions are discussed.

    Advanced multivariate data analysis to determine the root cause of trisulfide bond formation in a novel antibody-peptide fusion

    Product quality heterogeneities, such as trisulfide bond (TSB) formation, can be influenced by multiple interacting process parameters. Identifying their root cause is a major challenge in biopharmaceutical production. To address this issue, this paper describes the novel application of advanced multivariate data analysis (MVDA) techniques to identify the process parameters influencing TSB formation in a novel recombinant antibody-peptide fusion expressed in mammalian cell culture. The screening dataset was generated with a high-throughput (HT) micro-bioreactor system (Ambr™ 15) using a design of experiments (DoE) approach. The complex dataset was first analyzed by developing a multiple linear regression model focusing solely on the DoE inputs, which identified temperature, pH and initial nutrient feed day as important process parameters influencing this quality attribute. To further scrutinize the dataset, a partial least squares model was subsequently built incorporating both on-line and off-line process parameters, which enabled accurate predictions of the TSB concentration at harvest. Process parameters identified by the models to promote and suppress TSB formation were implemented on five 7 L bioreactors, and the resultant TSB concentrations were comparable to the model predictions. This study demonstrates the ability of MVDA to enable predictions of the key performance drivers influencing TSB formation that remain valid upon scale-up.
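
    A minimal sketch of the partial least squares step described above, using scikit-learn with synthetic stand-in data (the matrix shapes, parameter count and effect sizes are assumptions, not values from the study):

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(2)
    # Toy stand-in for a micro-bioreactor screening dataset: rows = runs,
    # columns = on-line and off-line process parameters (temperature, pH, feed day, ...).
    X = rng.normal(size=(24, 10))
    tsb = 0.8 * X[:, 0] - 0.5 * X[:, 1] + 0.3 * X[:, 2] + rng.normal(0, 0.2, 24)

    pls = PLSRegression(n_components=3)
    r2_cv = cross_val_score(pls, X, tsb, cv=4)    # cross-validated R^2 of the PLS model
    pls.fit(X, tsb)

    # Loadings on the first latent variable indicate which parameters drive TSB levels.
    print("cross-validated R^2:", np.round(r2_cv, 2))
    print("X loadings (LV1):", np.round(pls.x_loadings_[:, 0], 2))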

    The influence of lexical selection disruptions on articulation

    Interactive models of language production predict that it should be possible to observe long-distance interactions: effects that arise at one level of processing influence multiple subsequent stages of representation and processing. We examine the hypothesis that disruptions arising in non-form-based levels of planning—specifically, lexical selection—should modulate articulatory processing. A novel automatic phonetic analysis method was used to examine productions in a paradigm yielding both general disruptions to formulation processes and, more specifically, overt errors during lexical selection. This analysis method allowed us to examine articulatory disruptions at multiple levels of analysis, from whole words to individual segments. Baseline performance by young adults was contrasted with young speakers' performance under time pressure (which previous work has argued increases interaction between planning and articulation) and performance by older adults (who may have difficulties inhibiting nontarget representations, leading to heightened interactive effects). The results revealed the presence of interactive effects. Our new analysis techniques revealed that these effects were strongest in initial portions of responses, suggesting that speech is initiated as soon as the first segment has been planned. Interactive effects did not increase under response pressure, suggesting that interaction between planning and articulation is relatively fixed. Unexpectedly, lexical selection disruptions appeared to yield some degree of facilitation in articulatory processing (possibly reflecting semantic facilitation of target retrieval), and older adults showed weaker, not stronger, interactive effects (possibly reflecting weakened connections between lexical and form-level representations).
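
    One simple way to quantify segment-level articulatory disruption, in the spirit of (though not identical to) the automatic phonetic analysis described above, is to z-score segment durations from a disruption condition against baseline durations, position by position within the response; the values below are invented purely for illustration.

    import numpy as np

    # Segment durations (ms) per word position, from hypothetical forced alignments.
    baseline  = {0: [62, 58, 65, 60], 1: [80, 83, 78], 2: [95, 91, 99]}
    disrupted = {0: [74, 71, 77, 70], 1: [84, 81, 86], 2: [96, 93, 100]}

    for position in sorted(baseline):
        base = np.array(baseline[position], dtype=float)
        test = np.array(disrupted[position], dtype=float)
        # z-score the mean disrupted duration against the baseline distribution
        z = (test.mean() - base.mean()) / base.std(ddof=1)
        print(f"position {position}: lengthening z = {z:.2f}")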

    Predicting performance of constant flow depth filtration using constant pressure filtration data

    This paper describes a method of predicting constant flow filtration capacities using constant pressure datasets collected during the purification of several monoclonal antibodies through depth filtration. The method required characterisation of the fouling mechanism occurring in constant pressure filtration processes by evaluating the best fit of each of the classic and combined theoretical fouling models. The optimised coefficients of the various models were correlated with the corresponding capacities achieved during constant flow operation at the specific pressures applied during constant pressure operation for each centrate. Of the classic and combined fouling models investigated, the Cake-Adsorption fouling model was found to best describe the fouling mechanisms observed for each centrate at the various pressures investigated. A linear regression model was generated with these coefficients and was shown to accurately predict the capacities at constant flow operation at each pressure. This model was subsequently validated using an additional centrate and accurately predicted the constant flow capacities at three different pressures (0.69, 1.03 and 1.38 bar). The model used the optimised Cake-Adsorption model coefficients that best described the flux decline during constant pressure operation. The proposed method of predicting depth filtration performance proved to be faster than the traditional approach whilst requiring significantly less material, making it particularly attractive for early process development activities.
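
    The two-stage workflow described above can be sketched as follows, with a simplified flux-decline expression standing in for the published Cake-Adsorption model and entirely synthetic data: fit the constant pressure flux curve for each centrate, then regress measured constant flow capacity on the fitted fouling coefficients.

    import numpy as np
    from scipy.optimize import curve_fit
    from sklearn.linear_model import LinearRegression

    def flux_decline(t, j0, k_cake, k_ads):
        # Illustrative combined decline term (cake build-up plus adsorption);
        # not the exact published Cake-Adsorption equation.
        return j0 / (1.0 + k_cake * t) * np.exp(-k_ads * t)

    rng = np.random.default_rng(3)
    t = np.linspace(0, 60, 61)                    # minutes of constant pressure operation

    coeffs, capacities = [], []
    for true_cake, true_ads, capacity in [(0.02, 0.004, 180), (0.05, 0.002, 120),
                                          (0.03, 0.006, 150), (0.08, 0.001, 90)]:
        flux = flux_decline(t, 100, true_cake, true_ads) + rng.normal(0, 1, t.size)
        p, _ = curve_fit(flux_decline, t, flux, p0=[100, 0.01, 0.001])
        coeffs.append(p[1:])                      # keep the fitted fouling coefficients
        capacities.append(capacity)               # measured constant flow capacity (L/m^2)

    model = LinearRegression().fit(np.array(coeffs), capacities)
    print("predicted capacity for a new centrate:", model.predict([[0.04, 0.003]]).round(1))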

    A quantitative PCR method to detect blood microRNAs associated with tumorigenesis in transgenic mice

    MicroRNA (miRNA) dysregulation frequently occurs in cancer. Analysis of whole blood miRNA in tumor models has not been widely reported, but could potentially lead to novel assays for early detection and monitoring of cancer. To determine whether miRNAs associated with malignancy could be detected in the peripheral blood, we used real-time reverse transcriptase-PCR to determine miRNA profiles in whole blood obtained from transgenic mice with c-MYC-induced lymphoma, hepatocellular carcinoma and osteosarcoma. The PCR-based assays used in our studies require only 10 nanograms of total RNA, allowing serial mini-profiles (20–30 miRNAs) to be carried out on individual animals over time. Blood miRNAs were measured from mice at different stages of MYC-induced lymphomagenesis and regression. Unsupervised hierarchical clustering of the data identified specific miRNA expression profiles that correlated with tumor type and stage. The miRNAs found to be altered in the blood of mice with tumors frequently reverted to normal levels upon tumor regression. Our results suggest that specific changes in blood miRNA can be detected during tumorigenesis and tumor regression.
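
    The unsupervised hierarchical clustering step can be sketched as follows, using synthetic normalized expression values and an illustrative miRNA panel (not the study's data or code):

    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    rng = np.random.default_rng(4)
    mirnas = [f"miR-{i}" for i in (16, 21, 150, 155, 223, 451)]   # illustrative panel

    # Rows = individual mice, columns = normalized expression values for each miRNA.
    profiles = np.vstack([
        rng.normal(0.0, 0.3, (5, len(mirnas))),   # pre-tumor baseline
        rng.normal(1.5, 0.3, (5, len(mirnas))),   # tumor-bearing
        rng.normal(0.2, 0.3, (5, len(mirnas))),   # after regression
    ])

    # Ward linkage on Euclidean distances, then cut the tree into two clusters
    # (tumor-bearing versus non-tumor-bearing profiles).
    tree = linkage(profiles, method="ward")
    labels = fcluster(tree, t=2, criterion="maxclust")
    print("cluster assignment per mouse:", labels)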

    Advanced control strategies for bioprocess chromatography: Challenges and opportunities for intensified processes and next generation products

    Recent advances in process analytical technologies and modelling techniques present opportunities to improve industrial chromatography control strategies to enhance process robustness, increase productivity and move towards real-time release testing. This paper provides a critical overview of batch and continuous industrial chromatography control systems for therapeutic protein purification. Firstly, the limitations of conventional industrial fractionation control strategies using in-line UV spectroscopy and on-line HPLC are outlined. Following this, an evaluation of monitoring and control techniques showing promise within research, process development and manufacturing is provided. These novel control strategies combine rapid in-line data capture (e.g. NIR, MALS and variable pathlength UV) with enhanced process understanding obtained from mechanistic and empirical modelling techniques. Finally, a summary of the future state of industrial chromatography control systems is proposed, including strategies to control buffer formulation, product fractionation, column switching and column fouling. The implementation of these control systems improves process capabilities to fulfil product quality criteria as processes are scaled, transferred and operated, thus fast-tracking the delivery of new medicines to market.
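
    As a toy illustration of signal-driven fractionation control (the signal, thresholds and pooling rule are assumptions made for the sketch, not an industrial control system), the following selects pooling cut points from an in-line elution trace rather than collecting fixed volumes:

    import numpy as np

    def pooling_window(signal, start_threshold=0.05, stop_threshold=0.05):
        """Return (start, stop) indices between which product is pooled."""
        above = signal >= start_threshold
        if not above.any():
            return None
        start = int(np.argmax(above))                       # first point above threshold
        after_peak = np.arange(signal.size) > int(np.argmax(signal))
        below = after_peak & (signal < stop_threshold)      # decay back below threshold
        stop = int(np.argmax(below)) if below.any() else signal.size - 1
        return start, stop

    volume = np.linspace(0, 10, 500)                        # column volumes
    uv = np.exp(-((volume - 5.0) ** 2) / 0.5)               # toy in-line elution trace (AU)
    start, stop = pooling_window(uv)
    print(f"pool from {volume[start]:.2f} to {volume[stop]:.2f} column volumes")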