
    investr: An R Package for Inverse Estimation

    Inverse estimation is a classical and well-known problem in regression. In simple terms, it involves using an observed value of the response to make inferences about the corresponding unknown value of the explanatory variable. To our knowledge, however, statistical software has largely lacked the capability to analyze these types of problems. In this paper, we introduce investr (which stands for inverse estimation in R), a package for solving inverse estimation problems in both linear and nonlinear regression models.
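
    As a rough illustration of the underlying idea (in Python rather than R, and not the investr API), one can fit a simple linear regression and algebraically invert the fitted line to recover a point estimate of the unknown predictor; all data and names below are placeholders:

        import numpy as np

        # Sketch of inverse estimation: fit y = b0 + b1*x, then invert the
        # fitted line to estimate the unknown x0 behind an observed response y0.
        rng = np.random.default_rng(0)
        x = np.linspace(0, 10, 25)
        y = 2.0 + 1.5 * x + rng.normal(scale=0.5, size=x.size)

        b1, b0 = np.polyfit(x, y, 1)   # slope, intercept (highest degree first)
        y0 = 10.0                      # observed response
        x0_hat = (y0 - b0) / b1        # point estimate of the predictor
        print(f"estimated x0 = {x0_hat:.3f}")

    A real analysis would also quantify the uncertainty in the estimate of x0, which is the harder part of the problem and which this sketch omits.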

    Poly(2-cyclopropyl-2-oxazoline): From Rate Acceleration by Cyclopropyl to Thermoresponsive Properties

    The synthesis and microwave-assisted living cationic ring-opening polymerization of 2-cyclopropyl-2-oxazoline is reported, revealing the fastest polymerization for an aliphatic-substituted 2-oxazoline to date, which is ascribed to the electron-withdrawing effect of the cyclopropyl group. Poly(2-cyclopropyl-2-oxazoline) (pCPropOx) represents an alternative thermoresponsive poly(2-oxazoline) with a reversible critical temperature close to body temperature. The cloud point (CP) of the obtained pCPropOx in aqueous solution was evaluated in detail by turbidimetry, dynamic light scattering (DLS) and viscosity measurements. pCPropOx is amorphous with a significantly higher glass transition temperature (Tg ≈ 80 °C) than the amorphous poly(2-n-propyl-2-oxazoline) (pnPropOx) (Tg ≈ 40 °C), while poly(2-isopropyl-2-oxazoline) (piPropOx) is semicrystalline. In addition, a pCPropOx comb polymer was prepared by methacrylic acid end-capping of the living cationic species followed by RAFT polymerization of the macromonomer. The polymer architecture does not influence the concentration dependence of the CP; however, both the CP and Tg of the comb polymer are lower due to the increased number of hydrophobic end groups.

    Moving from Knowledge to Action: Blogging research and outcome highlights

    This blog book tells the story of the CCAFS research theme ‘Linking Knowledge with Action’: its approach and lessons learned over the years, especially 2014, illustrating its many achievements through blogs and photos.

    The Effects of Individual Differences, Non‐Stationarity, and the Importance of Data Partitioning Decisions for Training and Testing of EEG Cross‐Participant Models

    EEG-based deep learning models have trended toward models that are designed to perform classification on any individual (cross-participant models). However, because EEG varies across participants due to non-stationarity and individual differences, certain guidelines must be followed for partitioning data into training, validation, and testing sets in order for cross-participant models to avoid overestimation of model accuracy. Despite this necessity, the majority of EEG-based cross-participant models have not adopted such guidelines. Furthermore, some data repositories may unwittingly contribute to the problem by providing partitioned test and non-test datasets for reasons such as competition support. In this study, we demonstrate how improper dataset partitioning, and the resulting improper training, validation, and testing of a cross-participant model, leads to overestimated model accuracy. We demonstrate this both mathematically and empirically using five publicly available datasets. To build the cross-participant models for these datasets, we replicate published results and demonstrate how the model accuracies are significantly reduced when proper EEG cross-participant model guidelines are followed. Our empirical results show that by not following these guidelines, error rates of cross-participant models can be underestimated by between 35% and 3900%. This misrepresentation of model performance for the general population potentially slows scientific progress toward truly high-performing classification models.
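
    A minimal sketch of the partitioning guideline, using scikit-learn's GroupShuffleSplit with participant IDs as the grouping variable (all data and dimensions below are hypothetical):

        import numpy as np
        from sklearn.model_selection import GroupShuffleSplit

        # Hypothetical data: X holds EEG feature vectors, y the class labels,
        # and groups the participant ID of each epoch.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(1000, 64))
        y = rng.integers(0, 2, size=1000)
        groups = rng.integers(0, 20, size=1000)   # 20 participants

        # Split so each participant's epochs fall entirely in train OR test;
        # splitting epochs at random instead leaks participant-specific signal
        # into the test set and overestimates cross-participant accuracy.
        splitter = GroupShuffleSplit(n_splits=1, test_size=0.25, random_state=0)
        train_idx, test_idx = next(splitter.split(X, y, groups))
        assert set(groups[train_idx]).isdisjoint(set(groups[test_idx]))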

    A Proposed Methodology to Characterize the Accuracy of Life Cycle Cost Estimates for DoD Programs

    For decades, the DoD has employed numerous reporting and monitoring tools for characterizing the acquisition cost of its major programs. These tools have resulted in dozens of studies thoroughly documenting the magnitude and extent of DoD acquisition cost growth. Curiously, though, there have been extremely few studies regarding the behavior of the other cost component of a system's life cycle: Operating and Support (O&S) costs. This is particularly strange considering that O&S costs tend to dominate the total life cycle cost (LCC) of a program, and that LCCs are widely regarded as the preferred metric for assessing actual program value. The upshot of not examining such costs is that the DoD has little knowledge of how LCC estimates behave over time, and virtually no insight regarding their accuracy. In recent years, however, enough quality LCC data has amassed to conduct a study to address these deficiencies. This paper describes a method for conducting such a study, and represents (to the authors’ knowledge) the first broad-based attempt to do so. The results not only promise insights into the nature of current LCC estimates, but also suggest the possibility of improving the accuracy of DoD LCC estimates via a stochastically based model.
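
    As a sketch of what a stochastic characterization of estimate accuracy could look like (the distributional choice and the data below are illustrative assumptions, not the authors' model):

        import numpy as np
        from scipy import stats

        # Hypothetical growth factors: actual LCC divided by the initial
        # estimate for a sample of programs (values invented for illustration).
        growth = np.array([1.10, 0.95, 1.30, 1.85, 1.05, 1.42, 0.88, 1.20])

        # A lognormal is a common choice for cost-growth factors, since they
        # are positive and right-skewed.
        shape, loc, scale = stats.lognorm.fit(growth, floc=0)
        dist = stats.lognorm(shape, loc, scale)
        print(f"median growth factor: {dist.median():.2f}")
        print(f"80% interval: {dist.ppf(0.10):.2f} to {dist.ppf(0.90):.2f}")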

    Extending Critical Infrastructure Element Longevity using Constellation-based ID Verification

    This work supports a technical cradle-to-grave protection strategy aimed at extending the useful lifespan of Critical Infrastructure (CI) elements. This is done by improving mid-life operational protection measures through integration of reliable physical (PHY) layer security mechanisms. The goal is to improve existing protection that is heavily reliant on higher-layer mechanisms that are commonly targeted by cyberattack. Relative to prior device ID discrimination works, results herein reinforce the exploitability of constellation-based PHY layer features and the ability of those features to be practically implemented to enhance CI security. Prior work is extended by formalizing a device ID verification process that enables rogue device detection demonstration under physical access attack conditions, including unauthorized devices mimicking bit-level credentials of authorized network devices. The work transitions from distance-based to probability-based measures of similarity derived from empirical Multivariate Normal Probability Density Function (MVNPDF) statistics of multiple discriminant analysis radio frequency fingerprint projections. Demonstration results for Constellation-Based Distinct Native Attribute (CB-DNA) fingerprinting of WirelessHART adapters from two manufacturers include 1) average cross-class percent correct classification of %C > 90% across 28 different networks comprised of six authorized devices, and 2) an average rogue rejection rate of 83.4% ≤ RRR ≤ 99.9% based on two held-out devices serving as attacking rogue devices for each network (a total of 120 individual rogue attacks). Using the MVNPDF measure proved most effective and yielded nearly 12% RRR improvement over a Euclidean distance measure.
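
    The probability-based verification step can be sketched as follows: fit a multivariate normal to an authorized device's fingerprint projections, then accept a claimed identity only if the claimant's log-likelihood under that model clears a threshold. The data, dimensions, and threshold rule here are illustrative, not the paper's trained statistics:

        import numpy as np
        from scipy.stats import multivariate_normal

        rng = np.random.default_rng(1)
        enroll = rng.normal(size=(200, 5))    # authorized device's projections
        mu = enroll.mean(axis=0)
        cov = np.cov(enroll, rowvar=False)
        model = multivariate_normal(mean=mu, cov=cov)

        # Set the acceptance threshold from the authorized device's own
        # samples, e.g. the 1st percentile of its log-likelihoods.
        threshold = np.percentile(model.logpdf(enroll), 1)

        claim = rng.normal(loc=2.0, size=5)   # rogue presenting this device's ID
        print("accept" if model.logpdf(claim) >= threshold else "reject")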

    Lessons in Theory of Change from the Introductory Training on Theories of Change, Impact Pathways and Monitoring & Evaluation

    The CGIAR Research Program on Climate Change, Agriculture and Food Security (CCAFS) is using theory of change (TOC) planning to specify research outputs, the partnerships needed to produce the outputs, and a plausible hypothesis on how these outputs will contribute to CCAFS and CGIAR development outcomes. This brief reports back from the first training workshop with the team on developing TOCs, impact pathways, and monitoring and evaluation.

    Deep Long Short-term Memory Structures Model Temporal Dependencies Improving Cognitive Workload Estimation

    Using deep recurrent neural networks to account for temporal dependence in electroencephalograph (EEG)-based workload estimation is shown to considerably improve day-to-day feature stationarity, resulting in significantly higher accuracy (p < .0001) than classifiers which do not consider the temporal dependence encoded within the EEG time-series signal. This improvement is demonstrated by training several models, including deep Recurrent Neural Network (RNN) and Long Short-Term Memory (LSTM) architectures, a feedforward Artificial Neural Network (ANN), and Support Vector Machine (SVM) models, on data from six participants who each performed several Multi-Attribute Task Battery (MATB) sessions on five separate days spread out over a month-long period. Each participant-specific classifier is trained on the first four days of data and tested on the fifth's. An average classification accuracy of 93.0% is achieved using a deep LSTM architecture. These results represent a 59% decrease in error compared to the best previously published results for this dataset. This study additionally evaluates the significance of new features: all combinations of the mean, variance, skewness, and kurtosis of EEG frequency-domain power distributions. Mean and variance are statistically significant features, while skewness and kurtosis are not. The overall performance of this approach is high enough to warrant evaluation for inclusion in operational systems.
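
    A minimal sketch of the evaluated feature set, the four statistical moments of an EEG channel's frequency-domain power distribution, with a synthetic signal and an assumed sampling rate standing in for real data:

        import numpy as np
        from scipy import signal, stats

        fs = 256                                # assumed sampling rate (Hz)
        rng = np.random.default_rng(0)
        eeg = rng.normal(size=10 * fs)          # ten seconds of one channel

        # Power spectral density, then the four moments of its distribution.
        freqs, psd = signal.welch(eeg, fs=fs, nperseg=fs)
        features = np.array([
            psd.mean(),            # statistically significant per the study
            psd.var(),             # statistically significant per the study
            stats.skew(psd),       # not significant per the study
            stats.kurtosis(psd),   # not significant per the study
        ])
        print(features)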

    Taming the Hurricane of Acquisition Cost Growth – Or At Least Predicting It

    Cost growth is a persistent adversary to efficient budgeting in the Department of Defense. Despite myriad studies to uncover the causes of this cost growth, few of the proposed remedies have made a meaningful impact. A key reason may be that DoD cost estimates are formulated using the highly unrealistic assumption that a program’s current baseline characteristics will not change in the future. Using a weather forecasting analogy, the authors demonstrate how a statistical approach may be used to account for these inevitable baseline changes and identify related cost growth trends. These trends are then used to reduce the error in initial acquisition cost estimates by over one third for major defense acquisition programs, representing a more efficient allocation of $6 billion annually.
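
    One way such a statistical adjustment could look (the predictors, data, and linear form below are illustrative assumptions, not the authors' model): regress observed cost growth on measures of baseline change, then apply the fitted trend to a new program's initial estimate:

        import numpy as np

        rng = np.random.default_rng(0)
        n = 40
        qty_change = rng.normal(size=n)         # hypothetical quantity change
        sched_slip = rng.normal(size=n)         # hypothetical schedule slip
        growth = (1.2 + 0.3 * qty_change + 0.2 * sched_slip
                  + rng.normal(scale=0.1, size=n))

        # Ordinary least squares: growth ~ 1 + qty_change + sched_slip
        X = np.column_stack([np.ones(n), qty_change, sched_slip])
        coef, *_ = np.linalg.lstsq(X, growth, rcond=None)

        new_program = np.array([1.0, 0.5, 1.0])  # expected baseline changes
        adjusted = 100.0 * (coef @ new_program)  # adjust a $100M estimate
        print(f"adjusted estimate: ${adjusted:.1f}M")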

    Tryptophan Degradation in Women with Breast Cancer: A Pilot Study

    Altered tryptophan metabolism and indoleamine 2,3-dioxygenase activity are linked to cancer development and progression. In addition, these biological factors have been associated with the development and severity of neuropsychiatric syndromes, including major depressive disorder. However, this biological mechanism, associated with both poor disease outcomes and adverse neuropsychiatric symptoms, has received little attention in women with breast cancer. Therefore, a pilot study was undertaken to compare levels of tryptophan and other proteins involved in tryptophan degradation in women with breast cancer to those in women without cancer and, secondarily, to examine levels in women with breast cancer over the course of chemotherapy.