8,336 research outputs found

    Analyses in Biology: an analytical alternative to traditional research projects


    Reweighted nuclear norm regularization: A SPARSEVA approach

    The aim of this paper is to develop a method to estimate high-order FIR and ARX models using least squares with reweighted nuclear norm regularization. Typically, the choice of the tuning parameter in the reweighting scheme is computationally expensive, so we propose the use of the SPARSEVA (SPARSe Estimation based on a VAlidation criterion) framework to overcome this problem. Furthermore, we suggest the use of the prediction error criterion (PEC) to select the tuning parameter in the SPARSEVA algorithm. Numerical examples demonstrate the effectiveness of this method, which has close ties to the traditional technique of cross-validation but requires far less computation. Comment: This paper is accepted and will be published in The Proceedings of the 17th IFAC Symposium on System Identification (SYSID 2015), Beijing, China, 2015.
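    To make the regularized estimation idea above concrete, here is a minimal, hypothetical sketch of one reweighted nuclear-norm step for a high-order FIR model, written with the cvxpy modelling library. It is not the authors' implementation: the toy data, the FIR order, the weighting scheme, and the SPARSEVA-style slack `eps` are illustrative assumptions, and the paper's PEC-based tuning is not reproduced.

```python
# Sketch (not the authors' code): one reweighted nuclear-norm step for
# high-order FIR estimation, with a SPARSEVA-style constraint instead of a
# hand-tuned regularization weight. Data, order and eps are assumptions.
import numpy as np
import cvxpy as cp
from scipy.linalg import toeplitz, hankel

rng = np.random.default_rng(0)
N, n = 200, 30                        # data length and FIR order (assumed)
u = rng.standard_normal(N)
g_true = 0.8 ** np.arange(n)          # toy impulse response
y = np.convolve(u, g_true)[:N] + 0.05 * rng.standard_normal(N)

# Regression matrix for y(t) ~ sum_k g_k u(t - k)
Phi = toeplitz(u, np.zeros(n))

# Plain least squares provides the SPARSEVA reference fit
g_ls, *_ = np.linalg.lstsq(Phi, y, rcond=None)
ls_res = np.linalg.norm(y - Phi @ g_ls)
eps = np.sqrt(2 * n / N)              # AIC-like slack, for illustration only

# Reweighting: weights built from the previous (least-squares) estimate
rows, cols = n // 2 + 1, n - n // 2
H_prev = hankel(g_ls[:rows], g_ls[rows - 1:])
U, s, Vt = np.linalg.svd(H_prev, full_matrices=False)
delta = 1e-3
W1 = U @ np.diag(1.0 / np.sqrt(s + delta)) @ U.T
W2 = Vt.T @ np.diag(1.0 / np.sqrt(s + delta)) @ Vt

# Minimise the weighted nuclear norm of the Hankel matrix of the impulse
# response, subject to staying close to the least-squares fit.
g = cp.Variable(n)
H = cp.hstack([cp.reshape(g[j:j + rows], (rows, 1)) for j in range(cols)])
problem = cp.Problem(
    cp.Minimize(cp.normNuc(W1 @ H @ W2)),
    [cp.norm(y - Phi @ g) <= (1 + eps) * ls_res],
)
problem.solve()
print("first 5 estimated taps:", np.round(g.value[:5], 3))
```

    In this constrained form the regularization strength never has to be searched over a grid: the validation-style bound on the residual plays the role of the tuning parameter, which is the computational saving the abstract refers to.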

    The Neural Representation Benchmark and its Evaluation on Brain and Machine

    A key requirement for the development of effective learning representations is their evaluation and comparison to representations we know to be effective. In natural sensory domains, the community has viewed the brain as a source of inspiration and as an implicit benchmark for success. However, it has not been possible to test representational learning algorithms directly against the representations contained in neural systems. Here, we propose a new benchmark for visual representations on which we have directly tested the neural representation in multiple visual cortical areas in macaque (utilizing data from [Majaj et al., 2012]), and on which any computer vision algorithm that produces a feature space can be tested. The benchmark measures the effectiveness of the neural or machine representation by computing the classification loss on the ordered eigendecomposition of a kernel matrix [Montavon et al., 2011]. In our analysis we find that the neural representation in visual area IT is superior to visual area V4. In our analysis of representational learning algorithms, we find that three-layer models approach the representational performance of V4 and the algorithm in [Le et al., 2012] surpasses the performance of V4. Impressively, we find that a recent supervised algorithm [Krizhevsky et al., 2012] achieves performance comparable to that of IT for an intermediate level of image variation difficulty, and surpasses IT at a higher difficulty level. We believe this result represents a major milestone: it is the first learning algorithm we have found that exceeds our current estimate of IT representation performance. We hope that this benchmark will assist the community in matching the representational performance of visual cortex and will serve as an initial rallying point for further correspondence between representations derived in brains and machines. Comment: The v1 version contained incorrectly computed kernel analysis curves and KA-AUC values for V4, IT, and the HT-L3 models. They have been corrected in this version.
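    The following is a rough sketch of the kernel-analysis idea referenced above (classification loss on the ordered eigendecomposition of a kernel matrix), not the benchmark's released code: the RBF kernel, the bandwidth heuristic, and the projection-based loss are illustrative assumptions standing in for the exact choices in [Montavon et al., 2011].

```python
# Illustrative kernel-analysis curve: project one-hot labels onto an
# increasing number of leading kernel eigenvectors and record the residual
# classification error. Kernel and loss choices here are assumptions.
import numpy as np

def kernel_analysis_curve(X, y, gamma=None):
    """Return the classification loss e(d) for d = 1..n leading components."""
    n = X.shape[0]
    classes = np.unique(y)
    T = (y[:, None] == classes[None, :]).astype(float)   # one-hot labels

    # RBF kernel matrix, centered in feature space
    sq = np.sum(X**2, axis=1)
    D2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    if gamma is None:
        gamma = 1.0 / np.median(D2[D2 > 0])               # bandwidth heuristic
    K = np.exp(-gamma * D2)
    C = np.eye(n) - np.ones((n, n)) / n
    K = C @ K @ C

    # Ordered eigendecomposition: largest eigenvalues first
    w, U = np.linalg.eigh(K)
    U = U[:, np.argsort(w)[::-1]]

    losses = []
    for d in range(1, n + 1):
        Ud = U[:, :d]
        T_hat = Ud @ (Ud.T @ T)                            # fit labels in the span
        pred = classes[np.argmax(T_hat, axis=1)]
        losses.append(np.mean(pred != y))
    return np.array(losses)

# Toy usage: two Gaussian blobs; lower area under the curve means the
# representation separates the classes with fewer kernel components.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 10)), rng.normal(2, 1, (50, 10))])
y = np.array([0] * 50 + [1] * 50)
curve = kernel_analysis_curve(X, y)
print("illustrative KA-AUC:", curve.mean())
```

    A representation that makes the task easy concentrates the label-relevant structure in the leading kernel components, so its loss curve drops quickly and its area under the curve is small; that summary number is what allows neural and machine feature spaces to be compared on the same footing.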

    Healthcare Price Transparency: Policy Approaches and Estimated Impacts on Spending

    Healthcare price transparency discussions typically focus on increasing patients' access to information about their out-of-pocket costs, but that focus is too narrow and should include other audiences -- physicians, employers, health plans and policymakers -- each with distinct needs and uses for healthcare price information. Greater price transparency can reduce U.S. healthcare spending. For example, an estimated $100 billion could be saved over the next 10 years if three select interventions were undertaken. However, most of the projected savings come from making price information available to employers and physicians, according to an analysis by researchers at the former Center for Studying Health System Change (HSC). Based on the current availability and modest impact of plan-based transparency tools, requiring all private plans to provide personalized out-of-pocket price data to enrollees would reduce total health spending by an estimated $18 billion over the next decade. While $18 billion is a substantial dollar amount, it is less than a tenth of a percent of the $40 trillion in total projected health spending over the same period. In contrast, using state all-payer claims databases to gather and report hospital-specific prices might reduce spending by an estimated $61 billion over 10 years. The effects of price transparency depend critically on the intended audience, the decision-making context and how prices are presented. And the impact of price transparency can be greatly amplified if target audiences are able and motivated to act on the information. Simply providing prices is insufficient to control spending without other shifts in healthcare financing, including changes in benefit design to make patients more sensitive to price differences among providers and alternative treatments. Other reforms that can amplify the impact of price transparency include shifting from fee-for-service payments that reward providers for volume to payment methods that put providers at risk for spending for episodes of care or defined patient populations. While price transparency alone seems unlikely to transform the healthcare system, it can play a needed role in enabling effective reforms in value-based benefit design and provider payment.
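    A quick arithmetic check of the "tenth of a percent" comparison in the abstract, using only the figures quoted there:

```python
# $18 billion in estimated savings against roughly $40 trillion in projected
# spending over the same period (figures taken from the abstract above).
savings = 18e9
projected_spending = 40e12
print(f"{savings / projected_spending:.3%}")   # 0.045%, i.e. under a tenth of a percent
```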