
    SLOPE - Adaptive variable selection via convex optimization

    We introduce a new estimator for the vector of coefficients $\beta$ in the linear model $y = X\beta + z$, where $X$ has dimensions $n \times p$ with $p$ possibly larger than $n$. SLOPE, short for Sorted L-One Penalized Estimation, is the solution to $\min_{b\in\mathbb{R}^p} \tfrac{1}{2}\Vert y - Xb\Vert_{\ell_2}^2 + \lambda_1\vert b\vert_{(1)} + \lambda_2\vert b\vert_{(2)} + \cdots + \lambda_p\vert b\vert_{(p)}$, where $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_p \ge 0$ and $\vert b\vert_{(1)} \ge \vert b\vert_{(2)} \ge \cdots \ge \vert b\vert_{(p)}$ are the decreasing absolute values of the entries of $b$. This is a convex program, and we demonstrate a solution algorithm whose computational complexity is roughly comparable to that of classical $\ell_1$ procedures such as the Lasso. Here, the regularizer is a sorted $\ell_1$ norm, which penalizes the regression coefficients according to their rank: the higher the rank (that is, the stronger the signal), the larger the penalty. This is similar to the Benjamini and Hochberg [J. Roy. Statist. Soc. Ser. B 57 (1995) 289-300] procedure (BH), which compares more significant $p$-values with more stringent thresholds. One notable choice of the sequence $\{\lambda_i\}$ is given by the BH critical values $\lambda_{\mathrm{BH}}(i) = z(1 - iq/(2p))$, where $q \in (0,1)$ and $z(\alpha)$ is the $\alpha$th quantile of a standard normal distribution. SLOPE aims to provide finite sample guarantees on the selected model; of special interest is the false discovery rate (FDR), defined as the expected proportion of irrelevant regressors among all selected predictors. Under orthogonal designs, SLOPE with $\lambda_{\mathrm{BH}}$ provably controls FDR at level $q$. Moreover, it also appears to have appreciable inferential properties under more general designs $X$ while having substantial power, as demonstrated in a series of experiments running on both simulated and real data.
    Comment: Published at http://dx.doi.org/10.1214/15-AOAS842 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org).
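    For concreteness, the sketch below assembles the pieces the abstract describes in NumPy/SciPy: the BH-style $\lambda$ sequence, the sorted $\ell_1$ penalty, and its proximal operator (the building block of a Lasso-like proximal-gradient solver). The function names and the simple pool-adjacent-violators prox are illustrative, not the authors' reference implementation.

        # Illustrative sketch of SLOPE's ingredients (not the authors' code).
        import numpy as np
        from scipy.stats import norm

        def lambda_bh(p, q):
            """BH critical values: lambda_i = z(1 - i*q/(2p)), i = 1..p."""
            i = np.arange(1, p + 1)
            return norm.ppf(1.0 - i * q / (2.0 * p))

        def sorted_l1(b, lam):
            """Sorted-L1 norm: sum_i lam_i * |b|_(i), with |b| sorted decreasingly."""
            return np.sum(lam * np.sort(np.abs(b))[::-1])

        def prox_sorted_l1(v, lam):
            """argmin_b 0.5*||b - v||^2 + sum_i lam_i * |b|_(i)."""
            sign = np.sign(v)
            order = np.argsort(np.abs(v))[::-1]         # sort |v| in decreasing order
            w = np.abs(v)[order] - lam                  # soft-shifted magnitudes
            # Pool adjacent violators: project onto the nonincreasing-sequence cone.
            sums, counts = [], []
            for x in w:
                sums.append(float(x)); counts.append(1)
                while len(sums) > 1 and sums[-1] / counts[-1] > sums[-2] / counts[-2]:
                    sums[-2] += sums[-1]; counts[-2] += counts[-1]
                    sums.pop(); counts.pop()
            fitted = np.concatenate([np.full(c, s / c) for s, c in zip(sums, counts)])
            fitted = np.clip(fitted, 0.0, None)         # magnitudes cannot be negative
            out = np.empty_like(w)
            out[order] = fitted
            return sign * out

        def slope_objective(b, X, y, lam):
            return 0.5 * np.sum((y - X @ b) ** 2) + sorted_l1(b, lam)

    With prox_sorted_l1 in hand, a plain proximal-gradient iteration, b = prox_sorted_l1(b - t * X.T @ (X @ b - y), t * lam) with step size t, minimizes the SLOPE objective, which is consistent with the abstract's claim of Lasso-comparable computational cost.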

    Transformations of High-Level Synthesis Codes for High-Performance Computing

    Specialized hardware architectures promise a major step in performance and energy efficiency over the traditional load/store devices currently employed in large-scale computing systems. The adoption of high-level synthesis (HLS) from languages such as C/C++ and OpenCL has greatly increased programmer productivity when designing for such platforms. While this has enabled a wider audience to target specialized hardware, the optimization principles known from traditional software design are no longer sufficient to implement high-performance codes. Fast and efficient codes for reconfigurable platforms are thus still challenging to design. To alleviate this, we present a set of optimizing transformations for HLS, targeting scalable and efficient architectures for high-performance computing (HPC) applications. Our work provides a toolbox for developers, in which we systematically identify classes of transformations, the characteristics of their effect on the HLS code and the resulting hardware (e.g., increased data reuse or resource consumption), and the objectives that each transformation can target (e.g., resolving interface contention or increasing parallelism). We show how these can be used to efficiently exploit pipelining, on-chip distributed fast memory, and on-chip streaming dataflow, allowing for massively parallel architectures. To quantify the effect of our transformations, we use them to optimize a set of throughput-oriented FPGA kernels, demonstrating that our enhancements are sufficient to scale up parallelism within the hardware constraints. With the transformations covered, we hope to establish a common framework for performance engineers, compiler developers, and hardware developers to tap into the performance potential offered by specialized hardware architectures using HLS.
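    The transformations themselves are applied to C/C++/OpenCL HLS code, but the toy Python sketch below illustrates the idea behind one such class, data reuse through a small on-chip buffer for a sliding-window kernel. The function names and the 3-point stencil are assumptions made for illustration only.

        # Conceptual illustration of a data-reuse transformation (not HLS code).
        import collections

        def stencil_naive(x):
            # Naive form: every output re-reads three neighbouring inputs, so a
            # pipelined hardware version would issue redundant accesses to the
            # (slow, off-chip) input array.
            return [(x[i - 1] + x[i] + x[i + 1]) / 3.0 for i in range(1, len(x) - 1)]

        def stencil_buffered(stream):
            # Transformed form: a small shift register keeps the reuse window in
            # fast on-chip memory, so each input element is read exactly once and
            # the kernel can consume a continuous stream.
            window = collections.deque(maxlen=3)
            for value in stream:
                window.append(value)
                if len(window) == 3:
                    yield (window[0] + window[1] + window[2]) / 3.0

        x = [float(i) for i in range(16)]
        assert stencil_naive(x) == list(stencil_buffered(x))

    The same reuse-buffer pattern underlies the line buffers and shift registers commonly generated for stencil and convolution kernels on FPGAs.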

    Functional Regression

    Functional data analysis (FDA) involves the analysis of data whose ideal units of observation are functions defined on some continuous domain, and the observed data consist of a sample of functions taken from some population, sampled on a discrete grid. Ramsay and Silverman's 1997 textbook sparked the development of this field, which has accelerated in the past 10 years to become one of the fastest growing areas of statistics, fueled by the growing number of applications yielding this type of data. One unique characteristic of FDA is the need to combine information both across and within functions, which Ramsay and Silverman called replication and regularization, respectively. This article will focus on functional regression, the area of FDA that has received the most attention in applications and methodological development. First will be an introduction to basis functions, key building blocks for regularization in functional regression methods, followed by an overview of functional regression methods, split into three types: (1) functional predictor regression (scalar-on-function), (2) functional response regression (function-on-scalar) and (3) function-on-function regression. For each, the role of replication and regularization will be discussed and the methodological development described in a roughly chronological manner, at times deviating from the historical timeline to group together similar methods. The primary focus is on modeling and methodology, highlighting the modeling structures that have been developed and the various regularization approaches employed. At the end is a brief discussion describing potential areas of future development in this field.
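    To make the scalar-on-function case concrete, the sketch below expands the coefficient function in a small basis and reduces the functional model to an ordinary penalized least-squares problem; the Fourier basis, the simulated data, and the plain ridge penalty (standing in for the roughness penalties used in practice) are illustrative assumptions, not a specific method from the literature surveyed here.

        # Toy scalar-on-function regression via a basis expansion (illustrative only).
        import numpy as np

        rng = np.random.default_rng(0)
        n, m, K = 100, 50, 7                      # curves, grid points, basis functions
        t = np.linspace(0.0, 1.0, m)              # common sampling grid on [0, 1]
        dt = t[1] - t[0]

        def fourier_basis(t, K):
            """A small Fourier basis: one kind of building block for regularization."""
            cols = [np.ones_like(t)]
            for k in range(1, (K + 1) // 2 + 1):
                cols += [np.sin(2 * np.pi * k * t), np.cos(2 * np.pi * k * t)]
            return np.column_stack(cols)[:, :K]

        B = fourier_basis(t, K)                   # m x K basis matrix
        X = rng.standard_normal((n, m))           # functional predictors X_i(t_j) on the grid
        beta_true = np.sin(2 * np.pi * t)         # a smooth "true" coefficient function
        y = X @ beta_true * dt + 0.1 * rng.standard_normal(n)   # y_i ~ integral of X_i * beta + noise

        # Reduce to a K-dimensional regression y ~ Z c with Z[i, k] ~ integral of X_i * B_k,
        # then shrink c (the "regularization" half of FDA, here simple ridge shrinkage).
        Z = X @ B * dt
        alpha = 1e-3
        c = np.linalg.solve(Z.T @ Z + alpha * np.eye(K), Z.T @ y)
        beta_hat = B @ c                          # estimated coefficient function on the grid

    Functional response and function-on-function models follow the same pattern, with basis expansions on the response side, the predictor side, or both.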

    Communication channel analysis and real time compressed sensing for high density neural recording devices

    Next generation neural recording and Brain-Machine Interface (BMI) devices call for high density or distributed systems with more than 1000 recording sites. As the recording site density grows, the device generates data on the scale of several hundred megabits per second (Mbps). Transmitting such large amounts of data induces significant power consumption and heat dissipation in the implanted electronics. Facing these constraints, efficient on-chip compression techniques become essential to reducing the power consumption of implanted systems. This paper analyzes the communication channel constraints for high density neural recording devices, quantifies the improvement to the communication channel offered by efficient on-chip compression methods, and finally describes a Compressed Sensing (CS) based system that can reduce the data rate by more than 10x while using power on the order of a few hundred nW per recording channel.
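    As a concrete illustration of the compressed-sensing idea, the toy NumPy sketch below keeps the implant-side work to a single small matrix-vector product per data window and leaves reconstruction to the receiver; the window length, the ±1 Bernoulli sensing matrix, the sparsity assumption, and the ISTA solver are illustrative choices, not the paper's hardware design.

        # Toy compressed-sensing pipeline for a spike-like window (illustrative only).
        import numpy as np

        rng = np.random.default_rng(1)
        N, M = 256, 25                                   # window length vs. measurements (~10x compression)
        Phi = rng.choice([-1.0, 1.0], size=(M, N)) / np.sqrt(M)   # hardware-friendly +/-1 sensing matrix

        x = np.zeros(N)                                  # a synthetic window: mostly silent, a few spikes
        x[rng.choice(N, size=5, replace=False)] = 5.0 * rng.standard_normal(5)

        y = Phi @ x                                      # the only computation the implant must perform

        def ista(y, Phi, lam=0.05, iters=500):
            """Recover a sparse x from y = Phi x by iterative soft-thresholding."""
            L = np.linalg.norm(Phi, 2) ** 2              # Lipschitz constant of the quadratic term's gradient
            x_hat = np.zeros(Phi.shape[1])
            for _ in range(iters):
                z = x_hat - Phi.T @ (Phi @ x_hat - y) / L
                x_hat = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
            return x_hat

        x_hat = ista(y, Phi)
        print("compression ratio:", N / M,
              "relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))

    Only the Phi @ x product needs to run on the implant; the iterative reconstruction runs on the external receiver, where the power and heat budget is far less constrained.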

    MetaMesh: A hierarchical computational model for design and fabrication of biomimetic armored surfaces

    Many exoskeletons exhibit multifunctional performance by combining protection from rigid ceramic components with flexibility through articulated interfaces. Structure-to-function relationships of these natural bioarmors have been studied extensively, and initial development of structural (load-bearing) bioinspired armor materials, most often nacre-mimetic laminated composites, has been conducted. However, the translation of segmented and articulated armor to bioinspired surfaces and applications requires new computational constructs. We propose a novel hierarchical computational model, MetaMesh, that adapts a segmented fish scale armor system to fit complex “host surfaces”. We define a “host” surface as the overall geometrical form on top of which the scale units are computed. MetaMesh operates at three levels of resolution: (i) locally, to construct unit geometries based on shape parameters of scales as identified and characterized in the Polypterus senegalus exoskeleton; (ii) regionally, to encode articulated connection guides that adapt units to their neighbors according to a directional schema in the mesh; and (iii) globally, to generatively extend the unit assembly over arbitrarily curved surfaces through global mesh optimization using a functional coefficient gradient. Simulation results provide the basis for further physiological and kinetic development. This study provides a methodology for the generation of biomimetic protective surfaces using segmented, articulated components that maintain mobility alongside full body coverage.
    Funding: Massachusetts Institute of Technology, Institute for Soldier Nanotechnologies (Contract No. W911NF-13-D-0001); United States Army Research Office, Institute for Collaborative Biotechnologies (ICB) (Contract No. W911NF-09-D-0001); United States Department of Defense, National Security Science and Engineering Faculty Fellowship Program (Grant No. N00244-09-1-0064).
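    The sketch below is a purely schematic Python rendering of the three levels of resolution described above; every class, field, and the toy coefficient-gradient step is a hypothetical placeholder meant to show how the local, regional, and global levels might be organized, not MetaMesh's actual geometric machinery.

        # Hypothetical scaffold for a three-level (local/regional/global) scale model.
        from dataclasses import dataclass, field
        from typing import Callable, Dict, List, Tuple

        @dataclass
        class ScaleUnit:                        # local level: one armor unit
            shape_params: Dict[str, float]      # e.g. dimensions/angles measured from P. senegalus scales
            coefficient: float = 1.0            # functional coefficient (e.g. coverage vs. flexibility)

        @dataclass
        class ConnectionGuide:                  # regional level: articulation between neighboring units
            unit_a: int
            unit_b: int
            direction: str                      # directional schema in the mesh, e.g. "row" or "column"

        @dataclass
        class HostSurfaceMesh:                  # global level: units assembled over the host surface
            vertices: List[Tuple[float, float, float]]
            units: List[ScaleUnit] = field(default_factory=list)
            guides: List[ConnectionGuide] = field(default_factory=list)

            def assign_coefficient_gradient(self, value_at: Callable[[Tuple[float, float, float]], float]):
                # Vary each unit's functional coefficient over the host surface; in the
                # real model this is where the global mesh optimization would act.
                for unit, vertex in zip(self.units, self.vertices):
                    unit.coefficient = value_at(vertex)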

    A Multi-Faceted Approach to Enabling Large-Scale Science in a Microsat Constellation

    The Polarimeter to UNify the Corona and Heliosphere (PUNCH) mission is a constellation of microsatellites that combines advances in several areas of technology, enabling the use of simple imaging instrumentation to measure aspects of the outer corona and solar wind that have to date been inaccessible. The primary PUNCH measurement is the brightness and polarization state of light scattered by electrons entrained in solar wind features. This measurement is made possible within a Small Explorer budget by leveraging a combination of three key elements: (a) a constellation of four small satellites conducting synchronized observations, (b) the availability of low-cost off-the-shelf components, and (c) advanced and rigorous science data processing that enables the four microsats to produce 3D images as a single virtual observatory. This paper discusses the contribution of each of these key enablers and presents the overall status of this NASA Small Explorer mission, scheduled for launch in 2025.