    Does generalization performance of $\ell^q$ regularization learning depend on $q$? A negative example

    $\ell^q$-regularization has been demonstrated to be an attractive technique in machine learning and statistical modeling. It attempts to improve the generalization (prediction) capability of a machine (model) by appropriately shrinking its coefficients. The shape of an $\ell^q$ estimator differs with the choice of the regularization order $q$. In particular, $\ell^1$ leads to the LASSO estimate, while $\ell^2$ corresponds to smooth ridge regression. This makes the order $q$ a potential tuning parameter in applications. To facilitate the use of $\ell^q$-regularization, we seek a modeling strategy in which an elaborate selection of $q$ is avoidable. In this spirit, we place our investigation within a general framework of $\ell^q$-regularized kernel learning under a sample-dependent hypothesis space (SDHS). For a designated class of kernel functions, we show that all $\ell^q$ estimators for $0 < q < \infty$ attain similar generalization error bounds. These bounds are almost optimal in the sense that, up to a logarithmic factor, the upper and lower bounds are asymptotically identical. This finding tentatively reveals that, in some modeling contexts, the choice of $q$ might not have a strong impact on generalization capability. From this perspective, $q$ can be specified arbitrarily, or chosen by non-generalization criteria such as smoothness, computational complexity, or sparsity. Comment: 35 pages, 3 figures
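
    To make the setting concrete, the following is a minimal sketch of $\ell^q$-regularized kernel learning in an SDHS: the hypothesis is a kernel expansion over the sample points, and the coefficients are fit with an $\ell^q$ penalty. The Gaussian kernel, the plain (sub)gradient solver, and all parameter values are illustrative assumptions of mine, not the paper's construction, and the solver is only sensible for $q \ge 1$ (the nonconvex case $0 < q < 1$ needs specialized methods).

    import numpy as np

    def gaussian_kernel(X, Y, sigma=1.0):
        # Pairwise Gaussian kernel matrix K[i, j] = exp(-||x_i - y_j||^2 / (2 sigma^2)).
        d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))

    def lq_kernel_fit(X, y, q=1.5, lam=1e-2, lr=1e-2, steps=5000):
        # Fit f(x) = sum_i c_i K(x, x_i) by minimizing
        #   ||K c - y||^2 / n + lam * sum_i |c_i|^q
        # with plain (sub)gradient descent; a sketch, valid for q >= 1.
        n = len(y)
        K = gaussian_kernel(X, X)
        c = np.zeros(n)
        for _ in range(steps):
            grad = 2.0 / n * K.T @ (K @ c - y)
            grad += lam * q * np.sign(c) * np.abs(c) ** (q - 1)
            c -= lr * grad
        return c, K

    # Comparing q in {1, 1.5, 2} on the same noisy sample gives a feel for
    # how little the fit changes with q, in the spirit of the paper's claim.
    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, (40, 1))
    y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(40)
    for q in (1.0, 1.5, 2.0):
        c, K = lq_kernel_fit(X, y, q=q)
        print(q, np.mean((K @ c - y) ** 2))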

    A new efficient hyperelastic finite element model for graphene and its application to carbon nanotubes and nanocones

    A new hyperelastic material model is proposed for graphene-based structures, such as graphene, carbon nanotubes (CNTs) and carbon nanocones (CNCs). The proposed model is based on a set of invariants obtained from the right surface Cauchy-Green strain tensor and a structural tensor. The model is fully nonlinear and can simulate buckling and post-buckling behavior. It is calibrated from existing quantum data and implemented within a rotation-free isogeometric shell formulation. The model achieves a speedup of 1.5 relative to the finite element model of Ghaffari et al. [1], which is based on the logarithmic strain formulation of Kumar and Parks [2]. The material behavior is verified by testing uniaxial tension and pure shear. The performance of the material model is illustrated by several numerical examples, including bending, twisting, and wall contact of CNTs and CNCs. The wall contact is modeled with a coarse-grained contact model based on the Lennard-Jones potential. Buckling and post-buckling behavior is captured in the examples. The results are compared with reference results from the literature, and there is good agreement.
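
    For context on the wall-contact term, below is a minimal sketch of the 12-6 Lennard-Jones pair potential and its derivative, the quantities a coarse-grained contact model integrates over the interacting surfaces. The function name, units, and parameter values are illustrative placeholders of mine, not the calibration used in the paper.

    import numpy as np

    def lj_12_6(g, epsilon=2.39e-3, sigma=0.3415):
        # 12-6 Lennard-Jones potential phi(g) = 4 eps ((sigma/g)^12 - (sigma/g)^6)
        # and its derivative dphi/dg, as functions of the gap distance g
        # (illustrative units: eV and nm).
        g = np.asarray(g, dtype=float)
        sr6 = (sigma / g) ** 6
        phi = 4.0 * epsilon * (sr6 ** 2 - sr6)
        dphi = 4.0 * epsilon * (-12.0 * sr6 ** 2 + 6.0 * sr6) / g
        return phi, dphi

    # The equilibrium gap, where dphi/dg = 0, sits at g = 2**(1/6) * sigma.
    g_eq = 2 ** (1 / 6) * 0.3415
    print(lj_12_6(g_eq))  # phi = -epsilon there, dphi ~ 0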

    Fexprs as the basis of Lisp function application; or, $vau: the ultimate abstraction

    Abstraction creates custom programming languages that facilitate programming for specific problem domains. It is traditionally partitioned according to a two-phase model of program evaluation, into syntactic abstraction enacted at translation time, and semantic abstraction enacted at run time. Abstractions pigeon-holed into one phase cannot interact freely with those in the other, since they are required to occur at logically distinct times. Fexprs are a Lisp device that subsumes the capabilities of syntactic abstraction, but is enacted at run-time, thus eliminating the phase barrier between abstractions. Lisps of recent decades have avoided fexprs because of semantic ill-behavedness that accompanied fexprs in the dynamically scoped Lisps of the 1960s and 70s. This dissertation contends that the severe difficulties attendant on fexprs in the past are not essential, and can be overcome by judicious coordination with other elements of language design. In particular, fexprs can form the basis for a simple, well-behaved Scheme-like language, subsuming traditional abstractions without a multi-phase model of evaluation. The thesis is supported by a new Scheme-like language called Kernel, created for this work, in which each Scheme-style procedure consists of a wrapper that induces evaluation of operands, around a fexpr that acts on the resulting arguments. This arrangement enables Kernel to use a simple direct style of selectively evaluating subexpressions, in place of most Lisps' indirect quasiquotation style of selectively suppressing subexpression evaluation. The semantics of Kernel are treated through a new family of formal calculi, introduced here, called vau calculi. Vau calculi use direct subexpression-evaluation style to extend lambda calculus, eliminating a long-standing incompatibility between lambda calculus and fexprs that would otherwise trivialize their equational theories. The impure vau calculi introduce non-functional binding constructs and unconventional forms of substitution. This strategy avoids a difficulty of Felleisen's lambda-v-CS calculus, which modeled impure control and state using a partially non-compatible reduction relation, and therefore only approximated the Church-Rosser and Plotkin's Correspondence Theorems. The strategy here is supported by an abstract class of Regular Substitutive Reduction Systems, generalizing Klop's Regular Combinatory Reduction Systems.
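
    To illustrate the operative/applicative split described above, here is a minimal interpreter sketch, in Python for readability: an operative (fexpr) receives its operands unevaluated together with the caller's environment, while an applicative is a wrapper that first induces evaluation of the operands. The class and function names are my own illustration, not Kernel's actual API.

    class Operative:
        # A fexpr: fn receives the *unevaluated* operands and the dynamic env.
        def __init__(self, fn):
            self.fn = fn
        def combine(self, operands, env):
            return self.fn(operands, env)

    class Applicative:
        # A wrapper (cf. Kernel's wrap) that evaluates the operands, then
        # hands the resulting arguments to the underlying operative.
        def __init__(self, underlying):
            self.underlying = underlying
        def combine(self, operands, env):
            args = [evaluate(o, env) for o in operands]
            return self.underlying.combine(args, env)

    def evaluate(expr, env):
        if isinstance(expr, str):      # symbol: look it up
            return env[expr]
        if isinstance(expr, list):     # combination: evaluate operator only
            return evaluate(expr[0], env).combine(expr[1:], env)
        return expr                    # literal: self-evaluating

    # $if must be operative: it evaluates exactly one of its branches.
    def _if(operands, env):
        test, consequent, alternative = operands
        return evaluate(consequent if evaluate(test, env) else alternative, env)

    env = {
        "$if": Operative(_if),
        "+": Applicative(Operative(lambda args, env: args[0] + args[1])),
    }
    # The unused branch is never evaluated, so the unbound symbol is harmless:
    print(evaluate(["$if", True, ["+", 1, 2], "oops"], env))  # prints 3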

    On essentially non-oscillatory schemes on unstructured meshes: Analysis and implementation

    A few years ago, the class of Essentially Non-Oscillatory (ENO) schemes for the numerical simulation of hyperbolic equations and systems was constructed. Since then, some extensions have been made to multidimensional simulations of compressible flows, mainly in the context of very regular structured meshes. In this paper, we first recall and improve the results of an earlier paper about non-oscillatory reconstruction on unstructured meshes, emphasizing the effective calculation of the reconstruction. Then we describe a class of numerical schemes on unstructured meshes and give some applications of its third-order version. This demonstrates that a higher order of accuracy is indeed obtained, even on very irregular meshes.
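
    As a pointer to what the reconstruction step does, the sketch below shows third-order ENO reconstruction on a uniform 1D grid: the stencil is grown one cell at a time toward the side with the smaller difference, then the face value is built with the standard weights for the selected stencil. This 1D simplification, using undivided differences, is my illustration of the idea; the paper's reconstruction operates on unstructured meshes.

    def eno3_face_value(v, i):
        # Third-order ENO point value at the right face x_{i+1/2}, from
        # cell averages v on a uniform grid (needs 2 <= i <= len(v) - 3).
        left = i
        # Grow {i} to two cells: pick the side with the smaller 1st difference.
        if abs(v[left] - v[left - 1]) < abs(v[left + 1] - v[left]):
            left -= 1
        # Grow to three cells: compare 2nd differences of both candidates.
        d2_left = v[left + 1] - 2 * v[left] + v[left - 1]
        d2_right = v[left + 2] - 2 * v[left + 1] + v[left]
        if abs(d2_left) < abs(d2_right):
            left -= 1
        # Standard face-value weights for the three admissible stencils.
        coeffs = {
            0: (1 / 3, 5 / 6, -1 / 6),    # stencil {i, i+1, i+2}
            1: (-1 / 6, 5 / 6, 1 / 3),    # stencil {i-1, i, i+1}
            2: (1 / 3, -7 / 6, 11 / 6),   # stencil {i-2, i-1, i}
        }[i - left]
        return sum(c * v[left + k] for k, c in enumerate(coeffs))

    # Near a discontinuity the stencil leans away from the jump,
    # which is what suppresses spurious oscillations.
    v = [1, 1, 1, 1, 0, 0, 0, 0]
    print(eno3_face_value(v, 2), eno3_face_value(v, 5))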

    Developing an Automatic Generation Tool for Cryptographic Pairing Functions

    Pairing-Based Cryptography is receiving steadily more attention from industry, mainly because of the increasing interest in Identity-Based protocols. Although there are plenty of applications, efficiently implementing the pairing functions is often difficult, as it requires more knowledge than earlier cryptographic primitives. This thesis presents a tool for automatically generating optimized code for the pairing functions used in the construction of such cryptographic protocols. In the following pages, I present my work on the construction of pairing function code, its optimizations, and how the construction can be automated to ease the work of the protocol implementer. Based on the user requirements and the security level, the cryptographic compiler chooses and constructs the appropriate elliptic curve. It identifies the supported pairing function (the Tate, ate, R-ate or pairing lattice/optimal pairing) and its optimized parameters. Using artificial intelligence algorithms, it generates optimized code for the final exponentiation and for hashing a point to the required group, using the parametrisation of the chosen family of curves. Support for several multi-precision libraries has been incorporated: Magma, MIRACL and RELIC are already included, but more are possible.
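
    Much of the optimization such a tool performs targets the Miller loop common to the Tate, ate and R-ate pairings. The sketch below shows only the structure of Miller's algorithm on a toy short-Weierstrass curve over F_p, in Python for readability; it is my simplification, not the tool's generated code. Real pairing code works with points over an extension field F_{p^k}, eliminates the vertical-line denominators where possible, and applies the final exponentiation f^((p^k - 1)/r), all omitted here.

    def inv(x, p):
        return pow(x % p, p - 2, p)      # modular inverse (p prime)

    def ec_add(P, Q, a, p):
        # Point addition on y^2 = x^3 + a x + b; None is the point at infinity.
        if P is None: return Q
        if Q is None: return P
        (x1, y1), (x2, y2) = P, Q
        if x1 == x2 and (y1 + y2) % p == 0:
            return None
        if P == Q:
            lam = (3 * x1 * x1 + a) * inv(2 * y1, p) % p
        else:
            lam = (y2 - y1) * inv(x2 - x1, p) % p
        x3 = (lam * lam - x1 - x2) % p
        return (x3, (lam * (x1 - x3) - y1) % p)

    def line(P, Q, S, a, p):
        # Value at S of the line through P and Q (tangent at P if P == Q).
        (x1, y1), (xs, ys) = P, S
        if P != Q and x1 == Q[0]:
            return (xs - x1) % p         # vertical line
        if P == Q:
            lam = (3 * x1 * x1 + a) * inv(2 * y1, p) % p
        else:
            lam = (Q[1] - y1) * inv(Q[0] - x1, p) % p
        return (ys - y1 - lam * (xs - x1)) % p

    def vertical(T, S, p):
        return 1 if T is None else (S[0] - T[0]) % p

    def miller(P, Q, r, a, p):
        # Compute f_{r,P}(Q) by double-and-add over the bits of r
        # (r an odd prime, P of order r, so no mid-loop degeneracies).
        f, T = 1, P
        for bit in bin(r)[3:]:           # bits after the leading 1
            f = f * f % p * line(T, T, Q, a, p) % p
            T = ec_add(T, T, a, p)
            f = f * inv(vertical(T, Q, p), p) % p
            if bit == '1':
                f = f * line(T, P, Q, a, p) % p
                T = ec_add(T, P, a, p)
                f = f * inv(vertical(T, Q, p), p) % p
        return f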