
    3D printing of gas jet nozzles for laser-plasma accelerators

    Recent results on laser wakefield acceleration in tailored plasma channels have underlined the importance of controlling the density profile of the gas target. In particular, it was reported that appropriate density tailoring can result in improved injection, acceleration and collimation of laser-accelerated electron beams. To achieve such profiles, innovative target designs are required. For this purpose we have reviewed the use of additive layer manufacturing, commonly known as 3D printing, to produce gas jet nozzles. Notably, we have compared the performance of two industry-standard techniques, namely selective laser sintering (SLS) and stereolithography (SLA). Furthermore, we have used the common fused deposition modeling (FDM) to reproduce basic gas jet designs and used SLA and SLS for more sophisticated nozzle designs. The nozzles are characterized interferometrically and used for electron acceleration experiments with the Salle Jaune terawatt laser at Laboratoire d'Optique Appliquée.

    Towards Vulnerability Discovery Using Staged Program Analysis

    Eliminating vulnerabilities from low-level code is vital for securing software. Static analysis is a promising approach for discovering vulnerabilities since it can provide developers early feedback on the code they write. However, it presents multiple challenges, not the least of which is understanding what makes a bug exploitable and conveying this information to the developer. In this paper, we present the design and implementation of a practical vulnerability assessment framework, called Melange. Melange performs data and control flow analysis to diagnose potential security bugs, and outputs well-formatted bug reports that help developers understand and fix security bugs. Based on the intuition that real-world vulnerabilities manifest themselves across multiple parts of a program, Melange performs both local and global analyses. To scale up to large programs, global analysis is demand-driven. Our prototype detects multiple vulnerability classes in C and C++ code, including type confusion and garbage memory reads. We have evaluated Melange extensively. Our case studies show that Melange scales up to large codebases such as Chromium, is easy to use, and, most importantly, is capable of discovering vulnerabilities in real-world code. Our findings indicate that static analysis is a viable reinforcement to the software testing tool set.
    Comment: A revised version to appear in the proceedings of the 13th Conference on Detection of Intrusions and Malware & Vulnerability Assessment (DIMVA), July 2016
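    The abstract names type confusion and garbage memory reads among the detected vulnerability classes but gives no code. The C++ fragment below is a minimal, hypothetical illustration of the kind of bug such data- and control-flow analysis is meant to flag; it is not taken from Melange or its evaluation, and all names in it are invented for the sketch.

        #include <iostream>

        // Two unrelated record types sharing a common base with a 'kind' tag.
        struct Shape  { int kind; };
        struct Circle : Shape { double radius; };
        struct Label  : Shape { const char *text; };

        // Trusts the tag without verifying it matches the object's real type:
        // the classic type-confusion pattern a static checker would flag.
        void print_area(Shape *s) {
            if (s->kind == 0) {                       // caller claims "Circle"
                Circle *c = static_cast<Circle *>(s); // unchecked downcast
                std::cout << 3.14159 * c->radius * c->radius << "\n";
            }
        }

        int main() {
            Label l;
            l.kind = 0;                 // wrong tag: the object is actually a Label
            l.text = "not a circle";
            print_area(&l);             // reinterprets the pointer bytes as a double
                                        // (type confusion / garbage memory read)
            return 0;
        }

    A whole-program (global) analysis matters here because the bad tag is set in one function and consumed in another, which is the cross-procedural pattern the abstract alludes to.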

    Self Piercing Riveting for Metal-Polymer Joints

    Self-Piercing Riveting (SPR) is a sheet metal joining technique based on the insertion of a rivet into two or more sheets, with no preparatory hole. This process has gained wide diffusion in the automotive industry due to the increasing use of materials alternative to steel, which are difficult or impossible to join with traditional techniques. In particular, polymeric materials are increasingly used, owing to their favorable weight-to-strength ratio. This paper reports the results of experimental investigations aimed at identifying the variables affecting the mechanical characteristics of mixed metal-plastic joints. A statistical model for the optimization of the geometrical parameters has been computed. The paper demonstrates that self-piercing riveting appears competitive for metal/polymer junctions. The results, analyzed in light of statistical techniques, show that some geometrical parameters affect joint performance more than others and can therefore be used as independent variables for joint performance optimization.
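    The abstract mentions a statistical model for optimizing the geometrical parameters without specifying its form. A common choice for this kind of study would be a second-order response-surface fit; the formula below is purely a sketch with assumed symbols, not the model actually computed in the paper.

        F \;=\; \beta_0 \;+\; \sum_i \beta_i x_i \;+\; \sum_i \beta_{ii} x_i^2
              \;+\; \sum_{i<j} \beta_{ij} x_i x_j \;+\; \varepsilon

    Here F would be a joint performance measure such as shear strength, the x_i the geometrical parameters (for example rivet length or die depth), and the fitted beta coefficients would indicate which parameters dominate joint performance.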

    A deep level set method for image segmentation

    This paper proposes a novel image segmentation approach that integrates fully convolutional networks (FCNs) with a level set model. Compared with an FCN, the integrated method can incorporate smoothing and prior information to achieve an accurate segmentation. Furthermore, rather than using the level set model as a post-processing tool, we integrate it into the training phase to fine-tune the FCN. This allows the use of unlabeled data during training in a semi-supervised setting. Using two types of medical imaging data (liver CT and left ventricle MRI data), we show that the integrated method achieves good performance even when little training data is available, outperforming the FCN or the level set model alone.
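    The abstract does not state the evolution equation used to couple the FCN and the level set. As a rough sketch only, a region-based (Chan-Vese style) level set refines a segmentation by evolving an embedding function phi according to an equation of the form

        \frac{\partial \phi}{\partial t}
          = \delta_\varepsilon(\phi)\left[
              \mu \,\mathrm{div}\!\left(\frac{\nabla \phi}{|\nabla \phi|}\right)
              - \lambda_1 \,(I - c_1)^2 + \lambda_2 \,(I - c_2)^2
            \right]

    where I could be the FCN's probability map, c_1 and c_2 the mean values inside and outside the zero level set, and mu, lambda_1, lambda_2 weights balancing smoothing against data fidelity. This particular form is an assumption for illustration, not necessarily the formulation used in the paper.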

    Search strategies of Wikipedia readers

    The quest for information is one of the most common activities of human beings. Despite the impressive progress of search engines, finding the needed piece of information can still be very hard, as can acquiring specific competences and knowledge by shaping and following the proper learning paths. Indeed, the need to find sensible paths in information networks is one of the biggest challenges of our societies and, to address it effectively, it is important to investigate the strategies adopted by human users to cope with the cognitive bottleneck of finding their way in a growing sea of information. Here we focus on the case of Wikipedia and investigate a recently released dataset about users’ clicks on the English Wikipedia, namely the English Wikipedia Clickstream. We perform a semantically charged analysis to uncover the general patterns followed by information seekers in the multi-dimensional space of Wikipedia topics/categories. We discover the existence of well-defined strategies in which users tend to start from very general, i.e., semantically broad, pages and progressively narrow down the scope of their navigation, while keeping a growing semantic coherence. This is unlike strategies associated with tasks that have predefined search goals, as in the Wikispeedia game, where users first move from the ‘particular’ to the ‘universal’ before focusing down again on the required target. The clear picture offered here represents an important stepping stone towards a better design of information networks and recommendation strategies, as well as the construction of radically new learning paths.

    How large should whales be?

    The evolution and distribution of species body sizes for terrestrial mammals is well explained by a macroevolutionary tradeoff between short-term selective advantages and long-term extinction risks from increased species body size, unfolding above the 2g minimum size induced by thermoregulation in air. Here, we consider whether this same tradeoff, formalized as a constrained convection-reaction-diffusion system, can also explain the sizes of fully aquatic mammals, which have not previously been considered. By replacing the terrestrial minimum with a pelagic one, at roughly 7000g, the terrestrial mammal tradeoff model accurately predicts, with no tunable parameters, the observed body masses of all extant cetacean species, including the 175,000,000g Blue Whale. This strong agreement between theory and data suggests that a universal macroevolutionary tradeoff governs body size evolution for all mammals, regardless of their habitat. The dramatic sizes of cetaceans can thus be attributed mainly to the increased convective heat loss in water, which shifts the species size distribution upward and pushes its right tail into ranges inaccessible to terrestrial mammals. Under this macroevolutionary tradeoff, the largest expected species occurs where the rate at which smaller-bodied species move up into large-bodied niches approximately equals the rate at which extinction removes them.
    Comment: 7 pages, 3 figures, 2 data tables
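    The abstract calls the model a constrained convection-reaction-diffusion system but does not write it out. A generic form of such a body-size macroevolution model, given here as a sketch under assumed notation rather than the paper's exact parameterization, evolves the density phi(x,t) of species over log body mass x as

        \frac{\partial \phi}{\partial t}
          = -\frac{\partial}{\partial x}\big[v(x)\,\phi\big]
            + \frac{\partial^2}{\partial x^2}\big[D(x)\,\phi\big]
            - k(x)\,\phi ,
        \qquad \phi(x_{\min}, t) = 0 ,

    where v(x) captures the short-term selective drift toward larger sizes, D(x) the spread of descendant sizes, k(x) the size-dependent extinction rate, and the absorbing boundary x_min is the thermoregulatory minimum (about 2g on land, about 7000g in water in the abstract's terms). In this picture the largest expected species sits where the upward flux into large-size niches balances removal by extinction, as the abstract states.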