
    Evasion-resistant network scan detection


    Medical image segmentation using GPU-accelerated variational level set methods

    Medical imaging techniques such as CT, MRI, and X-ray imaging are a crucial component of modern diagnostics and treatment. As a result, many automated methods involving digital image processing have been developed for the medical field. Image segmentation is the process of finding the boundaries of one or more objects or regions of interest in an image. This thesis focuses on accelerating image segmentation for the localization of cancerous lung nodules in two-dimensional radiographs, a process used during radiation treatment to minimize radiation exposure to healthy tissue. The variational level set method is used to segment the lung nodules. This method represents an evolving segmentation boundary as the zero level set of a function defined on a two-dimensional grid; the calculus of variations is employed to minimize a set of energy equations and locate the nodule's boundary. Although this approach is flexible, it carries significant computational cost and cannot run in real time on a general-purpose workstation. Modern graphics processing units offer a high-performance platform for accelerating the variational level set method, which, in its simplest sense, consists of a large number of parallel computations over a grid. NVIDIA's CUDA framework for general-purpose computation on GPUs was used in conjunction with three different NVIDIA GPUs to reduce processing time by 11x--20x. This speedup was sufficient to allow real-time segmentation at moderate cost.
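    The grid-parallel structure that makes the method amenable to GPU acceleration can be illustrated with a minimal NumPy sketch of one explicit level-set update step (the thesis's actual energy terms and CUDA kernels are not reproduced; the speed field and time step below are illustrative assumptions):

```python
import numpy as np

def level_set_step(phi, speed, dt=0.1):
    """One explicit update of the level-set function phi on a 2D grid.

    phi   : 2D array whose zero level set is the evolving contour
    speed : 2D force term (in practice derived from the image energies)
    """
    # Central-difference gradients along each axis
    gy, gx = np.gradient(phi)
    grad_mag = np.sqrt(gx**2 + gy**2)
    # Every grid point is updated independently -- exactly the data
    # parallelism that maps onto one GPU thread per pixel.
    return phi + dt * speed * grad_mag

# Toy example: signed distance to a circle of radius 8; a positive
# speed raises phi, so the zero-level contour shrinks slightly.
y, x = np.mgrid[0:64, 0:64]
phi = np.sqrt((x - 32.0)**2 + (y - 32.0)**2) - 8.0
phi = level_set_step(phi, speed=np.ones_like(phi))
```

    On a GPU, the same per-pixel update would run as a kernel over the grid, which is why the method parallelizes so well.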

    Identification of Stellar Flares Using Differential Evolution Template Optimization

    We explore methods for the identification of stellar flare events in irregularly sampled data from ground-based time-domain surveys. In particular, we describe a new technique for identifying flaring stars, which we have implemented in a publicly available Python module called "PyVAN". The approach uses the differential evolution algorithm to optimize the parameters of empirically derived light-curve templates for different types of stars to fit a candidate light curve. The difference of the likelihoods that these best-fit templates produced the observed data is then used to delineate targets that are well explained by a flare template but simultaneously poorly explained by templates of common contaminants. By testing on light curves of known identity and morphology, we show that our technique is capable of recovering flaring status in 69% of all light curves containing a flare event, above thresholds drawn to include <1% of any contaminant population. By applying the technique to Palomar Transient Factory data, we show consistency with prior samples of flaring stars, and identify a small selection of candidate flaring G-type stars for possible follow-up.
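    As a rough illustration of the template-fitting idea (this is not PyVAN's actual template set or likelihood; the exponential-decay flare shape, noise level, and parameter bounds below are assumptions), SciPy's differential evolution optimizer can fit a toy flare template to a synthetic, irregularly sampled light curve:

```python
import numpy as np
from scipy.optimize import differential_evolution

def flare_template(t, t0, amp, tau):
    """Simplified empirical flare shape: instant rise at t0, then
    exponential decay with timescale tau."""
    f = np.zeros_like(t)
    after = t >= t0
    f[after] = amp * np.exp(-(t[after] - t0) / tau)
    return f

def neg_log_like(params, t, flux, sigma):
    """Gaussian negative log-likelihood of the template given the data."""
    model = flare_template(t, *params)
    return 0.5 * np.sum(((flux - model) / sigma) ** 2)

# Irregularly sampled synthetic light curve with one injected flare
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 10.0, 80))
flux = flare_template(t, 4.0, 1.0, 0.5) + rng.normal(0.0, 0.05, t.size)

bounds = [(0.0, 10.0), (0.1, 2.0), (0.05, 2.0)]  # t0, amp, tau
result = differential_evolution(
    neg_log_like, bounds, args=(t, flux, 0.05), seed=1, tol=1e-7)
```

    In the paper's scheme, the same fit would be repeated with templates for contaminant classes, and the likelihood differences used to flag flare candidates.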

    Attacker Behavior-Based Metric for Security Monitoring Applied to Darknet Analysis

    Network traffic monitoring is essential to network operations and management for purposes such as quality of service and security. One major difficulty when dealing with network traffic data (packets, flows, etc.) is the poor semantics of individual attributes (number of bytes, packets, IP addresses, protocol, TCP/UDP port number, etc.). Many attributes can be represented as numerical values but cannot be mapped to a meaningful metric space. Most notable are application port numbers: they are numerical, but comparing them as integers is meaningless. In this paper, we propose a fine-grained, attacker behavior-based network port similarity metric that allows traffic analysis to take semantic relations between port numbers into account. The behavior of attackers is derived from passive observation of a darknet (network telescope) and aggregated into a graph model, from which a semantic dissimilarity function is defined. We demonstrate the effectiveness of this function on real-world network data, proactively blocking 99% of TCP scans.
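    The graph-based construction can be sketched under a simplified assumption: two ports are behaviorally similar when the same attacking sources probe both. The probe events, port numbers, and cosine-based dissimilarity below are illustrative stand-ins, not the paper's actual aggregation or metric:

```python
from collections import defaultdict
import math

# Toy darknet observations: (source_ip, destination_port) probe events.
# Attackers scanning related services tend to touch the same ports.
probes = [
    ("10.0.0.1", 80), ("10.0.0.1", 443), ("10.0.0.1", 8080),
    ("10.0.0.2", 80), ("10.0.0.2", 443),
    ("10.0.0.3", 22), ("10.0.0.3", 23),
    ("10.0.0.4", 22), ("10.0.0.4", 23), ("10.0.0.4", 2323),
]

# Per-port set of attacking sources (the nodes' neighborhoods in a
# bipartite source-port graph)
sources = defaultdict(set)
for ip, port in probes:
    sources[port].add(ip)

def dissimilarity(p, q):
    """1 - cosine similarity of the ports' attacker sets: ports probed
    by the same attackers are 'close'; unrelated ports are distant."""
    a, b = sources[p], sources[q]
    if not a or not b:
        return 1.0
    return 1.0 - len(a & b) / math.sqrt(len(a) * len(b))

# Here 80 and 443 are probed by the same sources -> dissimilarity 0,
# while 80 and 22 share no attackers -> maximal dissimilarity 1.
```

    A dissimilarity of this form gives the integer port space a meaningful geometry: 80 and 443 end up close because attackers treat them as related web services, regardless of their numeric distance.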

    Paving the Path for Heterogeneous Memory Adoption in Production Systems

    Systems from smartphones to data centers to supercomputers are increasingly heterogeneous, comprising various memory technologies and core types. Heterogeneous memory systems provide an opportunity to suitably match varying memory access patterns in applications, reducing CPU time and thus increasing performance per dollar, resulting in aggregate savings of millions of dollars in large-scale systems. However, with increased provisioning of main memory capacity per machine and differences in memory characteristics (for example, bandwidth, latency, cost, and density), memory management in such heterogeneous memory systems poses multi-fold challenges for system programmability and design. In this thesis, we tackle memory management in two heterogeneous memory systems: (a) CPU-GPU systems with a unified virtual address space, and (b) cloud computing platforms that can deploy cheaper but slower memory technologies alongside DRAM to reduce the cost of memory in data centers. First, we show that operating systems do not have sufficient information to optimally manage pages in bandwidth-asymmetric systems and thus fail to maximize bandwidth to massively threaded GPU applications, sacrificing GPU throughput. We present BW-AWARE placement/migration policies to help the OS make optimal data management decisions. Second, we present a CPU-GPU cache coherence design in which the CPU and GPU need not implement the same cache coherence protocol but still provide a cache-coherent memory interface to the programmer. Our proposal is the first practical approach to provide a unified, coherent CPU–GPU address space without requiring hardware cache coherence, with the potential to enable an explosion in algorithms that leverage tightly coupled CPU–GPU coordination.
Finally, to reduce the cost of memory in cloud platforms, where the trend has been to map datasets in memory, we make a case for a two-tiered memory system in which cheaper (per bit) memories, such as Intel/Micron's 3D XPoint, are deployed alongside DRAM. We present Thermostat, an application-transparent, huge-page-aware software mechanism that places pages in a dual-technology hybrid memory system while achieving both the cost advantages of two-tiered memory and the performance advantages of transparent huge pages. With Thermostat's capability to control application slowdown on a per-application basis, cloud providers can realize cost savings from upcoming cheaper memory technologies by shifting infrequently accessed cold data to slow memory while satisfying customers' throughput demands.
    Ph.D., Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies
    https://deepblue.lib.umich.edu/bitstream/2027.42/137052/1/nehaag_1.pd
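    The core idea of placing cold pages in slow memory subject to a slowdown budget can be caricatured with a simple greedy sketch (the page IDs, per-access penalty, and budget below are illustrative assumptions, not Thermostat's actual sampling or placement policy):

```python
def place_pages(access_counts, slow_penalty_ns, budget_ns):
    """Greedy hot/cold split: move the coldest pages to slow memory
    until the estimated extra access latency reaches the slowdown
    budget for the sampling interval.

    access_counts  : dict page_id -> accesses per sampling interval
    slow_penalty_ns: extra latency per access to slow memory
    budget_ns      : total extra latency tolerated per interval
    Returns the set of page ids assigned to slow memory.
    """
    cold = set()
    cost = 0
    # Coldest pages first: they contribute the least slowdown per
    # byte of expensive DRAM freed.
    for page, count in sorted(access_counts.items(), key=lambda kv: kv[1]):
        extra = count * slow_penalty_ns
        if cost + extra > budget_ns:
            break
        cold.add(page)
        cost += extra
    return cold

# Example: four pages, 1000 ns extra per slow-memory access,
# and a 5000 ns total slowdown budget per interval.
counts = {"A": 0, "B": 2, "C": 3, "D": 500}
cold_pages = place_pages(counts, slow_penalty_ns=1000, budget_ns=5000)
```

    The hot page "D" stays in DRAM because demoting it alone would blow the budget, which mirrors the per-application slowdown control described above.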