Medical image segmentation using GPU-accelerated variational level set methods
Medical imaging techniques such as CT, MRI and X-ray imaging are a crucial component of modern diagnostics and treatment. As a result, many automated methods involving digital image processing have been developed for the medical field. Image segmentation is the process of finding the boundaries of one or more objects or regions of interest in an image. This thesis focuses on accelerating image segmentation for the localization of cancerous lung nodules in two-dimensional radiographs. This process is used during radiation treatment to minimize radiation exposure to healthy tissue. The variational level set method is used to segment out the lung nodules. This method represents an evolving segmentation boundary as the zero level set of a function on a two-dimensional grid. The calculus of variations is employed to minimize a set of energy equations and find the nodule's boundary. Although this approach is flexible, it comes at significant computational cost and cannot run in real time on a general-purpose workstation. Modern graphics processing units offer a high-performance platform for accelerating the variational level set method, which, in its simplest sense, consists of a large number of parallel computations over a grid. NVIDIA's CUDA framework for general-purpose computation on GPUs was used in conjunction with three different NVIDIA GPUs to reduce processing time by 11x--20x. This speedup was sufficient to allow real-time segmentation at moderate cost.
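The abstract does not include code, but the per-pixel update that the thesis parallelises on the GPU can be sketched on the CPU. Below is a minimal NumPy sketch of one explicit time step of a Chan-Vese-style variational level-set evolution; the energy terms, parameters (mu, lam1, lam2, dt, eps), and sign conventions are illustrative assumptions, not the thesis's exact formulation.

```python
import numpy as np

def level_set_step(phi, image, mu=0.2, lam1=1.0, lam2=1.0, dt=0.1, eps=1.0):
    """Advance the level-set function phi by one explicit gradient-descent step."""
    # Smoothed Dirac delta: concentrates the update near the zero level set.
    delta = (eps / np.pi) / (eps**2 + phi**2)

    # Region averages inside (phi > 0) and outside the evolving contour.
    inside = phi > 0
    c1 = image[inside].mean() if inside.any() else 0.0
    c2 = image[~inside].mean() if (~inside).any() else 0.0

    # Curvature term div(grad(phi)/|grad(phi)|) from central differences;
    # it regularises the boundary and keeps it smooth.
    gy, gx = np.gradient(phi)
    norm = np.sqrt(gx**2 + gy**2) + 1e-8
    curvature = np.gradient(gx / norm, axis=1) + np.gradient(gy / norm, axis=0)

    # Data terms pull the contour toward the object boundary.
    force = mu * curvature - lam1 * (image - c1)**2 + lam2 * (image - c2)**2
    return phi + dt * delta * force
```

Each pixel's update depends only on a small neighbourhood, which is why this kind of iteration maps so naturally onto a CUDA grid of threads.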
Identification of Stellar Flares Using Differential Evolution Template Optimization
We explore methods for the identification of stellar flare events in irregularly sampled data from ground-based time-domain surveys. In particular, we describe a new technique for identifying flaring stars, which we have implemented in a publicly available Python module called "PyVAN". The approach uses the Differential Evolution algorithm to optimize the parameters of empirically derived light-curve templates for different types of stars to fit a candidate light-curve. The difference of the likelihoods that these best-fit templates produced the observed data is then used to delineate targets that are well explained by a flare template but simultaneously poorly explained by templates of common contaminants. By testing on light-curves of known identity and morphology, we show that our technique is capable of recovering flaring status in of all light-curves containing a flare event above thresholds drawn to include of any contaminant population. By applying it to Palomar Transient Factory data, we show consistency with prior samples of flaring stars, and identify a small selection of candidate flaring G-type stars for possible follow-up.
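As a rough illustration of the template-fitting idea, the sketch below uses SciPy's differential_evolution to optimise a flare template against an irregularly sampled light curve and returns the best-fit log-likelihood, which could then be compared against competing templates. The template form (fast exponential rise, slower exponential decay), the parameter bounds, and the Gaussian error model are assumptions for illustration, not PyVAN's actual templates.

```python
import numpy as np
from scipy.optimize import differential_evolution

def flare_template(t, t0, amp, rise, decay):
    """Impulsive rise followed by an exponential decay, zero quiescent baseline."""
    model = np.zeros_like(t)
    pre = t < t0
    model[pre] = amp * np.exp((t[pre] - t0) / rise)      # fast rise toward the peak
    model[~pre] = amp * np.exp(-(t[~pre] - t0) / decay)  # slower decay after the peak
    return model

def neg_log_likelihood(params, t, flux, err):
    """Gaussian negative log-likelihood of the data given the template."""
    model = flare_template(t, *params)
    return 0.5 * np.sum(((flux - model) / err) ** 2)

def fit_flare(t, flux, err):
    # Bounds are illustrative; assumes a positive flux excursion.
    bounds = [(t.min(), t.max()),        # peak time t0
              (0.0, 10 * flux.max()),    # amplitude
              (1e-3, 0.1),               # rise timescale (days)
              (1e-3, 1.0)]               # decay timescale (days)
    result = differential_evolution(neg_log_likelihood, bounds,
                                    args=(t, flux, err), seed=0)
    return result.x, -result.fun  # best parameters, log-likelihood up to a constant

# A target is flagged when the flare template's likelihood exceeds those of
# contaminant templates (e.g. periodic variables) by a chosen threshold.
```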
LIDF: Layered intrusion detection framework for ad-hoc networks
Because ad-hoc networks have different characteristics from wired networks, intrusion detection techniques designed for wired networks are no longer sufficient or effective when applied directly to a wireless ad-hoc network. In this article, we first identify the security challenges in intrusion detection for ad-hoc networks and discuss related work on anomaly detection. We then propose a layered intrusion detection framework, which consists of collection, detection and alert modules that are handled by local agents. The collection, detection and alert modules are uniquely enabled with the main operations of ad-hoc networking, which are found at the OSI link and network layers. The proposed modules are based on interpolating polynomials and linear threshold schemes. An experimental evaluation of these modules shows their efficiency for several attack scenarios, such as route logic compromise, traffic pattern distortion and denial-of-service attacks.
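The abstract does not spell out its construction, but the canonical instance of "interpolating polynomials and linear threshold schemes" is Shamir-style (k, n) secret sharing, sketched below: a secret is embedded in a random degree-(k-1) polynomial over a prime field and recovered by Lagrange interpolation from any k shares. The prime and parameters are illustrative assumptions, not LIDF's exact scheme.

```python
import random

P = 2**61 - 1  # a Mersenne prime; the field size here is an assumption

def make_shares(secret, k, n):
    """Split `secret` into n shares, any k of which reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def poly(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the polynomial's constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P  # modular inverse via Fermat
    return secret

shares = make_shares(123456789, k=3, n=5)
assert reconstruct(shares[:3]) == 123456789  # any 3 of the 5 shares suffice
```

In an intrusion-detection setting, such a scheme lets detection decisions or keys be distributed across local agents so that no single compromised node can forge or suppress an alert.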
Attacker Behavior-Based Metric for Security Monitoring Applied to Darknet Analysis
Network traffic monitoring is essential to network operations and management for many purposes, such as Quality-of-Service or security. One major difficulty when dealing with network traffic data (packets, flows...) is the poor semantics of individual attributes (number of bytes, packets, IP addresses, protocol, TCP/UDP port number...). Many attributes can be represented as numerical values but cannot be mapped to a meaningful metric space. Most notable are application port numbers: they are numerical, but comparing them as integers is meaningless. In this paper, we propose a fine-grained, attacker behavior-based network port similarity metric that allows traffic analysis to take into account semantic relations between port numbers. The behavior of attackers is derived from passive observation of a darknet or telescope and aggregated in a graph model, from which a semantic dissimilarity function is defined. We demonstrate the effectiveness of this function on real-world network data by proactively blocking 99% of TCP scans.
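One simple way to realise the behaviour-based intuition is sketched below: ports probed by overlapping sets of darknet sources are treated as semantically close. Here similarity is a Jaccard index over the sets of source IPs observed per destination port, and dissimilarity is its complement; the paper's actual graph model and dissimilarity function differ in detail, so treat this as an assumption-laden illustration.

```python
from collections import defaultdict

def port_dissimilarity(flows):
    """flows: iterable of (src_ip, dst_port) pairs observed on the darknet."""
    sources_per_port = defaultdict(set)
    for src_ip, dst_port in flows:
        sources_per_port[dst_port].add(src_ip)

    def dissim(port_a, port_b):
        a, b = sources_per_port[port_a], sources_per_port[port_b]
        if not a or not b:
            return 1.0  # never observed: maximally dissimilar
        # 1 - Jaccard index of the attacker populations probing each port.
        return 1.0 - len(a & b) / len(a | b)

    return dissim

dissim = port_dissimilarity([("1.2.3.4", 22), ("1.2.3.4", 23), ("5.6.7.8", 22)])
print(dissim(22, 23))  # ports scanned by overlapping sources score closer to 0
```

Unlike integer distance between port numbers, this metric places ports like 22 and 23 close together only when attackers actually target them together.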
Pedestrian localisation for indoor environments
Ubiquitous computing systems aim to assist us as we go about our daily lives, whilst at the same time fading into the background so that we do not notice their presence. To do this they need to be able to sense their surroundings and infer context about the state of the world. Location has proven to be an important source of contextual information for such systems. If a device can determine its own location then it can infer its surroundings and adapt accordingly.
Of particular interest for many ubiquitous computing systems is the ability to track people in indoor environments. This interest has led to the development of many indoor location systems based on a range of technologies including infra-red light, ultrasound and radio. Unfortunately, existing systems that achieve the kind of sub-metre accuracy desired by many location-aware applications require large amounts of infrastructure to be installed in the environment.
This thesis investigates an alternative approach to indoor pedestrian tracking that uses on-body inertial sensors rather than relying on fixed infrastructure. It is demonstrated that general-purpose inertial navigation algorithms are unsuitable for pedestrian tracking due to the rapid accumulation of errors in the tracked position. In practice it is necessary to frequently correct such algorithms using additional measurements or constraints. An extended Kalman filter is developed for this purpose and is applied to track pedestrians using foot-mounted inertial sensors. By detecting when the foot is stationary and applying zero-velocity corrections, a pedestrian's relative movements can be tracked far more accurately than is possible using uncorrected inertial navigation.
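The core of the zero-velocity update (ZUPT) idea can be sketched without the full filter: integrate acceleration to velocity, detect stance phases from near-gravity acceleration and low angular rate, and reset the velocity to zero there. The thesis applies the correction inside an extended Kalman filter, which also estimates sensor biases; the plain integrator below, with illustrative thresholds and world-frame samples assumed, only shows why the correction bounds error growth.

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def is_stationary(accel, gyro, acc_tol=0.3, gyro_tol=0.2):
    """Foot flat on the ground: |a| close to g and very little rotation."""
    return (abs(np.linalg.norm(accel) - G) < acc_tol
            and np.linalg.norm(gyro) < gyro_tol)

def track_velocity(accels, gyros, dt):
    """accels, gyros: (N, 3) world-frame samples; returns (N, 3) velocity estimates."""
    gravity = np.array([0.0, 0.0, G])
    v = np.zeros(3)
    velocities = []
    for a, w in zip(accels, gyros):
        if is_stationary(a, w):
            v = np.zeros(3)               # zero-velocity correction at each footfall
        else:
            v = v + (a - gravity) * dt    # dead-reckoning integration step
        velocities.append(v)
    return np.array(velocities)
```

Without the reset, accelerometer noise and bias integrate into unbounded velocity drift; with it, the error is re-zeroed at every step the foot takes.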
Having developed an effective means of calculating a pedestrian’s relative movements, a localisation filter is developed that combines relative movement measurements with environmental constraints derived from a map of the environment. By enforcing constraints such as impassable walls and floors the filter is able to narrow down the absolute position of a pedestrian as they move through an indoor environment. Once the user’s position has been uniquely determined the same filter is demonstrated to track the user’s absolute position to sub-metre accuracy.
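A common way to implement such a map-constrained localisation filter is with position hypotheses (particles) that are propagated by each measured step and eliminated when they cross a wall, as in the sketch below. The thesis's filter may differ in representation; the `crosses_wall` predicate over a floor plan is assumed rather than implemented here.

```python
import numpy as np

def particle_step(particles, weights, step, walls, crosses_wall, noise=0.05):
    """particles: (N, 2) positions; step: measured 2-D displacement for this stride."""
    moved = particles + step + np.random.normal(0.0, noise, particles.shape)
    # Impassable-wall constraint: discard hypotheses that walk through walls.
    for i, (old, new) in enumerate(zip(particles, moved)):
        if crosses_wall(old, new, walls):
            weights[i] = 0.0
    total = weights.sum()
    if total == 0:
        raise RuntimeError("all hypotheses eliminated; re-seed the filter")
    return moved, weights / total
```

In a symmetric building several particle clusters can survive for a long time, which is exactly the ambiguity that the assisted localisation described next is designed to break.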
The localisation filter in its simplest form is computationally expensive. Furthermore, symmetry exhibited by the environment may delay or prevent the filter from determining the user's position. The final part of this thesis describes the concept of assisted localisation, in which additional measurements are used to solve both of these problems. The use of sparsely deployed WiFi access points is discussed in detail.
The thesis concludes that inertial sensors can be used to track pedestrians in indoor environments. Such an approach is suited to cases in which it is impossible or impractical to install large amounts of fixed infrastructure into the environment in advance.
Paving the Path for Heterogeneous Memory Adoption in Production Systems
Systems from smartphones to data-centers to supercomputers are increasingly heterogeneous, comprising various memory technologies and core types. Heterogeneous memory systems provide an opportunity to suitably match varying memory access patterns in applications, reducing CPU time and thus increasing performance per dollar, which can yield aggregate savings of millions of dollars in large-scale systems. However, with increased provisioning of main memory capacity per machine and differences in memory characteristics (for example, bandwidth, latency, cost, and density), memory management in such heterogeneous memory systems poses multi-fold challenges for system programmability and design.
In this thesis, we tackle memory management for two heterogeneous memory systems: (a) CPU-GPU systems with a unified virtual address space, and (b) cloud computing platforms that can deploy cheaper but slower memory technologies alongside DRAM to reduce the cost of memory in data-centers. First, we show that operating systems do not have sufficient information to optimally manage pages in bandwidth-asymmetric systems and thus fail to maximize bandwidth to massively-threaded GPU applications, sacrificing GPU throughput. We present BW-AWARE placement/migration policies that give the OS the information it needs to make optimal data management decisions. Second, we present a CPU-GPU cache coherence design in which the CPU and GPU need not implement the same cache coherence protocol yet still provide a cache-coherent memory interface to the programmer. Our proposal is the first practical approach to provide a unified, coherent CPU–GPU address space without requiring hardware cache coherence, with the potential to enable an explosion in algorithms that leverage tightly coupled CPU–GPU coordination.
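The intuition behind bandwidth-aware placement can be shown with a small calculation: for a bandwidth-bound GPU workload, pages should be spread across the memories roughly in proportion to each memory's bandwidth rather than placed solely in the "fast" one, so that both memories stream data concurrently. The sketch below is this back-of-the-envelope rule only; the bandwidth figures are illustrative, not measurements from the thesis.

```python
def placement_ratio(bw_gpu_mem_gbs: float, bw_cpu_mem_gbs: float):
    """Fraction of pages to place in each memory so both can be saturated."""
    total = bw_gpu_mem_gbs + bw_cpu_mem_gbs
    return bw_gpu_mem_gbs / total, bw_cpu_mem_gbs / total

# Example: 200 GB/s GPU-attached memory alongside 80 GB/s CPU-attached memory.
gpu_frac, cpu_frac = placement_ratio(200.0, 80.0)
print(f"place ~{gpu_frac:.0%} of pages in GPU memory, ~{cpu_frac:.0%} in CPU memory")
```

Placing every page in the 200 GB/s memory caps aggregate bandwidth at 200 GB/s, while the proportional split lets the workload draw on roughly the sum of both.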
Finally, to reduce the cost of memory in cloud platforms, where the trend has been to map datasets in memory, we make a case for a two-tiered memory system in which cheaper (per bit) memories, such as Intel/Micron's 3D XPoint, are deployed alongside DRAM. We present Thermostat, an application-transparent, huge-page-aware software mechanism for placing pages in a dual-technology hybrid memory system that achieves both the cost advantages of two-tiered memory and the performance advantages of transparent huge pages. With Thermostat's ability to control application slowdown on a per-application basis, cloud providers can realize cost savings from upcoming cheaper memory technologies by shifting infrequently accessed cold data to slow memory while satisfying the throughput demands of their customers.
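The per-application slowdown control can be illustrated with a simple greedy policy: estimate the stall-time cost of demoting each page to slow memory from its access rate, then demote the coldest pages until a slowdown budget is exhausted. The cost model and numbers below are illustrative assumptions; Thermostat itself operates at huge-page granularity with kernel-level access sampling.

```python
def choose_cold_pages(page_access_rates, extra_latency_s, slowdown_budget):
    """page_access_rates: {page_id: accesses/sec}.
    extra_latency_s: added latency per access in slow memory (seconds).
    slowdown_budget: tolerated slowdown fraction, e.g. 0.03 for 3%."""
    demoted, projected = [], 0.0
    # Coldest pages first: they add the least stall time when demoted.
    for page, rate in sorted(page_access_rates.items(), key=lambda kv: kv[1]):
        cost = rate * extra_latency_s  # added stall time per second of run time
        if projected + cost > slowdown_budget:
            break
        demoted.append(page)
        projected += cost
    return demoted

# Example: demote pages while keeping projected slowdown under 3%,
# assuming slow memory adds ~400 ns per access.
pages = {1: 5.0, 2: 0.1, 3: 900.0, 4: 2.5}
print(choose_cold_pages(pages, extra_latency_s=400e-9, slowdown_budget=0.03))
```

The hot page (id 3) stays in DRAM while the cold pages move to the cheap tier, which is where the cost savings come from.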