Balancing Interactive Performance and Budgeted Resources in Mobile Computing.
In this dissertation, we explore the various limited resources involved in mobile applications --- battery energy, cellular data usage, and, critically, user attention --- and we devise principled methods for managing the tradeoffs involved in creating a good user experience. Building quality mobile applications requires developers to understand complex interactions between network usage, performance, and resource consumption. Because of this
difficulty, developers commonly choose simple but suboptimal approaches that strictly prioritize performance or resource conservation.
These extremes are symptoms of a lack of system-provided abstractions for managing the complexity of performance/resource tradeoffs. By providing abstractions that help applications manage these tradeoffs, mobile systems can significantly improve user-visible performance without exhausting resource budgets. This dissertation explores three such abstractions in detail. We first present Intentional
Networking, a system that provides synchronization primitives and intelligent scheduling for multi-network traffic. Next, we present Informed Mobile Prefetching, a system that helps applications decide when to prefetch data and how aggressively to spend limited battery energy and cellular data resources toward that end. Finally, we present Meatballs, a library that helps applications consider the cloudy nature of predictions when making decisions, selectively employing redundancy to mitigate uncertainty and provide more
reliable performance. Overall, experiments show that these abstractions can significantly reduce interactive delay without overspending the available energy and data resources.
Ph.D. dissertation, Computer Science and Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/108956/1/brettdh_1.pd
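The prefetching decision described above comes down to weighing expected delay savings against budgeted resource spend. A minimal, illustrative sketch (not Informed Mobile Prefetching's actual algorithm; the weights and the hit-probability estimate are assumed placeholders):

```python
def should_prefetch(hit_prob, fetch_delay_s, energy_cost_j, data_cost_mb,
                    energy_weight=0.5, data_weight=0.001):
    """Prefetch only when the expected interactive-delay savings
    outweigh the weighted energy and cellular-data spend.

    hit_prob: estimated probability the user will request this item.
    The weights convert joules and megabytes into delay-equivalent
    seconds; a real system would derive them from remaining budgets.
    """
    expected_benefit = hit_prob * fetch_delay_s
    expected_cost = energy_weight * energy_cost_j + data_weight * data_cost_mb
    return expected_benefit > expected_cost

likely = should_prefetch(0.9, 2.0, 1.0, 5.0)    # high hit probability, slow fetch
unlikely = should_prefetch(0.1, 2.0, 1.0, 5.0)  # speculative, not worth the spend
```

In a real system the weights would tighten as the energy or data budget runs low, making the policy more conservative over time.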
Influence Maximization in Social Networks: A Survey
Online social networks have become an important platform for people to
communicate, share knowledge and disseminate information. Given the widespread
usage of social media, individuals' ideas, preferences and behavior are often
influenced by their peers or friends in the social networks that they
participate in. Over the last decade, the influence maximization (IM) problem
has been extensively adopted to model the diffusion of innovations and ideas.
The purpose of IM is to select a set of k seed nodes that can influence the
largest number of individuals in the network.
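The seed-selection objective can be made concrete with the classic greedy algorithm under the independent cascade model (the standard Kempe-Kleinberg-Tardos approach; this sketch uses a plain adjacency-dict graph and Monte Carlo spread estimation, with illustrative parameters):

```python
import random

def simulate_ic(graph, seeds, p=0.1, rng=random):
    """One independent-cascade run: each newly activated node tries
    once to activate each out-neighbor with probability p."""
    active, frontier = set(seeds), list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in graph.get(u, []):
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return len(active)

def greedy_im(graph, k, p=0.1, trials=100, seed=0):
    """Greedy seed selection: repeatedly add the node with the largest
    estimated marginal gain in expected spread."""
    rng = random.Random(seed)
    seeds = []
    nodes = set(graph) | {v for vs in graph.values() for v in vs}
    for _ in range(k):
        base = sum(simulate_ic(graph, seeds, p, rng) for _ in range(trials))
        best, best_gain = None, -1.0
        for u in sorted(nodes - set(seeds)):
            spread = sum(simulate_ic(graph, seeds + [u], p, rng) for _ in range(trials))
            if spread - base > best_gain:
                best, best_gain = u, spread - base
        seeds.append(best)
    return seeds

star = {0: [1, 2, 3], 1: [], 2: [], 3: []}
seeds = greedy_im(star, k=1, p=0.5)  # the hub is the best single seed
```

Because the spread function is monotone and submodular under the independent cascade model, this greedy scheme carries the well-known (1 - 1/e) approximation guarantee.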
In this survey, we present a systematic study of the research and future
directions for the IM problem. We review the information diffusion models and
analyze a variety of algorithms for the classic IM problem. We propose a
taxonomy to help readers understand the key techniques and challenges. We also
organize the milestone works in chronological order so that readers can follow
the research roadmap of this field. Moreover, we categorize other
application-oriented IM studies and examine each of them in turn. Finally, we
list a series of open questions as future directions for IM-related research,
so that readers can easily see what remains to be done in this field.
Automated treatment planning for high-intensity focused ultrasound guided by magnetic resonance thermometry
Magnetic Resonance guided High Intensity Focused Ultrasound (MR-HIFU) is a noninvasive medical procedure for localized tissue heating, used mostly in the treatment of tumours. The modality utilizes focused ultrasound to raise the temperature of the tumour tissue in small localized volumes, resulting in necrosis. To ablate the whole tumour, several of these sonication cells are needed. Planning the positions of the cells, while taking into consideration all safety aspects of the treatment, is a time-consuming and monotonous task that nonetheless requires expertise and precision. Furthermore, due to the complex characteristics of an MR-HIFU treatment, it is difficult to optimize manually.
The aim of the thesis was to design an outline for an automated treatment planning algorithm for MR-HIFU, and to produce a prototype of such an algorithm. The presented algorithm relies on a step-wise process. First, a set of positions is produced that can be sonicated safely. Then, an optimal subset of those positions is selected. Finally, the remaining treatment parameters are optimized. The treatment can either be optimized for maximum coverage or minimum total treatment time. The proposed algorithm is general enough to be adaptable to all ablation applications of MR-HIFU. It has a modular structure for easy updating, and it is able to improve on the plan during the treatment based on feedback from already delivered cells. This is the first published treatment planning algorithm for MR-HIFU that optimizes the treatment and has the ability to update the plan based on feedback. The prototype was tested in two artificial test cases and one real clinical case, proving its feasibility.
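The three planning steps can be outlined in code (an illustrative sketch only, not the thesis prototype; the candidate fields, the greedy coverage rule, and the fixed power value are assumptions):

```python
def plan_treatment(candidates, tumour_voxels, time_budget_s):
    """Step-wise planning sketch: (1) keep only safely sonicable
    positions, (2) greedily pick a subset maximizing ablated tumour
    volume within the time budget, (3) set the remaining parameters.

    `candidates` is a list of dicts with 'pos', 'safe', 'covers'
    (a set of voxel ids) and 'duration_s' -- all illustrative fields.
    """
    safe = [c for c in candidates if c["safe"]]          # step 1: safety filter
    plan, covered, t = [], set(), 0.0                    # step 2: greedy coverage
    while True:
        best, gain = None, 0
        for c in safe:
            g = len((c["covers"] & tumour_voxels) - covered)
            if g > gain and t + c["duration_s"] <= time_budget_s:
                best, gain = c, g
        if best is None:
            break
        plan.append(best)
        covered |= best["covers"] & tumour_voxels
        t += best["duration_s"]
        safe.remove(best)
    for c in plan:                                       # step 3: other parameters
        c["power_w"] = 100.0  # placeholder; tuned per cell in practice
    return plan, covered, t

cands = [
    {"pos": "A", "safe": True,  "covers": {1, 2}, "duration_s": 10.0},
    {"pos": "B", "safe": False, "covers": {3},    "duration_s": 10.0},
    {"pos": "C", "safe": True,  "covers": {2, 3}, "duration_s": 10.0},
]
plan, covered, t = plan_treatment(cands, {1, 2, 3}, time_budget_s=25.0)
```

A feedback-driven version would re-run steps 2-3 mid-treatment with already ablated voxels removed from `tumour_voxels`, which is what makes the modular structure useful.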
Integration of tools for the Design and Assessment of High-Performance, Highly Reliable Computing Systems (DAHPHRS), phase 1
Systems for Strategic Defense Initiative (SDI) space applications typically require both high performance and very high reliability. These requirements present the systems engineer evaluating such systems with the extremely difficult problem of conducting performance and reliability trade-offs over large design spaces. A controlled development process supported by appropriate automated tools must be used to assure that the system will meet design objectives. This report describes an investigation of methods, tools, and techniques necessary to support performance and reliability modeling for SDI systems development. Models of the JPL Hypercubes, the Encore Multimax, and the C.S. Draper Lab Fault-Tolerant Parallel Processor (FTPP) parallel-computing architectures were built and analyzed, using candidate SDI weapons-to-target assignment algorithms as workloads, as a means of identifying the necessary system models, how the models interact, and what experiments and analyses should be performed. As a result of this effort, weaknesses in the existing methods and tools were revealed, and the capabilities required of both individual tools and an integrated toolset were identified.
Specifications and programs for computer software validation
Three software products developed during the study are reported: (1) the FORTRAN Automatic Code Evaluation System, (2) the Specification Language System, and (3) the Array Index Validation System.
Security, trust and cooperation in wireless sensor networks
Wireless sensor networks are a promising technology for many real-world applications such as critical infrastructure monitoring, scientific data gathering, and smart buildings. However, given the typically unattended and potentially unsecured operating environment, sensor networks face an increasing number of security threats. In addition, sensor networks have very constrained resources, such as limited energy, memory, computational power, and communication bandwidth. These unique challenges call for new security mechanisms and algorithms. In this dissertation, we propose novel algorithms and models to address some important and challenging security problems in wireless sensor networks.
The first part of the dissertation focuses on data trust in sensor networks. Since sensor networks are mainly deployed to monitor events and report data, the quality of received data must be ensured in order to make meaningful inferences from sensor data. We first study a false data injection attack on the distributed state estimation problem and propose a distributed Bayesian detection algorithm that maintains correct estimation results when fewer than half of the sensors are compromised. To deal with the situation where more than half of the sensors may be compromised, we introduce a special class of sensor nodes called trusted cores. We then design a secure distributed trust aggregation algorithm that can utilize the trusted cores to improve network robustness. We show that as long as there exist paths connecting each regular node to one of these trusted cores, the network cannot be subverted by attackers.
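The fewer-than-half-compromised threshold can be illustrated with the standard median-aggregation fact (not the dissertation's Bayesian algorithm): as long as adversarial readings are a strict minority, the median cannot be pushed outside the range of honest values.

```python
def robust_estimate(readings):
    """Median aggregation: with fewer than half of the readings
    adversarial, the result stays within the honest value range."""
    xs = sorted(readings)
    n = len(xs)
    mid = n // 2
    return xs[mid] if n % 2 else (xs[mid - 1] + xs[mid]) / 2

honest = [10.1, 9.9, 10.0, 10.2, 9.8]
compromised = [1000.0, 1000.0]   # 2 of 7 sensors report wildly false data
est = robust_estimate(honest + compromised)
```

A mean-based aggregator over the same readings would be dragged far from the true value, which is exactly why robust statistics or trust-weighted schemes are needed once injection attacks are possible.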
The second part of the dissertation focuses on sensor network monitoring and anomaly detection. A sensor network may suffer from system failures due to loss of links and nodes, or from malicious intrusions. Therefore, it is critical to continuously monitor the overall state of the network and locate performance anomalies. The network monitoring and probe selection problem is formulated as a budgeted coverage problem and a Markov decision process. Efficient probing strategies are designed to achieve a flexible tradeoff between inference accuracy and probing overhead. Based on the probing results on traffic measurements, anomaly detection can be conducted. To capture the highly dynamic network traffic, we develop a detection scheme based on multi-scale analysis of the traffic using wavelet transforms and hidden Markov models. The probing strategy and the detection scheme are extensively evaluated in malicious scenarios using the NS-2 network simulator.
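The budgeted coverage formulation can be illustrated with the standard greedy gain-per-cost heuristic for budgeted maximum coverage (a sketch, not the dissertation's probing strategy; the probe names, costs, and link sets are invented):

```python
def select_probes(probes, budget):
    """Greedy budgeted-coverage heuristic: repeatedly pick the probe
    with the best coverage gain per unit cost that still fits the
    budget. `probes` maps name -> (cost, set of monitored links)."""
    chosen, covered, spent = [], set(), 0.0
    remaining = dict(probes)
    while remaining:
        best, best_ratio = None, 0.0
        for name, (cost, links) in remaining.items():
            gain = len(links - covered)
            if gain and spent + cost <= budget and gain / cost > best_ratio:
                best, best_ratio = name, gain / cost
        if best is None:
            break
        cost, links = remaining.pop(best)
        chosen.append(best)
        covered |= links
        spent += cost
    return chosen, covered, spent

probes = {
    "edge_probe": (1.0, {"l1", "l2"}),
    "core_probe": (2.0, {"l2", "l3", "l4"}),
    "far_probe":  (5.0, {"l5"}),
}
chosen, covered, spent = select_probes(probes, budget=3.0)
```

The budget knob is what gives the accuracy/overhead tradeoff: a larger budget admits more probes and hence more link coverage for inference, at the price of higher probing traffic.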
Lastly, to better understand the role of trust in sensor networks, a game-theoretic model is formulated to mathematically analyze the relation between trust and cooperation. Given the trust relations, the interactions among nodes are modeled as a network game on a trust-weighted graph. We then propose an efficient heuristic method that exploits network heterogeneity to improve the efficiency of the Nash equilibrium.
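A network game on a trust-weighted graph can be sketched as a simple coordination game with best-response dynamics (illustrative only; this payoff structure is an assumption, not the dissertation's model):

```python
def best_response_dynamics(weights, actions, rounds=20):
    """Trust-weighted coordination game: each node repeatedly switches
    to the action that maximizes agreement with its neighbors, weighted
    by trust. Coordination games of this form are potential games, so
    the dynamics converge to a pure Nash equilibrium.

    weights[u][v] is u's trust in neighbor v; actions maps node -> 0/1.
    """
    actions = dict(actions)
    for _ in range(rounds):
        changed = False
        for u, nbrs in weights.items():
            # payoff of action a: total trust toward agreeing neighbors
            payoff = {a: sum(w for v, w in nbrs.items() if actions[v] == a)
                      for a in (0, 1)}
            best = max(payoff, key=payoff.get)
            if payoff[best] > payoff[actions[u]]:
                actions[u] = best
                changed = True
        if not changed:   # no node wants to deviate: Nash equilibrium
            break
    return actions

triangle = {0: {1: 1.0, 2: 1.0}, 1: {0: 1.0, 2: 1.0}, 2: {0: 1.0, 1: 1.0}}
final = best_response_dynamics(triangle, {0: 1, 1: 1, 2: 0})
```

Unequal trust weights make some equilibria more efficient than others, which is the gap a heuristic that exploits network heterogeneity would aim to close.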