Sandboxed, Online Debugging of Production Bugs for SOA Systems
Short time-to-bug localization is extremely important for any 24x7 service-oriented application. To this end, we introduce a new debugging paradigm called live debugging. There are two goals that any live debugging infrastructure must meet: Firstly, it must offer real-time insight for bug diagnosis and localization, which is paramount when errors happen in user-facing applications. Secondly, live debugging should not impact user-facing performance for normal events. In large distributed applications, bugs which impact only a small percentage of users are common. In such scenarios, debugging a small part of the application should not impact the entire system.
With the above-stated goals in mind, this thesis presents a framework called Parikshan, which leverages user-space containers (OpenVZ) to launch application instances for the express purpose of live debugging. Parikshan is driven by a live-cloning process, which generates a replica (called the debug container) of a production service, cloned from a production container that continues to provide the real output to the user. The debug container provides a sandbox environment for safe execution of monitoring and debugging by users, without any perturbation to the production execution environment. As part of this framework, we have designed customized network proxies that replicate inputs from clients to both the production and debug containers, and safely discard all outputs from the debug container. Together, the network duplicator and the debug container ensure both compute and network isolation of the debugging environment. We believe this work provides the first practical real-time debugging of large multi-tier and cloud applications that requires no application downtime and incurs only minimal performance impact.
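The input-duplication idea behind the network proxy can be sketched minimally. This is an illustrative sketch only, not Parikshan's actual implementation: the function and handler names are hypothetical, and the real system duplicates traffic at the network layer rather than at a function boundary.

```python
# Minimal sketch of input duplication: the same client input reaches both
# containers, but only the production response reaches the client, and a
# failing debug container must never perturb the production path.

def duplicate_request(request, production_handler, debug_handler):
    """Forward one request to both handlers; return only the production
    response and discard (or swallow failures of) the debug side."""
    production_response = production_handler(request)
    try:
        debug_handler(request)  # debug response intentionally discarded
    except Exception:
        pass  # a crash in the debug container stays invisible to the client
    return production_response

# Usage: the debug handler can crash without the client noticing.
prod = lambda req: req.upper()
buggy_debug = lambda req: 1 / 0
print(duplicate_request("ping", prod, buggy_debug))  # -> PING
```

In the real framework this isolation is enforced at the network and container level; the sketch only illustrates the one-way flow of responses.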
weHelp: A Reference Architecture for Social Recommender Systems
Recommender systems have become increasingly popular. Most of the research on recommender systems has focused on recommendation algorithms. There has been relatively little research, however, in the area of generalized system architectures for recommendation systems. In this paper, we introduce weHelp: a reference architecture for social recommender systems — systems where recommendations are derived automatically from the aggregate of logged activities conducted by the system's users. Our architecture is designed to be application- and domain-agnostic. We feel that a good reference architecture will make designing a recommendation system easier; in particular, weHelp aims to provide a practical design template to help developers design their own well-modularized systems.
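The kind of modular decomposition a reference architecture encourages can be sketched as follows. The component names here are illustrative inventions, not weHelp's actual modules; the point is that logging of user activity, the recommendation algorithm, and the application-facing facade are separable concerns.

```python
# Hypothetical sketch of a modular social recommender: activities are
# logged in one component, the algorithm is a swappable second component,
# and the application talks only to a composing facade.

class ActivityLogger:
    """Collects the logged user activities recommendations derive from."""
    def __init__(self):
        self.events = []
    def log(self, user, item):
        self.events.append((user, item))

class RecommendationEngine:
    """Pluggable algorithm module: swap strategies without touching logging."""
    def recommend(self, events, user):
        # Trivial strategy: suggest items other users have interacted with.
        return sorted({item for u, item in events if u != user})

class RecommenderSystem:
    """Facade composing the modules; the application sees only this."""
    def __init__(self, logger, engine):
        self.logger, self.engine = logger, engine
    def recommend_for(self, user):
        return self.engine.recommend(self.logger.events, user)

logger = ActivityLogger()
logger.log("alice", "pandas")
logger.log("bob", "numpy")
system = RecommenderSystem(logger, RecommendationEngine())
print(system.recommend_for("alice"))  # -> ['numpy']
```

Replacing `RecommendationEngine` with a different strategy requires no change to logging or the facade, which is the modularity the abstract argues for.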
The weHelp Reference Architecture for Community-Driven Recommender Systems
Recommender systems have become increasingly popular. Most research on recommender systems has focused on recommendation algorithms. There has been relatively little research, however, in the area of generalized system architectures for recommendation systems. In this paper, we introduce weHelp — a reference architecture for social recommender systems. Our architecture is designed to be application- and domain-agnostic, but we briefly discuss here how it applies to recommender systems for software engineering.
Towards Diversity in Recommendations Using Social Networks
While there has been a great deal of research toward improving the accuracy of recommender systems, the resulting systems have tended to become increasingly narrow in suggestion variety. An emerging trend in recommendation systems is to actively seek out diversity in recommendations, where the aim is to provide unexpected, varied, and serendipitous recommendations to the user. Our main contribution in this paper is a new approach to diversity in recommendations called "Social Diversity," a technique that uses social network information to diversify recommendation results. Social Diversity utilizes social networks in recommender systems to leverage the diverse underlying preferences of different user communities and thereby introduce diversity into recommendations. This form of diversification ensures that users in different social networks (who may not collaborate in real life, since they are in different networks) share information, helping to prevent siloization of knowledge and recommendations. We describe our approach and show its feasibility in providing diverse recommendations for the MovieLens dataset.
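The core blending idea can be sketched simply: reserve some recommendation slots for items popular in a community the target user does not belong to. This is a hedged illustration of the concept, not the paper's algorithm; the function names and the half-and-half split are assumptions.

```python
# Illustrative sketch of "Social Diversity"-style blending: half the slots
# come from the user's own social network, half from a different community,
# so cross-community knowledge is not siloed.

from collections import Counter

def top_items(ratings, users, k):
    """Most frequently rated items among the given set of users."""
    counts = Counter(item for u, item in ratings if u in users)
    return [item for item, _ in counts.most_common(k)]

def socially_diverse_recommend(ratings, own_network, other_network, k=4):
    """Fill k slots: k//2 from the user's own network, the rest from a
    community the user does not belong to (skipping duplicates)."""
    own = top_items(ratings, own_network, k // 2)
    other = [i for i in top_items(ratings, other_network, k) if i not in own]
    return own + other[: k - len(own)]

ratings = [("a", "Heat"), ("a", "Alien"), ("b", "Heat"),
           ("c", "Amelie"), ("c", "Persona"), ("d", "Persona")]
# User in network {a, b} also receives what network {c, d} watches.
print(socially_diverse_recommend(ratings, {"a", "b"}, {"c", "d"}))
# -> ['Heat', 'Alien', 'Persona', 'Amelie']
```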
POWER: Parallel Optimizations With Executable Rewriting
The hardware industry's rapid development of multicore and many-core hardware has outpaced the software industry's transition from sequential to parallel programs. Most applications are still sequential, and many cores on parallel machines remain unused. We propose a tool that uses data-dependence profiling and binary rewriting to parallelize executables without access to source code. Our technique uses Bernstein's conditions to identify independent sets of basic blocks that can be executed in parallel, introducing a level of granularity between fine-grained instruction-level and coarse-grained task-level parallelism. We analyze dynamically generated control- and data-dependence graphs to find these independent sets of basic blocks, and we then propose to parallelize the candidates using binary rewriting techniques. Our technique aims to demonstrate the parallelism that remains in serial applications by exposing concrete opportunities for parallelism.
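Bernstein's conditions themselves are compact: two blocks may run in parallel when neither writes a location the other reads or writes. A minimal sketch, with toy read/write sets standing in for the profiler's output:

```python
# Bernstein's conditions: blocks P1 and P2 are independent iff
#   W1 ∩ R2 = ∅,  R1 ∩ W2 = ∅,  W1 ∩ W2 = ∅
# (no anti-, flow-, or output-dependence between them).

def bernstein_independent(block1, block2):
    r1, w1 = block1
    r2, w2 = block2
    return not (w1 & r2) and not (r1 & w2) and not (w1 & w2)

# b1: x = a + b   reads {a, b}, writes {x}
# b2: y = c * d   reads {c, d}, writes {y}
# b3: z = x + 1   reads {x},    writes {z}
b1 = ({"a", "b"}, {"x"})
b2 = ({"c", "d"}, {"y"})
b3 = ({"x"}, {"z"})
print(bernstein_independent(b1, b2))  # -> True  (parallelizable)
print(bernstein_independent(b1, b3))  # -> False (flow dependence on x)
```

In the proposed tool the read/write sets would come from dynamic data-dependence profiling rather than being written by hand.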
Why the Common Model of the mind needs holographic a-priori categories
The enterprise of developing a common model of the mind aims to create a foundational architecture for rational behavior in humans. The philosopher Immanuel Kant attempted something similar in 1781. The principles laid out by Kant in pursuing this goal can shed important light on the common model project. Unfortunately, Kant's program has become hopelessly mired in philosophical hair-splitting. In this paper, we first use Kant's approach to isolate the founding conditions of rationality in humans. His philosophy lends support to Newell's knowledge-level hypothesis and, together with it, directs the common model enterprise to take knowledge, and not just memory, seriously as a component of the common model of the mind. We then map Kant's cognitive mechanics to the operations used in current models of cognitive architecture. Finally, we argue that this mapping can pave the way to developing the ontology of the knowledge level for general intelligence, and we further show how these operations can be actualized in a memory system that uses high-dimensional vectors to achieve specific cognitive abilities.
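One common way to realize "holographic" memory in high-dimensional vectors is circular-convolution binding, in the spirit of Plate's holographic reduced representations. The sketch below is a generic illustration of that technique, not the paper's specific mechanism, and all names in it are our own.

```python
import random

# Holographic-style binding: circular convolution packs a (role, filler)
# pair into one vector of the same dimensionality; correlating with the
# role approximately recovers the filler.

DIM = 256
random.seed(0)  # fixed seed for a reproducible demonstration

def rand_vec():
    return [random.gauss(0, 1 / DIM ** 0.5) for _ in range(DIM)]

def bind(a, b):
    """Circular convolution: trace[i] = sum_k a[k] * b[(i-k) mod DIM]."""
    return [sum(a[k] * b[(i - k) % DIM] for k in range(DIM))
            for i in range(DIM)]

def unbind(trace, a):
    """Convolve with the approximate inverse of a (index reversal)."""
    inv = [a[0]] + a[:0:-1]
    return bind(inv, trace)

def cosine(u, v):
    dot = sum(x * y for x, y in zip(u, v))
    return dot / (sum(x * x for x in u) ** 0.5
                  * sum(y * y for y in v) ** 0.5)

role, filler, noise = rand_vec(), rand_vec(), rand_vec()
memory = bind(role, filler)
recovered = unbind(memory, role)
# The recovered vector resembles the stored filler far more than noise.
print(cosine(recovered, filler) > cosine(recovered, noise))  # -> True
```

The appeal for a common model of the mind is that many such bindings can be superimposed in a single fixed-size vector and retrieved by the same operation.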
Quantifying the Learning Curve in the Use of a Novel Vascular Closure Device: An Analysis of the NCDR (National Cardiovascular Data Registry) CathPCI Registry
Objectives: This study sought to quantify the learning curve for the safety and effectiveness of a newly introduced vascular closure device through evaluation of the NCDR (National Cardiovascular Data Registry) CathPCI clinical outcomes registry.
Background: The impact of learning on clinical outcomes complicates the assessment of safety and efficacy during the early experience with newly introduced medical devices.
Methods: We performed a retrospective analysis of the relationship between cumulative institutional experience and clinical device success, defined as device deployment success and freedom from any vascular complications, for the StarClose vascular closure device (Abbott Vascular, Redwood City, California). Generalized estimating equation modeling was used to develop risk-adjusted clinical success predictions that were analyzed to quantify learning curve rates.
Results: A total of 107,710 procedures used at least 1 StarClose deployment between January 1, 2006, and December 31, 2007, with overall clinical success increasing from 93% to 97% during the study period. The learning curve was triphasic, with an initial rapid learning phase, followed by a period of declining rates of success, followed finally by a recovery to a steady-state rate of improved device success. The rates of learning were influenced positively by diagnostic (vs. percutaneous coronary intervention) procedure use and teaching status, and were affected inversely by annual institutional volume.
Conclusions: The institutional-level learning curve for the initial national experience with StarClose was triphasic, likely indicating changes in patient selection and expansion of the number of operators during the initial phases of device adoption. The rate of learning was influenced by several institutional factors, including overall procedural volume, utilization for percutaneous coronary intervention procedures, and teaching status.