
    Sales Management Portal

    Sales Management Portal is a client information tracking portal. Each client's record holds contacts and activities, opportunities and proposals, and eventually projects. The portal can add clients, search clients by name or prospect status, and update the contacts, activities, opportunities, and proposals of the respective client. It can also display announcement messages on the manager's screen. The portal has a responsive design, so it adjusts to diverse resolutions and is easy for users to navigate on their own devices. At present there is no existing system on the client side: all work is handled manually, noted down in a register, and maintained as paper documentation. Meetings are arranged by phone, and whenever something changes the client must be called again to reschedule, which wastes time and money and disturbs valuable clients.
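    As a rough illustration of the data model and operations described above, the following is a minimal Python sketch of a client record with name/prospect search. All class, field, and method names are hypothetical, since the portal's actual schema is not given in the text.

        from dataclasses import dataclass, field

        @dataclass
        class Client:
            # Hypothetical record: the abstract lists contacts/activities,
            # opportunities/proposals, and eventually projects per client.
            name: str
            prospect: bool = False
            contacts: list = field(default_factory=list)
            activities: list = field(default_factory=list)
            opportunities: list = field(default_factory=list)
            proposals: list = field(default_factory=list)
            projects: list = field(default_factory=list)

        class ClientPortal:
            def __init__(self):
                self.clients = []

            def add_client(self, client):
                self.clients.append(client)

            def search(self, name="", prospects_only=False):
                # Search by (partial) name, optionally restricted to prospects.
                return [c for c in self.clients
                        if name.lower() in c.name.lower()
                        and (not prospects_only or c.prospect)]

        portal = ClientPortal()
        portal.add_client(Client("Acme Corp", prospect=True))
        portal.add_client(Client("Globex Inc"))
        print([c.name for c in portal.search("acme", prospects_only=True)])
        # -> ['Acme Corp']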

    The State of the Art in Language Workbenches. Conclusions from the Language Workbench Challenge

    Language workbenches are tools that provide high-level mechanisms for the implementation of (domain-specific) languages. Language workbenches are an active area of research that also receives many contributions from industry. To compare and discuss existing language workbenches, the annual Language Workbench Challenge was launched in 2011. Each year, participants are challenged to realize a given domain-specific language with their workbenches as a basis for discussion and comparison. In this paper, we describe the state of the art of language workbenches as observed in the previous editions of the Language Workbench Challenge. In particular, we capture the design space of language workbenches in a feature model and show where in this design space the participants of the 2013 Language Workbench Challenge reside. We compare these workbenches based on a DSL for questionnaires that was realized in all workbenches.
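    To make "a DSL for questionnaires" concrete, the following is a minimal sketch of a guarded questionnaire embedded in plain Python rather than realized in a language workbench. The tuple encoding, field names, and question texts are illustrative assumptions, not any participating workbench's actual notation.

        # Each entry: (name, label, answer type, optional guard question).
        questionnaire = [
            ("hasSoldHouse", "Did you sell a house in 2010?", bool, None),
            ("sellingPrice", "What was the selling price?", float, "hasSoldHouse"),
        ]

        def run(questions):
            answers = {}
            for name, label, qtype, guard in questions:
                # A guarded question is asked only when its guard answered true.
                if guard is not None and not answers.get(guard):
                    continue
                raw = input(label + " ")
                answers[name] = (raw.strip().lower() in ("y", "yes", "true")
                                 if qtype is bool else qtype(raw))
            return answers

        if __name__ == "__main__":
            print(run(questionnaire))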

    Scaling Causality Analysis for Production Systems.

    Causality analysis reveals how program values influence each other. It is important for debugging, optimizing, and understanding the execution of programs. This thesis scales causality analysis to production systems consisting of desktop and server applications as well as large-scale Internet services. This enables developers to employ causality analysis to debug and optimize complex, modern software systems. This thesis shows that it is possible to scale causality analysis both to fine-grained instruction-level analysis and to analysis of Internet-scale distributed systems with thousands of discrete software components by developing and employing automated methods to observe and reason about causality. First, we observe causality at a fine-grained instruction level by developing the first taint tracking framework to support tracking millions of input sources. We also introduce flexible taint tracking to allow for scoping different queries and dynamic filtering of inputs, outputs, and relationships. Next, we introduce the Mystery Machine, which uses a "big data" approach to discover causal relationships between software components in a large-scale Internet service. We leverage the fact that large-scale Internet services receive a large number of requests in order to observe counterexamples to hypothesized causal relationships. Using discovered causal relationships, we identify the critical path for request execution and use critical path analysis to explore potential scheduling optimizations. Finally, we explore using causality to make data-quality tradeoffs in Internet services. A data-quality tradeoff is an explicit decision by a software component to return lower-fidelity data in order to improve response time or minimize resource usage. We perform a study of data-quality tradeoffs in a large-scale Internet service to show the pervasiveness of these tradeoffs. We develop DQBarge, a system that enables better data-quality tradeoffs by propagating critical information along the causal path of request processing. Our evaluation shows that DQBarge helps Internet services mitigate load spikes, improve utilization of spare resources, and implement dynamic capacity planning.
    PhD thesis, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/135888/1/mcchow_1.pd
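    The Mystery Machine's core idea, eliminating hypothesized causal relationships when a trace provides a counterexample, can be sketched in a few lines of Python. This is a toy reconstruction from the description above, not the thesis's implementation; the event names and trace format are hypothetical.

        def discover_happens_before(traces):
            # Hypothesize a happens-before edge between every ordered pair of
            # events, then discard any edge for which some trace is a
            # counterexample (the two events appear in the reverse order).
            events = {e for trace in traces for e in trace}
            hypotheses = {(a, b) for a in events for b in events if a != b}
            for trace in traces:
                position = {e: i for i, e in enumerate(trace)}
                for a, b in list(hypotheses):
                    if a in position and b in position and position[a] > position[b]:
                        hypotheses.discard((a, b))
            return hypotheses

        # With enough varied traces, only the invariant orderings survive.
        traces = [
            ["recv", "parse", "render", "send"],
            ["recv", "render", "parse", "send"],  # parse/render order varies
        ]
        print(sorted(discover_happens_before(traces)))
        # -> recv precedes everything, send follows everything,
        #    and no edge remains between parse and render.

    Surviving edges form the basis for the critical path analysis the abstract mentions: the longest chain of happens-before edges through a request bounds its latency.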

    Accelerated Data Delivery Architecture

    This paper introduces the Accelerated Data Delivery Architecture (ADDA). ADDA establishes a framework to distribute transactional data and control consistency to achieve fast access to data, distributed scalability, and non-blocking concurrency control through a clean declarative interface. It is designed to be used with web-based business applications. The framework combines a traditional Relational Database Management System (RDBMS) with a distributed Not Only SQL (NoSQL) database and a browser-based database. It uses a single physical and conceptual database schema designed for a standard RDBMS-driven application. The design allows the architect to assign consistency levels to entities, which determine the storage location and query methodology. The implementation of these levels is flexible and requires no database schema changes in order to change the level of an entity. In addition, a data leasing system enforces concurrency control in a non-blocking manner for critical data items. The system also ensures that all data is available for query from the RDBMS server. This means the system can have the performance advantages of a distributed DBMS and the ACID qualities of a single-site RDBMS without the complex design considerations of traditional DDBMS systems.
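    A minimal Python sketch of the central mechanism follows, assuming per-entity consistency levels that route each write to a store, plus leases that fail fast instead of blocking. The level names, stores, and lease check are illustrative assumptions, not ADDA's actual protocol.

        from enum import Enum

        class Consistency(Enum):
            STRONG = "rdbms"      # single-site RDBMS, full ACID
            EVENTUAL = "nosql"    # distributed NoSQL replica
            LOCAL = "browser"     # browser-based database

        # Hypothetical assignments; changing a level needs no schema change.
        ENTITY_LEVELS = {
            "invoice": Consistency.STRONG,
            "catalog": Consistency.EVENTUAL,
            "ui_prefs": Consistency.LOCAL,
        }

        class Router:
            def __init__(self):
                self.stores = {c.value: {} for c in Consistency}
                self.leases = {}  # (entity, key) -> holder, for critical items

            def write(self, entity, key, value, holder):
                level = ENTITY_LEVELS[entity]
                if level is Consistency.STRONG:
                    # A lease, not a lock: conflicting writers fail fast
                    # instead of blocking.
                    current = self.leases.get((entity, key))
                    if current not in (None, holder):
                        raise RuntimeError(f"lease held by {current}")
                    self.leases[(entity, key)] = holder
                self.stores[level.value][(entity, key)] = value
                # Mirror everything so all data stays queryable from the RDBMS.
                self.stores[Consistency.STRONG.value][(entity, key)] = value

            def read(self, entity, key):
                return self.stores[ENTITY_LEVELS[entity].value][(entity, key)]

        r = Router()
        r.write("catalog", "sku42", {"price": 9.99}, holder="web-1")
        print(r.read("catalog", "sku42"))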

    Achieving Adaptation Through Live Virtual Machine Migration in Two-Tier Clouds

    This thesis presents a model-driven approach for application deployment and management in two-tier heterogeneous cloud environments. For application deployment, we introduce the architecture, the services, and the domain-specific language that abstract common features of multi-cloud deployments. By leveraging the architecture and the language, application deployers author a deployment model that captures the high-level structure of the application. The deployment model is then translated into deployment workflows on specific clouds. As a use case, we introduce a live VM migration framework that maintains application quality of service through VM migrations across two-tier clouds. The proposed framework monitors the performance of the applications and their underlying infrastructure, and plans and executes VM migrations to eliminate hotspots in a datacenter. We evaluate both the application deployment architecture and the live migration framework on public clouds.
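    The hotspot-elimination step can be pictured as a greedy planner that moves VMs off overloaded hosts onto the least-loaded host able to absorb them. The threshold, load metric, and greedy policy in the Python sketch below are illustrative assumptions, not the thesis's actual migration planner.

        THRESHOLD = 0.8  # hypothetical: a host above 80% utilization is a hotspot

        def plan_migrations(hosts):
            """hosts: {host: {vm: load}}; returns a list of (vm, src, dst) moves."""
            moves = []
            load = {h: sum(vms.values()) for h, vms in hosts.items()}
            for src, vms in hosts.items():
                # Move the smallest VMs away first until the host cools down.
                for vm, vm_load in sorted(vms.items(), key=lambda kv: kv[1]):
                    if load[src] <= THRESHOLD:
                        break
                    dst = min(load, key=load.get)  # least-loaded host
                    if dst == src or load[dst] + vm_load > THRESHOLD:
                        continue  # no host can absorb this VM safely
                    moves.append((vm, src, dst))
                    load[src] -= vm_load
                    load[dst] += vm_load
            return moves

        hosts = {
            "h1": {"vm_a": 0.5, "vm_b": 0.4},  # hotspot: 0.9 total
            "h2": {"vm_c": 0.2},
        }
        print(plan_migrations(hosts))  # -> [('vm_b', 'h1', 'h2')]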