
    Development of an autonomous distributed multiagent monitoring system for the automatic classification of end users

    The purpose of this study is to investigate the feasibility of constructing a software multi-agent based monitoring and classification system and using it to provide automated and accurate classification of end users developing applications in the spreadsheet domain. The result is the Multi-Agent Classification System (MACS). Microsoft .NET Windows Service based agents were used to develop the Monitoring Agents of MACS. These agents function autonomously to provide continuous, periodic monitoring of spreadsheet workbooks by content. .NET Windows Communication Foundation (WCF) Services technology was used together with the Service Oriented Architecture (SOA) approach to distribute the agents over the World Wide Web, so that multiple developers can be monitored and classified. The Prometheus agent-oriented design methodology and its accompanying Prometheus Design Tool (PDT) were employed for specifying and designing the agents of MACS, and Visual Studio .NET 2008 was used to create the agency in the Visual C# programming language. MACS was evaluated against classification criteria from the literature, supported by real-time data collected from a target group of Excel spreadsheet developers over a network. The Monitoring Agents were configured to execute automatically, without any user intervention, as Windows Service processes in the .NET web server application of the system. These distributed agents listen to and read the contents of Excel spreadsheet development activities in terms of file and author properties, functions and formulas used, and Visual Basic for Applications (VBA) macro code constructs. Data gathered by the Monitoring Agents from various sources over a period of time was collected and filtered by a Database Updater Agent residing in the .NET client application of the system. This agent then transfers and stores the data in an Oracle server database via Oracle stored procedures for further processing that leads to the classification of the end-user developers. Oracle data mining classification algorithms (Naive Bayes, Adaptive Naive Bayes, Decision Trees, and Support Vector Machine) were used to analyse the results of the data-gathering process in order to automate the classification of Excel spreadsheet developers. The accuracy of the predictions achieved by the models was compared; the Naive Bayes classifier achieved the best results, with an accuracy of 0.978. MACS can therefore provide a multi-agent based automated classification solution for spreadsheet developers with a high degree of accuracy.
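    To make the classification step concrete, the sketch below reproduces the idea of training and scoring a Naive Bayes model on monitored developer activity. It substitutes scikit-learn's GaussianNB for the Oracle Data Mining implementation the study actually used, and the activity features and class labels are hypothetical, not taken from the thesis.

```python
# Hedged sketch of the MACS classification step, substituting
# scikit-learn's Naive Bayes for Oracle Data Mining. The activity
# features and class labels are hypothetical illustrations.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# One row per monitored developer:
# [distinct functions used, formula count, VBA procedures, workbooks]
X = np.array([
    [3, 12, 0, 2],
    [18, 240, 6, 9],
    [9, 80, 1, 4],
    [25, 400, 14, 12],
    [2, 8, 0, 1],
    [11, 95, 2, 5],
])
y = np.array(["novice", "expert", "intermediate",
              "expert", "novice", "intermediate"])

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.33, random_state=0)

clf = GaussianNB().fit(X_train, y_train)
print(accuracy_score(y_test, clf.predict(X_test)))
```

    Replacing GaussianNB with a decision tree or support vector classifier and comparing the resulting accuracies mirrors the model comparison the study performed.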

    The A.D.E. taxonomy of spreadsheet application development

    Spreadsheets are a major application in end-user computing, one of the fastest growing areas of computing. Studies have shown that 30% of spreadsheet applications contain errors. As major decisions are often made with the assistance of spreadsheets, the control of spreadsheet applications is a matter of concern to end-user developers, managers, EDP auditors, and computer professionals. The application of appropriate controls to the spreadsheet development process requires prior categorisation of the spreadsheet application. The special-purpose A.D.E. (Application, Development, Environment) taxonomy of spreadsheet application development was evolved by mathematical taxonomic methods to categorise spreadsheet development projects, so as to facilitate their management and control.
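    The abstract does not reproduce the taxonomic procedure or its data, but the flavour of a mathematical (numerical) taxonomic method can be sketched: score each spreadsheet project on a few attributes and let cluster analysis propose categories. In the sketch below the three axes loosely echo the A.D.E. dimensions, and every score is invented.

```python
# Rough illustration (not the A.D.E. method itself) of categorising
# spreadsheet projects by numerical taxonomy: hierarchical clustering
# over hypothetical attribute scores.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Rows: projects. Columns: application complexity, development
# formality, environment criticality (each scored 0-10, invented).
projects = np.array([
    [2, 1, 1],   # ad hoc personal model
    [8, 7, 9],   # shared financial reporting workbook
    [3, 2, 2],
    [7, 8, 8],
    [5, 4, 6],
])

tree = linkage(projects, method="average")   # average-linkage clustering
labels = fcluster(tree, t=2, criterion="maxclust")
print(labels)  # two candidate categories to review and name
```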

    Impliance: A Next Generation Information Management Appliance

    While the database industry has been remarkably successful in building a large market and adapting to the changes of the last three decades, its impact on the broader market of information management is surprisingly limited. If we were to design an information management system from scratch, based upon today's requirements and hardware capabilities, would it look anything like today's database systems? In this paper, we introduce Impliance, a next-generation information management system consisting of hardware and software components integrated to form an easy-to-administer appliance that can store, retrieve, and analyze all types of structured, semi-structured, and unstructured information. We first summarize the trends that will shape information management for the foreseeable future. Those trends imply three major requirements for Impliance: (1) to be able to store, manage, and uniformly query all data, not just structured records; (2) to be able to scale out as the volume of this data grows; and (3) to be simple and robust in operation. We then describe four key ideas that are uniquely combined in Impliance to address these requirements: (a) integrating software and off-the-shelf hardware into a generic information appliance; (b) automatically discovering, organizing, and managing all data, unstructured as well as structured, in a uniform way; (c) achieving scale-out by exploiting simple, massively parallel processing; and (d) virtualizing compute and storage resources to unify, simplify, and streamline the management of Impliance. Impliance is an ambitious, long-term effort to define simpler, more robust, and more scalable information systems for tomorrow's enterprises. Comment: This article is published under a Creative Commons License Agreement (http://creativecommons.org/licenses/by/2.5/). You may copy, distribute, display, and perform the work, make derivative works and make commercial use of the work, but you must attribute the work to the author and CIDR 2007. 3rd Biennial Conference on Innovative Data Systems Research (CIDR), January 7-10, 2007, Asilomar, California, USA.
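    The abstract describes Impliance only at the vision level, so code can at best illustrate a requirement, not the system. The toy sketch below shows requirement (1), uniform querying over structured and unstructured records with a single predicate applied to every field; the records and the dictionary-based storage model are our invention, not Impliance's design.

```python
# Toy illustration of uniform querying over heterogeneous records.
# This is not Impliance's architecture; the records are invented.
records = [
    {"type": "row", "table": "orders", "customer": "acme", "total": 1200},
    {"type": "email", "text": "Acme asked to expedite order 7721."},
    {"type": "doc", "title": "Acme contract", "text": "Renewal terms..."},
]

def search(items, term):
    """Match a term against every field of every record, uniformly."""
    term = term.lower()
    return [r for r in items
            if any(term in str(v).lower() for v in r.values())]

print(search(records, "acme"))  # hits structured and unstructured data
```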

    Beyond Regulatory Compliance for Spreadsheet Controls: A Tutorial to Assist Practitioners and a Call for Research

    In the past decade, accounting scandals and financial reporting errors have led to heightened awareness of the need for IT controls and legislation of control regimes. In the United States, the Sarbanes–Oxley Act of 2002 (SOX) was one of the early initiatives to legislate internal controls over financial reporting. Many countries and regions have followed with similar legislation. In this tutorial we present an analysis of the prior work on error prevention and detection in spreadsheets as it relates to SOX and, more generally, to IT governance frameworks. SOX requires publicly traded companies to address the problem of spreadsheet management and to assume some accountability for generating accurate information from spreadsheets for financial reporting. We attempt to reconcile the requirements of SOX with IT spreadsheet research, and identify gaps in the design and implementation of spreadsheet controls. From our review of prior work on spreadsheets, we offer a series of options for controlling the spreadsheet development process. Finally, we provide suggestions to help IT practitioners in organizations look beyond SOX regulations toward governance of end-user developed content.
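    As one concrete instance of an automatable spreadsheet control of the kind the tutorial surveys (the script and heuristic are our illustration, not a tool from the paper), a scan can flag formulas that embed hardcoded numeric constants, a common audit finding. The sketch below uses the openpyxl library and a hypothetical workbook name.

```python
# Hedged sketch of one automatable spreadsheet control: flag formula
# cells containing hardcoded numbers. The heuristic and file name are
# illustrative, not taken from the paper.
import re
import openpyxl

def hardcoded_constants(path):
    wb = openpyxl.load_workbook(path)  # default data_only=False keeps formulas
    findings = []
    for ws in wb.worksheets:
        for row in ws.iter_rows():
            for cell in row:
                if cell.data_type == "f":  # formula cell
                    # Crude check: a digit run not preceded by a letter,
                    # digit, or '$' (i.e. not part of a cell reference).
                    if re.search(r"(?<![A-Za-z0-9$])\d+(?:\.\d+)?", cell.value):
                        findings.append((ws.title, cell.coordinate, cell.value))
    return findings

for sheet, coord, formula in hardcoded_constants("report.xlsx"):
    print(f"{sheet}!{coord}: {formula}")
```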

    On the Challenges of Collaborative Data Processing

    The last 30 years have seen the creation of a variety of electronic collaboration tools for science and business. Some of the best-known collaboration tools support text editing (e.g., wikis). Wikipedia's success shows that large-scale collaboration can produce highly valuable content. Meanwhile, much structured data is being collected and made publicly available, and we have never had access to more powerful databases and statistical packages. Is large-scale collaborative data analysis now possible? Using a quantitative analysis of Web 2.0 data visualization sites, we find evidence that at least moderate open collaboration occurs. We then explore some of the limiting factors of collaboration over data. Comment: to appear as a chapter in an upcoming book (Collaborative Information Behavior).

    Change-centric improvement of team collaboration

    In software development, teamwork is essential to the successful delivery of a final product. The software industry has historically built software using development teams that share a workplace. Process models, tools, and methodologies have been enhanced to support the development of software in a collocated setting. Since the dawn of the 21st century, however, this scenario has begun to change: an increasing number of software companies are adopting global software development to cut costs and speed up the development process. Global software development introduces several challenges for the creation of quality software, from the adaptation of current methods, tools, and techniques to new challenges imposed by the distributed setting, including physical and cultural distance between teams, communication problems, and coordination breakdowns. A particular challenge for distributed teams is maintaining the level of collaboration naturally present in collocated teams. Collaboration in this situation naturally drops due to low awareness of the team's activity. Awareness is intrinsic to a collocated team, being obtained through human interaction such as informal conversation or meetings. For a distributed team, however, geographical distance and the consequent lack of human interaction negatively impact this awareness. This dissertation focuses on the improvement of collaboration, especially within geographically dispersed teams. Our thesis is that by modeling the evolution of a software system in terms of fine-grained changes, we can produce a detailed history that may be leveraged to help developers collaborate. To validate this claim, we first create a model to accurately represent the evolution of a system as sequences of fine-grained changes. We proceed to build a tool infrastructure able to capture and store fine-grained changes for both immediate and later use. Upon this foundation, we devise and evaluate a number of applications for our work, with two distinct goals: (1) to assist developers with real-time information about the activity of the team; these applications aim to improve developers' awareness of team member activity that can impact their work, and we propose visualizations to notify developers of ongoing change activity, as well as a new technique for detecting and informing developers about potential emerging conflicts; (2) to help developers satisfy their needs for information related to the evolution of the software system; these applications aim to exploit the detailed change history generated by our approach to help developers find answers to questions arising during their work, and to this end we present two new measurements of code expertise and a novel approach to replaying past changes according to user-defined criteria. We evaluate the approach and applications by adopting appropriate empirical methods for each case. A total of two case studies, one controlled experiment, and one qualitative user study are reported. The results provide evidence that applications leveraging a fine-grained change history of a software system can effectively help developers collaborate in a distributed setting.
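    As an illustration of the central idea, the sketch below models evolution as a replayable sequence of fine-grained changes and rebuilds the set of live program elements under user-defined criteria. It is a minimal sketch of the concept, not the dissertation's actual model or tooling; all names are hypothetical.

```python
# Minimal sketch: a software system's evolution as a sequence of
# fine-grained changes that can be replayed with filters.
from dataclasses import dataclass

@dataclass(frozen=True)
class Change:
    author: str
    timestamp: float   # history is assumed sorted by timestamp
    node: str          # e.g. a method's fully qualified name
    kind: str          # "add" | "modify" | "remove"

def replay(history, until=None, author=None):
    """Rebuild the set of live nodes, optionally filtered by time
    or author (the user-defined replay criteria)."""
    live = set()
    for ch in history:
        if until is not None and ch.timestamp > until:
            break
        if author is not None and ch.author != author:
            continue
        if ch.kind == "remove":
            live.discard(ch.node)
        else:
            live.add(ch.node)
    return live

history = [
    Change("ana", 1.0, "Cart.total()", "add"),
    Change("bo", 2.0, "Cart.total()", "modify"),
    Change("ana", 3.0, "Cart.clear()", "add"),
    Change("bo", 4.0, "Cart.clear()", "remove"),
]
print(replay(history, until=3.0))  # {'Cart.total()', 'Cart.clear()'}
```

    Counting an author's changes per node over such a history is one simple route to the kind of expertise measurement the dissertation proposes; detecting two authors concurrently modifying the same node gives a basic emerging-conflict signal.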

    Prescriptive Analytics: A Survey of Emerging Trends and Technologies
