    Fund Finder: A case study of database-to-ontology mapping

    The mapping between databases and ontologies is a basic problem when trying to "upgrade" deep web content to the semantic web. Our approach suggests the declarative definition of mappings as a way to achieve domain independence and reusability. A specific language, expressive enough to cover real-world mapping situations such as lightly structured or non-first-normal-form databases, is defined for this purpose. Alongside this mapping description language, the ODEMapster processor carries out the actual instance-data migration. We illustrate this by testing both the mapping definitions and the processor on a case study.
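
    The abstract does not reproduce the mapping language itself, so the following is only a minimal sketch of the declarative-mapping idea: the mapping is plain data, and a small processor (standing in for ODEMapster, whose real syntax and behavior differ) walks the mapped table and emits ontology instances. All names here (ex:Fund, ex:fundName, the funds table) are hypothetical.

        # Minimal sketch of a declarative database-to-ontology mapping.
        # The mapping dialect is illustrative only; it is NOT the paper's
        # actual language, and the processor merely stands in for
        # ODEMapster's instance-data migration step.
        import sqlite3

        # Declarative mapping: table/columns on one side, ontology terms on the other.
        MAPPING = {
            "table": "funds",
            "class": "ex:Fund",              # ontology class for each row
            "id_column": "fund_id",          # used to mint instance URIs
            "properties": {                  # column -> ontology property
                "name": "ex:fundName",
                "amount": "ex:maxAmount",
            },
        }

        def migrate(conn, mapping):
            """Walk the mapped table and emit one triple set per row."""
            cols = [mapping["id_column"], *mapping["properties"]]
            rows = conn.execute(f"SELECT {', '.join(cols)} FROM {mapping['table']}")
            for row in rows:
                subject = f"ex:fund/{row[0]}"
                yield (subject, "rdf:type", mapping["class"])
                for value, prop in zip(row[1:], mapping["properties"].values()):
                    yield (subject, prop, value)

        if __name__ == "__main__":
            conn = sqlite3.connect(":memory:")
            conn.execute("CREATE TABLE funds (fund_id INTEGER, name TEXT, amount REAL)")
            conn.execute("INSERT INTO funds VALUES (1, 'SME Innovation Grant', 50000.0)")
            for triple in migrate(conn, MAPPING):
                print(triple)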

    ERP Solutions Inc.

    The objective of this project is to create an online enterprise resource planning (ERP) application that allows manufacturing companies to manage their day-to-day operations and plan resources accordingly. This is an important part of business systems, supporting continuous improvement by giving organizations the information needed to reduce waste. The problem most manufacturing facilities encounter is poor communication between functions such as production and master production scheduling. For example, suppose the day-shift production team is scheduled to produce two thousand xy widgets and then change over to xyz widgets after completing the order. What if day-shift production encounters significant downtime and runs behind schedule? This puts production behind schedule and can also influence how raw material is ordered. Therefore, the goal of this project is to create an application that can be leveraged by all functions to make decisions and to identify performance gaps. The ERP software will replace a paper system used to reconcile production resources and costs at the end of each shift. The system is scheduled to be available to production supervisors by April 2016.
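
    To make the downtime scenario concrete, here is a hypothetical sketch of the shift-reconciliation logic such an application might implement; the class, field names, and numbers below are all illustrative, not the project's actual design.

        # Sketch of end-of-shift reconciliation; every name and figure
        # here is hypothetical, not the project's actual design.
        from dataclasses import dataclass

        @dataclass
        class ShiftReport:
            product: str
            scheduled_units: int
            produced_units: int
            downtime_minutes: int

            @property
            def shortfall(self) -> int:
                return max(0, self.scheduled_units - self.produced_units)

        def reconcile(report: ShiftReport, units_per_raw_unit: float) -> dict:
            """Quantify the schedule gap and its knock-on effect on material orders."""
            return {
                "product": report.product,
                "shortfall_units": report.shortfall,
                "carryover_to_next_shift": report.shortfall,   # next shift inherits the gap
                "extra_raw_material": report.shortfall / units_per_raw_unit,
                "downtime_minutes": report.downtime_minutes,
            }

        # Day shift was scheduled for 2,000 widgets but lost 90 minutes to downtime.
        day_shift = ShiftReport("xy widget", scheduled_units=2000,
                                produced_units=1650, downtime_minutes=90)
        print(reconcile(day_shift, units_per_raw_unit=4.0))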

    An Analysis of Successful SQLIA for Future Evolutionary Prediction

    Web applications are a fundamental component of the internet, and many interact with backend databases. Securing web applications and their databases from hackers should be a top priority for cybersecurity researchers. Structured Query Language (SQL) injection attacks (SQLIA) constitute a significant threat to web applications: they can hijack backend databases to steal personally identifiable information (PII), initiate scams, or launch more sophisticated cyberattacks. SQLIA have evolved since their conception in the early 2000s and will continue to do so in the coming years. This paper analyzes past literature and successful SQLIA from specific time periods to identify themes and methods used by security researchers and hackers. By extrapolating and interpreting the themes of both the literature and effective SQLIA, trends can be identified and a clearer understanding of the future of SQL injection can be developed to improve cybersecurity best practices.
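
    The classic first-order SQLIA pattern that such analyses start from fits in a few lines. This sketch (hypothetical table and credentials) contrasts vulnerable string concatenation with the standard parameterized-query mitigation.

        # Classic first-order SQL injection and its standard mitigation;
        # a minimal sketch, not drawn from the paper's dataset.
        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE users (username TEXT, password TEXT)")
        conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

        attacker_input = "' OR '1'='1"

        # VULNERABLE: attacker input is spliced into the SQL text, so the
        # tautology '1'='1' bypasses the password check entirely.
        unsafe = ("SELECT * FROM users WHERE username = '" + attacker_input +
                  "' AND password = '" + attacker_input + "'")
        print(conn.execute(unsafe).fetchall())   # returns alice's row

        # SAFE: placeholders keep the input as data, never as SQL syntax.
        safe = "SELECT * FROM users WHERE username = ? AND password = ?"
        print(conn.execute(safe, (attacker_input, attacker_input)).fetchall())  # []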

    Database Auto Awesome: Enhancing Database-Centric Web Applications through Informed Code Generation

    Database Auto Awesome is an approach to enhancing web applications composed of forms used to interact with stored information. It was inspired by Google's Auto Awesome tool, which provides automatic enhancements for photos. Database Auto Awesome aims to automatically or semi-automatically improve an application by expanding its functionality and improving the existing code. This thesis describes a tool that gathers information from the application and provides details on how its parts work together; these details are what make it possible to generate new portions of the application. The enhancements are directed by the web application administrator, who specifies the functionality they would like generated. Once the administrator has provided this direction, the new application code is generated and placed in updated or new files. Using this approach, Database Auto Awesome provides a viable solution for semi-automatically generating enhancements to an existing web application.
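
    As a toy illustration of the "gather information, then generate code" idea (the thesis's actual tool and templates are not shown in this abstract, so everything below is assumed), this sketch harvests a table's columns from a live database and emits a new search form for it.

        # Toy version of schema-informed code generation: inspect a table the
        # application already uses, then emit a new HTML search form for it.
        # The real Database Auto Awesome tool works differently; this only
        # illustrates the idea.
        import sqlite3

        def table_columns(conn, table):
            """Harvest column names from the live database (the 'informed' step)."""
            return [row[1] for row in conn.execute(f"PRAGMA table_info({table})")]

        def generate_search_form(table, columns):
            """Generate a new application fragment from the harvested details."""
            fields = "\n".join(
                f'  <label>{c}: <input name="{c}"></label>' for c in columns
            )
            return (f'<form action="/search/{table}" method="get">\n'
                    f'{fields}\n  <button>Search</button>\n</form>')

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE contacts (name TEXT, email TEXT, phone TEXT)")
        print(generate_search_form("contacts", table_columns(conn, "contacts")))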

    Techniques for Ensuring Index Usage Predictability in Microsoft SQL Server

    The demand for high-performance operations on data is growing in parallel with the vast growth of data itself. The retrieval of data for analysis, the manipulation of data, and its insertion into data stores must all be performed efficiently, using techniques that ensure speed, reliability, and accuracy. This paper investigates techniques and practices that improve the performance of data retrieval using SQL and Microsoft SQL Server. Because SQL is a declarative language that specifies what should be produced as a result, rather than how to achieve that result, this paper looks at the internals of SQL Server that affect the "how" of queries and data operations, in order to propose techniques that ensure performance gains. The paper aims to shed light on the limitations and variance in index usage, and to answer the question of why indexes are sometimes used, and other times not, for the same query. To overcome these index limitations, an "index fusion" technique is proposed.
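
    One concrete, reproducible cause of such index-usage variance is sargability: wrapping the indexed column in a function hides it from the optimizer, so logically equivalent queries get different plans. The sketch below uses SQLite only so it runs anywhere; the same effect occurs in SQL Server (e.g. WHERE UPPER(status) = 'OPEN'). It illustrates the variance problem, not the paper's proposed "index fusion" technique.

        # Same logical question, two different plans: the second predicate
        # wraps the indexed column in a function, so the index is unusable.
        # SQLite stands in for SQL Server purely to keep the sketch runnable.
        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT)")
        conn.execute("CREATE INDEX ix_status ON orders(status)")

        queries = {
            "sargable":     "SELECT id FROM orders WHERE status = 'open'",
            "non-sargable": "SELECT id FROM orders WHERE upper(status) = 'OPEN'",
        }
        for label, sql in queries.items():
            plan = conn.execute("EXPLAIN QUERY PLAN " + sql).fetchone()
            print(f"{label:>13}: {plan[3]}")
        # Expected (wording varies by SQLite version):
        #   sargable      -> SEARCH ... USING COVERING INDEX ix_status
        #   non-sargable  -> SCAN orders   (the wrapped column hides the index)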

    Integrating Online Wire Transfer Fraud Data with Suspicious Wire Transfer Data Using SSIS

    The client is a banking institution that is in the process of offering online wire transfers to its customers. This project concerns how suspicious wire transfer data and confirmed wire transfer fraud data are handled. Currently, both kinds of data are entered through two different applications and stored separately. Since it is important for any business to analyze how it is performing in order to make further decisions, the data from these two applications should be integrated. This can be achieved using SSIS (SQL Server Integration Services).
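
    SSIS packages are built visually rather than as code, so the following Python sketch only makes the integration logic concrete: a left join of suspicious-transfer records against confirmed-fraud records on a shared key. In SSIS this would typically be a Lookup or Merge Join transformation; the column names here are hypothetical.

        # The integration step in miniature: enrich each suspicious transfer
        # with any confirmed-fraud record sharing its transfer id.
        import csv, io

        suspicious = """transfer_id,amount,flag_reason
        1001,9500,structuring
        1002,25000,new_payee
        """
        confirmed = """transfer_id,confirmed_date
        1002,2016-03-14
        """

        fraud_by_id = {row["transfer_id"].strip(): row
                       for row in csv.DictReader(io.StringIO(confirmed))}

        # Left join: every suspicious transfer, enriched when fraud was confirmed.
        for row in csv.DictReader(io.StringIO(suspicious)):
            match = fraud_by_id.get(row["transfer_id"].strip())
            row["confirmed_fraud"] = bool(match)
            row["confirmed_date"] = match["confirmed_date"].strip() if match else ""
            print(row)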

    The Family of MapReduce and Large Scale Data Processing Systems

    In the last two decades, the continuous increase of computational power has produced an overwhelming flow of data, which has called for a paradigm shift in computing architecture and large-scale data processing mechanisms. MapReduce is a simple and powerful programming model that enables easy development of scalable parallel applications to process vast amounts of data on large clusters of commodity machines. It isolates the application from the details of running a distributed program, such as data distribution, scheduling, and fault tolerance. However, the original implementation of the MapReduce framework had some limitations, which many research efforts have tackled in follow-up work since its introduction. This article provides a comprehensive survey of a family of approaches and mechanisms for large-scale data processing that have been implemented based on the original idea of the MapReduce framework and are currently gaining momentum in both the research and industrial communities. We also cover a set of systems that provide declarative programming interfaces on top of the MapReduce framework. In addition, we review several large-scale data processing systems that resemble some of the ideas of the MapReduce framework for different purposes and application scenarios. Finally, we discuss some future research directions for implementing the next generation of MapReduce-like solutions.
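
    The programming model reduces to two user-supplied functions; word count is the canonical example. In this minimal sketch, a real framework would shard the input, run mappers and reducers in parallel across a cluster, and handle failures, while the code below keeps only the user-visible contract.

        # MapReduce word count, single-process: only the user-visible
        # contract (map, shuffle/sort, reduce) of the distributed model.
        from itertools import groupby
        from operator import itemgetter

        def mapper(line):
            for word in line.split():
                yield (word, 1)                    # emit intermediate key/value pairs

        def reducer(word, counts):
            yield (word, sum(counts))              # fold all values for one key

        def run(lines):
            pairs = [kv for line in lines for kv in mapper(line)]
            pairs.sort(key=itemgetter(0))          # the framework's shuffle/sort phase
            for word, group in groupby(pairs, key=itemgetter(0)):
                yield from reducer(word, (count for _, count in group))

        print(dict(run(["map reduce map", "reduce map"])))  # {'map': 3, 'reduce': 2}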