
    Derivation of the required elements for a definition of the term middleware

    Thirteen contemporary definitions of middleware were analyzed. The definitions agree that software should be classified as middleware if it (1) provides transparent application-to-application interaction across the network, (2) acts as a service provider for distributed applications, and (3) provides services that are used primarily by distributed applications (e.g., RPCs, ORBs, directories, and name-resolution services). Most definitions agree that middleware is the level of software required to achieve platform, location, and network transparency. There is some discrepancy about the OSI layers at which middleware operates; the majority of definitions limit it to layers 5, 6, and 7. Additionally, almost half of the definitions do not include database transparency as something achieved by middleware, perhaps due to the ambiguous classification of ODBC and JDBC. Assuming that the number of times a service is mentioned reflects its importance, the majority of the definitions rank services associated with legal access to an application, along with valid, standardized APIs for application development, as core to the definition of middleware.
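    To make the notion of location and network transparency concrete, here is a minimal sketch using Python's standard xmlrpc module: the client invokes a remote procedure as if it were a local call, and the middleware layer hides the network transport. The port number and the toy name-resolution service are illustrative assumptions, not taken from the paper.

    ```python
    # Sketch: RPC-style middleware hides the network behind an ordinary
    # function call (location and network transparency). Standard library
    # only; the port and service contents are assumptions.
    import threading
    import time
    from xmlrpc.server import SimpleXMLRPCServer
    from xmlrpc.client import ServerProxy

    def resolve(name):
        # Toy name-resolution service of the kind the definitions mention.
        return {"inventory-service": "host-a:9000"}.get(name, "unknown")

    server = SimpleXMLRPCServer(("localhost", 8000), logRequests=False)
    server.register_function(resolve)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    time.sleep(0.5)  # give the toy server a moment to start

    # The client calls resolve() as if it were local; the RPC layer
    # marshals the request across the network.
    client = ServerProxy("http://localhost:8000")
    print(client.resolve("inventory-service"))  # -> host-a:9000
    ```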

    Use of HSM with Relational Databases

    Hierarchical storage management (HSM) systems have evolved into a critical component of large information storage operations. They are built on the concept of using a hierarchy of storage technologies to balance performance and cost. In general, they migrate data from expensive high-performance storage to inexpensive low-performance storage based on frequency of use. The predominant usage characteristic is that frequency of use declines with age, in most cases quite rapidly. The result is that HSM provides an economical means for managing and storing massive volumes of data. Inherent in HSM systems is system-managed storage, where the system performs most of the work with minimal involvement from operations personnel. This automation is generally extended to include backup and recovery, data duplexing to provide high availability, and catastrophic recovery through the use of off-site storage.
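    As a rough illustration of the migration policy described above, the following sketch moves files from a fast tier to a cheap tier once their last access time exceeds a threshold. The paths and the 90-day policy are assumptions for illustration only; they do not describe any particular HSM product.

    ```python
    # Sketch of the core HSM idea: migrate files whose last access is
    # older than a threshold from expensive storage to cheap storage.
    import os
    import shutil
    import time

    FAST_TIER = "/mnt/fast"            # expensive, high-performance tier
    CHEAP_TIER = "/mnt/archive"        # inexpensive, low-performance tier
    MAX_IDLE_SECONDS = 90 * 24 * 3600  # migrate after ~90 idle days (assumption)

    def migrate_cold_files():
        now = time.time()
        for name in os.listdir(FAST_TIER):
            path = os.path.join(FAST_TIER, name)
            if os.path.isfile(path) and now - os.path.getatime(path) > MAX_IDLE_SECONDS:
                # A real HSM would leave a stub behind so later access is
                # transparent to applications; here we simply move the file.
                shutil.move(path, os.path.join(CHEAP_TIER, name))

    migrate_cold_files()
    ```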

    The organizational preparation of existing relational databases for the integration of expert systems.

    This thesis is a management guide for strategically planning a future integration of relational databases and expert systems. It is most relevant to an organization with large, established relational databases that is trying to assess the changes required to integrate expert systems with those databases. Technical considerations for such a change are discussed, including the role of database normalization and the requirement to maintain applications that are independent of the database structure. The organizational considerations of such an integration are examined, focusing on the people skills required within an organization to develop and maintain database and expert system combinations. Three product categories are established to represent an integrated system, and a commercial off-the-shelf product from each category is reviewed to illustrate its specific capabilities. The combination of relational databases and expert systems has the potential to deliver information systems of future strategic importance. This thesis serves to assist the information systems management of military organizations in planning the transition to such a system. http://archive.org/details/organizationalpr00snow Major, U.S. Air Force. Approved for public release; distribution is unlimited.

    A Survey of Traditional and Practical Concurrency Control in Relational Database Management Systems

    Traditionally, database theory has focused on concepts such as atomicity and serializability, asserting that concurrent transaction management must enable correctness above all else. Textbooks and academic journals detail a vision of unbounded rationality, in which reduced throughput due to concurrency protocols is not of great concern. This thesis surveys the traditional basis for concurrency in relational database management systems and contrasts it with actual practice. SQL-92, the current standard for concurrency in relational database management systems, defines isolation levels (the degrees of concurrency a transaction may allow), and these are examined. Some of the ways in which DB2, a popular database, interprets these levels and finesses extra concurrency through performance enhancement are detailed. SQL-92 standardizes de facto relational database management system features. Given this, and a superabundance of articles in professional journals detailing steps for fine-tuning transaction concurrency, the prospects for performance tuning seem bright, even at the expense of serializability. Are the practical changes wrought by non-academic professionals killing traditional database concurrency ideals? Not really. Reasoned changes for performance gains advocate compromise: using complex concurrency controls when necessary for the job at hand and relaxing standards otherwise. The idea of relational database management systems is only twenty years old, and standards are still evolving. Is there still an interplay between tradition and practice? Of course. Current practice uses tradition pragmatically, not idealistically. Academic ideas help drive the systems available for use, and perhaps current practice will now help academic ideas define concurrency control concepts for relational database management systems.
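    To make the isolation levels concrete, here is a minimal sketch of selecting an SQL-92 isolation level per transaction, trading strictness for concurrency as the thesis describes. It assumes the PostgreSQL driver psycopg2 and a hypothetical accounts table; neither comes from the thesis, which discusses DB2.

    ```python
    # Sketch: per-transaction SQL-92 isolation levels via a DB-API driver.
    # Driver (psycopg2), DSN, and the accounts table are assumptions.
    import psycopg2

    conn = psycopg2.connect("dbname=test")
    cur = conn.cursor()

    # Strict: full serializability, the textbook ideal, lowest concurrency.
    cur.execute("SET TRANSACTION ISOLATION LEVEL SERIALIZABLE")
    cur.execute("SELECT balance FROM accounts WHERE id = 1")
    conn.commit()

    # Relaxed: READ COMMITTED permits more interleaving (e.g., for
    # reporting queries), the pragmatic compromise practitioners favor.
    cur.execute("SET TRANSACTION ISOLATION LEVEL READ COMMITTED")
    cur.execute("SELECT SUM(balance) FROM accounts")
    conn.commit()
    ```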

    Data warehouse stream view update with hash filter.

    A data warehouse usually contains large amounts of information representing an integration of base data from one or more external data sources over a long period of time, organized to provide fast query response times. It stores materialized views that provide aggregation (SUM, MAX, MIN, COUNT, and AVG) on measure attributes of interest to data warehouse users. The process of updating materialized views in response to modifications of the base data is called materialized view maintenance. Some data warehouse application domains, like stock markets, credit cards, automated banking, and web logs, depend on data sources updated as continuous streams of data. In particular, electronic stock trading markets such as the NASDAQ generate large volumes of data, in bursts of up to 4,200 messages per second. This thesis proposes a new view maintenance algorithm (StreamVup), which improves on semi-join methods by using hash filters. The new algorithm first reduces the number of bytes transported through the network for stream tuples, and second reduces the cost of join operations during view updates by eliminating the recomputation of view updates caused by newly arriving duplicate tuples. (Abstract shortened by UMI.) Dept. of Computer Science. Paper copy at Leddy Library: Theses & Major Papers - Basement, West Bldg. / Call Number: Thesis2003 .I85. Source: Masters Abstracts International, Volume: 42-05, page: 1753. Adviser: C. I. Ezeife. Thesis (M.Sc.)--University of Windsor (Canada), 2003.
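    The general hash-filter idea can be sketched as follows: hash the warehouse's join keys into a small bit array, ship and join only the stream tuples whose keys might match, and drop exact duplicates so no view update is recomputed for them. This illustrates the technique in general, not the published StreamVup algorithm; the bit-array size and sample tuples are assumptions.

    ```python
    # Sketch of a hash filter guarding a stream-to-warehouse join.
    M = 1 << 16                    # bit-array size (assumption)
    bits = bytearray(M // 8)

    def add(key):
        h = hash(key) % M
        bits[h // 8] |= 1 << (h % 8)

    def might_match(key):
        h = hash(key) % M
        return bits[h // 8] & (1 << (h % 8)) != 0

    warehouse_keys = ["AAPL", "MSFT", "NVDA"]  # keys present in the view
    for k in warehouse_keys:
        add(k)

    seen = set()                   # duplicate elimination
    stream = [("AAPL", 101.2), ("ZZZ", 9.9), ("AAPL", 101.2)]
    for key, price in stream:
        if (key, price) in seen or not might_match(key):
            continue               # filtered out: no join, no recomputation
        seen.add((key, price))
        print("join and update view with", key, price)
    ```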

    Data warehouse stream view update with multiple streaming.

    The main objective of data warehousing is to store information representing an integration of base data from single or multiple data sources over an extended period of time. To provide fast access to the data, regardless of the availability of the data source, data warehouses often use materialized views. Materialized views are able to provide aggregation on some attributes to help decision support systems. Updating materialized views in response to modifications in the base data is called materialized view maintenance. In some applications, for example stock market and banking systems, the source data is updated so frequently that it can be considered a continuous stream of data. Keeping the materialized view updated with respect to changes in the base tables in the traditional way causes query response times to increase. This thesis proposes a new view maintenance algorithm for multiple streams which improves on semi-join and hash-filter methods. The proposed algorithm can update a view that joins two base tables where both base tables are data streams (always changing). By using timestamps, building updategrams in parallel, and optimizing the join cost between the two data sources, it can reduce query response time significantly. Dept. of Computer Science. Paper copy at Leddy Library: Theses & Major Papers - Basement, West Bldg. / Call Number: Thesis2005 .A336. Source: Masters Abstracts International, Volume: 44-03, page: 1391. Thesis (M.Sc.)--University of Windsor (Canada), 2005.
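    A minimal sketch of the underlying idea, joining two always-changing streams with a symmetric in-memory join and timestamped tuples, is shown below. It illustrates the general technique rather than the thesis's actual updategram construction, and all names and sample tuples are assumptions.

    ```python
    # Sketch: symmetric join of two timestamped streams feeding a view.
    from collections import defaultdict

    left_index = defaultdict(list)   # key -> [(value, ts), ...]
    right_index = defaultdict(list)

    def arrive(side, key, value, ts):
        """Insert a new stream tuple and emit join results for the view."""
        own, other = (left_index, right_index) if side == "L" else (right_index, left_index)
        own[key].append((value, ts))
        for other_value, other_ts in other.get(key, []):
            # The later timestamp orders the update's application to the view.
            print("view update:", key, value, other_value, "at", max(ts, other_ts))

    arrive("L", "AAPL", 101.2, ts=1)
    arrive("R", "AAPL", "NASDAQ", ts=2)  # joins with the earlier left tuple
    ```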

    Implementation E-commerce application using Lotus Domino

    E-commerce technologies enable enterprises to exchange information instantaneously, eliminate paperwork, and advertise their products and services to a global market. The Domino Server family, an integrated messaging and Web application software platform, makes it easy to build and manage integrated, collaborative solutions. In this project, I built a basic functional Domino-powered e-commerce application named E-Bookstore. The E-Bookstore web site contains the three main components of an e-commerce web site (a catalog of items, a shopping cart, and a checkout function) and provides a powerful search function for the customer. A unique session ID is generated and stored for each E-Bookstore web user and attached to every item the user adds to the shopping cart. The application also provides back-end maintenance functions, such as adding a book category or a book entry. Compared to popular commercial software, the functions provided in E-Bookstore cover most of the useful tools. The E-Bookstore, built on the Domino Application Server R5 platform, scales appropriately as the data set grows, and the whole system environment is secure.
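    A minimal sketch of the session mechanism described above, in plain Python rather than Domino (whose actual APIs are not reproduced here); the function names and sample book ID are illustrative assumptions.

    ```python
    # Sketch: a unique session ID per visitor, attached to cart items.
    import secrets

    sessions = {}                      # session_id -> list of cart items

    def new_session():
        sid = secrets.token_hex(16)    # unique, unguessable session ID
        sessions[sid] = []
        return sid

    def add_to_cart(sid, book_id):
        sessions[sid].append(book_id)  # every item is tagged by session

    sid = new_session()
    add_to_cart(sid, "ISBN-0-13-110362-8")
    print(sid, sessions[sid])
    ```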

    Tutorial "Distributed Objects, Interoperability, CORBA"

    Invited talk. A first revolution, that of client/server architectures, put an end to monolithic applications on centralized machines: it gave rise to applications fragmented across clients and servers. A second revolution, that of distributed objects, introduces, in a sense, client/server within client/server architectures. Indeed, on both the client side and the server side, objects fragment an application into components that can interoperate and cooperate in a distributed context. Moreover, the emergence of industry standards such as CORBA and OLE/COM facilitates the development of applications conforming to these new architectures. In this tutorial, we show the contribution of distributed objects to interoperability between possibly heterogeneous systems, networks, languages, and tools, and in particular their contribution relative to client/server environments based on "traditional" data servers or transaction monitors. We survey the philosophy and the services offered by the standards. Finally, we explore some fundamental problems (such as type management) that are not yet fully resolved in this area.
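    As a conceptual sketch of the ORB pattern the tutorial discusses, the following plain-Python toy shows a naming service binding names to object references that clients resolve and invoke without knowing where the object lives. A real system would use CORBA through an actual ORB product; all class and binding names here are illustrative assumptions.

    ```python
    # Sketch: the broker/naming-service pattern behind CORBA-style ORBs.
    class NamingService:
        def __init__(self):
            self._registry = {}

        def bind(self, name, obj):
            self._registry[name] = obj

        def resolve(self, name):
            return self._registry[name]

    class AccountServant:
        """A server-side object (a 'servant' in CORBA terms)."""
        def balance(self):
            return 42.0

    orb = NamingService()
    orb.bind("Bank/Account", AccountServant())

    # Client side: resolve by name, then invoke as an ordinary object.
    account = orb.resolve("Bank/Account")
    print(account.balance())
    ```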

    COSPO/CENDI Industry Day Conference

    The conference's objective was to provide a forum where government information managers and industry information technology experts could have an open exchange, discuss their respective needs, and compare them to available, or soon-to-be-available, solutions. Technical summaries and points of contact are provided for the following sessions: secure products, protocols, and encryption; information providers; electronic document management and publishing; information indexing, discovery, and retrieval (IIDR); automated language translators; IIDR - natural language capabilities; IIDR - advanced technologies; IIDR - distributed heterogeneous and large database support; and communications - speed, bandwidth, and wireless.