94 research outputs found

    Data Warehouse Technology and Application in Data Centre Design for E-government

    Get PDF

    2003 Report on Indiana University Accomplishments supported by Shared University Research Grants from IBM, Inc.

    Get PDF
    Indiana University and IBM, Inc. have a very strong history of collaborative research, aided significantly by Shared University Research (SUR) grants from IBM to Indiana University. The purpose of this document is to review progress against recent SUR grants to Indiana University. These grants focus on the joint interests of IBM, Inc. and Indiana University in the areas of deep computing, grid computing, and especially computing for the life sciences. SUR funding and significant funding from other sources, including a $1.8M grant from the NSF and a portion of a $105M grant to Indiana University to create the Indiana Genomics Initiative, have enabled Indiana University to achieve a suite of accomplishments that exceed the ambitious goals set out in these recent SUR grants.

    Analysis of computer services industry

    Full text link
    http://deepblue.lib.umich.edu/bitstream/2027.42/96905/1/MBA_KhuranaW_2000Final.pd

    Grid Databases for Shared Image Analysis in the MammoGrid Project

    Full text link
    The MammoGrid project aims to prove that Grid infrastructures can be used for collaborative clinical analysis of database-resident but geographically distributed medical images. This requires: a) the provision of a clinician-facing front-end workstation and b) the ability to service real-world clinician queries across a distributed and federated database. The MammoGrid project will prove the viability of the Grid by harnessing its power to enable radiologists from geographically dispersed hospitals to share standardized mammograms, to compare diagnoses (with and without computer aided detection of tumours) and to perform sophisticated epidemiological studies across national boundaries. This paper outlines the approach taken in MammoGrid to seamlessly connect radiologist workstations across a Grid using an "information infrastructure" and a DICOM-compliant object model residing in multiple distributed data stores in Italy and the UK. Comment: 10 pages, 5 figures.

    Survey of Autonomic Computing and Experiments on JMX-based Autonomic Features

    Get PDF
    Autonomic Computing (AC) aims to solve the problem of managing the rapidly growing complexity of Information Technology systems by creating self-managing systems. In this thesis, we survey the progress of the AC field and study the requirements, models, and architectures of AC. The commonly recognized AC requirements are four properties - self-configuring, self-healing, self-optimizing, and self-protecting. The recommended software architecture is the MAPE-K model, containing four modules - monitor, analyze, plan, and execute - as well as the knowledge repository. In the modern software marketplace, Java Management Extensions (JMX) has facilitated one of the AC requirements: monitoring. Using JMX, we implemented a package that assists programming for AC features, including socket management, logging, and recovery of distributed computation. In the experiments, we not only demonstrated powerful Java capabilities that are unfamiliar to many educators, but also illustrated the feasibility of learning AC in senior computer science courses.
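
The "monitor" step of the MAPE-K loop that the abstract attributes to JMX can be sketched with a minimal standard MBean. The `TaskStats` metric and object name below are invented for illustration; this is not the thesis's actual package.

```java
// Minimal sketch of JMX-based monitoring (the MAPE-K "monitor" module),
// using a hypothetical TaskStats metric registered with the platform MBeanServer.
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class JmxMonitorSketch {
    // Standard MBean contract: the interface name must be <ClassName>MBean.
    public interface TaskStatsMBean {
        int getCompletedTasks();
    }

    public static class TaskStats implements TaskStatsMBean {
        private int completed;
        public synchronized void taskDone() { completed++; }
        public synchronized int getCompletedTasks() { return completed; }
    }

    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        TaskStats stats = new TaskStats();
        ObjectName name = new ObjectName("demo:type=TaskStats");
        server.registerMBean(stats, name);

        stats.taskDone();
        stats.taskDone();

        // An autonomic monitor module would poll managed attributes like this:
        Object completed = server.getAttribute(name, "CompletedTasks");
        System.out.println("CompletedTasks = " + completed);
    }
}
```

The same attribute is also visible from external tools such as JConsole, which is what makes JMX a natural fit for the monitoring requirement.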

    Integrated software architecture to support modern experimental biology

    Get PDF
    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2004. Includes bibliographical references (p. 127-132). Over the past several years, the explosive growth of biological data generated by new high-throughput instruments has virtually begun to drown the biological community. There is no established infrastructure to deal with these data in a consistent and successful fashion. This thesis presents a new informatics platform capable of supporting a large subsection of the experimental methods found in modern biology. A consistent data definition strategy is outlined that can handle gel electrophoresis, microarray, fluorescence activated cell sorting, mass spectrometry, and microscopy within a single coherent set of information object definitions. A key issue for interoperability is that common attributes are made truly identical between the different methods. This dramatically decreases the overhead of separate and distinct classes for each method, and reserves uniqueness for attributes that differ between the methods. Thus, at least one higher level of integration is obtained. The thesis shows that rich object-oriented modeling, together with object-relational database features and the uniform treatment of data and metadata, is an ideal candidate for complex experimental information integration tasks. This claim is substantiated by elaborating on the coherent set of information object definitions and testing the corresponding database using real experimental data. A first implementation of this work--ExperiBase--is an integrated software platform to store and query data generated by the leading experimental protocols used in biology within a single database. It provides: comprehensive database features for searching and classifying; web-based client interfaces; web services; data import and export capabilities to accommodate other data repositories; and direct support for metadata produced by analysis programs. Using JDBC, Java Servlets and Java Server Pages, SOAP, XML, and IIOP/CORBA technologies, the information architecture is portable and platform independent. The thesis develops an ExperiBase XML according to the single coherent set of information object definitions, and also presents a new approach to database federation: translating heterogeneous database schemas into the common ExperiBase XML schema and then merging the output XML messages to federate the data. ExperiBase has become a reference implementation of the I3C Life Science Object Ontologies group. By Shixin Zhang, Ph.D.
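
The federation step described above - translating each store's output into a common XML schema and then merging the messages - can be sketched with the JDK's DOM API. The element names (`<experibase>`, `<record>`) and the merge-by-root-children rule are invented for illustration, not taken from ExperiBase itself.

```java
// Hedged sketch of XML-message federation: several per-store messages, already
// translated into a shared (hypothetical) schema, are merged under one root.
import java.io.StringReader;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.Node;
import org.xml.sax.InputSource;

public class FederationSketch {
    // Merge the child elements of several per-store XML messages under one root.
    static Document merge(String... messages) throws Exception {
        DocumentBuilder b = DocumentBuilderFactory.newInstance().newDocumentBuilder();
        Document merged = b.newDocument();
        Element root = merged.createElement("experibase");
        merged.appendChild(root);
        for (String msg : messages) {
            Document part = b.parse(new InputSource(new StringReader(msg)));
            for (Node child = part.getDocumentElement().getFirstChild();
                 child != null; child = child.getNextSibling()) {
                root.appendChild(merged.importNode(child, true)); // deep copy
            }
        }
        return merged;
    }

    public static void main(String[] args) throws Exception {
        Document d = merge("<experibase><record id=\"gel-1\"/></experibase>",
                           "<experibase><record id=\"ms-7\"/></experibase>");
        System.out.println(d.getDocumentElement().getChildNodes().getLength());
    }
}
```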

    Developing a Filter Kit System Database: Procedure and Analysis

    Get PDF
    A thesis presented to the faculty of the College of Science and Technology at Morehead State University in partial fulfillment of the requirements for the Degree Master of Science by He Shi on April 30, 2010

    Implementation Strategies for a Graduate eCommerce Curriculum

    Get PDF
    This paper examines the strategies used in the implementation of DePaul University's pioneering master's degree in E-Commerce Technology. These strategies emphasize curriculum development, technical support, faculty staffing, marketing, industry partnership, and organizational support. The lessons learned from DePaul's implementation experience during this first year will offer other schools unique insights for introducing their own e-commerce curricula.

    Hybrid Database for XML Resource Management

    Get PDF
    Although XML has been used in software applications for a considerable time, managing XML files is not a common skill in backend software design, primarily because JSON has become the more prevalent format and is supported by numerous SQL and NoSQL databases. In this thesis, we delve into the fundamentals and implementation of a web application that utilizes a hybrid database, with the goal of determining whether it is suitable for managing XML resources. Upon closer examination of the existing architecture, the client discovered a problem with upgrading their project. Further investigation revealed that the current approach of storing XML files in a single folder had serious flaws that could cause issues. As a result, a decision was made to revamp the entire web application, with a hybrid database chosen as the preferred solution because of the application's XML storage concept. It is worth noting that there exists a type of database specifically designed for XML resources, known as native XML databases. However, the development team thoroughly reviewed all the requirements provided by the product owner, Niko Siltala, and assessed the compatibility of both native XML and hybrid databases for the new application. Based on our analysis, the hybrid database was concluded to be the most suitable option for the project. The changes were successfully designed and implemented, and the development team determined that hybrid databases are a viable option for managing a significant number of XML file dependencies; no significant obstacles were encountered that would hinder the use of this type of database. The observed advantages of hybrid databases include streamlined XML file storage, the ability to mix XPath/XQuery into SQL queries, and simplified codebases.
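
The "XPath mixed into SQL" capability mentioned above refers to queries a hybrid database can evaluate inside SQL (for example, PostgreSQL's `xpath()` function or the SQL/XML `XMLTABLE` construct). The same kind of path lookup can be illustrated standalone with the JDK's built-in XPath API; the sample XML and path below are invented for illustration.

```java
// Hedged sketch: evaluate the kind of XPath expression a hybrid database would
// embed in a SQL query, here run over an in-memory document with the JDK API.
import java.io.StringReader;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.xml.sax.InputSource;

public class XPathSketch {
    // Evaluate an XPath expression against an XML string and return the result.
    static String lookup(String xml, String path) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new InputSource(new StringReader(xml)));
        return XPathFactory.newInstance().newXPath().evaluate(path, doc);
    }

    public static void main(String[] args) throws Exception {
        String xml = "<resources><resource id=\"r1\"><name>schema.xsd</name></resource></resources>";
        System.out.println(lookup(xml, "/resources/resource[@id='r1']/name/text()"));
    }
}
```

In a hybrid database the document would live in an XML-typed column and the path would appear inside the SQL statement; the path syntax itself is the same.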