84 research outputs found

    Digital Image Access & Retrieval

    The 33rd Annual Clinic on Library Applications of Data Processing, held at the University of Illinois at Urbana-Champaign in March 1996, addressed the theme of "Digital Image Access & Retrieval." The papers from this conference cover a wide range of topics concerning digital imaging technology for visual resource collections. Papers covered three general areas: (1) systems, planning, and implementation; (2) automatic and semi-automatic indexing; and (3) preservation, with the bulk of the conference focusing on indexing and retrieval.

    Special Libraries, Fall 1989

    Volume 80, Issue 4

    Technology 2003: The Fourth National Technology Transfer Conference and Exposition, volume 2

    Proceedings from symposia of the Technology 2003 Conference and Exposition, Dec. 7-9, 1993, Anaheim, CA, are presented. Volume 2 features papers on artificial intelligence, CAD&E, computer hardware, computer software, information management, photonics, robotics, test and measurement, video and imaging, and virtual reality/simulation.

    Knowledge capturing and usage of evolving cloud application topologies

    In the last few years, the advent of cloud computing has contributed to a considerable increase in the number of service offerings from a variety of providers, and more and more applications are being partially or fully deployed in the cloud. Nowadays, experts invest considerable time and effort in taking maximum advantage of the benefits that cloud technologies bring. Such a wide spectrum of cloud offerings, however, increases the number of complex tasks and aspects that must be taken into account when distributing the components of an application, such as quality of service, adaptation to market changes, etc. Furthermore, the distribution of the components and the configuration of cloud resources may evolve over time, and there is a lack of tooling support to assist developers in the decision-making task of selecting an appropriate application distribution topology. One of the methods that human beings use to solve problems consists of recalling past experiences and how they were solved at the time. Under this and other considerations, the Case-Based Reasoning paradigm was conceived as a mechanism for solving tasks by recalling past similar problems and adapting their solutions to new situations. This work aims to develop the concepts and mechanisms that enable capturing and using the knowledge that the evolution of a system brings along. Specifically, this thesis attempts to identify a set of characteristics that accurately describe a cloud application and to define the models that should be used to identify cases solved in the past whose solutions may be useful for solving a new problem. To achieve this, the Case-Based Reasoning with Similarity Retrieval approach is employed to identify applications with similar characteristics, retrieve their solutions, and offer means to refine them in order to obtain a distribution topology that fulfills the requirements of a given application. Furthermore, a prototypical implementation of this approach is presented and used to validate the concepts and principles that this work follows.
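
    A minimal sketch of the similarity-retrieval step described above, assuming a weighted feature-vector representation of application characteristics (all identifiers, feature names, and weights below are invented for illustration and are not taken from the thesis): each past case stores the characteristics of a cloud application together with the topology chosen for it, and a new problem is matched against the case base to retrieve the closest solutions for adaptation.

    # Hypothetical sketch of Case-Based Reasoning similarity retrieval for
    # cloud application topologies; feature names and weights are illustrative.
    from dataclasses import dataclass

    @dataclass
    class Case:
        name: str
        features: dict      # e.g. {"workload": 0.8, "latency_sensitivity": 0.3, ...}
        topology: str       # the distribution topology chosen for this past case

    def similarity(a: dict, b: dict, weights: dict) -> float:
        """Weighted inverse-distance similarity over shared numeric features."""
        keys = set(a) & set(b)
        if not keys:
            return 0.0
        dist = sum(weights.get(k, 1.0) * abs(a[k] - b[k]) for k in keys)
        norm = sum(weights.get(k, 1.0) for k in keys)
        return 1.0 - dist / norm        # features assumed normalised to [0, 1]

    def retrieve(case_base, query: dict, weights: dict, k: int = 3):
        """Return the k most similar past cases; their topologies are then adapted."""
        ranked = sorted(case_base,
                        key=lambda c: similarity(c.features, query, weights),
                        reverse=True)
        return ranked[:k]

    case_base = [
        Case("web-shop", {"workload": 0.9, "latency_sensitivity": 0.8}, "multi-region IaaS"),
        Case("batch-analytics", {"workload": 0.6, "latency_sensitivity": 0.1}, "single-region PaaS"),
    ]
    print(retrieve(case_base, {"workload": 0.85, "latency_sensitivity": 0.7},
                   {"latency_sensitivity": 2.0}))

    In the thesis the retrieved topologies would then be refined against the new application's requirements; this toy version stops at retrieval.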

    The Third NASA Goddard Conference on Mass Storage Systems and Technologies

    This report contains copies of nearly all of the technical papers and viewgraphs presented at the Goddard Conference on Mass Storage Systems and Technologies held in October 1993. The conference served as an informational exchange forum for topics primarily relating to the ingestion and management of massive amounts of data and the attendant problems involved. Discussion topics include the necessary use of computers in the solution of today's infinitely complex problems, the need for greatly increased storage densities in both optical and magnetic recording media, currently popular storage media and magnetic media storage risk factors, and data archiving standards, including a talk on the current status of the IEEE Storage Systems Reference Model (RM). Additional topics addressed system performance, data storage system concepts, communications technologies, data distribution systems, data compression, and error detection and correction.

    Analyzing epigenomic data in a large-scale context

    While large amounts of epigenomic data are publicly available, their retrieval in a form suitable for downstream analysis is a bottleneck in current research. In a typical analysis, users are required to download huge files that span the entire genome, even if they are only interested in a small subset (e.g., promoter regions) or an aggregation thereof. Moreover, complex operations on genome-level data are not always feasible on a local computer due to resource limitations. The DeepBlue Epigenomic Data Server mitigates this issue by providing a robust server that affords a powerful API for searching, filtering, transforming, aggregating, enriching, and downloading data from several epigenomic consortia. Furthermore, its main component implements data storage and manipulation methods that scale with the increasing amount of epigenetic data, making it an ideal resource for researchers who seek to integrate epigenomic data into their analysis workflows. This work also presents companion tools that use the DeepBlue API to let users who are not proficient in scripting or programming languages analyze epigenomic data in a user-friendly way: (i) an R/Bioconductor package that integrates DeepBlue into the R analysis workflow; the extracted data are automatically converted into suitable R data structures for downstream analysis and visualization within the Bioconductor framework; (ii) a web portal that enables users to search, select, filter, and download the epigenomic data available in the DeepBlue Server; this interface provides elements such as data tables, grids, and data selections that help users find the required epigenomic data in a straightforward interface; (iii) DIVE, a web data analysis tool that allows researchers to perform large-scale epigenomic data analysis in a programming-free environment. DIVE enables users to compare their datasets to the datasets available in the DeepBlue Server in an intuitive interface that summarizes the comparison of hundreds of datasets in a simple chart. Given the large amount of data available in DIVE, methods are also provided to suggest the most similar datasets for a comparative analysis. Furthermore, these tools are integrated and can share results among themselves, creating a powerful large-scale epigenomic data analysis environment. The DeepBlue Epigenomic Data Server and its ecosystem were well received by the International Human Epigenome Consortium and have already attracted much attention from the epigenomic research community, with currently 160 registered users and more than three million anonymous workflow processing requests since its release.
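
    The server-side select/filter/aggregate workflow described above can be illustrated with a small, self-contained stand-in (this is not the DeepBlue API; the RegionStore class and its methods are invented for this sketch): the data are narrowed on the server side and only the aggregated result is downloaded instead of a genome-wide file.

    # Toy illustration of the select -> filter -> aggregate workflow described
    # in the abstract; this is NOT the DeepBlue API, just a stand-in.
    from collections import defaultdict

    class RegionStore:
        """Holds BED-like records: (chromosome, start, end, signal)."""
        def __init__(self, records):
            self.records = list(records)

        def select(self, chromosome):
            return RegionStore(r for r in self.records if r[0] == chromosome)

        def filter(self, min_signal):
            return RegionStore(r for r in self.records if r[3] >= min_signal)

        def aggregate_mean(self):
            """Mean signal per chromosome -- the kind of small summary a client
            would download instead of the full genome-wide file."""
            sums, counts = defaultdict(float), defaultdict(int)
            for chrom, _, _, signal in self.records:
                sums[chrom] += signal
                counts[chrom] += 1
            return {c: sums[c] / counts[c] for c in sums}

    store = RegionStore([("chr1", 100, 200, 3.25),
                         ("chr1", 500, 650, 7.75),
                         ("chr2", 10, 90, 1.4)])
    print(store.select("chr1").filter(min_signal=2.0).aggregate_mean())   # {'chr1': 5.5}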

    The Pilot Land Data System: Report of the Program Planning Workshops

    An advisory report to be used by NASA in developing a program plan for a Pilot Land Data System (PLDS) was developed. The purpose of the PLDS is to improve the ability of NASA and NASA-sponsored researchers to conduct land-related research. The goal of the planning workshops was to provide and coordinate planning and concept development between the land-related science and computer science disciplines, and to discuss the architecture of the PLDS, requirements for information science technology, and system evaluation. The findings and recommendations of the Working Group are presented. The pilot program establishes a limited-scale distributed information system to explore scientific, technical, and management approaches to satisfying the needs of the land science community. The PLDS paves the way for a land data system that improves data access, processing, transfer, and analysis, so that land sciences information synthesis can occur on a scale not previously permitted because of limits to data assembly and access.

    A Fortran Kernel Generation Framework for Scientific Legacy Code

    Quality assurance procedures are very important in software development. The complexity of modules and structure in software impedes the testing procedure and further development. For complex and poorly designed scientific software, module developers and software testers need to put in a lot of extra effort to monitor the impacts of unrelated modules and to test the whole system's constraints. In addition, widely used benchmarks cannot help programmers with accurate, program-specific system performance evaluation. In this situation, generated kernels can provide considerable insight for performance tuning. Therefore, in order to greatly improve the productivity of various scientific software engineering tasks such as performance tuning, debugging, and verification of simulation results, we developed an automatic compute kernel extraction prototype platform for complex legacy scientific code. In addition, considering that scientific research and experiments require long-running simulations and large data transfers, we apply message-passing-based parallelization and I/O behavior optimization to substantially improve the performance of the kernel extractor framework, and then use profiling tools to guide the parallel distribution. Abnormal event detection is another important aspect of scientific research; with huge observational datasets combined with simulation results, it becomes not only essential but also extremely difficult. In this dissertation, in order to detect both high-frequency and low-frequency events, we reconfigured this framework with an in-situ data transfer infrastructure. By combining signal-processing preprocessing (decimation) with a machine learning detection model trained on the stream data, our framework can significantly decrease the amount of data that must be transferred for concurrent data analysis (between distributed computing CPU/GPU nodes). Finally, the dissertation presents the implementation of the framework and a case study of the ACME Land Model (ALM) for demonstration. It turns out that the generated compute kernels, at lower cost, can be used in performance tuning experiments and quality assurance, which includes debugging legacy code, verifying simulation results through single-point and multi-point variable tracking, collaborating with compiler vendors, and generating custom benchmark tests.
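
    The data-reduction idea described above, decimating a stream before running a detection model so that less data has to cross the network for concurrent analysis, can be sketched as follows; the decimation factor, window size, and the simple z-score detector are illustrative placeholders, not the dissertation's actual in-situ pipeline.

    # Illustrative sketch: decimate a simulated stream, then flag anomalous windows.
    # Factors, thresholds and the z-score detector are placeholders, not the ALM pipeline.
    import numpy as np
    from scipy.signal import decimate

    rng = np.random.default_rng(0)
    stream = rng.normal(0.0, 1.0, 100_000)
    stream[42_000:42_500] += 6.0            # inject a sustained burst to be detected

    reduced = decimate(stream, q=10)        # ~10x less data to transfer downstream

    def zscore_events(signal, window=100, threshold=4.0):
        """Return start indices of windows whose mean deviates strongly from the global mean."""
        mu, sigma = signal.mean(), signal.std()
        events = []
        for i in range(0, len(signal) - window, window):
            w = signal[i:i + window]
            if abs(w.mean() - mu) / (sigma / np.sqrt(window)) > threshold:
                events.append(i)
        return events

    print("transferred samples:", reduced.size,
          "suspicious windows:", zscore_events(reduced))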

    Execution Performance Issues in Full-Text Information Retrieval

    The task of an information retrieval system is to identify documents that will satisfy a user’s information need. Effective fulfillment of this task has long been an active area of research, leading to sophisticated retrieval models for representing information content in documents and queries and measuring similarity between the two. The maturity and proven effectiveness of these systems has resulted in demand for increased capacity, performance, scalability, and functionality, especially as information retrieval is integrated into more traditional database management environments. In this dissertation we explore a number of functionality and performance issues in information retrieval. First, we consider creation and modification of the document collection, concentrating on management of the inverted file index. An inverted file architecture based on a persistent object store is described and experimental results are presented for inverted file creation and modification. Our architecture provides performance that scales well with document collection size and the database features supported by the persistent object store provide many solutions to issues that arise during integration of information retrieval into more general database environments. We then turn to query evaluation speed and introduce a new optimization technique for statistical ranking retrieval systems that support structured queries. Experimental results from a variety of query sets show that execution time can be reduced by more than 50% with no noticeable impact on retrieval effectiveness, making these more complex retrieval models attractive alternatives for environments that demand high performance
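
    As a generic illustration of inverted-file scoring with a crude upper-bound pruning step (not the specific optimization evaluated in the dissertation; the toy collection and tf-idf weighting are assumptions), the sketch below stores each term's maximum term frequency at indexing time and skips any term whose largest possible contribution falls below a threshold.

    # Generic illustration of inverted-file scoring with max-score style pruning;
    # this is not the specific optimization the dissertation evaluates.
    import math
    from collections import defaultdict

    docs = {
        1: "information retrieval systems rank documents",
        2: "database systems manage structured data",
        3: "ranking models score documents for retrieval",
    }

    index = defaultdict(list)   # term -> [(doc_id, term_frequency), ...]
    max_tf = {}                 # term -> largest tf, recorded at indexing time
    for doc_id, text in docs.items():
        counts = defaultdict(int)
        for term in text.split():
            counts[term] += 1
        for term, tf in counts.items():
            index[term].append((doc_id, tf))
            max_tf[term] = max(max_tf.get(term, 0), tf)

    def score(query_terms, prune_below=0.0):
        """Term-at-a-time tf-idf scoring; a term whose largest possible contribution
        (idf * max_tf) falls below `prune_below` is skipped entirely."""
        n_docs = len(docs)
        scores = defaultdict(float)
        for term in query_terms:
            postings = index.get(term)
            if not postings:
                continue
            idf = math.log(n_docs / len(postings))
            if idf * max_tf[term] < prune_below:    # crude upper-bound pruning
                continue
            for doc_id, tf in postings:
                scores[doc_id] += idf * tf
        return sorted(scores.items(), key=lambda s: s[1], reverse=True)

    print(score(["retrieval", "documents", "systems"], prune_below=0.1))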

    A Tight Coupling Context-Based Framework for Dataset Discovery

    Discovering datasets relevant to research goals is at the core of many analysis tasks that aim to prove proposed hypotheses and theories. In particular, researchers in the Artificial Intelligence (AI) and Machine Learning (ML) domains, where relevant datasets are essential for precise predictions, have identified how the absence of methods to discover quality datasets leads to delays and, in many cases, failure of ML projects. Many research reports have pointed out the absence of dataset discovery methods that fill the gap between analysis requirements and available datasets, and have given statistics showing how this hinders the analysis process, with completion rates of less than 2%. To the best of our knowledge, removing the above inadequacies remains “an open problem of great importance”. It is in this context that the thesis contributes a context-based framework that tightly couples dataset providers and data analytics teams. Through this framework, dataset providers publish metadata descriptions of their datasets, and analysts formulate and submit rich queries with goal specifications and quality requirements. The dataset search engine component couples the query specification with the dataset metadata specifications through formal contextualized semantic matching and quality-based ranking, and discovers all datasets that are relevant to the analyst's requirements. The thesis gives a proof-of-concept prototype implementation and reports on its performance and efficiency through a case study.
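
    A schematic sketch of the matching-and-ranking step described above, with invented metadata fields, query attributes, and a toy quality score (none of which reflect the thesis prototype's actual schema): providers publish metadata, the analyst's query carries a goal specification and quality requirements, and matching datasets are ranked by a weighted quality score.

    # Toy sketch of matching analyst requirements against published dataset metadata
    # and ranking matches by a quality score; all field names are invented.
    datasets = [
        {"name": "clinical-trials-2021", "domain": "healthcare", "format": "csv",
         "completeness": 0.95, "freshness": 0.6},
        {"name": "icu-vitals", "domain": "healthcare", "format": "parquet",
         "completeness": 0.80, "freshness": 0.9},
        {"name": "retail-baskets", "domain": "retail", "format": "csv",
         "completeness": 0.99, "freshness": 0.7},
    ]

    query = {"domain": "healthcare",                    # goal specification
             "min_completeness": 0.75,                  # quality requirement
             "quality_weights": {"completeness": 0.5, "freshness": 0.5}}

    def matches(ds, q):
        """Contextual match: same domain and minimum completeness satisfied."""
        return ds["domain"] == q["domain"] and ds["completeness"] >= q["min_completeness"]

    def quality(ds, weights):
        """Weighted quality score used to rank the matching datasets."""
        return sum(w * ds[attr] for attr, w in weights.items())

    ranked = sorted((ds for ds in datasets if matches(ds, query)),
                    key=lambda ds: quality(ds, query["quality_weights"]), reverse=True)
    for ds in ranked:
        print(ds["name"], round(quality(ds, query["quality_weights"]), 3))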