Analysis domain model for shared virtual environments
The field of shared virtual environments, which also
encompasses online games and social 3D environments, has a
system landscape consisting of multiple solutions that share great functional overlap. However, there is little interoperability between the different solutions. A shared virtual environment has an associated problem domain that is highly complex, raising difficult challenges for the development process, starting with the architectural design of the underlying system. This paper has two main contributions. The first is a broad domain analysis of shared virtual environments, which enables developers to better understand the whole rather than only its parts. The second is a reference domain model for discussing and describing solutions: the Analysis Domain Model.
Behavioral patterns of individuals and groups during co-located collaboration on large, high-resolution displays
Collaboration among multiple users on large screens leads to complicated behavior patterns and group dynamics. To gain a deeper understanding of collaboration on vertical, large, high-resolution screens, this dissertation builds on previous research and contributes novel insights from new observational studies. Among other things, the collected results reveal new patterns of collaborative coupling, suggest that territorial behavior is less critical than shown in previous research, and demonstrate that workspace awareness can also negatively affect the effectiveness of individual users.
NSSDC Conference on Mass Storage Systems and Technologies for Space and Earth Science Applications, volume 1
Papers and viewgraphs from the conference are presented. This conference served as a broad forum for the discussion of a number of important issues in the field of mass storage systems. Topics include magnetic disk and tape technologies, optical disks and tape, software storage and file management systems, and experiences with the use of a large, distributed storage system. The technical presentations describe, among other things, integrated mass storage systems that are expected to be available commercially. Also included is a series of presentations from Federal Government organizations and research institutions covering their mass storage requirements for the 1990s.
Heterogeneous Architectures For Parallel Acceleration
To enable a new generation of digital computing applications, the greatest challenge is to provide a better level of energy efficiency (understood as the performance a system can deliver within a given power budget) without giving up the system's flexibility.
This constraint applies to digital systems across all scales, from ultra-low-power implanted devices up to datacenters for high-performance computing and the "cloud".
In this thesis, we show that architectural heterogeneity is the key to providing this efficiency and to responding to many of the challenges of tomorrow's computer architecture - and at the same time we show methodologies to introduce it with little or no loss of flexibility.
In particular, we show that heterogeneity can be employed to tackle the "walls" that impede further development of new computing applications: the utilization wall, i.e. the impossibility of keeping all transistors on in deeply integrated chips, and the "data deluge", i.e. the amount of data to be processed, which is scaling up much faster than computing performance and efficiency.
We introduce a methodology to improve heterogeneous design exploration of tightly coupled clusters; moreover, we propose a fractal heterogeneity architecture that is a parallel accelerator for low-power sensor nodes and is itself internally heterogeneous thanks to a heterogeneous coprocessor for brain-inspired computing.
This platform, which is silicon-proven, can deliver more than a 100x improvement in energy efficiency with respect to typical computing nodes used in the same domain, enabling the application of complex algorithms vastly more performance-hungry than the current state of the art in the ULP computing domain.
Building a Simple Smart Factory
This thesis describes (a) the search for and findings on smart factories and their enabling technologies, (b) a methodology to build or retrofit a smart factory, and (c) the building and operation of a simple smart factory using that methodology. A factory is an industrial site with large buildings and a collection of machines, operated by people to manufacture goods and services. These factories are made smart by incorporating sensing, processing, and autonomous responding capabilities.
Developments in four main areas (a) sensor capabilities, (b) communication capabilities, (c) storing and processing huge amounts of data, and (d) better utilization of technology in management and further development have contributed significantly to this incorporation of smartness into factories. There is a flurry of literature on each of the above four topics and their combinations. The findings from the literature can be summarized in the following way. Sensors detect or measure a physical property and record, indicate, or otherwise respond to it. In real time, they can make a very large number of observations. The Internet is a global computer network providing a variety of information and communication facilities, and the Internet of Things (IoT) is the interconnection via the Internet of computing devices embedded in everyday objects, enabling them to send and receive data. Big data handling and the provision of data services are achieved through cloud computing. Due to the availability of computing power, big data can be handled and analyzed under different classifications using several different analytics. The results from these analytics can be used to trigger autonomous responsive actions that make the factory smart.
Having thus reviewed the literature, a seven-step methodology for building or retrofitting a smart factory was established. The seven steps are (a) situation analysis, where the condition of the current technology is studied, (b) breakdown prevention analysis, (c) sensor selection, (d) data transmission and storage selection, (e) data processing and analytics, (f) autonomous action network, and (g) integration with the plant units.
Experience in a cement factory highlighted that wear in a journal bearing causes plant stoppages and thus warrants a smart system to monitor it and make decisions. This experience was used to develop a laboratory-scale smart factory monitoring the wear of a half-journal bearing. To mimic a plant unit, a load-carrying shaft supported by two half-journal bearings was chosen, and to mimic a factory with two plant units, two such shafts were chosen. Thus, there were four half-journal bearings to monitor. A USB Logitech C920 webcam operating at full-HD 1080p resolution was used to take pictures at specified intervals. These pictures were then analyzed to study the wear at those intervals. After the preliminary analysis, wear-versus-time data for all four bearings are available. Now the "making smart" activity begins.
Autonomous activities are based on various analyses. The wear-time data are analyzed under different classifications. Remaining life, a wear coefficient specific to each bearing, weekly variation in wear, and the condition of adjacent bearings are some of the characteristics that can be obtained from the analytics. These can then be used to send a message to the maintenance and supplies division, alerting them to the need for a replacement in the near future. They can also be alerted about other bearings approaching the end of their life, so that a major overhaul can be planned if needed
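The remaining-life analytic described above can be sketched as a simple linear extrapolation of the wear-versus-time data to a wear limit. The function below is a minimal illustration, not the thesis's actual implementation; the wear limit, sampling interval, and readings are invented for the example.

```python
# Hypothetical sketch: estimate remaining bearing life by fitting a
# least-squares line to wear-versus-time readings and extrapolating to
# an assumed wear limit. All numbers here are illustrative.

def remaining_life(times_h, wear_mm, wear_limit_mm):
    """Hours of life left before wear reaches wear_limit_mm."""
    n = len(times_h)
    mean_t = sum(times_h) / n
    mean_w = sum(wear_mm) / n
    # Least-squares slope = wear rate in mm per hour.
    rate = sum((t - mean_t) * (w - mean_w)
               for t, w in zip(times_h, wear_mm)) \
        / sum((t - mean_t) ** 2 for t in times_h)
    intercept = mean_w - rate * mean_t
    t_limit = (wear_limit_mm - intercept) / rate  # hour at which limit is hit
    return t_limit - times_h[-1]

# Weekly readings for one half-journal bearing (168 h per week).
hours = [0, 168, 336, 504]
wear = [0.00, 0.02, 0.05, 0.07]
hours_left = remaining_life(hours, wear, wear_limit_mm=0.25)
if hours_left < 2 * 168:  # less than two weeks left: trigger the alert
    print("alert maintenance: order replacement bearing")
```

The same fitted slope doubles as the bearing-specific wear coefficient, and running the fit per bearing lets the condition of adjacent bearings be compared when planning an overhaul.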
The Grand Challenge of Managing the Petascale Facility.
This report is the result of a study of networks and how they may need to evolve to support petascale leadership computing and science. As Dr. Ray Orbach, director of the Department of Energy's Office of Science, says in the spring 2006 issue of SciDAC Review, 'One remarkable example of growth in unexpected directions has been in high-end computation'. In the same article Dr. Michael Strayer states, 'Moore's law suggests that before the end of the next cycle of SciDAC, we shall see petaflop computers'. Given the Office of Science's strong leadership and support for petascale computing and facilities, we should expect to see petaflop computers in operation in support of science before the end of the decade, and DOE/SC Advanced Scientific Computing Research programs are focused on making this a reality. This study took its lead from this strong focus on petascale computing and the networks required to support such facilities, but it grew to include almost all aspects of the DOE/SC petascale computational and experimental science facilities, all of which will face daunting challenges in managing and analyzing the voluminous amounts of data expected. In addition, trends indicate the increased coupling of unique experimental facilities with computational facilities, along with the integration of multidisciplinary datasets and high-end computing with data-intensive computing; and we can expect these trends to continue at the petascale level and beyond. Coupled with recent technology trends, they clearly indicate the need for including capability petascale storage, networks, and experiments, as well as collaboration tools and programming environments, as integral components of the Office of Science's petascale capability metafacility. The objective of this report is to recommend a new cross-cutting program to support the management of petascale science and infrastructure. 
The appendices of the report document current and projected DOE computation facilities, science trends, and technology trends, whose combined impact can affect the manageability and stewardship of DOE's petascale facilities. This report is not meant to be all-inclusive. Rather, the facilities, science projects, and research topics presented are to be considered examples to clarify a point
Multi-wavelength infrared imaging computer systems and applications
This dissertation presents the development of three computer systems for multi-wavelength thermal imaging.
Two computer systems were developed for the multi-wavelength imaging pyrometers (M-WIPs) that yield non-contact temperature measurements by remotely sensing the surface of objects with unknown wavelength-dependent emissivity. These M-WIP computer systems represent the state of the art in remote temperature measurement systems based on the multi-wavelength approach. The dissertation research includes M-WIP computer system integration, software development, performance evaluation, and also applications in monitoring and control of the temperature distribution of silicon wafers in a rapid thermal processing system.
The two M-WIPs are capable of data acquisition, signal processing, system calibration, radiometric measurement, parallel processing, and process control. Temperature measurement experiments demonstrated an accuracy of ±1°C against a blackbody and ±4°C for colorbody objects. Various algorithms were developed and implemented, including real-time two-point non-uniformity correction, thermal image pseudocoloring, PC to SUN workstation data transfer, automatic IR camera integration time control, and parallel processing of radiometric measurements.
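The two-point non-uniformity correction mentioned above is a standard focal-plane-array technique: per-pixel gain and offset are derived from two uniform reference frames (e.g. a cold and a hot blackbody view) so that a uniform scene maps to a flat image. The sketch below is a generic illustration of the technique, not the M-WIP implementation; the array size and simulated pixel variations are invented.

```python
# Illustrative two-point non-uniformity correction (NUC). Two uniform
# calibration frames give each pixel a gain and offset that map its raw
# response onto the array's mean response.
import numpy as np

def two_point_nuc(cold, hot):
    """Per-pixel (gain, offset) from two uniform calibration frames."""
    gain = (hot.mean() - cold.mean()) / (hot - cold)  # per-pixel gain
    offset = cold.mean() - gain * cold                # per-pixel offset
    return gain, offset

def correct(raw, gain, offset):
    """Apply the correction to a raw frame."""
    return gain * raw + offset

# Simulated 4x4 detector with pixel-to-pixel gain/offset variation.
rng = np.random.default_rng(0)
true_gain = 1.0 + 0.1 * rng.standard_normal((4, 4))
true_off = 5.0 * rng.standard_normal((4, 4))
cold = true_gain * 100 + true_off    # response to a uniform 100-count scene
hot = true_gain * 1000 + true_off    # response to a uniform 1000-count scene

g, o = two_point_nuc(cold, hot)
# A uniform mid-range scene now comes out flat across the array.
corrected = correct(true_gain * 500 + true_off, g, o)
```

Because the pixel response is modeled as linear, the correction is exact at and between the two calibration points, which is why "real-time" operation reduces to one multiply and one add per pixel.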
A third computer system was developed for the demonstration of a 3-color InGaAs FPA, which can provide images with information in three different IR wavelength ranges simultaneously. A number of functions were developed to demonstrate and characterize 3-color FPAs, and the system was delivered for use by the 3-color FPA manufacturer.