Regional Data Archiving and Management for Northeast Illinois
This project studies the feasibility and implementation options for establishing a regional data archiving system to help monitor
and manage traffic operations and planning for the northeastern Illinois region. It aims to provide clear guidance to the
regional transportation agencies, from both technical and business perspectives, about building such a comprehensive
transportation information system. Several implementation alternatives are identified and analyzed. This research is carried
out in three phases.
In the first phase, existing documents related to ITS deployments in the broader Chicago area are summarized, and a
thorough review is conducted of similar systems across the country. Various stakeholders are interviewed to collect
information on all data elements that they store, including the format, system, and granularity. Their perception of a data
archive system, such as potential benefits and costs, is also surveyed. In the second phase, a conceptual design of the
database is developed. This conceptual design includes system architecture, functional modules, user interfaces, and
examples of usage. In the last phase, the possible business models for the archive system to sustain itself are reviewed. We
estimate initial capital and recurring operational/maintenance costs for the system based on realistic information on the
hardware, software, labor, and resource requirements. We also identify possible revenue opportunities.
A few implementation options for the archive system are summarized in this report; namely:
1. System hosted by a partnering agency
2. System contracted to a university
3. System contracted to a national laboratory
4. System outsourced to a service provider
The costs, advantages, and disadvantages of each of these recommended options are also provided. (Report No. ICT-R27-22)
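To make the archiving concept concrete, the following is a minimal sketch of what one archived detector-data table might look like. The table name, columns, granularity, and sample values are illustrative assumptions, not the report's actual design.

```python
import sqlite3

# Hypothetical layout for an archived traffic-detector feed (all names assumed).
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE detector_readings (
        detector_id   TEXT NOT NULL,
        observed_at   TEXT NOT NULL,   -- ISO-8601 timestamp
        volume        INTEGER,         -- vehicles per interval
        occupancy     REAL,            -- fraction of interval detector was occupied
        speed_mph     REAL,
        source_agency TEXT,            -- which stakeholder supplied the record
        PRIMARY KEY (detector_id, observed_at)
    )
""")
conn.execute(
    "INSERT INTO detector_readings VALUES (?, ?, ?, ?, ?, ?)",
    ("D-101", "2010-06-01T08:05:00", 42, 0.18, 31.5, "IDOT"),
)
row = conn.execute(
    "SELECT volume, speed_mph FROM detector_readings WHERE detector_id = 'D-101'"
).fetchone()
assert row == (42, 31.5)
```

A composite key on detector and timestamp is one natural choice when many agencies contribute time-series readings at different granularities.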
On-demand lake circulation modeling management system
Proceedings of the Seventh International Conference on Hydroscience and Engineering, Philadelphia, PA, September 2006. http://hdl.handle.net/1860/732
In this paper, we present a system for real-time forecasting of lake circulation at Trout Lake Station
in Upper Wisconsin. Our system uses near-real-time meteorological data to drive a lake circulation
model. Near-real-time ADCP data will be used to validate the model. As one part of a
cyberinfrastructure lake project, this paper presents the broad system design spanning several collaborating
organizations: device management and data acquisition at Trout Lake Station, data storage at the Center for
Limnology, circulation modeling at the Environmental Fluid Mechanics Laboratory, and the system portal at the
Grid Computing Research Laboratory of SUNY Binghamton.
We also present the concept and implementation of hindcasting and nowcasting. The application of
this system demonstrates an effective collaboration across many different fields.
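The hindcast/nowcast pattern can be sketched with a toy forcing-driven model: a hindcast runs the model over archived forcing data, and a nowcast continues from that state using the latest observations. The model, coefficients, and wind values below are all illustrative assumptions; the real system drives a full lake circulation model with Trout Lake meteorological feeds.

```python
# Toy relaxation model: surface current u relaxes toward a wind-driven
# equilibrium ALPHA * wind. All constants and data here are assumed.
ALPHA = 0.03   # wind-to-current coupling (assumed)
DT = 1.0       # hours per step
TAU = 6.0      # relaxation time scale in hours (assumed)

def step(u, wind):
    return u + DT / TAU * (ALPHA * wind - u)

def simulate(u0, winds):
    """Run the model over a forcing record; return the state history."""
    u, history = u0, [u0]
    for w in winds:
        u = step(u, w)
        history.append(u)
    return history

archived_winds = [5.0, 6.0, 4.0, 3.0]   # hindcast: past observations
latest_winds = [3.5, 3.5]               # nowcast: most recent feed

hindcast = simulate(0.0, archived_winds)
# Nowcast continues from the hindcast's final state.
nowcast = simulate(hindcast[-1], latest_winds)
assert nowcast[0] == hindcast[-1]
```

The key point is only the initialization: a nowcast is warm-started from the hindcast state rather than from rest, which is what near-real-time forcing data makes possible.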
Design of a fulfillment pack area for a pet supply company experiencing steady growth
ImageJ2: ImageJ for the next generation of scientific image data
ImageJ is an image analysis program extensively used in the biological
sciences and beyond. Due to its ease of use, recordable macro language, and
extensible plug-in architecture, ImageJ enjoys contributions from
non-programmers, amateur programmers, and professional developers alike.
Enabling such a diversity of contributors has resulted in a large community
that spans the biological and physical sciences. However, a rapidly growing
user base, diverging plugin suites, and technical limitations have revealed a
clear need for a concerted software engineering effort to support emerging
imaging paradigms, to ensure the software's ability to handle the requirements
of modern science. Due to these new and emerging challenges in scientific
imaging, ImageJ is at a critical development crossroads.
We present ImageJ2, a total redesign of ImageJ offering a host of new
functionality. It separates concerns, fully decoupling the data model from the
user interface. It emphasizes integration with external applications to
maximize interoperability. Its robust new plugin framework allows everything
from image formats, to scripting languages, to visualization to be extended by
the community. The redesigned data model supports arbitrarily large,
N-dimensional datasets, which are increasingly common in modern image
acquisition. Despite the scope of these changes, backwards compatibility is
maintained such that this new functionality can be seamlessly integrated with
the classic ImageJ interface, allowing users and developers to migrate to these
new methods at their own pace. ImageJ2 provides a framework engineered for
flexibility, intended to support these requirements as well as accommodate
future needs.
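The kind of arbitrarily large N-dimensional image the redesigned data model targets can be illustrated with a small array sketch: axes beyond x/y (here z, channel, and time) with operations expressed against named axes. The axis-naming convention and helper function below are illustrative assumptions, not ImageJ2's actual API.

```python
import numpy as np

# A 5-D image: x, y, z-slices, channels, time points (names assumed).
axes = ("x", "y", "z", "channel", "time")
img = np.zeros((64, 64, 10, 3, 20), dtype=np.uint16)

def max_project(data, axis_names, axis):
    """Maximum-intensity projection along a named axis; returns the
    projected array and the remaining axis names."""
    i = axis_names.index(axis)
    return data.max(axis=i), axis_names[:i] + axis_names[i + 1:]

proj, proj_axes = max_project(img, axes, "z")
assert proj.shape == (64, 64, 3, 20)
assert proj_axes == ("x", "y", "channel", "time")
```

Decoupling such a data model from any particular display is what lets the same dataset back both the classic user interface and external applications.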
Interior Point Methods for Massive Support Vector Machines
We investigate the use of interior point methods for solving quadratic
programming problems with a small number of linear constraints where
the quadratic term consists of a low-rank update to a positive semi-definite
matrix. Several formulations of the support vector machine fit into this
category. An interesting feature of these particular problems is the
volume of data, which can lead to quadratic programs with between 10 and
100 million variables and a dense Q matrix. We use OOQP, an object-oriented
interior point code, to solve these problems because it allows us
to easily tailor the required linear algebra to the application. Our linear
algebra implementation uses a proximal point modification to the underlying
algorithm, and exploits the Sherman-Morrison-Woodbury formula
and the Schur complement to facilitate efficient linear system solution.
Since we target massive problems, the data is stored out of core and we
overlap computation and I/O to reduce overhead. Results are reported
for several linear support vector machine formulations, demonstrating the
reliability and scalability of the method.
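The Sherman-Morrison-Woodbury trick the abstract relies on can be shown in miniature: for a diagonal-plus-low-rank matrix D + V Vᵀ, only a small k×k "capacitance" system is factored, so the n×n matrix is never formed. This toy sketch is not OOQP's implementation, just the underlying identity.

```python
import numpy as np

def smw_solve(d, V, b):
    """Solve (diag(d) + V V^T) x = b via Sherman-Morrison-Woodbury:
    (D + V V^T)^{-1} = D^{-1} - D^{-1} V (I + V^T D^{-1} V)^{-1} V^T D^{-1}.
    Only a k x k system is factored, where k = V.shape[1] << n.
    """
    Dinv_b = b / d
    Dinv_V = V / d[:, None]
    k = V.shape[1]
    # Small capacitance system: (I + V^T D^{-1} V) y = V^T D^{-1} b
    S = np.eye(k) + V.T @ Dinv_V
    y = np.linalg.solve(S, V.T @ Dinv_b)
    return Dinv_b - Dinv_V @ y

rng = np.random.default_rng(0)
n, k = 2000, 5
d = rng.uniform(1.0, 2.0, n)       # positive diagonal
V = rng.standard_normal((n, k))    # low-rank factor
b = rng.standard_normal(n)

x = smw_solve(d, V, b)
# Cross-check against a dense solve (feasible only at this toy size).
x_dense = np.linalg.solve(np.diag(d) + V @ V.T, b)
assert np.allclose(x, x_dense)
```

At the scales the paper targets, the dense cross-check is impossible; the point of the identity is that the cost stays linear in n for fixed k, which is also what makes out-of-core storage of V practical.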
Tools for efficient analysis of concurrent software systems
The ever increasing use of distributed computing as a method of providing added computing power and reliability has sparked interest in methods to model and analyze concurrent hardware/software systems. Efficient automated analysis tools are needed to aid designers of such systems. The Distributed Systems Project at UCI has been developing a suite of tools (dubbed the P-NUT system) which supports efficient analysis of models of concurrent software. This paper presents the principles which guide the development of P-NUT tools and discusses the development of one of the tools: the Reachability Graph Builder (RGB). The P-NUT approach to tool development has resulted in the production of a highly efficient tool for constructing reachability graphs. The careful design of data structures and associated algorithms has significantly enlarged the class of models which can be analyzed.
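The core of a reachability graph builder can be sketched as a breadth-first exploration: states are nodes, enabled transitions are edges, and a visited-set keeps each state unique. The API below is illustrative, not P-NUT's own, but the careful choice of state representation and lookup structure it hints at is exactly the kind of data-structure design the paper credits for efficiency.

```python
from collections import deque

def build_reachability_graph(initial, successors):
    """Explore all states reachable from `initial`.

    `successors(state)` yields (transition_label, next_state) pairs.
    Returns the set of reachable states and the list of labeled edges.
    (Interface is an illustrative assumption, not RGB's actual API.)
    """
    seen = {initial}
    edges = []
    frontier = deque([initial])
    while frontier:
        state = frontier.popleft()
        for label, nxt in successors(state):
            edges.append((state, label, nxt))
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen, edges

# Toy concurrent system: two processes, each independently toggling a bit.
def succ(state):
    a, b = state
    yield ("flip_a", (1 - a, b))
    yield ("flip_b", (a, 1 - b))

states, edges = build_reachability_graph((0, 0), succ)
assert len(states) == 4   # 2 x 2 interleaved states
assert len(edges) == 8    # 2 outgoing transitions per state
```

State explosion is the practical limit here: the reachable set grows as the product of per-process state counts, which is why compact state encodings matter.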