60,015 research outputs found
Tangos: the agile numerical galaxy organization system
We present Tangos, a Python framework and web interface for database-driven
analysis of numerical structure formation simulations. To understand the role
that such a tool can play, consider constructing a history for the absolute
magnitude of each galaxy within a simulation. The magnitudes must first be
calculated for all halos at all timesteps and then linked using a merger tree;
folding the required information into a final analysis can entail significant
effort. Tangos is a generic solution to this information organization problem,
aiming to free users from the details of data management. At the querying
stage, our example of gathering properties over history is reduced to a few
clicks or a simple, single-line Python command. The framework is highly
extensible; in particular, users are expected to define their own properties
which tangos will write into the database. A variety of parallelization options
are available and the raw simulation data can be read using existing libraries
such as pynbody or yt. Finally, tangos-based databases and analysis pipelines
can easily be shared with collaborators or the broader community to ensure
reproducibility. User documentation is provided separately.
Comment: Clarified various points and further improved code performance;
accepted for publication in ApJS. Tutorials (including video) at
http://tiny.cc/tango
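To make the "information organization problem" concrete, the sketch below shows, in plain Python, what gathering a property over a galaxy's history involves: each halo at each timestep carries a computed property, and a merger tree links it to its main progenitor at the previous timestep. The class and function names are purely illustrative, not the tangos API, and the magnitude values are made up.

```python
# Toy sketch of the "property over history" problem that tangos automates.
# Names and values here are illustrative, not the tangos API.

class Halo:
    def __init__(self, name, v_mag, progenitor=None):
        self.name = name
        self.v_mag = v_mag            # absolute magnitude (made-up value)
        self.progenitor = progenitor  # main progenitor at the previous timestep

def magnitude_history(halo):
    """Walk the main-progenitor branch, collecting magnitudes, latest first."""
    history = []
    while halo is not None:
        history.append(halo.v_mag)
        halo = halo.progenitor
    return history

# Build a three-timestep main-progenitor branch by hand.
h_early = Halo("ts1/halo_1", -18.2)
h_mid   = Halo("ts2/halo_1", -19.0, progenitor=h_early)
h_late  = Halo("ts3/halo_1", -19.5, progenitor=h_mid)

print(magnitude_history(h_late))  # [-19.5, -19.0, -18.2]
```

In a real simulation this bookkeeping spans thousands of halos and many timesteps; tangos stores the computed properties and merger-tree links in a database so the whole traversal collapses to a single query.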
Action Recognition in Videos: from Motion Capture Labs to the Web
This paper presents a survey of human action recognition approaches based on
visual data recorded from a single video camera. We propose an organizing
framework which highlights the evolution of the area, with techniques moving
from heavily constrained motion capture scenarios towards more challenging,
realistic, "in the wild" videos. The proposed organization is based on the
representation used as input for the recognition task, emphasizing the
hypotheses assumed and thus the constraints imposed on the type of video that
each technique is able to address. Making these hypotheses and constraints
explicit renders the framework particularly useful for selecting a method for a
given application. Another advantage of the proposed organization is that it
allows the newest approaches to be categorized seamlessly alongside traditional
ones, while providing an insightful perspective on the evolution of the action
recognition task up to now. That perspective is the basis for the discussion at
the end of the paper, where we also present the main open issues in the area.
Comment: Preprint submitted to CVIU, survey paper, 46 pages, 2 figures, 4
tables
Image enhancement from a stabilised video sequence
The aim of video stabilisation is to create a new video sequence where the motions (i.e. rotations, translations) and scale differences between frames (or parts of a frame) have effectively been removed. These stabilisation effects can be obtained via digital video processing techniques which use the information extracted from the video sequence itself, with no need for additional hardware or knowledge about camera physical motion.
A video sequence usually contains a large overlap between successive frames, and regions of the same scene are sampled at different positions. In this paper, this multiple sampling is combined to achieve images with a higher spatial resolution. Higher resolution imagery plays an important role in assisting in the identification of people, vehicles, structures or objects of interest captured by surveillance cameras or by video cameras used in face recognition, traffic monitoring, traffic law enforcement, driver assistance and automatic vehicle guidance systems.
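The core idea of combining multiple samplings can be sketched in one dimension: two "frames" that sample the same scene at a half-pixel offset can be interleaved, after alignment, into a signal with twice the spatial resolution. This is a deliberately minimal illustration; a real pipeline must first estimate the inter-frame motion from the stabilised sequence, whereas here the half-pixel shift is assumed known.

```python
# Minimal 1-D sketch of the multiple-sampling idea behind super-resolution.
# The half-pixel offset between the two frames is assumed known; estimating
# it is the job of the registration/stabilisation stage.

scene = [10, 12, 14, 16, 18, 20, 22, 24]  # the "true" high-resolution scene

frame_a = scene[0::2]  # one frame samples the even positions
frame_b = scene[1::2]  # a second frame samples at a half-pixel shift

def shift_and_add(even_samples, odd_samples):
    """Interleave two half-pixel-offset samplings into one high-res signal."""
    high_res = []
    for e, o in zip(even_samples, odd_samples):
        high_res.extend([e, o])
    return high_res

print(shift_and_add(frame_a, frame_b))  # recovers the original scene
```

With real imagery the offsets are fractional and noisy, so the combination step typically becomes a weighted interpolation onto a finer grid rather than a simple interleave, but the principle is the same.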
Improving health and public safety through knowledge management
This paper reports on KM in public healthcare and public safety. It reflects the experiences of the author as a CIO (Chief Information Officer) in both industries in Australia and New Zealand. There are commonalities in goals and challenges in KM in both industries. In the case of public safety a goal of modern policing theory is to move more towards intelligence-driven practice. That means interventions based upon research and analysis of information. In healthcare the goals include investment in capacity based upon knowledge of healthcare needs, evidence-based service planning and care delivery, capture of information and provision of knowledge at the point-of-care and evaluation of outcomes.
The issue of knowledge management is explored from the perspective of the information user and from the discipline of Information Technology and its application to healthcare and public safety. Case studies are discussed to illustrate knowledge management and its limiting or enabling factors. These factors include strategy, architecture, standards, feedback loops, training, quality processes, and social factors such as expectations, ownership of systems and politics.
The Arizona CDFS Environment Survey (ACES): A Magellan/IMACS Spectroscopic Survey of the Chandra Deep Field South
We present the Arizona CDFS Environment Survey (ACES), a recently-completed
spectroscopic redshift survey of the Chandra Deep Field South (CDFS) conducted
using IMACS on the Magellan-Baade telescope. In total, the survey targeted 7277
unique sources down to a limiting magnitude of R = 24.1, yielding 5080 secure
redshifts across the ~30' x 30' extended CDFS region. The ACES dataset delivers
a significant increase to both the spatial coverage and the sampling density of
the spectroscopic observations in the field. Combined with
previously-published spectroscopic redshifts, ACES now creates a
highly-complete survey of the galaxy population at R < 23, enabling the local
galaxy density (or environment) on relatively small scales (~1 Mpc) to be
measured at z < 1 in one of the most heavily-studied and data-rich fields in
the sky. Here, we describe the motivation, design, and implementation of the
survey and present a preliminary redshift and environment catalog. In addition,
we utilize the ACES spectroscopic redshift catalog to assess the quality of
photometric redshifts from both the COMBO-17 and MUSYC imaging surveys of the
CDFS.
Comment: resubmitted to MNRAS; 12 pages, 12 figures, and 3 tables; updated
redshift catalog available at http://mur.ps.uci.edu/~cooper/ACES
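A simple way to picture the "local galaxy density (or environment)" measurement described above is counting spectroscopic neighbours within a fixed radius of each galaxy. The sketch below is illustrative only, not the ACES pipeline: positions are made-up 2-D comoving coordinates in Mpc, and the ~1 Mpc scale from the abstract is used as the counting radius.

```python
# Illustrative environment measure (not the ACES pipeline): count neighbours
# within a fixed radius (~1 Mpc) of each galaxy. Positions are made-up
# 2-D comoving coordinates in Mpc.

import math

positions = [(0.0, 0.0), (0.4, 0.3), (0.8, 0.0), (5.0, 5.0)]  # Mpc

def local_density(positions, radius=1.0):
    """Neighbour count within `radius` Mpc of each galaxy, excluding itself."""
    counts = []
    for i, p in enumerate(positions):
        n = sum(1 for j, q in enumerate(positions)
                if j != i and math.dist(p, q) <= radius)
        counts.append(n)
    return counts

print(local_density(positions))  # [2, 2, 2, 0] -- the last galaxy is isolated
```

Survey work adds complications this toy ignores: redshift-space distortions along the line of sight, survey edges, and the incompleteness corrections that a highly complete sample like ACES is designed to minimise.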
Radio Frequency Identification Technology: Applications, Technical Challenges and Strategies
Purpose - The purpose of this paper is to discuss the technology behind RFID systems, identify the applications of RFID in various industries, and discuss the technical challenges of RFID implementation and the corresponding strategies to overcome those challenges.
Design/methodology/approach - Comprehensive literature review and integration of the findings from the literature.
Findings - Technical challenges of RFID implementation include tag cost, standards, tag and reader selection, data management, systems integration and security. A corresponding solution is suggested for each challenge.
Research limitations/implications - A survey-based study is needed to validate the results.
Practical implications - This research offers useful technical guidance for companies which plan to implement RFID and we expect it to provide the motivation for much future research in this area.
Originality/value - Given the infancy of RFID applications, little research has addressed the technical issues of RFID implementation. Our research fills this gap.
Split and Migrate: Resource-Driven Placement and Discovery of Microservices at the Edge
Microservices architectures combine the use of fine-grained and independently-scalable services with lightweight communication protocols, such as REST calls over HTTP. Microservices bring flexibility to the development and deployment of application back-ends in the cloud.
Applications such as collaborative editing tools require frequent interactions between the front-end running on users' machines and a back-end formed of multiple microservices. User-perceived latencies depend on their connection to microservices, but also on the interaction patterns between these services and their databases. Placing services at the edge of the network, closer to the users, is necessary to reduce user-perceived latencies. It is however difficult to decide on the placement of complete stateful microservices at one specific core or edge location without trading between a latency reduction for some users and a latency increase for the others.
We present how to dynamically deploy microservices on a combination of core and edge resources to systematically reduce user-perceived latencies. Our approach enables the split of stateful microservices, and the placement of the resulting splits on appropriate core and edge sites. Koala, a decentralized and resource-driven service discovery middleware, enables REST calls to reach and use the appropriate split, with only minimal changes to a legacy microservices application. Locality awareness using network coordinates further enables service splits to be migrated automatically, following the location of the users. We confirm the effectiveness of our approach with a full prototype and an application to ShareLatex, a microservices-based collaborative editing application.
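The locality-aware discovery idea above can be sketched as follows: each split of a service sits at a core or edge site with a network coordinate, and a request is routed to the split whose coordinate is closest to the client's. All site names and coordinates below are made up for illustration; this is not the actual Koala middleware, which is decentralized and handles far more than nearest-replica lookup.

```python
# Toy sketch of locality-aware replica selection using network coordinates.
# Site names and 2-D coordinates are illustrative, not the Koala middleware.

import math

# (site name, network coordinate) for each split of one stateful service
splits = [
    ("core-eu",  (0.0, 0.0)),
    ("edge-par", (1.0, 0.2)),
    ("edge-ber", (3.0, 0.1)),
]

def nearest_split(client_coord, splits):
    """Return the site whose split is closest to the client in coordinate space."""
    return min(splits, key=lambda s: math.dist(client_coord, s[1]))[0]

print(nearest_split((1.1, 0.0), splits))  # a client near "edge-par" stays local
```

Migration then follows naturally: if many clients' coordinates cluster near a different site, the split serving them can be moved there, and subsequent lookups resolve to the new location without changing the calling application.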