Automated Detection of Solar Eruptions
Observation of the solar atmosphere reveals a wide range of motions, from
small scale jets and spicules to global-scale coronal mass ejections.
Identifying and characterizing these motions is essential to advancing our
understanding of the drivers of space weather. Both automated and visual
methods are currently used to identify CMEs. To date, eruptions near
the solar surface (which may be precursors to CMEs) have been identified
primarily by visual inspection. Here we report on EruptionPatrol (EP): a
software module that is designed to automatically identify eruptions from data
collected by SDO/AIA. We describe the method underlying the module and compare
its results to previous identifications found in the Heliophysics Event
Knowledgebase. EP identifies eruption events that agree with those found by
human annotators, but in a significantly more consistent and quantitative
manner. Eruptions are found to be distributed within 15 Mm of the
solar surface. They possess peak speeds ranging from 4 to 100 km/s and
display a power-law probability distribution over that range. These
characteristics are consistent with previous observations of prominences.
Comment: 6 pages, 4 figures, 7th Solar Information Processing Workshop, to appear in Space Weather and Space Climate.
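The quoted 4-100 km/s power-law behaviour invites a quick illustration of how such an exponent is estimated once peak speeds are in hand. The sketch below is not the EruptionPatrol implementation; only the speed range comes from the abstract, and the function names and synthetic data are assumptions.

```python
# Minimal sketch of a maximum-likelihood power-law fit to eruption
# peak speeds; only the 4-100 km/s range comes from the abstract.
import numpy as np

def fit_power_law_exponent(speeds_kms, v_min=4.0, v_max=100.0):
    """MLE of alpha for p(v) ~ v**(-alpha) on [v_min, v_max].

    Uses the standard continuous estimator, a good approximation
    when v_max >> v_min."""
    v = np.asarray(speeds_kms, dtype=float)
    v = v[(v >= v_min) & (v <= v_max)]
    n = v.size
    alpha = 1.0 + n / np.sum(np.log(v / v_min))
    sigma = (alpha - 1.0) / np.sqrt(n)  # approximate standard error
    return alpha, sigma

rng = np.random.default_rng(0)
fake_speeds = (rng.pareto(1.8, size=500) + 1.0) * 4.0  # synthetic Pareto draws
print(fit_power_law_exponent(fake_speeds))  # alpha near 2.8 for this sample
```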
An affect-based video retrieval system with open vocabulary querying
Content-based video retrieval systems (CBVR) are creating
new search and browse capabilities using metadata describing significant features of the data. An often overlooked aspect of human interpretation of multimedia data is the affective dimension. Incorporating affective information into multimedia metadata can potentially enable search using
this alternative interpretation of multimedia content. Recent work has described methods to automatically assign affective labels to multimedia data using various approaches. However, the subjective and imprecise nature of affective labels makes it difficult to bridge the semantic gap between system-detected labels and user expression of information requirements in multimedia retrieval. We present a novel affect-based video retrieval system incorporating an open-vocabulary query stage based on WordNet, enabling search using an unrestricted query vocabulary. The system performs automatic annotation of video data with labels of well-defined affective terms. In retrieval, annotated documents are ranked using the standard Okapi retrieval model based on open-vocabulary text queries. We present experimental results examining the behaviour of the system for retrieval over a collection of automatically annotated feature films of different genres. Our results indicate that affective annotation can potentially provide useful augmentation to more traditional objective content description in multimedia retrieval.
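To make the open-vocabulary step concrete, here is a small sketch of mapping an unrestricted query word onto a fixed affect vocabulary via WordNet, then ranking with Okapi BM25. NLTK and the rank_bm25 package are stand-ins chosen for illustration, and the four-term vocabulary and sample documents are hypothetical, not the system's actual label set.

```python
# Sketch: WordNet query expansion onto a fixed affect vocabulary,
# then Okapi BM25 ranking. Libraries and label set are illustrative.
from nltk.corpus import wordnet as wn   # requires: nltk.download('wordnet')
from rank_bm25 import BM25Okapi

AFFECT_VOCABULARY = ["fear", "joy", "sadness", "anger"]  # hypothetical set

def expand_query(word):
    """Map an unrestricted query word onto the affect vocabulary via
    WordNet synonyms and hypernyms."""
    related = {word}
    for synset in wn.synsets(word):
        related.update(lemma.name() for lemma in synset.lemmas())
        for hyper in synset.hypernyms():
            related.update(lemma.name() for lemma in hyper.lemmas())
    return [t for t in AFFECT_VOCABULARY if t in related]

# Hypothetical per-film affect annotations (one "document" per film):
docs = [["fear", "fear", "sadness"], ["joy", "joy", "anger"]]
bm25 = BM25Okapi(docs)
query = [t for w in ["terror"] for t in expand_query(w)]
print(bm25.get_scores(query))  # one relevance score per film
```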
Speaker-following Video Subtitles
We propose a new method for improving the presentation of subtitles in video
(e.g. TV and movies). With conventional subtitles, the viewer has to constantly
look away from the main viewing area to read the subtitles at the bottom of the
screen, which disrupts the viewing experience and causes unnecessary eyestrain.
Our method places on-screen subtitles next to the respective speakers to allow
the viewer to follow the visual content while simultaneously reading the
subtitles. We use novel identification algorithms to detect the speakers based
on audio and visual information. Then the placement of the subtitles is
determined using global optimization. A comprehensive usability study indicated
that our subtitle placement method outperformed both conventional
fixed-position subtitling and another previous dynamic subtitling method in
terms of enhancing the overall viewing experience and reducing eyestrain.
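As a toy illustration of the global-optimization idea (the cost terms, weights, and exhaustive search below are illustrative assumptions, not the paper's formulation), placement can be framed as choosing one candidate slot per subtitle so as to minimize a summed cost of staying near the speaker while avoiding large jumps between consecutive subtitles.

```python
# Toy global placement: pick one slot per subtitle minimizing a
# distance-to-speaker term plus a jump penalty between subtitles.
# Exhaustive search is used for clarity; a DP or solver would scale.
import itertools
import math

def place_subtitles(speaker_positions, candidates, w_dist=1.0, w_jump=0.5):
    """speaker_positions: (x, y) of the speaker for each subtitle.
    candidates: list of allowed (x, y) slots per subtitle.
    Returns one slot per subtitle minimizing the summed cost."""
    best, best_cost = None, math.inf
    for assignment in itertools.product(*candidates):
        cost = sum(w_dist * math.dist(a, s)
                   for a, s in zip(assignment, speaker_positions))
        cost += sum(w_jump * math.dist(a, b)
                    for a, b in zip(assignment, assignment[1:]))
        if cost < best_cost:
            best, best_cost = assignment, cost
    return best

speakers = [(100, 200), (400, 220)]                    # hypothetical
slots = [[(80, 150), (120, 260)], [(380, 170), (420, 280)]]
print(place_subtitles(speakers, slots))
```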
Mapping the spatiotemporal dynamics of calcium signaling in cellular neural networks using optical flow
An optical flow gradient algorithm was applied to spontaneously forming
networks of neurons and glia in culture imaged by fluorescence optical microscopy
in order to map functional calcium signaling with single pixel resolution.
Optical flow estimates the direction and speed of motion of objects in an image
between subsequent frames in a recorded digital sequence of images (i.e. a
movie). Computed vector field outputs by the algorithm were able to track the
spatiotemporal dynamics of calcium signaling patterns. We begin by briefly
reviewing the mathematics of the optical flow algorithm, and then describe how
to solve for the displacement vectors and how to measure their reliability. We
then compare computed flow vectors with manually estimated vectors for the
progression of a calcium signal recorded from representative astrocyte
cultures. Finally, we apply the algorithm to preparations of primary
astrocytes and hippocampal neurons and to the rMC-1 Müller glial cell line in
order to illustrate the capability of the algorithm for capturing different
types of spatiotemporal calcium activity. We discuss the imaging requirements,
parameter selection and threshold selection for reliable measurements, and
offer perspectives on uses of the vector data.
Comment: 23 pages, 5 figures. Peer-reviewed accepted version in press in Annals of Biomedical Engineering.
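For readers who want to try this on their own recordings, a dense flow field can be computed with off-the-shelf tools. The sketch below uses OpenCV's Farneback estimator as a readily available substitute for the paper's own gradient formulation and reliability measure; the file names are hypothetical.

```python
# Dense per-pixel flow between two frames of a fluorescence movie,
# using OpenCV's Farneback method as a stand-in gradient estimator.
import cv2
import numpy as np

def flow_field(frame_a, frame_b):
    """Dense displacement vectors (dx, dy) between two grayscale frames."""
    # args: prev, next, flow, pyr_scale, levels, winsize,
    #       iterations, poly_n, poly_sigma, flags
    flow = cv2.calcOpticalFlowFarneback(frame_a, frame_b, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    speed = np.linalg.norm(flow, axis=2)            # pixels/frame
    direction = np.arctan2(flow[..., 1], flow[..., 0])
    return flow, speed, direction

# Hypothetical usage on two consecutive frames of a calcium movie:
a = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
b = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)
flow, speed, direction = flow_field(a, b)
print(speed.max(), "px/frame at the fastest pixel")
```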
Taking the bite out of automated naming of characters in TV video
We investigate the problem of automatically labelling appearances of characters in TV or film material
with their names. This is tremendously challenging due to the huge variation in imaged appearance of each character and the weakness and ambiguity of available annotation. However, we demonstrate that high precision can be achieved by combining multiple sources of information, both visual and textual. The principal novelties that we introduce are: (i) automatic generation of time stamped character annotation by aligning subtitles and transcripts; (ii) strengthening the supervisory information by identifying
when characters are speaking. In addition, we incorporate complementary cues of face matching and clothing matching to propose common annotations for face tracks, and consider choices of classifier which can potentially correct errors made in the automatic extraction of training data from the weak textual annotation. Results are presented on episodes of the TV series "Buffy the Vampire Slayer".
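The subtitle/transcript alignment in point (i) can be sketched compactly: subtitles carry timestamps but no speaker names, transcripts carry names but no timing, so matching the two text streams transfers timing to names. The toy below uses a simple similarity match for brevity (a dynamic-programming word alignment would be the more robust choice); all data and names are hypothetical.

```python
# Toy alignment: attach speaker names (from a transcript) to timed
# subtitle spans by text similarity. Data below is invented.
from difflib import SequenceMatcher

subtitles = [  # (start_sec, end_sec, text)
    (10.0, 12.0, "we have to stop him"),
    (12.5, 14.0, "not tonight we don't"),
]
transcript = [  # (speaker, text)
    ("BUFFY", "we have to stop him"),
    ("GILES", "not tonight we don't"),
]

def align(subtitles, transcript, threshold=0.8):
    """Return (start, end, speaker, text) for confidently matched lines."""
    labelled = []
    for start, end, sub_text in subtitles:
        speaker, line = max(
            transcript,
            key=lambda t: SequenceMatcher(None, sub_text, t[1]).ratio())
        if SequenceMatcher(None, sub_text, line).ratio() >= threshold:
            labelled.append((start, end, speaker, sub_text))
    return labelled

print(align(subtitles, transcript))
```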
Meteorology of Jupiter's Equatorial Hot Spots and Plumes from Cassini
We present an updated analysis of Jupiter's equatorial meteorology from
Cassini observations. For two months preceding the spacecraft's closest
approach, the Imaging Science Subsystem (ISS) onboard regularly imaged the
atmosphere. We created time-lapse movies from this period in order to analyze
the dynamics of equatorial hot spots and their interactions with adjacent
latitudes. Hot spots are quasi-stable, rectangular dark areas on
visible-wavelength images, with defined eastern edges that sharply contrast
with surrounding clouds, but diffuse western edges serving as nebulous
boundaries with adjacent equatorial plumes. Hot spots exhibit significant
variations in size and shape over timescales of days and weeks. Some of these
changes correspond with passing vortex systems from adjacent latitudes
interacting with hot spots. Strong anticyclonic gyres present to the south and
southeast of the dark areas appear to circulate into hot spots. Impressive,
bright white plumes occupy spaces in between hot spots. Compact cirrus-like
'scooter' clouds flow rapidly through the plumes before disappearing within the
dark areas. These clouds travel at 150-200 m/s, much faster than the 100 m/s
hot spot and plume drift speed. This raises the possibility that the scooter
clouds may be more representative of the actual jet stream speed at these
latitudes. Most previously published zonal wind profiles represent the drift
speed of the hot spots at their latitude, derived from pattern matching of the
entire longitudinal image strip. If a downward branch of an equatorially-trapped
Rossby wave controls the overall appearance of hot spots, however, the
westward phase velocity of the wave leads to underestimates of the true jet
stream speed.
Comment: 33 pages, 11 figures; accepted for publication in Icarus; for supplementary movies, please contact author.
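The speed bookkeeping in this abstract reduces to simple arithmetic, sketched below: a feature's longitudinal drift converts to m/s via Jupiter's equatorial circumference, and a westward wave phase velocity makes the observed hot-spot drift underestimate the jet. Jupiter's equatorial radius is a known constant; the example displacement and the -70 m/s phase velocity are purely illustrative numbers, not measurements from the paper.

```python
# Back-of-the-envelope speed arithmetic; example values illustrative.
import math

R_JUP_EQ = 71_492e3  # Jupiter's equatorial radius, meters

def zonal_speed(dlon_deg, dt_s, latitude_deg=0.0):
    """Eastward speed (m/s) of a feature that drifts dlon_deg degrees
    of longitude in dt_s seconds at the given latitude."""
    circumference = 2 * math.pi * R_JUP_EQ * math.cos(math.radians(latitude_deg))
    return (dlon_deg / 360.0) * circumference / dt_s

# A scooter cloud moving 5 degrees east in ~8.6 hours comes out near
# the abstract's 200 m/s figure:
print(round(zonal_speed(5.0, 8.6 * 3600)))  # -> ~201 m/s

# If hot spots are the signature of a wave whose phase propagates
# westward relative to the flow (c_rel < 0), the jet speed exceeds
# the observed pattern drift:
drift, c_rel = 100.0, -70.0        # m/s; c_rel is illustrative
print(drift - c_rel)               # inferred jet speed: 170.0 m/s
```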
Visualization and Correction of Automated Segmentation, Tracking and Lineaging from 5-D Stem Cell Image Sequences
Results: We present an application that enables the quantitative analysis of
multichannel 5-D (x, y, z, t, channel) and large montage confocal fluorescence
microscopy images. The image sequences show stem cells together with blood
vessels, enabling quantification of the dynamic behaviors of stem cells in
relation to their vascular niche, with applications in developmental and cancer
biology. Our application automatically segments, tracks, and lineages the image
sequence data and then allows the user to view and edit the results of
automated algorithms in a stereoscopic 3-D window while simultaneously viewing
the stem cell lineage tree in a 2-D window. Using the GPU to store and render
the image sequence data enables a hybrid computational approach. An
inference-based approach utilizing user-provided edits to automatically correct
related mistakes executes interactively on the system CPU while the GPU handles
3-D visualization tasks. Conclusions: By exploiting commodity computer gaming
hardware, we have developed an application that can be run in the laboratory to
facilitate rapid iteration through biological experiments. There is a pressing
need for visualization and analysis tools for 5-D live cell image data. We
combine accurate unsupervised processes with an intuitive visualization of the
results. Our validation interface allows for each data set to be corrected to
100% accuracy, ensuring that downstream data analysis is accurate and
verifiable. Our tool is the first to combine all of these aspects, leveraging
the synergies obtained by utilizing validation information from stereo
visualization to improve the low-level image processing tasks.
Comment: BioVis 2014 conference.
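A sketch of the underlying bookkeeping may help: each track points to its parent in the lineage tree, and a user edit invalidates the affected subtree so an inference-based correction pass can revisit it. The class and method names here are illustrative assumptions, not the tool's API.

```python
# Minimal lineage-tree bookkeeping with edit propagation; names are
# illustrative, not the application's actual data model.
from dataclasses import dataclass, field

@dataclass
class CellTrack:
    track_id: int
    parent: "CellTrack | None" = None
    children: list = field(default_factory=list)
    validated: bool = False   # True once a human has confirmed it

    def reassign_parent(self, new_parent):
        """Apply a user correction and flag the affected subtree."""
        if self.parent is not None:
            self.parent.children.remove(self)
        self.parent = new_parent
        new_parent.children.append(self)
        self._invalidate_subtree()

    def _invalidate_subtree(self):
        self.validated = False
        for child in self.children:
            child._invalidate_subtree()

root = CellTrack(1)
a, b = CellTrack(2), CellTrack(3)
a.reassign_parent(root); b.reassign_parent(root)
b.reassign_parent(a)   # user edit: track 3 is actually a child of 2
print([c.track_id for c in a.children])  # [3]
```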
Spott: on-the-spot e-commerce for television using deep learning-based video analysis techniques
Spott is an innovative second-screen mobile multimedia application which offers viewers relevant information on objects (e.g., clothing, furniture, food) they see and like on their television screens. The application enables interaction between TV audiences and brands, so producers and advertisers can offer potential consumers tailored promotions, e-shop items, and/or free samples. In line with current views on innovation management, the technological excellence of the Spott application is coupled with iterative user involvement throughout the entire development process. This article discusses both of these aspects and how they impact each other. First, we focus on the technological building blocks that facilitate the (semi-)automatic interactive tagging process of objects in the video streams. The majority of these building blocks make extensive use of novel, state-of-the-art deep learning concepts and methodologies. We show how these deep learning based video analysis techniques facilitate video summarization, semantic keyframe clustering, and (similar) object retrieval. Secondly, we provide insights into the user tests that have been performed to evaluate and optimize the application's user experience. The lessons learned from these open field tests have already been an essential input to the technology development and will further shape future modifications to the Spott application.
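As a rough sketch of the semantic keyframe clustering mentioned above (the models and libraries are illustrative choices, not Spott's actual stack), one can embed frames with a pretrained CNN and cluster the embeddings so each cluster stands in for a scene or keyframe group.

```python
# Sketch: CNN embeddings + k-means for keyframe clustering. The
# ResNet-50 backbone and scikit-learn are illustrative choices.
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.cluster import KMeans
from PIL import Image

backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()   # drop classifier: 2048-d embeddings
backbone.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def embed(paths):
    batch = torch.stack([preprocess(Image.open(p).convert("RGB"))
                         for p in paths])
    return backbone(batch).numpy()

frames = ["frame_000.jpg", "frame_030.jpg", "frame_060.jpg"]  # hypothetical
labels = KMeans(n_clusters=2, n_init="auto").fit_predict(embed(frames))
print(labels)   # cluster id per keyframe
```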