
    Remote Sensing of River Discharge: A Review and a Framing for the Discipline

    Remote sensing of river discharge (RSQ) is a burgeoning field rife with innovation. This innovation has resulted in a highly non-cohesive subfield of hydrology advancing at a rapid pace, and as a result misconceptions, mis-citations, and confusion are apparent among authors, readers, editors, and reviewers. While the intellectually diverse subfield of RSQ practitioners can parse this confusion, the broader hydrology community views RSQ as a monolith, and such confusion can be damaging. RSQ has not been comprehensively summarized over the past decade, and we believe that a summary of the recent literature has the potential to provide clarity to practitioners and general hydrologists alike. We therefore summarize a broad swath of the literature, and find after our reading that the most appropriate way to organize it is first by application area (methods appropriate for gauged, semi-gauged, regionally gauged, politically ungauged, and totally ungauged basins) and next by methodology. We do not find categorizing by sensor useful, and everything from un-crewed aerial vehicles (UAVs) to satellites is considered here. Perhaps the most cogent theme to emerge from our reading is the need for context. All RSQ is employed in the service of furthering hydrologic understanding, and we argue that nearly all RSQ is useful in this pursuit provided it is properly contextualized. We argue that if authors place each new work into the correct application context, much confusion can be avoided, and we suggest a framework for such context here. Specifically, we define which RSQ techniques are and are not appropriate for ungauged basins, and further define what it means to be ‘ungauged’ in the context of RSQ. We also include political and economic realities of RSQ, as the objective of the field is sometimes to provide data purposefully cloistered by specific political decisions. This framing can enable RSQ to respond to hydrology at large with confidence and cohesion even in the face of the methodological and application diversity evident within the literature. Finally, we embrace the intellectual diversity of RSQ and suggest the field is best served by a continuation of methodological proliferation rather than by a move toward orthodoxy and standardization.

    Quaternionic Representation of the Riesz Pyramid for Video Magnification

    Recently, we presented a new image pyramid, called the Riesz pyramid, that uses the Riesz transform to manipulate the phase in non-oriented sub-bands of an image sequence to produce real-time motion-magnified videos. In this report we give a quaternionic formulation of the Riesz pyramid, and show how several seemingly heuristic choices in how to use the Riesz transform for phase-based video magnification fall out of this formulation in a natural and principled way. We intend this report to accompany the original paper on the Riesz pyramid for video magnification.
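    For orientation, the following is a minimal sketch, in our own notation and with one common sign convention (not necessarily the report's), of the standard monogenic-signal quantities that a quaternionic formulation of the Riesz pyramid builds on: the Riesz transform pair of a sub-band, the quaternion that packs them together, and the resulting local amplitude, phase, and orientation.

        % Frequency responses of the two Riesz transform components applied to a sub-band I
        \[
          \widehat{\mathcal{R}_1 I}(\boldsymbol{\omega}) = -\mathrm{i}\,\frac{\omega_x}{\lVert\boldsymbol{\omega}\rVert}\,\widehat{I}(\boldsymbol{\omega}),
          \qquad
          \widehat{\mathcal{R}_2 I}(\boldsymbol{\omega}) = -\mathrm{i}\,\frac{\omega_y}{\lVert\boldsymbol{\omega}\rVert}\,\widehat{I}(\boldsymbol{\omega}).
        \]
        % The sub-band and its Riesz transform pair form a quaternion with zero k-component
        \[
          q = I + \mathbf{i}\,\mathcal{R}_1 I + \mathbf{j}\,\mathcal{R}_2 I,
        \]
        % from which local amplitude A, phase phi, and orientation theta follow:
        \[
          I = A\cos\phi, \qquad \mathcal{R}_1 I = A\sin\phi\cos\theta, \qquad \mathcal{R}_2 I = A\sin\phi\sin\theta.
        \]

    In this sketch, the products \(\phi\cos\theta\) and \(\phi\sin\theta\) are the quaternionic-phase components that get temporally filtered and amplified for motion magnification.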

    Riesz pyramids for fast phase-based video magnification

    We present a new compact image pyramid representation, the Riesz pyramid, that can be used for real-time phase-based motion magnification. Our new representation is less overcomplete than even the smallest two-orientation, octave-bandwidth complex steerable pyramid, and can be implemented using compact, efficient linear filters in the spatial domain. Motion-magnified videos produced with this new representation are of comparable quality to those produced with the complex steerable pyramid. When used with phase-based video magnification, the Riesz pyramid phase-shifts image features along only their dominant orientation rather than every orientation like the complex steerable pyramid.
    Funding: Quanta Computer (Firm); Shell Research; National Science Foundation (U.S.) (CGV-1111415); Microsoft Research (PhD Fellowship); Massachusetts Institute of Technology. Department of Mathematics; National Science Foundation (U.S.). Graduate Research Fellowship (Grant 1122374).
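    As a concrete illustration of the representation described above, the sketch below computes a Riesz transform pair for one band-passed sub-band via the FFT and converts it to amplitude and quaternionic-phase components. The paper itself uses compact spatial-domain filters rather than an FFT, and the function names, sign conventions, and structure here are our own assumptions, not the authors' code.

        # Minimal sketch (not the authors' implementation): FFT-based Riesz
        # transform of one real band-passed sub-band, assuming the frequency
        # responses R1 <-> -i*wx/|w| and R2 <-> -i*wy/|w|.
        import numpy as np

        def riesz_transform(subband):
            """Return (r1, r2), the Riesz transform pair of a real 2-D sub-band."""
            h, w = subband.shape
            wy = np.fft.fftfreq(h)[:, None]   # vertical frequencies
            wx = np.fft.fftfreq(w)[None, :]   # horizontal frequencies
            mag = np.sqrt(wx**2 + wy**2)
            mag[0, 0] = 1.0                   # avoid division by zero at DC
            F = np.fft.fft2(subband)
            r1 = np.real(np.fft.ifft2(-1j * (wx / mag) * F))
            r2 = np.real(np.fft.ifft2(-1j * (wy / mag) * F))
            return r1, r2

        def quaternionic_phase(subband):
            """Local amplitude and the two phase components phi*cos(theta) and
            phi*sin(theta) that are temporally filtered for magnification."""
            r1, r2 = riesz_transform(subband)
            amplitude = np.sqrt(subband**2 + r1**2 + r2**2)
            phi = np.arctan2(np.sqrt(r1**2 + r2**2), subband)
            theta = np.arctan2(r2, r1)
            return amplitude, phi * np.cos(theta), phi * np.sin(theta)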

    Motion denoising with application to time-lapse photography

    Motions can occur over both short and long time scales. We introduce motion denoising, which treats short-term changes as noise, long-term changes as signal, and re-renders a video to reveal the underlying long-term events. We demonstrate motion denoising for time-lapse videos. One of the characteristics of traditional time-lapse imagery is stylized jerkiness, where short-term changes in the scene appear as small and annoying jitters in the video, often obfuscating the underlying temporal events of interest. We apply motion denoising for resynthesizing time-lapse videos showing the long-term evolution of a scene with jerky short-term changes removed. We show that existing filtering approaches are often incapable of achieving this task, and present a novel computational approach to denoise motion without explicit motion analysis. We demonstrate promising experimental results on a set of challenging time-lapse sequences.
    Funding: United States. National Geospatial-Intelligence Agency (NEGI-1582-04-0004); Shell Research; United States. Office of Naval Research. Multidisciplinary University Research Initiative (Grant N00014-06-1-0734); National Science Foundation (U.S.) (0964004).
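    The abstract contrasts the proposed optimization with straightforward filtering. For reference, a minimal sketch of that naive baseline, per-pixel temporal median filtering, is given below; the window size and function name are illustrative, and this is exactly the kind of approach the paper argues falls short, because it averages intensities rather than re-rendering coherent motion.

        # Naive baseline only (not the paper's method): per-pixel temporal
        # median filtering of a time-lapse stack.
        import numpy as np
        from scipy.ndimage import median_filter

        def temporal_median_baseline(frames, window=9):
            """frames: array of shape (T, H, W) or (T, H, W, C).
            Median-filters along the time axis only."""
            size = (window,) + (1,) * (frames.ndim - 1)
            return median_filter(frames, size=size)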

    Two Weeks of Ischemic Conditioning Improves Walking Speed and Reduces Neuromuscular Fatigability in Chronic Stroke Survivors

    This pilot study examined whether ischemic conditioning (IC), a noninvasive, cost-effective, and easy-to-administer intervention, could improve gait speed and paretic leg muscle function in stroke survivors. We hypothesized that 2 wk of IC training would increase self-selected walking speed, increase paretic muscle strength, and reduce neuromuscular fatigability in chronic stroke survivors. Twenty-two chronic stroke survivors received either IC or IC Sham on their paretic leg every other day for 2 wk (7 total sessions). IC involved 5-min bouts of ischemia, repeated five times, using a cuff inflated to 225 mmHg on the paretic thigh. For IC Sham, the cuff inflation pressure was 10 mmHg. Self-selected walking speed was assessed using the 10-m walk test, and paretic leg knee extensor strength and fatigability were assessed using a Biodex dynamometer. Self-selected walking speed increased in the IC group (0.86 ± 0.21 m/s pretest vs. 1.04 ± 0.22 m/s posttest, means ± SD; P < 0.001) but not in the IC Sham group (0.92 ± 0.47 m/s pretest vs. 0.96 ± 0.46 m/s posttest; P = 0.25). Paretic leg maximum voluntary contractions were unchanged in both groups (103 ± 57 N·m pre-IC vs. 109 ± 65 N·m post-IC; 103 ± 59 N·m pre-IC Sham vs. 108 ± 67 N·m post-IC Sham; P = 0.81); however, participants in the IC group maintained a submaximal isometric contraction longer than participants in the IC Sham group (278 ± 163 s pre-IC vs. 496 ± 313 s post-IC, P = 0.004; 397 ± 203 s pre-IC Sham vs. 355 ± 195 s post-IC Sham; P = 0.46). The results from this pilot study thus indicate that IC training has the potential to improve walking speed and paretic muscle fatigue resistance poststroke.

    The visual microphone: Passive recovery of sound from video

    When sound hits an object, it causes small vibrations of the object's surface. We show how, using only high-speed video of the object, we can extract those minute vibrations and partially recover the sound that produced them, allowing us to turn everyday objects (a glass of water, a potted plant, a box of tissues, or a bag of chips) into visual microphones. We recover sounds from high-speed footage of a variety of objects with different properties, and use both real and simulated data to examine some of the factors that affect our ability to visually recover sound. We evaluate the quality of recovered sounds using intelligibility and SNR metrics and provide input and recovered audio samples for direct comparison. We also explore how to leverage the rolling shutter in regular consumer cameras to recover audio from standard frame-rate videos, and use the spatial resolution of our method to visualize how sound-related vibrations vary over an object's surface, which we can use to recover the vibration modes of an object.
    Funding: Qatar Computing Research Institute; National Science Foundation (U.S.) (CGV-1111415); National Science Foundation (U.S.). Graduate Research Fellowship (Grant 1122374); Massachusetts Institute of Technology. Department of Mathematics; Microsoft Research (PhD Fellowship).
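    The sketch below is a toy version of the general idea only: reduce each high-speed frame to a single displacement value and treat the resulting per-frame time series as audio. It uses a crude gradient-based global shift estimate rather than the localized phase analysis, weighting, and alignment the paper describes, and all names and parameters are our own assumptions.

        # Toy sketch (not the authors' pipeline): map tiny frame-to-frame
        # horizontal displacement to a one-sample-per-frame signal.
        import numpy as np

        def frames_to_signal(frames):
            """frames: float array of shape (T, H, W).
            Returns a zero-mean 1-D signal with one sample per frame."""
            signal = []
            prev = frames[0]
            for frame in frames[1:]:
                gy, gx = np.gradient(prev)            # spatial image gradients
                dt = frame - prev                      # temporal difference
                # Least-squares estimate of a single horizontal sub-pixel shift.
                shift = np.sum(dt * gx) / (np.sum(gx**2 + gy**2) + 1e-8)
                signal.append(shift)
                prev = frame
            signal = np.array(signal)
            return signal - signal.mean()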

    Phase-based video motion processing

    We introduce a technique to manipulate small movements in videos based on an analysis of motion in complex-valued image pyramids. Phase variations of the coefficients of a complex-valued steerable pyramid over time correspond to motion, and can be temporally processed and amplified to reveal imperceptible motions, or attenuated to remove distracting changes. This processing does not involve the computation of optical flow, and in comparison to the previous Eulerian Video Magnification method it supports larger amplification factors and is significantly less sensitive to noise. These improved capabilities broaden the set of applications for motion processing in videos. We demonstrate the advantages of this approach on synthetic and natural video sequences, and explore applications in scientific analysis, visualization, and video enhancement.
    Funding: Shell Research; United States. Defense Advanced Research Projects Agency. Soldier Centric Imaging via Computational Cameras; National Science Foundation (U.S.) (CGV-1111415); Cognex Corporation; Microsoft Research (PhD Fellowship); American Society for Engineering Education. National Defense Science and Engineering Graduate Fellowship.
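    To make the processing chain concrete, here is a minimal single-sub-band sketch of phase-based magnification: one oriented complex (Gabor) filter stands in for the full complex steerable pyramid, the phase of its coefficients is temporally band-passed, and the filtered phase variations are amplified before the sub-band is rebuilt. The filter parameters, pass band, and amplification factor are illustrative assumptions, not values from the paper.

        # Minimal single-sub-band sketch of phase-based motion magnification.
        import numpy as np
        from scipy.signal import fftconvolve, butter, filtfilt

        def gabor_kernel(size=15, wavelength=6.0, sigma=3.0, theta=0.0):
            """Complex Gabor filter oriented at angle theta (radians)."""
            half = size // 2
            y, x = np.mgrid[-half:half + 1, -half:half + 1]
            xr = x * np.cos(theta) + y * np.sin(theta)
            envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
            return envelope * np.exp(1j * 2 * np.pi * xr / wavelength)

        def magnify_subband(frames, alpha=10.0, fps=30.0, band=(0.5, 3.0)):
            """frames: (T, H, W) float array.
            Returns the real part of the magnified sub-band, shape (T, H, W)."""
            kern = gabor_kernel()
            coeffs = np.stack([fftconvolve(f, kern, mode="same") for f in frames])
            amplitude = np.abs(coeffs)
            phase = np.unwrap(np.angle(coeffs), axis=0)
            # Temporal band-pass filtering of the phase, frame axis = 0.
            b, a = butter(2, [band[0] / (fps / 2), band[1] / (fps / 2)], btype="band")
            delta = filtfilt(b, a, phase, axis=0)
            # Amplify the filtered phase variations and rebuild the sub-band.
            magnified = amplitude * np.exp(1j * (phase + alpha * delta))
            return magnified.real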

    Eulerian video magnification for revealing subtle changes in the world

    Our goal is to reveal temporal variations in videos that are difficult or impossible to see with the naked eye and display them in an indicative manner. Our method, which we call Eulerian Video Magnification, takes a standard video sequence as input, and applies spatial decomposition, followed by temporal filtering to the frames. The resulting signal is then amplified to reveal hidden information. Using our method, we are able to visualize the flow of blood as it fills the face and also to amplify and reveal small motions. Our technique can run in real time to show phenomena occurring at the temporal frequencies selected by the user.
    Funding: United States. Defense Advanced Research Projects Agency (DARPA SCENICC program); National Science Foundation (U.S.) (NSF CGV-1111415); Quanta Computer (Firm); Nvidia Corporation (Graduate Fellowship).
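    The abstract above describes the pipeline as spatial decomposition, temporal filtering, and amplification. A minimal sketch of that linear Eulerian pipeline follows, assuming a single heavily blurred spatial level in place of a full pyramid; the cutoff frequencies and the amplification factor alpha are illustrative (for example, a band around the expected pulse rate when visualizing blood flow in the face).

        # Minimal sketch of a linear Eulerian magnification pipeline.
        import numpy as np
        from scipy.ndimage import gaussian_filter
        from scipy.signal import butter, filtfilt

        def eulerian_magnify(frames, fps, low_hz, high_hz, alpha=50.0, sigma=10.0):
            """frames: (T, H, W) float array in [0, 1]. Returns the magnified video."""
            # Spatial decomposition: a heavy Gaussian blur stands in for a pyramid level.
            coarse = np.stack([gaussian_filter(f, sigma) for f in frames])
            # Temporal band-pass filtering of every pixel, frame axis = 0.
            b, a = butter(2, [low_hz / (fps / 2), high_hz / (fps / 2)], btype="band")
            bandpassed = filtfilt(b, a, coarse, axis=0)
            # Amplify the filtered signal and add it back to the input frames.
            return np.clip(frames + alpha * bandpassed, 0.0, 1.0)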