
    Contribution of sewage treatment to pollution abatement of urban streams

    In this study, we assessed the efficiency and effectiveness of the Vrishabhavathy Valley Treatment Plant (VVTP), the oldest sewage treatment plant (STP) in Bengaluru city. Since VVTP treats both raw sewage and polluted river water, with the latter constituting 80% of the influent, we sampled water quality at locations upstream and downstream of the plant to evaluate overall efficacy as well
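
    Where the abstract describes comparing upstream and downstream water quality, the underlying calculation is pollutant removal efficiency. Below is a minimal sketch of that calculation; the function name and the BOD readings are illustrative, not values from the study.

```python
# Minimal sketch: pollutant removal efficiency from paired upstream
# (influent) and downstream (effluent) concentration measurements.
# The readings below are illustrative, not data from the VVTP study.

def removal_efficiency(c_in, c_out):
    """Fractional removal: (C_in - C_out) / C_in."""
    if c_in <= 0:
        raise ValueError("influent concentration must be positive")
    return (c_in - c_out) / c_in

bod_upstream = 120.0   # hypothetical BOD upstream of the plant, mg/L
bod_downstream = 45.0  # hypothetical BOD downstream of the outfall, mg/L
print(f"BOD removal: {removal_efficiency(bod_upstream, bod_downstream):.0%}")
```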

    Water Management in Arkavathy basin: a situation analysis

    The Arkavathy sub-basin, which is part of the Cauvery basin, is a highly stressed, rapidly urbanising watershed on the outskirts of the city of Bengaluru. The purpose of this situation analysis document is to summarise the current state of knowledge on water management in the Arkavathy sub-basin and identify critical knowledge gaps to inform future research in the basin. It is hoped that such an analysis will help those studying or working on water issues in the basin itself, and also provide useful insights for other such urbanising basins.

    The Arkavathy sub-basin is located in the state of Karnataka in India (see Figure 1). It covers an area of 4,253 km² and is part of the inter-state Cauvery River basin. The sub-basin covers parts of eight taluks – Doddaballapur, Nelamangala, Magadi, Bangalore North, Bangalore South, Ramanagara, Anekal and Kanakapura – within three districts: Bangalore Urban, Bangalore Rural and Ramanagara.

    The total population in the sub-basin was 72 lakhs in 2001 and is estimated to be approximately 86 lakhs in 2011. This is distributed approximately 50:50 between urban and rural settlements (although the urban share is growing rapidly), with 33 lakhs from Bengaluru city (more than one-third of Bengaluru's total population). There are also four major Class II towns – Doddaballapur, Nelamangala, Ramanagara and Kanakapura – with populations ranging from 35,000 to 95,000. In spite of rapid urbanisation, there are still 1,107 revenue villages with populations ranging from less than 10 to 6,000, and agriculture continues to be the mainstay of a large number of them.
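
    As a quick check on the population figures quoted above (1 lakh = 100,000), the sketch below computes the implied decadal and average annual growth rates; the variable names are illustrative.

```python
# Sketch: growth implied by the sub-basin population figures in the
# text (72 lakhs in 2001, ~86 lakhs in 2011; 1 lakh = 100,000).

LAKH = 100_000
pop_2001 = 72 * LAKH
pop_2011 = 86 * LAKH

decadal_growth = pop_2011 / pop_2001 - 1          # ~19.4% over the decade
annual_growth = (pop_2011 / pop_2001) ** 0.1 - 1  # ~1.8% per year

print(f"2001: {pop_2001:,}  2011: {pop_2011:,}")
print(f"decadal growth: {decadal_growth:.1%}, annual: {annual_growth:.2%}")
```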

    Cross-View Action Recognition from Temporal Self-Similarities

    This paper concerns the recognition of human actions under view changes. We explore self-similarities of action sequences over time and observe the striking stability of such measures across views. Building on this key observation, we develop an action descriptor that captures the structure of temporal similarities and dissimilarities within an action sequence. Although the descriptor is not strictly view-invariant, we provide intuition and experimental validation demonstrating the high stability of self-similarities under view changes. Self-similarity descriptors are also shown to be stable under action variations within a class as well as discriminative for action recognition. Interestingly, self-similarities computed from different image features possess similar properties and can be used in a complementary fashion. Our method is simple and requires neither structure recovery nor multi-view correspondence estimation. Instead, it relies on weak geometric cues captured by self-similarities and combines them with machine learning for efficient cross-view action recognition. The method is validated on three public datasets; it shows similar or superior performance compared to related methods and performs well even in extreme conditions, such as recognizing actions from top views while training only on side views.
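
    The descriptor described above is built on a temporal self-similarity matrix: pairwise distances between the per-frame features of a sequence. The sketch below computes such a matrix; the random features stand in for whatever per-frame descriptor (e.g. joint positions or optical-flow histograms) is actually used, and the function name is our own.

```python
import numpy as np

def temporal_ssm(frame_features):
    """Temporal self-similarity matrix: Euclidean distance between the
    feature vectors of every pair of frames. Rows and columns index
    time, so the matrix encodes the temporal similarity structure of
    the sequence rather than absolute feature values."""
    X = np.asarray(frame_features, dtype=float)        # shape (T, D)
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)   # squared distances
    return np.sqrt(np.maximum(d2, 0.0))                # clamp fp noise

# Illustrative stand-in: 50 frames of a 32-D per-frame descriptor.
rng = np.random.default_rng(0)
ssm = temporal_ssm(rng.normal(size=(50, 32)))
print(ssm.shape)                                       # (50, 50)
print(np.allclose(ssm, ssm.T), ssm.diagonal().max())   # symmetric, zero diag
```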

    A Lip Sync Expert Is All You Need for Speech to Lip Generation In the Wild

    In this work, we investigate the problem of lip-syncing a talking face video of an arbitrary identity to match a target speech segment. Current works excel at producing accurate lip movements on a static image or on videos of specific people seen during training. However, they fail to accurately morph the lip movements of arbitrary identities in dynamic, unconstrained talking face videos, resulting in significant parts of the video being out of sync with the new audio. We identify the key reasons for this and resolve them by learning from a powerful lip-sync discriminator. Next, we propose new, rigorous evaluation benchmarks and metrics to accurately measure lip synchronization in unconstrained videos. Extensive quantitative evaluations on our challenging benchmarks show that the lip-sync accuracy of the videos generated by our Wav2Lip model is almost as good as that of real synced videos. We provide a demo video clearly showing the substantial impact of our Wav2Lip model and evaluation benchmarks on our website: cvit.iiit.ac.in/research/projects/cvit-projects/a-lip-sync-expert-is-all-you-need-for-speech-to-lip-generation-in-the-wild. The code and models are released at github.com/Rudrabha/Wav2Lip, and an interactive demo is available at bhaasha.iiit.ac.in/lipsync.

    Comment: 9 pages (including references), 3 figures. Accepted at ACM Multimedia 2020.
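
    The "lip-sync expert" here is a SyncNet-style discriminator that scores whether a short video window and an audio window are in sync, and the generator is trained against that score. The sketch below shows one common form of such a sync score and loss (cosine similarity between audio and video embeddings); the exact formulation in Wav2Lip may differ, and the embeddings here are random stand-ins for hypothetical encoder outputs.

```python
import numpy as np

def sync_probability(video_emb, audio_emb, eps=1e-8):
    """SyncNet-style score: cosine similarity between a video-window
    embedding and an audio-window embedding, mapped to (0, 1).
    Values near 1 mean the expert judges the lips to match the audio."""
    v = video_emb / (np.linalg.norm(video_emb) + eps)
    a = audio_emb / (np.linalg.norm(audio_emb) + eps)
    return (float(v @ a) + 1.0) / 2.0   # map [-1, 1] -> [0, 1]

def expert_sync_loss(video_emb, audio_emb):
    """Generator-side penalty -log p(sync), pushing the generator to
    produce frames that the frozen expert scores as in sync."""
    return -np.log(max(sync_probability(video_emb, audio_emb), 1e-8))

# Random stand-ins for embeddings from hypothetical encoders.
rng = np.random.default_rng(0)
v, a = rng.normal(size=512), rng.normal(size=512)
print(f"p(sync) = {sync_probability(v, a):.3f}, "
      f"loss = {expert_sync_loss(v, a):.3f}")
```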