
    The Design and Implementation of a Wireless Video Surveillance System.

    Internet-enabled cameras pervade daily life, generating a huge amount of data, but most of the video they generate is transmitted over wires and analyzed offline with a human in the loop. The ubiquity of cameras limits the amount of video that can be sent to the cloud, especially on wireless networks where capacity is at a premium. In this paper, we present Vigil, a real-time distributed wireless surveillance system that leverages edge computing to support real-time tracking and surveillance in enterprise campuses, retail stores, and across smart cities. Vigil intelligently partitions video processing between edge computing nodes co-located with cameras and the cloud to save wireless capacity, which can then be dedicated to Wi-Fi hotspots, offsetting their cost. Novel video frame prioritization and traffic scheduling algorithms further optimize Vigil's bandwidth utilization. We have deployed Vigil across three sites in both whitespace and Wi-Fi networks. Depending on the level of activity in the scene, experimental results show that Vigil allows a video surveillance system to support a geographical area of coverage between five and 200 times greater than an approach that simply streams video over the wireless network. For a fixed region of coverage and bandwidth, Vigil outperforms the default equal throughput allocation strategy of Wi-Fi by delivering up to 25% more objects relevant to a user's query.
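
    The frame prioritization idea can be illustrated with a rough sketch. This is not Vigil's actual algorithm; it is a hypothetical greedy scheduler that ranks frames by query-relevant objects per kilobyte (as reported by an edge node) and uploads the best frames until a wireless bandwidth budget is spent. The `Frame` fields and the budget are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    frame_id: int
    size_kb: float         # encoded size of the frame on the wireless link
    relevant_objects: int  # objects matching the user's query, per edge-node analysis

def select_frames(frames, budget_kb):
    """Greedily pick the frames with the most query-relevant objects per KB
    until the wireless bandwidth budget is exhausted."""
    chosen = []
    ranked = sorted(frames, key=lambda f: f.relevant_objects / f.size_kb, reverse=True)
    for f in ranked:
        if f.relevant_objects > 0 and f.size_kb <= budget_kb:
            chosen.append(f)
            budget_kb -= f.size_kb
    return chosen

frames = [Frame(0, 120.0, 3), Frame(1, 80.0, 0), Frame(2, 150.0, 5), Frame(3, 60.0, 1)]
picked = select_frames(frames, budget_kb=250.0)
```

    Under this budget, the frame with no relevant objects is never uploaded at all, which is the intuition behind partitioning analysis to the edge: wireless capacity is spent only on frames that can answer the user's query.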

    Effects of Macroeconomic News on Hong Kong Bond Market: An Intraday Empirical Evidence.

    This research paper investigates the effect of six macroeconomic news announcements on the Hong Kong bond market. The five bonds chosen for this study are: a two-year sovereign debt bond, a five-year government Treasury bond, a seven-year corporate market bond, a ten-year government Treasury bond, and a thirteen-year corporate market bond. In particular, we look at the effect of the six macroeconomic news announcements on bond prices and bond bid-ask spreads. The effect of the magnitude of the surprise element on the bonds is also investigated. We find that three of the six macroeconomic announcements affect the government and corporate bonds significantly. We also find a widening of spreads after announcements for government Treasury bonds and a narrowing of spreads for corporate market bonds. In conclusion, we find the government Treasury bonds to be more responsive than the corporate market bonds.
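
    The two quantities the abstract refers to can be sketched in a few lines. This is a generic illustration, not the paper's estimation procedure: the "surprise element" is conventionally measured as the actual announcement value minus the consensus forecast, scaled by the forecast dispersion, and the spread effect is simply the change in the bid-ask spread across the announcement. The quote values below are made up.

```python
def standardized_surprise(actual, forecast, forecast_sd):
    """Surprise element of a macro announcement: actual minus consensus
    forecast, scaled by the cross-forecaster standard deviation."""
    return (actual - forecast) / forecast_sd

def spread_change(pre_bid, pre_ask, post_bid, post_ask):
    """Change in the bid-ask spread from just before to just after an
    announcement: positive means widening (as observed here for government
    Treasury bonds), negative means narrowing (corporate market bonds)."""
    return (post_ask - post_bid) - (pre_ask - pre_bid)

# Hypothetical example: inflation printed at 2.0% against a 1.5% consensus.
surprise = standardized_surprise(2.0, 1.5, 0.25)
widening = spread_change(99.80, 100.00, 99.70, 100.00)
```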

    Unexpected Political News And Its Impact On Stock Returns: Evidence From Karachi Stock Exchange

    The study examines the effect of unexpected political events on stocks listed on the Karachi Stock Exchange. It uses daily closing prices for a sample portfolio of the 50 best-performing stocks on the exchange over a span of almost 26 months, from August 2006 to August 2008. In the empirical analysis, the non-regressive event study methodology was used to calculate abnormal returns on the stocks, with the returns on the market index as a proxy, followed by a paired t-test for significant changes in stock returns. The results indicate that two of the eight events under consideration produced significantly abnormal returns for the sample portfolio. In all eight events, however, a few individual stocks showed significantly abnormal returns. The market was also observed to be extremely volatile, diluting the effect of each event soon after it occurred. The stock market is so accustomed to constant changes in the country's political conditions that most stocks did not mirror the effect of these incidents in their returns.
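
    The methodology described above can be sketched as follows. This is a minimal, stdlib-only illustration of the two steps, not the paper's code: market-adjusted abnormal returns (stock return minus the market-index return) followed by a paired t-statistic comparing pre- and post-event windows. The sample return series are invented for illustration.

```python
import statistics

def abnormal_returns(stock_returns, market_returns):
    """Market-adjusted abnormal return: stock return minus the
    market-index return for the same day."""
    return [s - m for s, m in zip(stock_returns, market_returns)]

def paired_t(pre, post):
    """Paired t-statistic on matched pre- vs post-event abnormal returns:
    mean of the differences divided by its standard error."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    return statistics.mean(diffs) / (statistics.stdev(diffs) / n ** 0.5)

# Hypothetical two-day window for one stock against the market index.
ar = abnormal_returns([0.021, 0.012], [0.010, 0.002])
t_stat = paired_t([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])
```

    A large absolute t-statistic would indicate that the event shifted abnormal returns significantly; in the study, only two of the eight events cleared that bar at the portfolio level.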

    Understanding HTML with Large Language Models

    Large language models (LLMs) have shown exceptional performance on a variety of natural language tasks. Yet, their capabilities for HTML understanding -- i.e., parsing the raw HTML of a webpage, with applications to automation of web-based tasks, crawling, and browser-assisted retrieval -- have not been fully explored. We contribute HTML understanding models (fine-tuned LLMs) and an in-depth analysis of their capabilities under three tasks: (i) Semantic Classification of HTML elements, (ii) Description Generation for HTML inputs, and (iii) Autonomous Web Navigation of HTML pages. While previous work has developed dedicated architectures and training procedures for HTML understanding, we show that LLMs pretrained on standard natural language corpora transfer remarkably well to HTML understanding tasks. For instance, fine-tuned LLMs are 12% more accurate at semantic classification compared to models trained exclusively on the task dataset. Moreover, when fine-tuned on data from the MiniWoB benchmark, LLMs successfully complete 50% more tasks using 192x less data compared to the previous best supervised model. Out of the LLMs we evaluate, we show evidence that T5-based models are ideal due to their bidirectional encoder-decoder architecture. To promote further research on LLMs for HTML understanding, we create and open-source a large-scale HTML dataset distilled and auto-labeled from CommonCrawl.
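
    The semantic-classification setting presumes turning raw HTML elements into flat text snippets an LLM can consume. As a sketch of that preprocessing step only (the snippet format here is an illustrative assumption, not the paper's), the stdlib `html.parser` can walk a page and emit each element's tag plus its salient attributes:

```python
from html.parser import HTMLParser

class ElementCollector(HTMLParser):
    """Collect each element's tag and salient attributes as a flat text
    snippet, the kind of input a fine-tuned LLM could then classify
    (e.g. as an email field, a submit button, ...)."""
    def __init__(self):
        super().__init__()
        self.snippets = []

    def handle_starttag(self, tag, attrs):
        # Keep only attributes that tend to carry semantic signal.
        attr_text = " ".join(f'{k}="{v}"' for k, v in attrs
                             if k in ("id", "class", "type", "name"))
        self.snippets.append(f"<{tag} {attr_text}>".replace(" >", ">"))

parser = ElementCollector()
parser.feed('<form><input type="text" name="email"><button id="submit">Go</button></form>')
```

    Each snippet (together with some surrounding context) could then be paired with a label such as "email input" or "submit button" to build a fine-tuning dataset.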

    Traditional use of medicinal plants among the tribal communities of Chhota Bhangal, Western Himalaya

    The importance of medicinal plants in traditional healthcare practices, in providing clues to new areas of research, and in biodiversity conservation is now well recognized. However, information on the use of plants for medicine is lacking from many interior areas of the Himalaya. Keeping this in view, the present study was initiated in a tribal-dominated hinterland of the western Himalaya. The study aimed to document the diversity of plant resources used by local people for curing various ailments. Questionnaire surveys, participatory observations, and field visits were planned to elicit information on the uses of various plants. It was found that 35 plant species are commonly used by local people for curing various diseases. In most cases (45%), the underground part of the plant was used. New medicinal uses of Ranunculus hirtellus and Anemone rupicola are reported from this area. Similarly, the preparation of "sik", a traditional recipe served as a nutritious diet to pregnant women, is not documented elsewhere. The implications of developmental activities and changing socio-economic conditions for this traditional knowledge are also discussed.

    An effective nutrient medium for asymbiotic seed germination and large-scale in vitro regeneration of Dendrobium hookerianum, a threatened orchid of northeast India

    Submergence inhibits photosynthesis by terrestrial wetland plants, but less so in species that possess leaf gas films when submerged. Floodwaters are often supersaturated with dissolved CO2, enabling photosynthesis by submerged terrestrial plants, although rates remain well below those in air. This important adaptation, which enhances survival under submerged conditions, is reviewed.

    Towards Generalist Biomedical AI

    Medicine is inherently multimodal, with rich data modalities spanning text, imaging, genomics, and more. Generalist biomedical artificial intelligence (AI) systems that flexibly encode, integrate, and interpret this data at scale can potentially enable impactful applications ranging from scientific discovery to care delivery. To enable the development of these models, we first curate MultiMedBench, a new multimodal biomedical benchmark. MultiMedBench encompasses 14 diverse tasks such as medical question answering, mammography and dermatology image interpretation, radiology report generation and summarization, and genomic variant calling. We then introduce Med-PaLM Multimodal (Med-PaLM M), our proof of concept for a generalist biomedical AI system. Med-PaLM M is a large multimodal generative model that flexibly encodes and interprets biomedical data including clinical language, imaging, and genomics with the same set of model weights. Med-PaLM M reaches performance competitive with or exceeding the state of the art on all MultiMedBench tasks, often surpassing specialist models by a wide margin. We also report examples of zero-shot generalization to novel medical concepts and tasks, positive transfer learning across tasks, and emergent zero-shot medical reasoning. To further probe the capabilities and limitations of Med-PaLM M, we conduct a radiologist evaluation of model-generated (and human) chest X-ray reports and observe encouraging performance across model scales. In a side-by-side ranking on 246 retrospective chest X-rays, clinicians express a pairwise preference for Med-PaLM M reports over those produced by radiologists in up to 40.50% of cases, suggesting potential clinical utility. While considerable work is needed to validate these models in real-world use cases, our results represent a milestone towards the development of generalist biomedical AI systems.

    Large Language Models Encode Clinical Knowledge

    Large language models (LLMs) have demonstrated impressive capabilities in natural language understanding and generation, but the quality bar for medical and clinical applications is high. Today, attempts to assess models' clinical knowledge typically rely on automated evaluations on limited benchmarks. There is no standard to evaluate model predictions and reasoning across a breadth of tasks. To address this, we present MultiMedQA, a benchmark combining six existing open question answering datasets spanning professional medical exams, research, and consumer queries; and HealthSearchQA, a new free-response dataset of medical questions searched online. We propose a framework for human evaluation of model answers along multiple axes including factuality, precision, possible harm, and bias. In addition, we evaluate PaLM (a 540-billion parameter LLM) and its instruction-tuned variant, Flan-PaLM, on MultiMedQA. Using a combination of prompting strategies, Flan-PaLM achieves state-of-the-art accuracy on every MultiMedQA multiple-choice dataset (MedQA, MedMCQA, PubMedQA, MMLU clinical topics), including 67.6% accuracy on MedQA (US Medical License Exam questions), surpassing prior state-of-the-art by over 17%. However, human evaluation reveals key gaps in Flan-PaLM responses. To resolve this we introduce instruction prompt tuning, a parameter-efficient approach for aligning LLMs to new domains using a few exemplars. The resulting model, Med-PaLM, performs encouragingly, but remains inferior to clinicians. We show that comprehension, recall of knowledge, and medical reasoning improve with model scale and instruction prompt tuning, suggesting the potential utility of LLMs in medicine. Our human evaluations reveal important limitations of today's models, reinforcing the importance of both evaluation frameworks and method development in creating safe, helpful LLMs for clinical applications.

    PaLM: Scaling Language Modeling with Pathways

    Large language models have been shown to achieve remarkable performance across a variety of natural language tasks using few-shot learning, which drastically reduces the number of task-specific training examples needed to adapt the model to a particular application. To further our understanding of the impact of scale on few-shot learning, we trained a 540-billion parameter, densely activated, Transformer language model, which we call the Pathways Language Model (PaLM). We trained PaLM on 6144 TPU v4 chips using Pathways, a new ML system which enables highly efficient training across multiple TPU Pods. We demonstrate continued benefits of scaling by achieving state-of-the-art few-shot learning results on hundreds of language understanding and generation benchmarks. On a number of these tasks, PaLM 540B achieves breakthrough performance, outperforming the finetuned state-of-the-art on a suite of multi-step reasoning tasks, and outperforming average human performance on the recently released BIG-bench benchmark. A significant number of BIG-bench tasks showed discontinuous improvements from model scale, meaning that performance steeply increased as we scaled to our largest model. PaLM also has strong capabilities in multilingual tasks and source code generation, which we demonstrate on a wide array of benchmarks. We additionally provide a comprehensive analysis on bias and toxicity, and study the extent of training data memorization with respect to model scale. Finally, we discuss the ethical considerations related to large language models and discuss potential mitigation strategies.