
    AI-assisted Protective Action: Study of ChatGPT as an Information Source for a Population Facing Climate Hazards

    ChatGPT has been emerging as a novel information source, and the public is likely to seek information from ChatGPT when taking protective actions against climate hazards such as floods and hurricanes. The objective of this study is to evaluate the accuracy and completeness of responses generated by ChatGPT when individuals seek information about taking protective actions. The survey analysis indicated that: (1) emergency managers considered the responses provided by ChatGPT to be accurate and complete to a great extent; (2) evaluations statistically verified that the generated information was accurate but lacked completeness, implying that the information ChatGPT does provide is accurate but not exhaustive; and (3) information generated for prompts related to hazard insurance received the highest evaluation, whereas information related to evacuation received the lowest. This last result implies that, for complex, context-specific protective actions (such as evacuation), the information was rated as less complete than for other protective actions. The results also showed that respondents' perception of the utility of AI-assistive technologies (such as ChatGPT) for emergency preparedness and response improved after taking the survey and evaluating the information generated by ChatGPT. The findings provide an empirical evaluation of the utility of AI-assistive technologies for improving public decision-making and protective actions in disasters.

    Weaving Equity into Infrastructure Resilience Research and Practice: A Decadal Review and Future Directions

    After about a decade of research in this domain, what is missing is a systematic overview of the research agenda across different infrastructures and hazards, and it is now imperative to evaluate the current progress and gaps. This paper presents a systematic review of the equity literature on infrastructure disrupted by natural hazard events. Following a systematic review protocol, we collected, screened, and evaluated almost 3,000 studies. Our analysis focuses on the intersections within an eight-dimensional assessment framework that distinguishes the focus of each study, its methodological approach, and its equity dimensions (distributional-demographic, distributional-spatial, procedural, and capacity equity). To conceptualize the intersection of these dimensions, we refer to pathways, which identify how equity is constructed, analyzed, and used. Significant findings show that (1) interest in equity in infrastructure resilience has increased exponentially, (2) the majority of studies are in the US and, by extension, the global north, and (3) most data collection relies on descriptive and open data, and none of the international studies use location-intelligence data. The most prominent equity conceptualization is distributional equity, such as disproportionate impacts on vulnerable populations and spaces. The most common pathways to studying equity connect distributional equity to power, water, and transportation infrastructure in response to floods and hurricanes. Other equity concepts and pathways, such as connections of equity to decision-making and to building household capacity, remain understudied. Future research directions include quantifying the social costs of infrastructure disruptions and better integrating equity into resilience decision-making. Comment: 37 pages, 11 figures, 2 tables

    Freeway Traffic Density and On-Ramp Queue Control via ILC Approach

    A new queue-length-information-fused iterative learning control approach (QLIF-ILC) is presented for freeway traffic ramp metering, achieving better performance by utilizing error information from the on-ramp queue length. QLIF-ILC consists of two parts: an iterative feedforward part that updates the control input by learning from control data recorded in previous trials, and a current feedback part that uses the tracking error of the current iteration to stabilize the controlled plant. The two parts are combined in a complementary manner to enhance the robustness of the proposed QLIF-ILC. A systematic approach is developed to analyze the convergence and robustness of the proposed learning scheme. Simulation results further demonstrate the effectiveness of the proposed QLIF-ILC.
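The feedforward-plus-feedback structure described in the abstract can be illustrated with a generic ILC sketch: a feedforward term learned from the previous trial's error, combined with within-trial feedback. The first-order plant, reference density, and gains below are illustrative assumptions, not the paper's QLIF-ILC design.

```python
import numpy as np

T, trials = 50, 30
rho_ref = np.full(T, 30.0)        # target density (veh/km), assumed
a, b = 0.9, 0.5                   # toy linear plant: rho[t+1] = a*rho[t] + b*u[t]
L_ff, K_fb = 0.8, 0.3             # learning and feedback gains, assumed

u = np.zeros(T)                   # feedforward input, refined trial by trial
for k in range(trials):
    rho = np.zeros(T + 1)
    for t in range(T):
        fb = K_fb * (rho_ref[t] - rho[t])       # current-trial feedback part
        rho[t + 1] = a * rho[t] + b * (u[t] + fb)
    e = rho_ref - rho[1:]                        # tracking error of this trial
    u = u + L_ff * e                             # feedforward learning update

print(round(float(np.abs(e).max()), 3))          # residual error after learning
```

With these gains the per-trial error contraction can be checked to be below one, so the residual tracking error shrinks across trials, mirroring the complementary roles of the two parts in the abstract.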

    Gallic acid caused cultured mice TM4 Sertoli cells apoptosis and necrosis

    Objective: The study was designed to determine the cytotoxic effect of gallic acid (GA), obtained by the hydrolysis of tannins, on apoptosis in mouse TM4 Sertoli cells. Methods: Non-tumorigenic mouse TM4 Sertoli cells were treated with different concentrations of GA for 24 h. After treatment, cell viability was evaluated using WST-1, and mitochondrial dysfunction, apoptosis, and necrosis were detected using JC-1, Hoechst 33342, and propidium iodide staining. The expression levels of Cyclin B1, proliferating cell nuclear antigen (PCNA), Bcl-2-associated X protein (BAX), and Caspase-3 were also detected by quantitative real-time polymerase chain reaction and Western blotting. Results: 20 to 400 μM GA inhibited the viability of TM4 Sertoli cells in a dose-dependent manner. Treatment with 400 μM GA significantly inhibited PCNA and Cyclin B1 expression but up-regulated BAX and Caspase-3 expression, caused mitochondrial membrane depolarization, activated Caspase-3, and induced DNA damage, thus markedly increasing the number of dead cells. Conclusion: Our findings show that GA can disrupt mitochondrial function and cause TM4 cells to undergo apoptosis and necrosis.

    Self-Supervised Video Hashing with Hierarchical Binary Auto-encoder

    Existing video hash functions are built on three isolated stages: frame pooling, relaxed learning, and binarization. These stages have not adequately explored the temporal order of video frames in a joint binary optimization model, resulting in severe information loss. In this paper, we propose a novel unsupervised video hashing framework, dubbed Self-Supervised Video Hashing (SSVH), that captures the temporal nature of videos in an end-to-end learning-to-hash fashion. We specifically address two central problems: 1) how to design an encoder-decoder architecture to generate binary codes for videos; and 2) how to equip the binary codes with the ability to support accurate video retrieval. We design a hierarchical binary autoencoder to model the temporal dependencies in videos at multiple granularities and embed the videos into binary codes with fewer computations than a stacked architecture. We then encourage the binary codes to simultaneously reconstruct the visual content and the neighborhood structure of the videos. Experiments on two real-world datasets (FCVID and YFCC) show that our SSVH method significantly outperforms the state-of-the-art methods and achieves the best current performance on the task of unsupervised video retrieval.
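The retrieval side of any binary-hashing scheme like the one described can be sketched generically: continuous embeddings are binarized (e.g., by sign), and candidates are ranked by Hamming distance. The random embeddings below are stand-ins, not features from the SSVH encoder.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 100, 64                       # database size and code length, assumed
emb = rng.standard_normal((n, d))    # stand-in for encoder outputs
codes = (emb > 0).astype(np.uint8)   # binarization: b = sign(z)

def hamming_rank(query_code, db_codes):
    # elementwise inequality then a count gives the Hamming distance
    # from the query to every database code; argsort ranks candidates
    dists = np.count_nonzero(db_codes != query_code, axis=1)
    return np.argsort(dists)

ranking = hamming_rank(codes[0], codes)
print(ranking[0])   # the query itself ranks first at Hamming distance 0
```

Binary codes make this ranking cheap (XOR plus popcount on packed bits in a real system), which is why hashing methods scale to large video databases.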

    SUN: Exploring Intrinsic Uncertainties in Text-to-SQL Parsers

    This paper aims to improve the performance of text-to-SQL parsing by exploring the intrinsic uncertainties in neural network based approaches (called SUN). From the data uncertainty perspective, it is indisputable that a single SQL query can be learned from multiple semantically equivalent questions. Different from previous methods that are limited to one-to-one mappings, we propose a data uncertainty constraint to explore the complementary semantic information underlying multiple semantically equivalent questions (many-to-one) and learn robust feature representations with reduced spurious associations. In this way, we reduce the sensitivity of the learned representations and improve the robustness of the parser. From the model uncertainty perspective, there is often structural information (dependence) among the weights of neural networks. To improve the generalizability and stability of neural text-to-SQL parsers, we propose a model uncertainty constraint that refines the query representations by enforcing the output representations of different perturbed encoding networks to be consistent with each other. Extensive experiments on five benchmark datasets demonstrate that our method significantly outperforms strong competitors and achieves new state-of-the-art results. For reproducibility, we release our code and data at https://github.com/AlibabaResearch/DAMO-ConvAI/tree/main/sunsql. Comment: Accepted at COLING 202
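The model uncertainty constraint described above, enforcing agreement between perturbed encoder passes, follows the general pattern of consistency regularization: encode the same input twice under different dropout masks and penalize disagreement. The tiny linear "encoder" and dropout rate below are illustrative assumptions, not SUN's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out = 16, 8
W = rng.standard_normal((d_in, d_out)) * 0.1   # stand-in encoder weights

def encode(x, rng, p_drop=0.1):
    # each call draws a fresh dropout mask, giving a perturbed encoder
    mask = rng.random(d_in) > p_drop
    return (x * mask) @ W

x = rng.standard_normal(d_in)        # stand-in for a question embedding
z1 = encode(x, rng)                  # two stochastic forward passes
z2 = encode(x, rng)

# consistency term: penalize disagreement between the two representations;
# in training this would be added to the main parsing loss
consistency_loss = float(np.mean((z1 - z2) ** 2))
print(consistency_loss >= 0.0)
```

Minimizing such a term during training pushes the representation to be stable under weight perturbations, which is the generalizability and stability effect the abstract describes.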