    Natech Disasters and Climate Change: Extensive Spatial Modeling of the Frequency and Variability of Tropical Cyclone-Triggered Natech Events in the United States under Diverse Climate Scenarios

    Doctoral dissertation, Doctor of Philosophy (Engineering), Kyoto University, degree no. Kō 23170 (Eng. no. 4814; library catalog Shinsei||Kō||1752, University Library). Department of Urban Management, Graduate School of Engineering, Kyoto University. Examination committee: Prof. CRUZ Ana Maria (chief examiner), Prof. Nobuhiro Uno, Assoc. Prof. Muneta Yokomatsu. Qualified under Article 4, Paragraph 1 of the Degree Regulations.

    Programmable Biomolecule Assembly and Activity in Prepackaged BioMEMS

    Antibiotic resistance is an increasing public health concern, and few new antibacterial drugs have been developed that circumvent it. Quorum sensing (QS) is a newly discovered signaling system mediated by extracellular chemicals known as "autoinducers", which can coordinate population-scale changes in gene regulation when the number of cells reaches a "quorum" level. The capability to intercept and rewire the biosynthesis pathway of autoinducer-2 (AI-2), a universal chemical signaling molecule, opens the door to discovering novel antimicrobial drugs able to bypass antibiotic resistance. In this research, chitosan-mediated in situ biomolecule assembly has been demonstrated as a facile approach to directing the assembly of biological components into a prefabricated, systematically controlled bio-microelectromechanical system (bioMEMS). Our bioMEMS device enables post-fabrication, signal-guided assembly of labile biomolecules such as proteins and DNA onto localized inorganic surfaces inside microfluidic channels with spatial and temporal programmability. In particular, the programmable assembly and enzymatic activity of the metabolic pathway enzyme Pfs, one of the two AI-2 synthases, have been demonstrated as an important step toward reconstructing and interrogating the AI-2 synthesis pathway in the bioMEMS environment. Additionally, the bioMEMS has been optimized for studies of metabolic pathway enzymes by implementing a novel packaging technique and an experimental strategy that improve the signal-to-background ratio of site-specific enzymatic reactions in the device. I envision that the demonstrated technologies represent a key step toward a bioMEMS technology suitable for supporting metabolic engineering research and development.

    Quantifying Spatiotemporal Dynamics of Solar Radiation over the Northeast China Based on ACO-BPNN Model and Intensity Analysis

    Reliable information on the spatiotemporal dynamics of solar radiation plays a crucial role in studies of global climate change. In this study, a backpropagation neural network (BPNN) optimized with an Ant Colony Optimization (ACO) algorithm was developed, yielding an ACO-BPNN model that demonstrated superior performance for simulating solar radiation over Northeast China compared with traditional BPNN modelling. On this basis, we applied an intensity analysis to investigate the spatiotemporal variation of solar radiation over the study region from 1982 to 2010 at three levels: interval, category, and conversion. The findings revealed that (1) the solar radiation resource in the study region increased from the 1980s to the 2000s, and the average annual rate of variation from the 1980s to the 1990s was lower than that from the 1990s to the 2000s; and (2) the gains and losses of solar radiation differed across levels: the poor, normal, and comparatively abundant levels transitioned to higher levels, whereas the abundant level transitioned to lower levels. We believe our findings contribute to implementing ad hoc energy management strategies to optimize the use of solar radiation resources and provide scientific suggestions for policy planning.
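The ACO component can be illustrated with a minimal sketch: pheromone-weighted sampling over a small discrete search space, here selecting a hidden-layer size against a toy objective. This is not the paper's model; the function names and the objective are hypothetical stand-ins for BPNN validation accuracy.

```python
import random

random.seed(42)

def aco_select(options, score_fn, n_ants=10, n_iters=20, rho=0.1):
    # Minimal ant-colony-style search over a small discrete space:
    # pheromone on each option biases sampling; evaporation plus
    # reinforcement of the best option found so far concentrates the
    # colony on high-scoring choices.
    pher = {o: 1.0 for o in options}
    best, best_score = None, float("-inf")
    for _ in range(n_iters):
        for _ in range(n_ants):
            choice = random.choices(options, weights=[pher[o] for o in options])[0]
            s = score_fn(choice)
            if s > best_score:
                best, best_score = choice, s
        for o in options:
            pher[o] *= 1 - rho    # evaporation
        pher[best] += best_score  # reinforcement
    return best

# Toy objective standing in for BPNN validation accuracy: the
# "network" scores best with 12 hidden units.
best = aco_select([4, 8, 12, 16, 32], lambda h: 1.0 / (1 + abs(h - 12)))
print(best)
```

In the paper's setting, the score function would train and validate a BPNN for each candidate configuration rather than evaluate a closed-form objective.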

    Deep Learning-enabled Spatial Phase Unwrapping for 3D Measurement

    Full text link
    In terms of 3D imaging speed and system cost, a single-camera system projecting single-frequency patterns is the ideal option among the proposed Fringe Projection Profilometry (FPP) systems. Such a system requires a robust spatial phase unwrapping (SPU) algorithm, yet robust SPU remains a challenge in complex scenes. Quality-guided SPU algorithms need more efficient ways to identify unreliable points in phase maps before unwrapping, while end-to-end deep learning SPU methods face generality and interpretability problems. This paper proposes a hybrid method combining deep learning and traditional path following for robust SPU in FPP. The hybrid scheme demonstrates better robustness than traditional quality-guided SPU methods, better interpretability than end-to-end deep learning schemes, and generality on unseen data. Experiments on real datasets covering multiple illumination conditions and multiple FPP systems differing in image resolution, number of fringes, fringe direction, and optical wavelength verify the effectiveness of the proposed method.
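The path-following step underlying such SPU schemes can be illustrated in one dimension with an Itoh-style unwrap (a minimal sketch only; the paper's quality-guided, learning-assisted algorithm operates on 2D phase maps, and the function name here is hypothetical):

```python
import numpy as np

def unwrap_1d(phase, period=2 * np.pi):
    # Itoh's method: whenever the difference between neighboring
    # samples exceeds half a period, add or subtract whole periods
    # to remove the artificial jump. Assumes the true phase changes
    # by less than half a period per sample.
    out = np.asarray(phase, dtype=float).copy()
    for i in range(1, out.size):
        d = out[i] - out[i - 1]
        out[i] -= period * np.round(d / period)
    return out

true_phase = np.linspace(0, 6 * np.pi, 50)
wrapped = np.angle(np.exp(1j * true_phase))  # wrap into (-pi, pi]
print(np.allclose(unwrap_1d(wrapped), true_phase))
```

Quality-guided variants extend this by ordering the path so that reliable pixels are unwrapped first; identifying the unreliable ones is exactly the step the paper delegates to deep learning.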

    Robust image steganography against lossy JPEG compression based on embedding domain selection and adaptive error correction

    Transmitting images on social networks has become routine, which makes them a convenient carrier for covert communication. Traditional steganography algorithms cannot reliably convey secret information because social network channels apply lossy operations to images, such as JPEG compression. Previous studies tried to solve this problem by enhancing robustness or by making the cover adapt to the channel processing. In this study, we propose a robust image steganography method against lossy JPEG compression based on embedding domain selection and adaptive error correction. To improve anti-steganalysis performance, the embedding domain is selected adaptively. To increase robustness while limiting the impact on anti-steganalysis performance, the error correction capacity of the error correction code is adaptively adjusted to eliminate redundancy. Experimental results show that the proposed method achieves better anti-steganalysis performance and robustness.
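The adaptive error-correction idea can be sketched with a simple repetition code, used here purely as a stand-in for whatever code the paper actually employs: pick the smallest redundancy whose per-bit majority-vote failure probability falls below a target for the estimated channel error rate. All names and parameters below are illustrative assumptions.

```python
import math

def repetition_encode(bits, r):
    return [b for b in bits for _ in range(r)]

def repetition_decode(coded, r):
    # majority vote over each group of r copies
    return [int(sum(coded[i:i + r]) > r // 2) for i in range(0, len(coded), r)]

def pick_rate(p_err, target=1e-3):
    # Smallest odd repetition factor whose majority vote fails with
    # probability below `target` per message bit, given bit-flip
    # probability p_err on the channel.
    r = 1
    while True:
        fail = sum(math.comb(r, k) * p_err**k * (1 - p_err)**(r - k)
                   for k in range(r // 2 + 1, r + 1))
        if fail < target:
            return r
        r += 2

msg = [1, 0, 1, 1, 0]
r = pick_rate(0.05)         # adapt redundancy to the channel
coded = repetition_encode(msg, r)
coded[0] ^= 1               # flip one bit to simulate compression damage
print(repetition_decode(coded, r) == msg)
```

A noisier channel estimate raises `r` (more redundancy, lower payload); a cleaner one lowers it, which mirrors the paper's goal of eliminating unnecessary redundancy.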

    Self-Play and Self-Describe: Policy Adaptation with Vision-Language Foundation Models

    Recent progress on vision-language foundation models has brought significant advances in building general-purpose robots. By using pre-trained models to encode the scene and instructions as inputs for decision making, an instruction-conditioned policy can generalize across different objects and tasks. While this is encouraging, the policy still fails in most cases given an unseen task or environment. To adapt the policy to unseen tasks and environments, we explore a new paradigm that leverages pre-trained foundation models with Self-PLAY and Self-Describe (SPLAYD). When deploying the trained policy to a new task or environment, we first let the policy self-play with randomly generated instructions to record demonstrations. While the execution may be wrong, we can use the pre-trained foundation models to accurately self-describe (i.e., re-label or classify) the demonstrations. This automatically provides new demonstration-instruction pairs for policy fine-tuning. We evaluate our method on a broad range of experiments focused on generalization to unseen objects, unseen tasks, unseen environments, and sim-to-real transfer. We show that SPLAYD improves baselines by a large margin in all cases. Our project page is available at https://geyuying.github.io/SPLAYD/.
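The self-play/self-describe loop can be sketched with stub components. Everything below is a hypothetical stand-in: the real system uses a learned robot policy, a physical or simulated environment, and a vision-language model as the describer.

```python
import random

random.seed(0)

INSTRUCTIONS = ["pick up the cube", "open the drawer", "push the ball"]

def rollout(policy, instruction):
    # stub: the policy sometimes executes a different task than asked
    return random.choice(INSTRUCTIONS)

def describe(trajectory):
    # stub for the vision-language model: it labels what actually happened
    return trajectory

def splayd_collect(policy, n_episodes=5):
    # Self-play with random instructions, then re-label each trajectory
    # with the describer's account of what was actually done; the
    # (trajectory, description) pairs become fine-tuning data even when
    # the original instruction was not followed.
    data = []
    for _ in range(n_episodes):
        instr = random.choice(INSTRUCTIONS)  # may be executed wrongly
        traj = rollout(policy, instr)
        data.append((traj, describe(traj)))
    return data

pairs = splayd_collect(None)
print(all(desc == traj for traj, desc in pairs))
```

The key point the sketch captures is that the label comes from the describer, not from the (possibly unfulfilled) instruction, so every rollout yields a usable training pair.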

    Cloud Resource Management With Turnaround Time Driven Auto-Scaling

    Cloud resource management research and techniques have received considerable attention in recent years. In particular, numerous recent studies have focused on determining the relationship between server-side system information and performance experience in order to reduce resource wastage. However, the genuine experiences of clients cannot be readily understood from collected server-side information alone. In this paper, a cloud resource management framework with two novel turnaround-time-driven auto-scaling mechanisms is proposed for ensuring the stability of service performance. In the first mechanism, turnaround time monitors are deployed on the client side instead of the more traditional server side, and the information collected outside the server drives a dynamic auto-scaling operation. In the second mechanism, a schedule-based auto-scaling preconfiguration maker is designed to test and identify the amount of resources required in the cloud. The reported experimental results demonstrate that, using our framework, stable service quality can be ensured and a certain amount of quality variation can be handled, allowing the stability of service performance to be increased.
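The first mechanism's core decision, scaling on client-measured turnaround time, can be sketched as a simple threshold rule. This is an illustrative sketch only; the thresholds, band, and function name are assumptions, not the paper's algorithm.

```python
def autoscale(servers, turnaround_ms, target_ms=200.0,
              upper=1.2, lower=0.6, min_servers=1, max_servers=20):
    # Scale out when the client-measured turnaround time drifts above
    # the target band, scale in when it falls well below it, otherwise
    # leave the fleet unchanged. The asymmetric band (1.2x vs 0.6x)
    # avoids oscillating around the target.
    if turnaround_ms > target_ms * upper:
        return min(servers + 1, max_servers)
    if turnaround_ms < target_ms * lower:
        return max(servers - 1, min_servers)
    return servers

print(autoscale(3, 260.0))  # slow responses: add a server
print(autoscale(3, 90.0))   # fast responses: remove a server
print(autoscale(3, 200.0))  # within band: unchanged
```

The distinguishing feature relative to conventional autoscalers is only where `turnaround_ms` comes from: it is measured at the client, so it reflects the experience the server-side metrics cannot see.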

    R Functions for Sample Size and Probability Calculations for Assessing Consistency of Treatment Effects in Multi-Regional Clinical Trials

    Multi-regional clinical trials have been widely used for efficient global new drug development. Due to potential heterogeneity of patient populations, it is critical to evaluate the consistency of treatment effects across regions in a multi-regional trial in order to determine the applicability of the overall treatment effect to patients in individual regions. Quan et al. (2010) proposed definitions for assessing the consistency of treatment effects in multi-regional trials. To facilitate the application of their ideas to the design of multi-regional trials, in this paper we provide R functions for calculating the unconditional and conditional probabilities of demonstrating consistency in relation to the overall and regional sample sizes and the anticipated treatment effects. Detailed step-by-step instructions and trial examples are also provided to illustrate the application of these R functions.
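The paper's code is in R; as a rough illustration of the kind of unconditional probability such functions compute, here is a Monte Carlo sketch in Python under an assumed normal model for regional effect estimates and a Quan et al.-style criterion that every regional estimate retain a fraction `pi` of the overall estimate. The model, names, and parameters are all illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def consistency_probability(delta, sigma, n_regions, n_per_region,
                            pi=0.5, n_sim=100_000):
    # Simulate observed regional treatment effects as normal with
    # variance 2*sigma^2/n per region (two-arm comparison, equal
    # allocation), then estimate the probability that every regional
    # estimate is at least a fraction `pi` of the overall estimate.
    se = sigma * np.sqrt(2.0 / n_per_region)
    regional = rng.normal(delta, se, size=(n_sim, n_regions))
    overall = regional.mean(axis=1)
    ok = (regional >= pi * overall[:, None]).all(axis=1)
    return ok.mean()

p = consistency_probability(delta=0.3, sigma=1.0,
                            n_regions=4, n_per_region=100)
print(0.0 <= p <= 1.0)
```

Increasing the regional sample sizes shrinks the standard error, which raises this probability toward one; that trade-off between sample size and consistency probability is exactly what the paper's R functions are designed to quantify.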

    Reasoning in Conversation: Solving Subjective Tasks through Dialogue Simulation for Large Language Models

    Large Language Models (LLMs) have achieved remarkable performance on objective tasks such as open-domain question answering and mathematical reasoning, which can often be solved by recalling learned factual knowledge or by chain-of-thought style reasoning. However, we find that LLM performance on subjective tasks, such as metaphor recognition and dark humor detection, remains unsatisfactory. Compared with objective tasks, subjective tasks depend more on interpretation or emotional response than on a universally accepted reasoning pathway. Based on these task characteristics and the strong dialogue-generation capabilities of LLMs, we propose RiC (Reasoning in Conversation), a method that solves subjective tasks through dialogue simulation. The motivation of RiC is to mine useful contextual information by simulating dialogues instead of supplying chain-of-thought style rationales, thereby surfacing potentially useful knowledge behind dialogues for producing final answers. We evaluate both API-based and open-source LLMs, including GPT-4, ChatGPT, and OpenChat, across twelve tasks. Experimental results show that RiC yields significant improvements over various baselines.
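The RiC procedure can be sketched as: simulate a short two-speaker dialogue about the input, then answer conditioned on that dialogue. The `chat` callable below is a hypothetical stand-in for any LLM completion API, and the prompt templates are illustrative, not the paper's.

```python
def ric_answer(question, chat, n_turns=4):
    # Simulate a short dialogue between two speakers about the input,
    # then ask for a final answer conditioned on that dialogue; the
    # dialogue serves as mined context rather than an explicit
    # chain-of-thought rationale.
    dialogue = []
    for i in range(n_turns):
        speaker = "A" if i % 2 == 0 else "B"
        prompt = ("Topic: " + question + "\n"
                  + "\n".join(dialogue) + "\n" + speaker + ":")
        dialogue.append(speaker + ": " + chat(prompt))
    final_prompt = ("Dialogue:\n" + "\n".join(dialogue)
                    + "\nBased on the dialogue, answer: " + question)
    return chat(final_prompt)

# stub LLM for demonstration: always replies with the same line
print(ric_answer("Is the metaphor ironic?", lambda prompt: "yes"))
```

With a real model behind `chat`, each simulated turn can surface interpretations or emotional readings of the input that a direct question would miss, which is the paper's stated motivation.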