
    Variance-Constrained H∞ Finite-Horizon Filtering for Multi-Rate Time-Varying Networked Systems Based on Stochastic Protocols

    In this paper, the variance-constrained H∞ finite-horizon filtering problem is investigated for a class of time-varying nonlinear systems under a multi-rate communication network and a stochastic protocol (SP). The SP determines which sensor obtains access to the multi-rate communication network, thereby relieving the communication burden. A novel mapping technique is applied to characterize the random switching behavior of the data transmission that results from using the SP over the multi-rate communication network. By a relaxation method, sufficient conditions are derived for the existence of a finite-horizon filter satisfying both the prescribed H∞ performance and the covariance requirement on the filtering errors, and the corresponding filter gains are obtained by solving linear matrix inequalities. Finally, the validity and effectiveness of the proposed filtering scheme are verified by numerical simulation.
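
    The filter existence conditions above are certified by linear matrix inequalities. As a minimal, self-contained illustration of that LMI machinery, the sketch below uses cvxpy to check a discrete-time H∞ bound via the standard bounded real lemma; it is not the paper's variance-constrained finite-horizon condition, and the system matrices and bound γ are hypothetical.

```python
import cvxpy as cp
import numpy as np

# Hypothetical system x+ = A x + B w, z = C x (not from the paper).
A = np.array([[0.8, 0.1], [0.0, 0.7]])   # state matrix
B = np.array([[0.5], [1.0]])              # disturbance input matrix
C = np.array([[1.0, 0.0]])                # performance output matrix
gamma = 1.5                               # candidate H-infinity bound

n, m = A.shape[0], B.shape[1]
P = cp.Variable((n, n), symmetric=True)

# Discrete-time bounded real lemma: this LMI is feasible iff the
# l2-gain from disturbance w to output z is below gamma.
M = cp.bmat([
    [A.T @ P @ A - P + C.T @ C, A.T @ P @ B],
    [B.T @ P @ A,               B.T @ P @ B - gamma**2 * np.eye(m)],
])
prob = cp.Problem(cp.Minimize(0),
                  [P >> 1e-6 * np.eye(n), M << -1e-6 * np.eye(n + m)])
prob.solve(solver=cp.SCS)
print("H-infinity bound certified:", prob.status == cp.OPTIMAL)
```

    If the solver reports feasibility, P certifies the gain bound; the paper's filter design solves a structured, time-varying variant of this kind of feasibility problem.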

    A Preliminary Exploration of YouTubers' Use of Generative-AI in Content Creation

    Content creators increasingly use generative artificial intelligence (Gen-AI) on platforms such as YouTube, TikTok, Instagram, and various blogging sites to produce imaginative images, AI-generated videos, and articles written with Large Language Models (LLMs). Despite its growing popularity, the specific domains where AI-generated content is applied, and the methodologies content creators employ with Gen-AI tools during the creation process, remain underexplored. This study explores this emerging area through a qualitative analysis of 68 YouTube videos demonstrating Gen-AI usage. Our research focuses on identifying the content domains, the variety of tools used, the activities performed, and the nature of the final products generated with Gen-AI in the context of user-generated content.
    Comment: Accepted at CHI LBW 202

    Fault Detection of Networked Control Systems Based on Sliding Mode Observer

    This paper is concerned with the network-based fault detection problem for a class of nonlinear discrete-time networked control systems with multiple communication delays and bounded disturbances. First, a sliding-mode-based nonlinear discrete observer is proposed. Sufficient conditions for the asymptotic stability of the sliding motion on a designed surface are then derived via the linear matrix inequality (LMI) approach. Next, a discrete-time sliding-mode fault observer is designed that guarantees the discrete-time sliding-mode reaching condition on the specified sliding surface. Finally, an illustrative example shows the usefulness and effectiveness of the proposed design method.
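
    As a toy illustration of the general observer structure (not the paper's design), the sketch below simulates a discrete-time sliding-mode observer whose output-error residual flags an additive actuator fault; all matrices, the linear gain L, and the switching gain rho are hypothetical placeholders, with L assumed precomputed (e.g., via an LMI).

```python
import numpy as np

# Hypothetical discrete-time plant x+ = A x + B (u + f), y = C x.
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[1.0], [0.5]])              # chosen so that C @ B != 0
C = np.array([[1.0, 0.0]])
L = np.array([[0.5], [0.2]])              # assumed stabilizing observer gain
rho = 0.3                                  # switching gain

def fault(k):
    return 0.8 if k >= 60 else 0.0         # abrupt actuator fault at k = 60

x = np.array([[0.0], [0.0]])
xh = np.array([[0.5], [-0.5]])             # observer with a wrong initial guess
residuals = []
for k in range(120):
    u = np.array([[np.sin(0.1 * k)]])
    e_y = C @ x - C @ xh                   # output estimation error
    # Sliding-mode observer: linear correction plus a discontinuous
    # term driving the output error onto the surface e_y = 0.
    xh = A @ xh + B @ u + L @ e_y + rho * B @ np.sign(e_y)
    x = A @ x + B @ (u + fault(k))
    residuals.append(abs(float(e_y[0, 0])))

# A persistent jump in the residual after k = 60 flags the fault,
# since the fault magnitude exceeds the switching gain rho.
print("mean residual before fault:", np.mean(residuals[20:60]))
print("mean residual after fault: ", np.mean(residuals[60:]))
```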

    Semantic Communications for Image Recovery and Classification via Deep Joint Source and Channel Coding

    With the recent advancements in edge artificial intelligence (AI), future sixth-generation (6G) networks need to support new AI tasks such as classification and clustering in addition to data recovery. Motivated by the success of deep learning, semantic-aware and task-oriented communications with deep joint source and channel coding (JSCC) have emerged as a paradigm shift in 6G, away from conventional data-oriented communications with separate source and channel coding (SSCC). However, most existing works focus on deep JSCC designs for either data recovery or AI task execution alone, and such designs do not transfer to other, unintended tasks. In contrast, this paper investigates JSCC semantic communications that support multi-task services by performing image data recovery and classification simultaneously. First, we propose a new end-to-end deep JSCC framework that unifies coding rate reduction maximization and mean square error (MSE) minimization in the loss function. Maximizing the coding rate reduction encourages discriminative features, so classification can be performed directly in the feature space, while minimizing the MSE encourages informative features for high-quality image recovery. Next, to further improve robustness against varying wireless channels, we propose a new gated deep JSCC design in which a gate network adaptively prunes the output features, adjusting their dimension to the channel conditions. Finally, extensive numerical experiments validate the performance of the proposed deep JSCC designs against various benchmark schemes.
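
    A minimal PyTorch sketch of the gated-pruning idea follows; the layer sizes, names, and gate placement are illustrative assumptions, not the paper's architecture. A gate network maps the channel SNR to per-feature soft masks, so low-gate dimensions are effectively pruned before transmission over a noisy channel.

```python
import torch
import torch.nn as nn

class GatedJSCCEncoder(nn.Module):
    """Toy SNR-gated JSCC encoder (hypothetical sizes)."""
    def __init__(self, in_dim=784, feat_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, feat_dim),
        )
        self.gate = nn.Sequential(           # SNR-conditioned gate net
            nn.Linear(1, 32), nn.ReLU(),
            nn.Linear(32, feat_dim), nn.Sigmoid(),
        )

    def forward(self, x, snr_db):
        z = self.encoder(x)
        mask = self.gate(snr_db)             # per-feature gates in (0, 1)
        return z * mask                      # low gates ~ pruned dimensions

def awgn(z, snr_db):
    """Pass features through an AWGN channel at the given SNR (dB)."""
    power = z.pow(2).mean()
    noise_var = power / (10 ** (snr_db / 10))
    return z + torch.sqrt(noise_var) * torch.randn_like(z)

enc = GatedJSCCEncoder()
x = torch.randn(8, 784)                      # a batch of flattened images
snr = torch.full((8, 1), 5.0)                # 5 dB channel condition
z = awgn(enc(x, snr), 5.0)
print(z.shape)                               # torch.Size([8, 64])
```

    In training, such an encoder would be paired with decoders for both tasks and the combined rate-reduction/MSE loss described above; the gate lets one model adapt its effective feature dimension to channel quality.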

    Deep learning methods for protein torsion angle prediction

    Background: Deep learning is one of the most powerful machine learning methods and has achieved state-of-the-art performance in many domains. Since deep learning was introduced to bioinformatics in 2012, it has succeeded in a number of areas such as protein residue-residue contact prediction, secondary structure prediction, and fold recognition. In this work, we developed deep learning methods to improve the prediction of torsion (dihedral) angles of proteins. Results: We designed four deep learning architectures to predict protein torsion angles: a deep neural network (DNN), a deep restricted Boltzmann machine (DRBM), a deep recurrent neural network (DRNN), and a deep recurrent restricted Boltzmann machine (DReRBM), the recurrent variants reflecting that torsion angle prediction is a sequence-related problem. In addition to existing protein features, two new features (the predicted residue contact number and the error distribution of torsion angles extracted from sequence fragments) are used as input to each of the four architectures to predict the phi and psi angles of the protein backbone. The mean absolute error (MAE) of the phi and psi angles predicted by DRNN, DReRBM, DRBM, and DNN is about 20-21° and 29-30°, respectively, on an independent dataset. The MAE of the phi angle is comparable to that of existing methods, while the MAE of the psi angle, at 29°, is 2° lower than existing methods. On the latest CASP12 targets, our methods also achieved performance better than or comparable to a state-of-the-art method. Conclusions: Our experiments demonstrate that deep learning is a valuable method for predicting protein torsion angles. The deep recurrent architecture performs slightly better than the deep feed-forward architecture, and the predicted residue contact number and the error distribution of torsion angles extracted from sequence fragments are useful features for improving prediction accuracy.
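
    One subtlety behind MAE figures like those above is that torsion angles are periodic, so the error must be taken modulo 360°. The abstract does not give the evaluation code; the snippet below is a small sketch of a wrap-around-aware MAE, with made-up angles for illustration.

```python
import numpy as np

def angular_mae(pred_deg, true_deg):
    """Mean absolute error between angles in degrees, accounting for
    periodicity (e.g., -179 deg and 179 deg differ by 2 deg, not 358)."""
    diff = np.abs(pred_deg - true_deg) % 360.0
    return np.mean(np.minimum(diff, 360.0 - diff))

# Hypothetical phi predictions for a short backbone segment.
phi_true = np.array([-60.0, -135.0, 179.0])
phi_pred = np.array([-55.0, -140.0, -178.0])
print(angular_mae(phi_pred, phi_true))   # ~4.3, versus ~122 from a naive MAE
```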