
    Density alteration in non-physiological cells

    In the present study, an important cellular phenomenon was observed: intracellular density changes in response to drugs and environmental factors. For convenience, this phenomenon is named "density alteration in non-physiological cells" (DANCE). DANCE was detected by discontinuous sucrose gradient centrifugation (DSGC), in which cells separated into several bands. The number and position of the bands varied with cell culture conditions, drugs, and physical treatments, indicating that the cells' response to these factors was associated with altered intracellular density. Our results showed that the bands differed from each other at the molecular level, for example in the expression of certain mRNAs. For most cells tested, intracellular density usually decreased when the cells were under unfavourable conditions, exposed to drugs, or undergoing pathological changes. However, unlike other tissue cells, brain cells showed increased intracellular density within 24 hours of the animal's death. In addition, DANCE was found to be related to drug resistance, with cells of lower intracellular density showing higher drug resistance. Further study found that DANCE also occurred in microorganisms, including bacteria and fungi, suggesting that DANCE might be a sensitive and general response of cells to drugs and environmental change. The mechanisms underlying DANCE are not yet clear; based on our study, the following causes are hypothesized: a change in metabolic mode, a change in cell membrane function, and pathological change. DANCE could be important in the medical and biological sciences. Studying DANCE may aid the understanding of drug resistance, the development of new drugs, the separation of new subtypes from a cell population, forensic analysis and, importantly, the discovery of new physiological or pathological properties of cells.

    Fixed-Time Convergent Distributed Observer Design of Linear Systems: A Kernel-Based Approach

    Robust distributed state estimation for a class of continuous-time linear time-invariant systems is achieved by a novel kernel-based distributed observer, which, for the first time, ensures fixed-time convergence. The communication network between the agents is described by a directed graph in which each node runs a fixed-time convergent estimator. Each local observer estimates and broadcasts the observable states among its neighbours so that the full state vector can be recovered at every node, and the estimation error reaches zero after a predefined fixed time in the absence of perturbation. This constitutes a new distributed estimation framework that enables faster convergence and further reduces information exchange compared with a conventional Luenberger-like approach. The ubiquitous time-varying communication delay across the network is compensated by a prediction scheme. Moreover, the robustness of the algorithm in the presence of bounded measurement and process noise is characterised. Numerical simulations and comparisons demonstrate the effectiveness of the observer and its advantages over existing methods.
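
    The abstract contrasts the proposed kernel-based observer with a conventional Luenberger-like distributed scheme. Purely as a point of reference, the sketch below simulates that baseline idea for a two-node directed graph: node 0 applies a local Luenberger correction using its own measurement, while node 1, which has no sensor in this toy setup, recovers the full state by fusing the estimate broadcast by its in-neighbour. The system matrices, gains, and graph are illustrative assumptions; the errors decay exponentially rather than in fixed time, and the paper's kernel-based, delay-compensating design is not reproduced here.

```python
import numpy as np

# Plant: continuous-time LTI system x_dot = A x (no input, for simplicity).
A = np.array([[0.0, 1.0],
              [-1.0, -0.2]])          # lightly damped oscillator (illustrative)

# Two nodes on a directed graph with a single edge 0 -> 1.
# Node 0 measures the first state; node 1 has no sensor and relies on the
# estimate broadcast by its in-neighbour.
C0 = np.array([[1.0, 0.0]])
L0 = np.array([[2.0], [1.0]])          # local Luenberger gain (A - L0 C0 is Hurwitz)
gamma = 2.0                            # consensus (fusion) gain at node 1

dt, T = 1e-3, 10.0
x = np.array([1.0, -0.5])              # true initial state
xh0 = np.zeros(2)                      # node-0 estimate
xh1 = np.zeros(2)                      # node-1 estimate

for _ in range(int(T / dt)):
    y0 = C0 @ x                                        # node-0 measurement
    # Node 0: standard Luenberger correction with its own output.
    xh0 = xh0 + dt * (A @ xh0 + L0 @ (y0 - C0 @ xh0))
    # Node 1: no sensor; fuse the estimate received from node 0.
    xh1 = xh1 + dt * (A @ xh1 + gamma * (xh0 - xh1))
    # Plant evolves (explicit Euler with a small step).
    x = x + dt * (A @ x)

print("true state      :", x)
print("node-0 estimate :", xh0, "error:", np.linalg.norm(x - xh0))
print("node-1 estimate :", xh1, "error:", np.linalg.norm(x - xh1))
```

    In the fixed-time setting described in the abstract, the local estimators and the fusion rule would be replaced so that the estimation error vanishes within a prescribed time regardless of the initial estimates, rather than merely decaying exponentially as in this baseline.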

    On Expressivity and Trainability of Quadratic Networks

    Inspired by the diversity of biological neurons, quadratic artificial neurons can play an important role in deep learning models. The type of quadratic neuron of interest here replaces the inner-product operation in the conventional neuron with a quadratic function. Despite the promising results achieved so far by networks of quadratic neurons, important issues remain unaddressed. Theoretically, the superior expressivity of a quadratic network over either a conventional network or a conventional network with quadratic activation has not been fully elucidated, which leaves the use of quadratic networks insufficiently grounded. Practically, although a quadratic network can be trained via generic backpropagation, it is subject to a higher risk of collapse than its conventional counterpart. To address these issues, we first apply spline theory and a measure from algebraic geometry to establish two theorems demonstrating the better model expressivity of a quadratic network than the conventional counterpart with or without quadratic activation. Then, we propose an effective and efficient training strategy, referred to as ReLinear, to stabilize the training process of a quadratic network, thereby unleashing its full potential in the associated machine learning tasks. Comprehensive experiments on popular datasets are performed to support our findings and evaluate the performance of quadratic deep learning.
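
    Since the abstract describes the quadratic neuron and the ReLinear warm start only at a high level, the sketch below shows one commonly used parameterization in PyTorch, in which the inner product is replaced by the product of two linear branches plus a linear map of the squared input. The exact functional form and the full ReLinear schedule (e.g. separate learning rates for the quadratic terms) are assumptions for illustration, not details taken from the source.

```python
import torch
import torch.nn as nn

class QuadraticNeuronLayer(nn.Module):
    """One common form of a quadratic neuron layer (an assumption, not
    necessarily the paper's exact parameterization):

        f(x) = act( (x W_r + b_r) * (x W_g + b_g) + (x * x) W_b + c )

    With W_g = 0, b_g = 1, W_b = 0, c = 0 the layer reduces to a conventional
    inner-product neuron, which is the intuition behind a ReLinear-style warm
    start: begin as a linear layer and let the quadratic terms grow slowly.
    """

    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        self.linear_r = nn.Linear(in_features, out_features)   # x W_r + b_r
        self.linear_g = nn.Linear(in_features, out_features)   # x W_g + b_g
        self.linear_b = nn.Linear(in_features, out_features)   # (x * x) W_b + c
        self.act = nn.ReLU()
        self.relinear_init()

    def relinear_init(self):
        # Start as an ordinary linear layer: the multiplicative branch is the
        # constant 1 and the squared branch is 0 (illustrative initialization).
        nn.init.zeros_(self.linear_g.weight)
        nn.init.ones_(self.linear_g.bias)
        nn.init.zeros_(self.linear_b.weight)
        nn.init.zeros_(self.linear_b.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.linear_r(x) * self.linear_g(x) + self.linear_b(x * x))

# At this initialization the layer behaves exactly like a linear layer + ReLU.
layer = QuadraticNeuronLayer(4, 3)
x = torch.randn(2, 4)
print(layer(x))
```

    Starting from a purely linear behaviour and growing the quadratic terms gradually is the kind of property a ReLinear-style warm start can exploit to reduce the risk of training collapse mentioned in the abstract.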

    1-[4-(4-Chlorobutoxy)-2-hydroxyphenyl]ethanone

    In the title compound, C12H15ClO3, the ethoxy group is nearly coplanar with the benzene ring, making a dihedral angle of 9.03 (4)°, and is involved in an intramolecular O—H⋯O hydrogen bond to the neighbouring hydroxy group.