Metaverse Security and Privacy: An Overview
The Metaverse is a living space and cyberspace that virtualizes and digitizes
the real world. It integrates a plethora of existing technologies with the goal
of mapping the real world and even extending beyond it. The Metaverse has a
bright future and is expected to find applications in many scenarios. Because
it rests on numerous related technologies that are only now maturing, the
security risks accompanying its development are likely to be especially
prominent and complex. We present several Metaverse-related technologies and
the potential security and privacy issues they raise, survey current solutions
for Metaverse security and privacy derived from these technologies, and pose
some unresolved questions about the future Metaverse. In summary, this survey
provides an in-depth review of the security and privacy issues raised by key
technologies in Metaverse applications. We hope it will suggest insightful
research directions and prospects for the Metaverse's development,
particularly regarding security and privacy protection.
Comment: IEEE BigData 2022. 10 pages, 2 figures
A Natural Wind Defrosting, Nano-coated Antibacterial Self-cleaning Energy-saving Health Air-cooled Refrigerator
The air-cooled, frost-free household refrigerator is popular in the market because of its large capacity and frost-free operation. However, the evaporator's defrost process consumes a large amount of electrical energy, limiting the wider adoption of this refrigerator type; at the same time, its structure prevents the evaporator and air duct from being cleaned manually, leading to bacterial growth and contamination of stored food. This research developed a self-cleaning, energy-saving, healthy refrigerator that uses indoor natural wind for defrosting and an ultra-hydrophilic nano-titanium-dioxide photocatalytic coating for sterilization. Experimental comparison under the same operating conditions and running time showed that the refrigeration mode saves 1.5% of energy, the defrost process saves 95%, frost accumulation is reduced by 23%, the freezer temperature fluctuation is less than 7 °C, and the sterilization rate of the nano-coating reaches 80%.
AI-Generated Content (AIGC): A Survey
To address the challenges of digital intelligence in the digital economy,
artificial intelligence-generated content (AIGC) has emerged. AIGC uses
artificial intelligence to assist or replace manual content generation by
generating content based on user-inputted keywords or requirements. The
development of large model algorithms has significantly strengthened the
capabilities of AIGC, which makes AIGC products a promising generative tool and
adds convenience to our lives. As an upstream technology, AIGC has unlimited
potential to support different downstream applications. It is important to
analyze AIGC's current capabilities and shortcomings to understand how it can
be best utilized in future applications. Therefore, this paper provides an
extensive overview of AIGC, covering its definition, essential conditions,
cutting-edge capabilities, and advanced features. Moreover, it discusses the
benefits of large-scale pre-trained models and the industrial chain of AIGC.
Furthermore, the article explores the distinctions between auxiliary generation
and automatic generation within AIGC, providing examples of text generation.
The paper also examines the potential integration of AIGC with the Metaverse.
Lastly, the article highlights existing issues and suggests some future
directions for application.
Comment: Preprint. 14 figures, 4 tables
Look Before You Leap: An Exploratory Study of Uncertainty Measurement for Large Language Models
The recent performance leap of Large Language Models (LLMs) opens up new
opportunities across numerous industrial applications and domains. However,
erroneous generations, such as false predictions, misinformation, and
hallucination made by LLMs, have also raised severe concerns about the
trustworthiness of LLMs, especially in safety-, security-, and
reliability-sensitive scenarios, potentially hindering real-world adoption.
While uncertainty estimation has shown its potential for interpreting the
prediction risks made by general machine learning (ML) models, little is known
about whether and to what extent it can help explore an LLM's capabilities and
counteract its undesired behavior. To bridge the gap, in this paper, we
initiate an exploratory study on the risk assessment of LLMs from the lens of
uncertainty. In particular, we experiment with twelve uncertainty estimation
methods and four LLMs on four prominent natural language processing (NLP) tasks
to investigate to what extent uncertainty estimation techniques could help
characterize the prediction risks of LLMs. Our findings validate the
effectiveness of uncertainty estimation for revealing LLMs'
uncertain/non-factual predictions. In addition to general NLP tasks, we
extensively conduct experiments with four LLMs for code generation on two
datasets. We find that uncertainty estimation can potentially uncover buggy
programs generated by LLMs. Insights from our study shed light on future design
and development for reliable LLMs, facilitating further research toward
enhancing the trustworthiness of LLMs.
Comment: 20 pages, 4 figures
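One of the simplest uncertainty signals studied in this line of work is the predictive entropy of the model's next-token distribution: the more diffuse the distribution, the less certain the model. A minimal sketch (the function name and the example probabilities are illustrative, not taken from the paper's twelve methods):

```python
import math

def predictive_entropy(token_probs):
    """Shannon entropy (in nats) of a next-token probability distribution.

    Higher entropy means the model spreads probability mass across many
    tokens, i.e., it is less certain about its prediction.
    """
    return -sum(p * math.log(p) for p in token_probs if p > 0)

# A confident distribution vs. a diffuse one (hypothetical probabilities):
confident = [0.97, 0.01, 0.01, 0.01]
diffuse = [0.25, 0.25, 0.25, 0.25]

print(predictive_entropy(confident) < predictive_entropy(diffuse))  # True
```

In practice such token-level scores are aggregated over a generated sequence to flag responses that are likely non-factual or, in the code-generation setting, buggy.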
Enhancing the Protein Tertiary Structure Prediction by Multiple Sequence Alignment Generation
The field of protein folding research has been greatly advanced by deep
learning methods, with AlphaFold2 (AF2) demonstrating exceptional performance
and atomic-level precision. As co-evolution is integral to protein structure
prediction, AF2's accuracy is significantly influenced by the depth of multiple
sequence alignment (MSA), which requires extensive exploration of a large
protein database for similar sequences. However, not all protein sequences
possess abundant homologous families, and consequently, AF2's performance can
degrade on such queries, at times failing to produce meaningful results. To
address this, we introduce a novel generative language model, MSA-Augmenter,
which leverages protein-specific attention mechanisms and large-scale MSAs to
generate useful, novel protein sequences not currently found in databases.
These sequences supplement shallow MSAs, enhancing the accuracy of structural
property predictions. Our experiments on CASP14 demonstrate that MSA-Augmenter
can generate de novo sequences that retain co-evolutionary information from
inferior MSAs, thereby improving protein structure prediction quality on top of
strong AF2
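The notion of MSA depth, and of supplementing a shallow alignment with generated sequences, can be made concrete with a small sketch (the function names and FASTA handling are illustrative assumptions, not the paper's tooling):

```python
def msa_depth(fasta_text: str) -> int:
    """Number of aligned sequences in a FASTA-formatted MSA (one '>' header each)."""
    return sum(1 for line in fasta_text.splitlines() if line.startswith(">"))

def augment_msa(fasta_text: str, generated: dict) -> str:
    """Append generated sequences (name -> aligned sequence) to a shallow MSA."""
    extra = "".join(f">{name}\n{seq}\n" for name, seq in generated.items())
    return fasta_text + extra

# A toy two-sequence alignment, deepened with one generated sequence:
shallow = ">query\nMKV-LL\n>homolog1\nMKVALL\n"
augmented = augment_msa(shallow, {"generated1": "MKVSLL"})
print(msa_depth(shallow), msa_depth(augmented))  # 2 3
```

The augmented alignment would then be fed to the structure predictor in place of the original shallow one.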
Ammonia Nitrogen Pollution Characteristics of Natural Rainfall in Urban Business District in Southern China: A Case Study of Chengdu City
Chengdu was chosen as a representative southern Chinese city in this work, and the characteristics of ammonia nitrogen (NH3-N) pollution in natural rainfall were analyzed by measuring its concentration in 15 natural rainfall events from April to September 2017. The influence of ammonia emitted from building toilet vents on NH3-N pollution in rainfall was investigated, and the variation of the total NH3-N pollutant load and its influencing factors were examined. The results showed that the average NH3-N concentration in the first rainfall was the highest, reaching 18.2 mg/L; the average concentration in the subsequent 14 rainfalls was between 2.0 and 5.0 mg/L, exceeding the Grade V limit (2 mg/L) of the Environmental Quality Standards for Surface Water (GB 3838-2002) and making rainfall an important source of NH3-N pollution in water bodies. The NH3-N concentration in natural rainfall decreased as the distance between the sampling point and the toilet vent increased, indicating that ammonia discharged from toilet exhaust is a major source of NH3-N pollution in the urban atmosphere. The main factors affecting the total NH3-N pollutant load in natural precipitation include rainfall intensity, rainfall duration, and the number of preceding dry days. The total NH3-N pollutant load in surface runoff is less than that in natural rainfall.
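For a sense of scale, the reported concentrations translate into mass loads via a simple unit relation (a back-of-envelope sketch, not the paper's calculation; 1 mm of rain over 1 m² is 1 L of water):

```python
def nh3n_load_kg(conc_mg_per_l: float, rainfall_mm: float, area_m2: float) -> float:
    """NH3-N mass delivered to a surface by one rainfall event.

    Since 1 mm of rain on 1 m^2 is 1 L of water:
      mass [mg] = concentration [mg/L] * rainfall [mm] * area [m^2]
    Dividing by 1e6 converts mg to kg.
    """
    return conc_mg_per_l * rainfall_mm * area_m2 / 1e6

# 18.2 mg/L (the reported first-rainfall average) over a 10 mm event on one hectare:
print(nh3n_load_kg(18.2, 10.0, 10_000.0))  # 1.82 (kg)
```

The rainfall depth and catchment area here are hypothetical inputs chosen only to illustrate the unit conversion.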
Multimodal Large Language Models: A Survey
The exploration of multimodal language models integrates multiple data types,
such as images, text, audio, and other heterogeneous modalities. While the
latest large language models excel in text-based tasks, they often struggle to
understand and process other data types. Multimodal models address this
limitation by combining various modalities, enabling a more comprehensive
understanding of diverse data. This paper begins by defining the concept of
multimodal and examining the historical development of multimodal algorithms.
Furthermore, we introduce a range of multimodal products, focusing on the
efforts of major technology companies. A practical guide is provided, offering
insights into the technical aspects of multimodal models. Moreover, we present
a compilation of the latest algorithms and commonly used datasets, providing
researchers with valuable resources for experimentation and evaluation. Lastly,
we explore the applications of multimodal models and discuss the challenges
associated with their development. By addressing these aspects, this paper aims
to facilitate a deeper understanding of multimodal models and their potential
in various domains.
Comment: IEEE BigData 2023. 10 pages
MYT1L is required for suppressing earlier neuronal development programs in the adult mouse brain
In vitro studies indicate the neurodevelopmental disorder gene myelin transcription factor 1-like (MYT1L) suppresses non-neuronal lineage genes during fibroblast-to-neuron direct differentiation. However, MYT1L's molecular and cellular functions in the adult mammalian brain have not been fully characterized. Here, we found that MYT1L loss leads to up-regulated deep layer (DL) gene expression, corresponding to an increased ratio of DL/UL neurons in the adult mouse cortex. To define potential mechanisms, we conducted Cleavage Under Targets & Release Using Nuclease (CUT&RUN) to map MYT1L binding targets and epigenetic changes following MYT1L loss in mouse developing cortex and adult prefrontal cortex (PFC). We found MYT1L mainly binds to open chromatin, but with different transcription factor co-occupancies between promoters and enhancers. Likewise, multiomic data set integration revealed that, at promoters, MYT1L loss does not change chromatin accessibility but increases H3K4me3 and H3K27ac, activating both a subset of earlier neuronal development genes as well a