FinMem: A Performance-Enhanced LLM Trading Agent with Layered Memory and Character Design
Recent advancements in Large Language Models (LLMs) have exhibited notable efficacy in question-answering (QA) tasks across diverse domains. Their prowess in integrating extensive web knowledge has fueled interest in developing LLM-based autonomous agents. While LLMs are efficient at decoding human instructions and deriving solutions by holistically processing historical inputs, transitioning to purpose-driven agents requires a supplementary rational architecture to process multi-source information, establish reasoning chains, and prioritize critical tasks. Addressing this, we introduce FinMem, a novel LLM-based agent framework devised for financial decision-making. It encompasses three core modules: Profiling, to customize the agent's characteristics; Memory, with layered message processing, to aid the agent in assimilating hierarchical financial data; and Decision-making, to convert insights gained from memories into investment decisions. Notably, FinMem's memory module aligns closely with the cognitive structure of human traders, offering robust interpretability and real-time tunability. Its adjustable cognitive span allows the retention of critical information beyond human perceptual limits, thereby enhancing trading outcomes. This framework enables the agent to self-evolve its professional knowledge, react agilely to new investment cues, and continuously refine trading decisions in the volatile financial environment. We first compare FinMem with various algorithmic agents on a scalable real-world financial dataset, underscoring its leading trading performance in stocks. We then fine-tune the agent's perceptual span and character setting to achieve significantly enhanced trading performance. Collectively, FinMem presents a cutting-edge LLM agent framework for automated trading, boosting cumulative investment returns.
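The layered-memory idea can be illustrated with a small sketch. Everything below is a hypothetical simplification, not FinMem's actual implementation: layer names, per-layer half-lives, and the scoring rule are illustrative. The intent is only to show the mechanism the abstract describes, where deeper layers forget more slowly and retrieval ranks events by decayed importance.

```python
import math
from dataclasses import dataclass, field

@dataclass
class MemoryEvent:
    text: str
    importance: float  # 0..1, e.g. assigned by the LLM (illustrative)
    age_days: float

@dataclass
class MemoryLayer:
    name: str
    decay_days: float  # hypothetical per-layer "half-life"
    events: list = field(default_factory=list)

    def score(self, event: MemoryEvent) -> float:
        # Recency decays exponentially; deeper layers decay more slowly.
        recency = math.exp(-event.age_days / self.decay_days)
        return recency * event.importance

def retrieve(layers, top_k=3):
    """Rank events across all layers by decayed-importance score."""
    scored = [(layer.score(e), layer.name, e.text)
              for layer in layers for e in layer.events]
    return sorted(scored, reverse=True)[:top_k]

shallow = MemoryLayer("shallow", decay_days=1.0)
deep = MemoryLayer("deep", decay_days=30.0)
shallow.events.append(MemoryEvent("Daily news: earnings beat", 0.9, 0.5))
deep.events.append(MemoryEvent("Quarterly filing summary", 0.8, 10.0))

for s, layer, text in retrieve([shallow, deep]):
    print(f"{layer:8s} {s:.3f} {text}")
```

Note how the older filing in the slow-decaying deep layer can still outrank fresh news, which is the point of a layered cognitive span.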
Perception and willingness toward various immunization routes for COVID-19 vaccines: a cross-sectional survey in China
Background: To date, most vaccines, including the COVID-19 vaccine, are mainly administered by intramuscular injection, which might lead to vaccine hesitancy in some populations due to needle fear. Alternatively, needle-free immunization technology is being extensively developed to improve the efficacy and acceptance of vaccination. However, no study has reported the perception and willingness toward various immunization routes of the COVID-19 vaccine in the general population.
Methods: A cross-sectional survey was conducted nationwide using an online questionnaire. Bivariate analyses were undertaken to assess variable associations among the participants who reported hesitancy to receive the COVID-19 booster vaccination. Multivariable logistic regression with a backward stepwise approach was used to analyze the factors predicting willingness to receive the COVID-19 booster vaccination.
Results: A total of 3,244 valid respondents were included in this survey. While 63.2% of participants thought they had a good understanding of intramuscular injection, only 20.7%, 9.2%, 9.4%, and 6.0% had a self-perceived good understanding of the inhalation, nasal spray, oral, and microneedle patch vaccines, respectively. Correspondingly, acceptance was highest for intramuscular injection (76.5%), followed by oral inhalation (64.4%) and nasal spray (43.0%). Participants who were only willing to receive an intramuscular vaccine had less vaccine knowledge (OR = 0.78; 95% CI: 0.65–0.94) than those who were willing to receive a needle-free vaccine (OR = 1.97; 95% CI: 1.52–2.57). Several factors were found to be associated with hesitancy toward booster COVID-19 vaccination.
Conclusion: Needle-free vaccination is a promising technology for the next generation of vaccines, but we found that intramuscular injection was still the most acceptable immunization route in this survey. One major reason might be that most people lack knowledge about needle-free vaccination. We should strengthen the publicity of needle-free vaccination technology, and thus improve the acceptance and coverage of vaccination in different populations.
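For readers unfamiliar with how such effect sizes arise: an odds ratio and its confidence interval come from exponentiating a logistic-regression coefficient and its standard-error band. A minimal sketch follows; the coefficient and standard error are illustrative values chosen to roughly reproduce a reported OR of 0.78 (95% CI 0.65–0.94), not the study's fitted parameters.

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Exponentiate a logistic-regression coefficient (beta) and its
    standard error (se) into an odds ratio with a 95% confidence interval."""
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)

# Illustrative values only, chosen to approximate OR = 0.78 (0.65-0.94)
or_, lo, hi = odds_ratio_ci(beta=-0.249, se=0.094)
print(f"OR = {or_:.2f}; 95% CI: {lo:.2f}-{hi:.2f}")
```

A coefficient below zero maps to an OR below 1 (reduced odds), which is why the intramuscular-only group's lower vaccine knowledge appears as OR < 1.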
Recommended from our members
Eye of Aurora
Visual impairment is a significant issue worldwide that many people are working to address. Our project aims to help visually impaired people know what is happening around them. To that end, we built a pair of camera glasses that can describe the scene in front of the user with sound. The describing sentences are generated by deep learning: we trained our own image-captioning models and found that VGG19 as the neural network, paired with the Flickr8k dataset, performed best. The hardware includes a camera, an earphone, a glasses frame, an LCD touch screen, batteries, and a Raspberry Pi 4. With our project, visually impaired people can truly “see” and engage with the world around them.
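The caption-generation step can be sketched as a CNN encoder feeding a greedy, word-by-word decoder. Everything below is a toy stand-in: `encode` replaces VGG19 feature extraction and `step` replaces a trained decoder, so the "caption" comes from a hypothetical transition rule rather than learned weights; only the greedy-decoding loop structure mirrors a real captioning pipeline.

```python
import numpy as np

VOCAB = ["<start>", "a", "person", "crossing", "the", "street", "<end>"]

def encode(image):
    # Stand-in for CNN feature extraction (a real system would use e.g.
    # VGG19's fc-layer activations); here, just a scalar summary.
    return float(np.asarray(image, dtype=float).mean())

def step(feature, prev_idx):
    # Stand-in decoder: a trained LSTM would emit logits over VOCAB;
    # this hypothetical rule simply advances through the toy vocabulary.
    logits = np.full(len(VOCAB), -np.inf)
    logits[(prev_idx + 1) % len(VOCAB)] = feature
    return logits

def greedy_caption(image, max_len=10):
    """Greedy decoding: pick the highest-scoring word at each step."""
    feat, idx, words = encode(image), 0, []
    for _ in range(max_len):
        idx = int(np.argmax(step(feat, idx)))
        if VOCAB[idx] == "<end>":
            break
        words.append(VOCAB[idx])
    return " ".join(words)

print(greedy_caption([[0.2, 0.8], [0.5, 0.1]]))
```

In the deployed device, this loop would run on the Raspberry Pi after each camera capture, and the resulting sentence would be passed to text-to-speech.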
Imaging Mass Spectrometry of Three-Dimensional Cell Culture Systems
Three-dimensional (3D) cell cultures have increased complexity compared to simple monolayer and suspension cultures, recapitulating the cellular architecture and molecular gradients in tissue. As such, they are popular as in vitro models in biological research. Classical imaging methodologies, like immunohistochemistry, are commonly used to examine the distribution of specific species within the spheroids. However, there is a need for an unbiased, discovery-based methodology that would allow examination of protein/peptide distributions in 3D culture systems without prior knowledge of the analytes. We have developed a matrix-assisted laser desorption/ionization-mass spectrometry (MALDI-MS)-based imaging approach to examine protein distributions in 3D cell culture models. Using colon carcinoma cell lines, we detect changes in the spatial distribution of proteins across 3D culture structures. To identify the protein species present, we combine de novo peptide sequencing via the MS/MS capabilities of MALDI-MS with nanoflow liquid chromatography–tandem mass spectrometry (nLC–MS/MS) of homogenized cultures. As a proof of principle, we have identified cytochrome c and histone H4 as two of the predominant protein species in the 3D colon carcinoma cultures.
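Conceptually, an imaging-MS dataset is one spectrum per pixel, and an "ion image" is built by summing each pixel's intensity within a narrow m/z window. A minimal sketch of that binning step follows; the function name, tolerance window, and data layout are illustrative, not the authors' pipeline.

```python
import numpy as np

def ion_image(pixels, spectra, mz_axis, target_mz, tol=0.25, shape=(1, 2)):
    """Sum each pixel's spectral intensity within +/- tol of target_mz.
    pixels: list of (x, y) grid positions; spectra: one intensity array
    per pixel, aligned with mz_axis. All names/values are illustrative."""
    window = (mz_axis >= target_mz - tol) & (mz_axis <= target_mz + tol)
    img = np.zeros(shape)
    for (x, y), spec in zip(pixels, spectra):
        img[y, x] = spec[window].sum()
    return img

# Synthetic 1x2 grid: one pixel with uniform signal, one empty
mz = np.linspace(100.0, 110.0, 101)  # 0.1 Da spacing
spectra = [np.ones_like(mz), np.zeros_like(mz)]
img = ion_image([(0, 0), (1, 0)], spectra, mz, target_mz=105.0)
print(img)  # left pixel sums the bins inside the m/z window
```

Mapping such images across many m/z values is what reveals the spatial protein gradients the abstract describes.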
Auto-Encoding Transformations in Reparameterized Lie Groups for Unsupervised Learning
Unsupervised training of deep representations has recently demonstrated remarkable potential in mitigating the prohibitive expense of annotating labeled data. Among such methods is predicting transformations as a pretext task for self-training representations, which has shown great promise for unsupervised learning. However, existing approaches in this category learn representations by either treating a discrete set of transformations as separate classes, or using the Euclidean distance as the metric to minimize the errors between transformations. None of them has been dedicated to revealing the vital role of the geometry of transformation groups in learning representations. Indeed, an image must continuously transform along the curved manifold of a transformation group rather than through a straight line in the forbidden ambient Euclidean space. This suggests using the geodesic distance to minimize the errors between the estimated and ground-truth transformations. In particular, we focus on homographies, a general group of planar transformations containing the Euclidean, similarity, and affine transformations as special cases. To avoid explicitly computing the intractable Riemannian logarithm, we project homographies onto an alternative group of rotation transformations, SR(3), with a tractable form of geodesic distance. Experiments demonstrate that the proposed approach to Auto-Encoding Transformations exhibits superior performance on a variety of recognition problems.
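The core geometric point, comparing transformations along the group manifold rather than along a straight line in the ambient Euclidean space, can be illustrated on the plain rotation group SO(3), where the geodesic distance has a closed form: the rotation angle of the relative rotation R1ᵀR2. This is a simplified analogue of the idea, not the paper's SR(3) projection of homographies.

```python
import numpy as np

def rot_z(theta):
    """Rotation about the z-axis by angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def geodesic_distance(R1, R2):
    # Riemannian distance on SO(3): the angle of the relative rotation
    # R1^T R2, i.e. the norm of its matrix logarithm on the manifold.
    cos_theta = np.clip((np.trace(R1.T @ R2) - 1.0) / 2.0, -1.0, 1.0)
    return np.arccos(cos_theta)

R_a, R_b = rot_z(0.1), rot_z(0.4)
print(geodesic_distance(R_a, R_b))  # recovers the true angle, 0.3
print(np.linalg.norm(R_a - R_b))    # Euclidean distance: a chord, not the arc
```

The geodesic distance recovers the true relative angle exactly, while the Frobenius (Euclidean) distance measures a chord through the ambient matrix space, which is the mismatch the paper argues against using as a training loss.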