24 research outputs found
AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation Framework
This technical report presents AutoGen, a new framework that enables
development of LLM applications using multiple agents that can converse with
each other to solve tasks. AutoGen agents are customizable, conversable, and
seamlessly allow human participation. They can operate in various modes that
employ combinations of LLMs, human inputs, and tools. AutoGen's design offers
multiple advantages: a) it gracefully navigates the strong but imperfect
generation and reasoning abilities of these LLMs; b) it leverages human
understanding and intelligence, while providing valuable automation through
conversations between agents; c) it simplifies and unifies the implementation
of complex LLM workflows as automated agent chats. We provide many diverse
examples of how developers can easily use AutoGen to effectively solve tasks and
build applications, spanning coding, mathematics, operations research,
entertainment, online decision-making, and question answering.
Comment: 28 pages
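The conversable-agent pattern this abstract describes can be sketched in a few lines. This is a toy illustration, not AutoGen's actual API: the `ConversableAgent` class, the `initiate_chat` function, and the terminate-on-`None` convention below are simplified stand-ins for the framework's real conversation loop.

```python
# Toy sketch of two conversable agents exchanging messages until one
# signals termination. Names and conventions here are illustrative only.
class ConversableAgent:
    def __init__(self, name, reply_fn):
        self.name = name
        self.reply_fn = reply_fn   # maps (message, history) -> reply or None
        self.history = []

    def receive(self, message, sender):
        self.history.append((sender.name, message))
        return self.reply_fn(message, self.history)

def initiate_chat(a, b, message, max_turns=6):
    """Alternate messages between agents a and b; stop when a reply is None."""
    transcript = []
    sender, receiver = a, b
    for _ in range(max_turns):
        transcript.append((sender.name, message))
        reply = receiver.receive(message, sender)
        if reply is None:          # receiver signals end of conversation
            break
        sender, receiver = receiver, sender
        message = reply
    return transcript

# A "user" agent that accepts work and an "assistant" agent that reports back.
assistant = ConversableAgent(
    "assistant",
    lambda m, h: "TERMINATE" if "done" in m else "working on: " + m,
)
user = ConversableAgent(
    "user",
    lambda m, h: None if "TERMINATE" in m else "done",
)
chat = initiate_chat(user, assistant, "solve task X")
```

In the real framework the reply function would wrap an LLM call, a tool invocation, or a human prompt; the point here is only the alternating send/receive structure that makes agent workflows composable.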
Effects of simulated multi-sensory stimulation integration on physiological and psychological restoration in virtual urban green space environment
Virtual urban green environment images and audio stimuli have been shown to have restorative effects on subjects' physical and mental health. Research in this area has predominantly focused on the visual, auditory, and olfactory senses, while the tactile and gustatory senses have been minimally explored, and the optimal combination of sensory stimuli for promoting physical and mental recovery remains unclear. Therefore, a simulated sensory stimulation approach involving 240 participants was employed, with 30 individuals in each of eight experimental groups: visual-auditory (VA), visual-auditory-olfactory (VAO), visual-auditory-tactile (VAT), visual-auditory-gustatory (VAG), visual-auditory-olfactory-tactile (VAOT), visual-auditory-olfactory-gustatory (VAOG), visual-auditory-tactile-gustatory (VATG), and visual-auditory-olfactory-tactile-gustatory (VAOTG). This study aimed to explore the differences in participants' physiological and psychological recovery after exposure to different combinations of simulated sensory stimuli in virtual urban green spaces (UGSs). The results indicated the following: (1) In terms of physiological recovery, blood pressure decreased significantly in all eight experimental groups after the experiment, indicating that the virtual UGS environment has a restorative effect on physiological state. The VAOTG combination produced the best blood pressure recovery (p < 0.05). Touch is an important sense for enhancing physiological recovery: olfactory-tactile and tactile-gustatory interactions significantly enhanced physiological recovery, underscoring the importance of tactile stimulation. (2) In terms of psychological recovery, the combined olfactory-gustatory stimulus was the key element in enhancing psychological recovery through multi-sensory stimulation of the virtual UGS environment.
VAOG stimulation had the best effect on psychological recovery (p < 0.05), followed by VAOTG stimulation (p < 0.05). Taste is an important sense for enhancing psychological recovery, and both the tactile-gustatory and olfactory-gustatory interactions significantly enhanced the recovery effect. Moreover, combinations of four or more senses produced greater psychological recovery than combinations of two or three. This study demonstrates further possibilities for restoring physical and mental health through virtual natural environments; it expands research on the benefits of virtual nature experience and provides theoretical support for applying this method.
LPS-Net: Lightweight Parallel Strategy Network for Underwater Image Enhancement
Underwater images frequently suffer from color distortion and loss of detail. However, previous enhancement methods did not decompose these mixed degradations into sub-problems that could be addressed effectively. Moreover, the parameters and computation these methods require are usually too costly for underwater equipment, which has limited power supply, processing capability, and memory capacity. To address these challenges, this work proposes a Lightweight Parallel Strategy Network (LPS-Net). Firstly, a Dual-Attention Enhancement Block and a Mirror Large Receptiveness Block are introduced to enhance color and restore detail in degraded images, respectively. Secondly, these blocks are employed on parallel branches at each stage of LPS-Net, with the goal of achieving effective color and detail rendering simultaneously. Thirdly, a Gated Fusion Unit is proposed to merge features from the different branches at each stage. Finally, the network uses four stages of parallel enhancement, achieving a balanced trade-off between performance and parameter count. Extensive experiments demonstrated that LPS-Net achieves optimal color enhancement and superior detail restoration in terms of visual quality. Furthermore, it attains state-of-the-art underwater image enhancement performance on the evaluation metrics while using only 80.12K parameters.
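The gated fusion idea above can be sketched as a scalar toy, assuming the gate is a logistic function of the two branch outputs. The real Gated Fusion Unit presumably computes its gate with learned convolutions over feature maps; the `gate_w` and `gate_b` parameters here are hypothetical placeholders for those learned weights.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gated_fusion(color_feat, detail_feat, gate_w=1.0, gate_b=0.0):
    """Blend two branch outputs channel-wise with a sigmoid gate.

    Scalar toy of a gated fusion unit: for each channel, a gate g in (0, 1)
    is computed from the two branch values, and the output is the convex
    combination g * color + (1 - g) * detail.
    """
    fused = []
    for c, d in zip(color_feat, detail_feat):
        g = sigmoid(gate_w * (c - d) + gate_b)  # gate decides branch weighting
        fused.append(g * c + (1.0 - g) * d)     # convex blend of the branches
    return fused
```

Pushing the gate bias strongly positive selects the color branch, strongly negative selects the detail branch; intermediate values interpolate, which is what lets the network trade the two enhancement paths off per feature.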
Learning-Based Automatic Breast Tumor Detection and Segmentation in Ultrasound Images
Ultrasound (US) imaging has been widely used in medical diagnosis, and in the diagnosis of breast cancer in particular. While experienced doctors can locate the tumor regions in a US image manually, it is highly desirable to develop algorithms that detect tumor regions automatically in order to assist medical diagnosis. In this paper, we propose a novel algorithm for the automatic detection of breast tumors in US images. We formulate tumor detection as a two-step learning problem: tumor localization by bounding box and exact boundary delineation. Specifically, the proposed method uses an AdaBoost classifier on Haar-like features to detect a preliminary set of tumor regions. The preliminarily detected tumor regions are then screened with a support vector machine using quantized intensity features. Finally, the random walk segmentation algorithm is applied to the US image to retrieve the boundary of each detected tumor region. The proposed method was evaluated on a data set of 112 breast US images, comprising 80 histologically confirmed diseased images and 32 normal ones. The data set contains one image per patient, with patients aged 31 to 75 years. Experiments demonstrate that the proposed algorithm can automatically detect breast tumors, retrieving their locations and boundary shapes with high accuracy.
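The detect-then-screen cascade described above can be sketched structurally. This is a minimal sketch under stated assumptions: the threshold rules below are hypothetical stand-ins for the AdaBoost detector on Haar-like features and the SVM on quantized intensity features, and the `contrast`/`texture` scores are made-up features chosen only to show how a high-recall first stage feeds a precision-oriented second stage.

```python
# Toy two-stage cascade: stage 1 proposes candidate regions with a permissive
# rule (standing in for the AdaBoost + Haar-like feature detector), stage 2
# screens them with a stricter rule (standing in for the SVM on quantized
# intensity features). Region dicts and thresholds are illustrative only.
def propose_regions(regions, contrast_thresh=0.2):
    """Stage 1: keep any region above a low contrast threshold (high recall)."""
    return [r for r in regions if r["contrast"] > contrast_thresh]

def screen_regions(candidates, texture_thresh=0.5):
    """Stage 2: reject proposals whose texture score is too low (precision)."""
    return [r for r in candidates if r["texture"] > texture_thresh]

regions = [
    {"name": "tumor", "contrast": 0.8, "texture": 0.9},
    {"name": "shadow", "contrast": 0.6, "texture": 0.2},       # stage-1 false positive
    {"name": "background", "contrast": 0.1, "texture": 0.1},   # rejected at stage 1
]
detected = screen_regions(propose_regions(regions))
```

In the paper's pipeline, each region that survives both stages would then be handed to the random walk algorithm for boundary delineation; the cascade's value is that the cheap, permissive first stage keeps recall high while the second stage removes acoustic-shadow-like false positives.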