Performance Testing of Asphalt Binder Modified with Amine-Impregnated Zeolite and Plastic in Hot Mix Asphalt to Reduce Carbon Footprint
The rise in global temperatures, driven in part by significant transportation carbon emissions, necessitates sustainable solutions for infrastructure. Traditional asphalt binders and lime additives contribute significantly to carbon emissions, and conventional liquid amine-based antistrip agents, which are used to reduce moisture damage, lose efficacy over time. This study evaluates the performance of PG 64-16 Low Carbon binder, incorporating 10% post-consumer plastic and amine-impregnated zeolite (AIMZ) as a protective carrier for liquid amines. Researchers compare this low-carbon binder to conventional PG 64-16 binder and evaluate AIMZ against amine and zeolite applied separately (AZ) and a commercial liquid antistrip (LAS). The study tests three aging levels (3, 5, and 7 days), simulating 4, 8, and 10 years, respectively, of field aging in Southern California. The evaluation of moisture-induced damage uses the Tensile Strength Ratio (TSR), while the Hamburg Wheel Tracking (HWT) test assesses rutting resistance (the wear from tires and loads that occurs on roads). The IDEAL Cracking Test measures cracking resistance,
and the Moisture-Induced Shear-Thinning Index (MISTI) and Multiple-Stress Creep Recovery (MSCR) tests analyze moisture susceptibility and rheological properties, all of which are important factors in long-term efficacy. AIMZ demonstrates higher TSR values than AZ and LAS at both the 5-day and 7-day aging levels for both binders. Rutting resistance is comparable between binders, and low-carbon binder mixtures show improved cracking resistance over time. MISTI values suggest lower moisture susceptibility for the low-carbon binder, though MSCR results suggest it is best suited for low-traffic volumes. This study indicates that AIMZ effectively prolongs liquid amine efficacy and that low-carbon binders, despite some limitations, offer environmental and performance benefits. These findings support the potential for incorporating post-consumer plastics in asphalt pavements, promoting sustainability in infrastructure
Evaluating User Interaction and Feedback Mechanisms in a Robotic Bartender: A Study on Smartini’s Social Interaction, Cocktail Preparation, and Customer Engagement
This project presents the creation of a cocktail-making robot called Smartini, which is both interactive and capable of learning. Our research involved the development of the Smartini Cocktail Robot, which aimed to address the challenges of human-robot interaction and modern cocktail-making using cutting-edge technology. To achieve this, we incorporated various modes of communication, such as eye contact, gestures, and speech, and also included an entertainment system that offered news and jokes. Furthermore, Smartini was designed to learn customer preferences and adjust its recipes accordingly. Customer feedback revealed high satisfaction levels with the cocktail-making process, scoring 4.23/5. However, Smartini's movements, eye contact, and ease of communication were rated 3.54/5, possibly due to limitations in the iCub's speed and our computer's computational power. To create a truly functional and enjoyable Smartini robot, these areas need further improvement
Retrieval-Augmented Generation (RAG) Chatbots: A Comparative Study of Claude, GPT-4o, DeepSeek, and Llama
The use of retrieval-augmented generation (RAG) in chatbot platforms has transformed academic spaces by significantly improving information accessibility. RAG has become a viable approach to upgrading large language models (LLMs) with external knowledge access in real time. With the increasing availability of advanced LLMs such as GPT, DeepSeek, Claude, Gemini, and Llama, there is a growing need to compare RAG systems built on different LLMs. This study compares the responses of four RAG chatbots using popular LLMs against a uniquely designed evaluation dataset. Specifically, the study compares the responses and performance of closed-source (GPT-4o and Claude) and open-source models (DeepSeek and Llama) on questions requiring inference from multiple scientific corpora with intricate content and structure. All RAG models in this research use a Chroma vector database to store embeddings. The retrieved documents and the query are provided as input prompts to the LLMs, thus allowing contextually grounded response construction. Each chatbot is evaluated on ten complex research papers from various domains in computer science. The evaluation dataset contains 75 questions derived from these papers, ranging from simple yes/no questions to questions requiring an understanding of multiple papers. The responses of each chatbot are measured quantitatively using standard metrics, including Bilingual Evaluation Understudy (BLEU), Recall-Oriented Understudy for Gisting Evaluation (ROUGE), and Bidirectional Encoder Representations from Transformers (BERT) scores, to evaluate response quality comprehensively
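The retrieve-then-prompt flow this abstract describes can be sketched in a few lines. This is a minimal illustration only: the bag-of-words embedding and cosine ranking below stand in for the learned embeddings and Chroma vector database the study actually uses, and the corpus and query are invented for the example.

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding"; a real RAG system uses learned
    # dense embeddings stored in a vector database such as Chroma.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, corpus, k=2):
    # Rank documents by similarity to the query and keep the top k.
    q = embed(query)
    ranked = sorted(corpus, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query, docs):
    # The retrieved documents and the query become the LLM input,
    # grounding the response in retrieved context.
    context = "\n".join(f"- {d}" for d in docs)
    return f"Answer using only the context below.\nContext:\n{context}\nQuestion: {query}"

corpus = [
    "Transformers use self-attention to model token dependencies.",
    "Convolutional networks excel at image classification tasks.",
    "Retrieval-augmented generation grounds LLM answers in documents.",
]
query = "How does retrieval-augmented generation ground answers?"
docs = retrieve(query, corpus)
prompt = build_prompt(query, docs)
```

The prompt would then be sent to the chosen LLM (GPT-4o, Claude, DeepSeek, or Llama), whose response is scored with BLEU, ROUGE, and BERT-based metrics.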
Social Engineering Scenario Generation for Awareness-Based Attack Resilience
Social engineering is found in a strong majority of cyberattacks today, as it is a powerful manipulation tactic that does not require the technical skills of hacking. Calculated social engineers utilize simple communication to deceive and exploit their victims, all by capitalizing on the vulnerabilities of human nature: trust and fear. When successful, this inconspicuous technique can lead to millions of dollars in losses. Social engineering is not a one-dimensional technique; criminals often leverage a combination of strategies to craft a robust yet subtle attack. In addition, offenders are continually evolving their methods in efforts to surpass preventive measures. A common utility to defend against social engineering attacks is detection-based software. Security awareness, however, is a valuable approach that is often eclipsed by automated tech solutions. Awareness establishes a strong first line of defense against these ever-changing attacks. This study utilizes three data-supplemented large language models to generate custom social engineering scenarios with the goal of supporting strong example-driven security awareness programs. The performances of BERT, GPT-3.5, and Llama 3.1 are comparatively analyzed, with Llama 3.1 producing the highest quality scenarios based on a series of metrics, including LLM-as-a-judge
Dynamic Pricing for Revenue Maximization in 5G Networks with Elastic Network Slicing and Reinforcement Learning
Network slicing enables a diverse array of network applications with widely varying service requirements. For the scope of this research, we delve into elastic network slicing, which can benefit both providers and users through cost-effective resource utilization. Dynamic pricing of slice resources is a method for operators to balance different types of network slices by implicitly communicating the current network state to slice users. We have designed a custom pricing scheme using Deep Reinforcement Learning for elastic network slices that maximizes the revenue of slice providers while meeting the users’ slice service requirements. Our experimental results indicate that a balanced distribution of different slice types, realized by our dynamic pricing, increases the total revenue of a slice provider without violating the service level agreement with slice users
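The revenue-versus-SLA trade-off at the heart of such a pricing scheme can be made concrete with a toy reward function of the kind an RL agent might maximize. This is a hypothetical sketch, not the study's actual formulation: the linear demand curve, capacity figure, and penalty weight are invented for illustration.

```python
def pricing_reward(price, demand_fn, sla_capacity, penalty=10.0):
    # Revenue from slice resources sold at the chosen price, minus a
    # penalty whenever demand at that price exceeds the capacity the
    # provider can serve without violating the SLA.
    demand = demand_fn(price)
    served = min(demand, sla_capacity)
    revenue = price * served
    violation = max(0, demand - sla_capacity)
    return revenue - penalty * violation

# Hypothetical linear demand: higher prices dampen slice requests.
demand = lambda p: 10 - 2 * p

moderate = pricing_reward(2.0, demand, sla_capacity=8)  # within capacity
too_low = pricing_reward(0.5, demand, sla_capacity=8)   # over-subscribed
```

A DRL agent trained against such a reward learns that underpricing attracts more demand than the network can carry, while the price itself signals the current network state to users.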
Dynamic Imprints of Colliding-wind Dust Formation from WR 140
Carbon-rich Wolf-Rayet (WR) binaries are a prominent source of carbonaceous dust that contribute to the dust budget of galaxies. The “textbook” example of an episodic dust-producing WR binary, WR 140 (HD 193793), provides us with an ideal laboratory for investigating the dust physics and kinematics in an extreme environment. This study is among the first to utilize two separate JWST observations, from Cycle 1 ERS (2022 July) and Cycle 2 (2023 September), to measure WR 140’s dust kinematics and confirm its morphology. To measure the proper motions and projected velocities of the dust shells, we performed a novel point-spread function (PSF) subtraction to reduce the effects of the bright diffraction spikes and carefully aligned the Cycle 2 to the Cycle 1 images. At 7.7 μm, through the bright feature common to 16 dust shells (C1), we find an average dust shell proper motion of 390 ± 29 mas yr−1, which equates to a projected velocity of 2714 ± 188 km s−1 at a distance of 1.64 kpc. Our measured speeds are constant across all visible shells and consistent with previously reported dust expansion velocities. Our observations not only prove that these dusty shells are astrophysical (i.e., not associated with any PSF artifact) and originate from WR 140, but also confirm the “clumpy” morphology of the dust shells, in which identifiable substructures within certain shells persist for at least 14 months from one cycle to the next. These results support the hypothesis that clumping in the wind collision region is required for dust production in WR binaries
FRAMEWORK FOR IDENTITY PRIVACY THROUGH GENDER BASED SKELETONIZATION
The protection of one’s privacy and sensitive information is becoming increasingly difficult in a modern age full of surveillance and data collection. Through the use of image-based object detection machine learning models trained for human and facial recognition, people can be identified and tracked to a terrifyingly accurate degree. On the other hand, the information present in surveillance media can play a key role in security and law enforcement. This presents the problem of how to preserve key information without compromising the privacy of any individuals present in the video. In this research project, computer vision techniques and a collection of machine learning models are used to replace the bodies of people present in a video with a gendered skeleton representation. The Ultralytics YOLO CNN object detection model detects the people in the video. The DeepSORT deep learning object tracking model accurately tracks and assigns a unique ID to each person. An open-source HuggingFace CNN gender detection model assigns a gender to each detected person. Finally, the Google MediaPipe pose landmark detection model generates a skeleton representation of each detected person. Using this technique, personally identifiable features, such as facial features, skin tone, and clothing, can be hidden while preserving gender and movement data
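The per-frame data flow of the pipeline above can be sketched with plain data structures. This is a simplified stand-in: the `Detection` record and `anonymize_frame` helper are hypothetical names, and the real pipeline would populate them from YOLO (boxes), DeepSORT (track IDs), the gender classifier, and MediaPipe (pose landmarks), then mask the original pixels.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    box: tuple       # (x1, y1, x2, y2) bounding box in pixels (from YOLO)
    track_id: int    # stable per-person ID (from DeepSORT)
    gender: str      # "male" or "female" label (from the gender classifier)
    keypoints: list  # (x, y) pose landmarks (from MediaPipe)

def anonymize_frame(detections, male_color=(255, 0, 0), female_color=(0, 0, 255)):
    # Replace each tracked person with a gendered skeleton: only the
    # track ID, a gender-coded color, and the pose keypoints survive;
    # the raw pixels inside each box would be masked in the real pipeline.
    skeletons = []
    for det in detections:
        color = male_color if det.gender == "male" else female_color
        skeletons.append({"id": det.track_id, "color": color,
                          "points": det.keypoints})
    return skeletons

frame_dets = [
    Detection((0, 0, 10, 20), 1, "male", [(5, 5), (5, 15)]),
    Detection((30, 0, 40, 20), 2, "female", [(35, 5), (35, 15)]),
]
skeletons = anonymize_frame(frame_dets)
```

Because only IDs, colors, and keypoints are emitted, movement and gender are preserved while faces, skin tone, and clothing are dropped.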
Domain-Specific Graph RAG Pipelines: Optimized Approaches for Building Efficient Personal Knowledge Repositories
Managing personal data, including notes, calendar events, to-do lists, and other personal information, has become increasingly complex and challenging. In response, I propose a framework using retrieval-augmented generation (RAG) that enables a large language model (LLM) to efficiently query this data without requiring training on the personal data itself. Conventional retrieval systems, including those leveraging vector-based RAG, are effective at handling basic queries but struggle to deliver coherent global abstractions, integrate diverse knowledge sources, and account for temporal nuances. This research explores domain-specific graph-based RAG frameworks that incorporate a knowledge graph to better model relationships, thereby enabling more comprehensive reasoning. By optimizing graph construction and modeling temporal entities for enhanced understanding, this research aims to advance the capabilities of RAG systems beyond the limitations of vector-based approaches for personal knowledge management
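The advantage of a knowledge graph over flat vector retrieval — relationship modeling and temporal filtering — can be illustrated with a minimal in-memory graph. The `PersonalKG` class, its entity names, and the timestamps below are all invented for the example; a production graph RAG pipeline would use a real graph store and LLM-driven entity extraction.

```python
from collections import defaultdict

class PersonalKG:
    # Minimal knowledge graph: edges are (relation, object, timestamp)
    # tuples keyed by subject, so queries can filter by relation or time.
    def __init__(self):
        self.edges = defaultdict(list)

    def add(self, subj, rel, obj, ts):
        self.edges[subj].append((rel, obj, ts))

    def neighbors(self, subj, rel=None, since=None):
        return [o for r, o, t in self.edges[subj]
                if (rel is None or r == rel) and (since is None or t >= since)]

kg = PersonalKG()
kg.add("meeting:standup", "scheduled_on", "2024-03-01", "2024-02-28")
kg.add("meeting:standup", "mentions", "note:launch-plan", "2024-03-01")
kg.add("note:launch-plan", "references", "todo:ship-v2", "2024-03-01")

# Two-hop traversal that vector similarity alone cannot express:
# which to-dos are reachable from the standup meeting via its notes?
hop1 = kg.neighbors("meeting:standup", rel="mentions")
todos = [t for n in hop1 for t in kg.neighbors(n, rel="references")]
```

The traversed subgraph, rather than isolated text chunks, would then be serialized into the LLM prompt, which is what enables the global, temporally aware reasoning the abstract targets.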
DISEASE DIAGNOSIS USING RAG LLM WITH SMART PROMPT ENGINEERING
Although recent trends indicate that LLMs outperform traditional methods in solving complex problems with enhanced reasoning, little progress has been made in replicating the quality of diagnoses produced by human doctors. Identifying an accurate diagnosis with thorough reasoning remains a significant challenge, even for advanced AI models. Accurate diagnosis remains difficult because today's state-of-the-art models lack transparency, provide little explanation of the diagnostic process, emphasize results over reasoning, lack foundational medical knowledge, and explore diseases only to a limited extent. To address these problems, we propose a RAG model with smart prompt engineering to develop a sound medical diagnostic agent. The rapidly evolving techniques in LLMs, particularly RAG models, have shown promising results in processing and interpreting complex data. Using smart prompt engineering techniques and a RAG framework with strong databases, we have achieved desirable results and enhanced diagnostic reasoning performance. DeepSeek-R1-Distill-Qwen-7B, Mixtral-8x7B, and MedAlpaca were used as RAG models with PubMed articles and PMC patient data, varying the number of retrieved documents. We observe an average increase of 15% in accuracy scores when we introduce relevant documents into the RAG framework. Additionally, prompt engineering guides the formulation of differential diagnoses and chain-of-thought inference. This study is the first of its kind to characterize how model performance varies with the number of documents in the RAG framework