93 research outputs found

    A Mobile Terrain Mapping Robot

    Get PDF
    The purpose of this project is to develop a user controlled mobile robot that maps an area of terrain such as a floor of a building. The motivation for this project is to build a robot that can perform tasks that are too menial, difficult, or dangerous to be performed by humans. The robot is a radio-controlled truck with a computer system and sensory equipment attached to it. The user, who stands in the room with the robot, drives the robot through a room with a radio control system while the robot uses its rotating SONAR to detect the surrounding terrain. The robot records its position with every point of the SONAR data so that it can accurately modify a map stored in the memory of its computer system. It calculated this position with a compass and an optical wheel encoder. The time required to map an entire room is anticipated to be less than an hour. Objects such as walls, doorways, trashcans, desks, etc. will be mapped. SONAR data is in the form of a time required for sound to travel to an object and be bounced back. An object\u27s distance from the truck is calculated by knowing the time measurement and the speed that sound travels in air under standard conditions. The SONAR transducer is mounted to a servomotor mast that is controlled by an algorithm so that its direction is has traveled in. After this data is transmitted to the computer a map of the area can be generated. The success of this project will be determined by the accuracy of the map generated

    A new mild hyperthermia device to treat vascular involvement in cancer surgery

    Get PDF
    Abstract Surgical margin status in cancer surgery represents an important oncologic parameter affecting overall prognosis. The risk of disease recurrence is minimized and survival often prolonged if margin-negative resection can be accomplished during cancer surgery. Unfortunately, negative margins are not always surgically achievable due to tumor invasion into adjacent tissues or involvement of critical vasculature. Herein, we present a novel intra-operative device created to facilitate a uniform and mild heating profile to cause hyperthermic destruction of vessel-encasing tumors while safeguarding the encased vessel. We use pancreatic ductal adenocarcinoma as an in vitro and an in vivo cancer model for these studies as it is a representative model of a tumor that commonly involves major mesenteric vessels. In vitro data suggests that mild hyperthermia (41–46 °C for ten minutes) is an optimal thermal dose to induce high levels of cancer cell death, alter cancer cell’s proteomic profiles and eliminate cancer stem cells while preserving non-malignant cells. In vivo and in silico data supports the well-known phenomena of a vascular heat sink effect that causes high temperature differentials through tissues undergoing hyperthermia, however temperatures can be predicted and used as a tool for the surgeon to adjust thermal doses delivered for various tumor margins

    Question Decomposition Improves the Faithfulness of Model-Generated Reasoning

    Full text link
    As large language models (LLMs) perform more difficult tasks, it becomes harder to verify the correctness and safety of their behavior. One approach to help with this issue is to prompt LLMs to externalize their reasoning, e.g., by having them generate step-by-step reasoning as they answer a question (Chain-of-Thought; CoT). The reasoning may enable us to check the process that models use to perform tasks. However, this approach relies on the stated reasoning faithfully reflecting the model's actual reasoning, which is not always the case. To improve over the faithfulness of CoT reasoning, we have models generate reasoning by decomposing questions into subquestions. Decomposition-based methods achieve strong performance on question-answering tasks, sometimes approaching that of CoT while improving the faithfulness of the model's stated reasoning on several recently-proposed metrics. By forcing the model to answer simpler subquestions in separate contexts, we greatly increase the faithfulness of model-generated reasoning over CoT, while still achieving some of the performance gains of CoT. Our results show it is possible to improve the faithfulness of model-generated reasoning; continued improvements may lead to reasoning that enables us to verify the correctness and safety of LLM behavior.Comment: For few-shot examples and prompts, see https://github.com/anthropics/DecompositionFaithfulnessPape

    Salve Regina University Act on Climate: Strategic Plan for the University to Reach State Carbon Neutrality Goals

    Get PDF
    In order to become more sustainable and meet the mandate set by the 2021 Rhode Island Act on Climate law (RI General Law §42-6.2), Salve Regina University must work to reach net-zero greenhouse gas emissions by the year 2050. Action to meet these standards begins now and must be continually built upon to ensure that Salve Regina University, as leader in Rhode Island, is always working for a more sustainable future. Throughout the Spring 2022 semester, students of the BIO-140: Humans and Their Environment course instructed by Dr. Jameson Chace have researched ways in which Salve Regina can begin on the path to zero greenhouse gas emissions today. By focusing on change in the areas of energy, transportation, food, financial investments, and sequestration, Salve Regina can reduce the greenhouse gas emissions of today for a more sustainable tomorrow. Recommendations are broken into three time periods. Action for today to achieve by 2030 include improving energy efficiency, installing the first electric vehicle (EV) parking/charging stations, increasing carbon sequestration, reducing beef in the campus diet, and assessing the carbon impact of university financial holdings. Actions to be initiated soon and to be achieved by 2040 include shifting away from natural gas heating when system renewals take place, increasing EV parking to meet rising demand, during turnover replace current university vehicles with electric or hybrid, continuing with sequestration efforts on campus, begin phasing out high carbon diet items, and by 2040 the university investment portfolio should be carbon neutral. If carbon neutrality can be reached by 2050 the most challenging aspects of campus life that need to change will require planning now and thoughtful implementation. The class in 2022 envisions a campus in 2050 where solar lights illuminate campus and buildings through the night, all university vehicles and most faculty and staff vehicles are electric and are found charging during the day at solar powered charging stations, dining services in Miley supports community agriculture and includes incentives for meatless and low carbon meal plans, the university has become a leader in low carbon/green market investing demonstrating how careful planning can reap high returns, and carbon sequestration on campus grounds has maximized such that off campus carbon offsets are established with local land trusts to complete the carbon neutrality goals. In doing so no only will the university be recognized as a state-wide leader in climate action, but will also be a global leader in working towards a world that is more harmonious, just, and merciful.https://digitalcommons.salve.edu/bio140_arboretum/1033/thumbnail.jp

    Specific versus General Principles for Constitutional AI

    Full text link
    Human feedback can prevent overtly harmful utterances in conversational models, but may not automatically mitigate subtle problematic behaviors such as a stated desire for self-preservation or power. Constitutional AI offers an alternative, replacing human feedback with feedback from AI models conditioned only on a list of written principles. We find this approach effectively prevents the expression of such behaviors. The success of simple principles motivates us to ask: can models learn general ethical behaviors from only a single written principle? To test this, we run experiments using a principle roughly stated as "do what's best for humanity". We find that the largest dialogue models can generalize from this short constitution, resulting in harmless assistants with no stated interest in specific motivations like power. A general principle may thus partially avoid the need for a long list of constitutions targeting potentially harmful behaviors. However, more detailed constitutions still improve fine-grained control over specific types of harms. This suggests both general and specific principles have value for steering AI safely

    Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training

    Full text link
    Humans are capable of strategically deceptive behavior: behaving helpfully in most situations, but then behaving very differently in order to pursue alternative objectives when given the opportunity. If an AI system learned such a deceptive strategy, could we detect it and remove it using current state-of-the-art safety training techniques? To study this question, we construct proof-of-concept examples of deceptive behavior in large language models (LLMs). For example, we train models that write secure code when the prompt states that the year is 2023, but insert exploitable code when the stated year is 2024. We find that such backdoor behavior can be made persistent, so that it is not removed by standard safety training techniques, including supervised fine-tuning, reinforcement learning, and adversarial training (eliciting unsafe behavior and then training to remove it). The backdoor behavior is most persistent in the largest models and in models trained to produce chain-of-thought reasoning about deceiving the training process, with the persistence remaining even when the chain-of-thought is distilled away. Furthermore, rather than removing backdoors, we find that adversarial training can teach models to better recognize their backdoor triggers, effectively hiding the unsafe behavior. Our results suggest that, once a model exhibits deceptive behavior, standard techniques could fail to remove such deception and create a false impression of safety.Comment: updated to add missing acknowledgement
    corecore