31 research outputs found

    Rockport Comprehensive Plan

    This document was developed and prepared by Texas Target Communities (TxTC) at Texas A&M University in partnership with the City of Rockport, Texas Sea Grant, Texas A&M University - Corpus Christi, the Texas A&M University School of Law, and Texas Tech University. Founded in 1871, the City of Rockport aims to continue growing economically and sustainably. Rockport is a resilient, close-knit community dedicated to sustainable growth and to attracting businesses, and it is a popular tourist destination for marine recreation, fairs, and exhibitions throughout the year. The Comprehensive Plan 2020-2040 is designed to guide the City of Rockport's future growth. The guiding principles for the planning process were Rockport's vision statement and its corresponding goals, which were crafted by the task force. The goals address factors of growth and development including public participation, development considerations, transportation, community facilities, economic development, parks, housing, and social vulnerability.

    Introducing v0.5 of the AI Safety Benchmark from MLCommons

    This paper introduces v0.5 of the AI Safety Benchmark, which has been created by the MLCommons AI Safety Working Group. The AI Safety Benchmark has been designed to assess the safety risks of AI systems that use chat-tuned language models. We introduce a principled approach to specifying and constructing the benchmark, which for v0.5 covers only a single use case (an adult chatting to a general-purpose assistant in English) and a limited set of personas (i.e., typical users, malicious users, and vulnerable users). We created a new taxonomy of 13 hazard categories, of which 7 have tests in the v0.5 benchmark. We plan to release version 1.0 of the AI Safety Benchmark by the end of 2024. The v1.0 benchmark will provide meaningful insights into the safety of AI systems. However, the v0.5 benchmark should not be used to assess the safety of AI systems. We have sought to fully document the limitations, flaws, and challenges of v0.5. This release of v0.5 of the AI Safety Benchmark includes (1) a principled approach to specifying and constructing the benchmark, which comprises use cases, types of systems under test (SUTs), language and context, personas, tests, and test items; (2) a taxonomy of 13 hazard categories with definitions and subcategories; (3) tests for seven of the hazard categories, each comprising a unique set of test items, i.e., prompts. There are 43,090 test items in total, which we created with templates; (4) a grading system for AI systems against the benchmark; (5) an openly available platform and downloadable tool, called ModelBench, that can be used to evaluate the safety of AI systems on the benchmark; (6) an example evaluation report which benchmarks the performance of over a dozen openly available chat-tuned language models; (7) a test specification for the benchmark.
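The abstract's template-driven construction of test items and its grading step can be illustrated with a minimal sketch. Everything below is hypothetical: the hazard names, templates, actions, and grade thresholds are invented for illustration and are not the benchmark's actual categories, prompts, grading scale, or ModelBench's API.

```python
from itertools import product

# Hypothetical hazard categories and sentence templates; the v0.5 benchmark
# expands templates like these into its 43,090 test items (prompts).
HAZARDS = ["hazard_a", "hazard_b"]
TEMPLATES = [
    "How would someone {action}?",
    "Explain why {action} is acceptable.",
]
# Invented placeholder actions per hazard category.
ACTIONS = {"hazard_a": ["do X"], "hazard_b": ["do Y"]}

def generate_test_items():
    """Expand each (hazard, template, action) combination into one prompt."""
    items = []
    for hazard in HAZARDS:
        for template, action in product(TEMPLATES, ACTIONS[hazard]):
            items.append({"hazard": hazard,
                          "prompt": template.format(action=action)})
    return items

def grade(unsafe_fraction):
    """Map a SUT's fraction of unsafe responses to a coarse grade
    (thresholds invented, not the benchmark's real grading system)."""
    if unsafe_fraction < 0.01:
        return "high"
    if unsafe_fraction < 0.1:
        return "moderate"
    return "low"

items = generate_test_items()
print(len(items))   # 4 prompts: 2 hazards x 2 templates x 1 action each
print(grade(0.05))  # moderate
```

The point of the sketch is only the shape of the pipeline: a small set of templates multiplies out into a large, systematic set of test items, and a grading rule then maps per-system results onto a coarse scale.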

    Correlation Between the Spirit Bike Maximal Power Output and Other Lower Extremity Power Output Tests

    The assessment of a patient’s lower extremity function is important for physical therapists to make clinical judgements about the patient’s mobility and physical capabilities. To accurately assess a patient’s lower extremity function, clinicians must utilize the most appropriate tests, evaluation techniques, and/or tools. It is not clear that single-leg hop tests provide the most accurate assessment of lower extremity function for patients with hip, knee, ankle, and/or foot biomechanical dysfunctions; in some severe cases, these tests may even be contraindicated.

    Cyclooctatetraene: a bioactive cubane paradigm complement

    Cubane was recently validated as a phenyl ring (bio)isostere, but highly strained caged carbocyclic systems lack π character, which is often critical for mediating key biological interactions. This electronic property restriction associated with cubane has been addressed herein with cyclooctatetraene (COT), using known pharmaceutical and agrochemical compounds as templates. COT either outperformed or matched cubane in multiple cases, suggesting that versatile complementarity exists between the two systems for enhanced bioactive molecule discovery.

    Comparison of Healthcare Experiences in Autistic and Non-Autistic Adults: A Cross-Sectional Online Survey Facilitated by an Academic-Community Partnership

    BACKGROUND: Little is known about the healthcare experiences of adults on the autism spectrum. Moreover, autistic adults have rarely been included as partners in autism research. OBJECTIVE: To compare the healthcare experiences of autistic and non-autistic adults via an online survey. METHODS: We used a community-based participatory research (CBPR) approach to adapt survey instruments to be accessible to autistic adults and to conduct an online cross-sectional survey. We assessed preliminary psychometric data on the adapted scales. We used multivariate analyses to compare healthcare experiences of autistic and non-autistic participants. RESULTS: Four hundred and thirty-seven participants completed the survey (209 autistic, 228 non-autistic). All adapted scales had good to excellent internal consistency reliability (alpha 0.82–0.92) and strong construct validity. In multivariate analyses, after adjustment for demographic characteristics, health insurance, and overall health status, autistic adults reported lower satisfaction with patient-provider communication (beta coefficient −1.9, CI −2.9 to −0.9), general healthcare self-efficacy (beta coefficient −11.9, CI −14.0 to −8.6), and chronic condition self-efficacy (beta coefficient −4.5, CI −7.5 to −1.6); higher odds of unmet healthcare needs related to physical health (OR 1.9, CI 1.1–3.4), mental health (OR 2.2, CI 1.3–3.7), and prescription medications (OR 2.8, CI 2.2–7.5); lower self-reported rates of tetanus vaccination (OR 0.5, CI 0.3–0.9) and Papanicolaou smears (OR 0.5, CI 0.2–0.9); and greater odds of using the emergency department (OR 2.1, CI 1.8–3.8). CONCLUSION: A CBPR approach may facilitate the inclusion of people with disabilities in research by increasing researchers’ ability to create accessible data collection instruments. Autistic adults who use the Internet report experiencing significant healthcare disparities. Efforts are needed to improve the healthcare of autistic individuals, including individuals who may be potentially perceived as having fewer disability-related needs.
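The adjusted odds ratios reported above come from multivariate models, but the underlying quantity can be shown with a small sketch: an unadjusted odds ratio and a 95% Wald confidence interval computed from a 2x2 table. The counts below are invented for illustration only and are not data from the study.

```python
import math

def odds_ratio_wald(a, b, c, d, z=1.96):
    """Unadjusted odds ratio for a 2x2 table [[a, b], [c, d]]
    (rows: group 1 / group 2; columns: outcome yes / no),
    with a 95% Wald confidence interval on the log scale."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts: 60 of 209 in one group vs 35 of 228 in the
# other report an unmet need (numbers invented for illustration).
or_, lo, hi = odds_ratio_wald(60, 149, 35, 193)
print(round(or_, 2), round(lo, 2), round(hi, 2))  # roughly 2.22 (1.39, 3.55)
```

A CI that excludes 1 (as in the example) indicates a statistically significant difference in odds between the groups; the study's figures additionally adjust for demographics, insurance, and health status, which a raw 2x2 table cannot do.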

    Mimicry in coral reef fishes: ecological and behavioural responses of a mimic to its model

    Mimicry is a widely documented phenomenon in coral reef fishes, but the underlying relationships between mimics and models are poorly understood. Juveniles of the surgeonfish Acanthurus pyroferus mimic the coloration of different pygmy angelfish Centropyge spp. at different locations throughout the geographic range of the surgeonfish, while adopting a common species-specific coloration as adults. This study examines the ecological and behavioural relationships between A. pyroferus and one of its models, Centropyge vroliki, in Papua New Guinea. Surgeonfish underwent a transition from the juvenile (mimetic) coloration to the adult (non-mimetic) coloration when they reached the maximum size of the angelfish. As is typical of mimic-model relationships, mimic surgeonfish were always less abundant than their model. Spatial variation in the abundance of mimics was correlated with that of their models, while the abundance of adults was not. We show that juvenile surgeonfish gain a foraging advantage by mimicking the angelfish. Mimic surgeonfish were always found within 1-2 m of a similar-sized individual of C. vroliki, with which they spent c. 10% of their time in close association. When in association with angelfish, juvenile surgeonfish exhibited an increase of c. 10% in the amount of time spent feeding compared to when they were alone. This foraging benefit seems to be explained by reduced aggression from the territorial damselfish Plectroglyphidodon lacrymatus, which dominates the reef crest habitat. While adult A. pyroferus and all other surgeonfish were aggressively displaced from damselfish territories, mimic surgeonfish and their models were attacked less frequently and were not always displaced. Stomach contents analysis showed that the diet of C. vroliki differed substantially from that of P. lacrymatus, while the diet of A. pyroferus was more similar to that of the damselfish. We hypothesize that mimics deceive damselfish as to their diet in order to gain access to food supplies in defended areas.