SwissNYF: Tool Grounded LLM Agents for Black Box Setting
While Large Language Models (LLMs) have demonstrated enhanced capabilities in
function-calling, these advancements primarily rely on accessing the functions'
responses. This methodology is practical for simpler APIs but faces scalability
issues with irreversible APIs that significantly impact the system, such as a
database deletion API. Similarly, processes requiring extensive time for each
API call and those necessitating forward planning, like automated action
pipelines, present complex challenges. Furthermore, a generalized approach is
often needed because algorithms lack direct access to the functions' specific
implementations or to the credentials required to invoke them. Traditional
tool-planning methods are inadequate in these cases, compelling operation
within black-box environments. In contrast to direct tool manipulation, LLMs
excel at tasks suited to black-box settings, such as program synthesis.
Therefore, we harness the program synthesis capabilities of LLMs to strategize
tool usage in black-box settings, ensuring solutions are verified prior to
implementation. We introduce TOPGUN, an ingeniously crafted approach that
leverages program synthesis for black-box tool planning. It is accompanied by
SwissNYF, a comprehensive suite that integrates black-box algorithms for
planning and verification, addressing the aforementioned challenges and
enhancing the versatility and effectiveness of LLMs in complex API
interactions. The public
code for SwissNYF is available at https://github.com/iclr-dummy-user/SwissNYF
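The core idea of the abstract, synthesizing a program over tool signatures and verifying it in a sandbox before any irreversible API is really invoked, can be sketched as follows. All names here (`plan`, `verify_plan`, the mock tools) are illustrative assumptions for this sketch, not the actual SwissNYF/TOPGUN API.

```python
def delete_record(db, record_id):
    """Mock of an irreversible database-deletion API; in the black-box
    setting the real implementation is never called during verification."""
    db.discard(record_id)
    return record_id not in db

def list_records(db):
    """Mock of a read-only listing API."""
    return sorted(db)

def plan(db):
    """A candidate 'program' (hand-written here; in the paper's setting it
    would be synthesized by an LLM) composed only of tool calls."""
    for rid in list_records(db):
        if rid < 0:                     # delete invalid (negative) ids
            delete_record(db, rid)
    return list_records(db)

def verify_plan(plan_fn, initial_state, expected_state):
    """Execute the plan against mock tools on a copy of the state, so the
    irreversible operations are checked before touching real systems."""
    sandbox = set(initial_state)        # copy: the real state is untouched
    result = plan_fn(sandbox)
    return result == sorted(expected_state)

state = {3, -1, 7, -5}
ok = verify_plan(plan, state, {3, 7})
print("plan verified:", ok)             # prints: plan verified: True
print("real state intact:", state == {3, -1, 7, -5})
```

The design choice mirrors the abstract: because verification runs only against stubs and a copied state, even a plan containing destructive calls can be rejected or accepted without side effects on the live system.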
Enabling Instantaneous Switch from Default Multicast Distribution Tree to Data Multicast Distribution Tree for Mobility and Multihoming in Ethernet Virtual Private Network Tenant Routed Multicast
Techniques are described for providing an optimized way of switching from a default Multicast Distribution Tree (MDT) to a data MDT to handle two key use-cases in Ethernet Virtual Private Network (EVPN) / Tenant Routed Multicast (TRM). The first key use-case is an Ethernet Segment Identifier (ESI) failover over a multihomed network in an EVPN fabric. The second use-case is host mobility.
BLOOM: A 176B-Parameter Open-Access Multilingual Language Model
Large language models (LLMs) have been shown to be able to perform new tasks
based on a few demonstrations or natural language instructions. While these
capabilities have led to widespread adoption, most LLMs are developed by
resource-rich organizations and are frequently kept from the public. As a step
towards democratizing this powerful technology, we present BLOOM, a
176B-parameter open-access language model designed and built thanks to a
collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer
language model that was trained on the ROOTS corpus, a dataset comprising
hundreds of sources in 46 natural and 13 programming languages (59 in total).
We find that BLOOM achieves competitive performance on a wide variety of
benchmarks, with stronger results after undergoing multitask prompted
finetuning. To facilitate future research and applications using LLMs, we
publicly release our models and code under the Responsible AI License.
Noninvasive Characterisation of Foot Reflexology Areas by Swept Source-Optical Coherence Tomography in Patients with Low Back Pain
Objective. When exploring the scientific basis of reflexology techniques, elucidation of the surface and subsurface features of reflexology areas (RAs) is crucial. In this study, the subcutaneous features of RAs related to the lumbar vertebrae were evaluated by swept source-optical coherence tomography (SS-OCT) in subjects with and without low back pain (LBP). Methods. Volunteers without LBP (n=6; male:female = 1:1) and subjects with LBP (n=15; male:female = 2:3) were clinically examined in terms of skin colour (visual perception), localised tenderness (visual analogue scale), and structural as well as optical attributes as per SS-OCT. From each subject, 6 optical tomograms were recorded from equidistant transverse planes along the longitudinal axis of the RAs, and from each tomogram, 25 different spatial locations were considered for recording SS-OCT image attributes. The images were analysed with respect to the optical intensity distributions and thicknesses of different skin layers using AxioVision Rel. 4.8.2 software. The SS-OCT images could be categorised into 4 pathological grades (i.e., 0, 1, 2, and 3) according to the distinctness of the visible skin layers. Results. Three specific grades of abnormality in SS-OCT images were identified, considering gradual loss of distinctness and increase in luminosity of skin layers. Almost 90.05% of subjects were of mixed type, with predominance of certain grades. Conclusion. The skin SS-OCT system demonstrated a definite association of the surface features of healthy/unhealthy RAs with cutaneous features and the clinical status of the lumbar vertebrae.