Clinical measurement of patellar tendon: accuracy and relationship to surgical tendon dimensions.
Patellar tendon width and length are commonly used for preoperative planning for anterior cruciate ligament reconstruction (ACLR). In the study reported here, we assessed the accuracy of preoperative measurements made by palpation through the skin, and correlated these measurements with the actual dimensions of the tendons at surgery. Before making incisions in 53 patients undergoing ACLR with patellar tendon autograft, we measured patellar tendon length with the knee in full extension and in 90° of flexion, and tendon width with the knee in 90° of flexion. The tendon was then exposed, and its width was measured with the knee in 90° of flexion. The length of the central third of the tendon was measured after the graft was prepared. Mean patellar tendon length and width with the knee in 90° of flexion were 39 mm and 32 mm, respectively. No clinical difference was found between the estimated pre-incision and surgical widths. However, the estimated pre-incision length with the knee in full extension and in 90° of flexion was significantly shorter than the surgical length. Skin measurements can be used to accurately determine patellar tendon width before surgery, but measurements of length are not as reliable.
Using the Triple Bottom Line to Select Sustainable Suppliers for a Major Oil and Gas Company
Companies have primarily focused on the financial bottom line, i.e., on increasing profits by increasing revenues and reducing costs. With high energy usage and environmental change posing threats to both the environment and business operations, companies are now considering sustainability. Because some global suppliers rely on low-cost labor, social well-being and human development have also emerged as major goals for companies with global operations. Focusing on these three goals together is termed the Triple Bottom Line (TBL). We explore the TBL benefits that an oil and gas company could realize by focusing on sustainable suppliers: a company with a global supply chain cannot be sustainable without sustainable suppliers. This thesis develops the business case for sustainable suppliers using the TBL and presents the benefits of integrating sustainable suppliers into the supply chain. We consider a major oil and gas company and use multi-objective decision analysis to perform the analysis.
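The multi-objective decision analysis mentioned above can be illustrated with a minimal sketch. This is not the thesis's actual model; the supplier names, criterion scores, and weights below are hypothetical, and the method shown is simple additive weighting, one common form of multi-objective scoring across the three TBL dimensions.

```python
def tbl_score(scores, weights):
    """Weighted additive value across financial, environmental, and social criteria."""
    return sum(scores[c] * weights[c] for c in weights)

# Hypothetical decision-maker weights over the three TBL dimensions (sum to 1).
weights = {"financial": 0.5, "environmental": 0.3, "social": 0.2}

# Hypothetical normalized supplier scores in [0, 1] on each criterion.
suppliers = {
    "Supplier A": {"financial": 0.9, "environmental": 0.4, "social": 0.5},
    "Supplier B": {"financial": 0.7, "environmental": 0.8, "social": 0.7},
}

# Rank suppliers by aggregate TBL value, best first.
ranked = sorted(suppliers, key=lambda s: tbl_score(suppliers[s], weights), reverse=True)
```

With these made-up numbers, the supplier with the stronger environmental and social scores outranks the one with the best purely financial score, which is the point of moving beyond the single financial bottom line.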
Representation learning for neural population activity with Neural Data Transformers
Neural population activity is theorized to reflect an underlying dynamical
structure. This structure can be accurately captured using state space models
with explicit dynamics, such as those based on recurrent neural networks
(RNNs). However, using recurrence to explicitly model dynamics necessitates
sequential processing of data, slowing real-time applications such as
brain-computer interfaces. Here we introduce the Neural Data Transformer (NDT),
a non-recurrent alternative. We test the NDT's ability to capture autonomous
dynamical systems by applying it to synthetic datasets with known dynamics and
data from monkey motor cortex during a reaching task well-modeled by RNNs. The
NDT models these datasets as well as state-of-the-art recurrent models.
Further, its non-recurrence enables 3.9 ms inference, well within the loop time
of real-time applications and more than 6 times faster than recurrent baselines
on the monkey reaching dataset. These results suggest that an explicit dynamics
model is not necessary to model autonomous neural population dynamics. Code:
https://github.com/snel-repo/neural-data-transformer
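The parallelism the abstract credits for the NDT's fast inference can be sketched with a toy single-head self-attention. This is not the actual NDT (which uses learned projections, multiple layers, and a masked-modeling objective); it only shows the structural point that, unlike an RNN, every time bin attends to every other in one pass, with no sequential dependence between steps. The "spike count" values are made up.

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def self_attention(seq):
    """Minimal single-head self-attention (identity Q/K/V projections).

    Each time step's output is computed from the whole sequence at once --
    there is no recurrent state carried from step to step.
    """
    T, d = len(seq), len(seq[0])
    out = []
    for q in seq:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in seq]
        w = softmax(scores)
        out.append([sum(w[t] * seq[t][j] for t in range(T)) for j in range(d)])
    return out

# Toy "binned spike counts": 4 time bins x 3 channels (hypothetical values).
bins = [[2.0, 0.0, 1.0], [1.0, 1.0, 0.0], [0.0, 2.0, 1.0], [1.0, 0.0, 2.0]]
smoothed = self_attention(bins)
```

Because the loop over query positions has no dependence between iterations, it can be batched into a single matrix operation on parallel hardware, whereas an RNN must process the T bins one after another.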
lfads-torch: A modular and extensible implementation of latent factor analysis via dynamical systems
Latent factor analysis via dynamical systems (LFADS) is an RNN-based
variational sequential autoencoder that achieves state-of-the-art performance
in denoising high-dimensional neural activity for downstream applications in
science and engineering. Recently introduced variants and extensions continue
to demonstrate the applicability of the architecture to a wide variety of
problems in neuroscience. Since the development of the original implementation
of LFADS, new technologies have emerged that use dynamic computation graphs,
minimize boilerplate code, compose model configuration files, and simplify
large-scale training. Building on these modern Python libraries, we introduce
lfads-torch -- a new open-source implementation of LFADS that unifies existing
variants and is designed to be easier to understand, configure, and extend.
Documentation, source code, and issue tracking are available at
https://github.com/arsedler9/lfads-torch .
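The config-composition idea the abstract mentions can be sketched in a few lines. This is the general pattern (popularized by libraries such as Hydra), not lfads-torch's actual configuration API; the keys and values below are hypothetical.

```python
def compose(base, *overrides):
    """Recursively merge override dicts onto a base config, later dicts winning."""
    out = dict(base)
    for ov in overrides:
        for key, val in ov.items():
            if isinstance(val, dict) and isinstance(out.get(key), dict):
                out[key] = compose(out[key], val)  # merge nested sections
            else:
                out[key] = val  # override scalar or replace non-dict
    return out

# Hypothetical base config and per-experiment override.
base = {"model": {"latent_dim": 64, "dropout": 0.1}, "train": {"lr": 1e-3}}
experiment = {"model": {"latent_dim": 100}}

cfg = compose(base, experiment)
```

The benefit is that an experiment file states only what differs from the defaults, so large sweeps over model variants stay small and readable.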
