Wanted: standards for automatic reproducibility of computational experiments
Those seeking to reproduce a computational experiment often need to manually
inspect the code to learn how to build the necessary libraries, configure
parameters, find data, and invoke the experiment; reproduction is not
automatic. Automatic reproducibility is a more stringent goal, but working
towards it would benefit the community. This work discusses a machine-readable
language for specifying how to execute a computational experiment. We invite
interested stakeholders to discuss this language at
https://github.com/charmoniumQ/execution-description .
Comment: Submitted to SE4RS'23, Portland, OR
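To make the idea concrete, here is a purely illustrative sketch, in Python, of the kind of information such an execution description might capture (build steps, data locations, parameters, and an invocation command). The actual language is still under discussion at the repository above; every field name below is hypothetical.

```python
# Illustrative sketch only: the real execution-description language is
# being designed at https://github.com/charmoniumQ/execution-description
# and may look nothing like this. All field names here are hypothetical.
experiment = {
    "build": ["pip install -r requirements.txt"],                # how to build dependencies
    "data": {"input.csv": "https://example.org/data/input.csv"}, # where to find data
    "parameters": {"seed": 42, "n_iterations": 1000},            # configuration
    "invoke": "python run_experiment.py --seed {seed} --iters {n_iterations}",
}

def invoke_command(spec):
    """Render the experiment's invocation command from its parameters."""
    return spec["invoke"].format(**spec["parameters"])

print(invoke_command(experiment))
# python run_experiment.py --seed 42 --iters 1000
```

A spec like this is machine-readable: a tool could build the dependencies, fetch the data, and run the rendered command with no manual inspection of the code.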
The art and science of using quality control to understand and improve fMRI data
Designing and executing a good quality control (QC) process is vital to robust and reproducible science and is often taught through hands-on training. As fMRI research trends toward studies with larger sample sizes and highly automated processing pipelines, the people who analyze data are often distinct from those who collect and preprocess the data. While there are good reasons for this trend, it also means that important information about how data were acquired, and about their quality, may be missed by those working at later stages of these workflows. Similarly, an abundance of publicly available datasets, where people (not always correctly) assume others already validated data quality, makes it easier for trainees to advance in the field without learning how to identify problematic data. This manuscript is designed as an introduction for researchers who are already familiar with fMRI but who did not get hands-on QC training or who want to think more deeply about QC. This could be someone who has analyzed fMRI data but is planning to personally acquire data for the first time, or someone who regularly uses openly shared data and wants to learn how to better assess data quality. We describe why good QC processes are important, explain key priorities and steps for fMRI QC, and, as part of the FMRI Open QC Project, demonstrate some of these steps by using AFNI software and AFNI’s QC reports on an openly shared dataset. A good QC process is context dependent and should address whether data have the potential to answer a scientific question, whether any variation in the data has the potential to skew or hide key results, and whether any problems can potentially be addressed through changes in acquisition or data processing. Automated metrics are essential and can often highlight a possible problem, but human interpretation at every stage of a study is vital for understanding causes and potential solutions.
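As an example of the kind of automated metric the manuscript refers to, the sketch below computes temporal SNR (tSNR), a common fMRI quality measure. It runs on synthetic data and is a generic illustration, not part of AFNI's QC reports; the array shape and values are assumptions.

```python
import numpy as np

# Generic illustration of one common automated fMRI QC metric: temporal
# SNR, i.e., voxelwise mean over time divided by standard deviation over
# time. Synthetic data stand in for a real 4-D (x, y, z, time) scan.
rng = np.random.default_rng(0)
data = rng.normal(loc=1000.0, scale=20.0, size=(16, 16, 10, 200))

mean_img = data.mean(axis=-1)
std_img = data.std(axis=-1)
tsnr = np.where(std_img > 0, mean_img / std_img, 0.0)

print(f"median tSNR: {np.median(tsnr):.1f}")
# A metric like this can flag a possible problem (e.g., a run with
# unusually low tSNR), but a human still needs to inspect the data to
# understand the cause and decide how to address it.
```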
Identifying Individual Rain Events with a Dense Disdrometer Network
The use of point detectors to measure properties of rainfall is ubiquitous in the hydrological sciences. An early step in most rainfall analyses is the partitioning of the data record into “rain events.” This work utilizes data from a dense network of optical disdrometers to explore the effects of instrument sampling on this partitioning. It is shown that sampling variability may result in event identifications that can statistically magnify the differences between two similar data records. The data presented here suggest that these magnification effects are not equally impactful for all common definitions of a rain event.
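For illustration, one common family of event definitions closes an event once a dry spell exceeds a minimum gap. The sketch below implements such a definition in Python; the wet/dry threshold and gap length are hypothetical, not taken from this work. Small sampling differences near either parameter can flip event boundaries, which is the kind of sensitivity the study examines.

```python
import numpy as np

# Minimal sketch of one common "rain event" definition: consecutive wet
# observations separated by fewer than `min_gap` dry time steps belong
# to the same event. Threshold and gap values here are illustrative.
def partition_events(rain_rate, wet_threshold=0.1, min_gap=30):
    """Return (start, end) index pairs of rain events in a 1-D record."""
    wet = np.asarray(rain_rate) > wet_threshold
    events, start, dry_run = [], None, 0
    for i, is_wet in enumerate(wet):
        if is_wet:
            if start is None:
                start = i          # open a new event
            end, dry_run = i, 0
        elif start is not None:
            dry_run += 1
            if dry_run >= min_gap:  # dry spell long enough: close the event
                events.append((start, end))
                start = None
    if start is not None:
        events.append((start, end))
    return events
```

Because two nearby instruments sample slightly different rain, one record may dip just below `wet_threshold` where the other does not, splitting one event into two and magnifying apparent differences between the records.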
Hybrid Performance Golf Cart: Examining the Feasibility of Low-Budget Hybrid Engines
This thesis project concerns the design and performance of a hybrid engine created for a golf cart. The design is for a prototype golf cart and has not yet been refined to be reproducible in a commercial, production-focused setting. Our idea is to improve upon a former project completed by Steve Slovenski and George Thalheim under the guidance of Glenn Northey. This improved concept would create a golf cart whose range and performance exceed both electric and gas-powered golf carts, all while keeping the efficiency of the cart high and the overall environmental impact low. Doing so requires the addition of a combustion engine to an electric golf cart. This combustion engine charges the batteries of the cart at all times, with charging more prevalent while idling and more power devoted to speed while driving. This increased performance results in a far greater range and efficiency for the cart. Throughout the project we encountered many obstacles, including, but not limited to, finding a location on the cart for the petroleum engine and connecting both engines to the hybrid transmission. Fortunately, there was room to work with at the back of the golf cart, where golf clubs are usually stored, which allowed us to implement our design after minor changes to the cart shell. Upon completion of this project we have created a machine with greater range and better overall performance than either an electric or a gas-powered engine would have on its own, pending future iterations and experimentation with gearing and power balance between the electric and gas motors. Our initial tests of the electric-powered golf cart showed a top horsepower output of 9 HP, a top speed of 14.2 mph, and a battery range of 6.4 hours while running at full speed the entire time. The final product displayed a top horsepower of 6 HP. The decrease in horsepower is due to friction losses, gearing, and an improper balance of power caused by the alternator draining the battery. The final product had an increased top speed of 14.5 mph. The estimated battery range after running the golf cart at top speed decreased, but this is due to the alternator being stopped and the field current drawing charge from the batteries. However, since our design recharges the battery, the battery life increased drastically. Our tests determined that our design successfully increased the performance of the golf cart at both low and high speeds; however, we encountered unforeseen battery issues at high speeds, which can be eliminated with modifications to this initial prototype.
Multi-echo fMRI protocols
A collection of protocol PDFs for common multi-echo fMRI sequences.
The timing of transcranial magnetic stimulation relative to the phase of prefrontal alpha EEG modulates downstream target engagement
Background: The communication-through-coherence model posits that brain rhythms are synchronized across different frequency bands and that effective connectivity strength between interacting regions depends on their phase relation. Evidence to support the model comes mostly from electrophysiological recordings in animals, while evidence from human data is limited.
Methods: Here, an fMRI-EEG-TMS (fET) instrument capable of acquiring simultaneous fMRI and EEG during noninvasive single-pulse TMS applied to the dorsolateral prefrontal cortex (DLPFC) was used to test whether prefrontal EEG alpha phase moderates TMS-evoked top-down influences on the subgenual, rostral, and dorsal anterior cingulate cortex (ACC). Six runs (276 total trials) were acquired from each participant. Phase at each TMS pulse was determined post hoc using single-trial sorting. Results were examined in two independent datasets: healthy volunteers (HV) (n = 11) and patients with major depressive disorder (MDD) (n = 17), collected as part of an ongoing clinical trial.
Results: In both groups, TMS-evoked functional connectivity between the DLPFC and subgenual ACC (sgACC) depended on the EEG alpha phase. TMS-evoked DLPFC-to-sgACC fMRI-derived effective connectivity (EC) was modulated by EEG alpha phase in healthy volunteers, but not in the MDD patients. Top-down EC was inhibitory for TMS pulses delivered during the upward slope of the alpha wave relative to TMS timed to the downward slope. Prefrontal EEG alpha-phase-dependent effects on TMS-evoked fMRI BOLD activation of the rostral anterior cingulate cortex were detected in the MDD patient group, but not in the healthy volunteer group.
Discussion: These results demonstrate that TMS-evoked top-down influences vary as a function of the prefrontal alpha rhythm, and they suggest potential clinical applications in which TMS is synchronized to the brain's internal rhythms in order to more efficiently engage deep therapeutic targets.
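For readers unfamiliar with post-hoc phase estimation, the sketch below shows one generic way to read out alpha phase at event times: band-pass filter to the alpha band, take the analytic signal via the Hilbert transform, and sample the instantaneous phase at each pulse. The sampling rate, filter settings, and pulse times are assumptions; this is not the specific single-trial sorting pipeline used in the study.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

# Generic sketch of post-hoc phase estimation around an event. All
# numbers (sampling rate, band edges, pulse times) are assumptions.
fs = 1000.0                                    # sampling rate in Hz
t = np.arange(0, 10, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.default_rng(0).normal(size=t.size)

b, a = butter(4, [8 / (fs / 2), 12 / (fs / 2)], btype="band")
alpha = filtfilt(b, a, eeg)                    # alpha-band (8-12 Hz) signal
phase = np.angle(hilbert(alpha))               # instantaneous phase in radians

pulse_samples = np.array([1500, 3200, 7800])   # hypothetical TMS pulse times
print(phase[pulse_samples])                    # alpha phase at each pulse
```

Sorting trials by this phase (e.g., upward vs. downward slope of the alpha wave) is what allows phase-dependent effects to be tested after the fact.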
ME-ICA/tedana: 23.0.1
Release Notes
This release changes many internal aspects of the code, will make future improvements easier, and will hopefully make it easier for more people to understand their results and contribute. The denoising results should be identical. Right before releasing this new version, we released version 0.0.13, which is the last version of the older code. If you want to confirm the consistency of results, these are the two versions you should compare. Instructions for comparing results are below.
Key changes
- Large portions of the code were reorganized and modularized to make understanding the code easier and to facilitate future development.
- Breaking change: tedana can no longer be used to manually change component classifications. A separate program, ica_reclassify, can be used for this. This makes it easier for programs like Rica to output a list of component numbers to change and to then change them with ica_reclassify.
- The component classification process that designates components as "accepted" or "rejected" was completely rewritten so that every step in the process is modular and the inputs and outputs of every step are logged. The documentation includes descriptions of the newly outputted files and file contents.
- It is now possible to select different decision trees for component selection using the --tree option. The default tree is kundu and should replicate the current outputs. We also include minimal, a simpler tree intended to provide more consistent results across a study, but it still needs more testing and validation and may still change. Flow charts for these two options are here.
- Anyone can create their own decision tree. If one is using metrics that are already calculated, like kappa and rho, and doing greater/less-than comparisons, one can make a decision tree with a user-provided JSON file (see the sketch after these notes). More complex calculations might require editing the tedana Python code. This change also means any metric that has one value per component can be used in a selection process, which makes it possible to combine the multi-echo metrics used in tedana with other selection metrics, such as correlations to head motion. The documentation includes instructions on building and understanding this component selection process.
- Breaking change: No components are classified as "ignored". "Ignored" has long confused users. It was intended to identify components with such low variation that it wasn't worth deciding whether to lose a statistical degree of freedom by rejecting them. They were treated identically to accepted components. Now they are classified as "accepted" and tagged as "Low variance" or "Borderline Accept". These classification tags now appear in the HTML report of the results.
- A registry of all files outputted by tedana is now stored with the outputs. This allows for multiple file-naming methods and means internal and external programs that want to interact with the tedana outputs just need to load this file.
- Nearly 100% of the new code and 98% of all tedana code is covered by integration testing.
- Tedana Python package management now uses pyproject.toml.
- Minimum Python version is now 3.8 and minimum pandas version is now 2.0 (might cause problems if the same Python environment is used for packages that require older versions of pandas).
- More comprehensive documentation of changes is in pull request #756, and the full release notes are here: https://github.com/ME-ICA/tedana/releases/tag/23.0.0
Changes
- [REF] Decision Tree Modularization (#756) @jbteves @handwerkerd @n-reddy @marco7877 @tsal
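As referenced in the notes above, a user-provided JSON file can define decision-tree steps built from comparisons on per-component metrics such as kappa and rho. The snippet below is only a schematic illustration of that idea: the node structure and field names are hypothetical, and the actual schema is described in the tedana documentation.

```python
import json

# Schematic illustration only: a decision-tree step that rejects
# components whose rho exceeds their kappa. Field names here are
# hypothetical, NOT the real tedana schema; consult the tedana
# documentation for the actual user-provided JSON format.
node = {
    "step": "compare_metrics",
    "parameters": {
        "decide_comps": "all",   # which components this step examines
        "left": "rho",           # per-component metric on the left side
        "op": ">",               # greater/less-than comparison
        "right": "kappa",        # per-component metric on the right side
        "if_true": "rejected",   # classification when the comparison holds
        "if_false": "nochange",  # leave classification alone otherwise
    },
}
print(json.dumps(node, indent=2))
```

Because any metric with one value per component can feed such a comparison, a tree like this could, for example, also reject components whose time series correlate strongly with head motion.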