
    Compensation for Blur Requires Increase in Field of View and Viewing Time

    <div><p>Spatial resolution is an important factor for human pattern recognition. In particular, low resolution (blur) is a defining characteristic of low vision. Here, we examined the spatial (field of view) and temporal (stimulus duration) requirements for recognizing blurry objects. The spatial resolution of an image, such as a letter or face, was manipulated with a low-pass filter. In Experiment 1, studying the spatial requirement, observers viewed a fixed-size object through a window of varying size, which was repositioned until the object was identified (moving-window paradigm). The field-of-view requirement, quantified as the number of “views” (window repositions) needed for correct recognition, was obtained for three blur levels, including no blur. In Experiment 2, studying the temporal requirement, we determined the threshold viewing time, the stimulus duration yielding criterion recognition accuracy, at six blur levels, including no blur. For both letter and face recognition, we found that blur significantly increased the number of views, suggesting that a larger field of view is required to recognize blurry objects. We also found that blur significantly increased threshold viewing time, suggesting that longer temporal integration is necessary to recognize blurry objects. This temporal integration reflects the tradeoff between stimulus intensity and time. While humans excel at recognizing blurry objects, our findings suggest that compensating for blur requires an increased field of view and viewing time. The need for larger spatial and longer temporal integration when recognizing blurry objects may further challenge object recognition in low vision. Thus, interactions between blur and field of view should be considered when developing low vision rehabilitation or assistive aids.</p></div>
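The abstract describes manipulating spatial resolution with a low-pass filter. A minimal numpy sketch of one such manipulation is shown below; the choice of an ideal (hard-cutoff) frequency-domain filter and cycles-per-image units are assumptions of this sketch, since the excerpt does not specify the paper's exact filter shape.

```python
import numpy as np

def low_pass_filter(image, cutoff_cycles):
    """Keep spatial frequencies up to `cutoff_cycles` (cycles per image)
    by zeroing everything above the cutoff in the Fourier domain.
    An ideal filter is assumed here for simplicity."""
    n, m = image.shape
    fy = np.fft.fftfreq(n) * n   # vertical frequency, cycles per image
    fx = np.fft.fftfreq(m) * m   # horizontal frequency, cycles per image
    radius = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)
    mask = radius <= cutoff_cycles
    spectrum = np.fft.fft2(image)
    # The imaginary residue after filtering a real image is numerical noise.
    return np.real(np.fft.ifft2(spectrum * mask))
```

Lowering `cutoff_cycles` removes progressively more fine detail, which is the blur manipulation the experiments vary.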

    Mean threshold viewing time as a function of cutoff spatial-frequency for letter and face.

    <p>Mean threshold viewing time as a function of cutoff spatial-frequency for letter and face.</p>

    Contrast energy as a function of threshold viewing time (msec) for face and letter.

    <p>Contrast energy of each image was computed as follows: <i>E</i> = Σ<sub><i>i</i>=1</sub><sup><i>N</i></sup> Σ<sub><i>j</i>=1</sub><sup><i>M</i></sup> [(<i>L</i><sub><i>i</i>, <i>j</i></sub> − <i>L̄</i>)/<i>L̄</i>]<sup>2</sup>, where <i>L</i><sub><i>i</i>, <i>j</i></sub> is the luminance of the pixel [<i>i</i>, <i>j</i>] of an image of size <i>N</i> by <i>M</i>, and <i>L̄</i> is the mean luminance of the stimulus image. Each data point (solid dots) is the average threshold viewing time across participants (<i>n</i> = 7) for a given blur level. The solid lines are the best fits of a linear function to the data.</p>
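The contrast-energy definition in the caption (a sum over all N × M pixels of squared Weber contrasts relative to the mean luminance) can be sketched directly in numpy. The function name is ours, and any pixel-area scaling factor the paper may apply is omitted here.

```python
import numpy as np

def contrast_energy(image):
    """Sum of squared Weber contrasts over all pixels:
    E = sum_ij ((L_ij - mean(L)) / mean(L))**2."""
    mean_lum = image.mean()
    contrast = (image - mean_lum) / mean_lum  # Weber contrast per pixel
    return float(np.sum(contrast ** 2))
```

A uniform image has zero contrast energy; increasing blur reduces contrast energy because the low-pass filter removes high-frequency contrast.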

    Cutoff spatial-frequencies of the low-pass filter used for the study.

    <p>Cutoff spatial-frequencies of the low-pass filter used for the study.</p>

    The results from the mouse tracking moving window.

    <p>The number of views is plotted as a function of window size (°) for three blur levels: unfiltered (black dots), 1.2 c/letter or 3.2 c/face (gray dots), and 4 c/letter or 8 c/face (green dots). Stimuli were 26 uppercase letters <b>(a)</b> or 26 celebrity faces <b>(b)</b> with an image size of 4° of visual angle (the same format as in <a href="http://www.plosone.org/article/info:doi/10.1371/journal.pone.0162711#pone.0162711.g003" target="_blank">Fig 3</a>). Each data point (solid dots) is the average number of views across participants (<i>n</i> = 7). Data were fitted with the exponential-decay function (<a href="http://www.plosone.org/article/info:doi/10.1371/journal.pone.0162711#pone.0162711.e002" target="_blank">Eq 2</a>). The solid lines are the best fits of the model. Error bars represent ±1 SEM.</p>
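The caption describes fitting an exponential-decay function to number-of-views vs. window-size data. The paper's Eq 2 is not reproduced in this excerpt, so the generic form y = a·exp(−x/b) + c below is an assumption; the fitting routine is a simple numpy-only sketch that solves the linear parameters in closed form for each candidate decay constant.

```python
import numpy as np

def exp_decay(x, a, b, c):
    # Generic exponential-decay model (assumed form, not the paper's Eq 2).
    return a * np.exp(-x / b) + c

def fit_exp_decay(x, y, b_grid=np.linspace(0.1, 20, 400)):
    """For each candidate decay constant b, solve for a and c by linear
    least squares, then keep the b with the smallest squared error."""
    best = None
    for b in b_grid:
        basis = np.column_stack([np.exp(-x / b), np.ones_like(x)])
        (a, c), *_ = np.linalg.lstsq(basis, y, rcond=None)
        err = np.sum((basis @ np.array([a, c]) - y) ** 2)
        if best is None or err < best[0]:
            best = (err, a, b, c)
    return best[1:]  # (a, b, c)
```

This grid-plus-linear-solve approach avoids a nonlinear optimizer while staying robust for a single decay parameter.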

    Examples of traces of the gaze-contingent moving window and the total area viewed by the moving window.

    <p><b>(a)</b> These trace maps show example correct trials taken from letter or face recognition for the unfiltered condition and the lowest cutoff spatial-frequency (1.5 c/letter for letters and 5.2 c/face for faces). The traces of the moving window (yellow blobs) are superimposed on the target images. <b>(b)</b> The mean total viewed area (deg<sup>2</sup>), collapsed across participants (<i>n</i> = 8), is plotted as a function of cutoff spatial-frequency (blur level) for letters and faces. Area data were computed by aggregating the regions (e.g., yellow blobs in Fig 4A) of an object image viewed through the moving window (1.2°). Error bars represent ±1 SEM. Three asterisks (***) indicate <i>p</i> < 0.001.</p>

    Schematic diagrams of stimulus and task procedure.

    <p><b>(a)</b> A target letter was viewed through an aperture of varying size, ranging from 1.2° to 9° in diameter. The numbers in parentheses indicate the aperture sizes as percentages of the area of the circle containing the stimulus target. The size of the target image was 4° of visual angle. <b>(b)</b> At the beginning of each trial, participants fixated on a central dot on the display screen to ensure that the aperture always appeared at the center of the target image. The participant’s task was to identify the target stimulus as quickly and accurately as possible. Participants freely moved the viewing window over the target image via a gaze-contingent display until they could recognize the target, pressing the space bar as soon as they did. They then reported the target identity by clicking one of 26 letter images or 26 face names arranged as response keys forming a clock face. This measurement was repeated for three blur levels, including no blur.</p>

    SNBRFinder: A Sequence-Based Hybrid Algorithm for Enhanced Prediction of Nucleic Acid-Binding Residues

    <div><p>Protein-nucleic acid interactions are central to various fundamental biological processes. Automated methods capable of reliably identifying DNA- and RNA-binding residues in protein sequences are assuming ever-increasing importance. The majority of current algorithms rely on feature-based prediction, but their accuracy remains to be further improved. Here we propose a sequence-based hybrid algorithm, SNBRFinder (Sequence-based Nucleic acid-Binding Residue Finder), that merges a feature predictor, SNBRFinder<sup>F</sup>, and a template predictor, SNBRFinder<sup>T</sup>. SNBRFinder<sup>F</sup> was established using a support vector machine whose inputs include the sequence profile and other complementary sequence descriptors, while SNBRFinder<sup>T</sup> was implemented with a sequence alignment algorithm based on profile hidden Markov models to capture weakly homologous templates of the query sequence. Experimental results show that SNBRFinder<sup>F</sup> is clearly superior to the commonly used sequence profile-based predictor and that SNBRFinder<sup>T</sup> achieves performance comparable to structure-based template methods. Leveraging the complementary relationship between these two predictors, SNBRFinder substantially improves the prediction of both DNA- and RNA-binding residues. More importantly, the sequence-based hybrid prediction reaches performance competitive with our previous structure-based counterpart. Our extensive and stringent comparisons show that SNBRFinder has clear advantages over existing sequence-based prediction algorithms. The value of our algorithm is highlighted by an easy-to-use web server that is freely accessible at <a href="http://ibi.hzau.edu.cn/SNBRFinder" target="_blank">http://ibi.hzau.edu.cn/SNBRFinder</a>.</p></div>
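The abstract describes merging a feature-based predictor and a template-based predictor into one hybrid per-residue score. The actual SNBRFinder combination rule is not given in the abstract, so the sketch below is an illustrative assumption: a weighted average of the two scores, falling back to the feature score when no weakly homologous template was found. All names and the 0.5 threshold are hypothetical.

```python
def hybrid_score(feature_score, template_score, weight=0.5):
    """Merge one residue's feature-based and template-based scores.
    `template_score` is None when no template covered this residue,
    in which case only the feature predictor contributes."""
    if template_score is None:
        return feature_score
    return weight * feature_score + (1 - weight) * template_score

def predict_binding(feature_scores, template_scores, threshold=0.5):
    # Call a residue nucleic-acid-binding when its merged score
    # reaches the decision threshold.
    return [hybrid_score(f, t) >= threshold
            for f, t in zip(feature_scores, template_scores)]
```

The fallback captures why hybrids help: template methods are accurate when a homolog exists but silent otherwise, while feature methods always produce a score.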

    Additional file 2: Dataset 1. of Associating transcriptional modules with colon cancer survival through weighted gene co-expression network analysis

    WGCNA and survival analysis for the 3600 genes contained in the 11 co-expression modules. The kME and k.in with the parent module and the survival calculations for RFS and molecular subtypes (type 3 and type 4) are presented. (XLS 3080 kb)

    Data_Sheet_1_Global disability-adjusted life years and deaths attributable to child and maternal malnutrition from 1990 to 2019.doc

    <p>Background: Child and maternal malnutrition (CMM) causes heavy disability-adjusted life years (DALY) and deaths globally. Understanding the global burden associated with CMM is crucial for prioritizing prevention and control efforts. In this study, we performed a comprehensive analysis of the global DALY and deaths attributable to CMM from 1990 to 2019. Methods: The age-standardized CMM-related burden, including DALY and deaths from 1990 to 2019, was obtained from the Global Burden of Disease study 2019 (GBD 2019). Changing trends were described by the average annual percentage change (AAPC). The relationship between sociodemographic factors and the burden attributable to CMM was explored with a generalized linear model (GLM). Results: Globally, in 2019, the age-standardized DALY and death rates of CMM were 4,425.24/100,000 (95% UI: 3,789.81/100,000–5,249.55/100,000) and 44.72/100,000 (95% UI: 37.83/100,000–53.47/100,000), respectively. The age-standardized DALY rate (AAPC = −2.92%, 95% CI: −2.97% to −2.87%) and death rate (AAPC = −3.19%, 95% CI: −3.27% to −3.12%) showed significantly declining trends during the past 30 years. However, CMM still caused a heavy burden in the age group of
    Conclusion: Although the global burden attributable to CMM has significantly declined, it still causes a severe health burden annually. Strengthening interventions and addressing resource allocation in vulnerable populations and regions is necessary.</p>
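The abstract summarizes trends with the average annual percentage change (AAPC). GBD-style AAPC is a weighted average of segment-specific changes from joinpoint regression; the sketch below simplifies to a single log-linear segment (ordinary least squares on ln(rate) vs. year), which is an assumption, not the study's exact method.

```python
import math

def annual_percent_change(years, rates):
    """Estimate the annual percent change as 100 * (exp(slope) - 1),
    where slope comes from OLS of ln(rate) on calendar year.
    Single-segment simplification of joinpoint-based AAPC."""
    n = len(years)
    logs = [math.log(r) for r in rates]
    mean_year, mean_log = sum(years) / n, sum(logs) / n
    slope = (sum((y - mean_year) * (l - mean_log)
                 for y, l in zip(years, logs))
             / sum((y - mean_year) ** 2 for y in years))
    return 100.0 * (math.exp(slope) - 1.0)
```

A negative value, like the −2.92% DALY figure quoted above, indicates a declining age-standardized rate over the period.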