Compensation for Blur Requires Increase in Field of View and Viewing Time
Spatial resolution is an important factor for human pattern recognition. In particular, low resolution (blur) is a defining characteristic of low vision. Here, we examined the spatial (field of view) and temporal (stimulus duration) requirements for blurry object recognition. The spatial resolution of an image, such as a letter or face, was manipulated with a low-pass filter. In Experiment 1, which examined the spatial requirement, observers viewed a fixed-size object through a window of varying sizes that was repositioned until the object was identified (moving-window paradigm). The field-of-view requirement, quantified as the number of "views" (window repositions) needed for correct recognition, was obtained for three blur levels, including no blur. In Experiment 2, which examined the temporal requirement, we determined the threshold viewing time, the stimulus duration yielding criterion recognition accuracy, at six blur levels, including no blur. For both letter and face recognition, blur significantly increased the number of views, suggesting that a larger field of view is required to recognize blurry objects. Blur also significantly increased threshold viewing time, suggesting that longer temporal integration is necessary to recognize blurry objects. This temporal integration reflects the tradeoff between stimulus intensity and time. While humans excel at recognizing blurry objects, our findings suggest that compensating for blur requires an increased field of view and viewing time. The need for larger spatial and longer temporal integration when recognizing blurry objects may further challenge object recognition in low vision. Thus, interactions between blur and field of view should be considered when developing low vision rehabilitation or assistive aids.
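The blur manipulation described here, low-pass filtering at a given cutoff in cycles per object, can be illustrated with a minimal frequency-domain sketch. The hard (ideal) filter profile and the example cutoff value are assumptions for illustration, not the study's exact filter.

```python
import numpy as np

def low_pass_filter(image, cutoff_cycles_per_image):
    """Blur a square grayscale image by removing spatial frequencies above a cutoff.

    cutoff_cycles_per_image: e.g. 4 for "4 cycles per letter"; the ideal (hard)
    cutoff used here is an assumption, not the study's exact filter profile.
    """
    n = image.shape[0]
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    fy, fx = np.meshgrid(np.arange(n) - n // 2, np.arange(n) - n // 2, indexing="ij")
    radial_freq = np.sqrt(fx ** 2 + fy ** 2)        # cycles per image
    spectrum[radial_freq > cutoff_cycles_per_image] = 0
    return np.real(np.fft.ifft2(np.fft.ifftshift(spectrum)))

# Example: blur a 256 x 256 test image at 4 cycles/image
image = np.random.rand(256, 256)
blurred = low_pass_filter(image, cutoff_cycles_per_image=4)
```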
Mean threshold viewing time as a function of cutoff spatial-frequency for letter and face.
Contrast energy as a function of threshold viewing time (msec) for face and letter.
Contrast energy of each image was computed as E = Σ_{i=1..N} Σ_{j=1..M} ((L_{i,j} − L̄) / L̄)², where L_{i,j} is the luminance of pixel [i, j] of an image of size N by M, and L̄ is the mean luminance of the stimulus image. Each data point (solid dots) is an average of threshold viewing time across participants (n = 7) for a given blur level. The solid lines are the best fits of a linear function to the data.
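Under the definition above (squared Weber contrast summed over pixels), contrast energy can be computed as in the following sketch. Whether the study additionally weights the sum by pixel area in deg² is not stated in the caption, so the unweighted sum here is an assumption.

```python
import numpy as np

def contrast_energy(luminance):
    """E = sum over pixels of ((L_ij - L_mean) / L_mean)^2 for an N x M image.

    Any additional weighting by pixel area (deg^2) is omitted; this is the
    plain unweighted sum implied by the caption's symbols.
    """
    mean_lum = luminance.mean()
    weber_contrast = (luminance - mean_lum) / mean_lum
    return np.sum(weber_contrast ** 2)

# Example: a bright patch on a uniform gray background
img = np.full((128, 128), 50.0)
img[40:90, 40:90] = 80.0
print(contrast_energy(img))
```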
Application research on internal prestress monitoring of box girder based on FBG self-sensing steel strand
Internal prestressing is an effective prestressing approach for concrete box girders and plays a vital role in their construction, yet accurate measurement and long-term monitoring of internal prestress remain challenging in practical engineering. The FBG-based self-sensing steel strand (FSS) offers quasi-distributed monitoring of internal cable prestress and stable long-term monitoring. To examine its application in monitoring internal prestressing in an actual box girder project, and to reveal the distribution and loss of prestress in the box girder, the self-sensing steel strand was first characterized with a cyclic tensile test. Precast and cast-in-place beams were then selected to examine the prestressing process and long-term prestress monitoring of box girders tensioned from one end and from both ends. The results show that the FSS has a linearity of 0.84%, a hysteresis of 0.75%, and a repeatability of 1.87%, indicating good prestress monitoring performance. In a comparative application with an Elasto-Magnetic sensor and the reverse tension method, the instantaneous prestress measured by the FSS was essentially consistent with the Elasto-Magnetic sensor and, owing to friction at the anchorage, slightly lower than the value measured by the reverse tension method. The FSS therefore provides accurate prestress monitoring consistent with actual box girder engineering applications. In monitoring instantaneous prestress loss, the loss at the tension end of the box girder was mostly caused by internal shrinkage of the prestressed tendon, accounting for 81.3% on average; the mid-span and fixed end of the box girder, which are tensioned under single-ended tensioning, are affected by friction and concrete compression losses. Reducing the shrinkage of the steel strands and adopting double-ended tensioning therefore remain the keys to reducing instantaneous prestress loss. For long-term prestress loss, the measured values show some discontinuity; compared with the theoretical values, the prestress loss of box girders tensioned at both ends is positively and linearly correlated with the applied effective prestress and negatively correlated with channel curvature.
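The linearity, hysteresis, and repeatability figures quoted for the FSS are standard static calibration indices. The sketch below shows one conventional way to compute them from cyclic tensile data as percentages of full-scale output; the exact formulas used in the study are not given, so these definitions are assumptions.

```python
import numpy as np

def calibration_indices(load, loading_runs, unloading_runs):
    """Linearity, hysteresis, and repeatability as % of full-scale output (FSO).

    load:           1-D array of applied load steps (shared by all cycles)
    loading_runs:   (n_cycles, n_steps) sensor outputs during loading
    unloading_runs: (n_cycles, n_steps) sensor outputs during unloading
    """
    mean_up = loading_runs.mean(axis=0)
    mean_down = unloading_runs.mean(axis=0)
    fso = mean_up.max() - mean_up.min()

    # Linearity: largest deviation of the mean loading curve from its LS fit line
    slope, intercept = np.polyfit(load, mean_up, 1)
    linearity = np.max(np.abs(mean_up - (slope * load + intercept))) / fso * 100

    # Hysteresis: largest loading/unloading gap at the same load step
    hysteresis = np.max(np.abs(mean_up - mean_down)) / fso * 100

    # Repeatability: largest spread of loading outputs across cycles at any step
    repeatability = np.max(loading_runs.max(axis=0) - loading_runs.min(axis=0)) / fso * 100
    return linearity, hysteresis, repeatability
```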
The results from the mouse-tracking moving window.
The number of views is plotted as a function of window size (°) for three different blur levels: unfiltered (black dots), 1.2 c/letter or 3.2 c/face (gray dots), and 4 c/letter or 8 c/face (green dots). Stimuli were 26 uppercase letters (a) or 26 celebrity faces (b) with an image size of 4° of visual angle (the same format as in Fig 3). Each data point (solid dots) is an average of the number of views across participants (n = 7). Data were fitted with the exponential-decay function (Eq 2); the solid lines are the best fits of the model. Error bars represent ±1 SEM.
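The exact form of the paper's Eq 2 is not reproduced in this caption, but a generic decay-to-asymptote model of views versus window size can be fitted as in this sketch; the three-parameter form, the starting values, and the example data are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def exp_decay(window_size, amplitude, tau, asymptote):
    """Assumed three-parameter form: views decay exponentially toward a floor."""
    return amplitude * np.exp(-window_size / tau) + asymptote

# Illustrative (made-up) data: mean number of views at each window size (deg)
window_deg = np.array([1.2, 2.0, 3.0, 4.5, 6.0, 9.0])
mean_views = np.array([14.0, 8.5, 5.0, 3.2, 2.4, 2.0])

params, _ = curve_fit(exp_decay, window_deg, mean_views, p0=[15.0, 2.0, 2.0])
amplitude, tau, asymptote = params
print(f"decay constant = {tau:.2f} deg, asymptote = {asymptote:.2f} views")
```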
Examples of traces of the gaze-contingent moving window and the total area viewed by the moving window.
(a) These trace maps show example (correct) trials taken from letter and face recognition for the unfiltered condition and the lowest cutoff spatial-frequency (1.5 c/letter for letters and 5.2 c/face for faces). The traces of the moving window (yellow blobs) are superimposed on the target images. (b) The mean total viewed area (deg²), collapsed across participants (n = 8), is plotted as a function of cutoff spatial-frequency (blur level) for letter and face. Area data were computed by aggregating the regions (e.g., yellow blobs in Fig 4A) of an object image viewed through the moving window (1.2°). Error bars represent ±1 SEM. Three asterisks (***) indicate p < 0.001.
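The "total viewed area" measure, the union of all regions exposed by the 1.2° window, can be approximated by rasterizing the window footprint at each sampled position onto a grid, as in this sketch; the grid resolution, coordinate convention, and circular-footprint assumption are mine, not the paper's exact procedure.

```python
import numpy as np

def total_viewed_area(centers_deg, window_diam_deg=1.2,
                      image_size_deg=4.0, px_per_deg=50):
    """Area (deg^2) covered by the union of circular window footprints.

    centers_deg: (n, 2) array of window-center coordinates in degrees,
    relative to the lower-left corner of the image region (assumed convention).
    """
    n_px = int(image_size_deg * px_per_deg)
    ys, xs = np.mgrid[0:n_px, 0:n_px]
    radius_px = (window_diam_deg / 2) * px_per_deg
    covered = np.zeros((n_px, n_px), dtype=bool)
    for x_deg, y_deg in centers_deg:
        cx, cy = x_deg * px_per_deg, y_deg * px_per_deg
        covered |= (xs - cx) ** 2 + (ys - cy) ** 2 <= radius_px ** 2
    return covered.sum() / px_per_deg ** 2   # pixel count -> deg^2

# Example: three overlapping fixations of the 1.2 deg window
print(total_viewed_area(np.array([[1.0, 1.0], [1.3, 1.0], [3.0, 2.5]])))
```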
Cutoff spatial-frequencies of the low-pass filter used for the study.
Schematic diagrams of stimulus and task procedure.
(a) A target letter was viewed through an aperture of varying size, ranging from 1.2° to 9° in diameter. The numbers in parentheses indicate the aperture sizes as percentages of the area of the circle containing the stimulus target. The size of the target image was 4° of visual angle. (b) At the beginning of each trial, participants fixated a central dot on the display screen so that the aperture always appeared at the center of the target image. The participant's task was to identify the target stimulus as quickly and accurately as possible. Participants freely moved the viewing window over the target image via a gaze-contingent display until they could recognize the target, and pressed the space bar as soon as they recognized its identity. They then reported the target identity by clicking one of 26 letter-image or 26 face-name response keys arranged in a clock face. This measurement was repeated for three different blur levels, including no blur.
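The gaze-contingent window itself amounts to masking the target everywhere except inside an aperture centered at the current gaze (or mouse) position. A minimal numpy sketch of that masking step, independent of any particular display library and not tied to the study's actual stimulus code, is shown below.

```python
import numpy as np

def apply_aperture(target, center_xy, aperture_diam_px, background=0.5):
    """Show the target only inside a circular aperture at the gaze/mouse position.

    target:           2-D grayscale image (values in [0, 1])
    center_xy:        (x, y) pixel coordinates of the current gaze or mouse sample
    aperture_diam_px: window diameter in pixels (e.g. 1.2 deg converted to pixels)
    In an experiment this frame would be redrawn on every new gaze sample.
    """
    h, w = target.shape
    ys, xs = np.mgrid[0:h, 0:w]
    cx, cy = center_xy
    inside = (xs - cx) ** 2 + (ys - cy) ** 2 <= (aperture_diam_px / 2) ** 2
    frame = np.full_like(target, background)
    frame[inside] = target[inside]
    return frame
```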
Additional file 2: Dataset 1 of Associating transcriptional modules with colon cancer survival through weighted gene co-expression network analysis
WGCNA and survival analysis for the 3600 genes contained in the 11 co-expression modules. The kME and k.in with respect to the parent module and the survival calculations for RFS and molecular subtypes (type 3 and type 4) are presented. (XLS 3080 kb)
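The kME (module-eigengene connectivity) and k.in (intramodular connectivity) columns of this dataset correspond to standard WGCNA quantities. The sketch below computes them for one module in plain Python rather than with the WGCNA R package; the soft-thresholding power and unsigned-adjacency choice are assumptions, not the dataset's documented settings.

```python
import numpy as np

def module_connectivity(expr, power=6):
    """kME and k.in for the genes of a single co-expression module.

    expr:  (n_samples, n_genes) expression matrix restricted to the module
    power: soft-thresholding power for the adjacency (assumed value)
    kME  = correlation of each gene with the module eigengene (first PC)
    k.in = sum of adjacency weights to the other genes in the module
    """
    z = (expr - expr.mean(axis=0)) / expr.std(axis=0)
    u, s, _ = np.linalg.svd(z, full_matrices=False)
    eigengene = u[:, 0] * s[0]                     # sample-level first PC
    kme = np.array([np.corrcoef(z[:, g], eigengene)[0, 1]
                    for g in range(z.shape[1])])

    corr = np.corrcoef(z, rowvar=False)            # gene-gene correlations
    adjacency = np.abs(corr) ** power              # unsigned adjacency
    np.fill_diagonal(adjacency, 0)
    k_in = adjacency.sum(axis=1)                   # intramodular connectivity
    return kme, k_in
```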
SNBRFinder: A Sequence-Based Hybrid Algorithm for Enhanced Prediction of Nucleic Acid-Binding Residues
Protein-nucleic acid interactions are central to various fundamental biological processes. Automated methods capable of reliably identifying DNA- and RNA-binding residues in protein sequence are assuming ever-increasing importance. The majority of current algorithms rely on feature-based prediction, but their accuracy remains to be further improved. Here we propose a sequence-based hybrid algorithm, SNBRFinder (Sequence-based Nucleic acid-Binding Residue Finder), by merging a feature predictor, SNBRFinder^F, and a template predictor, SNBRFinder^T. SNBRFinder^F was established using a support vector machine whose inputs include the sequence profile and other complementary sequence descriptors, while SNBRFinder^T was implemented with a sequence alignment algorithm based on profile hidden Markov models to capture weakly homologous templates of the query sequence. Experimental results show that SNBRFinder^F was clearly superior to the commonly used sequence profile-based predictor and that SNBRFinder^T achieved performance comparable to structure-based template methods. Leveraging the complementary relationship between these two predictors, SNBRFinder appreciably improved the performance of both DNA- and RNA-binding residue prediction. More importantly, the sequence-based hybrid prediction reached performance competitive with our previous structure-based counterpart. Our extensive and stringent comparisons show that SNBRFinder has clear advantages over existing sequence-based prediction algorithms. The value of our algorithm is highlighted by an easy-to-use web server that is freely accessible at http://ibi.hzau.edu.cn/SNBRFinder.
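SNBRFinder's merging of the feature and template predictors is described only at a high level in the abstract. One plausible way such per-residue scores are hybridized is a simple weighted combination with a fallback when no template is found, sketched below; the mixing weight and fallback rule are assumptions, not the authors' published scheme.

```python
import numpy as np

def hybrid_scores(feature_probs, template_probs, template_found, weight=0.5):
    """Per-residue binding probability from a feature and a template predictor.

    feature_probs:  SVM-style probabilities, one per residue
    template_probs: probabilities transferred from the aligned template
    template_found: whether a usable weakly homologous template was detected
    weight:         mixing weight (assumed; the published scheme may differ)
    """
    feature_probs = np.asarray(feature_probs, dtype=float)
    if not template_found:
        return feature_probs                       # fall back to features alone
    template_probs = np.asarray(template_probs, dtype=float)
    return weight * feature_probs + (1 - weight) * template_probs

# Example: flag residues as nucleic-acid-binding at a 0.5 threshold
probs = hybrid_scores([0.2, 0.7, 0.9], [0.4, 0.8, 0.6], template_found=True)
is_binding = probs >= 0.5
```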