Ocular hypertension in myopia: analysis of contrast sensitivity
Purpose: We evaluated the evolution of contrast sensitivity reduction in patients with low to moderate myopia affected by ocular hypertension or glaucoma. We also evaluated the relationship between contrast sensitivity and the mean deviation of the visual field.
Materials and methods: 158 patients (316 eyes), aged between 38 and 57 years, were enrolled and divided into 4 groups: emmetropes, myopes, myopes with ocular hypertension (IOP ≥ 21 ± 2 mmHg), and myopes with glaucoma. All patients underwent history taking and a complete eye examination, tonometric curves with the Goldmann applanation tonometer, cup/disc ratio evaluation, gonioscopy with the Goldmann three-mirror lens, automated perimetry (Humphrey 30-2 full-threshold test), and contrast sensitivity evaluation with Pelli-Robson charts. A contrast sensitivity below 1.8 Logarithm of the Minimum Angle of Resolution (LogMAR) was considered abnormal.
Results: Contrast sensitivity was reduced in the group of myopes with ocular hypertension (1.788 LogMAR) and in the group of myopes with glaucoma (1.743 LogMAR), while it was preserved in the group of myopes (2.069 LogMAR) and in the group of emmetropes (1.990 LogMAR). We also found a strong correlation between contrast sensitivity reduction and the mean deviation of the visual field in myopes with glaucoma (correlation coefficient = 0.86) and in myopes with ocular hypertension (correlation coefficient = 0.78).
Conclusions: Contrast sensitivity assessment with the Pelli-Robson test should be performed in all patients with moderate myopia, ocular hypertension, and an optic disc suspicious for glaucoma, as it may be useful in the early diagnosis of the disease.
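As a minimal sketch of how the correlation reported in the Results could be computed (our illustration, not the study's analysis code; the values below are placeholders, not the study's data):

```python
# Pearson correlation between Pelli-Robson contrast sensitivity and
# visual-field mean deviation. All numbers here are hypothetical.
from scipy.stats import pearsonr

contrast_sensitivity = [1.65, 1.80, 1.50, 1.95, 1.70]   # log units, hypothetical
mean_deviation       = [-8.4, -3.1, -10.2, -1.5, -6.0]  # dB, hypothetical

r, p = pearsonr(contrast_sensitivity, mean_deviation)
print(f"correlation coefficient = {r:.2f} (p = {p:.3f})")
```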
Introduction
Contrast can be defined as the ability of the eye to discriminate differences in luminance between a stimulus and its background. Contrast sensitivity is the inverse of the minimal contrast necessary to make an object visible: the lower the contrast required, the greater the sensitivity, and vice versa.
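For concreteness, a common formalization (our notation; the article itself does not commit to a specific contrast metric) is the Weber definition:

```latex
C = \frac{L_s - L_b}{L_b}, \qquad S = \frac{1}{C_{\min}}
```

where $L_s$ and $L_b$ are the luminances of the stimulus and the background, $C_{\min}$ is the minimal contrast at which the stimulus is still visible, and charts such as the Pelli-Robson report $\log_{10} S$.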
Contrast sensitivity is a fundamental aspect of vision, together with visual acuity. The latter defines the smallest spatial detail that the subject can discriminate under optimal conditions, but it only provides information about the size of the stimulus that the eye is capable of perceiving. The evaluation of contrast sensitivity, instead, provides information not obtainable from the measurement of visual acuity alone, as it establishes the minimum difference in luminance that must exist between the stimulus and its background for the retina to be adequately stimulated to perceive the stimulus. The clinical methods of examining contrast sensitivity (gratings, luminance gradients, variable-contrast optotype charts and low-contrast optotype charts) relate the two parameters on which the ability to perceive an object distinctly depends, namely the difference in luminance between two adjacent areas and the spatial frequency, which is linked to the size of the object.
The measurement of contrast sensitivity becomes valuable in the diagnosis and follow-up of some important eye conditions such as glaucoma. Studies show that contrast sensitivity can be related to data obtained with visual field perimetry, especially to perimetric damage of the central area and of the optic nerve head.
Methane on a stepped surface: Dynamical insights on the dissociation of CHD3 on Pt(111) and Pt(211)
CHD3 dissociation on Pt(111): A comparison of the reaction dynamics based on the PBE functional and on a specific reaction parameter functional
Semi-empirical approach to the simulation of molecule-surface reaction dynamics
Catalysis is of extreme relevance in the production of everyday materials and plays a central role in many aspects of our lives. On the industrial level, metal-based catalysts are widely used to produce molecular hydrogen, which can be used as fuel, nitrogen, one of the building blocks in fertilizer synthesis, and other fundamental molecules. A better understanding of heterogeneously catalyzed processes would help in designing better and more efficient catalysts, but it is hard to achieve because of their high level of complexity. Molecular dissociation on metal surfaces is usually a multi-step process that is best investigated through a joint experimental and theoretical effort.

The comparison of molecular beam experiments with molecular dynamics simulations can help to improve the theoretical method used, density functional theory (DFT), in order to achieve chemical accuracy (i.e., errors smaller than 1 kcal/mol) for the reaction studied. As we show in the research reported in this thesis, being able to accurately compute the dissociation barriers for methane on metals like nickel and platinum is of great importance for making predictions about the most reactive sites on the surface, and it may, in the future, help improve industrial catalysts.
This work was made possible by financial support from the Nederlandse Organisatie voor Wetenschappelijk Onderzoek (NWO) and the European Research Council through an ERC-2013 Advanced Grant (No. 338580), and by computer time granted by NWO Exacte Wetenschappen, EW (NWO Physical Sciences Division).
Power-aware allocation of MBSFN subframes using Discontinuous Cell Transmission in LTE systems
In LTE and its evolutions, energy efficiency is a critical aspect, also in view of the dramatic traffic growth foreseen for the coming years. Cell Discontinuous Transmission (DTX) techniques can be important tools to achieve the needed efficiency in the networks; one possibility is to implement DTX by switching off the eNB in some subframes (MBSFN subframes) and not in others (where reference signals are also transmitted). Switching schedules in LTE are made for longer periods (e.g., 40/80 ms or even more). We present an algorithm that (i) estimates how many resources will be needed in a period, and (ii) determines how many resource blocks to activate in each subframe so as to maximize power efficiency. The problem is formulated as an integer linear program and solved heuristically. Numerical results show that the power saving is significant, close to the theoretical minimum at low loads, and that it comes with a tolerable extra delay.
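As a rough illustration of the allocation idea (a hedged sketch under our own assumptions; the function name, parameters, and the greedy rule are ours, not the paper's heuristic for the ILP):

```python
# Sketch: concentrate the estimated resource-block (RB) demand of a period
# into as few active subframes as possible, so the remaining subframes can
# be configured as MBSFN and the eNB transmitter switched off (Cell DTX).
import math

def allocate_subframes(demand_rb: int, period_sf: int, rb_per_sf: int,
                       max_mbsfn_per_frame: int = 6) -> list[int]:
    """Return active RBs per subframe; zeros mark candidate MBSFN subframes.

    demand_rb  -- estimated RBs needed over the whole period
    period_sf  -- subframes in the scheduling period (e.g., 40 or 80)
    rb_per_sf  -- RB capacity of one subframe (bandwidth dependent)
    """
    # In LTE at most 6 of the 10 subframes per frame can be MBSFN, so some
    # subframes must stay on anyway (reference signals, sync, paging).
    always_on = period_sf - (period_sf // 10) * max_mbsfn_per_frame
    # Fewest active subframes that can still carry the demand:
    active = min(period_sf, max(always_on, math.ceil(demand_rb / rb_per_sf)))
    # Spread the demand evenly over the active subframes:
    base, extra = divmod(demand_rb, active)
    return [base + (1 if i < extra else 0) if i < active else 0
            for i in range(period_sf)]

alloc = allocate_subframes(demand_rb=500, period_sf=40, rb_per_sf=50)
print(alloc, "-> switched-off subframes:", alloc.count(0))
```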
A context-based approach for partitioning big data
In recent years, the amount of available data has kept growing at a fast rate, and it is therefore crucial to be able to process it efficiently. The level of parallelism in tools such as Hadoop or Spark is determined, among other things, by the partitioning applied to the dataset. A common method is to split the data into chunks based on the number of bytes. While this approach may work well for text-based batch processing, there are a number of cases where the dataset contains structured information, such as time or spatial coordinates, and one may be interested in exploiting such structure to improve the partitioning. This can have an impact on the processing time and increase the overall resource usage efficiency. This paper explores an approach based on the notion of context, such as temporal or spatial information, for partitioning the data. We design a context-based multi-dimensional partitioning technique that divides an n-dimensional space into splits by considering the distribution of each contextual dimension in the dataset. We tested our approach on a dataset from a tourism scenario, and our experiments show that we are able to improve the efficiency of resource usage.
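One way such a technique could be realized is sketched below (our illustration, not the paper's implementation): each contextual dimension is split at its empirical quantiles, so the boundaries follow the data distribution rather than byte counts, and the per-dimension slices combine into n-dimensional grid cells used as partitions.

```python
import numpy as np

def context_partitions(records: np.ndarray, splits_per_dim: int) -> np.ndarray:
    """records: shape (n_records, n_dims); columns might be (timestamp, lat, lon).
    Returns one partition id per record."""
    n_records, n_dims = records.shape
    cell = np.zeros(n_records, dtype=np.int64)
    for d in range(n_dims):
        # Boundaries follow the empirical distribution of dimension d,
        # not a uniform byte-size split of the file.
        edges = np.quantile(records[:, d], np.linspace(0, 1, splits_per_dim + 1))
        idx = np.searchsorted(edges, records[:, d], side="right") - 1
        idx = np.clip(idx, 0, splits_per_dim - 1)
        cell = cell * splits_per_dim + idx
    return cell

data = np.random.rand(1000, 2)                   # hypothetical (time, space) values
print(np.bincount(context_partitions(data, 4)))  # records per grid cell
```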
What is the role of context in fair group recommendations?
We investigate the role played by context, i.e., the situation the group is currently experiencing, in the design of a system that recommends sequences of activities as a multi-objective optimization problem, where the satisfaction of the group and the available time interval are two of the functions to be optimized. In particular, we highlight that the dynamic evolution of the group can be the key contextual feature to consider in order to produce fair suggestions.
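A toy formulation of the optimization problem (ours, not the paper's system; here the time interval is simplified into a feasibility constraint, and all data values are hypothetical):

```python
# Enumerate candidate activity sequences, treat group satisfaction and total
# duration as the two objectives, and pick the most satisfying sequence that
# fits the available time interval.
from itertools import permutations

# activity -> (group satisfaction, duration in minutes); hypothetical data
activities = {"museum": (0.8, 120), "park": (0.6, 60), "lunch": (0.9, 90)}
available_minutes = 200

def satisfaction(seq):
    return sum(activities[a][0] for a in seq)

def duration(seq):
    return sum(activities[a][1] for a in seq)

feasible = [seq for n in range(1, len(activities) + 1)
            for seq in permutations(activities, n)
            if duration(seq) <= available_minutes]
best = max(feasible, key=satisfaction)
print(best, satisfaction(best), duration(best))
```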
Failed Surgery for Patellar Tendinopathy in Athletes: Midterm Results of Further Surgical Management
Background: Tendon injuries are commonly seen in sports medicine practice. Many elite players involved in high-impact activities develop patellar tendinopathy (PT) symptoms. Of them, a small percentage will develop refractory PT and need to undergo surgery. In some of these patients, surgery does not resolve the symptoms. Purpose: To report the clinical results in a cohort of athletes who underwent further surgery after failure of primary surgery for PT. Study Design: Case series; Level of evidence, 4. Methods: A total of 22 athletes who had undergone revision surgery for failed surgical management of PT were enrolled in the present study. Symptom severity was assessed with the Victorian Institute of Sport Assessment Scale for Patellar Tendinopathy (VISA-P) upon admission and at the final follow-up. Time to return to training, time to return to competition, and complications were also recorded. Results: The mean age of the athletes was 25.4 years, and the mean symptom duration from the index intervention was 15.3 months. At a mean follow-up of 30.0 ± 4.9 months, the VISA-P score improved 27.8 points (P < .0001). The patients returned to training within a mean of 9.2 months. Fifteen patients (68.2%) returned to competition within a mean of 11.6 months. Of these 15 patients, a further 2 had decreased their performance, and 2 more had abandoned sports participation by the final follow-up. The overall rate of complications was 18.2%. One patient (4.5%) had a further revision procedure. Conclusion: Revision surgery was feasible and effective in patients in whom PT symptoms persisted after previous surgery for PT, achieving a statistically significant and clinically relevant improvement in the VISA-P score as well as an acceptable rate of return to sport at a follow-up of 30 months.
What makes spatial data big? A discussion on how to partition spatial data
The amount of available spatial data has significantly increased in recent years, so that traditional analysis tools have become inadequate to manage it effectively. Therefore, many attempts have been made to define extensions of existing MapReduce tools, such as Hadoop or Spark, with spatial capabilities in terms of data types and algorithms. Such extensions are mainly based on the partitioning techniques implemented for textual data, where size is given in terms of the number of occupied bytes. However, spatial data are characterized by other features that describe their size, such as the number of vertices or the MBR size of geometries, which greatly affect the performance of operations, like the spatial join, during data analysis. As a result, the use of traditional partitioning techniques prevents one from fully exploiting the benefits of the parallel execution provided by a MapReduce environment. This paper extensively analyses the problem, considering the spatial join operation as a use case and performing both a theoretical and an experimental analysis of it. Moreover, it provides a solution based on a different partitioning technique, which splits complex or extensive geometries. Finally, we validate the proposed solution by means of experiments on synthetic and real datasets.
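The splitting idea could look roughly like the following (our illustrative sketch, not the paper's code; the function name and the vertex-count threshold are our assumptions): a geometry whose vertex count exceeds a threshold is clipped into two halves along the longer side of its MBR, recursively, so that no single partition receives a disproportionately "big" geometry.

```python
from shapely.geometry import Point, Polygon, box

def split_geometry(geom, max_vertices: int = 100) -> list:
    """Recursively clip a polygon until every piece has few enough vertices."""
    if len(geom.exterior.coords) <= max_vertices:
        return [geom]
    minx, miny, maxx, maxy = geom.bounds
    if maxx - minx >= maxy - miny:            # cut across the longer MBR side
        midx = (minx + maxx) / 2
        halves = [box(minx, miny, midx, maxy), box(midx, miny, maxx, maxy)]
    else:
        midy = (miny + maxy) / 2
        halves = [box(minx, miny, maxx, midy), box(minx, midy, maxx, maxy)]
    pieces = []
    for h in halves:
        part = geom.intersection(h)
        # intersection may return multi-part results; keep polygonal parts
        for g in getattr(part, "geoms", [part]):
            if g.geom_type == "Polygon" and not g.is_empty:
                pieces.extend(split_geometry(g, max_vertices))
    return pieces

circle = Point(0, 0).buffer(10)   # a many-vertex polygon approximating a circle
parts = split_geometry(circle, max_vertices=20)
print(len(parts), [len(p.exterior.coords) for p in parts])
```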
Slowed-Down Rehabilitation Following Percutaneous Repair of Achilles Tendon Rupture
Background: Following percutaneous repair of acute Achilles tendon (AT) ruptures, early postoperative weightbearing is advocated; however, it is debatable how aggressive rehabilitation should be. We compared the clinical and functional outcomes in 2 groups of patients who followed either our "traditional" or a "slowed-down" rehabilitation program after percutaneous surgical repair. Methods: Sixty patients were prospectively recruited to a slowed-down (29 patients) or a traditional (31 patients) rehabilitation program. Both groups were allowed immediate weightbearing postoperatively; a removable brace with 5 heel wedges was applied at 2 weeks. In the slowed-down group, 1 wedge was removed after 4 weeks, and gradual removal of the boot took place after 4 wedges had been kept for 4 weeks. In the traditional group, 1 wedge was removed every 2 weeks, with removal of the boot after 2 wedges had been kept for 2 weeks. The AT Resting Angle (ATRA) evaluated tendon elongation. Patient-reported functional outcomes were assessed using the AT Rupture Score (ATRS). The difference in calf circumference and the isometric plantarflexion strength of the gastrocnemius-soleus complex were evaluated. Results: At the 12-month follow-up, both ATRA and ATRS were more favorable in the slowed-down group. The isometric strength and the calf circumference were more similar to those of the contralateral leg in the slowed-down group than in the traditional one. Conclusion: Following percutaneous repair of acute Achilles tendon ruptures, patients undergoing slowed-down rehabilitation performed better than those following the traditional program. These conclusions must be considered within the limitations of the present study.