A model for simulating dynamic problems of economic development
At head of title: Economic dynamics. "July 1960." Includes bibliographic references (p. 198-203).
Donor/Acceptor Mixed Self-Assembled Monolayers for Realising a Multi-Redox-State Surface
Mixed molecular self-assembled monolayers (SAMs) on gold, based on two types of electroactive molecules, that is, electron-donor (ferrocene) and electron-acceptor (anthraquinone) molecules, are prepared as an approach to realise surfaces exhibiting multiple accessible redox states. The SAMs are investigated in different electrolyte media. The nature of these media has a strong impact on the types of redox processes that take place and on the redox potentials. Under optimised conditions, surfaces with three redox states are achieved. Such states are accessible in a relatively narrow potential window in which the SAMs on gold are stable. This communication elucidates the key challenges in fabricating bicomponent SAMs as electrochemical switches.
We acknowledge the financial support of the EU projects ERC StG 2012-306826 e-GAMES, ITN iSwitch (GA no. 642196), CIG (PCIG10-GA-2011-303989), ACMOL (GA no. 618082), the Networking Research Center of Bioengineering, Biomaterials and Nanomedicine (CIBER-BBN), the DGI (Spain) with project BE-WELL CTQ2013-40480-R, and the Generalitat de Catalunya with project 2014-SGR-17. The authors also acknowledge financial support from the Spanish Ministry of Economy and Competitiveness, through the "Severo Ochoa" Programme for Centres of Excellence in R&D (SEV-2015-0496). N.C. acknowledges the RyC Program. J.C-M. and E.M. are enrolled in the Materials Science PhD program of UAB.
Peer reviewed
Low Speed Rear End Automobile Collisions and Whiplash Injury, the Biomechanical Approach
The extent of injury in low speed rear end collisions is controversial. In many cases, the impact speed of the striking vehicle is low, neither car shows much if any post-collision damage, and at the scene the occupant of the struck vehicle appears uninjured. Yet many of these incidents progress to lawsuits, sometimes with very significant damage and injury claims. In testimony, the Plaintiff argues that the collision was significant, while the Defendant describes the collision as minor. A biomechanical approach, which addresses the forces in the collision and the resulting forces and kinematics of the occupant, can help to resolve some of these issues. In the following, the process of a biomechanical analysis is described using a specific example. A discussion of how courts have viewed this type of testimony is then presented.
Analysis of interdecadal variability of temperature extreme events in Argentina applying EVT
The frequency of occurrence of temperature extreme events has changed throughout the last century: significant positive trends in warm nights and negative trends in cold nights have been observed all over the world. In Argentina, the probability of occurrence of warm annual extremes of maximum temperature has decreased in the last decades, while there has been an increase in warm annual extremes of minimum temperature. The main objective of this paper is to evaluate observed interdecadal changes in the distribution of temperature events that exceed a fixed threshold in five meteorological stations from Argentina over the period 1941-2000, by applying extreme value theory (EVT). The availability of daily data allows fitting a generalized Pareto distribution (GPD) to daily temperature anomalies over the 90th or below the 10th percentile, in order to estimate return values of extreme events. Daily temperature anomalies are divided into three consecutive and non-overlapping subperiods of 20 years. The GPD is fitted to each subperiod independently and the return values estimated in each subperiod are compared. Results show that there is a decrease in the intensity of warm extreme events during the whole period, together with an increase in their frequency of occurrence during the last 20 years of the twentieth century. Cold extremes also show a decrease in their intensity. However, changes in their frequency of occurrence are not so consistent between the different stations analyzed.
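The peaks-over-threshold procedure the abstract describes can be sketched as follows. Everything here is illustrative: the anomalies are synthetic (the station records are not shown), and the method-of-moments GPD fit is a simple stand-in for whatever estimator the authors actually used.

```python
import numpy as np

rng = np.random.default_rng(42)
# synthetic daily temperature anomalies standing in for one
# 20-year station subperiod (hypothetical data)
anoms = rng.normal(0.0, 2.5, size=20 * 365)

u = np.quantile(anoms, 0.90)         # 90th-percentile threshold
exc = anoms[anoms > u] - u           # exceedances over the threshold
zeta = exc.size / anoms.size         # exceedance rate per observation

# method-of-moments GPD fit: from mean m and variance v of the
# exceedances, shape xi = 0.5*(1 - m^2/v), scale = 0.5*m*(1 + m^2/v)
m, v = exc.mean(), exc.var()
xi = 0.5 * (1.0 - m * m / v)
sigma = 0.5 * m * (1.0 + m * m / v)

def return_level(m_years, n_per_year=365):
    """m-year return level implied by the fitted GPD."""
    n = m_years * n_per_year
    if abs(xi) < 1e-8:               # xi -> 0 limit (exponential tail)
        return u + sigma * np.log(n * zeta)
    return u + (sigma / xi) * ((n * zeta) ** xi - 1.0)

# comparing this quantity across subperiods is the paper's core idea
rl20 = float(return_level(20))
```

Fitting the same model to each 20-year subperiod and comparing `rl20` across them reproduces the comparison of return values described above.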
Semi-supervised machine learning techniques for classification of evolving data in pattern recognition
The amount of data recorded and processed over recent years has increased exponentially. To create intelligent systems that can learn from this data, we need to be able to identify patterns hidden in the data itself, learn these patterns, and predict future results based on our current observations. If we think about this system in the context of time, the data itself evolves and so does the nature of the classification problem. As more data become available, different classification algorithms become suitable for a particular setting. At the beginning of the learning cycle, when we have a limited amount of data, online learning algorithms are more suitable. When truly large amounts of data become available, we need algorithms that can handle large amounts of data that might be only partially labeled, as a result of the human-labeling bottleneck in the learning pipeline.
An excellent example of evolving data is gesture recognition, and it is present throughout our work. We need a gesture recognition system to work fast and with very few examples at the beginning. Over time, we are able to collect more data and the system can improve. As the system evolves, the user expects it to work better and not to have to become involved when the classifier is unsure about decisions. This latter situation produces additional unlabeled data. Another example of an application is medical classification, where experts’ time is a rare resource and the amount of received and labeled data disproportionately increases over time.
Although the process of data evolution is continuous, we identify three main discrete areas of contribution in different scenarios. When the system is very new and not enough data are available, online learning is used to learn after every single example and to capture the knowledge very fast. With increasing amounts of data, offline learning techniques are applicable. Once the amount of data is overwhelming and the teacher cannot provide labels for all the data, we have another setup that combines labeled and unlabeled data. These three setups define our areas of contribution; and our techniques contribute in each of them with applications to pattern recognition scenarios, such as gesture recognition and sketch recognition.
An online learning setup significantly restricts the range of techniques that can be used. In our case, the selected baseline technique is the Evolving TS-Fuzzy Model. The semi-supervised aspect we use is a relation between rules created by this model. Specifically, we propose a transductive similarity model that utilizes the relationship between generated rules based on their decisions about a query sample during the inference time. The activation of each of these rules is adjusted according to the transductive similarity, and the new decision is obtained using the adjusted activation. We also propose several new variations to the transductive similarity itself.
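A minimal numpy sketch of this inference-time adjustment, not the thesis's actual model: the Gaussian rule memberships, the consensus-based similarity `exp(-|output - consensus|)`, and all values are illustrative assumptions.

```python
import numpy as np

def rule_activations(x, centers, widths):
    # Gaussian membership of query x to each rule's center
    d2 = ((x - centers) ** 2).sum(axis=1)
    return np.exp(-d2 / (2.0 * widths ** 2))

def transductive_adjust(acts, rule_outputs):
    # One simple notion of transductive similarity between rules:
    # weight each rule's activation by how closely its output agrees
    # with the activation-weighted consensus for this query.
    consensus = np.average(rule_outputs, weights=acts)
    sim = np.exp(-np.abs(rule_outputs - consensus))
    adj = acts * sim
    return adj / adj.sum()

# toy TS-style model: three rules in a 2-D input space
centers = np.array([[0.0, 0.0], [1.0, 1.0], [4.0, 4.0]])
widths = np.array([1.0, 1.0, 1.0])
rule_outputs = np.array([0.0, 1.0, 5.0])   # each rule's crisp output

x = np.array([0.6, 0.6])                   # query sample
acts = rule_activations(x, centers, widths)
adj = transductive_adjust(acts, rule_outputs)
y = float(adj @ rule_outputs)              # inference with adjusted activations
```

The far-away third rule contributes almost nothing, and the adjustment further favors rules whose outputs agree, which is the effect described above.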
Once the amount of data increases, we are not limited to the online learning setup, and we can take advantage of the offline learning scenario, which normally performs better than the online one because it is independent of sample ordering and optimizes globally over all samples. We use generative methods to obtain data outside of the training set. Specifically, we aim to improve the previously mentioned TS Fuzzy Model by incorporating semi-supervised learning in the offline setup without unlabeled data. We use the Universum learning approach and have developed a method called UFuzzy. This method relies on artificially generated examples with high uncertainty (the Universum set), and it adjusts the cost function of the algorithm to force the decision boundary to be close to the Universum data. We were able to confirm the hypothesis behind the design of the UFuzzy classifier, namely that Universum learning can improve the TS Fuzzy Model, and achieved improved performance on more than two dozen datasets and applications.
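The Universum idea can be illustrated on a toy logistic model rather than the TS Fuzzy Model itself; the Gaussian classes, the pairwise-averaging Universum construction, and the penalty weight `lam` are assumptions made for this sketch only.

```python
import numpy as np

rng = np.random.default_rng(0)

# two well-separated Gaussian classes (synthetic stand-in data)
X_pos = rng.normal([2, 2], 0.5, size=(50, 2))
X_neg = rng.normal([-2, -2], 0.5, size=(50, 2))
X = np.vstack([X_pos, X_neg])
y = np.array([1.0] * 50 + [-1.0] * 50)

# Universum set: artificial in-between examples built by averaging
# random pairs drawn from opposite classes (one common construction)
U = (X_pos[rng.integers(0, 50, 60)] + X_neg[rng.integers(0, 50, 60)]) / 2

w = np.zeros(2)
b = 0.0
lr, lam = 0.1, 0.5
for _ in range(300):
    # logistic loss gradient on the labeled data
    margins = y * (X @ w + b)
    g = -y / (1.0 + np.exp(margins))
    gw = (g[:, None] * X).mean(axis=0)
    gb = g.mean()
    # Universum term: squared penalty on |f(u)|, pulling the
    # decision boundary close to the Universum points
    fu = U @ w + b
    gw += lam * (fu[:, None] * U).mean(axis=0)
    gb += lam * fu.mean()
    w -= lr * gw
    b -= lr * gb

acc = float((np.sign(X @ w + b) == y).mean())
```

The extra gradient term is the cost-function adjustment mentioned above: it leaves separable labeled data correctly classified while anchoring the boundary near the high-uncertainty region.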
With increasing amounts of data, we use the last scenario, in which the data comprise both labeled data and additional unlabeled data. This setting is one of the most common ones for semi-supervised learning problems. In this part of our work, we aim to improve the widely popular techniques of self-training (and its successor, help-training), which are both meta-frameworks over regular classifier methods but require a probabilistic representation of the output, which can be hard to obtain in the case of discriminative classifiers. Therefore, we develop a new algorithm that uses the modified active learning technique Query-by-Committee (QbC) to sample data with high certainty from the unlabeled set and subsequently embed them into the original training set. Our new method achieves increased performance over both a range of datasets and a range of classifiers.
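One round of this QbC-style certainty sampling can be sketched as follows. The nearest-centroid base learner, the unanimity criterion, and the two-blob data are all illustrative assumptions standing in for the thesis's actual classifiers and selection rule.

```python
import numpy as np

rng = np.random.default_rng(1)

def centroid_predict(Xtr, ytr, Xq):
    # nearest-class-centroid classifier, a simple stand-in
    # for any base learner wrapped by the meta-framework
    c0 = Xtr[ytr == 0].mean(axis=0)
    c1 = Xtr[ytr == 1].mean(axis=0)
    d0 = ((Xq - c0) ** 2).sum(axis=1)
    d1 = ((Xq - c1) ** 2).sum(axis=1)
    return (d1 < d0).astype(int)

# small labeled seed set + larger unlabeled pool from two blobs
XL = np.vstack([rng.normal(0, 0.4, (5, 2)), rng.normal(3, 0.4, (5, 2))])
yL = np.array([0] * 5 + [1] * 5)
XU = np.vstack([rng.normal(0, 0.4, (100, 2)), rng.normal(3, 0.4, (100, 2))])

# committee via stratified bootstrap resampling of the labeled set;
# an unlabeled point is "high certainty" when the committee is unanimous
votes = []
for _ in range(7):
    i0 = rng.integers(0, 5, 5)        # resample within class 0
    i1 = 5 + rng.integers(0, 5, 5)    # resample within class 1
    idx = np.concatenate([i0, i1])
    votes.append(centroid_predict(XL[idx], yL[idx], XU))
votes = np.array(votes)
agree = (votes == votes[0]).all(axis=0)
X_new, y_new = XU[agree], votes[0][agree]

# embed the confidently labeled points into the training set
# for the next self-training round
X_aug = np.vstack([XL, X_new])
y_aug = np.concatenate([yL, y_new])
```

Unlike plain self-training, no probabilistic output is needed: committee disagreement itself serves as the certainty measure, which is what makes the approach usable with discriminative classifiers.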
These three works are connected by gradually relaxing the constraints on the learning setting in which we operate. Although our main motivation behind the development was to increase performance in various real-world tasks (gesture recognition, sketch recognition), we formulated our work as general methods in such a way that they can be used outside a specific application setup, the only restriction being that the underlying data evolve over time. Each of these methods can successfully exist on its own. The best setting in which they can be used is a learning problem where the data evolve over time and it is possible to discretize the evolutionary process.
Overall, this work represents a significant contribution to the areas of both semi-supervised learning and pattern recognition. It presents new state-of-the-art techniques that outperform baseline solutions, and it opens up new possibilities for future research.
Monocyte Chemoattractant Protein 1 is a Prognostic Marker in ANCA-Associated Small Vessel Vasculitis
Background. The anti-neutrophil cytoplasmic autoantibody (ANCA)-associated small vessel vasculitides (ASVV) are relapsing-remitting inflammatory disorders involving various organs, such as the kidneys. Monocyte chemoattractant protein 1 (MCP-1) has been shown to be locally up-regulated in glomerulonephritis, and recent studies have pointed to MCP-1 as a promising marker of renal inflammation. Here we measure urinary cytokine levels in different phases of disease, exploring the possible prognostic value of MCP-1, together with interleukin 6 (IL-6), interleukin 8 (IL-8) and immunoglobulin M (IgM). Methods. MCP-1, IL-6 and IL-8 were measured using commercially available ELISA kits, whereas IgM in the urine was measured by an in-house ELISA. Results. The MCP-1 levels in urine were significantly higher in patients in the stable phase of the disease compared with healthy controls. Patients in the stable phase with subsequent adverse events had significantly higher MCP-1 values than patients without. MCP-1 and IgM both tended to be higher in patients relapsing within three months, an observation that, however, did not reach statistical significance. Urinary levels of IL-6 correlated with relapse tendency, and IL-8 was associated with disease outcome. Conclusions. Patients with ASVV have raised cytokine levels in the urine compared to healthy controls, even during remission. Raised MCP-1 levels are associated with poor prognosis and possibly also with relapse tendency. The association with poor prognosis was stronger for U-MCP-1 than for conventional markers of disease such as CRP, BVAS, and ANCA, and also stronger than for candidate markers like U-IgM and U-IL-8. We thus consider U-MCP-1 to have promising potential as a prognostic marker in ASVV.
