The tolerance premium as a constitutional element of the protective welfare state
Indeed, "[e]very possible general answer has already been given to the question of how much should be transferred to the poor" (Olson, 1986). The reason for this multiplicity of answers can mostly be located in the range of exogenously adopted views on the trade-off between equity and efficiency that is supposed to arise from every specific set of social policy measures. If, however, the welfare state is analyzed endogenously, that is, from a constitutional economic standpoint, three clear-cut motives for its existence emerge: the insurance motive, the self-protection motive, and the interpretation of charity as a public good. Yet even if these motives escape the orthodox trade-off, they do not fully capture the reciprocal conditionality of the productive and the protective state. A remedy can be the tolerance premium (Homann/Pies, 1996), representing the collective measures and payments attributable to the productive state that make every individual's consent to the productive state possible in the first place. Traces of arrangements resembling such a tolerance premium can also be found in the work of James Buchanan. They all, however, essentially remain trapped in an apology of the status quo and thus in practice re-enter the orthodox dualism of efficiency and equity of welfare measures. Only a thoroughly conflict-economic setup accounts for the dynamics initiated by the tolerance premium, but at the price of abandoning Buchanan's two-stage theory: the protective and productive states are so interdependent that they cannot be separated and must continually arise from simultaneously taken decisions. This approach relates in practice to other approaches such as power resource theory, which interprets the welfare state as ultimately a reflection of power relations, e.g. in collective bargaining.
A Model of the Ventral Visual System Based on Temporal Stability and Local Memory
The cerebral cortex is a remarkably homogeneous structure, suggesting a rather generic computational machinery. Indeed, under a variety of conditions, functions attributed to specialized areas can be supported by other regions. However, a host of studies have laid out an ever more detailed map of functional cortical areas. This leaves us with the puzzle of whether different cortical areas are intrinsically specialized, or whether they differ mostly by their position in the processing hierarchy and their inputs but apply the same computational principles. Here we show that the computational principle of optimal stability of sensory representations, combined with local memory, gives rise to a hierarchy of processing stages resembling the ventral visual pathway when it is exposed to continuous natural stimuli. Early processing stages show receptive fields similar to those observed in the primary visual cortex. Subsequent stages are selective for increasingly complex configurations of local features, as observed in higher visual areas. The last stage of the model displays place fields as observed in entorhinal cortex and hippocampus. The results suggest that functionally heterogeneous cortical areas can be generated by only a few computational principles, and highlight the importance of the variability of the input signals in forming functional specialization.
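The "optimal stability" objective can be illustrated with a minimal sketch in the spirit of slow feature analysis (an assumption on our part; the abstract does not specify the model's actual learning rule or architecture): find the linear projection of a sensory time series that varies most slowly over time, relative to its overall variance.

```python
import numpy as np

# Minimal temporal-stability sketch (assumed SFA-like objective, toy data):
# minimize the variance of the output's temporal derivative subject to
# unit output variance, solved as a generalized eigenproblem.
rng = np.random.default_rng(2)
t = np.arange(500)
slow = np.sin(2 * np.pi * t / 250)      # slowly varying latent signal
fast = rng.normal(size=500)             # fast, unstructured noise
X = np.column_stack([slow + 0.1 * fast, fast])

Xc = X - X.mean(axis=0)
dX = np.diff(Xc, axis=0)
C = Xc.T @ Xc / len(Xc)                 # covariance of the signal
D = dX.T @ dX / len(dX)                 # covariance of its derivative

# Solve D w = lambda C w; the smallest lambda gives the slowest feature.
eigvals, eigvecs = np.linalg.eig(np.linalg.solve(C, D))
w = eigvecs[:, np.argmin(eigvals.real)].real
y = Xc @ w                              # extracted slowest feature
```

In this toy example the slowest projection recovers the underlying sine wave, mirroring how a stability objective can pull out stable structure from noisy continuous input.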
Automated fault detection using deep belief networks for the quality inspection of electromotors
Vibration inspection of electro-mechanical components and systems is an important tool for automated, reliable online as well as post-process production quality assurance. Considering that bad electromotor samples are very rare in the production line, we propose a novel automated fault detection method named "Tilear", based on Deep Belief Networks (DBNs) trained only with good electromotor samples. Tilear constructs an auto-encoder with DBNs, aiming to reconstruct the inputs as closely as possible. Tilear is structured in two parts: training and decision-making. During training, Tilear is trained only with informative features extracted from preprocessed vibration signals of good electromotors, which enables the trained Tilear to know only how to reconstruct good electromotor vibration signal features. In the decision-making part, comparing the recorded signal from a test electromotor with the Tilear-reconstructed signal measures how well a recording from a test electromotor matches the Tilear model learned from good electromotors, so that a reliable decision can be made.
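The decision rule described above — train a reconstructor only on good samples and flag recordings it cannot reconstruct — can be sketched as follows. The sketch substitutes a simple PCA-based linear auto-encoder and toy feature vectors for the paper's DBN and vibration features; the dimensions and the percentile threshold are illustrative assumptions, not the method's actual parameters.

```python
import numpy as np

# Hypothetical stand-in for Tilear's DBN auto-encoder: a PCA-based
# linear auto-encoder fitted only on features of good motors.
rng = np.random.default_rng(0)
good = rng.normal(size=(200, 16))       # toy features of good motors
good[:, 8:] = good[:, :8] * 0.5         # correlated structure to learn

mean = good.mean(axis=0)
_, _, Vt = np.linalg.svd(good - mean, full_matrices=False)
W = Vt[:8]                              # encoder: keep 8 components

def reconstruction_error(x):
    z = (x - mean) @ W.T                # encode
    x_hat = z @ W + mean                # decode
    return np.linalg.norm(x - x_hat)

# Threshold calibrated on the good samples only (99th percentile here
# is an arbitrary illustrative choice).
threshold = np.percentile([reconstruction_error(x) for x in good], 99)

def is_faulty(x):
    # Flag a recording the model cannot reconstruct well.
    return reconstruction_error(x) > threshold
```

A recording matching the learned structure of good motors reconstructs with low error and passes; one that violates that structure produces a large reconstruction error and is flagged.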
Prediction of human core body temperature using non-invasive measurement methods
The measurement of core body temperature is an efficient method for monitoring heat stress amongst workers in hot conditions. However, invasive measurement of core body temperature (e.g. rectal, intestinal, oesophageal temperature) is impractical for such applications. Therefore, the aim of this study was to define relevant non-invasive measures to predict core body temperature under various conditions. We conducted two human subject studies with different experimental protocols, different environmental temperatures (10°C, 30°C) and different subjects. In both studies the same non-invasive measurement methods (skin temperature, skin heat flux, heart rate) were applied. A principal component analysis was conducted to extract independent factors, which were then used in a linear regression model. We identified six parameters (three skin temperatures, two skin heat fluxes and heart rate), which were included in the calculation of two factors. The predictive value of these factors for core body temperature was evaluated by a multiple regression analysis. The calculated root mean square deviation (rmsd) was in the range from 0.28°C to 0.34°C for all environmental conditions. These errors are similar to previous models using non-invasive measures to predict core body temperature. The results from this study illustrate that multiple physiological parameters (e.g. skin temperature and skin heat fluxes) are needed to predict core body temperature. In addition, the physiological measurements chosen in this study and the algorithm defined in this work are potentially applicable for real-time core body temperature monitoring to assess health risk in a broad range of working conditions.
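The pipeline described above — principal component analysis to extract factors from the non-invasive measurements, followed by multiple regression of core temperature on those factors — can be sketched with toy data. The data, noise levels, and resulting error below are invented for illustration; the study's actual measurements and coefficients are not reproduced here.

```python
import numpy as np

# Toy stand-ins for the six parameters named above: three skin
# temperatures, two skin heat fluxes, heart rate.
rng = np.random.default_rng(1)
n = 120
core = 36.8 + 0.4 * rng.normal(size=n)                    # "true" core temp, degC
M = np.column_stack([core + 0.2 * rng.normal(size=n) for _ in range(6)])

# Principal component analysis on standardized measurements.
Z = (M - M.mean(axis=0)) / M.std(axis=0)
_, _, Vt = np.linalg.svd(Z, full_matrices=False)
factors = Z @ Vt[:2].T                                    # two factors, as in the study

# Multiple linear regression of core temperature on the two factors.
A = np.column_stack([np.ones(n), factors])
coef, *_ = np.linalg.lstsq(A, core, rcond=None)
pred = A @ coef
rmsd = np.sqrt(np.mean((core - pred) ** 2))               # evaluation metric
```

The rmsd computed at the end corresponds to the evaluation metric the study reports (0.28°C to 0.34°C on its real data).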
173 years after Kölliker: time for a new flora of the Canton of Zurich
The long-established Zürcherische Botanische Gesellschaft (Zurich Botanical Society), founded in 1890, is returning to its central concern and venturing a new edition (revision) of the flora of the Canton of Zurich. More than 170 years have passed since Albert Kölliker published the only flora of the canton to date, in 1839. The landscape has changed considerably since then, so that a new inventory of the flora is urgently needed. Preparations for this ambitious project are already well advanced. The flora is to be recorded completely and systematically by sampling one ninth of the canton's area, more precisely on 208 mapping squares of 1 km2 each. The inventory will be complemented by a re-mapping of the Welten/Sutter mapping areas, allowing a comparison of the flora with its state roughly 40 years ago. Searches in the combined herbaria of the University of Zurich and ETH Zurich, as well as in the Herbarium Georg Kummer at the Museum zu Allerheiligen in Schaffhausen, will supplement the field mapping. A pilot mapping in June 2011 provided the first practical experience. The results show that ideally more than one mapping team should be deployed per square kilometre, and that the number of species recorded in a square depends substantially on the time spent there and the distance walked. The flora is to be published primarily on the internet, making new results available regularly. The proposed flora mapping is a realistic project because many people in the Canton of Zurich are willing to devote part of their free time over several years to this mapping.
Bounded Invariance and the Formation of Place Fields
One current explanation of the view-independent representation of space by the place cells of the hippocampus is that it arises out of the summation of view-dependent Gaussians. This proposal assumes that visual representations show bounded invariance. Here we investigate whether a recently proposed visual encoding scheme, the temporal population code, can provide such representations.
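The summation-of-Gaussians proposal can be illustrated with a toy sketch (all landmark positions, preferred distances, and tuning widths below are invented for illustration, not taken from the paper): each view-dependent response is a Gaussian of the distance to a landmark, and summing several such responses over positions in a small arena yields a localized, place-field-like bump.

```python
import numpy as np

# Three landmarks and, for each, a view cell tuned to seeing that
# landmark at a preferred distance (all values are assumptions).
landmarks = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
preferred = np.array([0.7, 0.7, 0.7])   # preferred viewing distances
sigma = 0.15                            # Gaussian tuning width

def place_activity(pos):
    # Sum of view-dependent Gaussian responses at position pos.
    d = np.linalg.norm(landmarks - pos, axis=1)
    return np.sum(np.exp(-(d - preferred) ** 2 / (2 * sigma ** 2)))

# Evaluate on a grid: activity peaks where all view constraints agree,
# forming a single localized "place field".
xs = np.linspace(-0.5, 1.5, 41)
grid = np.array([[place_activity(np.array([x, y])) for x in xs] for y in xs])
peak = np.unravel_index(grid.argmax(), grid.shape)
```

In this configuration the three distance constraints are jointly satisfied near the centre of the arena, so the summed activity forms a single bump there, which is the intuition behind deriving place fields from view-dependent responses.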