
    The first Czech organic-quality hops

    The history of organic hop growing worldwide began in the mid-1980s in Bavaria. In 2011, 55 farms in ten countries were engaged in organic hop cultivation, farming a total of 187 ha of hop gardens with an annual production of 240 t of certified organic hops.

    NASYP: Online expert tool on the control of major-accident hazards involving dangerous substances

    NASYP is an online geoportal tool being developed in cooperation with state and regional authorities to improve insufficient practices in implementing Directive 2003/105/EC on the control of major-accident hazards involving dangerous substances. The tool supports permit management, reporting, and regular monitoring, and is also applicable to risk assessment and to rapid disaster management in the initial phase of an incident. Simple modelling tools are included to simulate the early stages of contamination caused by accidents, supporting decision making and the effective deployment of emergency services; both low-atmosphere and surface-water pollution are taken into account. The Liberec region was chosen as the study area, covering 3,163 km² and containing 533 potentially dangerous facilities categorized according to Directive 2003/105/EC. The model simulations respond to the daily hydrological and meteorological situation, can be updated automatically from databases operated by the Czech Hydrometeorological Institute, and communicate with substance databases operated by the regional authorities. NASYP is especially suitable for operators in the Directive's "N" class, where, because of the smaller amounts of dangerous substances stored, safety measures and regular inspections are limited.
    Keywords: spatial data, geoportal, risk management, modelling, Research and Development/Tech Change/Emerging Technologies, Research Methods/Statistical Methods, Risk and Uncertainty, GA, IN
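    The abstract mentions "simple modelling tools" for the early stages of atmospheric contamination. As a hedged illustration only, the sketch below uses the textbook Gaussian plume formula for ground-level, centreline concentration downwind of a point release; NASYP's actual models are not specified in the abstract, and every parameter value here is illustrative.

```python
import math

def plume_concentration(Q, u, sigma_y, sigma_z, H):
    """Ground-level centreline concentration (kg/m^3) from a point source.

    Q: emission rate (kg/s), u: wind speed (m/s),
    sigma_y, sigma_z: lateral/vertical dispersion coefficients (m)
    at the downwind distance of interest, H: effective release height (m).
    Standard Gaussian plume form: C = Q / (pi * u * sy * sz) * exp(-H^2 / (2 sz^2)).
    """
    return (Q / (math.pi * u * sigma_y * sigma_z)
            * math.exp(-H ** 2 / (2 * sigma_z ** 2)))

# Illustrative numbers only (not from NASYP): 1 kg/s release, 3 m/s wind,
# dispersion coefficients for a few hundred metres downwind, 10 m stack.
c = plume_concentration(Q=1.0, u=3.0, sigma_y=50.0, sigma_z=25.0, H=10.0)
```

    A ground-level release (H = 0) gives the upper bound for the same dispersion parameters, since the exponential factor becomes 1.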

    Theta-paced flickering between place-cell maps in the hippocampus

    The ability to recall discrete memories is thought to depend on the formation of attractor states in recurrent neural networks. In such networks, representations can be reactivated reliably from subsets of the cues that were present when the memory was encoded, at the same time as interference from competing representations is minimized. Theoretical studies have pointed to the recurrent CA3 system of the hippocampus as a possible attractor network. Consistent with predictions from these studies, experiments have shown that place representations in CA3 and downstream CA1 tolerate small changes in the configuration of the environment but switch to uncorrelated representations when dissimilarities become larger. The kinetics supporting such network transitions, at the subsecond time scale, is poorly understood, however. Here we show that instantaneous transformation of the spatial context ('teleportation') does not change the hippocampal representation all at once but is followed by temporary bistability in the discharge activity of CA3 ensembles. Rather than sliding through a continuum of intermediate activity states, the CA3 network undergoes a short period of competitive flickering between pre-formed representations for the past and present environments, before settling on the latter. Network flickers are extremely fast, often with complete replacement of the active ensemble from one theta cycle to the next. Within individual cycles, segregation is stronger towards the end, when firing starts to decline, pointing to the theta cycle as a temporal unit for expression of attractor states in the hippocampus. Repetition of pattern-completion processes across successive theta cycles may facilitate error correction and enhance discriminative power in the presence of weak and ambiguous input cues.
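    The attractor dynamics described above, where an ambiguous cue is completed to one of several pre-stored representations, can be illustrated with a toy Hopfield-style network. This is a generic sketch of attractor pattern completion, not the authors' CA3 model; the network size, number of stored "maps", and cue corruption level are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64  # number of model neurons

# Two stored "maps" (random binary patterns with +/-1 activity).
map_a = rng.choice([-1, 1], size=n)
map_b = rng.choice([-1, 1], size=n)

# Hebbian weight matrix for the two patterns, no self-connections.
W = np.outer(map_a, map_a) + np.outer(map_b, map_b)
np.fill_diagonal(W, 0)

def recall(cue, steps=20):
    """Iterate synchronous sign updates until the state (hopefully) settles."""
    s = cue.copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1  # break ties deterministically
    return s

# Ambiguous cue: mostly map_a, with some elements replaced by map_b,
# loosely analogous to conflicting spatial cues after "teleportation".
cue = map_a.copy()
idx = rng.choice(n, size=12, replace=False)
cue[idx] = map_b[idx]

out = recall(cue)
overlap_a = np.mean(out == map_a)  # fraction of neurons matching map_a
```

    Because the cue overlaps map_a far more than map_b, the dynamics fall into the map_a attractor rather than lingering in intermediate states, which is the pattern-completion behaviour the abstract contrasts with flickering.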

    Sentence Compression for the LSA-based Summarizer (Josef Steinberger)

    Abstract: We present a simple sentence compression approach for our summarizer based on latent semantic analysis (LSA). The summarization method assesses each sentence by an LSA score. The compression algorithm removes unimportant clauses from a full sentence. First, a sentence is divided into clauses by the Charniak parser; then compression candidates are generated; and finally, the best candidate is selected to represent the sentence. The candidates receive an importance score that is directly proportional to their LSA score and inversely proportional to their length. We evaluated the approach in two ways. By intrinsic evaluation we found that the compressions produced by our algorithm are better than baseline ones but still worse than those produced by humans. We then compared the resulting summaries with human abstracts using the standard n-gram-based ROUGE measure.
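    The scoring scheme the abstract describes can be sketched with a minimal LSA pipeline. This is a hedged illustration under stated assumptions: a binary term-sentence matrix, the common LSA sentence score (length of the sentence's vector in the singular-value-weighted topic space, as in Steinberger's LSA summarization line of work), and an illustrative candidate-scoring function; the paper's actual preprocessing and weighting may differ.

```python
import numpy as np

sentences = [
    "the cat sat on the mat",
    "the dog chased the cat",
    "stock prices rose sharply today",
]

# Binary term-sentence matrix A (terms x sentences).
vocab = sorted({w for s in sentences for w in s.split()})
A = np.array([[1.0 if t in s.split() else 0.0 for s in sentences]
              for t in vocab])

# Latent semantic analysis: SVD of the term-sentence matrix.
U, sigma, Vt = np.linalg.svd(A, full_matrices=False)
k = 2  # number of latent topics retained

# LSA score per sentence: Euclidean length of its column in sigma * V^T,
# restricted to the top-k topics.
scores = np.sqrt(((sigma[:k, None] * Vt[:k]) ** 2).sum(axis=0))

def candidate_score(lsa_score, n_words):
    # Illustrative: importance rises with the LSA score and falls
    # with candidate length, matching the proportionality the
    # abstract describes (exact functional form is an assumption).
    return lsa_score / n_words
```

    Shorter compression candidates that preserve the sentence's latent-topic weight thus outrank longer ones with the same LSA score.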

    Query Optimization in Deductive Programs with Aggregates

    This paper addresses the problem of aggregates in an extended version of Datalog.