
    The PiLLoW/Ciao library for INTERNET/WWW programming using computational logic systems

    Full text link
    We discuss, from a practical point of view, a number of issues involved in writing Internet and WWW applications using LP/CLP systems. We describe PiLLoW, an Internet and WWW programming library for LP/CLP systems which we argue significantly simplifies the process of writing such applications. PiLLoW provides facilities for generating HTML structured documents, producing HTML forms, writing form handlers, accessing and parsing WWW documents, and accessing code posted at HTTP addresses. We also describe the architecture of some application classes, using a high-level model of client-server interaction: active modules. Finally, we describe an architecture for automatic LP/CLP code downloading for local execution, using generic browsers. The PiLLoW library has been developed in the context of the &-Prolog and CIAO systems, but it has been adapted to a number of popular LP/CLP systems, which support most of its functionality.
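    As an illustration of the kind of structured-document generation the abstract describes, the sketch below renders a nested (tag, attributes, children) structure as HTML. It is only a Python analogue of the idea under assumed names (html_term, the sample form); it is not the PiLLoW API, which operates on Prolog terms.

```python
# Illustrative analogue (not the PiLLoW API): render a nested
# (tag, attributes, children) structure as an HTML string.
from html import escape

def html_term(node):
    """Recursively render a term: either a plain string or a
    (tag, attrs_dict, children_list) tuple."""
    if isinstance(node, str):
        return escape(node)
    tag, attrs, children = node
    attr_text = "".join(f' {k}="{escape(str(v))}"' for k, v in attrs.items())
    body = "".join(html_term(child) for child in children)
    return f"<{tag}{attr_text}>{body}</{tag}>"

# A small form, analogous to PiLLoW's form-generation facilities.
page = ("form", {"action": "/handler", "method": "post"}, [
    ("label", {}, ["Query: "]),
    ("input", {"type": "text", "name": "query"}, []),
    ("input", {"type": "submit", "value": "Send"}, []),
])
print(html_term(page))
```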

    Quantifying the Interactions of Nickel with Marine Dissolved Organic Matter Using Fluorescence

    Get PDF
    Nickel (Ni) is a versatile metal with an abundance of applications, notably in stainless steel, electronics, and batteries, making it a popular choice in industry. Unfortunately, with increasing demand and production comes a higher amount of Ni pollution. Nickel enters ocean waters, either directly or indirectly, and can have profound effects on marine life. Nickel has been established as a toxicant to a variety of aquatic biota, with the divalent cation (Ni2+) thought to be the most bioavailable fraction and thus the most toxic. Having a reliable means of quantifying the free Ni ion is pertinent to establishing appropriate water quality recommendations for aquatic life protection. The objective of this study was to compare two speciation techniques for quantifying Ni2+ in natural samples. The methods studied in this work were ion-selective electrode (ISE) and fluorescence quenching (FQ) titrations. Results indicated that a Ni-ISE is more easily applicable in low-ionic-strength samples, since electrode potential changes in response to added Ni were observed only in freshwater. Fluorescence excitation-emission matrices were scanned to identify fluorophores within the samples, and variable-angle synchronous spectra were used to monitor the titrations. Binding constants (log K) and complexing capacities (LT) were derived using nonlinear regression, and Monte Carlo analysis was used to relate these values to EC50 Ni levels from toxicity tests (conducted by a collaborating group) on the same samples. Results showed that the predicted Ni2+ concentrations at EC50 levels had overlapping 95% confidence intervals for Mytilus edulis. The predicted free Ni2+ concentrations did not overlap for Strongylocentrotus purpuratus, though it should be noted that only one data point was available. The Mytilus edulis results also agreed with the artificial seawater (ASW) control, highlighting the validity and usefulness of a Biotic Ligand Model (BLM) for marine Ni.
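    To make the regression step concrete, here is a minimal sketch of how a conditional binding constant (log K) and complexing capacity (LT) might be derived from a quenching titration by nonlinear least squares, assuming a simple 1:1 metal-ligand model. The model form, the synthetic data, and all parameter values are illustrative; the study's actual fitting procedure and data are not reproduced here.

```python
# Illustrative 1:1 binding model fitted to fluorescence-quenching data.
# All numbers are synthetic; K in L/mol, concentrations in mol/L.
import numpy as np
from scipy.optimize import curve_fit

def quench_model(m_total, log_K, L_T, F0, F_ML):
    """Fluorescence vs. total added metal for 1:1 complexation.
    [ML] is the root of the binding quadratic; fluorescence decreases
    linearly with the fraction of ligand complexed."""
    K = 10.0 ** log_K
    b = m_total + L_T + 1.0 / K
    ML = (b - np.sqrt(b * b - 4.0 * m_total * L_T)) / 2.0
    return F0 - (F0 - F_ML) * ML / L_T

# Synthetic titration: total Ni added and "measured" fluorescence.
m_added = np.linspace(0, 2e-6, 15)
true = dict(log_K=6.3, L_T=5e-7, F0=100.0, F_ML=35.0)
rng = np.random.default_rng(0)
F_obs = quench_model(m_added, **true) + rng.normal(0, 0.5, m_added.size)

p0 = [6.0, 1e-6, 100.0, 30.0]                     # initial guesses
bounds = ([4.0, 1e-8, 50.0, 0.0], [9.0, 1e-5, 150.0, 100.0])
popt, pcov = curve_fit(quench_model, m_added, F_obs, p0=p0, bounds=bounds)
perr = np.sqrt(np.diag(pcov))                     # 1-sigma uncertainties
print(f"log K = {popt[0]:.2f} +/- {perr[0]:.2f}, L_T = {popt[1]:.2e} mol/L")
```

    The covariance matrix returned by the fit is the natural input to the kind of Monte Carlo analysis mentioned above: sampling parameter sets from it propagates the fitting uncertainty into predicted free Ni2+ concentrations.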

    Analytical and numerical seismic assessment of heritage masonry towers

    Get PDF
    The new Italian building code, published in 2018 [MIT in NTC 2018: D.M. del Ministero delle Infrastrutture e dei trasporti del 17/01/2018. Aggiornamento delle Norme Tecniche per le Costruzioni (in Italian), 2018], explicitly refers to the Italian “Guidelines for the assessment and mitigation of the seismic risk of the cultural heritage” [PCM in DPCM 2011: Direttiva del Presidente del Consiglio dei Ministri per valutazione e riduzione del rischio sismico del patrimonio culturale con riferimento alle norme tecniche per le costruzioni, G.U. n. 47 (in Italian), 2011] as a reliable source of guidance that can be employed for the vulnerability assessment of heritage buildings under seismic loads. According to these guidelines, three evaluation levels are introduced to analyse and assess the seismic capacity of historic masonry structures, namely: (1) simplified global static analyses; (2) kinematic analyses based on local collapse mechanisms; and (3) detailed global analyses. Because of the complexity and the large variety of existing masonry typologies, which make it particularly problematic to adopt a unique procedure for all existing structures, the guidelines provide different simplified analysis approaches for different structural configurations, e.g. churches, palaces, and towers. Among the masonry typologies considered therein, this work aims to examine in depth the validity, effectiveness, and scope of application of the Italian guidelines with respect to heritage masonry towers. The three evaluation levels proposed by the guidelines are compared here by discussing the seismic risk assessment of a representative masonry tower: the Cugnanesi tower in San Gimignano (Italy). The results show that global failure modes due to local stress concentrations cannot be identified if only simplified static and kinematic analyses are performed. Detailed global analyses are in fact generally needed for a reliable prediction of the seismic performance of such structures.
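    As a rough illustration of what a simplified, level-1-style static check for a tower involves, the sketch below treats the tower as a vertical cantilever, applies an assumed seismic force distributed linearly with height, and compares the base overturning moment with the stabilising moment from self-weight. The geometry, the seismic coefficient, and the check itself are illustrative placeholders, not the formulas or coefficients of the Italian guidelines.

```python
# Schematic cantilever check for a masonry tower (illustrative only;
# not the procedure or coefficients of the Italian guidelines).
H = 40.0               # tower height [m], assumed
B = 7.0                # plan width of the base [m], assumed
W = 12.0e6             # total weight [N], assumed
seismic_coeff = 0.15   # assumed horizontal seismic coefficient (F_h = c * W)

# Horizontal force distributed linearly with height (inverted triangle):
# the resultant F_h acts at 2/3 of the height above the base.
F_h = seismic_coeff * W
M_overturning = F_h * (2.0 / 3.0) * H

# Stabilising moment of the self-weight about the toe of the base
# (rigid-body rocking about the compressed edge, cracking ignored).
M_stabilising = W * B / 2.0

safety_ratio = M_stabilising / M_overturning
print(f"Overturning demand:    {M_overturning / 1e6:.1f} MN*m")
print(f"Stabilising capacity:  {M_stabilising / 1e6:.1f} MN*m")
print(f"Capacity/demand ratio: {safety_ratio:.2f} "
      f"({'OK' if safety_ratio >= 1 else 'not verified'})")
```

    A check of this kind captures a global overturning mode but, as the abstract notes, not failure modes driven by local stress concentrations, which is why detailed global analyses remain necessary.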

    A compiler for natural semantics

    Full text link

    A Few Applications of Seismic Waves: Anisotropy Tomography and All That

    Get PDF
    Seismic anisotropy, the variation of seismic wave speed with direction, is an extremely important physical phenomenon. When a certain type of seismic wave (a shear wave) propagates in an anisotropic medium, the component polarized parallel to the fast direction (along which the speed is higher) begins to lead, and the component polarized parallel to the slow direction lags behind (analogous to optical birefringence). This observation of seismic anisotropy may be used to infer several physical properties of the medium through which the waves propagate. Fortunately, Earth's upper mantle shows significant seismic anisotropy due to the preferred crystallographic orientation of its constituent minerals. It can therefore provide crucial information regarding convective flow and stress patterns in the upper mantle. More precisely, seismic anisotropy can shed light on the detailed inner workings of several geodynamic processes that are inherently anisotropic in nature and therefore largely invisible to isotropic seismology.
    Owing to its simplicity, the classical ray-theory formulation is widely used to infer anisotropic structures of the upper mantle. However, given the lack of vertical resolution of infinite-frequency, ray-theory-based methods and their numerous other shortcomings even in simplified studies assuming isotropy, it is undesirable to use a ray-theory-based method in a fully anisotropic framework. The major portion of this thesis is devoted to developing an anisotropy tomography method in a perturbative framework in which the 'finite-frequency', or full 'wave', nature of propagation is taken into account. This technique proves to be a substantial improvement in terms of localization of upper-mantle anisotropy. After benchmarking, it is applied to infer the anisotropic structure beneath the High Lava Plains of Oregon, where it provides an avenue for reconciling apparently contradictory constraints on anisotropic structure from different measurements.
    In the last part of the thesis, we briefly discuss a technique (slightly tangential to the main theme of anisotropy, though connected to it at a more fundamental level) that we developed to obtain an effective description of the physical properties of a general heterogeneous medium (including pure randomness). This is motivated by the fact that, when propagating through small heterogeneities, seismic waves naturally average the elastic properties of the medium, so that only an effective physics is realized.
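    For intuition about the splitting observation described above, here is a minimal synthetic sketch: a linearly polarized shear pulse is projected onto assumed fast and slow axes, the slow component is delayed, and the two are recombined, producing the fast/slow arrival pair measured in splitting studies. The pulse shape, fast-axis azimuth, and delay time are arbitrary illustrative values, not values from the thesis.

```python
# Toy shear-wave splitting: project a polarized pulse onto fast/slow
# axes, delay the slow component, and recombine. Values are arbitrary.
import numpy as np

dt = 0.01                                  # sample interval [s]
t = np.arange(0, 10, dt)
pulse = np.exp(-((t - 3.0) / 0.4) ** 2)    # Gaussian shear pulse

polarization = np.deg2rad(30.0)   # initial polarization of the pulse
phi = np.deg2rad(70.0)            # fast-axis azimuth (splitting parameter)
delay = 1.2                       # delay time delta-t [s] (splitting parameter)

# Components along the fast and slow axes before propagation.
fast = pulse * np.cos(polarization - phi)
slow = pulse * np.sin(polarization - phi)

# Propagation through the anisotropic layer delays the slow component.
slow_delayed = np.interp(t - delay, t, slow, left=0.0, right=0.0)

# Recombine into the original coordinate frame (two horizontal components).
comp_x = fast * np.cos(phi) - slow_delayed * np.sin(phi)
comp_y = fast * np.sin(phi) + slow_delayed * np.cos(phi)

print("fast arrival peaks at t =", t[np.argmax(np.abs(fast))], "s")
print("slow arrival peaks at t =", t[np.argmax(np.abs(slow_delayed))], "s")
```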

    Secure Computation Protocols for Privacy-Preserving Machine Learning

    Get PDF
    Machine learning (ML) greatly benefits from the availability of large amounts of training data, both in terms of the number of samples and the number of features per sample. However, aggregating more data under centralized control is not always possible, nor desirable, due to security and privacy concerns, regulation, or competition. Secure multi-party computation (MPC) protocols promise a solution to this dilemma, allowing multiple parties to train ML models on their joint datasets while provably preserving the confidentiality of the inputs. However, generic approaches to MPC result in large computation and communication overheads, which limits their applicability in practice. The goal of this thesis is to make privacy-preserving machine learning with secure computation practical. First, we focus on two high-level applications, linear regression and document classification. We show that communication and computation overhead can be greatly reduced by identifying the costliest parts of the computation and replacing them with sub-protocols that are tailored to the number and arrangement of parties, the data distribution, and the number representation used. One of our main findings is that exploiting sparsity in the data representation enables considerable efficiency improvements. We go on to generalize this observation and implement a low-level data structure for sparse data, with corresponding secure access protocols. On top of this data structure, we develop several linear algebra algorithms that can be used in a wide range of applications. Finally, we turn to improving a cryptographic primitive named vector-OLE, for which we propose a novel protocol that helps speed up a wide range of secure computation tasks, within private machine learning and beyond. Overall, our work shows that MPC indeed offers a promising avenue towards practical privacy-preserving machine learning, and the protocols we developed constitute a substantial step in that direction.
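    To give a flavor of the secret-sharing machinery such protocols build on (a generic textbook illustration, not the thesis's protocols or its sparse data structure), the sketch below additively secret-shares two private vectors over a prime field; each party adds its shares locally, and only the final sum is reconstructed, so neither party sees the other's inputs.

```python
# Toy additive secret sharing over a prime field (illustrative only;
# real MPC protocols add authentication, communication, and much more).
import secrets

P = 2**61 - 1   # a Mersenne prime used as the field modulus

def share(x, n_parties=2):
    """Split integer x into n additive shares that sum to x mod P."""
    shares = [secrets.randbelow(P) for _ in range(n_parties - 1)]
    shares.append((x - sum(shares)) % P)
    return shares

def reconstruct(shares):
    return sum(shares) % P

# Two parties hold private (sparse) vectors.
alice = [3, 0, 0, 7, 0]
bob   = [0, 5, 0, 1, 0]

alice_shares = [share(v) for v in alice]   # Alice keeps one share of each
bob_shares   = [share(v) for v in bob]     # entry and sends the other to Bob.

# Addition is purely local: each party adds its shares of the two vectors.
sum_shares = [
    [(a[i] + b[i]) % P for i in range(2)]
    for a, b in zip(alice_shares, bob_shares)
]

result = [reconstruct(s) for s in sum_shares]
print(result)   # [3, 5, 0, 8, 0] -- the elementwise sum, revealed only at the end
```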

    A NEW GENERATION CHEMICAL FLOODING SIMULATOR Semi-annual Report for the Period

    Get PDF
    Contents excerpt: Abstract; Summary; Task 1: Formulation and development of solution scheme

    Tietuekimppujen indeksointi flash-muistilla

    Get PDF
    In database applications, bulk operations, which affect multiple records at once, are common. They are performed when operations on single records at a time are not efficient enough. They arise in several ways, both in applications that naturally have bulk operations (such as a sales database that is updated daily) and in applications that perform them routinely as part of some other operation. While bulk operations have been studied for decades, their use with flash memory has been studied less. Flash memory, an increasingly popular alternative or complement to magnetic hard disks, has far better seek times, low power consumption, and other characteristics desirable for database applications. However, erasing data is a costly operation on flash, which means that designing index structures specifically for flash disks is worthwhile. This thesis investigates the use of flash memory in data structures in general, identifying some common design traits, and incorporates those traits into a novel index structure, the bulk index. The bulk index is an index structure for bulk operations on flash memory, and it was experimentally compared to a flash-based index structure that has shown impressive results, the Lazy Adaptive Tree (LA-tree for short). The bulk insertion experiments were made with varying-sized elementary bulks, i.e. maximal sets of inserted keys that fall between two consecutive keys in the existing data. The bulk index consistently performed better than the LA-tree, and especially well in bulk insertion experiments with many very small or a few very large elementary bulks, or with large inserted bulks. It was more than 4 times as fast at best. On range searches, it performed up to 50% faster than the LA-tree, doing better on large ranges. Range deletions were also shown to be constant-time on the bulk index.
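    To make the notion of elementary bulks concrete, the following sketch partitions a sorted batch of new keys into its elementary bulks, i.e. maximal runs of inserted keys that fall between two consecutive keys already in the index. The data and the function name are illustrative; this is not the bulk index implementation.

```python
# Partition a sorted batch of new keys into elementary bulks:
# maximal runs falling between two consecutive existing keys.
from bisect import bisect_right

def elementary_bulks(existing_sorted, new_sorted):
    """Group new_sorted keys by the gap of existing_sorted they fall into.
    Returns a list of lists, one per non-empty gap, in key order."""
    bulks = []
    current_gap, current = None, []
    for key in new_sorted:
        gap = bisect_right(existing_sorted, key)   # index of the enclosing gap
        if gap != current_gap:
            if current:
                bulks.append(current)
            current_gap, current = gap, []
        current.append(key)
    if current:
        bulks.append(current)
    return bulks

existing = [10, 20, 30, 40]
batch = [1, 2, 12, 21, 22, 23, 35, 50, 60]
print(elementary_bulks(existing, batch))
# [[1, 2], [12], [21, 22, 23], [35], [50, 60]]
```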

    Extending the Finite Domain Solver of GNU Prolog

    No full text
    This paper describes three significant extensions of the Finite Domain solver of GNU Prolog. First, the solver now supports negative integers. Second, the solver detects and prevents integer overflows. Third, the internal representation of sparse domains has been redesigned to overcome its previous limitations. A preliminary performance evaluation shows a limited slowdown factor with respect to the initial solver. This factor is largely counterbalanced by the new possibilities and the increased robustness of the solver. Furthermore, these results are preliminary, and we propose some directions for limiting this overhead.
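    As a loose illustration of the first two extensions (signed bounds and overflow protection), the sketch below implements a tiny interval domain whose arithmetic saturates at explicit bounds instead of silently wrapping. It is a generic constraint-domain sketch in Python, not the GNU Prolog solver's internal C representation, and the bound values are arbitrary.

```python
# Toy interval domain with signed bounds and overflow-safe arithmetic
# (generic sketch; not GNU Prolog's internal representation).
MIN_INT, MAX_INT = -(2**31), 2**31 - 1   # arbitrary fixed-width bounds

def clamp(x):
    """Saturate a bound instead of letting it overflow."""
    return max(MIN_INT, min(MAX_INT, x))

class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = clamp(lo), clamp(hi)

    def add(self, other):
        # Interval addition: [a,b] + [c,d] = [a+c, b+d], saturated.
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def mul(self, other):
        # Interval multiplication: min/max over the four corner products.
        corners = [self.lo * other.lo, self.lo * other.hi,
                   self.hi * other.lo, self.hi * other.hi]
        return Interval(min(corners), max(corners))

    def __repr__(self):
        return f"{self.lo}..{self.hi}"

x = Interval(-5, 3)                     # negative bounds are allowed
y = Interval(2, 10**12)                 # would overflow a 32-bit bound
print(x.add(y))                         # -3..2147483647 (saturated upper bound)
print(x.mul(Interval(-2, 4)))           # -20..12
```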