3,700 research outputs found

    Determination of trans fatty acid levels by FTIR in processed foods in Australia

    Health authorities around the world advise 'limiting consumption of trans fatty acids'; however, in Australia the trans fatty acid (TFA) content is not required to be listed in the nutrition information panel unless a declaration or nutrient claim is made about fatty acids or cholesterol. Since there is limited knowledge about trans fatty acid levels in processed foods available in Australia, this study aimed to determine the levels of TFA in selected food items identified as sources of TFA in previously published studies. Food items (n=92) that contain vegetable oil and a total fat content greater than 5% were included. This criterion was used in conjunction with a review of similar studies in which food items were found to contain high levels of trans fatty acids. Lipids were extracted using solvents. Gravimetric methods were used to determine total fat content, and trans fatty acid levels were quantified by Attenuated Total Reflectance Fourier Transform Infrared (ATR-FTIR) spectroscopy. High levels of trans fatty acids were found in certain items in the Australian food supply, with a high degree of variability. Of the samples analysed, 13 contained more than 1 g of trans fatty acids per serving; the highest value was 8.1 g/serving. Apart from cases where the nutrition information panel states that the content is less than a designated low level, food labels in Australia do not indicate trans fatty acid levels. We suggest that health authorities seek ways to assist consumers to limit their intakes of trans fatty acids.
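    Purely as an illustration of the unit conversion behind a grams-per-serving figure (the percentages and serving size below are invented, not the study's data), the arithmetic is:

    # Illustrative arithmetic only: deriving grams of TFA per serving from measured
    # percentages. The example numbers are made up, not results from the study above.
    def tfa_per_serving(total_fat_pct, tfa_pct_of_fat, serving_size_g):
        """Grams of trans fatty acids in one serving.

        total_fat_pct   -- total fat as % of the food by weight (gravimetric result)
        tfa_pct_of_fat  -- TFA as % of total fat (e.g. from an ATR-FTIR calibration)
        serving_size_g  -- labelled serving size in grams
        """
        fat_g = serving_size_g * total_fat_pct / 100
        return fat_g * tfa_pct_of_fat / 100

    # A food that is 20% fat, with 15% of that fat being TFA, in a 60 g serving:
    print(round(tfa_per_serving(20, 15, 60), 2))  # 1.8 g per serving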

    A Framework for Constraint-Based Deployment and Autonomic Management of Distributed Applications

    We propose a framework for deployment and subsequent autonomic management of component-based distributed applications. An initial deployment goal is specified using a declarative constraint language, expressing constraints over aspects such as component-host mappings and component interconnection topology. A constraint solver is used to find a configuration that satisfies the goal, and the configuration is deployed automatically. The deployed application is instrumented to allow subsequent autonomic management. If, during execution, the manager detects that the original goal is no longer being met, the satisfy/deploy process can be repeated automatically in order to generate a revised deployment that does meet the goal. Comment: Submitted to ICAC-0
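    The abstract does not include code; the following is a minimal, hypothetical sketch of the idea in Python, assuming a toy model in which constraints are plain predicates over a component-to-host mapping and the "solver" is a brute-force search. The host names, memory figures, and deploy step are illustrative assumptions, not the framework's actual API.

    # Hypothetical sketch of constraint-based deployment; all names and numbers
    # below are invented for illustration.
    from itertools import product

    hosts = {"hostA": {"mem": 2048}, "hostB": {"mem": 4096}}
    components = {"db": {"mem": 1500}, "web": {"mem": 1000}, "cache": {"mem": 500}}

    # The deployment goal as plain predicates over a component -> host mapping.
    constraints = [
        # No host's memory capacity may be exceeded.
        lambda m: all(
            sum(components[c]["mem"] for c, h in m.items() if h == host) <= spec["mem"]
            for host, spec in hosts.items()
        ),
        # Interconnection topology: keep web and cache co-located.
        lambda m: m["web"] == m["cache"],
    ]

    def solve():
        """Brute-force search for a component -> host mapping satisfying every constraint."""
        names = list(components)
        for assignment in product(hosts, repeat=len(names)):
            mapping = dict(zip(names, assignment))
            if all(check(mapping) for check in constraints):
                return mapping
        return None

    def deploy(mapping):
        print("deploying:", mapping)   # stand-in for the real deployment machinery

    mapping = solve()
    deploy(mapping)                    # e.g. {'db': 'hostA', 'web': 'hostB', 'cache': 'hostB'}

    # Autonomic management: if monitoring finds the goal no longer met (here, a host
    # failure), repeat the satisfy/deploy cycle to obtain a revised deployment.
    del hosts["hostA"]
    goal_met = all(h in hosts for h in mapping.values()) and all(c(mapping) for c in constraints)
    if not goal_met:
        deploy(solve())                # everything re-placed on the surviving host

    A real constraint solver would replace the exhaustive search, but the satisfy/deploy/monitor cycle has the same shape as the one the abstract describes.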

    An automated ETL for online datasets

    While using online datasets for machine learning is commonplace today, the quality of these datasets affects the performance of prediction algorithms. One method for improving the semantics of new data sources is to map these sources to a common data model or ontology. While semantic and structural heterogeneities must still be resolved, this provides a well-established approach to producing clean datasets suitable for machine learning and analysis. However, when there is a requirement for close-to-real-time usage of online data, a method for dynamic Extract-Transform-Load (ETL) of new source data must be developed. In this work, we present a framework for integrating online and enterprise data sources, in close to real time, to provide datasets for machine learning and predictive algorithms. An exhaustive evaluation compares a human-built data transformation process with our system's machine-generated ETL process, with very favourable results, illustrating the value and impact of an automated approach.
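    As a rough illustration of the mapping idea (not the system described above), the sketch below assumes a toy common data model and a per-source mapping table that renames and converts fields; the field names, JSON shape, and mapping format are invented.

    # Hypothetical ETL sketch: map an online source onto a common data model.
    import json
    from urllib.request import urlopen

    # Common data model: the canonical fields every integrated source is mapped to.
    CANONICAL_FIELDS = ("station_id", "timestamp", "temperature_c")

    # A generated (or hand-written) mapping from source-specific keys to canonical
    # ones, with per-field conversions to resolve structural heterogeneity.
    source_mapping = {
        "id": ("station_id", str),
        "obs_time": ("timestamp", str),
        "temp_f": ("temperature_c", lambda f: round((float(f) - 32) * 5 / 9, 2)),
    }

    def extract(url):
        """Pull raw records from an online JSON source (a list of dicts is assumed)."""
        with urlopen(url) as resp:
            return json.load(resp)

    def transform(records, mapping):
        """Rename and convert source fields into the common data model."""
        for rec in records:
            row = {}
            for src_key, (canon_key, convert) in mapping.items():
                if src_key in rec:
                    row[canon_key] = convert(rec[src_key])
            # Keep only rows that populate the full canonical schema.
            if all(f in row for f in CANONICAL_FIELDS):
                yield row

    def load(rows, sink):
        """Append transformed rows to an in-memory sink standing in for a warehouse table."""
        sink.extend(rows)

    # Example run against local data instead of a live URL.
    raw = [{"id": "S1", "obs_time": "2024-01-01T00:00Z", "temp_f": "68"}]
    dataset = []
    load(transform(raw, source_mapping), dataset)
    print(dataset)  # [{'station_id': 'S1', 'timestamp': '2024-01-01T00:00Z', 'temperature_c': 20.0}]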

    A Middleware Framework for Constraint-Based Deployment and Autonomic Management of Distributed Applications

    We propose a middleware framework for deployment and subsequent autonomic management of component-based distributed applications. An initial deployment goal is specified using a declarative constraint language, expressing constraints over aspects such as component-host mappings and component interconnection topology. A constraint solver is used to find a configuration that satisfies the goal, and the configuration is deployed automatically. The deployed application is instrumented to allow subsequent autonomic management. If, during execution, the manager detects that the original goal is no longer being met, the satisfy/deploy process can be repeated automatically in order to generate a revised deployment that does meet the goal. Comment: Submitted to Middleware 0

    Applying constraint solving to the management of distributed applications

    Submitted to DOA08. We present our approach for deploying and managing distributed component-based applications. A Desired State Description (DSD), written in a high-level declarative language, specifies requirements for a distributed application. Our infrastructure accepts a DSD as input, and from it automatically configures and deploys the distributed application. Subsequent violations of the original requirements are detected and, where possible, automatically rectified by reconfiguration and redeployment of the necessary application components. A constraint solving tool is used to plan deployments that meet the application requirements.
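    A hypothetical sketch of what a Desired State Description and the detect-and-rectify loop might look like; the DSD structure, the monitoring stub, and the function names are assumptions for illustration, not the infrastructure's actual language.

    # Hypothetical DSD plus observe/compare/rectify loop; all names are invented.
    dsd = {
        "components": {"frontend": {"replicas": 2}, "worker": {"replicas": 3}},
    }

    def observe_running_state():
        # Stand-in for real monitoring; a live system would query the deployed components.
        return {"frontend": {"replicas": 2}, "worker": {"replicas": 2}}  # one worker lost

    def violations(dsd, state):
        """Compare observed state with the DSD and list requirements that are not met."""
        problems = []
        for name, want in dsd["components"].items():
            have = state.get(name, {}).get("replicas", 0)
            if have < want["replicas"]:
                problems.append(f"{name}: want {want['replicas']} replicas, have {have}")
        return problems

    def rectify(problems):
        # Stand-in for replanning with the constraint solving tool and redeploying components.
        print("reconfiguring to fix:", problems)

    def manage(dsd, cycles=1):
        """One observe/compare/rectify pass per cycle (bounded so the sketch terminates)."""
        for _ in range(cycles):
            problems = violations(dsd, observe_running_state())
            if problems:
                rectify(problems)

    manage(dsd)   # prints: reconfiguring to fix: ['worker: want 3 replicas, have 2']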

    A Peer-to-Peer Middleware Framework for Resilient Persistent Programming

    The persistent programming systems of the 1980s offered a programming model that integrated computation and long-term storage. In these systems, reliable applications could be engineered without requiring the programmer to write translation code to manage the transfer of data to and from non-volatile storage. More importantly, it simplified the programmer's conceptual model of an application, and avoided the many coherency problems that result from multiple cached copies of the same information. Although technically innovative, persistent languages were not widely adopted, perhaps due in part to their closed-world model. Each persistent store was located on a single host, and there were no flexible mechanisms for communication or transfer of data between separate stores. Here we re-open the work on persistence and combine it with modern peer-to-peer techniques in order to provide support for orthogonal persistence in resilient and potentially long-running distributed applications. Our vision is of an infrastructure within which an application can be developed and distributed with minimal modification, whereupon the application becomes resilient to certain failure modes. If a node, or the connection to it, fails during execution of the application, the objects are re-instantiated from distributed replicas, without their reference holders being aware of the failure. Furthermore, we believe that this can be achieved within a spectrum of application programmer intervention, ranging from minimal to totally prescriptive, as desired. The same mechanisms encompass an orthogonally persistent programming model. We outline our approach to implementing this vision, and describe current progress. Comment: Submitted to EuroSys 200
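    The transparent re-instantiation from replicas described above can be illustrated with a small hypothetical sketch: a reference that resolves its target from any live replica, hiding node failure from the caller. The Node and ResilientRef classes are invented for illustration and are not the framework's API.

    # Hypothetical sketch of transparent re-instantiation from distributed replicas.
    import pickle

    class Node:
        """A toy storage node holding pickled object state keyed by object id."""
        def __init__(self):
            self.alive = True
            self.blobs = {}

        def put(self, oid, blob):
            self.blobs[oid] = blob

        def get(self, oid):
            if not self.alive:
                raise ConnectionError("node unreachable")
            return self.blobs[oid]

    class ResilientRef:
        """A reference that fetches its target from any live replica on demand."""
        def __init__(self, oid, nodes):
            self.oid, self.nodes = oid, nodes

        def resolve(self):
            for node in self.nodes:
                try:
                    return pickle.loads(node.get(self.oid))
                except (ConnectionError, KeyError):
                    continue  # try the next replica; the failure is hidden from the caller
            raise RuntimeError("no live replica holds the object")

    # Persist an object to several nodes, then lose one node and resolve anyway.
    nodes = [Node(), Node(), Node()]
    state = {"balance": 42}
    for n in nodes:
        n.put("acct-1", pickle.dumps(state))

    nodes[0].alive = False          # simulate a node (or connection) failure
    ref = ResilientRef("acct-1", nodes)
    print(ref.resolve())            # {'balance': 42}, recovered from a surviving replica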

    COM-313 (101-103-109): Technical Writing


    Shari’a Arbitration Courts and Constitutional Democracy: A 21st Century Dilemma

    This project examines the impact of the growing Islamic public in Western democracies, namely in the context of third-party arbitration courts based on Shari'a law. The project examines the controversies surrounding the introduction of Shari'a law into Western legal systems through the non-territorial federalism of arbitration courts.