2,249 research outputs found

    Removal of chlorinated hydrocarbons from water by air stripping and solvent sublation

    Removal of trichloroethylene, monochlorobenzene and 1,3-dichlorobenzene from water by air stripping and solvent sublation into an organic phase was investigated. The sublation solvents used were paraffin oil and decyl alcohol. The rate of removal from water by solvent sublation and air stripping was highest for trichloroethylene, followed by chlorobenzene and finally 1,3-dichlorobenzene. For all three compounds, solvent sublation's greatest advantage over air stripping was in reducing emission of the compounds to the atmosphere, and removal was enhanced at higher air flow rates in both processes. For the removal of monochlorobenzene and 1,3-dichlorobenzene from water, solvent sublation showed a marked improvement over air stripping at air flow rates of 60 ml/min and 94 ml/min; it showed no significant improvement over air stripping for trichloroethylene. Solvent sublation was found to be relatively independent of the thickness of the organic solvent floated on top of the aqueous solution, and for monochlorobenzene and 1,3-dichlorobenzene it gave better results with decyl alcohol than with mineral oil. Addition of emulsions to water decreased the rate of removal of monochlorobenzene and 1,3-dichlorobenzene from the aqueous phase.
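    The removal behaviour described above is commonly modeled as first-order decay of the aqueous concentration. The sketch below is illustrative only: the rate constants are hypothetical placeholders chosen to reflect the reported ordering (trichloroethylene fastest, then chlorobenzene, then 1,3-dichlorobenzene), not values from the study.

```python
import math

def residual_fraction(k_la: float, t_min: float) -> float:
    """First-order stripping model: C(t)/C0 = exp(-k_la * t)."""
    return math.exp(-k_la * t_min)

# Hypothetical rate constants (1/min), ordered as reported in the abstract.
rates = {
    "trichloroethylene": 0.050,
    "chlorobenzene": 0.030,
    "1,3-dichlorobenzene": 0.020,
}
for compound, k in rates.items():
    print(f"{compound}: {residual_fraction(k, 60):.3f} remaining after 60 min")
```

    A higher air flow rate would enter this model as a larger effective rate constant, consistent with the enhanced removal the abstract reports at higher flow rates.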

    Sentiment Classification Bias In User Generated Content

    Interactive websites generate terabytes of data on a daily basis. This data can be used in multiple analytical applications to teach computers more about human behavior. Text classification is one such application. Freely available user-generated text data can be used to teach computers to identify the sentiments behind a user's on-screen interactions without the need for any human intervention. Sentiment analysis is an interesting problem, solving which would theoretically get a computer closer to passing the Turing test. Through this thesis, we test the ability of a classifier to accurately identify user sentiments. However, we do not focus on standard classification settings; the aim is to train the classifier in such a way that it would also be effective in identifying sentiment behind user-generated text from a completely new social media platform. To do this, we must first identify behavioral bias based on user interactions in two different social media sites as well as websites that accept user reviews. This bias must then be mitigated in order to obtain an unbiased classifier that can be used to identify user sentiments on any social media platform. For the research in this thesis, such user-generated text is obtained from the social media sites Reddit and Twitter. We also obtain product review data related to both books and wine. Various natural language processing techniques are then employed to process the data and extract similar and dissimilar trends. Vectorized user text is used to train sentiment classifiers. Finally, classification bias is identified and mitigated in order to obtain classifiers that can identify human sentiments in real time with improved accuracy and limited dependency on source information.
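    The vectorize-then-classify step the abstract describes can be sketched with a minimal bag-of-words naive Bayes classifier. This is a generic illustration, not the thesis's actual model: the toy texts stand in for the Reddit, Twitter, and book/wine review corpora, and the function names are hypothetical.

```python
import math
from collections import Counter

# Toy labeled corpus: 1 = positive sentiment, 0 = negative.
train = [
    ("great book loved it", 1),
    ("terrible wine awful taste", 0),
    ("excellent read loved the story", 1),
    ("worst purchase ever awful", 0),
]

word_counts = {0: Counter(), 1: Counter()}
class_totals = Counter()
for text, label in train:
    class_totals[label] += 1
    word_counts[label].update(text.split())

vocab = set(word_counts[0]) | set(word_counts[1])

def predict(text: str) -> int:
    """Multinomial naive Bayes with Laplace smoothing over word counts."""
    scores = {}
    for c in (0, 1):
        total = sum(word_counts[c].values())
        log_p = math.log(class_totals[c] / sum(class_totals.values()))
        for w in text.split():
            log_p += math.log((word_counts[c][w] + 1) / (total + len(vocab)))
        scores[c] = log_p
    return max(scores, key=scores.get)

print(predict("loved the wine"))  # → 1
```

    Cross-platform bias of the kind the thesis targets would show up here as word-probability tables that transfer poorly from one corpus to another; mitigation amounts to reweighting or filtering features so the learned odds are less source-specific.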

    LazySusan: A Flexible, Scalable Digital Repository Ingest System

    4th International Conference on Open Repositories. This presentation was part of the session: Conference Posters. We present LazySusan, an architecture for a digital repository ingest system designed for flexibility and performance. The elements of LazySusan are: 1) A distributed processing model - digital objects are processed and stored by multiple processes on multiple machines interacting with a central job store. 2) Scalability and flexibility - processing agents are spawned and terminated on demand to respond to changing load conditions. 3) Storage optimization scheduling - storage bandwidth is the main bottleneck in most large-scale digital repositories, and LazySusan is designed to optimize use of the storage channel. 4) Flexible workflow - LazySusan uses a Fedora workflow datastream to track processing objects. 5) Metadata management is performed by Fedora.
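    Element (1), multiple processing agents pulling digital objects from a central job store, can be sketched in miniature with a thread-backed work queue. This is a single-machine illustration of the pattern only; the names (`job_store`, `ingest`) are hypothetical, and LazySusan itself distributes agents across machines and delegates workflow tracking to Fedora.

```python
import queue
import threading

job_store = queue.Queue()   # stands in for the central job store
results = []
lock = threading.Lock()

def ingest(agent_id: int) -> None:
    """Processing agent: take jobs until a sentinel signals shutdown."""
    while True:
        obj = job_store.get()
        if obj is None:          # sentinel: terminate this agent
            job_store.task_done()
            return
        with lock:               # record the processed object
            results.append((agent_id, obj))
        job_store.task_done()

# Spawn three agents; in LazySusan this count would track load conditions.
agents = [threading.Thread(target=ingest, args=(i,)) for i in range(3)]
for a in agents:
    a.start()
for obj in ["obj:1", "obj:2", "obj:3", "obj:4"]:
    job_store.put(obj)
for _ in agents:                 # one shutdown sentinel per agent
    job_store.put(None)
job_store.join()                 # block until every job is processed
print(sorted(obj for _, obj in results))
```

    The on-demand spawning and termination in element (2) corresponds to varying the number of agents and sentinels in response to queue depth.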

    Measuring The Effectiveness Of Course Content And Learning Goals Of The Core Undergraduate Information Systems Course

    The Association to Advance Collegiate Schools of Business (AACSB), textbooks, and the IS 2002 Model Curriculum and Guidelines for Undergraduate Degree Programs in Information Systems (IS 2002) recommend standards and provide guidelines for course content and learning goals for the core undergraduate Information Systems (IS) course. However, course content and learning goals often need to be revised due to high pressure on academic institutions from a rapidly changing Information Technology (IT) market. In order to constantly refine the IS course curricula to meet the needs of industry and government, it is imperative that there be proven methods to measure the effectiveness of course content and learning goals. Analysis of such data should ultimately feed into designing the curriculum of the core undergraduate IS course. This paper focuses on the role of surveys as a tool for measuring the effectiveness of course content and learning goals for the core undergraduate IS course. First, the role of IS 2002 in setting standards for the course content and learning goals for this course is reviewed. Next, data from three surveys conducted to measure the effectiveness of course content and learning goals is analyzed. The paper then assesses the surveys' implications for refining course content and learning goals of the core undergraduate IS course. Finally, recommendations along with a framework for conducting future surveys are presented.

    EVALUATION OF BIOCHEMICAL AND HISTOCHEMICAL CHANGES FOLLOWING THE COMBINED TREATMENT OF MERCURY AND CADMIUM IN A FRESH WATER CAT FISH, CLARIAS BATRACHUS (LINN)

    Objective: The main objective of this study was to determine the combined effects of cadmium (Cd) and mercury (Hg) at sub-lethal concentrations for 32 days on histochemical localization of heavy metals and on serum biochemical parameters including serum glutamic-pyruvic transaminase (SGPT) enzyme activity and glucose, triglyceride, cholesterol and total protein concentrations in Clarias batrachus. Methods: Histochemical demonstration of Hg and Cd salts in liver and kidney was performed by the sulphide-silver method, and serum SGPT, glucose, triglyceride and cholesterol were measured using the standard protocols provided in the commercial kits purchased from Reckon Diagnostics Pvt. Ltd., India. Results: Serum SGPT, glucose, triglyceride, cholesterol and total protein levels were significantly altered in fish exposed to Cd or Hg salt alone. However, combined exposure to Cd and Hg normalized all the above-mentioned biochemical parameters. Histochemical analysis demonstrated enormous amounts of metals in the liver and kidney tissues of fish exposed to Hg or Cd alone. Mercury accumulation in C. batrachus was comparatively greater than that of cadmium in both tissues. Conclusion: While exposure to Hg or Cd alone adversely altered the biochemical parameters in the test catfish, following combined exposure to both metals the concentrations of accumulated metal decreased in both tissues of C. batrachus.

    Thermoelastic Behavior of Orientationally Disordered Ammonium Iodide

    We have investigated the second order elastic constants of orientationally disordered NH4I using an Extended Three Body Force Shell Model (ETSM) in the temperature range 250 K ≤ T ≤ 350 K. The second order elastic constants (C11, C12 and C44) obtained by us show an anomalous behaviour with the variation of temperature. The variation of the second order elastic constants with temperature is in good agreement with the measured data.

    Electron Energy Regression in the CMS High-Granularity Calorimeter Prototype

    We present a new publicly available dataset that contains simulated data of a novel calorimeter to be installed at the CERN Large Hadron Collider. This detector will have more than six million channels, with each channel capable of position, ionisation and precision time measurement. Reconstructing these events in an efficient way poses an immense challenge, which is being addressed with the latest machine learning techniques. As part of this development a large prototype with 12,000 channels was built and exposed to a beam of high-energy electrons. Using machine learning methods we have reconstructed the energy of incident electrons, which is known to some precision, from the energies of three-dimensional hits. By releasing this data publicly we hope to encourage experts in the application of machine learning to develop efficient and accurate image reconstruction of these electrons.
    Comment: 7 pages, 6 figures
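    The regression task described above, predicting the incident electron energy from the energies of many calorimeter hits, can be sketched with a toy one-parameter calibration. Everything here is illustrative: the simulated "hits", the 70% containment fraction, and the linear model are assumptions for the sketch, not properties of the CMS prototype or its dataset.

```python
import random

random.seed(0)

def simulate_event(true_energy: float) -> list[float]:
    """Split the deposited energy (assumed ~70% of incident) across hits."""
    n_hits = 50
    weights = [random.random() for _ in range(n_hits)]
    scale = 0.7 * true_energy / sum(weights)
    return [w * scale for w in weights]

# Toy beam energies (GeV) standing in for the known incident energies.
events = [(e, simulate_event(e)) for e in (20.0, 50.0, 100.0, 150.0)]

# One-parameter least-squares calibration: E_pred = k * sum(hit energies).
num = sum(sum(hits) * e for e, hits in events)
den = sum(sum(hits) ** 2 for _, hits in events)
k = num / den
print(f"calibration k = {k:.3f}")  # recovers ~1/0.7 ≈ 1.429
```

    A machine-learning regressor generalises this by learning from the full three-dimensional hit pattern rather than a single summed feature, which is what allows it to correct position- and shower-shape-dependent response.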

    Group Collaboration And Group Decision Making Information Technologies In Petroleum Industry

    The technical management of important natural resources such as oil and gas is a challenging responsibility facing oil companies. The increasing global demand for oil and gas, coupled with declining reserves, has forced the oil industry to make significant changes in its business processes. Major oil companies have exploration and production operations that span several continents, and the massive amounts of data generated at all levels in an oil company have to be stored, analyzed and disseminated. In this paper, the changes in management practices and business processes in the oil industry are traced over the past several decades. The use and application of information technology as a change agent is also explored and evaluated. In particular, this paper focuses on the role of visualization centers in the oil and gas industry in revolutionizing effective group decision making, enabling teams to be more productive, innovative, and outcome-focused.