1,584 research outputs found

    Cost Entropy and Expert System Approach to Modeling Cost Smoothing System in Reinforced Concrete Office Building Projects Procurement

    The main aim of this research work is to develop an expert system approach to a cost smoothing model for reinforced concrete office building project procurement. An econometric model incorporating an exigency escalator and an inflation buffer, with an entropy threshold for a typical reinforced concrete office building, useful at the tendering and construction stages of building projects, was developed in this study. As-built and bill-of-quantities values of twenty (20) building projects initiated and completed between 2008 and 2009 were selected at random. Elemental dichotomies, in the context of early and late constructible elements with a speculated prediction period, were used, taking into consideration the present value of cost. These attributes would enable a builder or contractor to load the cost implication of unforeseen circumstances, even on occasions of deferred cost reimbursement, with the aid of the average entropy index developed for each project element. The model was further validated with new samples and found to have high eigenvalue and contingency coefficient values. The model could help in cost smoothing at different stages of reinforced concrete office buildings, which could further aid cost overrun prevention. Keywords: Expert system, Smoothing, Entropy, Dichotomy
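    The abstract does not give the model's formulas; as a rough illustration, an entropy index over elemental cost shares and a present-value adjustment of the kind described could be sketched as follows (the normalized-Shannon-entropy formulation and the function names are our assumptions, not the paper's):

```python
import math

def entropy_index(element_costs):
    """Shannon entropy of elemental cost shares, normalized to [0, 1].

    A value near 1 means cost is spread evenly across the building
    elements; a value near 0 means a few elements dominate total cost.
    (Illustrative formulation; the paper's exact index may differ.)
    """
    total = sum(element_costs)
    shares = [c / total for c in element_costs if c > 0]
    if len(shares) <= 1:
        return 0.0
    h = -sum(p * math.log(p) for p in shares)
    return h / math.log(len(shares))

def present_value(cost, discount_rate, years):
    """Discount a future cost back to present value."""
    return cost / (1 + discount_rate) ** years
```

For example, four equally costly elements give an index of 1.0, while a single dominant element gives an index near 0.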

    Denial of Service: Techniques of Attacks and Mitigation

    Although cloud computing technology has many advantages, cloud security threats and attacks at various levels are also a major concern for all organizations. Systems connected to the internet in a cloud network can be affected by different types of attacks, and one of the most prominent is the DoS (denial of service) attack. The DoS attack has been considered one of the most important security threats in cloud computing systems at various levels, and it has proven difficult to alleviate. The attack is perpetrated in many ways, such as consuming computational resources, disrupting information, and obstructing the communication media. Once the attack succeeds in consuming resources on the victim computers, the attacker can control and direct them to attack as a group; this means a DoS attack can also give the attacker administrative control of the systems. A DoS attack can be launched by flooding a system with requests or by crashing its services. In this paper, we present the different types of DoS attacks, the techniques for launching them at various levels, and the techniques applied to mitigate their harmful effects.
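    The mitigation techniques themselves are the subject of the paper; as one illustration of a defense against resource-consumption floods, a token-bucket rate limiter caps how quickly any single client can consume service capacity (the class below is a generic sketch, not taken from the paper):

```python
import time

class TokenBucket:
    """Token-bucket rate limiter: a common building block for absorbing
    request floods at the edge of a service."""

    def __init__(self, rate, capacity, now=None):
        self.rate = rate          # tokens added per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic() if now is None else now

    def allow(self, now=None):
        """Return True if one request may proceed, False if it should
        be dropped or queued."""
        now = time.monotonic() if now is None else now
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

A bucket with `rate=1, capacity=2` admits a burst of two requests, then one request per second thereafter; excess traffic is shed before it consumes computational resources.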

    Theoretical Engineering and Satellite Comlink of a PTVD-SHAM System

    This paper focuses on the design of a super helical memory system, covering 'Engineering, Architectural and Satellite Communications' as a theoretical approach to an invention-model to 'store time-data'. The current release entails three concepts: 1) an in-depth theoretical physics engineering of the chip, 2) an architectural concept based on VLSI methods, and 3) the time-data versus data-time algorithm. The 'Parallel Time Varying & Data Super-helical Access Memory' (PTVD-SHAM) possesses a waterfall effect in its architecture, dealing with the process of switching voltage output into diverse logic and quantum states, described as 'Boolean logic' and 'image-logic', respectively. Quantum dot computational methods are explained by utilizing coiled carbon nanotubes (CCNTs) and CNT field effect transistors (CNFETs) in the chip's architecture. Quantum confinement, a categorized quantum well substrate, and B-field flux involvement are discussed in theory. Multi-access of coherent sequences of 'qubit addressing' in any magnitude is gained as pre-defined; here, e.g., the 'big O notation' is asymptotically confined into singularity while possessing a magnitude of 'infinity' for the orientation of array displacement. A Gaussian curvature k (k<0) is debated with the aim of specifying the 2D electron gas characteristics and a data storage system defining short and long time cycles for different CCNT diameters, where the space-time continuum is folded by chance for the particle. Precise pre/post data timing, e.g., for seismic waves before an earthquake mantle-reach event occurs, including time-varying self-clocking devices in diverse geographic locations for radar systems, is illustrated in the subsections of the paper. The theoretical fabrication process and electromigration between the chip's components are discussed as well.
    Comment: 50 pages, 10 figures (3 multi-figures), 2 tables. v.1: 1 postulate entailing hypothetical ideas, design and model on future technological advances of PTVD-SHAM. The results of the previous paper [arXiv:0707.1151v6] are extended in order to prove some introductory conjectures in theoretical engineering advanced to architectural analysis.

    Analysis and Optimization for Pipelined Asynchronous Systems

    Most microelectronic chips used today--in systems ranging from cell phones to desktop computers to supercomputers--operate in basically the same way: they synchronize the operation of their millions of internal components using a clock that is distributed globally. This global clocking is becoming a critical design challenge in the quest for building chips that offer increasingly greater functionality, higher speed, and better energy efficiency. As an alternative, asynchronous or clockless design obviates the need for global synchronization; instead, components operate concurrently and synchronize locally only when necessary. This dissertation focuses on one class of asynchronous circuits: application-specific stream processing systems (i.e., those that take in a stream of data items and produce a stream of processed results). High-speed stream processors are a natural match for many high-end applications, including 3D graphics rendering, image and video processing, digital filters and DSPs, cryptography, and networking processors. This dissertation aims to make the design, analysis, optimization, and testing of circuits in the chosen domain both fast and efficient. Although much of the groundwork has already been laid by years of past work, my work identifies and addresses four critical missing pieces: i) fast performance analysis for estimating the throughput of a fine-grained pipelined system; ii) automated and versatile design space exploration; iii) a full suite of circuit-level modules that connect together to implement a wide variety of system behaviors; and iv) testing and design-for-testability techniques that identify and target the types of errors found only in high-speed pipelined asynchronous systems. I demonstrate these techniques on a number of examples, ranging from simple applications that allow for easy comparison to hand-designed alternatives, to more complex systems, such as a JPEG encoder. I also demonstrate these techniques through the design and test of a fully asynchronous GCD demonstration chip.
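    The dissertation's own analysis techniques are not reproduced here, but the classic first-order throughput bound for a self-timed pipeline ring conveys the flavor of fine-grained pipeline performance analysis (the data-limited/hole-limited formulation below is the standard textbook bound, assumed rather than quoted from the dissertation):

```python
def ring_throughput(n_stages, k_tokens, fwd_latency, rev_latency):
    """First-order throughput bound for a self-timed pipeline ring.

    With k data tokens circulating among n stages, throughput is limited
    either by tokens moving forward (data-limited region) or by holes
    moving backward (hole-limited region), whichever is smaller.
    """
    data_limited = k_tokens / (n_stages * fwd_latency)
    hole_limited = (n_stages - k_tokens) / (n_stages * rev_latency)
    return min(data_limited, hole_limited)

def best_occupancy(n_stages, fwd_latency, rev_latency):
    """Token count that maximizes steady-state throughput."""
    return max(range(1, n_stages),
               key=lambda k: ring_throughput(n_stages, k,
                                             fwd_latency, rev_latency))
```

For a 10-stage ring with equal forward and reverse latencies, the bound peaks at half occupancy: too few tokens starve the pipeline, too many leave no holes for data to advance into.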

    Pilot Study: Evaluating the Risk of Allergen Cross-Contact in Ice Cream Scoop Shop Dipper Wells

    Food allergies are a serious and growing problem in developed countries. Allergen cross-contact at foodservice establishments is a common cause of food allergic reactions. Therefore, this study sought to determine if dipper wells used in ice cream scoop shops pose a relevant risk to food allergy sufferers. First, a matrix study was conducted to evaluate whether peanut detection by real-time PCR was inhibited by the ice cream matrix, as fat and proteins are known PCR inhibitors. Frozen ice cream, liquid ice cream mix, and water matrices were tested. Second, a controlled time trial was conducted to evaluate the efficacy of allergen removal in ice cream dipper well water. Peanut butter ice cream was added to a dipper well and water samples were collected at various rinse times. A continuous-use scenario and two dipper well basin cleaning techniques were also evaluated. Finally, a survey of ice cream scoop shop owners was conducted to collect relevant information regarding current dipper well practices and policies. Results of the matrix study showed low peanut recovery in all matrices, with recovery rates of 23.9%, 17.7%, and 6.2% in frozen ice cream, liquid ice cream mix, and water matrices, respectively. The recovery rate of plain peanut butter was 5.6%. PCR inhibitors, the physicochemical properties of ice cream, and the PCR extraction and quantification kit were all believed to be factors in the recovery rate. Based on these results, we recommend using a DNA extraction technique designed specifically for fatty food matrices for future peanut butter sample analysis, and either a matrix-calibrated or a matrix-independent PCR system for future ice cream sample analysis. Results of the controlled time trial showed that peanut removal followed an exponential decay pattern. Quantitative results showed that while it is possible for peanut levels to be above the threshold dose, it is extremely unlikely. Dipper well basin cleaning techniques were not able to remove all traces of allergens, so more robust cleaning procedures are necessary to deal with high loads of allergens. Results of the survey showed that while most ice cream scoop shop owners had a good understanding of allergen cross-contact, advisory allergen signs were not prevalent in ice cream scoop shops. We conclude that ice cream dipper wells do not pose a significant risk to food-allergic consumers, but as a precaution for a worst-case scenario, we recommend that ice cream scoop shops post allergen advisory signs and avoid using scoops from the dipper well to serve customers with a food allergy.
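    The exponential decay observed in the time trial is consistent with a well-mixed (CSTR) rinse-out model, sketched below under the assumption of an ideally mixed basin with constant water inflow (the functions and parameters are illustrative, not fitted to the study's data):

```python
import math

def residual_concentration(c0, flow_rate, well_volume, t):
    """Well-mixed rinse-out model: allergen concentration decays
    exponentially with time constant V / Q, i.e. C(t) = C0 * exp(-Q/V * t)."""
    return c0 * math.exp(-(flow_rate / well_volume) * t)

def time_to_threshold(c0, threshold, flow_rate, well_volume):
    """Rinse time needed to dilute from c0 down to a threshold dose."""
    return (well_volume / flow_rate) * math.log(c0 / threshold)
```

Under this model a 100-fold dilution takes ln(100) ≈ 4.6 residence times, which is why residual allergen falls quickly at first and then tails off, matching the decay pattern reported in the trial.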

    Evaluation of Active Queue Management (AQM) Models in Low Latency Networks

    Abstract: Low latency networks require modification of current queue management in order to avoid large queuing delays. Today, TCP's congestion control maximizes the throughput of the link, benefiting large flows. However, nodes' buffers may fill completely, producing large delays and packet drops, a situation known as the bufferbloat problem. For today's time-sensitive applications, such as VoIP, online gaming, or financial trading, these queueing times cause poor quality of service that is directly noticed by users. This work studies the different alternatives for active queue management (AQM) on node links, optimizing the latency of small flows and therefore providing better quality for low latency networks in congestion scenarios. The AQM models are simulated in a dumbbell topology with the ns-3 simulator, which shows the various latency values (measured in RTT) according to network conditions and the algorithm installed in the queue. In detail, the RED, CoDel, PIE, and FQ_CoDel algorithms are studied, plus the modification of the TCP sender's congestion control with the Alternative Backoff with ECN (ABE) algorithm. The simulations show the best queueing times for the implementation that combines FQ_CoDel with ABE, which maximizes throughput while reducing packet latency. Thus, modifying queue management with FQ_CoDel and implementing ABE at the sender solve the bufferbloat problem, offering the quality required by low latency networks.
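    As a sketch of one of the algorithms studied: once CoDel enters its dropping state, it spaces successive drops by interval/sqrt(count), so drop frequency rises gently until the standing queue drains below the target sojourn time (this is a simplified sketch of the control law only, not a full CoDel implementation):

```python
import math

TARGET = 0.005    # 5 ms target sojourn time (CoDel default)
INTERVAL = 0.100  # 100 ms sliding interval (CoDel default)

def next_drop_times(n_drops, interval=INTERVAL):
    """Relative times of the first n drops after CoDel enters its
    dropping state: each gap is interval / sqrt(count), so the gaps
    shrink and drop frequency rises as the queue stays congested."""
    times, t, count = [], 0.0, 1
    for _ in range(n_drops):
        times.append(t)
        count += 1
        t += interval / math.sqrt(count)
    return times
```

The shrinking gaps are what let CoDel respond proportionally: a briefly congested queue sees only a drop or two, while a persistently full buffer (the bufferbloat case) is drained by increasingly frequent drops.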

    Sensory and chemical/nutritional characteristics of concept foods made from underutilized sweet potato roots and greens

    Frozen desserts and a smoothie were developed from underutilized sweet potato roots and greens, respectively. Frozen desserts were formulated with mashed sweet potato, coconut oil, and dairy, almond, or soy milk. Sweet potato greens were blanched and frozen before being made into a smoothie. Increased mash in the frozen desserts resulted in better (p≤0.05) color, overall intensity of flavor, and sweet potato flavor. Descriptive and consumer panelists found no differences (p>0.05) among frozen desserts with different base milk products. The almond milk frozen dessert was lower (p≤0.05) in total solids, protein, and Brix compared to the dairy and soy milk versions. Greens blanched for 30 s showed complete peroxidase inhibition and acceptable texture. Blanching decreased the carbohydrates and soluble minerals of the greens, mainly due to leaching into the water. The results showed that consumers liked lactose-free sweet potato-based frozen desserts, and that properly blanched greens could be used in value-added products like smoothies.

    Valuation of risk and complexity attributes causing delays in Australian Transport infrastructure projects for optimal contingency Estimation

    Transportation projects have historically experienced significant delays and cost overruns from the time the decision to build is taken by the owner. This thesis addresses why these delays occur by looking at the drivers from a risk management perspective. It identifies and analyses the owner risk attributes that contribute to significant delays in transportation projects in an Australian context. After a literature review of the risks currently causing delays in transportation projects across the globe, the risk and complexity attributes related to transport projects in Australia are identified using a questionnaire survey completed by participants with relevant experience in the transport industry. The risks are ranked using the Relative Importance Index (RII), based on likelihood and impact scores. The many resulting attributes are condensed into factors based on their correlations using factor analysis, giving a big picture of the main risk and complexity factors affecting delays on transportation projects. A predictive model is then built, with the overall delay as the dependent variable and the risk attributes as the independent variables, using ordinal multivariate regression. Lastly, a working framework is proposed that allows the methodology used in the thesis to be applied to other projects to understand their risk and complexity factors. These results can help owners obtain a realistic design-to-build estimate for transport infrastructure projects by allocating suitable contingency to the risk and complexity drivers causing delays.
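    The thesis's exact computation is not given in the abstract; one common formulation of the Relative Importance Index divides the sum of respondent ratings by the maximum possible total, and could be sketched as follows (the function names and the 5-point scale are assumptions for illustration):

```python
def relative_importance_index(scores, max_score=5):
    """RII = sum of respondent ratings / (highest possible rating x
    number of respondents); yields a value in (0, 1] for ranking."""
    return sum(scores) / (max_score * len(scores))

def rank_attributes(ratings, max_score=5):
    """Rank risk attributes from highest to lowest RII.

    `ratings` maps attribute name -> list of respondent scores.
    """
    rii = {name: relative_importance_index(s, max_score)
           for name, s in ratings.items()}
    return sorted(rii.items(), key=lambda kv: kv[1], reverse=True)
```

In a survey workflow, likelihood and impact would each be rated on the scale and the resulting RII values used to order the attributes before factor analysis condenses them.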

    Shadow Honeypots

    We present Shadow Honeypots, a novel hybrid architecture that combines the best features of honeypots and anomaly detection. At a high level, we use a variety of anomaly detectors to monitor all traffic to a protected network or service. Traffic that is considered anomalous is processed by a "shadow honeypot" to determine the accuracy of the anomaly prediction. The shadow is an instance of the protected software that shares all internal state with a regular ("production") instance of the application, and is instrumented to detect potential attacks. Attacks against the shadow are caught, and any incurred state changes are discarded. Legitimate traffic that was misclassified will be validated by the shadow and will be handled correctly by the system, transparently to the end user. The outcome of processing a request by the shadow is used to filter future attack instances and could be used to update the anomaly detector. Our architecture allows system designers to fine-tune systems for performance, since false positives will be filtered by the shadow. We demonstrate the feasibility of our approach in a proof-of-concept implementation of the Shadow Honeypot architecture for the Apache web server and the Mozilla Firefox browser. We show that despite a considerable overhead in the instrumentation of the shadow honeypot (up to 20% for Apache), the overall impact on the system is diminished by the ability to minimize the rate of false positives.
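    The dispatch logic at the heart of the architecture can be sketched as follows (the function names and anomaly-score interface are illustrative, not the paper's actual Apache/Firefox instrumentation):

```python
def handle_request(request, anomaly_score, threshold,
                   shadow, production, attack_filter):
    """Shadow-honeypot dispatch sketch: suspicious traffic goes to the
    instrumented shadow copy; confirmed attacks are dropped and added
    to a filter, while misclassified legitimate traffic is still served."""
    if request in attack_filter:
        return None                        # known-bad: drop immediately
    if anomaly_score > threshold:
        result, attacked = shadow(request) # instrumented shared-state instance
        if attacked:
            attack_filter.add(request)     # filter future attack instances
            return None                    # discard incurred state changes
        return result                      # false positive: served correctly
    return production(request)             # normal fast path
```

Because only anomalous traffic pays the shadow's instrumentation cost, the detector's threshold can be tuned aggressively: false positives are corrected by the shadow rather than surfacing to the user.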