48 research outputs found

    The implementation of a warehouse management system at small and medium-sized enterprises

    A combination of research methodology approaches is employed in this paper: a theoretical framework that elaborates the problem identification and the existing supply chain process for introducing an automated Warehouse Management System, followed by a detailed literature review of the complementary supply chain software and hardware needed to ensure the success of the new architecture within the warehouse. The work identifies the critical success factors as well as the key challenges on the path towards a smart Warehouse Management System. A practical application at a Tunisian medium-sized textile company illustrates the logistics dynamics after integrating the new management process.

    Optimization of extraction process of Typha leaf fibres

    The influence of temperature, duration and soda (NaOH) concentration on the extraction yield, linear density, diameter, tenacity and lignin ratio of Typha leaf fibres has been studied. A factorial design of experiments was used to identify the optimum operating conditions, and equations relating the dependent variables to the operational variables of the extraction process were established. The optimum extraction conditions were determined statistically using response surfaces and a desirability function. The morphology and chemical constituents of the obtained fibres were determined. Fibres extracted from Typha leaves under the optimum conditions have a lignin content of about 14%, similar to jute; an alpha-cellulose content of about 67%, similar to pineapple and jute fibres; an extractives content of about 1%; a starch content of about 2%; and an ash content of about 1%. Finally, the characteristics of the optimum Typha fibre are compared with those of other vegetable fibres, showing a larger diameter and lower mechanical properties.
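
    To make the optimization step concrete, here is a minimal sketch (ours, not the authors' code) of the response-surface approach: it fits a full quadratic model of extraction yield over the three coded factors (temperature, duration, NaOH concentration) to factorial-design runs and then maximizes a simple larger-is-better desirability function. All data values, scaling bounds, and variable names are hypothetical placeholders.

    import numpy as np
    from scipy.optimize import minimize

    # Hypothetical coded factor levels (-1, 0, +1) for temperature, duration,
    # NaOH concentration, and the measured extraction yields (%) of each run.
    X = np.array([[-1, -1, -1], [1, -1, -1], [-1, 1, -1], [1, 1, -1],
                  [-1, -1, 1], [1, -1, 1], [-1, 1, 1], [1, 1, 1],
                  [0, 0, 0], [0, 0, 0]])
    y = np.array([18.0, 22.5, 20.1, 25.3, 21.2, 26.0, 23.4, 27.8, 24.5, 24.1])

    def quad_terms(x):
        # Full quadratic model: intercept, linear, square and interaction terms.
        t, d, c = x
        return np.array([1, t, d, c, t*t, d*d, c*c, t*d, t*c, d*c])

    A = np.vstack([quad_terms(row) for row in X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)   # least-squares model fit

    def desirability(x):
        # Larger-is-better desirability, scaled between assumed yield bounds.
        pred = quad_terms(x) @ beta
        return float(np.clip((pred - 15.0) / (30.0 - 15.0), 0.0, 1.0))

    # Maximize desirability inside the experimental region [-1, 1]^3.
    res = minimize(lambda x: -desirability(x), x0=np.zeros(3), bounds=[(-1, 1)] * 3)
    print("optimum (coded units):", res.x, "desirability:", desirability(res.x))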

    Keeping Authorities "Honest or Bust" with Decentralized Witness Cosigning

    The secret keys of critical network authorities - such as time, name, certificate, and software update services - represent high-value targets for hackers, criminals, and spy agencies wishing to use these keys secretly to compromise other hosts. To protect authorities and their clients proactively from undetected exploits and misuse, we introduce CoSi, a scalable witness cosigning protocol ensuring that every authoritative statement is validated and publicly logged by a diverse group of witnesses before any client will accept it. A statement S collectively signed by W witnesses assures clients that S has been seen, and not immediately found erroneous, by those W observers. Even if S is compromised in a fashion not readily detectable by the witnesses, CoSi still guarantees S's exposure to public scrutiny, forcing secrecy-minded attackers to risk that the compromise will soon be detected by one of the W witnesses. Because clients can verify collective signatures efficiently without communication, CoSi protects clients' privacy, and offers the first transparency mechanism effective against persistent man-in-the-middle attackers who control a victim's Internet access, the authority's secret key, and several witnesses' secret keys. CoSi builds on existing cryptographic multisignature methods, scaling them to support thousands of witnesses via signature aggregation over efficient communication trees. A working prototype demonstrates CoSi in the context of timestamping and logging authorities, enabling groups of over 8,000 distributed witnesses to cosign authoritative statements in under two seconds.
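
    As an illustration of the aggregation idea, the toy sketch below (ours, not the CoSi implementation) shows how Schnorr-style commitments and responses from many witnesses combine into a single signature that verifies against the aggregate public key. It uses modular arithmetic in a toy group for readability; the real protocol uses elliptic curves, a tree-structured commitment round, and defenses against rogue-key attacks, all omitted here.

    import hashlib, secrets

    p = (1 << 255) - 19   # toy prime modulus (illustrative, not a safe group choice)
    g = 5                 # toy generator

    def H(R, msg):
        # Shared challenge derived from the aggregate commitment and the message.
        return int.from_bytes(hashlib.sha256(f"{R}|{msg}".encode()).digest(), "big") % (p - 1)

    class Witness:
        def __init__(self):
            self.x = secrets.randbelow(p - 1)        # secret key
            self.X = pow(g, self.x, p)               # public key X_i = g^x_i

        def commit(self):
            self.r = secrets.randbelow(p - 1)        # per-signature nonce
            return pow(g, self.r, p)                 # commitment R_i = g^r_i

        def respond(self, c):
            return (self.r + c * self.x) % (p - 1)   # response s_i = r_i + c*x_i

    def cosign(witnesses, msg):
        R = 1
        for Ri in (w.commit() for w in witnesses):
            R = (R * Ri) % p                         # aggregate commitment
        c = H(R, msg)
        s = sum(w.respond(c) for w in witnesses) % (p - 1)
        return R, s                                  # one constant-size signature

    def verify(pubkeys, msg, sig):
        R, s = sig
        X = 1
        for Xi in pubkeys:
            X = (X * Xi) % p                         # aggregate public key
        return pow(g, s, p) == (R * pow(X, H(R, msg), p)) % p   # g^s == R * X^c

    ws = [Witness() for _ in range(8)]
    sig = cosign(ws, "authoritative statement S")
    print(verify([w.X for w in ws], "authoritative statement S", sig))  # True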

    Glomus tumor of the leg: a case report

    Glomus tumors are uncommon benign tumors developing from the neuro-myo-arterial glomus body. They are typically located in the fingers. Extra-digital involvement is unusual and makes diagnosis difficult. Only a few cases have been reported in the literature. We report an exceptional case of a glomus tumor of the lower leg in a 65-year-old male. The diagnosis was clinically suspected and confirmed by a biopsy. Surgical excision gave immediate pain relief. The aim of this report is to make the surgical community more aware of this entity based on the analysis of our own experience and a review of the literature.

    Accountable Safety for Rollups

    Accountability, the ability to provably identify protocol violators, gained prominence as the main economic argument for the security of proof-of-stake (PoS) protocols. Rollups, the most popular scaling solution for blockchains, typically use PoS protocols as their parent chain. We define accountability for rollups, and present an attack that shows the absence of accountability on existing designs. We provide an accountable rollup design and prove its security, both for traditional 'enshrined' rollups and for sovereign rollups, an emergent alternative built on lazy blockchains, tasked only with ordering and availability of the rollup data.
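
    To illustrate what "provably identify protocol violators" means at its simplest, the sketch below (ours, not the paper's construction) detects equivocation: a validator who signs two conflicting rollup state roots for the same height is exposed by the pair of signed votes itself. Signature verification is abstracted away; a real system would check e.g. BLS or ECDSA signatures before accepting such a proof.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Vote:
        validator: str
        height: int
        state_root: str   # claimed rollup state root at this height
        signature: str    # placeholder for a verifiable signature

    def find_equivocators(votes):
        # Return proofs (pairs of conflicting signed votes) for validators who
        # signed two different state roots at the same height.
        seen, proofs = {}, []
        for v in votes:
            key = (v.validator, v.height)
            if key in seen and seen[key].state_root != v.state_root:
                proofs.append((seen[key], v))
            else:
                seen.setdefault(key, v)
        return proofs

    votes = [Vote("val-1", 7, "0xaaa", "sig1"), Vote("val-2", 7, "0xaaa", "sig2"),
             Vote("val-1", 7, "0xbbb", "sig3")]   # val-1 equivocates at height 7
    for a, b in find_equivocators(votes):
        print(f"{a.validator} signed both {a.state_root} and {b.state_root} at height {a.height}")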

    Managing Identities Using Blockchains and CoSi

    We combine collective signing and blockchains to create a secure, easy-to-use, decentralized SSH-key management system.

    CHAINIAC: Proactive Software-Update Transparency via Collectively Signed Skipchains and Verified Builds

    Software-update mechanisms are critical to the security of modern systems, but their typically centralized design presents a lucrative and frequently attacked target. In this work, we propose CHAINIAC, a decentralized software-update framework that eliminates single points of failure, enforces transparency, and provides efficient verifiability of integrity and authenticity for software-release processes. Independent witness servers collectively verify conformance of software updates to release policies, build verifiers validate the source-to-binary correspondence, and a tamper-proof release log stores collectively signed updates, thus ensuring that no release is accepted by clients before being widely disclosed and validated. The release log embodies a skipchain, a novel data structure, enabling arbitrarily out-of-date clients to efficiently validate updates and signing keys. Evaluation of our CHAINIAC prototype on reproducible Debian packages shows that the automated update process takes an average of 5 minutes per release for individual packages, and only 20 seconds for the aggregate timeline. We further evaluate the framework using real-world data from the PyPI package repository and show that it offers clients security comparable to verifying every single update themselves while consuming only one-fifth of the bandwidth and incurring minimal computational overhead.
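
    The sketch below (ours, not the CHAINIAC implementation) illustrates the skipchain traversal idea: each block carries forward links jumping 1, 2, 4, ... blocks ahead, each link standing in for a collective signature by that block's witnesses, so a client that is arbitrarily out of date can reach and validate the latest release in a logarithmic number of hops.

    class Block:
        def __init__(self, index):
            self.index = index
            self.forward = {}   # level -> index of the block 2^level ahead

    def build_skipchain(n, max_level=4):
        blocks = [Block(i) for i in range(n)]
        for b in blocks:
            for level in range(max_level + 1):
                target = b.index + (1 << level)
                if target < n:
                    b.forward[level] = target   # would be collectively signed
        return blocks

    def catch_up(blocks, start, latest):
        # Follow the highest forward link that does not overshoot `latest`,
        # checking one (simulated) collective signature per hop.
        path, i = [start], start
        while i < latest:
            usable = [lvl for lvl, tgt in blocks[i].forward.items() if tgt <= latest]
            i = blocks[i].forward[max(usable)]
            path.append(i)
        return path

    chain = build_skipchain(100)
    print(catch_up(chain, 3, 99))   # [3, 19, 35, 51, 67, 83, 99]: six hops, not 96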

    Scalable Bias-Resistant Distributed Randomness

    Bias-resistant public randomness is a critical component in many (distributed) protocols. Existing solutions do not scale to hundreds or thousands of participants, as is needed in many decentralized systems. We propose two large-scale distributed protocols, RandHound and RandHerd, which provide publicly-verifiable, unpredictable, and unbiasable randomness against Byzantine adversaries. RandHound relies on an untrusted client to divide a set of randomness servers into groups for scalability, and it depends on the pigeonhole principle to ensure output integrity, even for non-random, adversarial group choices. RandHerd implements an efficient, decentralized randomness beacon. RandHerd is structurally similar to a BFT protocol, but uses RandHound in a one-time setup to arrange participants into verifiably unbiased random secret-sharing groups, which then repeatedly produce random output at predefined intervals. Our prototype demonstrates that RandHound and RandHerd achieve good performance across hundreds of participants while retaining a low failure probability by properly selecting protocol parameters, such as the group size and secret-sharing threshold. For example, when sharding 512 nodes into groups of 32, our experiments show that RandHound can produce fresh random output after 240 seconds. RandHerd, after a setup phase of 260 seconds, is able to generate fresh random output in intervals of approximately 6 seconds. For this configuration, both protocols operate at a failure probability of at most 0.08% against a Byzantine adversary.
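
    As a sketch of the secret-sharing building block (ours, not the RandHound protocol itself, which additionally uses publicly verifiable secret sharing and the grouping logic described above), each server below Shamir-shares a random value, and the group output combines every recovered secret, so no single server can bias or withhold the result once t+1 honest servers cooperate. The field size and parameters are illustrative.

    import secrets

    P = 2**127 - 1   # toy prime field (a Mersenne prime)

    def share(secret, t, n):
        # Split `secret` into n Shamir shares; any t+1 of them reconstruct it.
        coeffs = [secret] + [secrets.randbelow(P) for _ in range(t)]
        return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
                for x in range(1, n + 1)]

    def recover(shares):
        # Lagrange interpolation at x = 0 recovers the shared secret.
        total = 0
        for xi, yi in shares:
            num = den = 1
            for xj, _ in shares:
                if xj != xi:
                    num = num * (-xj) % P
                    den = den * (xi - xj) % P
            total = (total + yi * num * pow(den, P - 2, P)) % P
        return total

    n, t = 5, 2
    server_secrets = [secrets.randbelow(P) for _ in range(n)]
    all_shares = [share(s, t, n) for s in server_secrets]
    recovered = [recover(sh[: t + 1]) for sh in all_shares]   # t+1 shares each
    assert recovered == server_secrets
    group_output = sum(recovered) % P   # combined, unbiasable group output
    print(hex(group_output))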