
    PRIORITIZED TASK SCHEDULING IN FOG COMPUTING

    Cloud computing is an environment in which virtual resources are shared among many users over a network. A user of Cloud services is billed according to the pay-per-use model associated with this environment, so efficient resource allocation is of great importance in keeping that bill to a minimum. To handle the many requests clients send to the Cloud, tasks need to be processed according to the SLAs defined by the clients. The daily increase in the usage of Cloud services has introduced delays in the transmission of requests, and these delays can cause clients to wait for responses beyond the assigned deadline. Fog Computing helps overcome these concerns because it is physically placed closer to the clients: the Fog layer sits between the client and the Cloud layer and greatly reduces the delay in transmitting requests, processing them, and returning responses to the client. This paper discusses an algorithm that schedules tasks in the Fog layer by calculating the priority of each task. Tasks with higher priority are processed first so that their deadlines are met, which makes the algorithm practical and efficient.
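
    As a minimal sketch of the kind of deadline-aware priority scheduling the paper describes, the Python snippet below orders tasks so that the one with the least slack (time to deadline minus estimated processing time) is served first. The Task structure, the slack-based priority rule, and the sample values are illustrative assumptions, not the paper's actual algorithm.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Task:
    priority: float                         # lower value = served first
    name: str = field(compare=False)
    deadline: float = field(compare=False)  # seconds until the deadline
    length: float = field(compare=False)    # estimated processing time in seconds

def slack_priority(deadline: float, length: float) -> float:
    # Tasks whose deadline leaves little room after processing are the most urgent.
    return deadline - length

queue = []
for name, deadline, length in [("t1", 5.0, 2.0), ("t2", 3.0, 2.5), ("t3", 10.0, 1.0)]:
    heapq.heappush(queue, Task(slack_priority(deadline, length), name, deadline, length))

while queue:
    task = heapq.heappop(queue)
    print(f"processing {task.name} (slack {task.priority:.1f}s)")
```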

    Career Engine [Jobesy]

    The Career Engine software is focused on job posting and registration. The application can also be used to look for employment openings in a particular location, with the name of the hiring firm, timings, and other details presented online. It assists users in applying for a better job than the one they currently have. Job seekers may select and register for positions that interest them, and the information supplied by the recruiter offering the job is shown on the job-details page. We chose the MERN stack to develop our JobEsy career engine website, comprising Express.js, React.js, and Node.js, with MySQL for the database. The administrator may use Career Engine to log into the system, authorize a vacancy, and post it on the web; the administrator also manages job seekers and recruiters. Job seekers may log in, search for open opportunities, and apply for work by submitting their materials, such as a resume. The career engine web application is responsive, adjusting web pages to fit the size of the device's window. On this website, job seekers, recruiters, and administrators each have their own logins.

    Strategies to control lipase activity and selectivity

    Immobilisation of lipases on solid supports is important for the industrial use of these enzymes. This study investigated how support hydrophobicity and surfactants can affect the activity/selectivity of immobilised lipases. It was demonstrated that a combination of surfactant and tuned support hydrophobicity affords control of lipase activity and substrate selectivity.

    Identifying Cancer Subtypes Using Unsupervised Deep Learning

    Glioblastoma multiforme (GBM) is the most fatal malignant type of brain tumor, with a very poor prognosis and a median survival of around one year. Numerous studies have reported tumor subtypes based on different characteristics of individual patients, which may play important roles in determining survival rates in GBM. In this study, we present a pathway-based clustering method using a Restricted Boltzmann Machine (RBM), called R-PathCluster, for identifying unknown subtypes from pathway markers of gene expression. To assess the performance of R-PathCluster, we conducted experiments with several clustering methods, such as k-means, hierarchical clustering, and RBM models with different input data. R-PathCluster showed the best performance in clustering long-term and short-term survival groups, although its clustering score was not the highest among the compared methods. R-PathCluster also provides a way to interpret the model in a biological sense, since it takes pathway markers that represent the biological processes of pathways. We discuss how our findings from R-PathCluster are supported by the biological literature. Keywords: Glioblastoma multiforme, tumor subtypes, clustering, Restricted Boltzmann Machine.
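
    The abstract does not give R-PathCluster's implementation; the sketch below only illustrates the underlying idea of clustering samples on an RBM's learned hidden representation, using scikit-learn's BernoulliRBM followed by k-means. The matrix of pathway markers is replaced here by randomly generated stand-in data, and the component and cluster counts are arbitrary.

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Stand-in for a pathway-marker matrix: samples x pathways, values scaled to [0, 1].
X = rng.random((200, 50))

# Learn a low-dimensional latent representation with an RBM.
rbm = BernoulliRBM(n_components=10, learning_rate=0.05, n_iter=100, random_state=0)
H = rbm.fit_transform(X)          # hidden-unit activation probabilities per sample

# Cluster patients in the learned representation (e.g., long- vs short-term survival).
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(H)
print(np.bincount(labels))        # cluster sizes
```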

    A Comprehensive Approach to Automated Sign Language Translation

    Many sign languages are bonafide natural languages with grammatical rules and lexicons, and hence can benefit from neural machine translation methods. As significant advances are being made in natural language processing (specifically neural machine translation) and in computer vision, specifically image and video captioning, related methods can be further researched to boost automated sign language understanding. This is an especially challenging AI research area due to the involvement of a continuous visual-spatial modality, where meaning is often derived from context. To this end, this thesis is focused on the study and development of new computational methods and training mechanisms to enhance sign language translation in two directions, signs to texts and texts to signs. This work introduces a new, realistic phrase-level American Sign Language dataset (ASL/ASLing), and investigates the role of different types of visual features (CNN embeddings, human body keypoints, and optical flow vectors) in translating ASL to spoken American English. Additionally, the research considers the role of multiple features for improved translation, via various fusion architectures. As an added benefit, with continuous sign language being challenging to segment, this work also explores the use of overlapping scaled visual segments across the video for simultaneously segmenting and translating signs. Finally, a quintessential interpreting agent not only understands sign language and translates it to text, but also understands the text and translates it to signs. Hence, to facilitate two-way sign language communication, i.e., visual sign to spoken language translation and spoken to visual sign language translation, a dual neural machine translation model, SignNet, is presented. Various training paradigms are investigated for improved translation using SignNet. By exploiting the notion of similarity (and dissimilarity) of visual signs, a metric embedding learning process proved most useful in training SignNet. The resulting models outperformed their state-of-the-art counterparts, showing noteworthy improvements in BLEU-1 through BLEU-4 scores.
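
    SignNet's architecture and training details are not reproducible from this abstract; the snippet below only sketches the metric-embedding idea it mentions, using a PyTorch triplet-margin loss that pulls embeddings of visually similar signs together and pushes dissimilar ones apart. The feature size, the small embedding network, and the random tensors are placeholders, not the thesis's actual model.

```python
import torch
import torch.nn as nn

# Placeholder embedding network over per-sign visual features (e.g., pooled CNN vectors).
embed = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 128))
loss_fn = nn.TripletMarginLoss(margin=1.0)

anchor   = torch.randn(32, 512)   # clips of a sign
positive = torch.randn(32, 512)   # other clips of the same sign
negative = torch.randn(32, 512)   # clips of different signs

# Pull anchor and positive embeddings together, push negatives at least a margin away.
loss = loss_fn(embed(anchor), embed(positive), embed(negative))
loss.backward()
print(float(loss))
```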

    Using Predictive Maintenance techniques and Business Intelligence to develop smarter factory systems for the digital age

    The aim of the project is to increase the productivity of Danfoss’ assembly lines across the manufacturing facility at Ames, Iowa by continuously monitoring their performance with the help of a real-time tracking tool. The efficiency of the employees at each of the stations on the assembly lines, the time taken to procure products or parts, the test time, and the paint time all impact the performance and production rate within the facility. In addition, keeping a few attributes constant, the time taken for machines to pass a particular station within an assembly line indicates the health of the assembly line and requires continuous monitoring. Minute errors on the assembly lines could stall production for hours, resulting in an immense loss for large manufacturing companies like Danfoss. Therefore, the project implements an algorithm that monitors both the number of workers per shift per day on the assembly lines and the status of the machines as they pass through each station. In addition, a user-interactive dashboard allows the workers to monitor their progress against the expected progress for the day. The live dashboard is scalable to other Danfoss assembly lines as a daily monitoring system.
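
    The Danfoss tooling itself is not described in implementation terms here; the following sketch only illustrates the monitoring idea in the abstract: compare each station's recent cycle times against an expected baseline and flag stations that drift. The station names, cycle times, and 5% tolerance are invented for the example.

```python
from statistics import mean

# Seconds each machine took to pass a station; stations and values are made up.
cycle_times = {
    "station_1": [41.0, 42.5, 40.8, 43.1, 55.2],
    "station_2": [30.2, 29.8, 31.1, 30.5, 30.9],
}
expected = {"station_1": 42.0, "station_2": 30.5}   # baseline cycle time per station
TOLERANCE = 1.05                                     # allow 5% drift before flagging

for station, times in cycle_times.items():
    avg = mean(times)
    if avg > TOLERANCE * expected[station]:
        print(f"{station}: check required (avg {avg:.1f}s vs expected {expected[station]:.1f}s)")
    else:
        print(f"{station}: on track (avg {avg:.1f}s)")
```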

    Re-estimation of Lexical Parameters for Treebank PCFGs

    We present procedures that pool lexical information estimated from unlabeled data via the Inside-Outside algorithm with lexical information from a treebank PCFG. The procedures produce substantial improvements (up to 31.6% error reduction) on the task of determining subcategorization frames of novel verbs, relative to a smoothed Penn Treebank-trained PCFG. Even with relatively small quantities of unlabeled training data, the re-estimated models show promising improvements in labeled bracketing f-scores on Wall Street Journal parsing, and substantial benefit in acquiring the subcategorization preferences of low-frequency verbs.
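
    The paper's exact pooling procedure is not given in this abstract; the snippet below only illustrates the general pattern of combining treebank counts with expected counts from unlabeled data (such as the Inside-Outside algorithm produces) to re-estimate a verb's subcategorization-frame distribution. The verb, the counts, and the pooling weight are hypothetical.

```python
# Treebank counts of subcategorization frames for the verb "donate" (hypothetical).
treebank_counts = {"NP": 3.0, "NP-PP": 1.0, "S": 0.0}
# Expected counts for the same frames, as the Inside-Outside algorithm would
# accumulate from unlabeled text (hypothetical values).
inside_outside_counts = {"NP": 12.4, "NP-PP": 30.7, "S": 1.9}

LAMBDA = 0.5   # pooling weight between the two sources (an assumption)

pooled = {f: LAMBDA * treebank_counts[f] + (1 - LAMBDA) * inside_outside_counts[f]
          for f in treebank_counts}
total = sum(pooled.values())
probs = {f: c / total for f, c in pooled.items()}

for frame, p in sorted(probs.items(), key=lambda kv: -kv[1]):
    print(f"P({frame} | donate) = {p:.3f}")
```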

    Enabling the Integration of Sustainable Design Methodological Frameworks and Computational Life Cycle Assessment Tools into Product Development Practice

    Environmental sustainability has gained critical importance in product development (PD) due to increased regulation, market competition, and consumer awareness, leading companies to set ambitious climate targets. To meet these goals, PD practitioners (engineers and designers) are often left to adapt their practices to reduce the impacts of the products they manufacture. A literature review and interviews with practitioners showed that they highly value using quantitative life cycle assessment (LCA) results to inform decision making. LCA is a technique for measuring environmental impacts across the various stages of a product's life cycle. Existing LCA software tools, however, are designed for dedicated experts to use at the end of PD with detailed product information. This creates the “ecodesign paradox”: a tension between the opportunity for change in the early stages of PD and the availability of data in later stages to make reliable decisions. Further, my research identified that novice users of LCA face additional barriers, including cumbersome user interfaces, unfamiliar terminology, and complicated information visualization. To address these challenges, I developed a tool called EcoSketch for use during early-stage PD by novice users. Practitioners, however, also struggle with translating environmental impact information into actionable design decisions. Hence, I co-created methodological frameworks of sustainable design strategies with industry partners: Synapse Product Development Inc. and Stanley Black and Decker Inc. Despite contextual differences, a key commonality was that practitioners at both firms sought “structured” and “data-driven” processes for sustainable design. Through multiple extended internships, I also identified important drivers of and barriers to sustainable design integration. Overall, my research demonstrates that co-creation improves receptivity and long-term adoption, and produces tangible improvements to sustainable outcomes in practice. In summary, my research pursues two key pathways to enable sustainable design integration: (1) developing human-centered LCA tools designed for decision making during the early stages of PD, and (2) creating methodological frameworks to support the application of appropriate sustainable design strategies in PD practice. This thesis elaborates on my proposed coupling of robust frameworks with human-centered LCA tools, which I argue together comprise a transformative solution for industry professionals to effectively integrate sustainability considerations into their product development practices.
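
    EcoSketch is not described at the implementation level here; the snippet below only shows the basic LCA arithmetic the thesis refers to, summing quantity times impact factor over life-cycle stages to obtain a per-stage and total footprint. All stages, quantities, and impact factors are invented for illustration.

```python
# Hypothetical impact factors (kg CO2-eq per unit) and quantities for one product.
life_cycle = {
    "materials":     {"quantity": 1.2, "factor": 6.0},    # kg of aluminium
    "manufacturing": {"quantity": 3.5, "factor": 0.4},    # kWh of electricity
    "transport":     {"quantity": 800, "factor": 0.0001}, # kg-km by truck
    "use":           {"quantity": 50,  "factor": 0.4},    # kWh over the product's life
    "end_of_life":   {"quantity": 1.2, "factor": 0.3},    # kg sent to recycling
}

impacts = {stage: v["quantity"] * v["factor"] for stage, v in life_cycle.items()}
total = sum(impacts.values())

for stage, value in impacts.items():
    print(f"{stage:<14} {value:6.2f} kg CO2-eq ({value / total:5.1%})")
print(f"{'total':<14} {total:6.2f} kg CO2-eq")
```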

    Deconstructing Woronin body formation

    Master's thesis (Master of Science)