
    Optimization of Quantity Discounts Using JIT Technique under Alternate Cost Policies

    In the traditional economic order quantity (EOQ) model, the rate of demand for goods stored in a warehouse is assumed to be fixed, whereas in real-world practice the demand rate may depend on time, price and stock level. This paper studies the allocation of order quantity under quantity discounts by revising mathematical models already studied in this area. In a multi-warehouse system such as a large department store, for example, the rate of demand is largely driven by the stock on display: an industry that maintains a large stock of goods in its warehouses is more likely to attract consumers than one holding a small stock. The same reasoning applies to single-warehouse systems in which the stock level depends on demand. Hence, a large, well-maintained stock level generally results in higher profits and larger sales. The objective is to optimize profit under price variations in the form of quantity discounts based on alternative cost functions, using the just-in-time (JIT) inventory technique and analyzing a mathematical model built on it.
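
    To make the setting concrete, below is a minimal sketch of the classical all-units quantity-discount EOQ computation that models in this area build on. It is not the paper's model: the constant demand rate, cost parameters and price breaks are all illustrative assumptions.

```python
import math

def eoq_all_units(demand, order_cost, holding_rate, price_breaks):
    """Simplified all-units quantity-discount EOQ (illustrative parameters).

    demand        -- annual demand D (units/year), assumed constant
    order_cost    -- fixed cost K per order
    holding_rate  -- annual holding cost as a fraction of unit price
    price_breaks  -- list of (min_quantity, unit_price), ascending quantity
    """
    best = None
    for min_qty, price in price_breaks:
        q = math.sqrt(2 * demand * order_cost / (holding_rate * price))
        q = max(q, min_qty)                       # enforce the discount threshold
        total = (demand * price                   # purchase cost
                 + demand * order_cost / q        # ordering cost
                 + holding_rate * price * q / 2)  # holding cost
        if best is None or total < best[1]:
            best = (q, total, price)
    return best

# Hypothetical price breaks: 10% off at 500 units, 15% off at 1000 units.
breaks = [(0, 10.00), (500, 9.00), (1000, 8.50)]
q, cost, price = eoq_all_units(demand=12000, order_cost=100,
                               holding_rate=0.2, price_breaks=breaks)
print(f"order {q:.0f} units at ${price}/unit; total annual cost ${cost:,.0f}")
```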

    Interactively solving school timetabling problems using extensions of constraint programming

    Timetabling problems have been studied frequently because of their wide range of applications. However, they are often solved manually for lack of appropriate computer tools. Although many approaches, mainly based on local search or constraint programming, have been quite successful in recent years, they are often highly dedicated to specific problems and have difficulty accounting for the dynamic and over-constrained nature of such problems. We were confronted with such an over-constrained, dynamic problem in our institution. This paper presents a timetabling system based on constraint programming that uses explanations to provide dynamic behaviour and to allow automatic relaxation of constraints. Our tool has successfully answered the needs of the current planner by providing solutions in a few minutes instead of a week of manual design. We present the techniques used, the results obtained and a discussion of the effects of automating the timetabling process.
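
    As a rough illustration of the relax-on-failure behaviour described above, the sketch below re-solves a toy timetable after dropping the lowest-priority soft constraint whenever no solution exists. It only mimics the outcome of explanation-based relaxation (the paper's explanation machinery is not reproduced), and all courses, slots and priorities are invented; it assumes the python-constraint package.

```python
from constraint import Problem  # pip install python-constraint

SLOTS = ["Mon9", "Mon10", "Tue9"]

def solve(soft_constraints):
    p = Problem()
    for course in ["Math", "Physics", "CS"]:
        p.addVariable(course, SLOTS)
    # Hard constraint: Math and Physics share a teacher, so different slots.
    p.addConstraint(lambda a, b: a != b, ("Math", "Physics"))
    for _priority, func, vars_ in soft_constraints:
        p.addConstraint(func, vars_)
    return p.getSolution()  # None if over-constrained

# Invented soft constraints, ordered by priority (higher = drop last).
soft = [
    (2, lambda a, b: a != b, ("Math", "CS")),
    (1, lambda s: s == "Mon9", ("CS",)),
    (1, lambda s: s == "Mon9", ("Math",)),  # clashes with the two above
]

solution = solve(soft)
while solution is None and soft:
    soft.sort(key=lambda c: c[0])
    dropped = soft.pop(0)  # crude stand-in for explanation-guided relaxation
    print("relaxed a priority-%d soft constraint" % dropped[0])
    solution = solve(soft)
print(solution)
```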

    Minimum Flow Time Schedule Genetic Algorithm for Mass Customization Manufacturing Using Minicells

    Minicells are small manufacturing cells dedicated to an option family and organized in a multi-stage configuration for mass customization manufacturing. Product variants, depending on the customization requirements of each customer, are routed through the minicells as necessary. For successful mass customization, customized products must be manufactured at low cost and with short turnaround time. Effective scheduling of the jobs to be processed in minicells is essential to deliver customized products quickly. In this research, a genetic algorithm based approach is developed to schedule jobs in a minicell configuration by treating it as a multi-stage flow shop. A new crossover strategy is used in the genetic algorithm to obtain a minimum flow time schedule.
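
    A minimal sketch of such a permutation genetic algorithm is shown below, using total flow time in a multi-stage permutation flow shop as the fitness. The paper's new crossover strategy is not specified here, so standard order crossover (OX) stands in for it, and the processing times are random placeholders.

```python
import random

N_JOBS, N_STAGES = 8, 3
random.seed(1)
PROC = [[random.randint(1, 9) for _ in range(N_STAGES)] for _ in range(N_JOBS)]

def total_flow_time(seq):
    """Sum of job completion times in a permutation flow shop."""
    done = [0] * N_STAGES              # completion time of previous job, per stage
    flow = 0
    for job in seq:
        t = 0
        for s in range(N_STAGES):
            t = max(t, done[s]) + PROC[job][s]
            done[s] = t
        flow += t                      # completion time of this job (last stage)
    return flow

def order_crossover(p1, p2):
    """Standard OX: copy a slice from p1, fill the rest in p2's order."""
    a, b = sorted(random.sample(range(N_JOBS), 2))
    child = [None] * N_JOBS
    child[a:b] = p1[a:b]
    rest = [j for j in p2 if j not in child]
    for i in range(N_JOBS):
        if child[i] is None:
            child[i] = rest.pop(0)
    return child

pop = [random.sample(range(N_JOBS), N_JOBS) for _ in range(30)]
for _ in range(200):
    pop.sort(key=total_flow_time)
    elite = pop[:10]                   # keep the 10 best schedules
    pop = elite + [order_crossover(*random.sample(elite, 2)) for _ in range(20)]
best = min(pop, key=total_flow_time)
print("best sequence:", best, "flow time:", total_flow_time(best))
```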

    Estimating production losses from disruptions based on stock market returns: Applications to 9/11 attacks, the Deepwater Horizon oil spill, and Hurricane Sandy

    The threats to human life and infrastructure are ever growing due to global terrorism, conflicts and climate change, as well as the omnipresent threat of natural disruptions such as earthquakes, volcanoes and tsunamis. Disruptions or disasters lead to sudden changes in demand, production and supply. In such scenarios it is essential to optimize the utilization of available resources and avoid further waste. In this study a model is presented to measure the changes in production due to changes in the supply of and demand for goods and services, and to measure possible losses to industries during such disruptions. It is anticipated that there is a strong correlation of economic growth among industries and that a ripple effect causes losses to interdependent industries and economies in such scenarios. Variability in the economy is believed to be preceded by stock market price fluctuations: the trend of any economy is reflected in the stock markets it encompasses, and these markets provide instantaneous feedback to changes in a state of normalcy. These stock markets have been used to study the variability in the economic output of industries and to measure the dynamic changes in their production. The results of the study support the existence of such a correlation between the gross output of industries and the stock indices related to them. Past disruptions are studied through a deterministic model and a stochastic model, and the results obtained are consistent with existing estimates published by studies measuring the economic impacts of these disruptions. Such a study would enable governments, corporations and individual businesses to make informed decisions regarding the allocation of resources and contingency plans in case of such a disruption. The risk of monetary and market losses can be substantially reduced, enabling faster recovery and higher resilience.
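
    The deterministic flavour of this idea can be sketched as below: fit a linear link between an industry index's returns and the industry's output growth, then translate the index's drop in a disruption window into an estimated production loss. The figures and the linear specification are synthetic assumptions for illustration, not the study's calibrated model.

```python
import numpy as np

rng = np.random.default_rng(0)
index_returns = rng.normal(0.01, 0.03, 40)   # quarterly industry index returns
output_growth = 0.5 * index_returns + rng.normal(0, 0.005, 40)  # synthetic link

# OLS fit: output growth as a linear function of index returns.
beta, alpha = np.polyfit(index_returns, output_growth, 1)

baseline_output = 120.0      # industry gross output, $bn per quarter (invented)
disruption_return = -0.15    # observed index drop over the event window

predicted_growth = alpha + beta * disruption_return
loss = -predicted_growth * baseline_output
print(f"beta = {beta:.2f}; estimated one-quarter production loss: ${loss:.1f}bn")
```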

    Linux package dependency visualization


    Stock market predictions using machine learning

    In this thesis, an attempt is made to establish the impact of news articles and correlated stocks on any one stock. Stock prices depend on many factors, some common to most stocks and some specific to a type of company. For instance, a product-based company's stock depends on sales and profit, while a research-based company's stock is based on the progress made in its research over a specified time period. The main idea behind this thesis is that, using news articles, we can potentially estimate how much each of these factors impacts stock prices and how much is based on common factors such as momentum. This thesis is split into three parts. The first part is finding the correlated stocks for a selected stock ticker. Correlated stocks can have a significant impact on stock prices; a diverse portfolio of non-correlated stocks is very important for a stock trader, and yet very little research has been done on this topic from a computer science point of view. The second part uses Long Short-Term Memory (LSTM) networks on a pre-compiled list of news articles for the selected stock ticker; this lets us understand which articles might influence the stock price. The third part combines the two and compares the result to stock predictions made with a deep neural network on the stock prices over the same period. The selected companies for the experiment are Microsoft, Google, Netflix, Apple, Nvidia, AMD and Amazon, chosen for their popularity on the Internet, which makes it easier to collect articles about them. For day-to-day movement in stock prices, a typical regression approach can give reasonably accurate results, but where this method fails is in predicting the significant price changes that are not based on trends or momentum. For instance, if a company releases a faulty product but pre-release hype is high, the trends would show a positive direction for the stock, and a regression approach would most likely not predict the fall in price right after news of the fault is made public. The model would eventually correct itself, but not instantaneously. Using a news-based approach, it is possible to predict the fall before the change appears in the actual stock price. This approach shows success to a varying degree, with Microsoft showing the best accuracy at 91.46% and AMD the lowest at 40.59% on the test dataset. This is probably because of the volatility of AMD's stock price, which could be caused by factors other than the news, such as the influence of third-party companies. While news articles can help predict specific stock movements, a trend-based regression approach is still needed for day-to-day movements. The second part of the thesis focuses on this aspect of stock prediction: it feeds the results from the news articles into another neural network to predict the actual stock prices of each company. This second network takes the percentage change in stock price from one day to the next as input, along with the predicted values from the news articles, to predict the value of the stock for the next day. This approach produces mixed results; AMD's predicted values seem to be worse when only the news articles are incorporated.
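
    A stripped-down sketch of the combined second network described above follows: an LSTM over a window of daily percentage price changes plus a per-day news score predicts the next day's change. The real tickers, article encodings and tuning are omitted; the inputs are random placeholders, and TensorFlow/Keras is assumed.

```python
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(42)
DAYS, WINDOW = 500, 20
pct_change = rng.normal(0, 0.02, DAYS)   # daily % change in stock price
news_score = rng.uniform(-1, 1, DAYS)    # stand-in for the news model's output

# Build (window of [pct_change, news_score]) -> next-day pct_change samples.
feats = np.stack([pct_change, news_score], axis=1)
X = np.stack([feats[i:i + WINDOW] for i in range(DAYS - WINDOW - 1)])
y = pct_change[WINDOW + 1:]

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(WINDOW, 2)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
print("next-day change estimate:", model.predict(X[-1:], verbose=0)[0, 0])
```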

    Continuous optimization via simulation using Golden Region search

    Simulation Optimization (SO) is the use of mathematical optimization techniques in which the objective function (and/or constraints) can only be evaluated numerically through simulation. Many of the SO methods proposed in the literature are rooted in, or were originally developed for, deterministic optimization problems with closed-form objective functions. We argue that since evaluating the objective function in SO requires a simulation run, which is far more computationally costly than evaluating a closed-form function, SO methods should be more conservative and careful in proposing new candidate solutions for objective function evaluation. Based on this principle, a new SO approach called Golden Region (GR) search is developed for continuous problems. GR divides the feasible region into a number of subregions and selects one region in each iteration for further search, based on the quality and distribution of simulated points in the feasible region and the result of scanning the response surface through a metamodel. Experiments show that the GR method is efficient compared with three well-established approaches in the literature. We also prove convergence in probability to the global optimum for a large class of random search methods in general, and GR in particular.
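
    The partition-and-select loop can be illustrated roughly as below: the feasible interval is split into regions, each region is scored by the best simulated value found there plus an exploration bonus for sparsely sampled regions, and the next noisy simulation is run in the winning region. The scoring rule is an invented stand-in, not the GR criterion, and no metamodel scan is included.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate(x):
    """Noisy black-box objective standing in for one simulation run."""
    return (x - 2.0) ** 2 + rng.normal(0, 0.1)

LO, HI, N_REGIONS = 0.0, 5.0, 10
edges = np.linspace(LO, HI, N_REGIONS + 1)
samples = [[] for _ in range(N_REGIONS)]        # (x, y) pairs per region

# Seed each region with one simulated point.
for r in range(N_REGIONS):
    x = rng.uniform(edges[r], edges[r + 1])
    samples[r].append((x, simulate(x)))

for _ in range(100):
    scores = []
    for r in range(N_REGIONS):
        best = min(y for _, y in samples[r])        # region quality
        exploration = 1.0 / (1 + len(samples[r]))   # sparsity bonus
        scores.append(-best + 5.0 * exploration)
    r = int(np.argmax(scores))                      # winning region
    x = rng.uniform(edges[r], edges[r + 1])
    samples[r].append((x, simulate(x)))

all_points = [p for region in samples for p in region]
print("best point found:", min(all_points, key=lambda p: p[1]))
```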

    Analysis and Classification of Current Trends in Malicious HTTP Traffic

    Web applications are highly prone to coding imperfections that lead to hacker-exploitable vulnerabilities. The contribution of this thesis is a detailed analysis of malicious HTTP traffic based on data collected from four advertised high-interaction honeypots, each hosting different Web applications over a period of almost four months. We extract features from Web server logs that characterize malicious HTTP sessions in order to represent them as data vectors in four fully labeled datasets. Our results show that supervised learning methods, Support Vector Machines (SVM) and the decision-tree-based J48 and PART, can be used to efficiently distinguish attack sessions from vulnerability scan sessions, as well as to classify twenty-two different types of malicious activities with a high probability of detection and a very low probability of false alarms in most cases. Furthermore, feature selection methods can be used to select important features and improve the computational complexity of the learners.
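
    A scikit-learn sketch of this classification setup is given below: synthetic session feature vectors labeled as attack or vulnerability scan, classified with an SVM and a decision tree (a stand-in for the Weka J48/PART learners used in the thesis). The features and data are placeholders, not the honeypot datasets.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import classification_report

rng = np.random.default_rng(7)
n = 400
# Invented session features: request count, mean inter-request time, % 4xx.
scans   = np.column_stack([rng.normal(300, 50, n), rng.normal(0.05, 0.02, n),
                           rng.normal(0.60, 0.10, n)])
attacks = np.column_stack([rng.normal(25, 10, n), rng.normal(2.0, 0.8, n),
                           rng.normal(0.15, 0.08, n)])
X = np.vstack([scans, attacks])
y = np.array([0] * n + [1] * n)          # 0 = vuln scan, 1 = attack

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
for clf in (SVC(kernel="rbf", gamma="scale"), DecisionTreeClassifier(max_depth=5)):
    clf.fit(X_tr, y_tr)
    print(type(clf).__name__)
    print(classification_report(y_te, clf.predict(X_te),
                                target_names=["scan", "attack"]))
```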

    The regional in the international: an internationalization strategy for the university colleges of Vestlandet

    Having started as pure teaching institutions, the university colleges are now required to be relevant in a research context. The question, then, is how small institutions on the periphery of the world can operate internationally. My answer is that they must work closely with local institutions, both private and public, and solve their problems. As several cases will show, the solutions to these problems are of interest for publication in an international context. Keywords: internationalization, scientific publishing, international cooperation