959 research outputs found

    Scalable Deep Traffic Flow Neural Networks for Urban Traffic Congestion Prediction

    Tracking congestion throughout the road network is a critical component of intelligent transportation management systems. Understanding how traffic flows, and predicting short-term congestion caused by rush hour or incidents, allows such systems to manage traffic effectively and direct it to the most appropriate detours. Many current traffic flow prediction systems rely on a central processing component, where the prediction is carried out by aggregating information gathered from all measuring stations. Centralized systems, however, are not scalable and fail to provide real-time feedback, whereas in a decentralized scheme each node is responsible for predicting its own short-term congestion from the current measurements of neighboring nodes. We propose a decentralized deep learning-based method in which each node accurately predicts its own congestion state in real time based on the congestion states of the neighboring stations. Moreover, historical data from the deployment site is not required, which makes the proposed method more suitable for newly installed stations. To achieve higher performance, we introduce a regularized Euclidean loss function that favors high-congestion samples over low-congestion samples, mitigating the impact of the unbalanced training dataset. A novel dataset for this purpose is constructed from traffic data obtained from traffic control stations in northern California. Extensive experiments on the resulting benchmark demonstrate successful congestion prediction.
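    The abstract does not give the exact form of the regularized Euclidean loss; the sketch below is one plausible reading, in which samples above an assumed congestion threshold are up-weighted. The threshold and weight values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def weighted_euclidean_loss(y_true, y_pred, high_congestion_threshold=0.7, high_weight=3.0):
    """Mean squared (Euclidean) loss with larger weights on high-congestion targets.

    y_true, y_pred: arrays of congestion levels scaled to [0, 1].
    Samples whose true congestion exceeds the threshold are up-weighted so the
    model is not dominated by the abundant low-congestion samples.
    The threshold and weight are assumed values for illustration only.
    """
    weights = np.where(y_true >= high_congestion_threshold, high_weight, 1.0)
    return float(np.mean(weights * (y_true - y_pred) ** 2))

# Example: the error on the rare high-congestion sample contributes three times as much.
y_true = np.array([0.1, 0.2, 0.9])
y_pred = np.array([0.15, 0.25, 0.6])
print(weighted_euclidean_loss(y_true, y_pred))
```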

    Investigation of IoT applications in supply chain management with fuzzy hierarchical analysis

    The IoT is currently growing rapidly and uses technologies such as smart barcode sensors, RFID, wireless communications, cloud computing, and more. The Internet of Things, in addition to being a revolutionary technology for all industries, has also demonstrated its potential in processes such as supply chain management. Forecasting and monitoring applications help managers improve the operational efficiency of their distribution operations and increase the transparency of their decisions, so the benefits of using the Internet of Things are more evident than ever in the supply chain. Comprehensive and valid information platforms are one of the requirements of supply chain management; therefore, the accurate use of integrated information technologies such as the Internet of Things in this part of the organization's management is important. Capturing this information accurately and instantly facilitates operations and makes processes more transparent. To support this, cloud computing is used as a solution, and its other capabilities can also be exploited, such as facilitating communication between objects, integrating monitoring devices, storing and analyzing IoT data, and providing an online channel through which supply chain management can reach the customer. This requires a model that defines how IoT technology, cloud computing, and supply chain management relate to one another. The purpose of this study is to identify and prioritize IoT applications in supply chain management with a multi-criteria decision-making approach based on fuzzy hierarchical analysis. The results show that applications such as intelligent control and intelligent maintenance have the highest priorities.
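    For readers unfamiliar with fuzzy hierarchical analysis, the sketch below shows one common variant, Buckley's fuzzy AHP with triangular fuzzy numbers and geometric-mean aggregation; the study may use a different fuzzy AHP formulation (e.g. extent analysis), and the example comparison matrix is illustrative only.

```python
import numpy as np

def buckley_fuzzy_ahp(matrix):
    """Derive crisp priority weights from a triangular-fuzzy pairwise comparison matrix.

    matrix[i][j] is a triangular fuzzy number (l, m, u) expressing how much
    criterion i is preferred over criterion j.
    """
    tfn = np.asarray(matrix, dtype=float)          # shape (n, n, 3)
    n = tfn.shape[0]
    # Fuzzy geometric mean of each row (component-wise over l, m, u).
    geo = np.prod(tfn, axis=1) ** (1.0 / n)        # shape (n, 3)
    # Fuzzy weights: divide by the column sums (lower/upper bounds reversed).
    total = geo.sum(axis=0)                        # (sum_l, sum_m, sum_u)
    fuzzy_w = np.stack([geo[:, 0] / total[2],
                        geo[:, 1] / total[1],
                        geo[:, 2] / total[0]], axis=1)
    # Defuzzify by the centroid (average of l, m, u) and normalize.
    crisp = fuzzy_w.mean(axis=1)
    return crisp / crisp.sum()

# Illustrative 3-criterion comparison: criterion 0 is moderately preferred to 1 and 2.
one  = (1, 1, 1)
m3   = (2, 3, 4)        # "moderately more important"
inv3 = (1/4, 1/3, 1/2)  # reciprocal of m3
matrix = [[one,  m3,   m3],
          [inv3, one,  one],
          [inv3, one,  one]]
print(buckley_fuzzy_ahp(matrix))
```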

    ALIGNMENT-FREE METHODS AND ITS APPLICATIONS

    Comparing biological sequences remains one of the most vital activities in bioinformatics. Sequence comparison addresses the relatedness between species and finds similar structures that may point to similar functions. Sequence alignment is the default method and has been used in the domain for over four decades. It has earned a great deal of trust, but limitations and even failures have been reported, especially with newly generated genomes. These genomes are larger and, to some extent, suffer from errors introduced mainly by the sequencing machines. Such sequencing errors should be considered when sequences are submitted to GenBank; for sequence comparison, however, the problem is often hard to address or even to trace. Alignment-based methods can fail on such errors and, even though biologists still trust them, failures with these methods have been reported. The poor results of alignment-based methods on error-prone sequences have motivated researchers in the domain to look for alternatives. These alternatives are alignment-free methods, which aim to overcome the shortcomings of alignment-based methods. This thesis is based on alignment-free methods: it conducts an in-depth study to evaluate them and to find the right application domains for them. One suitable setting is to apply them to data subjected to artificially introduced errors and to test whether the methods provide better comparison results on data with naturally severe errors. The two techniques used in this work are compression-based and motif-based (also called k-mer-based or signal-based) methods. We also address the selection of the motifs used in the second technique and how to improve the results by selecting specific motifs that enhance the quality of the comparison. In addition, we apply an alignment-free method to a different problem, gene prediction, using it to speed up the process of producing high-quality results and to predict accurate stretches of the DNA sequence that can be considered parts of genes.
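    As a concrete illustration of the motif (k-mer) based technique, the sketch below compares two sequences by the cosine distance between their k-mer frequency profiles; the choice of k and of cosine distance are assumptions for illustration, not the specific measures evaluated in the thesis.

```python
from collections import Counter
from math import sqrt

def kmer_profile(seq, k=4):
    """Count the occurrences of every overlapping k-mer in a DNA sequence."""
    seq = seq.upper()
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def cosine_distance(p, q):
    """Alignment-free distance between two k-mer profiles (0 = identical composition)."""
    common = set(p) | set(q)
    dot = sum(p[kmer] * q[kmer] for kmer in common)
    norm = sqrt(sum(v * v for v in p.values())) * sqrt(sum(v * v for v in q.values()))
    return 1.0 - dot / norm if norm else 1.0

a = "ATGCGTACGTTAGCATGCGTACGTA"
b = "ATGCGTACGTTAGCATGCGTACCTA"   # one substitution relative to a
print(cosine_distance(kmer_profile(a), kmer_profile(b)))
```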

    Influence of Aspect Ratio on the Bearing Capacity and Correlated Modulus of Subgrade Reaction for Shallow Foundations on Dense Sand

    One of the most fundamental problems in geotechnical engineering is the prediction of the bearing capacity and settlement of shallow foundations on cohesionless soil subjected to vertical central loading. The "shape factors" commonly used in current design practice for estimating the bearing capacity and modulus of subgrade reaction of a shallow foundation are partially empirical values proposed by Terzaghi (1943). Moreover, the current design of shallow foundations on cohesionless soil does not take into consideration the scale effect between the soil particles and the foundation geometry, which may result in an excessively conservative design and, in turn, unnecessary foundation costs. Therefore, to capture the realistic effect of the three-dimensional soil deformation mechanism for foundations with various aspect ratios B/L on the shape factor in the classical formulas and theories of ultimate bearing capacity, a series of small-scale model foundation tests was carried out on a standard test sand (Toyoura sand), and the three-dimensional deformation mechanism was closely monitored and recorded at the end of each test. This research presents the main observations and results of the square, rectangular and strip model tests with a constant foundation base width B conducted on compacted sand. Finally, the results are compared with those from the literature, and preliminary conclusions and recommendations are drawn.
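    For context, the shape factors discussed above enter the classical ultimate bearing capacity expression, shown here in its general textbook form with Terzaghi-type empirical values; the specific factors and correlations examined in the thesis may differ.

```latex
% General ultimate bearing capacity of a shallow foundation of width B and length L:
\[
  q_u \;=\; c\,N_c\,s_c \;+\; q\,N_q\,s_q \;+\; \tfrac{1}{2}\,\gamma\,B\,N_\gamma\,s_\gamma ,
\]
% where c is cohesion, q the surcharge, \gamma the unit weight of the soil,
% N_c, N_q, N_\gamma are bearing capacity factors, and the shape factors
% s_c, s_q, s_\gamma depend on the aspect ratio B/L. Terzaghi's empirical
% values for a square footing are s_c = 1.3 and s_\gamma = 0.8, while a strip
% footing (B/L -> 0) takes s_c = s_\gamma = 1.
```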

    Provide a model for an e-commerce system with the impact of artificial intelligence

    Purpose: In few industries today is competition as fierce as in e-commerce: online retailers compete not only with other online and physical stores but with the entire Internet space. Today, AI-based platforms are a vital element of e-commerce success. Artificial intelligence plays a constructive role in data-driven decisions in digital marketing, because deep learning can predict user behavior from the beginning to the end of the purchase path. In today's world, customer behavior has changed. Methodology: When customers feel a need, they first search for it on the Internet. Accordingly, many e-commerce retailers with artificial intelligence capabilities try to integrate textual, visual, and audio channels, especially through "conversational commerce", to attract more customer attention. Because customer needs are growing rapidly, retailers are always trying to achieve the best sales; if brands want to endure, considering these rapidly growing customer needs must be one of the main priorities of their business strategies. Findings: The role of chatbots, which are computer programs designed to simulate conversations with human users on the Internet, is therefore very important in conversational commerce. Originality/Value: In this study, the effect of artificial intelligence on e-commerce is investigated and the most important functions of this tool are analyzed.

    Exploring the adoption of a conceptual data analytics framework for subsurface energy production systems: a study of predictive maintenance, multi-phase flow estimation, and production optimization

    As technology continues to advance and become more integrated into the oil and gas industry, a vast amount of data is now available across various scientific disciplines, providing new opportunities to gain insightful and actionable information. The convergence of digital transformation with the physics of fluid flow through porous media and pipelines has driven the advancement and application of machine learning (ML) techniques to extract further value from these data. As a result, digital transformation and its associated machine-learning applications have become a new area of scientific investigation. The transformation of brownfields into digital oilfields can aid energy production by accomplishing various objectives, including increased operational efficiency, production optimization, collaboration, data integration, decision support, and workflow automation. This work presents a framework for these applications, specifically through the implementation of virtual sensing, predictive analytics using predictive maintenance on production hydraulic systems (with a focus on electrical submersible pumps), and prescriptive analytics for production optimization in steam- and waterflooding projects.
    In terms of virtual sensing, the accurate estimation of multi-phase flow rates is crucial for monitoring and improving production processes. This study presents a data-driven approach for calculating multi-phase flow rates from sensor measurements in electrical submersible pumped (ESP) wells. An exhaustive exploratory data analysis is conducted, including a univariate study of the target outputs (liquid rate and water cut), a multivariate study of the relationships between inputs and outputs, and data grouping based on principal component projections and clustering algorithms. Feature prioritization experiments are performed to identify the most influential parameters in the prediction of flow rates. Models are compared using the mean absolute error, the mean squared error and the coefficient of determination. The results indicate that a CNN-LSTM network architecture is particularly effective for time series analysis of ESP sensor data, as the 1D-CNN layers automatically extract features and generate informative representations of the time series.
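    A minimal sketch of the kind of 1D-CNN plus LSTM architecture described above, written in Keras; the layer sizes, window length, and sensor count are placeholder assumptions, not the configuration used in the thesis.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Assumed shapes: windows of 60 time steps from 8 ESP sensor channels,
# predicting two regression targets (liquid rate and water cut).
WINDOW, N_SENSORS, N_TARGETS = 60, 8, 2

def build_cnn_lstm():
    model = models.Sequential([
        layers.Input(shape=(WINDOW, N_SENSORS)),
        # 1D convolutions act as automatic feature extractors along the time axis.
        layers.Conv1D(32, kernel_size=5, activation="relu", padding="same"),
        layers.MaxPooling1D(pool_size=2),
        layers.Conv1D(64, kernel_size=3, activation="relu", padding="same"),
        # The LSTM summarizes the extracted feature sequence.
        layers.LSTM(64),
        layers.Dense(32, activation="relu"),
        layers.Dense(N_TARGETS),          # regression outputs
    ])
    model.compile(optimizer="adam", loss="mse", metrics=["mae"])
    return model

model = build_cnn_lstm()
model.summary()
```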
    Subsequently, this study presents a methodology for implementing predictive maintenance on artificial lift systems, specifically the maintenance of electrical submersible pumps (ESPs). Conventional maintenance practices for ESPs require extensive resources and manpower and are often initiated through reactive monitoring of multivariate sensor data. To address this issue, the study employs principal component analysis (PCA) and extreme gradient boosting trees (XGBoost) to analyze real-time sensor data and predict potential failures in ESPs. PCA is used as an unsupervised technique, and its output is further processed by the XGBoost model for prediction of the system status. The resulting predictive model has been shown to provide signals of potential failures up to seven days in advance, with an F1 score greater than 0.71 on the test set.
    In addition to the data-driven modeling approach, the present study also incorporates model-free reinforcement learning (RL) algorithms to aid decision-making in production optimization. Determining the optimal injection strategy poses challenges due to the complexity of the underlying dynamics, including nonlinear formulation, temporal variations, and reservoir heterogeneity. To tackle these challenges, the problem was reformulated as a Markov decision process, and RL algorithms were employed to determine actions that maximize production yield. The results demonstrate that the RL agent was able to significantly enhance the net present value (NPV) by continuously interacting with the environment and iteratively refining the dynamic process over multiple episodes. This showcases the potential of RL algorithms to provide effective and efficient solutions for complex optimization problems in the production domain.
    In conclusion, this study represents an original contribution to the field of data-driven applications in subsurface energy systems. It proposes a data-driven method for determining multi-phase flow rates in ESP wells from sensor measurements, covering exploratory data analysis, feature prioritization experiments, and model evaluation based on mean absolute error, mean squared error, and coefficient of determination, and finds that a convolutional neural network-long short-term memory (CNN-LSTM) network is an effective approach for time series analysis in ESPs. In addition, the study implements PCA and XGBoost to perform predictive maintenance on ESPs and anticipate potential failures up to a seven-day horizon. Furthermore, it applies model-free RL algorithms to aid decision-making in production optimization and enhance the net present value (NPV).
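    A minimal sketch of the PCA-then-XGBoost pipeline summarized above, assuming a labeled sensor dataset with a binary failure-within-seven-days target; the synthetic data, feature count, number of components, and hyperparameters are placeholders rather than the thesis's actual configuration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from xgboost import XGBClassifier

# Placeholder data: rows are time-stamped ESP sensor snapshots; the label marks
# whether a failure occurred within the following seven days (random here).
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 12))              # 12 sensor channels (assumed)
y = (rng.random(5000) < 0.05).astype(int)    # rare failure label (assumed)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

model = make_pipeline(
    StandardScaler(),
    PCA(n_components=5),                     # unsupervised dimensionality reduction
    XGBClassifier(
        n_estimators=300, max_depth=4, learning_rate=0.05,
        # Up-weight the rare failure class to counter the imbalance.
        scale_pos_weight=(y_train == 0).sum() / max((y_train == 1).sum(), 1),
        eval_metric="logloss"),
)
model.fit(X_train, y_train)
print("F1 on held-out set:", f1_score(y_test, model.predict(X_test)))
```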

    Variations on $\Delta^1_1$ Determinacy and $\aleph_{\omega_1}$

    We consider a weaker form of $\Delta^1_1$ Turing determinacy. Let $2 \leqslant \rho < \omega_1^{\mathsf{CK}}$; $\mathrm{Weak\text{-}Turing\text{-}Det}_{\rho}(\Delta^1_1)$ is the statement: every $\Delta^1_1$ set of reals cofinal in the Turing degrees contains two Turing-distinct, $\Delta^0_\rho$-equivalent reals. We show in $\mathsf{ZF}^-$ that $\mathrm{Weak\text{-}Turing\text{-}Det}_{\rho}(\Delta^1_1)$ implies that for every $\nu < \omega_1^{\mathsf{CK}}$ there is a transitive model $M \models \mathsf{ZF}^- + {}$"$\aleph_\nu$ exists". As a corollary: if every cofinal $\Delta^1_1$ set of Turing degrees contains both a degree and its jump, then for every $\nu < \omega_1^{\mathsf{CK}}$ there is a transitive model $M \models \mathsf{ZF}^- + {}$"$\aleph_\nu$ exists".
    $\bullet$ With a simple proof, this improves upon a well-known result of Harvey Friedman on the strength of Borel determinacy (though not assessed level-by-level).
    $\bullet$ Invoking Tony Martin's proof of Borel determinacy, $\mathrm{Weak\text{-}Turing\text{-}Det}_{\rho}(\Delta^1_1)$ implies $\Delta^1_1$ determinacy.
    $\bullet$ We show further that, assuming $\Delta^1_1$ Turing determinacy or Borel Turing determinacy, as needed:
    $-$ every cofinal $\Sigma^1_1$ set of Turing degrees contains a "hyp-Turing cone" $\{x \in \mathcal{D} \mid d_0 \leqslant_T x \leqslant_h d_0\}$;
    $-$ for a sequence $(A_k)_{k<\omega}$ of analytic sets of Turing degrees, cofinal in $\mathcal{D}$, the intersection $\bigcap_{k} A_{k}$ is cofinal in $\mathcal{D}$.
    Comment: 10 pages
    • 
