56 research outputs found
A Machine Learning and Computer Vision Application to Robustly Extract Winnings from Multiple Lottery Tickets in One Shot
Mega Millions and Powerball are among the most popular American lottery games. This article presents a practical software application that can conveniently examine and evaluate several lottery tickets for prizes using just their images. The application accepts as input a directory containing the images of lottery tickets and utilizes machine learning and computer vision to extract the lottery ticket data: the lottery name, the lottery draw date, the five lottery numbers, the two-digit lottery "ball" number, and the lottery multiplier. The application also retrieves the winning lottery data that corresponds to the lottery draw date using a public database API. This is compared with the data collected from each lottery ticket image to establish matches, and the corresponding prize amount is computed. The current version of the application supports GPU usage, and image orientation has no impact on its functionality. It is believed that a considerable portion of the U.S. public participating in the Powerball and Mega Millions lotteries will find such an application beneficial and handy.
Novel solar forecasting scheme modelled by mixer dual path network and based on sky images
The prediction of global horizontal irradiance has become an effective technique to address the intermittence issue of photovoltaic (PV) power generation. This article proposes a novel deep neural network (DNN), named Mixer Dual Path Network (Mixer-DPN), for solar forecasting. It shares common features of cloud images while maintaining the flexibility to explore new features through a dual-path architecture that combines the Mixer layer and the Dual Path Network. As a result, the proposed model provides more accurate predictions than classical DNN-based predictors. Moreover, the proposed model shows a faster convergence speed and a smaller model size, which makes it suitable for practical global horizontal irradiance forecasting. The merits of the proposed model are verified by testing it on data from the National Renewable Energy Laboratory and comparing it with other DNN-based prediction models. Experiments show that the new model achieves excellent results on MSE, MAE, and other metrics, and its R² prediction accuracy improves by 14% over the baseline model.
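The dual-path idea the abstract builds on (a shared computation feeding both a residual branch that reuses features and a dense branch that accumulates new ones) can be illustrated with a toy NumPy block. This is a minimal sketch of the generic Dual Path Network pattern only; the Mixer layer, the dimensions, and the random linear "transform" are illustrative choices, not Mixer-DPN's actual architecture.

```python
import numpy as np

def dual_path_block(x_res, x_dense, rng):
    """One generic dual-path step: a shared transform feeds both a
    residual branch (element-wise add, ResNet-style) and a dense branch
    (channel concatenation, DenseNet-style). All sizes are toy choices.
    """
    d_res, d_new = x_res.shape[-1], 8
    w = rng.standard_normal((d_res + x_dense.shape[-1], d_res + d_new))
    h = np.concatenate([x_res, x_dense], axis=-1) @ w     # shared computation
    x_res = x_res + h[..., :d_res]                        # reused features
    x_dense = np.concatenate([x_dense, h[..., d_res:]], axis=-1)  # new features
    return x_res, x_dense

rng = np.random.default_rng(0)
r, d = rng.standard_normal((1, 32)), rng.standard_normal((1, 16))
for _ in range(3):
    r, d = dual_path_block(r, d, rng)
print(r.shape, d.shape)  # (1, 32) (1, 40): residual width fixed, dense width grows
```

The design point is that the residual path keeps a fixed width (cheap feature reuse) while the dense path widens each block (capacity to explore new features), which is the flexibility the abstract attributes to the dual-path architecture.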
Most Neural Networks Are Almost Learnable
We present a PTAS for learning random constant-depth networks. We show that for any fixed $\epsilon > 0$ and depth $t$, there is a poly-time algorithm that, for any distribution on $\sqrt{d} \cdot \mathbb{S}^{d-1}$, learns random Xavier networks of depth $t$ up to an additive error of $\epsilon$. The algorithm runs in time and sample complexity of $(\bar{d})^{\mathrm{poly}(\epsilon^{-1})}$, where $\bar{d}$ is the size of the network. For some cases of sigmoid and ReLU-like activations the bound can be improved to $(\bar{d})^{\mathrm{polylog}(\epsilon^{-1})}$, resulting in a quasi-poly-time algorithm for learning constant-depth random networks.
Underwater target detection based on improved YOLOv7
Underwater target detection is a crucial aspect of ocean exploration.
However, conventional underwater target detection methods face several
challenges such as inaccurate feature extraction, slow detection speed and lack
of robustness in complex underwater environments. To address these limitations,
this study proposes an improved YOLOv7 network (YOLOv7-AC) for underwater
target detection. The proposed network utilizes an ACmixBlock module to replace
the 3x3 convolution block in the E-ELAN structure, and incorporates jump
connections and 1x1 convolution architecture between ACmixBlock modules to
improve feature extraction and network reasoning speed. Additionally, a
ResNet-ACmix module is designed to avoid feature information loss and reduce
computation, while a Global Attention Mechanism (GAM) is inserted in the
backbone and head parts of the model to improve feature extraction.
Furthermore, the K-means++ algorithm is used instead of K-means to obtain
anchor boxes and enhance model accuracy. Experimental results show that the
improved YOLOv7 network outperforms the original YOLOv7 model and other popular
underwater target detection methods. The proposed network achieved a mean
average precision (mAP) value of 89.6% and 97.4% on the URPC dataset and
Brackish dataset, respectively, and ran at a higher frames-per-second (FPS) rate
than the original YOLOv7 model. The source code for this study is
publicly available at https://github.com/NZWANG/YOLOV7-AC. In conclusion, the
improved YOLOv7 network proposed in this study represents a promising solution
for underwater target detection and holds great potential for practical
applications in various underwater tasks.
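The K-means++ substitution the abstract mentions concerns how anchor boxes are seeded: instead of picking all initial centres uniformly, each new centre is drawn with probability proportional to its squared distance from the nearest centre already chosen, which spreads anchors across box scales. Below is a sketch of that standard seeding step on (width, height) pairs; the synthetic box data is illustrative, not the URPC/Brackish ground truth.

```python
import numpy as np

def kmeans_pp_init(boxes, k, rng):
    """k-means++ seeding on (w, h) box sizes: first centre uniform,
    each further centre drawn with probability proportional to the
    squared distance to its nearest already-chosen centre.
    """
    centres = [boxes[rng.integers(len(boxes))]]
    for _ in range(k - 1):
        # Squared distance of every box to its nearest current centre.
        d2 = np.min([((boxes - c) ** 2).sum(axis=1) for c in centres], axis=0)
        probs = d2 / d2.sum()
        centres.append(boxes[rng.choice(len(boxes), p=probs)])
    return np.array(centres)

rng = np.random.default_rng(42)
# Synthetic box widths/heights clustered around three object scales.
boxes = np.concatenate([rng.normal(s, 2, (50, 2)) for s in (10, 40, 120)])
anchors = kmeans_pp_init(boxes, 3, rng)
print(anchors.shape)  # (3, 2): one (w, h) anchor seed per cluster
```

The seeds would then be refined by ordinary k-means iterations; the benefit over plain k-means is that a bad uniform initialization (e.g. two seeds in the same scale cluster) becomes much less likely.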
Global Convergence of SGD For Logistic Loss on Two Layer Neural Nets
In this note, we demonstrate a first-of-its-kind provable convergence of SGD
to the global minima of appropriately regularized logistic empirical risk of
depth-2 nets -- for arbitrary data and with any number of gates with
adequately smooth and bounded activations like sigmoid and tanh. We also prove
an exponentially fast convergence rate for continuous-time SGD that also
applies to smooth unbounded activations like SoftPlus. Our key idea is to show
that Frobenius-norm-regularized logistic loss functions on constant-sized
neural nets are "Villani functions", and thus to be able to
build on recent progress in analyzing SGD on such objectives.
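A plausible form of the regularized objective the abstract refers to, written here for illustration (the symbols $f_{\mathbf{W}}$, $\lambda$, and the exact normalization are assumptions, not the paper's notation):

```latex
\[
  \widehat{L}_\lambda(\mathbf{W}) \;=\;
  \frac{1}{n}\sum_{i=1}^{n}\log\!\bigl(1 + e^{-y_i\, f_{\mathbf{W}}(x_i)}\bigr)
  \;+\; \lambda \,\lVert \mathbf{W} \rVert_F^2 ,
\]
```

i.e. the logistic empirical risk of the depth-2 net $f_{\mathbf{W}}$ plus a Frobenius-norm penalty on the weights, which is the class of functions shown to be Villani and hence amenable to the cited SGD analysis.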
Convergence Analysis of Deep Residual Networks
Various powerful deep neural network architectures have made great
contributions to the exciting successes of deep learning in the past two
decades. Among them, deep Residual Networks (ResNets) are of particular
importance because they demonstrated great usefulness in computer vision by
winning first place in many deep learning competitions. Also, ResNets were
the first class of neural networks in the development history of deep learning
that are really deep. It is of mathematical interest and practical meaning to
understand the convergence of deep ResNets. We aim at characterizing the
convergence of deep ResNets as the depth tends to infinity in terms of the
parameters of the networks. Toward this purpose, we first give a matrix-vector
description of general deep neural networks with shortcut connections and
formulate an explicit expression for the networks by using the notions of
activation domains and activation matrices. The convergence is then reduced to
the convergence of two series involving infinite products of non-square
matrices. By studying the two series, we establish a sufficient condition for
pointwise convergence of ResNets. Our result provides justification for
the design of ResNets. We also conduct experiments on benchmark machine
learning datasets to verify our results.
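The reduction the abstract describes (convergence of the network as depth grows reduced to convergence of series built from infinite products of layer matrices) can be illustrated numerically. The sketch below uses square matrices and a $1/l^2$ decay purely for illustration; the paper's actual condition concerns non-square matrices and its own hypotheses on the parameters.

```python
import numpy as np

# Toy illustration: a product prod_l (I + A_l) of layer perturbations
# stabilizes as depth grows when the series sum_l ||A_l|| converges.
# The 1/l^2 decay is an illustrative choice, not the paper's hypothesis.

rng = np.random.default_rng(1)
P = np.eye(3)
for l in range(1, 2001):
    A_l = rng.standard_normal((3, 3)) / l**2   # ||A_l|| is summable in l
    P = P @ (np.eye(3) + A_l)

P_prev = P.copy()
for l in range(2001, 4001):                    # double the depth
    A_l = rng.standard_normal((3, 3)) / l**2
    P = P @ (np.eye(3) + A_l)

# The partial products barely move once the tail of the series is small.
print(np.linalg.norm(P - P_prev))
```

This mirrors the sufficient-condition viewpoint: controlling the series of perturbation norms controls the infinite product, and hence the pointwise limit of the network as depth tends to infinity.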