Predicting pharmaceutical particle size distributions using kernel mean embedding
In the pharmaceutical industry, more and more companies are adopting continuous manufacturing of solid dosage forms. These continuous processes require high-quality process models. In pharmaceutical wet granulation, a unit operation in the ConsiGma™-25 continuous powder-to-tablet system (GEA Pharma Systems, Collette, Wommelgem, Belgium), the product under study presents itself as a collection of particles that differ in shape and size. Measuring this collection yields a particle size distribution. However, the theoretical basis for describing the physical phenomena that drive changes in this particle size distribution is lacking. It is essential to understand how the particle size distribution changes as a function of the unit operation's process settings, as it has a profound effect on the behavior of the fluid bed dryer. We therefore propose a data-driven modeling framework that links the machine settings of the wet granulation unit operation to the output distribution of granules, without making any assumptions about the nature of the distributions under study. A simulation of the granule size distribution could act as a soft sensor when in-line measurements are challenging to perform. The method is a two-step procedure: first, the measured distributions are transformed into a high-dimensional feature space, where the relation between the machine settings and the distributions can be learnt; second, the inverse transformation is performed, allowing an interpretation of the results in the original measurement space. Further, a comparison is made with previous work, which employs a more mechanistic framework for describing the granules. A reliable prediction of the granule size is vital for quality assurance in the production line, and is needed to assess upstream (feeding) and downstream (drying, milling, and tableting) issues.
Now that a validated data-driven framework for predicting pharmaceutical particle size distributions is available, it can be applied in settings such as model-based experimental design and, thanks to its fast computation, has potential for real-time model predictive control.
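The two-step procedure described above can be illustrated with a minimal numpy sketch. Synthetic machine settings and size distributions stand in for the real ConsiGma-25 data, an RBF kernel on a fixed size grid provides the mean embedding, and ridge regression plays the role of the learned map; all names, dimensions, bandwidths, and the simple least-squares pre-image step are illustrative assumptions, not the paper's actual choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 3 machine settings -> particle size distribution
# over 20 size bins (all shapes and parameters are illustrative).
n_train, n_settings, n_bins = 40, 3, 20
bins = np.linspace(0.0, 1.0, n_bins)           # normalized size-bin centers

settings = rng.uniform(0, 1, (n_train, n_settings))
# Toy "true" process: setting-dependent Gaussian-shaped distributions.
means = 0.2 + 0.6 * settings[:, 0]
dists = np.exp(-0.5 * ((bins - means[:, None]) / 0.1) ** 2)
dists /= dists.sum(axis=1, keepdims=True)

# Step 1: embed each measured distribution via its kernel mean embedding,
# mu_i(b) = sum_j p_ij k(bins_j, b), evaluated on the same size grid.
K = np.exp(-0.5 * ((bins[:, None] - bins[None, :]) / 0.15) ** 2)
embeds = dists @ K

# Learn a ridge-regression map from machine settings to embeddings.
X = np.hstack([settings, np.ones((n_train, 1))])   # bias column
lam = 1e-6
W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ embeds)

# Step 2: "inverse transformation" -- here a crude pre-image step that
# solves K p = mu in the least-squares sense and projects back to a histogram.
def predict_distribution(s):
    mu = np.hstack([s, 1.0]) @ W                   # predicted embedding
    p, *_ = np.linalg.lstsq(K, mu, rcond=None)     # invert the embedding
    p = np.clip(p, 0, None)
    return p / p.sum()

p_hat = predict_distribution(np.array([0.5, 0.5, 0.5]))
```

The prediction step is fast (two small matrix products and one linear solve), which is the property the abstract leans on for soft-sensor and model-predictive-control use.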
Sign rank versus VC dimension
This work studies the maximum possible sign rank of N x N sign matrices with a given VC dimension d. For d = 1, this maximum is 3. For d = 2, this maximum is N^{1/2}, up to logarithmic factors. For d > 2, similar but slightly less accurate statements hold. The lower bounds improve over previous ones by Ben-David et al., and the upper bounds are novel.
The lower bounds are obtained by probabilistic constructions, using a theorem of Warren in real algebraic topology. The upper bounds are obtained using a result of Welzl about spanning trees with low stabbing number, and using the moment curve.
The upper bound technique is also used to: (i) provide estimates on the number of classes of a given VC dimension, and the number of maximum classes of a given VC dimension -- answering a question of Frankl from '89, and (ii) design an efficient algorithm that provides an O(N/log N) multiplicative approximation for the sign rank.
We also observe a general connection between sign rank and spectral gaps, which is based on Forster's argument. Consider the N x N adjacency matrix of a Delta-regular graph with a second eigenvalue of absolute value lambda and Delta <= N/2. We show that the sign rank of the signed version of this matrix is at least Delta/lambda. We use this connection to prove the existence of a maximum class with VC dimension 2 and sign rank N^{1/2}, up to logarithmic factors. This answers a question of Ben-David et al. regarding the sign rank of large VC classes. We also describe limitations of this approach, in the spirit of the Alon-Boppana theorem.
We further describe connections to communication complexity, geometry,
learning theory, and combinatorics.
Comment: 33 pages. This is a revised version of the paper "Sign rank versus VC dimension". Additional results in this version: (i) estimates on the number of maximum VC classes (answering a question of Frankl from '89); (ii) estimates on the sign rank of large VC classes (answering a question of Ben-David et al. from '03); (iii) a discussion of the computational complexity of computing the sign-rank.
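For intuition about the combinatorial quantity in these statements, the VC dimension of a toy-sized sign matrix (viewing its rows as a class of +-1 labelings of the columns) can be checked by brute force. This sketch is illustrative only, scales exponentially, and is unrelated to the paper's proof techniques.

```python
from itertools import combinations

def vc_dimension(M):
    """Brute-force VC dimension of the class of rows of a sign matrix M
    (entries +-1): the size of the largest set of columns on which the
    rows realize all 2^|S| sign patterns."""
    n_cols = len(M[0])
    best = 0
    for d in range(1, n_cols + 1):
        found = False
        for cols in combinations(range(n_cols), d):
            patterns = {tuple(row[c] for c in cols) for row in M}
            if len(patterns) == 2 ** d:
                best, found = d, True
                break
        if not found:           # no set of size d is shattered
            break
    return best

# All four patterns appear on columns {0, 1}, but with only four rows
# no three columns can show all eight patterns:
M = [[1, 1, 1], [1, -1, 1], [-1, 1, 1], [-1, -1, 1]]
print(vc_dimension(M))  # -> 2
```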
Development and realization of energy management system in process industry, Part II: Advanced functions
One of the specific functions of an energy management system is the prediction of energy consumption, which enables a timely reaction to, and prevention of, undesirable situations. Moreover, energy consumption prediction enables better planning of energy production and contributes to the reduction of overall production costs. This paper presents a new algorithm for energy consumption forecasting based on support vector machines. The developed energy management system has been realized and operates at Sojaprotein AD Bečej, a soybean processing factory.
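As a sketch of the underlying idea (not the paper's actual algorithm, features, or data), a linear model can be trained on the epsilon-insensitive loss at the heart of support vector regression, here on a synthetic daily-periodic consumption series with plain numpy; the lag structure, loss parameters, and data are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative data only: a daily-periodic consumption series with noise,
# standing in for the factory's real measurements.
series = 100 + 10 * np.sin(np.arange(300) * 2 * np.pi / 24) + rng.normal(0, 1, 300)

# Predict the next hour from the previous 3 hours (hypothetical lag choice).
X = np.stack([series[i:i + 3] for i in range(len(series) - 3)])
y = series[3:]

X = (X - X.mean(axis=0)) / X.std(axis=0)      # standardize features
X = np.hstack([X, np.ones((len(X), 1))])      # bias column
y_mean = y.mean()
yc = y - y_mean                               # center the target

# Linear model trained on the epsilon-insensitive (SVR) loss by
# subgradient descent on  C * sum(max(|r| - eps, 0)) + 0.5 * ||w||^2.
w = np.zeros(X.shape[1])
C, eps, lr = 1.0, 0.5, 2e-4
for _ in range(3000):
    r = X @ w - yc
    g = np.where(np.abs(r) > eps, np.sign(r), 0.0)  # loss subgradient
    w -= lr * (C * X.T @ g + np.r_[w[:-1], 0.0])    # bias not regularized

pred = X @ w + y_mean
mae = np.abs(pred - y).mean()
```

The epsilon tube makes the fit insensitive to small measurement noise, which is one reason SVR-style losses are attractive for noisy consumption data.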
On Polynomial Time Constructions of Minimum Height Decision Tree
A decision tree T in B_m := {0,1}^m is a binary tree where each internal node is labeled with an integer in [m] = {1,2,...,m}, each leaf is labeled with an assignment a in B_m, and each internal node has two outgoing edges, labeled with 0 and 1, respectively. Let A subset B_m. We say that T is a decision tree for A if (1) for every a in A there is one leaf of T that is labeled with a, and (2) for every path from the root to a leaf with internal nodes labeled i_1, i_2, ..., i_k in [m], a leaf labeled with a in A, and edges labeled xi_{i_1}, ..., xi_{i_k} in {0,1}, a is the only element in A that satisfies a_{i_j} = xi_{i_j} for all j = 1,...,k.
Our goal is to write a polynomial time (in n:=|A| and m) algorithm that for an input A subseteq B_m outputs a decision tree for A of minimum depth. This problem has many applications that include, to name a few, computer vision, group testing, exact learning from membership queries and game theory.
Arkin et al. and Moshkov [Esther M. Arkin et al., 1998; Mikhail Ju. Moshkov, 2004] gave a polynomial time (ln |A|)-approximation algorithm (for the depth). The result of Dinur and Steurer [Irit Dinur and David Steurer, 2014] for set cover implies that this problem cannot be approximated with ratio (1 - o(1)) ln |A|, unless P = NP. Moshkov studied in [Mikhail Ju. Moshkov, 2004; Mikhail Ju. Moshkov, 1982; Mikhail Ju. Moshkov, 1982] the combinatorial measure of extended teaching dimension of A, ETD(A). He showed that ETD(A) is a lower bound for the depth of the decision tree for A, and then gave an exponential time (ETD(A)/log ETD(A))-approximation algorithm and a polynomial time 2(ln 2)ETD(A)-approximation algorithm.
In this paper we further study the ETD(A) measure and a new combinatorial measure, DEN(A), which we call the density of the set A. We show that DEN(A) <= ETD(A) + 1. We then give two results. The first is that Moshkov's lower bound ETD(A) for the depth of the decision tree for A is greater than the bounds obtained by the classical techniques used in the literature. The second is a polynomial time (ln 2)DEN(A)-approximation (and therefore (ln 2)ETD(A)-approximation) algorithm for the depth of the decision tree of A.
We then apply the above results to learning the class of disjunctions of predicates from membership queries [Nader H. Bshouty et al., 2017]. We show that the ETD of this class is bounded from above by the degree d of its Hasse diagram. We then show that Moshkov's algorithm can be run in polynomial time and is a (d/log d)-approximation algorithm. This gives optimal algorithms when the degree is constant, for example, when learning axis-parallel rays over a constant-dimension space.
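A simple greedy splitter conveys the flavor of depth-minimizing constructions: at each node, query the coordinate that partitions the remaining strings most evenly. This is a generic balanced-split heuristic for illustration, not Moshkov's ETD-based algorithm or the (ln 2)DEN(A)-approximation of the paper.

```python
def build_tree(A):
    """Greedily build a decision tree that isolates every string in A by
    querying one coordinate at a time; returns the tree's depth. Each
    split picks the coordinate whose 0/1 partition of the current set is
    most balanced (an illustrative heuristic only)."""
    A = list(A)
    if len(A) <= 1:
        return 0
    m = len(A[0])
    # Choose the coordinate minimizing the larger side of the partition.
    best_i = min(range(m), key=lambda i: max(
        sum(1 for a in A if a[i] == 0), sum(1 for a in A if a[i] == 1)))
    zeros = [a for a in A if a[best_i] == 0]
    ones = [a for a in A if a[best_i] == 1]
    if not zeros or not ones:   # no coordinate separates the set further
        return 0
    return 1 + max(build_tree(zeros), build_tree(ones))

A = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
print(build_tree(A))  # -> 2, the minimum possible depth for 4 leaves
```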
SQ Lower Bounds for Learning Mixtures of Linear Classifiers
We study the problem of learning mixtures of linear classifiers under Gaussian covariates. Given sample access to a mixture of distributions on R^n of the form (x, y_l), l in [k], where x ~ N(0, I_n) and y_l = sign(<v_l, x>) for an unknown unit vector v_l, the goal is to learn the underlying distribution in total variation distance. Our main result is a Statistical Query (SQ) lower bound suggesting that known algorithms for this problem are essentially best possible, even for the special case of uniform mixtures. In particular, we show that the complexity of any SQ algorithm for the problem is n^{poly(1/Delta) log(k)}, where Delta is a lower bound on the pairwise l_2-separation between the v_l's. The key technical ingredient underlying our result is a new construction of spherical designs that may be of independent interest.
Comment: To appear in NeurIPS 2023.
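The generative model can be sketched in a few lines of numpy. The notation (unit directions v_l, uniform mixing, the separation Delta) follows the abstract as reconstructed above, while the dimensions and sample count are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)

n, k, n_samples = 5, 3, 2000   # illustrative dimensions

# k random unit vectors standing in for the unknown directions v_1,...,v_k.
V = rng.normal(size=(k, n))
V /= np.linalg.norm(V, axis=1, keepdims=True)

# Uniform mixture: pick a component l, draw x ~ N(0, I_n),
# and label by y = sign(<v_l, x>).
comp = rng.integers(0, k, n_samples)
X = rng.normal(size=(n_samples, n))
y = np.sign(np.einsum('ij,ij->i', V[comp], X))

# Pairwise l2-separation between the directions (the Delta in the bound).
delta = min(np.linalg.norm(V[i] - V[j])
            for i in range(k) for j in range(i + 1, k))
```

The lower bound says that, as delta shrinks and k grows, any SQ algorithm needs super-polynomially many (or very precise) statistical queries to recover this distribution.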
Derivation of Constraints from Machine Learning Models and Applications to Security and Privacy
This paper shows how we can combine the power of machine learning with the flexibility of constraints. More specifically, we show how machine learning models can be represented by first-order logic theories, and how to derive these theories. The advantage of this representation is that it can be augmented with additional formulae representing constraints of some kind on the data domain: for instance, new knowledge, potential attacks, or fairness desiderata. We consider various kinds of learning algorithms (neural networks, k-nearest neighbours, decision trees, support vector machines), and for each of them we show how to infer the FOL formulae. We then focus on one particular application domain, namely security and privacy. The idea is to represent the capabilities and goals of the attacker as a set of constraints, and then use a constraint solver (more precisely, a satisfiability-modulo-theories solver) to verify satisfiability. If a solution exists, an attack is possible; otherwise, the system is safe. We show various examples from different areas of security and privacy; specifically, we consider a side-channel attack on a password checker, a malware attack on smart health systems, and a model-inversion attack on a neural network.
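The verification loop can be sketched as follows. A real system would hand the derived first-order theory plus the attacker constraints to an SMT solver; in this toy sketch, a hypothetical two-feature decision tree stands in for the learned model, and a brute-force search over a small finite domain plays the solver's role. All predicates and bounds are invented for illustration.

```python
from itertools import product

# Hypothetical learned model: a tiny decision tree over two integer
# features, rendered as a first-order-style predicate (illustrative only).
def model_says_grant(x1, x2):
    if x1 <= 3:
        return x2 > 7
    return x2 > 2

# Attacker constraint: the inputs the attacker can actually produce.
def attacker_reachable(x1, x2):
    return x1 + x2 <= 8

# Satisfiability check of "model grants access AND input is reachable".
# A witness is a satisfying assignment => an attack exists; None => safe.
witness = next(((x1, x2) for x1, x2 in product(range(10), repeat=2)
                if model_says_grant(x1, x2) and attacker_reachable(x1, x2)),
               None)
print(witness)  # -> (0, 8): the attacker can be granted access
```

Swapping the brute-force search for an SMT solver is what makes the approach scale beyond finite toy domains, since the same predicates can then range over unbounded integers or reals.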