Magneto hydrodynamic flow with viscous dissipation effects in the presence of suction and injection
Gyarmati's variational principle, developed from the thermodynamic theory of irreversible processes, is employed to study viscous dissipation effects with uniform suction and injection on an infinite flat plate. The velocity and temperature fields inside the boundary layer are approximated as simple polynomial functions, and the functional of the variational principle is constructed. The Euler-Lagrange equations reduce to simple polynomial equations in terms of the velocity and thermal boundary layer thicknesses. The velocity and temperature profiles, skin friction, and heat transfer with viscous dissipation effects are analyzed and compared with known numerical solutions; the comparison establishes that the accuracy of the present solution is remarkable.
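To make the polynomial-profile idea concrete, here is a minimal sketch in the spirit of classical integral boundary-layer methods (a Karman-Pohlhausen-style cubic profile, not the paper's Gyarmati functional); the profile and the values of U, nu, and x are illustrative assumptions.

    import math

    def cubic_profile_boundary_layer(U=1.0, nu=1.5e-5, x=0.5):
        # Assume the cubic velocity profile u/U = 1.5*eta - 0.5*eta**3,
        # eta = y/delta (an illustrative choice, not the paper's profile).
        # The momentum-integral equation then collapses to an algebraic
        # relation: delta**2 = (280/13) * nu * x / U.
        delta = math.sqrt(280.0 / 13.0 * nu * x / U)  # boundary layer thickness
        Re_x = U * x / nu
        cf = 0.646 / math.sqrt(Re_x)                  # local skin-friction coefficient
        return delta, cf

    print(cubic_profile_boundary_layer())

The reduction of a differential problem to a simple algebraic equation in the boundary layer thickness mirrors how the Euler-Lagrange equations above collapse to polynomial equations.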
Private Multiplicative Weights Beyond Linear Queries
A wide variety of fundamental data analyses in machine learning, such as
linear and logistic regression, require minimizing a convex function defined by
the data. Since the data may contain sensitive information about individuals,
and these analyses can leak that sensitive information, it is important to be
able to solve convex minimization in a privacy-preserving way.
A series of recent results show how to accurately solve a single convex
minimization problem in a differentially private manner. However, the same data
is often analyzed repeatedly, and little is known about solving multiple convex
minimization problems with differential privacy. For simpler data analyses,
such as linear queries, there are remarkable differentially private algorithms
such as the private multiplicative weights mechanism (Hardt and Rothblum, FOCS
2010) that accurately answer exponentially many distinct queries. In this work,
we extend these results to the case of convex minimization and show how to give
accurate and differentially private solutions to *exponentially many* convex
minimization problems on a sensitive dataset.
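As a rough illustration of the mechanism family this abstract builds on, here is a toy sketch of a private-multiplicative-weights-style update loop for linear queries; the noise scale, learning rate eta, and threshold are illustrative assumptions and are not calibrated to any formal privacy guarantee.

    import numpy as np

    rng = np.random.default_rng(0)

    def private_mw(hist, queries, eps=1.0, eta=0.1, threshold=0.05):
        # hist: counts over a finite data universe; queries: rows in [0,1]^|X|.
        # Toy private multiplicative weights loop (illustrative parameters;
        # NOT calibrated for a formal differential privacy guarantee).
        n = hist.sum()
        p = np.full(len(hist), 1.0 / len(hist))   # synthetic distribution
        answers = []
        for q in queries:
            noisy = q @ hist / n + rng.laplace(scale=1.0 / (eps * n))
            synth = q @ p
            if abs(noisy - synth) > threshold:    # "hard" query: update p
                sign = 1.0 if noisy > synth else -1.0
                p = p * np.exp(sign * eta * q)    # multiplicative update
                p = p / p.sum()
                synth = q @ p
            answers.append(synth)
        return np.array(answers), p

The key point, which the present work extends from linear queries to convex minimization, is that the synthetic distribution p is updated only on "hard" queries, so exponentially many queries can be answered with a bounded privacy budget.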
Semantic segmentation of conjunctiva region for non-invasive anemia detection applications
Technology is changing the future of healthcare, and technology-supported non-invasive procedures are increasingly preferred in medical diagnosis. Anemia is a widespread disease affecting the wellbeing of individuals around the world, especially women of childbearing age and children, and addressing it with advanced technology can substantially reduce its prevalence. The objective of this work is to perform segmentation of the conjunctiva region for non-invasive anemia detection applications using deep learning. The proposed U-Net Based Conjunctiva Segmentation Model (UNBCSM) uses a fine-tuned U-Net architecture for effective semantic segmentation of the conjunctiva in digital eye images captured by consumer-grade cameras in an uncontrolled environment. The ground truth for this supervised learning was given as Pascal masks obtained by manual selection of conjunctiva pixels. Image augmentation and pre-processing were performed to increase the data size and the performance of the model. UNBCSM showed good segmentation results, with Intersection over Union (IoU) scores between the ground truth and the segmented mask of 96% and 85.7% for training and validation, respectively.
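Since the evaluation is reported as IoU between the ground-truth and predicted masks, a minimal sketch of that metric for binary masks may be useful (the function name and eps guard are illustrative, not from the paper):

    import numpy as np

    def iou_score(gt_mask, pred_mask, eps=1e-7):
        # Intersection over Union between two binary masks (H x W, values 0/1).
        gt = gt_mask.astype(bool)
        pred = pred_mask.astype(bool)
        inter = np.logical_and(gt, pred).sum()
        union = np.logical_or(gt, pred).sum()
        return inter / (union + eps)  # eps guards against two empty masks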
Time-varying Learning and Content Analytics via Sparse Factor Analysis
We propose SPARFA-Trace, a new machine learning-based framework for
time-varying learning and content analytics for education applications. We
develop a novel message passing-based, blind, approximate Kalman filter for
sparse factor analysis (SPARFA), that jointly (i) traces learner concept
knowledge over time, (ii) analyzes learner concept knowledge state transitions
(induced by interacting with learning resources, such as textbook sections,
lecture videos, etc., or the forgetting effect), and (iii) estimates the content
organization and intrinsic difficulty of the assessment questions. These
quantities are estimated solely from binary-valued (correct/incorrect) graded
learner response data and a summary of the specific actions each learner
performs (e.g., answering a question or studying a learning resource) at each
time instance. Experimental results on two online course datasets demonstrate
that SPARFA-Trace is capable of tracing each learner's concept knowledge
evolution over time, as well as analyzing the quality and content organization
of learning resources, the question-concept associations, and the question
intrinsic difficulties. Moreover, we show that SPARFA-Trace achieves comparable
or better performance in predicting unobserved learner responses than existing
collaborative filtering and knowledge tracing approaches for personalized
education.
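For intuition only, here is a heavily simplified scalar Kalman-style trace of one learner's knowledge of a single concept; this is NOT the SPARFA-Trace message-passing filter (which handles sparse multi-concept factors and binary observations properly via a link function), and all parameters are illustrative assumptions.

    import numpy as np

    def trace_knowledge(responses, studied, q=0.05, r=0.5, gain=0.3):
        # State k: scalar concept knowledge; P: its variance.
        # studied[t]=1 adds a deterministic learning gain; the binary
        # graded response y in {0, 1} is crudely treated as a noisy
        # observation of k (a real system would use a probit/logistic
        # link). q, r, gain are assumed illustration parameters.
        k, P = 0.0, 1.0
        history = []
        for y, s in zip(responses, studied):
            k = k + gain * s      # predict: effect of studying a resource
            P = P + q             # process noise (e.g., forgetting)
            K = P / (P + r)       # Kalman gain
            k = k + K * (y - k)   # update with the graded response
            P = (1 - K) * P
            history.append(k)
        return np.array(history)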
Private Incremental Regression
Data is continuously generated by modern data sources, and a recent challenge
in machine learning has been to develop techniques that perform well in an
incremental (streaming) setting. In this paper, we investigate the problem of
private machine learning where, as is common in practice, the data is not given at
once, but rather arrives incrementally over time.
We introduce the problems of private incremental ERM and private incremental
regression where the general goal is to always maintain a good empirical risk
minimizer for the history observed under differential privacy. Our first
contribution is a generic transformation of private batch ERM mechanisms into
private incremental ERM mechanisms, based on a simple idea of invoking the
private batch ERM procedure at some regular time intervals. We take this
construction as a baseline for comparison. We then provide two mechanisms for
the private incremental regression problem. Our first mechanism is based on
privately constructing a noisy incremental gradient function, which is then
used in a modified projected gradient procedure at every timestep. This
mechanism has an excess empirical risk of $\tilde{O}(\sqrt{d})$, where $d$ is the dimensionality of the data. While from the results of [Bassily et al. 2014] this bound is tight in the worst case, we show that certain geometric properties of the input and constraint set can be used to derive significantly better results for certain interesting regression problems.
Comment: To appear in PODS 201
In-orbit Performance of UVIT on ASTROSAT
We present the in-orbit performance and the first results from the Ultra-Violet Imaging Telescope (UVIT) on ASTROSAT. UVIT consists of two identical 38 cm coaligned telescopes, one for the FUV channel (130-180 nm) and the other for the NUV (200-300 nm) and VIS (320-550 nm) channels, with a field of view of about 28 arcmin. The FUV and the NUV detectors are operated in the high
gain photon counting mode whereas the VIS detector is operated in the low gain
integration mode. The FUV and NUV channels have filters and gratings, whereas
the VIS channel has filters. The ASTROSAT was launched on 28th September 2015.
The performance verification of UVIT was carried out after the opening of the
UVIT doors on 30th November 2015, till the end of March 2016 within the
allotted time of 50 days for calibration. All the on-board systems were found
to be working satisfactorily. During the PV phase, the UVIT observed several
calibration sources to characterise the instrument and a few objects to
demonstrate the capability of UVIT. The resolution of UVIT was found to be about 1.4-1.7 arcsec in the FUV and NUV. The sensitivity in various filters was calibrated using standard stars (white dwarfs) to estimate the zero-point magnitudes as well as the flux conversion factors. The gratings were
also calibrated to estimate their resolution as well as effective area. The
sensitivity of the filters was found to be reduced by up to 15% with respect to the ground calibrations. The sensitivity variation is monitored on a monthly basis. UVIT is now set to deliver science results with its good-resolution, wide-field imaging capability, its ability to sample the UV spectral region using different filters, and its ability to perform variability studies in the UV.
Comment: 10 pages, to appear in SPIE conference proceedings, 201
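For concreteness, the standard photometric conversions behind "zero-point magnitude" and "flux conversion factor" look like the sketch below; the numerical defaults are placeholders, not UVIT's calibrated values, which are filter-specific and come from the white-dwarf standards mentioned above.

    import math

    def counts_to_mag(cps, zero_point=18.0):
        # m = ZP - 2.5*log10(counts per second); zero_point=18.0 is a
        # placeholder, not a calibrated UVIT value.
        return zero_point - 2.5 * math.log10(cps)

    def counts_to_flux(cps, conversion=3.0e-15):
        # Flux density = CPS * flux conversion factor
        # (erg/s/cm^2/A per count/s); illustrative factor only.
        return cps * conversion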
Order-Revealing Encryption and the Hardness of Private Learning
An order-revealing encryption scheme gives a public procedure by which two
ciphertexts can be compared to reveal the ordering of their underlying
plaintexts. We show how to use order-revealing encryption to separate
computationally efficient PAC learning from efficient $(\epsilon, \delta)$-differentially private PAC learning. That is, we construct a concept
class that is efficiently PAC learnable, but for which every efficient learner
fails to be differentially private. This answers a question of Kasiviswanathan
et al. (FOCS '08, SIAM J. Comput. '11).
To prove our result, we give a generic transformation from an order-revealing
encryption scheme into one with strongly correct comparison, which enables the
consistent comparison of ciphertexts that are not obtained as the valid
encryption of any message. We believe this construction may be of independent
interest.
Comment: 28 pages
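To make the encrypt/compare interface concrete, here is a toy Python sketch in the spirit of the small-domain ORE of Chenette et al.; it is a simplified illustration with real leakage (the position of the first differing bit), and it is neither the paper's strongly correct construction nor production cryptography.

    import hmac, hashlib, os

    class ToyORE:
        # Interface sketch for order-revealing encryption: encrypt()
        # produces ciphertexts, and compare() is a PUBLIC procedure that
        # reveals only the ordering of the underlying plaintexts.
        def __init__(self, nbits=16):
            self.key = os.urandom(16)
            self.nbits = nbits

        def encrypt(self, m):
            # One value per bit, keyed on the bit's prefix, reduced mod 3
            # (heavily simplified; leaks the first differing bit index).
            ct = []
            for i in range(self.nbits - 1, -1, -1):
                prefix = m >> (i + 1)
                bit = (m >> i) & 1
                tag = hmac.new(self.key, f"{i}:{prefix}".encode(),
                               hashlib.sha256).digest()[0] % 3
                ct.append((tag + bit) % 3)
            return ct

        @staticmethod
        def compare(c1, c2):
            # Public comparison: -1, 0, or 1, without the secret key.
            for a, b in zip(c1, c2):
                if a != b:
                    return 1 if (a - b) % 3 == 1 else -1
            return 0

    ore = ToyORE()
    assert ToyORE.compare(ore.encrypt(5), ore.encrypt(9)) == -1  # 5 < 9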
On the Round Complexity of the Shuffle Model
The shuffle model of differential privacy was proposed as a viable model for
performing distributed differentially private computations. Informally, the
model consists of an untrusted analyzer that receives messages sent by
participating parties via a shuffle functionality, which potentially
disassociates messages from their senders. Prior work focused on one-round
differentially private shuffle model protocols, demonstrating that
functionalities such as addition and histograms can be performed in this model
with accuracy levels similar to that of the curator model of differential
privacy, where the computation is performed by a fully trusted party.
Focusing on the round complexity of the shuffle model, we ask in this work
what can be computed in the shuffle model of differential privacy with two
rounds. Ishai et al. [FOCS 2006] showed how to use one round of the shuffle to
establish secret keys between every two parties. Using this primitive to
simulate a general secure multi-party protocol increases its round complexity
by one. We show how two parties can use one round of the shuffle to send secret
messages without having to first establish a secret key, hence retaining round
complexity. Combining this primitive with the two-round semi-honest protocol of
Applebaum et al. [TCC 2018], we obtain that every randomized functionality can
be computed in the shuffle model with an honest majority, in merely two rounds.
This includes any differentially private computation. We then move to examine
differentially private computations in the shuffle model that (i) do not
require the assumption of an honest majority, or (ii) do not admit one-round
protocols, even with an honest majority. For that, we introduce two
computational tasks: the common-element problem and the nested-common-element
problem, for which we show separations between one-round and two-round
protocols.
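For intuition, the shuffle functionality itself is simple to sketch: collect the parties' messages, apply a uniformly random permutation, and hand the anonymized multiset to the analyzer. The example input is an illustrative assumption, not one of the paper's protocols.

    import random

    def shuffle_round(party_messages, seed=None):
        # One call to the shuffle functionality: flatten all parties'
        # messages and return them in a uniformly random order, so the
        # analyzer cannot associate any message with its sender.
        rng = random.Random(seed)
        flat = [m for msgs in party_messages for m in msgs]
        rng.shuffle(flat)
        return flat

    # Example: three parties each submit one randomized bit.
    print(shuffle_round([[0], [1], [1]], seed=7))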
The number of matchings in random graphs
We study matchings on sparse random graphs by means of the cavity method. We
first show how the method reproduces several known results about maximum and
perfect matchings in regular and Erdős-Rényi random graphs. Our main new result
is the computation of the entropy, i.e. the leading order of the logarithm of
the number of solutions, of matchings with a given size. We derive both an
algorithm to compute this entropy for an arbitrary graph with a girth that
diverges in the large size limit, and an analytic result for the entropy in
regular and Erdős-Rényi random graph ensembles.
Comment: 17 pages, 6 figures, to be published in Journal of Statistical Mechanics
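To make "the number of matchings with a given size" concrete, here is a brute-force count on a small graph; it is only a sanity check at tiny sizes, whereas the cavity method targets the leading exponential order (the entropy) of these counts on large sparse graphs.

    from collections import Counter
    from itertools import combinations

    def matching_counts(edges):
        # Count matchings of each size in a small graph given as a list
        # of (u, v) edges, by enumerating edge subsets. Exponential in
        # the number of edges, so usable only on tiny instances.
        counts = Counter({0: 1})  # the empty matching
        for k in range(1, len(edges) + 1):
            for subset in combinations(edges, k):
                verts = [v for e in subset for v in e]
                if len(verts) == len(set(verts)):  # no shared endpoint
                    counts[k] += 1
        return counts

    # A 4-cycle: 1 empty matching, 4 single edges, 2 perfect matchings.
    print(matching_counts([(0, 1), (1, 2), (2, 3), (3, 0)]))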