Distributed Hypothesis Testing with Privacy Constraints
We revisit the distributed hypothesis testing (or hypothesis testing with
communication constraints) problem from the viewpoint of privacy. Instead of
observing the raw data directly, the transmitter observes a sanitized or
randomized version of it. We impose an upper bound on the mutual information
between the raw and randomized data. Under this scenario, the receiver, which
is also provided with side information, is required to make a decision on
whether the null or alternative hypothesis is in effect. We first provide a
general lower bound on the type-II exponent for an arbitrary pair of
hypotheses. Next, we show that if the distribution under the alternative
hypothesis is the product of the marginals of the distribution under the null
(i.e., testing against independence), then the exponent is known exactly.
Moreover, we show that the strong converse property holds. Using ideas from
Euclidean information theory, we also provide an approximate expression for the
exponent when the communication rate is low and the privacy level is high.
Finally, we illustrate our results with a binary and a Gaussian example.
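For the binary example, the privacy constraint is an upper bound on the mutual information between the raw data and its sanitized version. A minimal sketch of how that leakage can be evaluated, assuming a uniform binary source passed through a binary symmetric channel with crossover probability eps (the function names and this particular sanitization mechanism are illustrative, not taken from the paper):

```python
import math

def h2(p):
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def privacy_leakage(eps):
    """Mutual information I(X; Z) in bits when a uniform binary source X
    is sanitized by a binary symmetric channel with crossover eps.
    For uniform X the output Z is also uniform, so I(X; Z) = 1 - h2(eps)."""
    return 1.0 - h2(eps)
```

At eps = 0.5 the sanitized observation is independent of the source (zero leakage, maximum privacy); at eps = 0 the full bit is revealed, so the mutual-information bound directly trades privacy against the achievable type-II exponent.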
Semantic Wide and Deep Learning for Detecting Crisis-Information Categories on Social Media
When crises hit, many flock to social media to share or consume information related to the event. Social media posts during crises tend to provide valuable reports on affected people, donation offers, help requests, advice provision, etc. Automatically identifying the category of information (e.g., reports on affected individuals, donations and volunteers) contained in these posts is vital for their efficient handling and consumption by affected communities and concerned organisations. In this paper, we introduce Sem-CNN; a wide and deep Convolutional Neural Network (CNN) model designed for identifying the category of information contained in crisis-related social media content. Unlike previous models, which mainly rely on the lexical representations of words in the text, the proposed model integrates an additional layer of semantics, representing the named entities in the text, into a wide and deep CNN network. Results show that the Sem-CNN model consistently outperforms baselines consisting of statistical and non-semantic deep learning models.
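The wide-and-deep combination can be sketched as follows: a deep branch runs a 1-D convolution with ReLU and max-over-time pooling over the word embeddings, while a wide branch appends a vector of named-entity features; the two are concatenated before the final classifier. An illustrative numpy sketch (the shapes, names, and feature encoding are assumptions, not the paper's implementation):

```python
import numpy as np

def conv1d_maxpool(emb, filters):
    """Deep branch: 1-D convolution over word embeddings followed by
    ReLU and max-over-time pooling. emb: (seq_len, d); filters: (n_f, k, d)."""
    n_f, k, d = filters.shape
    n_pos = emb.shape[0] - k + 1
    feats = np.empty((n_pos, n_f))
    for i in range(n_pos):
        # each filter responds to one k-gram window of embeddings
        feats[i] = np.tensordot(filters, emb[i:i + k], axes=([1, 2], [0, 1]))
    return np.maximum(feats, 0.0).max(axis=0)

def sem_cnn_features(word_emb, entity_feats, filters):
    """Concatenate the deep lexical features with the wide semantic
    (named-entity) features before a final classification layer."""
    return np.concatenate([conv1d_maxpool(word_emb, filters), entity_feats])
```

The concatenated vector would then feed a softmax layer over the crisis-information categories; the wide entity features let the classifier memorize strong entity-category associations while the CNN generalizes over wording.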
Organizational Pathology in Staff Department of Shiraz University of Medical Sciences based on Three-branch Model
Introduction: Pathology of organizations is one of the key actions that makes university officials aware of the strengths, weaknesses, opportunities and challenges of an organization and allows them to take the right actions accordingly. Therefore, this study examined the pathology of human resources in the staff departments of Shiraz University of Medical Sciences.
Methods: This was a cross-sectional study in which two hundred employees participated. Random sampling was used, and the data were collected with an organizational pathology questionnaire based on the three-branch model. Data were analyzed using SPSS software.
Results: Behavioral factors (mean = 2.73) and contextual factors (mean = 3.34) had the highest and the lowest impacts, respectively, on human resources malfunction in Shiraz University of Medical Sciences. Among the subsets of structural factors, the payment system (mean rank = 2.5, mean = 2.21±0.72, p ≤ 0.001); among behavioral factors, job security (mean rank = 2.03, mean = 2.28±0.93, p ≤ 0.001); and among the subsets of contextual factors, customer orientation (mean rank = 1.31, mean = 3.07±0.70, p = 0.19) had the greatest impact on organizational pathology in this university.
Conclusion: Considering that behavioral factors had the greatest impact on the damage, the university should adopt programs and policies aimed at creating an appropriate organizational culture, increasing motivation and job satisfaction, adopting effective leadership, providing opportunities for staff training and development, and improving job security. It is worth mentioning that solving some of these issues is beyond the powers of the executive organs, so addressing these types of problems at the macro-organizational level in the country is of vital importance.
Multi-spectral Entropy Constrained Neural Compression of Solar Imagery
Missions studying the dynamic behaviour of the Sun are designed to capture
multi-spectral images of the Sun and transmit them to the ground station on a
daily basis. To make transmission efficient and feasible, image compression
systems need to be exploited. Recently, successful end-to-end optimized neural
network-based image compression systems have shown great potential to be used
in an ad-hoc manner. In this work, we propose a transformer-based
multi-spectral neural image compressor that efficiently captures both
intra- and inter-wavelength redundancies. To overcome the locality of the
window-based self-attention mechanism, we propose an inter-window aggregated
token multi-head self-attention. Additionally, to make the neural compressor
autoencoder shift invariant, a randomly shifted window attention mechanism is
used, which makes the transformer blocks insensitive to translations in their
input domain. We demonstrate that the proposed approach not only outperforms
conventional compression algorithms but also better decorrelates images along
the multiple wavelengths compared to single-spectral compression.
Comment: Accepted to the 22nd IEEE International Conference on Machine Learning and Applications (ICMLA) 2023
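The randomly shifted window idea can be illustrated by cyclically rolling the feature map before partitioning it into non-overlapping windows, so that window boundaries fall at different positions on each forward pass. A sketch under assumed shapes (not the paper's actual code):

```python
import numpy as np

def shifted_window_partition(x, win, shift):
    """Cyclically shift an (H, W, C) feature map, then split it into
    non-overlapping win x win windows. Sampling `shift` at random during
    training makes window-based attention insensitive to translations."""
    x = np.roll(x, shift=(-shift, -shift), axis=(0, 1))
    H, W, C = x.shape
    x = x.reshape(H // win, win, W // win, win, C)
    # -> (num_windows, win, win, C)
    return x.transpose(0, 2, 1, 3, 4).reshape(-1, win, win, C)
```

Each window would then be flattened and fed to a self-attention block; rolling back by the same shift after the block restores spatial alignment.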
Context-Aware Neural Video Compression on Solar Dynamics Observatory
NASA's Solar Dynamics Observatory (SDO) mission collects large data volumes
of the Sun's daily activity. Data compression is crucial for space missions to
reduce data storage and video bandwidth requirements by eliminating
redundancies in the data. In this paper, we present a novel neural
Transformer-based video compression approach specifically designed for the SDO
images. Our primary objective is to efficiently exploit the temporal and
spatial redundancies inherent in solar images to obtain a high compression
ratio. Our proposed architecture benefits from a novel Transformer block called
Fused Local-aware Window (FLaWin), which incorporates window-based
self-attention modules and an efficient fused local-aware feed-forward (FLaFF)
network. This architectural design allows us to simultaneously capture
short-range and long-range information while facilitating the extraction of
rich and diverse contextual representations. Moreover, this design choice
results in reduced computational complexity. Experimental results demonstrate
the significant contribution of the FLaWin Transformer block to the compression
performance, outperforming conventional hand-engineered video codecs such as
H.264 and H.265 in terms of rate-distortion trade-off.
Comment: Accepted to the 22nd IEEE International Conference on Machine Learning and Applications (ICMLA) 2023 - Selected for Oral Presentation
Neural-based Compression Scheme for Solar Image Data
Studying the solar system, and especially the Sun, relies on the data gathered
daily from space missions. These missions are data-intensive, and compressing
the data so that it can be transferred efficiently to the ground station
involves a trade-off. Stronger compression methods, by distorting the data,
can increase data throughput at the cost of accuracy, which could affect
scientific analysis of the data. On the other hand, preserving subtle details
in the compressed data requires a high amount of data to be transferred,
reducing the desired gains from compression. In this work, we propose a neural
network-based lossy compression method to be used in NASA's data-intensive
imagery missions. We chose NASA's SDO mission which transmits 1.4 terabytes of
data each day as a proof of concept for the proposed algorithm. In this work,
we propose an adversarially trained neural network, equipped with local and
non-local attention modules to capture both the local and global structure of
the image resulting in a better trade-off in rate-distortion (RD) compared to
conventional hand-engineered codecs. The RD variational autoencoder used in
this work is jointly trained with a channel-dependent entropy model as a shared
prior between the analysis and synthesis transforms to make the entropy coding
of the latent code more effective. Our neural image compression algorithm
outperforms currently-in-use and state-of-the-art codecs such as JPEG and
JPEG-2000 in terms of the RD performance when compressing extreme-ultraviolet
(EUV) data. As a proof of concept for use of this algorithm in SDO data
analysis, we have performed coronal hole (CH) detection using our compressed
images, and generated consistent segmentations, even at a compression rate of
bits per pixel (compared to 8 bits per pixel on the original data)
using EUV data from SDO.
Comment: Accepted for publication in IEEE Transactions on Aerospace and Electronic Systems (TAES). arXiv admin note: text overlap with arXiv:2210.0647
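The joint training objective behind such rate-distortion autoencoders is the usual R + λD trade-off: the rate term is the expected negative log-likelihood of the quantized latents under the learned entropy model, and the distortion term penalizes reconstruction error. A minimal sketch (the function name, λ, and the MSE distortion are illustrative assumptions):

```python
import numpy as np

def rd_loss(likelihoods, x, x_hat, lam):
    """Rate-distortion objective R + lam * D.
    likelihoods: probabilities the entropy model assigns to the quantized
    latent symbols; rate is their mean negative log2-likelihood (bits per
    symbol). Distortion is the mean squared reconstruction error."""
    rate = float(-np.log2(likelihoods).mean())
    distortion = float(np.mean((x - x_hat) ** 2))
    return rate + lam * distortion
```

Training the entropy model jointly with the analysis/synthesis transforms lowers the rate term because the shared prior matches the actual latent distribution, making the arithmetic coding of the latents more effective.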
The Astrophysical Distance Scale: V. A 2% Distance to the Local Group Spiral M33 via the JAGB Method, Tip of the Red Giant Branch, and Leavitt Law
The J-region asymptotic giant branch (JAGB) method is a new standard candle
that is based on the stable intrinsic J-band magnitude of color-selected carbon
stars, and has a precision comparable to other primary distance indicators such
as Cepheids and the TRGB. We further test the accuracy of the JAGB method in
the Local Group galaxy M33. M33's moderate inclination, low metallicity, and
close proximity make it an ideal laboratory for tests of systematics in local
distance indicators. Using high-precision optical BVI and near-infrared JHK
photometry, we explore the application of three independent distance
indicators: the JAGB method, the Cepheid Leavitt Law, and the TRGB. We find:
μ₀(TRGB I) = 24.72 +/- 0.02 (stat) +/- 0.07 (sys) mag, μ₀(TRGB NIR)
= 24.72 +/- 0.04 (stat) +/- 0.10 (sys) mag, μ₀(JAGB) = 24.67 +/- 0.03
(stat) +/- 0.04 (sys) mag, μ₀(Cepheid) = 24.71 +/- 0.04 (stat) +/- 0.01
(sys) mag. For the first time, we also directly compare a JAGB distance using
ground-based and space-based photometry. We measure: μ₀(JAGB F110W) =
24.71 +/- 0.06 (stat) +/- 0.05 (sys) mag using the (F814W-F110W) color
combination to effectively isolate the JAGB stars. In this paper, we measure a
distance to M33 accurate to 2% and provide further evidence that the JAGB
method is a powerful extragalactic distance indicator that can effectively
probe a local measurement of the Hubble constant using space-based
observations. We expect to measure the Hubble constant via the JAGB method in
the near future, using observations from JWST.
Comment: 23 pages, 14 figures, accepted to the ApJ. v2 is exactly the same as v1 except for a fixed minor typo found while looking at the proof
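The agreement of the four moduli can be illustrated with an inverse-variance weighted average (statistical and systematic errors added in quadrature), and the result converted back to a physical distance via the distance-modulus relation μ = 5 log10(d / 10 pc). This is an illustrative combination, not necessarily the paper's averaging scheme:

```python
import math

def combine_moduli(measurements):
    """Inverse-variance weighted mean of distance moduli; each entry is
    (mu, stat_err, sys_err), with the two errors added in quadrature."""
    weights = [1.0 / (s ** 2 + y ** 2) for _, s, y in measurements]
    mu = sum(w * m for w, (m, _, _) in zip(weights, measurements)) / sum(weights)
    return mu, math.sqrt(1.0 / sum(weights))

def modulus_to_kpc(mu):
    """Invert the distance modulus mu = 5*log10(d / 10 pc) to d in kpc."""
    return 10.0 ** (mu / 5.0 + 1.0) / 1000.0
```

Plugging in the four moduli above gives a combined modulus near 24.70 mag, i.e. a distance of roughly 870 kpc to M33, consistent with the quoted 2% accuracy.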
A Comparison of Fear of Childbirth and Labor Pain Intensity among Primiparous and Multiparous Women: A Cross-Sectional Study
Background & aim: Fear of Childbirth (FOC) can be seen as an anxiety disorder or as a phobia that women experience in relation to pregnancy and childbirth. We conducted this study to compare the intensity of labor pain with the FOC in multiparous and primiparous women.
Methods: This cross-sectional study was conducted using convenience sampling on 432 pregnant women in Babol, Iran, between 2018 and 2019. All pregnant women completed the demographic and FOC questionnaires, and labor pain intensity was measured four times using a visual analog scale (VAS). Data were analyzed using SPSS version 16 software with descriptive and analytical indices.
Results: FOC in primiparous women was significantly higher than in multiparous mothers (MD: 12.08, P<0.001). The multivariate linear regression test showed that, after adjusting for the intervening and obstetric variables, there was no statistically significant difference between primiparous and multiparous women in the intensity of pain in the active phase of labor (MD: 0.07; 95% CI: -0.32, 0.47; P=0.71) or the expulsion of the fetus (MD: 0.02; 95% CI: -0.38, 0.44; P=0.89). However, a statistically significant difference was found in placental discharge (MD: 0.52; 95% CI: 0.01, 1.02; P=0.043).
Conclusion: FOC in primiparous women was significantly higher than that of multiparous women after adjusting for the intervening variables. The results of this study provide basic information for policy makers to pay more attention to reducing the fear of childbirth, especially in primiparous women.
Real-time traffic event detection using Twitter data
Incident detection is an important component of intelligent transport systems and plays a key role in urban traffic management and the provision of traveller information services. Due to its importance, a large number of researchers have developed different algorithms for real-time incident detection. However, the main limitation of existing techniques is that they do not work well in conditions where random factors could influence traffic flows. Twitter is a valuable source of information as its users post events as they happen or shortly after. Therefore, Twitter data have been used to predict a wide variety of real-time outcomes. This paper presents a methodology for real-time traffic event detection using Twitter. Tweets are obtained through the Twitter streaming application programming interface in real time with a geolocation filter. The authors then used natural language processing techniques to process the tweets before feeding them into a text classification algorithm that identifies whether each tweet is traffic-related or not. The authors implemented their methodology in the West Midlands region in the UK and obtained an overall accuracy of 92.86%.
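The pipeline described above, cleaning tweet text and then classifying it as traffic-related or not, can be sketched with a small preprocessing step and a from-scratch multinomial Naive Bayes classifier (the classifier choice, regexes, and example tweets are illustrative assumptions; the paper's actual model may differ):

```python
import math
import re
from collections import Counter

def preprocess(tweet):
    """Lowercase, strip URLs and @mentions, drop '#', and tokenize."""
    t = tweet.lower()
    t = re.sub(r"https?://\S+|@\w+", " ", t)
    return re.findall(r"[a-z']+", t.replace("#", " "))

class NaiveBayes:
    """Minimal multinomial Naive Bayes with Laplace smoothing."""
    def fit(self, docs, labels):
        self.labels = set(labels)
        self.priors = Counter(labels)
        self.counts = {c: Counter() for c in self.labels}
        self.vocab = set()
        for tokens, c in zip(docs, labels):
            self.counts[c].update(tokens)
            self.vocab.update(tokens)
        return self

    def predict(self, tokens):
        def log_posterior(c):
            total = sum(self.counts[c].values()) + len(self.vocab)
            lp = math.log(self.priors[c])
            for w in tokens:
                lp += math.log((self.counts[c][w] + 1) / total)
            return lp
        return max(self.labels, key=log_posterior)
```

In a deployed pipeline the classifier would be trained on a labelled corpus of geolocated tweets and applied to the streaming-API feed in real time.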
Dapagliflozin and Days of Full Health Lost in the DAPA-HF Trial
Background: Conventional time-to-first-event analyses cannot incorporate recurrent hospitalizations and patient well-being in a single outcome. Objectives: To overcome this limitation, we tested an integrated measure that includes days lost from death and hospitalization, and additional days of full health lost through diminished well-being. Methods: The effect of dapagliflozin on this integrated measure was assessed in the DAPA-HF (Dapagliflozin and Prevention of Adverse Outcomes in Heart Failure) trial, which examined the efficacy of dapagliflozin, compared with placebo, in patients with NYHA functional class II to IV heart failure and a left ventricular ejection fraction ≤40%. Results: Over 360 days, patients in the dapagliflozin group (n = 2,127) lost 10.6 ± 1.0 (2.9%) of potential follow-up days through cardiovascular death and heart failure hospitalization, compared with 14.4 ± 1.0 days (4.0%) in the placebo group (n = 2,108), and this component of all measures of days lost accounted for the greatest between-treatment difference (−3.8 days [95% CI: −6.6 to −1.0 days]). Patients receiving dapagliflozin also had fewer days lost to death and hospitalization from all causes vs placebo (15.5 ± 1.1 days [4.3%] vs 20.3 ± 1.1 days [5.6%]). When additional days of full health lost (ie, adjusted for Kansas City Cardiomyopathy Questionnaire–overall summary score) were added, total days lost were 110.6 ± 1.6 days (30.7%) with dapagliflozin vs 116.9 ± 1.6 days (32.5%) with placebo. The difference in all measures between the 2 groups increased over time (ie, days lost by death and hospitalization −0.9 days [−0.7%] at 120 days, −2.3 days [−1.0%] at 240 days, and −4.8 days [−1.3%] at 360 days). Conclusions: Dapagliflozin reduced the total days of potential full health lost due to death, hospitalizations, and impaired well-being, and this benefit increased over time during the first year.
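The percentages quoted above are simply days lost expressed as a share of the 360 potential follow-up days; a one-line sketch (the function name is illustrative):

```python
def pct_days_lost(days_lost, follow_up_days=360):
    """Days of potential full health lost, as a percentage of follow-up."""
    return 100.0 * days_lost / follow_up_days
```

For example, 10.6 of 360 days is about 2.9%, matching the figure reported for the dapagliflozin group.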