Publications

Bias Mitigated Learning from Differentially Private Synthetic Data: A Cautionary Tale

Published in Uncertainty in Artificial Intelligence (UAI) 2022 (Oral), 2022

Increasing interest in privacy-preserving machine learning has led to new models for synthetic private data generation from undisclosed real data. However, mechanisms of privacy preservation introduce artifacts in the resulting synthetic data that have a significant impact on downstream tasks such as learning predictive models or inference. In particular, bias can affect all analyses, as the synthetic data distribution is an inconsistent estimate of the real-data distribution. We propose several bias mitigation strategies using privatized likelihood ratios that apply generally to differentially private synthetic data generative models. Through large-scale empirical evaluation, we show that bias mitigation provides simple and effective privacy-compliant augmentation for general applications of synthetic data. However, the work highlights that, even after bias correction, significant challenges remain regarding the usefulness of differentially private synthetic data generators for tasks such as prediction and inference.
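
As a rough illustration of the underlying idea, the minimal sketch below reweights synthetic samples by an estimated real-vs-synthetic likelihood ratio before computing a downstream quantity. The logistic-regression ratio estimator and the function names are illustrative assumptions only; the paper's privatized likelihood ratios are not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def importance_weights(real_X, synth_X):
    """Estimate w(x) ~ p_real(x) / p_synth(x) with a real-vs-synthetic classifier."""
    X = np.vstack([real_X, synth_X])
    y = np.concatenate([np.ones(len(real_X)), np.zeros(len(synth_X))])
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    p = clf.predict_proba(synth_X)[:, 1]   # P(real | x) evaluated on synthetic points
    return p / (1 - p)                     # density ratio via Bayes' rule

def reweighted_mean(real_X, synth_X, f_synth):
    """Self-normalised importance-sampling estimate of E_real[f(X)] from synthetic data."""
    w = importance_weights(real_X, synth_X)
    return np.sum(w * f_synth) / np.sum(w)
```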

Recommended citation: S. Ghalebikesabi, H. Wilde, J. Jewson, S. Vollmer, A. Doucet, C. Holmes (2021). "Bias Mitigated Learning from Differentially Private Synthetic Data: A Cautionary Tale." arXiv preprint arXiv:2108.10934. https://arxiv.org/pdf/2108.10934.pdf

On Locality of Local Explanation Models

Published in NeurIPS 2021, 2021

Shapley values provide model-agnostic feature attributions for a model outcome at a particular instance by simulating feature absence under a global population distribution. The use of a global population can lead to potentially misleading results when local model behaviour is of interest. Hence we consider the formulation of neighbourhood reference distributions that improve the local interpretability of Shapley values. By doing so, we find that the Nadaraya-Watson estimator, a well-studied kernel regressor, can be expressed as a self-normalised importance sampling estimator. Empirically, we observe that Neighbourhood Shapley values identify meaningful sparse feature relevance attributions that provide insight into local model behaviour, complementing conventional Shapley analysis. They also increase on-manifold explainability and robustness to the construction of adversarial classifiers.
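
The connection to self-normalised importance sampling can be made concrete with a small sketch of the Nadaraya-Watson estimator under a Gaussian kernel; the bandwidth h and the function name are illustrative assumptions, and the neighbourhood-Shapley construction itself is not reproduced here.

```python
import numpy as np

def nadaraya_watson(x_query, X, y, h=1.0):
    """Kernel-weighted local mean of y around x_query (self-normalised weights)."""
    w = np.exp(-np.sum((X - x_query) ** 2, axis=1) / (2 * h ** 2))  # Gaussian kernel weights
    return np.sum(w * y) / np.sum(w)                                # weights normalised to sum to 1
```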

Recommended citation: S. Ghalebikesabi, L. Ter-Minassian, K. Díaz-Ordaz, C. Holmes (2021). "On Locality of Local Explanation Models." 35th Conference on Neural Information Processing Systems (NeurIPS 2021). https://arxiv.org/pdf/2106.14648.pdf

Identification of Underlying Disease Domains by Longitudinal Latent Factor Analysis for Secukinumab Treated Patients in Psoriatic Arthritis and Rheumatoid Arthritis Trials

Published in ACR Convergence 2021, 2021

Secukinumab is a fully human monoclonal antibody approved for the treatment of several related autoinflammatory diseases, including psoriasis, psoriatic arthritis (PsA) and ankylosing spondylitis.1 While a single clinical endpoint may be chosen to evaluate treatment effect, the natural extension of this is to capture a clinical trial's entire longitudinal response profile, made up of multifaceted signs and symptoms. The objective of this analysis is to characterize disease progression and treatment response to secukinumab across a wide range of clinical variables, thereby complementing traditional analyses of standard endpoints in PsA and rheumatoid arthritis (RA).

Recommended citation: Zhu X, Falck F, Ghalebikesabi S, Kormaksson M, Vandemeulebroecke M, Zhang C, Santos L, Hei Kwok C, West D, Mallon A, Martin R, Readie A, Gandhi K, Ligozio G, Nicholson G. (2021). "Identification of Underlying Disease Domains by Longitudinal Latent Factor Analysis for Secukinumab Treated Patients in Psoriatic Arthritis and Rheumatoid Arthritis Trials." ACR Convergence 2021. https://acrabstracts.org/abstract/identification-of-underlying-disease-domains-by-longitudinal-latent-factor-analysis-for-secukinumab-treated-patients-in-psoriatic-arthritis-and-rheumatoid-arthritis-trials/

Density Estimation with Autoregressive Bayesian Predictives

Published in arXiv, 2022

Bayesian methods are a popular choice for statistical inference in small-data regimes due to the regularization effect induced by the prior, which serves to counteract overfitting. In the context of density estimation, the standard Bayesian approach is to target the posterior predictive. In general, direct estimation of the posterior predictive is intractable and so methods typically resort to approximating the posterior distribution as an intermediate step. The recent development of recursive predictive copula updates, however, has made it possible to perform tractable predictive density estimation without the need for posterior approximation. Although these estimators are computationally appealing, they tend to struggle on non-smooth data distributions. This is largely due to the comparatively restrictive form of the likelihood models from which the proposed copula updates were derived. To address this shortcoming, we consider a Bayesian nonparametric model with an autoregressive likelihood decomposition and Gaussian process prior, which yields a data-dependent bandwidth parameter in the copula update. Further, we formulate a novel parameterization of the bandwidth using an autoregressive neural network that maps the data into a latent space, and is thus able to capture more complex dependencies in the data. Our extensions increase the modelling capacity of existing recursive Bayesian density estimators, achieving state-of-the-art results on tabular data sets.
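
For intuition, here is a minimal one-dimensional sketch of a recursive copula-style predictive update on a grid; the fixed correlation rho, the learning-rate sequence, and the grid-based numerics are simplifying assumptions for illustration, not the data-dependent, autoregressive bandwidth proposed in the paper.

```python
import numpy as np
from scipy.stats import norm

def gaussian_copula_density(u, v, rho):
    """Bivariate Gaussian copula density c_rho(u, v)."""
    a, b = norm.ppf(u), norm.ppf(v)
    return np.exp(-(rho**2 * (a**2 + b**2) - 2 * rho * a * b)
                  / (2 * (1 - rho**2))) / np.sqrt(1 - rho**2)

def recursive_density_estimate(x_obs, grid, rho=0.8):
    """Start from a N(0, 1) predictive and update it after each observation."""
    p = norm.pdf(grid)                                      # initial predictive p_0
    dx = grid[1] - grid[0]
    for i, x_new in enumerate(x_obs, start=1):
        cdf = np.clip(np.cumsum(p) * dx, 1e-6, 1 - 1e-6)   # numerical CDF of current predictive
        u = cdf                                             # CDF evaluated on the grid
        v = np.interp(x_new, grid, cdf)                     # CDF evaluated at the new point
        alpha = (2 - 1 / i) / (i + 1)                       # illustrative learning-rate sequence
        p = (1 - alpha) * p + alpha * gaussian_copula_density(u, v, rho) * p
        p /= np.trapz(p, grid)                              # renormalise on the grid
    return p
```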

Recommended citation: S. Ghalebikesabi, C. Holmes, E. Fong, B. Lehmann (2022). "Density Estimation with Autoregressive Bayesian Predictives." arXiv preprint arXiv:2206.06462. https://arxiv.org/abs/2206.06462

Deep Generative Pattern-Set Mixture Models for Nonignorable Missingness Imputation

Published in Proceedings of the 24th International Conference on Artificial Intelligence and Statistics (AISTATS), 2021

We propose a variational autoencoder architecture to model both ignorable and nonignorable missing data using pattern-set mixtures as proposed by Little (1993). Our model explicitly learns to cluster the missing data into missingness pattern sets based on the observed data and missingness masks. Underpinning our approach is the assumption that the data distribution under missingness is probabilistically semi-supervised by samples from the observed data distribution. Our setup trades off the characteristics of ignorable and nonignorable missingness and can thus be applied to data of both types. We evaluate our method on a wide range of data sets with different types of missingness and achieve state-of-the-art imputation performance. Our model outperforms many common imputation algorithms, especially when the amount of missing data is high and the missingness mechanism is nonignorable.
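
A minimal PyTorch sketch of a mask-conditioned variational autoencoder imputer is given below; the layer sizes, the Gaussian reconstruction loss, and the omission of the pattern-set mixture components are simplifying assumptions made for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

class MaskedVAE(nn.Module):
    """Toy VAE that encodes zero-filled data together with its missingness mask."""
    def __init__(self, x_dim, z_dim=16, h_dim=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(2 * x_dim, h_dim), nn.ReLU())
        self.mu = nn.Linear(h_dim, z_dim)
        self.logvar = nn.Linear(h_dim, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim))

    def forward(self, x, mask):
        h = self.enc(torch.cat([x * mask, mask], dim=-1))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterisation trick
        x_hat = self.dec(z)                                       # imputations for all entries
        recon = ((x_hat - x) ** 2 * mask).sum(-1).mean()          # loss on observed entries only
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1).mean()
        return recon + kl, x_hat
```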

Recommended citation: S. Ghalebikesabi, R. Cornish, L. Kelly, C. Holmes (2021). "Deep Generative Pattern-Set Mixture Models." Proceedings of the 24th International Conference on Artificial Intelligence and Statistics (AISTATS). http://proceedings.mlr.press/v130/ghalebikesabi21a/ghalebikesabi21a.pdf