The most consistently-updated list of my publications is my Google Scholar page.
My research focuses on the intersection of machine learning and medicine/healthcare. (I'd rather it be the union, but there are only so many hours in a day.)
On the machine learning side, I'm interested in representation learning, probabilistic modelling, and deep learning on time series. On the medical side, I'm interested in critical care medicine and characterising organ function in ICU patients. Both fit into a broader interest in patient state modelling and forecasting, a natural application area for machine learning.
A non-exhaustive list of projects in roughly reverse-chronological order:
Organ failure prediction in intensive care
- Hyland, Faltys, Hüser, Lyu, Gumbsch et al., "Early prediction of circulatory failure in the intensive care unit using machine learning", Nature Medicine, 2020
Explanation: The objective is to identify patients at risk of near-term (next 8 hours) circulatory failure in an intensive care unit (ICU) setting. Although ICU patients are highly monitored, it's not always possible for clinicians to carefully watch all of them. We speculated that a continuously-running early warning system would be useful for aiding decisions around monitoring priority, especially if it were more precise and more timely than existing alarm systems. The idea was therefore to use a large observational dataset from our collaborators at Bern University Hospital to develop a model of circulatory-failure risk in ICU patients, focusing on precision and timeliness. After many months (years?) of careful data preprocessing and model development we produced such a system, and performed various evaluations to understand its limitations. Ultimately a prospective trial will be needed to assess its medical impact, which I hope my Swiss collaborators on the project are pursuing (I've since gone elsewhere and am no longer actively working on this project).
This was funded by a grant from the Swiss National Science Foundation.
Lots, maybe I'll upload them some time.
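The "precision and timeliness" goals above are usually made concrete with event-based alarm metrics: an alarm is useful if an event follows soon after, and an event is caught if an alarm preceded it within the horizon. Here is a minimal, hypothetical sketch of such metrics — the function names, the exact matching rule, and the 8-hour horizon default are illustrative, not the paper's actual evaluation protocol:

```python
def alarm_recall(event_times, alarm_times, horizon=8.0):
    """Fraction of events preceded by at least one alarm within `horizon` hours.

    A simplified, hypothetical version of event-based recall for an
    early-warning system; all times are in hours.
    """
    caught = 0
    for t_event in event_times:
        # An event counts as "caught" if any alarm fired in the
        # `horizon` hours leading up to it.
        if any(t_event - horizon <= t_a < t_event for t_a in alarm_times):
            caught += 1
    return caught / len(event_times) if event_times else float("nan")


def alarm_precision(event_times, alarm_times, horizon=8.0):
    """Fraction of alarms followed by an event within `horizon` hours."""
    useful = sum(
        any(0 < t_event - t_a <= horizon for t_event in event_times)
        for t_a in alarm_times
    )
    return useful / len(alarm_times) if alarm_times else float("nan")
```

The trade-off between the two is the usual one: a system that alarms constantly catches every event but has terrible precision, which is exactly the failure mode of threshold-based bedside alarms that a learned risk score aims to improve on.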
Recurrent conditional GANs for synthetic medical data
- (preprint) Hyland, Esteban, and Rätsch, "Real-valued (Medical) Time Series Generation with Recurrent Conditional GANs", arXiv 2017.
Explanation: Generative adversarial networks have been used to generate realistic data in domains like images and text, so we used the approach to create synthetic medical time series (specifically, ICU data). We did this because sharing data in medicine is challenging (for good reason), so a sufficiently realistic synthetic dataset could be used for benchmarking and high-level model development. To achieve this we had to build a GAN for time series (a recurrent GAN), come up with a way of evaluating the synthetic data, and consider privacy implications. We empirically evaluated memorisation in the GAN, and also trained it in a differentially private manner to ensure the sensitive training data would not be compromised.
- Poster at the Machine Learning for Health workshop at NIPS 2017, also a single-slide spotlight. I also presented essentially the same poster at the WiML workshop at NIPS 2017.
- Internal talk I gave at the Max Planck - ETH Centre for Learning Systems retreat in October 2017. Some GIFs in the slides are broken, so you can look at the versions in the GitHub repo.
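The core architectural idea of a recurrent generator is that it consumes a fresh noise vector at every time step and emits one sample per step, so its output is a time series rather than a single vector. The sketch below shows an untrained forward pass of such a generator with random weights — all names, dimensions, and the tanh update are illustrative, not the RGAN's actual (LSTM-based, adversarially trained) architecture:

```python
import numpy as np

rng = np.random.default_rng(0)


def recurrent_generator(T, noise_dim=5, hidden_dim=16, out_dim=3):
    """Forward pass of a toy recurrent generator.

    At each time step it consumes a fresh noise vector and emits one
    multivariate sample, producing a (T, out_dim) time series. Weights
    are random here; in a recurrent GAN they would be trained
    adversarially against a recurrent discriminator.
    """
    W_z = rng.normal(scale=0.3, size=(hidden_dim, noise_dim))
    W_h = rng.normal(scale=0.3, size=(hidden_dim, hidden_dim))
    W_o = rng.normal(scale=0.3, size=(out_dim, hidden_dim))
    h = np.zeros(hidden_dim)
    samples = []
    for _ in range(T):
        z = rng.normal(size=noise_dim)       # per-step noise input
        h = np.tanh(W_z @ z + W_h @ h)       # recurrent state update
        samples.append(W_o @ h)              # one multivariate sample
    return np.stack(samples)                 # shape (T, out_dim)
```

Feeding noise at every step (rather than only at t = 0) is what lets the generator produce sequences whose variability does not collapse over time.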
Learning unitary RNNs
- Hyland and Rätsch. "Learning Unitary Operators with Help From u(n)", AAAI 2017.
Explanation: Recurrent neural networks can suffer from exploding/vanishing gradients, especially on long input sequences. Using a unitary or orthogonal transition weight matrix in the RNN can help address this problem, but then you have to enforce unitarity somehow: the unitary group is not closed under addition, so ordinary additive gradient updates take you off the group. Existing work had used a restricted parametrisation of the unitary group; in our paper we came up with a full parametrisation using the Lie algebra u(n) associated with the group of n x n unitary matrices.
- Talk I gave at AAAI 2017.
- Poster at the Women in Machine Learning Workshop at NIPS 2016. I also presented a near-identical poster earlier that year at the Geometry in Machine Learning workshop at ICML 2016.
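The Lie-algebra construction can be sketched concretely: any n x n Hermitian matrix H yields a unitary U = exp(iH), and H has exactly n² real degrees of freedom, so n² unconstrained parameters cover the full group and can be optimised by ordinary gradient methods. The code below is a sketch of this general construction, assuming a naive parameter-to-Hermitian packing — not the paper's specific basis for u(n) or its optimisation scheme:

```python
import numpy as np


def unitary_from_params(theta, n):
    """Map n*n real parameters to an n x n unitary matrix via u(n).

    Builds a Hermitian matrix H from the parameters (n real diagonal
    entries plus real/imaginary parts of the strict upper triangle),
    then returns U = exp(iH), which is always unitary. The packing of
    `theta` here is an illustrative choice, not the paper's basis.
    """
    assert theta.shape == (n * n,)
    diag, off = theta[:n], theta[n:]
    k = n * (n - 1) // 2
    re, im = off[:k], off[k:]
    H = np.zeros((n, n), dtype=complex)
    iu = np.triu_indices(n, k=1)
    H[iu] = re + 1j * im
    H = H + H.conj().T                 # make it Hermitian: H† = H
    H[np.diag_indices(n)] = diag       # real diagonal
    # exp(iH) via the eigendecomposition of the Hermitian H:
    # H = V diag(w) V†  =>  exp(iH) = V diag(exp(iw)) V†.
    w, V = np.linalg.eigh(H)
    return (V * np.exp(1j * w)) @ V.conj().T
```

Because the parameters live in a flat vector space, gradient updates on `theta` never leave the unitary group — which is exactly the property plain updates on the matrix entries lack.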
Word and relationship embeddings for medicine
- Hyland, Karaletsos, and Rätsch, "A Generative Model of Words and Relationships from Multiple Sources", AAAI 2016.
- Hyland, Karaletsos, and Rätsch, "Knowledge Transfer with Medical Language Embeddings", Workshop on Data Mining for Medicine and Healthcare, 2016.
Explanation: I wrote a non-expert explanation of this project/paper. The basic idea is to learn vector representations for concepts, and affine transformations for relationships between concepts, enabling context-dependent similarity. This also lets us exploit knowledge graphs (e.g. the UMLS Metathesaurus) to learn concept embeddings even when we have limited unstructured data. We learn all the embeddings jointly by maximising the likelihood of a Boltzmann distribution using persistent contrastive divergence.
- Internal talk I gave about this work, based on the talk I gave at the International Workshop on Embeddings and Semantics in September 2015 in Alicante, Spain. The talk also contains a small tutorial on distributional semantics.
- Poster at AAAI 2016.
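The "context-dependent similarity" idea above can be sketched in a few lines: represent a relationship as an affine map (W, b), transform the source concept's vector, and compare the result to the target concept. This is only a sketch of the general mechanism — the function name is my own, and the paper's actual model is a joint Boltzmann distribution over words and relationships, not a bare cosine score:

```python
import numpy as np


def relational_similarity(v_a, v_b, W, b):
    """Similarity of concepts a and b *under a given relationship*.

    The relationship is modelled as an affine map (W, b): transform
    v_a, then take cosine similarity with v_b. Different relationships
    (different W, b) give different similarities for the same pair,
    which is the context dependence. Illustrative sketch only.
    """
    u = W @ v_a + b
    return float(u @ v_b / (np.linalg.norm(u) * np.linalg.norm(v_b)))
```

For example, under the identity relationship two identical vectors have similarity 1, while a "swap the axes" relationship makes two orthogonal vectors maximally similar — the same embedding pair scores differently depending on which relationship supplies the affine map.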