Breckenridge. This week, our team attended the first conference for Artificial Intelligence in Epilepsy in Breckenridge, Colorado. I was honored to be one of the two speakers representing the epilepsy genetics field, trying to build a bridge between the impressive body of research in machine learning and EEG analysis and our current progress and research efforts in the genetic epilepsies. In this blog post, I would like to summarize some of my impressions from this meeting and discuss two aspects where rare disease research and machine learning already intersect, namely seizure forecasting and virtual clinical trials.
AI in Epilepsy. The first congress on AI in Epilepsy and Neurological Disorders took place in Breckenridge, Colorado from March 7-10. While the focus of the conference may sound somewhat specialized at first, it is impressive to see how many fields in epilepsy research converge on the use of machine learning (ML) and artificial intelligence (AI). While I would have considered myself an outsider in this field, I was able to catch up with the progress that is particularly prominent in imaging and EEG research. During this meeting, one particular term used in the field struck me: seizure forecasting. Seizure forecasting refers to predicting future seizures based on existing data and is used in AI and ML research to refer to long-term prediction. This stands in contrast to seizure prediction, which aims to calculate the likelihood of a seizure in the immediate future and can be agnostic to the temporal dimension. The reason this terminology struck me is that we had unknowingly picked the same language for the long-term seizure forecasting model based on phenotypic similarity that we presented at this conference.
Convergence. I cannot exclude that I had unconsciously picked up language that I had heard at AES or at other meetings, but the parallel use of the term “forecasting” is telling. It shows that we have finally arrived at a place where the concepts used in the ML/AI field can be integrated with what I refer to as phenotype science. However, there are some important differences that I pointed out during my talk. I challenged the audience to imagine EEG research with (1) the number of electrodes scaled up by a factor of 100, (2) each electrode working only 5% of the time, and (3) entire age ranges missing for individuals. These particular features of phenotype analysis, namely (1) dimensionality, (2) open-world assumptions, and (3) EMR usage over time, make clinical data difficult to interpret. This is also the rationale behind why I typically emphasize the role of data harmonization and data integration in the rare disease space over the application of machine learning tools: if there is no harmonized data, there is no data for learning.
Language. I devoted a subset of my talk to the topic of phenotype language. In almost poetic terms, we need to learn to listen to the “language of the phenotypes,” understanding the patterns that emerge when harmonizing clinical data. This underlying thought explains why our lab focuses so much on the use of the Human Phenotype Ontology (HPO). It is a “good-enough” ontology that lends itself to computational research due to its simplicity. However, even with this ontology, data can get complex quickly. Even for seemingly rare disorders such as STXBP1-related disorders or SYNGAP1-related disorders, we are looking at, and attempting to make sense of, hundreds of thousands of data points. For even rarer conditions such as CHD2-related disorders, we can use Real-World Data (RWD) mapped to HPO to provide a quick overview of an entire disease, as we demonstrated in our 3-week hackathon earlier this year.
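To make the idea of “listening to the language of the phenotypes” a little more concrete, here is a minimal sketch of how HPO-coded clinical data can become computable. The ontology fragment and term IDs below are illustrative stand-ins rather than a faithful slice of the real HPO, and the Jaccard index is just one simple way to compare two individuals' ancestor-expanded term sets:

```python
# Toy ontology fragment: each term maps to its set of ancestors.
# These IDs are invented placeholders, not real HPO identifiers.
ANCESTORS = {
    "HP:SEIZURE": {"HP:NEURO", "HP:ROOT"},
    "HP:FOCAL_SEIZURE": {"HP:SEIZURE", "HP:NEURO", "HP:ROOT"},
    "HP:DD": {"HP:NEURO", "HP:ROOT"},
}

def annotate(terms):
    """Expand an individual's HPO terms with all their ancestors."""
    full = set(terms)
    for term in terms:
        full |= ANCESTORS.get(term, set())
    return full

def jaccard_similarity(a, b):
    """Phenotypic similarity of two individuals as the Jaccard index
    of their ancestor-expanded term sets."""
    a, b = annotate(a), annotate(b)
    return len(a & b) / len(a | b)

# Two individuals share developmental delay; one has a more
# specific seizure term that still overlaps via its ancestors.
p1 = {"HP:FOCAL_SEIZURE", "HP:DD"}
p2 = {"HP:SEIZURE", "HP:DD"}
print(jaccard_similarity(p1, p2))  # → 0.8
```

The point of the ancestor expansion is that two clinicians coding the same child at different levels of detail still produce overlapping, comparable data, which is exactly what harmonization buys us.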
Atomism. I also focused on the underlying philosophy of our lab when it comes to rare disease research. We prefer phenotypic atomism over machine learning mysticism, and virtually none of the tools that we use are black boxes. We have built all of our tools and methodologies ourselves, such as our similarity algorithms or HPO analyses. The underlying sentiment at an AI conference is actually quite different from what you might expect: the AI field itself is quite skeptical of some of its own developments, and references to black boxes, lack of replication, and explainable AI (XAI) came up frequently during the talks.
Seizure forecasting. Toward the end of my presentation, I demonstrated some applications of machine learning in the rare disease field that we have recently developed, namely seizure forecasting and virtual clinical trials. Using RWD, we were able to make reasonable estimates of long-term seizure outcomes in STXBP1-related disorders based solely on individuals’ seizure histories in the first 12 months of life. Furthermore, we were able to demonstrate that future clinical trials will be highly dependent on selecting the optimal age range, given the wide range of outcomes in STXBP1-related disorders. As this work has not yet been published or peer reviewed, I do not want to share too many of our results in this blog post, but simply point out that we are finally arriving at a point where we can get the upper hand over the variability of neurodevelopmental disorders, which will be critical for trial readiness.
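To illustrate the general shape of this kind of forecasting, here is a deliberately simplified sketch: a new individual's first-12-months seizure history is compared against known histories, and the outcome of the most similar individual is used as the forecast. The cohort data, outcome labels, and nearest-neighbor approach are all invented for illustration; this is not the model we presented at the conference:

```python
# Conceptual sketch only: forecast a simplified long-term outcome
# from the first 12 months of seizure history via nearest neighbor.
# All data below is invented for illustration.

def hamming(a, b):
    """Distance between two 12-month binary seizure histories."""
    return sum(x != y for x, y in zip(a, b))

def forecast(history, cohort):
    """Return the outcome label of the most similar known history."""
    best = min(cohort, key=lambda rec: hamming(history, rec["months"]))
    return best["outcome"]

# Each record: seizure presence per month of life (1 = seizures
# that month) plus a simplified long-term outcome label.
cohort = [
    {"months": [1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0],
     "outcome": "seizure-free later"},
    {"months": [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
     "outcome": "ongoing seizures"},
]

new_individual = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]
print(forecast(new_individual, cohort))  # → seizure-free later
```

In practice, the real challenge lies in the RWD itself, with sparse and irregularly sampled histories rather than tidy monthly vectors, which is again why harmonization comes before any model.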
What you need to know. The progress of AI in epilepsy is unstoppable. However, the progress of genetics in epilepsy is also unstoppable. Therefore, it is only a matter of time until both fields complement each other, for example by helping us diagnose genetic epilepsies earlier and by providing crucial information about genetically informed treatment responses and disease-specific outcomes. By making genetics a vital part of the first International Conference on Artificial Intelligence in Epilepsy and Neurological Disorders, we have taken the first step toward making this a reality.