PyKale: open-source multimodal learning software library

PyKale is a library in the PyTorch ecosystem aiming to make machine learning more accessible to interdisciplinary research by bridging gaps between data, software, and end users. Both machine learning experts and end users can do better research with our accessible, scalable, and sustainable design, guided by green machine learning principles. PyKale has a unified pipeline-based API and currently focuses on multimodal learning and transfer learning for graphs, images, texts, and videos, with supporting models of deep learning and dimensionality reduction.
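
To make the pipeline-based design concrete, below is a rough sketch of how such a pipeline composes reusable blocks. The stage names in the comments follow PyKale's documented loaddata/prepdata/embed/predict workflow, but the classes and calls shown are plain PyTorch stand-ins, not PyKale's actual API.

```python
import torch
import torch.nn as nn

# loaddata / prepdata: obtain inputs and apply transforms (dummy image batch here).
images = torch.randn(8, 3, 32, 32)
labels = torch.randint(0, 10, (8,))

# embed: map raw inputs to feature vectors (stand-in for a kale.embed feature extractor).
feature_extractor = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
)

# predict: map features to task outputs (stand-in for a kale.predict classifier head).
classifier = nn.Linear(16, 10)

# pipeline: compose the reusable blocks into one model and evaluate it.
model = nn.Sequential(feature_extractor, classifier)
logits = model(images)
accuracy = (logits.argmax(dim=1) == labels).float().mean()
print(f"Accuracy on the dummy batch: {accuracy:.2f}")
```

Because the feature extractor and the classifier are separate blocks, either can be reused or recycled in a different pipeline (for example, swapping in a video or graph encoder) without rewriting the rest.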

PyKale enforces standardization and minimalism via green machine learning concepts of reducing repetition and redundancy, reusing existing resources, and recycling learning models across areas. PyKale will enable and accelerate interdisciplinary, knowledge-aware machine learning research for graphs, images, texts, and videos in applications including bioinformatics, graph analysis, image/video recognition, and medical imaging, with an overarching theme of leveraging knowledge from multiple sources for accurate and interpretable prediction.

Haiping Lu
Director of the UK Open Multimodal AI Network, Professor of Machine Learning, and Head of AI Research Engineering

I am a Professor of Machine Learning. I develop translational multimodal AI technologies for advancing healthcare and scientific discovery.

Xianyuan Liu
Assistant Head of AI Research Engineering & Senior AI Research Engineer
Shuo Zhou
PhD Student (now an Academic Fellow at the University of Sheffield)
Peizhen Bai
PhD Student (now a Senior Machine Learning Scientist at AstraZeneca)
Raivo Koot
BSc Student (now an MLOps Engineer at Apple)
Lawrence Schobs
PhD Student
Hao Xu
MSc Student (now a PhD student at UCSD)