Jacopo de Berardinis, PhD

Music Technology, Machine Learning, Responsible Music AI

I am a Postdoctoral Research Associate in Computer Science at the University of Manchester, and an incoming Lecturer at the University of Liverpool. Currently, I work on the S+T+ARTS MUSAE project and on the EU H2020 Polifonia project, where I lead the INTERLINK pilot.

My research lies at the intersection of Machine Learning and Music Technology. I design methods for computational music analysis and information retrieval: from the detection of structures, emotions, and similarities in music, to the design of systems for personalised music discovery and recommendation.
Through a computational lens, I investigate the links between music, memory, and emotions, deriving knowledge that can improve our understanding of music, personalise music listening for wellbeing, support computational creativity, and increase engagement in music education.

Research

My research spans three areas: (1) Computational Creativity, (2) Music Personalisation, and (3) the Semantic Music Web. All these directions share Music Information Retrieval as their common foundation, providing computational methods to extract, search, and relate knowledge from music. More information is given below.

Research Interests

In Music Information Retrieval (MIR) we develop algorithms for the automatic analysis of music. These can operate at the musical content level (audio recordings, scores, etc.), at the metadata level (tags, contextual descriptions, relationships, etc.), or both, to automatically organise, search, and manipulate musical information. Examples include music transcription, genre recognition, source separation, key detection, and query by humming.
Within this broad field, my research focuses on music structure analysis (detecting sections, phrases, and motifs in a piece), music emotion recognition (predicting induced or expressed emotions from music), and music similarity. This encompasses both the design of computational models to address these tasks and the applications built on top of them.
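To give a flavour of what structure analysis looks like in practice, below is a minimal sketch of boundary-based segmentation using librosa; the audio path and the number of sections are illustrative placeholders, not part of an actual research pipeline.

import librosa

# Minimal sketch: estimate candidate section boundaries from harmonic content.
# "song.wav" and k=8 are hypothetical placeholders.
y, sr = librosa.load("song.wav")
chroma = librosa.feature.chroma_cqt(y=y, sr=sr)      # 12-d harmony per frame
bounds = librosa.segment.agglomerative(chroma, 8)    # frame indices of boundaries
times = librosa.frames_to_time(bounds, sr=sr)        # boundaries in seconds
print("Estimated section boundaries (s):", times.round(2))

Real structure analysis goes further (hierarchy, section labelling, motif detection), but this flat segmentation conveys the core idea of partitioning a piece into coherent sections.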

The recent rise of generative systems (e.g. DALL-E, MusicGen) has sparked commercial interest and opened up new opportunities for the creative sector. However, Generative AI also raises serious concerns for creative professionals: from the ethical dimensions and the ownership of generated material, to the reuse of copyrighted material as training data for music models.
To address this challenge, my research focuses on responsible generative methods for computational creativity that are designed with and for artists, to preserve their active role in the creation process and give them proper recognition for the music they contribute. I am also particularly interested in new evaluation methods and frameworks for music generation systems and their deployment in interactive workflows.

Music is often described as a universal language that unites people, allowing us to express our feelings, recall memories and past experiences, and providing a highly expressive medium for creativity. Among its benefits, music has been shown to (1) improve mental and physical health (e.g. reducing stress, anxiety, depression, muscle pain, and blood pressure), (2) increase memory capabilities, sleep quality, cognitive performance, and endurance, and (3) serve as an effective intervention for various disorders (Alzheimer's disease, dementia, etc.).
Current computational systems for music lack personalisation and interpretability, providing black-box solutions where humans are often seen as end users rather than active players. Here, I seek to create personalised and engaging experiences where users can naturally search for music, navigate its related content, and generate playlists depending on their interests, preferences, mood, activities, and goals, while interacting with the system to deepen their awareness and understanding of the process.

Not only can knowledge be inferred from musical content, but informative and complementary types of knowledge are already available on the Web across different modalities (text, images, videos, links) and resources (MusicBrainz, Genius, YouTube, Songfacts, etc.). To date, this knowledge is scattered, poorly interconnected, and represented via different conventions and formats. Knowledge Graphs hold the potential to address this problem.
We are creating the largest interconnected Music Knowledge Graph by integrating music data from different sources and modalities. This provides the opportunity to study multimodality at scale and design Trustworthy AI applications for information retrieval, knowledge discovery, and generative machine learning.
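For illustration, a music Knowledge Graph of this kind could be queried with SPARQL as sketched below; the endpoint URL and the ex: vocabulary are hypothetical placeholders, not the actual schema of our graph.

from SPARQLWrapper import SPARQLWrapper, JSON

# Hypothetical endpoint and vocabulary, for illustration only.
sparql = SPARQLWrapper("https://example.org/music-kg/sparql")
sparql.setQuery("""
    PREFIX ex: <https://example.org/vocab#>
    SELECT ?track ?artistName WHERE {
        ?track ex:performedBy ?artist .
        ?artist ex:name ?artistName .
    } LIMIT 10
""")
sparql.setReturnFormat(JSON)
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["track"]["value"], "by", row["artistName"]["value"])

The value of such a graph lies in joins across sources: a single query can relate, say, a chord progression from one dataset to lyrics, recordings, and artist metadata from others.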

In Machine Learning, the field of Representation Learning deals with designing computational models, paradigms, and training objectives to learn numerical representations (also known as embeddings) that can be reused for a number of downstream tasks and applications. For example, in NLP, pre-trained text embeddings are used as a starting point for semantic search, paraphrase detection, sentiment analysis, syntactic parsing, etc.
In the music domain, the goal is to learn general representations from musical content (audio, symbolic) and/or contextual information (e.g. metadata) to support Music Information Retrieval tasks (search, analysis, and recommendation). Here, I am interested in self-supervised methods to learn music embeddings from audio, symbolic, or multimodal data (e.g. text, images, videos, and Knowledge Graphs), with a focus on representations for complex tasks that would otherwise require large amounts of expert annotations.
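The reuse pattern itself is simple: once tracks are embedded, many retrieval tasks reduce to vector similarity. The sketch below uses random vectors as stand-ins for pre-trained track embeddings and retrieves the nearest neighbours of a seed track by cosine similarity.

import numpy as np

# Stand-in for pre-trained embeddings: one 128-d vector per track.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(1000, 128))
embeddings /= np.linalg.norm(embeddings, axis=1, keepdims=True)  # unit norm

query = embeddings[42]                    # embedding of a seed track
scores = embeddings @ query               # cosine similarity (unit vectors)
top5 = np.argsort(scores)[::-1][1:6]      # nearest neighbours, excluding the seed
print("Most similar tracks:", top5)

With real embeddings, the same few lines power music search, similarity-based recommendation, and the transfer of annotations from labelled to unlabelled tracks.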

Selected Publications

For a more comprehensive list of publications, check out my Scholar page.

ChoCo: a Chord Corpus and a Data Transformation Workflow for Musical Harmony Knowledge Graphs
2023

J. de Berardinis, A. Meroño-Peñuela, A. Poltronieri, V. Presutti, in Scientific Data, vol. 10, 641, 2023.

Article | Code | Data | Slides

The Harmonic Memory: a Knowledge Graph of harmonic patterns as a trustworthy framework for computational creativity
2023

J. de Berardinis, A. Meroño-Peñuela, A. Poltronieri, V. Presutti, in Proceedings of the ACM Web Conference 2023, pp. 3873-3882.

Article | Code | Slides

Measuring the Structural Complexity of Music: From Structural Segmentations to the Automatic Evaluation of Models for Music Generation
2022

J. de Berardinis, E. Coutinho, A. Cangelosi, in IEEE/ACM Transactions on Audio, Speech, and Language Processing (TASLP), vol. 30, 2022.

Article | Code | Slides

Unveiling the Hierarchical Structure of Music by Multi-Resolution Community Detection
2020

J. de Berardinis, M. Vamvakaris, E. Coutinho, A. Cangelosi, in Transactions of the International Society for Music Information Retrieval (TISMIR), 3(1), 2020.

Article | Code | Slides

Polyhymnia Mood – Empowering People to Cope with Depression through Music Listening
2021

E. Coutinho, A. Alshukri, J. de Berardinis, C. Dowrick, in Adjunct Proceedings of the 2021 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2021 ACM International Symposium on Wearable Computers (UbiComp), 2021.

Article | Webpage

The Multiple Voices of Musical Emotions: Source Separation for Improving Music Emotion Recognition Models and their Interpretability
2020

J. de Berardinis, E. Coutinho, A. Cangelosi, in Proceedings of the 21st International Society for Music Information Retrieval Conference (ISMIR), pp. 310-317, 2020.

Article | Code | Poster | Slides

About

I received a PhD in Machine Learning from the University of Manchester, under the supervision of Prof. Angelo Cangelosi (Machine Learning and Robotics) and Dr. Eduardo Coutinho (Applied Music Research Lab). My dissertation, "Structural complexity in music modelling and generation with deep neural networks", focused on the automatic evaluation of generative models with regard to their ability to compose music with realistic structure.

Jan 2024 - now: Research Associate, University of Manchester (S+T+ARTS MUSAE project)
May 2021 - 2023: Research Associate, King's College London (Distributed AI group), EU H2020 Polifonia project
2019 - now: Honorary Researcher, University of Liverpool (AMLab): personalised music recommendation for mood regulation
2018 - 2021: PhD in Machine Learning, University of Manchester (ELLIS unit), Machine Learning and Robotics (MLR) group
2017 - 2018: Research Assistant, University of Camerino: multi-agent systems for traffic modelling and simulation
2014 - 2016: MSc in Computer Science, Reykjavik University and University of Camerino (double degree)

Contact

I am always happy to supervise students eager to embark on Music AI research projects and internships. Feel free to reach out if you have a keen interest in exploring any of the research topics above, or if you have a specific project proposal to discuss. I particularly value motivation, a critical mindset, and perseverance.
Connect with me via email at jacodb@liverpool.ac.uk.