I am a PhD student in Computer Science at the University of Manchester, affiliated with the Machine Learning and Robotics (MLR) research group in Manchester and with the Applied Music Research Lab (AMLab) led by Dr. Coutinho at the University of Liverpool. My current project applies machine learning methods to music information retrieval, with the aim of devising quantitative measures that describe structural properties of music related to the richness and complexity of musical form.

Prior to embarking on the insanity of doctoral studies, I received an MSc in Computer Science from Reykjavik University and the University of Camerino, as part of a Double Degree Programme in Artificial Intelligence. For my master's thesis, together with my special friends and colleagues Carlo and Giorgio (see the cool guys in the picture), I designed and developed Tangramob, a simulation framework that supports public and private decision makers in the task of shaping smart mobility services in urban areas of interest. The Tangramob project builds on both agent-based modelling and Reinforcement Learning techniques.

A few years earlier, I received a BSc in Computer Science from the University of Camerino, with a thesis investigating analytical tools for comparing clustering algorithms across different datasets in order to identify their ideal contexts of application.


March 2019 - Now Research Assistant University of Liverpool (AMLab): music emotion recognition models for the Polyhymnia project.
Jan 2018 - Now PhD student University of Manchester: Machine Learning for Music Information Retrieval
2017 - 2018 Research Assistant University of Camerino: advancing and improving Tangramob (AB Simulation, RL)
2014 - 2016 MSc in Computer Science University of Camerino and Reykjavik University (Double Degree)
Fall 2015 Exchange Semester Reykjavik University (as part of the Double Degree Program)
2011 - 2014 BSc in Computer Science University of Camerino



Estimating the effects of introducing a range of smart mobility solutions within an urban area is a crucial concern in urban planning. Local public authorities and mobility service providers currently base their mobility decisions on common heuristics and best practices. However, these approaches struggle to generalize to new urban contexts, so their effectiveness and applicability are often compromised. They can even be risky for decision makers: the resulting mobility initiatives may be rejected by the population, which translates into a considerable waste of resources.

Tangramob is an agent-based simulation framework that allows users to assess the impact of introducing new smart mobility services within an urban area of interest, in terms of both adoption by the population and system performance. Starting from the urban road network of the geographical area under study, a sample population and a smart mobility initiative (a list of mobility services together with their definitions), Tangramob simulates how urban traffic is expected to evolve as citizens start experiencing the newly offered travel options.

Tangramob relies on an Agent-Based Model (ABM) in which every person in the sample population is modeled as an autonomous reasoning agent. During the simulation, each agent tries to learn how best to move within the city so as to satisfy its daily mobility needs. To do this, agents can use any of the available mobility services for their trips, and they record their travel experiences in a personal (and persistent) data structure, a Knowledge Base. Following a Reinforcement Learning (RL) strategy, a Tangramob simulation is organized as a series of iterations in which the same day is simulated multiple times: this lets citizens accumulate experience and converge on better mobility decisions. At the end of a simulation, we can see how well the mobility initiative is accepted by the population and how it affects both the citizens and the urban system.
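The iterated-day learning loop can be sketched as a toy multi-armed-bandit simulation. This is a heavy simplification with made-up service names and utilities; the real framework simulates full trip plans on MATSim:

```python
import random

# Toy sketch of Tangramob's iterated-day learning loop (hypothetical
# simplification). Each agent keeps a personal "knowledge base" of scores
# per mobility service and chooses services epsilon-greedily, so repeated
# simulations of the same day let the population converge on better choices.

SERVICES = {"car": 0.4, "bike_share": 0.7, "car_share": 0.6}  # assumed mean utilities

class Agent:
    def __init__(self, rng):
        self.rng = rng
        self.kb = {s: [] for s in SERVICES}  # persistent experience per service

    def choose(self, epsilon=0.1):
        tried = {s: xs for s, xs in self.kb.items() if xs}
        if not tried or self.rng.random() < epsilon:
            return self.rng.choice(list(SERVICES))  # explore
        # Exploit: pick the service with the best average experienced score.
        return max(tried, key=lambda s: sum(tried[s]) / len(tried[s]))

    def experience(self, service):
        # Noisy utility of one simulated day spent using this service.
        self.kb[service].append(SERVICES[service] + self.rng.gauss(0, 0.1))

def simulate(n_agents=50, n_iterations=100, seed=0):
    rng = random.Random(seed)
    agents = [Agent(rng) for _ in range(n_agents)]
    for _ in range(n_iterations):          # the same day, simulated repeatedly
        for agent in agents:
            agent.experience(agent.choose())
    # Adoption share of each service once agents purely exploit their KB.
    final = [a.choose(epsilon=0.0) for a in agents]
    return {s: final.count(s) / n_agents for s in SERVICES}

print(simulate())
```

With enough iterations, most agents discover the highest-utility service, which is exactly the kind of adoption signal the full simulation reports.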

This open-source computational tool can thus be thought of as a Decision Support System (DSS) for Smart Mobility, since it is meant to help urban planners and transport companies understand whether the simulated outcomes of a mobility initiative are in line with their objectives and plans. Its distinguishing features are: adaptability to different geographical contexts; support for multimodal trips and mobility services; and the ability to reproduce real-life scenarios. Technically speaking, Tangramob is built on MATSim, an open-source framework for implementing large-scale agent-based transport simulations.

If you want to learn more about this project, please visit the Tangramob website, or take a look at these slides.

Started in 2016 · Ongoing

In Social Network Analysis (SNA), the field of information diffusion has recently gained interest, as it aims to understand how people react to new incoming information within an influential environment. Predicting how a piece of news spreads over a social network is challenging, since the spread depends on many factors: the content itself, the users chosen to seed it, the topology of the network, the interest rates of the exposed users, and their social network behavior. However, most traditional diffusion models focus mainly on network topology when identifying the causes of word of mouth, disregarding the fact that users behave differently on social networks; e.g. a person who continuously shares posts on their wall has a greater spreading effect than a reserved user.

jSeagull, a Java Simulation Engine for Agent-Graph Understanding in Local Lattices, is a new simulation model for information diffusion based on users' social network behaviors, covering both news exposure and sharing attitude. Given a piece of news (e.g. a post or a tweet) and a social network, the phenomenon of word of mouth is modeled as an epidemiology problem: susceptible agents are impersonated by social network users, while the piece of news to be spread plays the role of the epidemic. More precisely, a user is considered infected if he/she believes the piece of news upon receiving it. Predicting such a spread is not easy, since we are dealing with human behavior, but an acceptable accuracy can be reached if users' personal data are available.

From a technical viewpoint, jSeagull requires a social network (a directed graph whose nodes represent users and whose edges denote friendship or follow relationships) and a file describing the personal data and interests of each user. Examples of such files can be found on the SNAP website. Next, the tool guides you through the design of a news item/post, in order to create the specific content whose spread within the social network under study we want to measure. During this procedure, it is possible to write some text, add hashtags and tag social network users, just like on common social networks. Once the post is created, jSeagull applies a coloring scheme to the nodes of the social network graph so as to emphasize the users' interest rates for that piece of news. This allows the observer to iterate over the post creation step until he/she finds an impactful piece of news. Finally, the observer selects a set of initial spreaders, i.e. those social network users who are expected to trigger the spread of the previously designed piece of news. The simulation is then ready to start: it is quite compelling to watch the "epidemic" propagate among users as they come into contact with the piece of news. Unlike existing approaches, the actual spread of the piece of news depends on users' interests and sharing behaviors, as well as on the affinity and strength of their friendship ties.
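Under the hood, the epidemic analogy boils down to a frontier-based propagation in which both infection and re-sharing are gated by each user's interests and sharing behavior. A minimal sketch in Python follows; the names, data shapes and probabilities here are illustrative assumptions, not jSeagull's actual Java API:

```python
import random

# Hypothetical sketch of interest- and behavior-aware diffusion in the
# spirit of jSeagull: a user is "infected" (believes the news) only if the
# post matches their interests, and re-shares it only according to their
# personal sharing rate.

def simulate_spread(followers, interest, share_rate, post_topics, seeds, seed=0):
    """followers[u]: users who see u's shares; interest[u]: set of topics;
    share_rate[u]: probability that u re-shares news it believes."""
    rng = random.Random(seed)
    infected = set(seeds)            # users who believe the piece of news
    frontier = list(seeds)           # users who just shared it
    while frontier:
        next_frontier = []
        for u in frontier:
            for v in followers.get(u, []):
                if v in infected:
                    continue
                # v catches the news only if it matches their interests...
                if post_topics & interest.get(v, set()):
                    infected.add(v)
                    # ...and propagates it only per their sharing behavior.
                    if rng.random() < share_rate.get(v, 0.0):
                        next_frontier.append(v)
        frontier = next_frontier
    return infected

# Toy network: "a" seeds a music post; "c" only cares about sports, so the
# branch through "c" (and hence "e") never activates.
followers = {"a": ["b", "c"], "b": ["d"], "c": ["d", "e"]}
interest = {"b": {"music"}, "c": {"sports"}, "d": {"music"}, "e": {"music"}}
share_rate = {"b": 1.0, "c": 1.0, "d": 0.2, "e": 0.2}
reached = simulate_spread(followers, interest, share_rate, {"music"}, {"a"})
# reached == {"a", "b", "d"}
```

Note how topology alone would have reached "e" as well; gating on interests is what distinguishes this family of models from purely structural ones.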

Started in 2015 · Closed

The availability of Natural Language Processing (NLP) tools and resources, together with the integration of artificial intelligence techniques, has allowed traditional Computer-Assisted Language Learning (CALL) systems to advance. The formula AI + CALL has given rise to Intelligent CALL (ICALL), an emergent field aiming to develop learning platforms capable of analyzing language automatically and providing learners with individualized feedback on their mistakes. However, the state of the art in ICALL systems does not include any functional application for the Italian language.

To fill this gap, we developed ICALL-IT, a new intelligent computer-assisted language learning platform for the Italian language. The goal of this project is to help young Italian students and non-native Italian speakers improve both their grammatical skills and their understanding of the language. To this end, the first version of ICALL-IT provides five different exercises (plus variants) targeting what is considered most difficult for an intermediate Italian learner. Each exercise aims to improve a specific set of the user's skills and comes with one or more learning outcomes.
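As an illustration of the kind of exercise such a platform can generate, here is a hypothetical cloze (fill-the-gap) builder over POS-tagged text. The tag names are loose stand-ins for TreeTagger's Italian tag set, and the function is a sketch, not ICALL-IT's actual implementation:

```python
# Hypothetical sketch: generate a fill-the-gap exercise from POS-tagged
# text by blanking out the verbs and asking the learner to restore them.

def make_cloze(tagged_tokens, target_tag="VER"):
    """tagged_tokens: list of (word, pos) pairs, e.g. from a POS tagger.
    Returns the gapped sentence and the expected answers."""
    gapped, answers = [], []
    for word, pos in tagged_tokens:
        if pos.startswith(target_tag):
            answers.append(word)
            gapped.append("____")
        else:
            gapped.append(word)
    return " ".join(gapped), answers

# Tags below are illustrative, loosely modeled on an Italian tag set.
sentence = [("Il", "DET"), ("gatto", "NOM"), ("dorme", "VER:pres"),
            ("sul", "PRE"), ("divano", "NOM")]
text, answers = make_cloze(sentence)
# text == "Il gatto ____ sul divano", answers == ["dorme"]
```

Because the gaps are driven by POS tags rather than hand-written word lists, the same exercise template can be instantiated over any tagged corpus.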

At the moment, ICALL-IT makes use of three NLP resources and tools to enable the five exercises: TUT as annotated corpus, Morph-it! as lexicon, and TreeTagger as part-of-speech tagger. The first provides a native representation format that captures the richness of the predicate-argument structure, a crucial layer of representation for several NLP tasks such as parsing, information extraction, machine translation and question answering. The second resource, Morph-it!, is a lexicon of inflected forms with their lemmata and morphological features; it can serve as a data source for a lemmatizer, a morphological analyzer or a morphological generator. The last tool, TreeTagger, is a language-independent tool for annotating text with part-of-speech and lemma information; it implements a probabilistic tagging method based on decision trees, avoiding the problems that Markov Model taggers face when estimating transition probabilities from sparse data.

Started in 2015 · Suspended