It is commonly accepted that machine learning requires large amounts of data. Fortunately, sources of information are increasingly numerous, and the amount of data available in every domain keeps growing. However, this growth has reached a point where it is no longer realistic to store all the data needed for a machine learning task on a single computer. This observation led J. Konecny, H. B. McMahan and D. Ramage to propose a new learning model in which the data is distributed across nodes and the model is learned in a distributed manner. This technique is known as Federated Learning.

Federated learning was classically conceived for training neural networks. It involves sharing data that can be highly sensitive, as in the medical field, and consequently raises privacy concerns. Anonymization techniques have been proposed in this context, but they remain vulnerable to attacks that can potentially de-anonymize the data used. Building on symbolic artificial intelligence, inductive logic programming offers a more secure framework that naturally supports encryption approaches. The FILP project aims to establish that it is possible to learn a mini-theory on each distributed node and to combine these mini-theories into a general theory, as sketched below.
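To make the "learn locally, combine globally" idea concrete, the following minimal Python sketch shows one naive way such a scheme could be organized: each node runs a local inductive learner that produces a small set of Horn clauses, and a coordinator merges the resulting clause sets. The clause representation, the placeholder learner, and the union-based combination rule are all assumptions for illustration; they do not describe the actual FILP algorithms.

```python
# Illustrative sketch only (assumptions, not the FILP method): each node learns a
# "mini-theory" as a set of Horn clauses, and a coordinator merges them naively.

from dataclasses import dataclass


@dataclass(frozen=True)
class Clause:
    """A Horn clause: head :- body, with the body stored as a tuple of literals."""
    head: str
    body: tuple

    def __str__(self):
        return f"{self.head} :- {', '.join(self.body)}." if self.body else f"{self.head}."


def learn_mini_theory(local_rules):
    """Placeholder for a local ILP learner.

    A real node would run an ILP search over its own examples and background
    knowledge; here we simply wrap pre-given (head, body) pairs as clauses."""
    return {Clause(head, tuple(body)) for head, body in local_rules}


def combine_mini_theories(mini_theories):
    """Naive combination: union of all clauses, dropping syntactic duplicates.

    A real combination step would also resolve conflicts between nodes and
    generalize overlapping clauses into a coherent general theory."""
    general_theory = set()
    for theory in mini_theories:
        general_theory |= theory
    return general_theory


if __name__ == "__main__":
    # Hypothetical rules "learned" on two distributed nodes.
    node_a = [("grandparent(X,Z)", ["parent(X,Y)", "parent(Y,Z)"])]
    node_b = [("grandparent(X,Z)", ["parent(X,Y)", "parent(Y,Z)"]),
              ("parent(X,Y)", ["mother(X,Y)"])]

    theories = [learn_mini_theory(node_a), learn_mini_theory(node_b)]
    for clause in sorted(combine_mini_theories(theories), key=str):
        print(clause)
```

Running the sketch prints the merged theory with the duplicate grandparent rule collapsed, which is the simplest possible instance of combining mini-theories; the interesting research questions (conflict resolution, generalization, and doing this under encryption) are precisely what the project addresses.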