POPILS 2025
I attended the Parcimonie, Optimisation et Problèmes Inverses Lyon Saint-Étienne Savoie (POPILS) workshop, held at Polytech Annecy. It was particularly hot in mid-June, and there were train issues, so I had to drive there alone.
However, it was well worth it: I met people from my research community, and the entire day's content was on topics I'm very interested in.

Antoine Collas talked about distribution shift and benchmarking. It was very interesting, and the application was even related to the medical field (ECG analysis). Then Julie Digne talked about Implicit Neural Representations (of which NeRFs are the best-known example), which is something I was supposed to be working on for my thesis. Rodolphe LeRiche talked about Bayesian Optimization; while the topic is not particularly easy, he was very good at explaining things in simple words. It's a bit further from my research domain, but it was still very cool.

Finally, Romain Vo talked about unrolling applied to CT reconstruction, and new techniques to make it more memory efficient. I'm not really working on unrolling, but these methods are very competitive in terms of performance. Still, I find it a bit sad to have to train a neural network for each specific forward operator; I'm not sure it can generalize much outside its training domain. At the same time, using the forward operator during training obviously helps improve performance.
Ah, and of course, I also presented a poster on my work on convergent Plug-and-Play (PnP) methods for Poisson noise.
