Research
Main interests
- machine learning
- continuous optimization
- compressed sensing
- statistical methods
- operator learning
- mathematical modelling
- optimal transport
- reinforcement learning
Master's research
Deep neural networks for inverse problems in imaging
In the past decade, deep learning has been incredibly successful in a myriad of image processing applications. As a result, there has been growing interest in applying deep learning to solve inverse problems in imaging. For example, deep learning is being leveraged in medical imaging to optimize sampling (to reduce scan times or radiation dose) and speed up reconstruction and inference. Being able to tackle imaging problems is fundamental to the progress of science, engineering and medicine. Despite recent work indicating that deep learning performs better than state-of-the-art model-based methods for imaging, deep neural networks have significant issues with stability and generalization. This raises the key question: can we construct deep neural networks for inverse problems in imaging with state-of-the-art performance guarantees?
My supervisor Ben Adcock and I contribute towards answering this question by extending the work of Matthew Colbrook et al. In our paper, we construct neural networks that achieve the same performance guarantees as state-of-the-art model-based methods for recovering a class of analysis-sparse signals. My master's thesis considers the recovery of gradient-sparse signals instead. The neural networks are constructed by unrolling an optimization algorithm and are made efficient by applying a restart scheme that accelerates the image reconstruction. This led to interesting side work with Ben Adcock and Matthew Colbrook examining general parameter-free restart schemes for continuous optimization.
Our work brings together several areas of mathematics, including convex optimization, compressed sensing, random matrix theory and deep learning.
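To give a rough sense of the restart idea: an inner first-order solver (here plain Nesterov acceleration) is run for a fixed iteration budget, then warm-started from its own output with the momentum reset, and this is repeated. The following is only a minimal Python sketch on a least-squares toy problem; the function names and the fixed restart schedule are illustrative and do not reproduce the parameter-free scheme from our paper.

```python
import numpy as np

def nesterov(grad, x0, step, iters):
    """Plain Nesterov accelerated gradient descent (FISTA-style momentum)."""
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(iters):
        x_new = y - step * grad(y)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x

def restarted(grad, x0, step, inner_iters, restarts):
    """Restart scheme: warm-start the inner solver at its own output,
    resetting the momentum each time. On objectives satisfying a
    sharpness-type condition this accelerates convergence."""
    x = x0
    for _ in range(restarts):
        x = nesterov(grad, x, step, inner_iters)
    return x

# Illustrative least-squares problem (not from the paper)
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
b = rng.standard_normal(50)
grad = lambda x: A.T @ (A @ x - b)   # gradient of 0.5 * ||A x - b||^2
step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / Lipschitz constant of grad
x = restarted(grad, np.zeros(20), step, inner_iters=50, restarts=10)
```

In the actual scheme from our paper, the inner budget and tolerances are chosen adaptively from approximate sharpness constants rather than fixed in advance; the sketch above only shows the outer warm-restart loop.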
Publications
Submitted work
- B. Adcock, M. Colbrook & M. Neyra-Nesterenko
  Restarts subject to approximate sharpness: a parameter-free and optimal scheme for first-order methods
  Preprint: arXiv:2301.02268
Journal papers
- M. Neyra-Nesterenko & B. Adcock
  NESTANets: stable, accurate and efficient neural networks for analysis-sparse inverse problems
  Sampl. Theory Signal Process. Data Anal. 21, 4 (2023) [link]
  Preprint: arXiv:2203.00804
Conference abstracts
- B. Adcock & M. Neyra-Nesterenko
  Provably accurate, stable and efficient deep neural networks for compressive imaging
  In: International Conference on Computational Harmonic Analysis, abstract nr 48 (13-17 Sep 2021) [link] [pdf]
Theses and dissertations
- M. Neyra-Nesterenko
  Unrolled NESTA: constructing stable, accurate and efficient neural networks for gradient-sparse imaging problems
  MSc thesis, Simon Fraser University (Mar 2023) [link] [pdf]
Presentations
Scheduled talks
- Parameter-free and optimal restart schemes for first-order methods via approximate sharpness
  WCOM Autumn [link] - contributed talk (University of British Columbia, Vancouver, BC, CA - Sep 21, 2024)
Past talks
- Unrolled NESTA: constructing stable, accurate and efficient neural networks for gradient-sparse imaging problems
  Math Grad Social [link] - seminar presentation (Simon Fraser University, Burnaby, BC, CA - Feb 7, 2023)
- Restart schemes: a powerful parameter-free acceleration scheme for first-order methods
  SFU Applied Math Seminar [link] - seminar presentation (Simon Fraser University, Burnaby, BC, CA - Nov 23, 2022)
- Stable, accurate and efficient deep neural networks for reconstruction of gradient-sparse images
  SIAM Pacific Northwest Conference [link] - minisymposium talk (Washington State University, Vancouver, WA, US - May 21, 2022)
- Stable, accurate and efficient deep neural networks for gradient sparse imaging
  SIAM Conference on Imaging Science (IS22) [link] - minisymposium talk (virtual - Mar 22, 2022)
- Stable, accurate and efficient deep neural networks for inverse problems with analysis sparse models
  SFU Operations Research Seminars [link] - seminar presentation (virtual - Feb 14, 2022)
- Provably accurate, stable and efficient deep neural networks for compressive imaging
  International Conference on Computational Harmonic Analysis (ICCHA) [link] - contributed talk (virtual - Sep 17, 2021)
- Provably accurate and stable deep neural networks for imaging
  CAIMS Annual Meeting [link] - contributed talk (virtual - Jun 23, 2021)
- Provably accurate and stable deep neural networks for imaging
  Ottawa Mathematics Conference (OMC) [link] - contributed talk (virtual - May 28, 2021)