Research
Note: my master's program is still in progress, so expect this page to be updated.
Main interests
- deep learning
- continuous optimization
- optimal transport
- compressed sensing
- numerical analysis
- partial differential equations
Current research
My ongoing master's research is about developing stable, accurate, and efficient deep neural networks for compressive imaging.
To motivate this: over the past decade, deep learning has taken the world by storm, achieving unforeseen success in a myriad of formerly challenging applications such as image classification and natural language processing. This success has sparked growing interest in applying deep learning to compressive imaging problems. Compressive imaging encompasses a wide range of scientific tasks, including medical imaging, seismic imaging and electron microscopy. Until recently, model-based methods were the state-of-the-art tools for these problems. Data-driven methods such as deep learning have been found to empirically outperform model-based methods in accuracy, but they suffer from hallucinations, instability and poor generalization. This raises safety and security concerns about using deep learning in critical tasks like medical imaging.
A natural question to ask is: can we compute deep neural networks for compressive imaging with state-of-the-art performance guarantees?
In collaboration with my supervisor Ben Adcock, our research aims to address this question, taking inspiration from and building on FIRENETs, a closely related work authored by Antun, Colbrook and Hansen. Our work touches upon several areas of mathematics, including convex optimization, compressed sensing, random matrix theory, and machine learning.
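For readers unfamiliar with the setup, the sketch below illustrates the kind of inverse problem and analysis-sparse recovery model involved. It is written in standard compressed sensing notation; the symbols A, W and eta are generic placeholders for this illustration, not taken from the papers listed below.

```latex
% A minimal sketch of the compressive imaging setup, in standard
% compressed sensing notation (illustrative; the symbols A, W, \eta
% are generic placeholders, not taken from the papers cited here).
% Noisy linear measurements of an unknown image x:
\begin{equation*}
  y = A x + e, \qquad A \in \mathbb{C}^{m \times N},\ m \ll N,\ \|e\|_2 \le \eta .
\end{equation*}
% Analysis-sparse recovery (quadratically constrained basis pursuit),
% where W is an analysis operator, e.g. a discrete gradient for
% gradient-sparse (TV-type) image models:
\begin{equation*}
  \hat{x} \in \operatorname*{arg\,min}_{z \in \mathbb{C}^{N}} \, \|W z\|_1
  \quad \text{subject to} \quad \|A z - y\|_2 \le \eta .
\end{equation*}
```

Roughly speaking, networks with performance guarantees are obtained by unrolling iterative optimization methods for problems of this form into network layers.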
Publications
Submitted work
- M. Neyra-Nesterenko & B. Adcock. Stable, accurate and efficient deep neural networks for inverse problems with analysis-sparse models. Preprint: arXiv:2203.00804 (2022)
Conference abstracts
- B. Adcock & M. Neyra-Nesterenko. Provably Accurate, Stable and Efficient Deep Neural Networks for Compressive Imaging. International Conference on Computational Harmonic Analysis (ICCHA) [abstract link] (2021)
Presentations
Past talks
- Stable, accurate and efficient deep neural networks for reconstruction of gradient-sparse images. SIAM Pacific Northwest Conference [link] - minisymposium talk (Washington State University, Vancouver, WA, US - May 21, 2022)
- Stable, Accurate and Efficient Deep Neural Networks for Gradient Sparse Imaging. SIAM Conference on Imaging Science (IS22) [link] - minisymposium talk (virtual - Mar 22, 2022)
- Stable, accurate and efficient deep neural networks for inverse problems with analysis sparse models. SFU Operations Research Seminars [link] - seminar presentation (virtual - Feb 14, 2022)
- Provably Accurate, Stable and Efficient Deep Neural Networks for Compressive Imaging. International Conference on Computational Harmonic Analysis (ICCHA) [link] - contributed talk (virtual - Sep 17, 2021)
- Provably Accurate and Stable Deep Neural Networks for Imaging. CAIMS Annual Meeting [link] - contributed talk (virtual - Jun 23, 2021)
- Provably Accurate and Stable Deep Neural Networks for Imaging. Ottawa Mathematics Conference (OMC) [link] - contributed talk (virtual - May 28, 2021)