Effect of Training Set Discretization on the Accuracy of Parameter Estimation Neural Networks—Preliminary Results
ORAL
Abstract
Parameter estimation neural networks (PENNs) are used extensively throughout the physical sciences; our main application is the mapping of myelin, a macromolecule critical for normal signaling in the central nervous system. These PENNs are often designed around a training set consisting of signals paired with their corresponding parameters. Defining the training set involves selecting parameter ranges, reflecting the expected values for real-world signals in a given application, and the discretization of those ranges. However, while finer discretization presumably improves interpolation accuracy, it also increases training time; this becomes especially problematic as the number of parameters grows. Surprisingly, we have not found previous literature evaluating this tradeoff between accuracy and training time. In our current studies, we examine feedforward neural networks (NNs) for the analysis of monoexponential and biexponential decay. We generate synthetic decay curves within a fixed parameter range but with varying discretization within that range and evaluate interpolation performance. The framework developed here can be extended to a wide variety of models, both in magnetic resonance and in other areas of application.
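To make the discretization variable concrete, the following is a minimal sketch (not the authors' actual implementation) of generating a monoexponential training set on a uniform parameter grid; the function name `make_training_set`, the model S(t) = exp(-t / T2), and the specific time points and T2 range are illustrative assumptions.

```python
import numpy as np

def make_training_set(n_grid, t=np.linspace(0.0, 1.0, 32),
                      T2_range=(0.05, 0.5)):
    """Generate synthetic monoexponential decay curves S(t) = exp(-t / T2),
    one curve per point of an n_grid-point uniform grid over T2_range.
    n_grid controls the training-set discretization; T2_range is fixed.
    (Illustrative parameter values, not from the abstract.)"""
    T2 = np.linspace(*T2_range, n_grid)          # uniform parameter grid
    signals = np.exp(-t[None, :] / T2[:, None])  # shape: (n_grid, len(t))
    return signals, T2

# Same parameter range, two discretizations: training time scales with
# the grid size, while interpolation accuracy is expected to improve.
coarse_signals, coarse_T2 = make_training_set(10)
fine_signals, fine_T2 = make_training_set(1000)
```

A NN trained on each set can then be evaluated on test parameters drawn off-grid from the same range, isolating the effect of discretization on interpolation error.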
Presenters
- Rajib Chowdhury, National Institutes of Health
Authors
- Rajib Chowdhury, National Institutes of Health
- Aaron Lee, National Institutes of Health
- Richard G Spencer, National Institutes of Health