ILTpy FAQ

Loading data

NumPy arrays can be passed as input to ilt.iltload(). If your data is saved as text files, it can be loaded into NumPy arrays as follows:

import numpy as np
data = np.loadtxt('data.txt')
t = np.loadtxt('t.txt')

These NumPy arrays can now be passed to ilt.iltload():

import iltpy as ilt
dataILT = ilt.iltload(data=data,t=t)

For further options, see loading data.

Defining your own kernels

To define your own kernel for use with ILTpy, use the following approach:

import numpy as np
from iltpy.fit.kernels import IltKernel

class MyKernel(IltKernel): # subclass the base IltKernel class
    def __init__(self):
        ## give the kernel a name
        self.name = 'my_kernel'
        ## define a stretched exponential with a stretching parameter of 0.5
        self.kernel = lambda t, tau: np.exp(-(t / tau)**0.5)
        ## also include the kernel function as a string
        self.kernel_str = "lambda t, tau: np.exp(-(t / tau)**0.5)"

Read more about kernels in ILTpy.

Setting noise variance to unity

For inversion, ILTpy expects data with identically distributed Gaussian noise of unit variance. Most experimental datasets must be scaled by the noise level to fulfill this condition. A common practice for magnetic resonance data is to select a region with a flat baseline that contains only noise; the data is then scaled by the standard deviation of the data points in this region.

import numpy as np

## load data
data = np.loadtxt('data.txt')

## select a noise region, in this case the first 100 points of the data array
noise_level = np.std(data[0:100])

## scale the data by the estimated noise level
data = data / noise_level

Ensuring the validity of a fit

Problems with the inversion can be diagnosed by analyzing the fit residuals. After an inversion, the residuals can be accessed via IltData.residuals. If the residuals are random noise without any systematic features, the chosen kernel is compatible with the data. If the residuals show systematic features, the original data should be inspected for outliers or non-uniform noise. The fit may also be repeated with a reduced value of alpha_00 to rule out problems with over-regularization.

ILTpy can generate uncertainty estimates for the obtained distributions by calling IltData.iltstats() after an initial inversion. This function takes the initial fit and generates multiple data samples by adding uniform random noise to it. Each of these samples is then inverted with the same parameters as the initial inversion to produce uncertainty estimates.
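A minimal sketch of this workflow is shown below. It assumes dataILT is an IltData object on which an inversion has already been run; the inversion call itself is omitted here.

import numpy as np
import matplotlib.pyplot as plt

## dataILT is assumed to hold a completed inversion
residuals = dataILT.residuals

## residuals of a valid fit should resemble random noise: for data scaled
## to unit noise variance, mean close to 0 and standard deviation close to 1
print(np.mean(residuals), np.std(residuals))
plt.plot(np.ravel(residuals))
plt.show()

## generate uncertainty estimates by resampling the initial fit
dataILT.iltstats()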

See an example where IltData.iltstats() is used.

Specifying input sampling for the uninverted dimension

In cases where only one dimension is inverted but both dimensions are regularized, the sampling vector for the non-inverted yet regularized dimension is, unless explicitly provided, internally set to an array with unit spacing and the same number of elements as that dimension's size. For the majority of cases, this is a suitable choice. However, depending on how densely the data was sampled, the input sampling vector may need adjustment. A user-defined sampling vector for the non-inverted dimension can be provided by passing the parameter dim_ndim during initialization, as sketched below.
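The sketch below assumes a 2D dataset where only the first dimension is inverted, and assumes dim_ndim is accepted by ilt.iltload; consult the loading-data documentation for the exact signature. The values of t2 are hypothetical.

import numpy as np
import iltpy as ilt

data = np.loadtxt('data.txt')   # 2D dataset; only the first dimension is inverted
t = np.loadtxt('t.txt')         # sampling vector of the inverted dimension

## user-defined sampling vector for the non-inverted but regularized dimension
t2 = np.linspace(0.0, 1.0, data.shape[1])

dataILT = ilt.iltload(data=data, t=t, dim_ndim=t2)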

Switching logging on

ILTpy uses Python's built-in logging module to report internal information. By default, only messages at ERROR level and above are shown. You can display debug output by retrieving the ILTpy logger and setting its level to DEBUG:

import iltpy as ilt
import logging

logger = logging.getLogger("iltpy_logger") # retrieve the ILTpy logger
logger.setLevel(logging.DEBUG) # show DEBUG-level messages and above

Number of data points in a non-inverted but regularized dimension

In the case of pseudo-2D inversions, such as pseudo-2D NMR datasets, only the first dimension is inverted but both dimensions are regularized. The number of points in the uninverted dimension then influences the computation time and memory requirements. Qualitatively, the number of data points along the spectral dimension (i.e., the dimension that is regularized but not inverted) should be large enough that each resonance peak is sampled with a point spacing smaller than its full width at half maximum (FWHM). With a large number of points, the computation time and memory requirements also increase. See an example which tests the impact of the number of data points. Note that the sampling array for the non-inverted but regularized dimension may require adjustment depending on the sampling density of the original data.

Iterations take a lot of time

ILTpy uses an outer-product kernel structure, which allows the algorithm to scale to multiple dimensions but causes the kernel size to grow multiplicatively with both the dataset size and its dimensionality. As a result, the computation time per iteration increases with the number of data points and the size of the output sampling. A simple way to make iterations faster is data reduction, as sketched below. See an example where data was reduced before inversion and another example which tests the impact of the number of data points. Furthermore, ILTpy allows you to provide custom solvers, which might be faster for your system and use case, as shown in this example. For multi-dimensional datasets, data compression using singular value decomposition may also be employed, as outlined in this 3D inversion example.
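For instance, a simple decimation with NumPy slicing might look like the sketch below; this is one possible reduction scheme, and the linked examples show the approaches used in ILTpy's own documentation.

import numpy as np
import iltpy as ilt

data = np.loadtxt('data.txt')
t = np.loadtxt('t.txt')

## keep every second point along the inverted dimension;
## the sampling vector must be reduced consistently with the data
data_reduced = data[::2]
t_reduced = t[::2]

dataILT = ilt.iltload(data=data_reduced, t=t_reduced)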

Inversion results in an error

Inversion errors after a few iterations may arise from an ill-conditioned matrix or from other causes, including:

  • Incompatible kernel: The selected kernel may not be suitable for modeling the underlying data.

  • Incorrect noise estimation: The noise variance might not be normalized to unity, or the noise level may not be correctly estimated. Refer to the examples for guidance on data preparation before inversion and see the section on Setting noise variance to unity.

  • Multiplicative noise: This type of noise scales with the data, making uniform scaling insufficient for a successful inversion.

The following strategies may be attempted:

  • Noise estimation: In the case of multiplicative noise, estimate it directly and proceed with a weighted inversion based on that estimate.

  • Iterative data scaling: Try scaling the data (e.g. by powers of 2) iteratively until an initial inversion succeeds; see the sketch after this list. Once achieved, you can attempt weighted inversions using the residuals from the first inversion. See an example where weighted inversion is used.

  • Increase regularization strength: Iteratively increase alpha_00.
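A minimal sketch of such a scaling loop follows. The inversion call is written here as dataILT.invert(), a hypothetical placeholder; substitute the actual ILTpy fitting call from your workflow.

import numpy as np
import iltpy as ilt

data = np.loadtxt('data.txt')
t = np.loadtxt('t.txt')

scale = 1.0
for _ in range(10):
    try:
        dataILT = ilt.iltload(data=data * scale, t=t)
        dataILT.invert()   # hypothetical placeholder for the inversion call
        break              # initial inversion succeeded
    except Exception:
        scale *= 2         # try the next power of 2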