Core Functions

Functions required to operate the package at a basic level.

caiman.source_extraction.cnmf.CNMF(n_processes)

Source extraction using constrained non-negative matrix factorization.

caiman.source_extraction.cnmf.CNMF.fit(images)

This method uses the CNMF algorithm to find sources in the data.

caiman.source_extraction.cnmf.online_cnmf.OnACID([...])

Source extraction of streaming data using online matrix factorization.

caiman.source_extraction.cnmf.online_cnmf.OnACID.fit_online(...)

Implements the caiman online algorithm on the list of files fls.

caiman.source_extraction.cnmf.params.CNMFParams([...])

Class for setting and changing the various parameters.

caiman.source_extraction.cnmf.estimates.Estimates([...])

Class for storing and reusing the analysis results and performing basic processing and plotting operations.

caiman.motion_correction.MotionCorrect(fname)

Class implementing motion correction operations.

caiman.motion_correction.MotionCorrect.motion_correct([...])

General function for performing all types of motion correction.

caiman.base.movies.load(file_name[, fr, ...])

Load a movie from file.

caiman.base.movies.movie.play([gain, fr, ...])

Play the movie using OpenCV.

caiman.base.rois.register_ROIs(A1, A2, dims)

Register ROIs across different sessions using an intersection over union metric and the Hungarian algorithm for optimal matching

caiman.base.rois.register_multisession(A, dims)

Register ROIs across multiple sessions using an intersection over union metric and the Hungarian algorithm for optimal matching.

caiman.source_extraction.cnmf.utilities.detrend_df_f(A, ...)

Compute DF/F signal without using the original data.

Movie Handling

class caiman.base.movies.movie(input_arr, **kwargs)

Class representing a movie. This class subclasses timeseries, which in turn subclasses ndarray.

movie(input_arr, fr=None, start_time=0, file_name=None, meta_data=None)

Example of usage:

input_arr = ...                            # 3D ndarray
m = movie(input_arr, start_time=0, fr=33)  # fr = 33 Hz

See https://docs.scipy.org/doc/numpy/user/basics.subclassing.html for notes on objects that are descended from ndarray

Attributes:
T

View of the transposed array.

base

Base object if memory is from some other object.

ctypes

An object to simplify the interaction of the array with the ctypes module.

data

Python buffer object pointing to the start of the array’s data.

dtype

Data-type of the array’s elements.

flags

Information about the memory layout of the array.

flat

A 1-D iterator over the array.

imag

The imaginary part of the array.

itemsize

Length of one array element in bytes.

nbytes

Total bytes consumed by the elements of the array.

ndim

Number of array dimensions.

real

The real part of the array.

shape

Tuple of array dimensions.

size

Number of elements in the array.

strides

Tuple of bytes to step in each dimension when traversing an array.

time

Methods

IPCA([components, batch])

Iterative Principal Component Analysis; see sklearn.decomposition.IncrementalPCA

IPCA_denoise([components, batch])

Create a denoised version of the movie using only the first 'components' components

IPCA_stICA([componentsPCA, componentsICA, ...])

Compute PCA + ICA a la Mukamel 2009.

NonnegativeMatrixFactorization([...])

See documentation for scikit-learn NMF

all([axis, out, keepdims, where])

Returns True if all elements evaluate to True.

any([axis, out, keepdims, where])

Returns True if any of the elements of a evaluate to True.

apply_shifts(shifts[, interpolation, ...])

Apply precomputed shifts to a movie, using subpixel adjustment (cv2.INTER_CUBIC interpolation)

argmax([axis, out, keepdims])

Return indices of the maximum values along the given axis.

argmin([axis, out, keepdims])

Return indices of the minimum values along the given axis.

argpartition(kth[, axis, kind, order])

Returns the indices that would partition this array.

argsort([axis, kind, order])

Returns the indices that would sort this array.

astype(dtype[, order, casting, subok, copy])

Copy of the array, cast to a specified type.

bilateral_blur_2D([diameter, sigmaColor, ...])

performs bilateral filtering on each frame.

bin_median([window])

Compute the median of a 3D array along axis 0 by binning values

bin_median_3d([window])

Compute the median of a 4D array along axis 0 by binning values

byteswap([inplace])

Swap the bytes of the array elements

choose(choices[, out, mode])

Use an index array to construct a new array from a set of choices.

clip([min, max, out])

Return an array whose values are limited to [min, max].

compress(condition[, axis, out])

Return selected slices of this array along given axis.

computeDFF([secsWindow, quantilMin, method, ...])

Compute the DF/F of the movie or remove the baseline

conj()

Complex-conjugate all elements.

conjugate()

Return the complex conjugate, element-wise.

copy([order])

Return a copy of the array.

cumprod([axis, dtype, out])

Return the cumulative product of the elements along the given axis.

cumsum([axis, dtype, out])

Return the cumulative sum of the elements along the given axis.

debleach()

Debleach by fitting a model to the median intensity.

diagonal([offset, axis1, axis2])

Return specified diagonals.

dump(file)

Dump a pickle of the array to the specified file.

dumps()

Returns the pickle of the array as a string.

extract_shifts([max_shift_w, max_shift_h, ...])

Performs motion correction using the OpenCV matchTemplate function.

extract_traces_from_masks(masks)

Extract the temporal traces corresponding to a set of spatial masks.

fill(value)

Fill the array with a scalar value.

flatten([order])

Return a copy of the array collapsed into one dimension.

gaussian_blur_2D([kernel_size_x, ...])

Compute Gaussian blur in 2D.

getfield(dtype[, offset])

Returns a field of the given array as a certain type.

guided_filter_blur_2D(guide_filter[, ...])

performs guided filtering on each frame.

item(*args)

Copy an element of an array to a standard Python scalar and return it.

itemset(*args)

Insert scalar into an array (scalar is cast to array's dtype, if possible)

local_correlations([eight_neighbours, ...])

Computes the correlation image (CI) for the input movie.

max([axis, out, keepdims, initial, where])

Return the maximum along a given axis.

mean([axis, dtype, out, keepdims, where])

Returns the average of the array elements along given axis.

median_blur_2D([kernel_size])

Compute median blur in 2D.

min([axis, out, keepdims, initial, where])

Return the minimum along a given axis.

motion_correct([max_shift_w, max_shift_h, ...])

Extract shifts and motion-corrected movie automatically.

motion_correct_3d([max_shift_z, ...])

Extract shifts and motion-corrected movie automatically.

newbyteorder([new_order])

Return the array with the same data viewed with a different byte order.

nonzero()

Return the indices of the elements that are non-zero.

online_NMF([n_components, method, lambda1, ...])

Method performing online matrix factorization using the spams package

partition(kth[, axis, kind, order])

Rearranges the elements in the array in such a way that the value of the element in kth position is in the position it would be in a sorted array.

partition_FOV_KMeans([tradeoff_weight, fx, ...])

Partition the FOV into clusters that group pixels close in space and highly correlated with each other

play([gain, fr, magnification, offset, ...])

Play the movie using OpenCV.

prod([axis, dtype, out, keepdims, initial, ...])

Return the product of the array elements over the given axis

ptp([axis, out, keepdims])

Peak to peak (maximum - minimum) value along a given axis.

put(indices, values[, mode])

Set a.flat[n] = values[n] for all n in indices.

ravel([order])

Return a flattened array.

removeBL([windowSize, quantilMin, in_place, ...])

Remove the baseline from the movie using percentiles computed over a running window (the larger the window, the faster the algorithm and the less granular the baseline)

repeat(repeats[, axis])

Repeat elements of an array.

reshape(shape[, order])

Returns an array containing the same data with a new shape.

resize([fx, fy, fz, interpolation])

Resize the caiman movie into a new one.

return_cropped([crop_top, crop_bottom, ...])

Return a cropped version of the movie

round([decimals, out])

Return a with each element rounded to the given number of decimals.

save(file_name[, to32, order, imagej, ...])

Save the timeseries in single precision.

searchsorted(v[, side, sorter])

Find indices where elements of v should be inserted in a to maintain order.

setfield(val, dtype[, offset])

Put a value into a specified place in a field defined by a data-type.

setflags([write, align, uic])

Set array flags WRITEABLE, ALIGNED, WRITEBACKIFCOPY, respectively.

sort([axis, kind, order])

Sort an array in-place.

squeeze([axis])

Remove axes of length one from a.

std([axis, dtype, out, ddof, keepdims, where])

Returns the standard deviation of the array elements along given axis.

sum([axis, dtype, out, keepdims, initial, where])

Return the sum of the array elements over the given axis.

swapaxes(axis1, axis2)

Return a view of the array with axis1 and axis2 interchanged.

take(indices[, axis, out, mode])

Return an array formed from the elements of a at the given indices.

to2DPixelxTime([order])

Transform 3D movie into 2D

to3DFromPixelxTime(shape[, order])

Transform 2D movie into 3D

tobytes([order])

Construct Python bytes containing the raw data bytes in the array.

tofile(fid[, sep, format])

Write array to a file as text or binary (default).

tolist()

Return the array as an a.ndim-levels deep nested list of Python scalars.

tostring([order])

A compatibility alias for tobytes, with exactly the same behavior.

trace([offset, axis1, axis2, dtype, out])

Return the sum along diagonals of the array.

transpose(*axes)

Returns a view of the array with axes transposed.

var([axis, dtype, out, ddof, keepdims, where])

Returns the variance of the array elements, along given axis.

view([dtype][, type])

New view of array with the same data.

zproject([method, cmap, aspect])

Compute and plot a projection across time.

calc_min

dot

to_2D

movie.play(gain: float = 1, fr=None, magnification: float = 1, offset: float = 0, interpolation=1, backend: str = 'opencv', do_loop: bool = False, bord_px=None, q_max: float = 99.75, q_min: float = 1, plot_text: bool = False, save_movie: bool = False, opencv_codec: str = 'H264', movie_name: str = 'movie.avi') -> None

Play the movie using opencv

Args:

gain: adjust movie brightness

fr: framerate; playback speed if different from the original (inter-frame interval in seconds)

magnification: float

magnification factor

offset: (undocumented)

interpolation:

interpolation method for ‘opencv’ and ‘embed_opencv’ backends

backend: ‘pylab’, ‘notebook’, ‘opencv’ or ‘embed_opencv’; the latter 2 are much faster

do_loop: Whether to loop the video

bord_px: int

truncate pixels from the borders

q_max, q_min: float in [0, 100]

percentile for maximum/minimum plotting value

plot_text: bool

show some text

save_movie: bool

flag to save an avi file of the movie

opencv_codec: str

FourCC video codec for saving movie. Check http://www.fourcc.org/codecs.php

movie_name: str

name of saved file

Raises:

Exception ‘Unknown backend!’
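
A minimal usage sketch, assuming a hypothetical file example.tif on disk (caiman exposes load at the package level):

import caiman as cm

m = cm.load('example.tif', fr=30)   # hypothetical input file
# play at 2x magnification, clipping the display range at the
# 1st / 99.5th percentiles
m.play(gain=1.0, fr=30, magnification=2, q_min=1, q_max=99.5)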

movie.resize(fx=1, fy=1, fz=1, interpolation=3)

Resize the caiman movie into a new one. Note that fx, fy, and fz are magnification factors, with fz controlling the temporal dimension. For example, to downsample in time by a factor of 2, set fz = 0.5.

Args:
fx (float):

Magnification factor along x-dimension

fy (float):

Magnification factor along y-dimension

fz (float):

Magnification factor along temporal dimension

Returns:

self (caiman movie)
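
A short sketch of combined spatial and temporal downsampling, reusing the hypothetical example.tif from above:

import caiman as cm

m = cm.load('example.tif', fr=30)        # hypothetical input file
m_ds = m.resize(fx=0.5, fy=0.5, fz=0.5)  # halve x, y and the temporal dimension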

movie.computeDFF(secsWindow: int = 5, quantilMin: int = 8, method: str = 'only_baseline', in_place: bool = False, order: str = 'F') -> tuple[Any, Any]

Compute the DF/F of the movie or remove the baseline.

To compute the baseline, frames are binned according to the window length parameter and the intermediate values are then interpolated.

Args:

secsWindow: length of the windows used to compute the quantile

quantilMin : value of the quantile

method: one of ‘only_baseline’, ‘delta_f_over_f’, ‘delta_f_over_sqrt_f’

in_place: compute baseline in a memory efficient way by updating movie in place

Returns:

self: DF, DF/F, or DF/sqrt(F) movie

movBL: baseline movie

Raises:

Exception ‘Unknown method’
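
A minimal sketch, assuming a movie loaded as above; the call returns the transformed movie together with the baseline movie:

import caiman as cm

m = cm.load('example.tif', fr=30)   # hypothetical input file
m_dff, m_bl = m.computeDFF(secsWindow=5, quantilMin=8, method='delta_f_over_f')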

caiman.base.movies.get_file_size(file_name, var_name_hdf5: str = 'mov') -> tuple[tuple, int | tuple]

Computes the dimensions of a file or a list of files without loading it/them in memory. An exception is thrown if the files have FOVs with different sizes

Args:
file_name:

locations of file(s)

var_name_hdf5:

if loading from hdf5 name of the dataset to load

Returns:
dims: tuple

dimensions of FOV

T: int or tuple of int

number of timesteps in each file

caiman.base.movies.load(file_name: str | list[str], fr: float = 30, start_time: float = 0, meta_data: dict | None = None, subindices=None, shape: tuple[int, int] | None = None, var_name_hdf5: str = 'mov', in_memory: bool = False, is_behavior: bool = False, bottom=0, top=0, left=0, right=0, channel=None, outtype=<class 'numpy.float32'>, is3D: bool = False) -> Any

Load a movie from file. Supports a variety of formats: tif, hdf5, npy, and memory-mapped files. Matlab support is experimental.

Args:
file_name: string or List[str]

name of file. Possible extensions are tif, avi, npy, h5, n5, zarr (npz and hdf5 are usable only if saved by calblitz)

fr: float

frame rate

start_time: float

initial time for frame 1

meta_data: dict

dictionary containing meta information about the movie

subindices: iterable indexes

for loading only portion of the movie

shape: tuple of two values

dimension of the movie along x and y if loading from a two dimensional numpy array

var_name_hdf5: str

if loading from hdf5/n5 name of the dataset inside the file to load (ignored if the file only has one dataset)

in_memory: bool=False

This changes the behaviour of the function for npy files to be a read-write rather than read-only memmap, and it adds a type conversion for .mmap files. Use of this flag is discouraged (and it may be removed in the future)

bottom,top,left,right: (undocumented)

channel: (undocumented)

outtype: The data type for the movie

Returns:

mov: caiman.movie

Raises:

Exception ‘Subindices not implemented’

Exception ‘Unknown file type’

Exception ‘File not found!’
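
A minimal sketch, assuming a hypothetical TIFF file; subindices restricts loading to the first 1000 frames:

import caiman as cm

m = cm.load('example.tif', fr=30, subindices=slice(0, 1000))
print(m.shape)   # (frames, x, y)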

caiman.base.movies.load_iter(file_name: str | list[str], subindices=None, var_name_hdf5: str = 'mov', outtype=<class 'numpy.float32'>, is3D: bool = False)

Load an iterator over a movie from file. Supports a variety of formats: tif, hdf5, avi.

Args:
file_name: string or List[str]

name of file. Possible extensions are tif, avi and hdf5

subindices: iterable indexes

for loading only a portion of the movie

var_name_hdf5: str

if loading from hdf5 name of the variable to load

outtype: The data type for the movie

Returns:

iter: iterator over movie

Raises:

Exception ‘Subindices not implemented’

Exception ‘Unknown file type’

Exception ‘File not found!’

caiman.base.movies.load_movie_chain(file_list: list[str], fr: float = 30, start_time=0, meta_data=None, subindices=None, var_name_hdf5: str = 'mov', bottom=0, top=0, left=0, right=0, z_top=0, z_bottom=0, is3D: bool = False, channel=None, outtype=<class 'numpy.float32'>) -> Any

Load movies from a list of file names.

Args:
file_list: list

file names in string format

the other parameters are as in load_movie, except:

bottom, top, left, right, z_top, z_bottom: int

to load only a portion of the field of view

is3D: bool

flag for 3D data (adds a fourth dimension)

Returns:
movie: movie

movie corresponding to the concatenation of the input files
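
A minimal sketch, assuming two hypothetical session files; the movies are concatenated along the temporal axis (load_movie_chain is also exposed at the package level):

import caiman as cm

m = cm.load_movie_chain(['session1.tif', 'session2.tif'], fr=30)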

Timeseries Handling

class caiman.base.timeseries.timeseries(input_arr, fr=30, start_time=0, file_name=None, meta_data=None)

Class representing a time series.

Attributes:
T

View of the transposed array.

base

Base object if memory is from some other object.

ctypes

An object to simplify the interaction of the array with the ctypes module.

data

Python buffer object pointing to the start of the array’s data.

dtype

Data-type of the array’s elements.

flags

Information about the memory layout of the array.

flat

A 1-D iterator over the array.

imag

The imaginary part of the array.

itemsize

Length of one array element in bytes.

nbytes

Total bytes consumed by the elements of the array.

ndim

Number of array dimensions.

real

The real part of the array.

shape

Tuple of array dimensions.

size

Number of elements in the array.

strides

Tuple of bytes to step in each dimension when traversing an array.

time

Methods

all([axis, out, keepdims, where])

Returns True if all elements evaluate to True.

any([axis, out, keepdims, where])

Returns True if any of the elements of a evaluate to True.

argmax([axis, out, keepdims])

Return indices of the maximum values along the given axis.

argmin([axis, out, keepdims])

Return indices of the minimum values along the given axis.

argpartition(kth[, axis, kind, order])

Returns the indices that would partition this array.

argsort([axis, kind, order])

Returns the indices that would sort this array.

astype(dtype[, order, casting, subok, copy])

Copy of the array, cast to a specified type.

byteswap([inplace])

Swap the bytes of the array elements

choose(choices[, out, mode])

Use an index array to construct a new array from a set of choices.

clip([min, max, out])

Return an array whose values are limited to [min, max].

compress(condition[, axis, out])

Return selected slices of this array along given axis.

conj()

Complex-conjugate all elements.

conjugate()

Return the complex conjugate, element-wise.

copy([order])

Return a copy of the array.

cumprod([axis, dtype, out])

Return the cumulative product of the elements along the given axis.

cumsum([axis, dtype, out])

Return the cumulative sum of the elements along the given axis.

diagonal([offset, axis1, axis2])

Return specified diagonals.

dump(file)

Dump a pickle of the array to the specified file.

dumps()

Returns the pickle of the array as a string.

fill(value)

Fill the array with a scalar value.

flatten([order])

Return a copy of the array collapsed into one dimension.

getfield(dtype[, offset])

Returns a field of the given array as a certain type.

item(*args)

Copy an element of an array to a standard Python scalar and return it.

itemset(*args)

Insert scalar into an array (scalar is cast to array's dtype, if possible)

max([axis, out, keepdims, initial, where])

Return the maximum along a given axis.

mean([axis, dtype, out, keepdims, where])

Returns the average of the array elements along given axis.

min([axis, out, keepdims, initial, where])

Return the minimum along a given axis.

newbyteorder([new_order])

Return the array with the same data viewed with a different byte order.

nonzero()

Return the indices of the elements that are non-zero.

partition(kth[, axis, kind, order])

Rearranges the elements in the array in such a way that the value of the element in kth position is in the position it would be in a sorted array.

prod([axis, dtype, out, keepdims, initial, ...])

Return the product of the array elements over the given axis

ptp([axis, out, keepdims])

Peak to peak (maximum - minimum) value along a given axis.

put(indices, values[, mode])

Set a.flat[n] = values[n] for all n in indices.

ravel([order])

Return a flattened array.

repeat(repeats[, axis])

Repeat elements of an array.

reshape(shape[, order])

Returns an array containing the same data with a new shape.

resize(new_shape[, refcheck])

Change shape and size of array in-place.

round([decimals, out])

Return a with each element rounded to the given number of decimals.

save(file_name[, to32, order, imagej, ...])

Save the timeseries in single precision.

searchsorted(v[, side, sorter])

Find indices where elements of v should be inserted in a to maintain order.

setfield(val, dtype[, offset])

Put a value into a specified place in a field defined by a data-type.

setflags([write, align, uic])

Set array flags WRITEABLE, ALIGNED, WRITEBACKIFCOPY, respectively.

sort([axis, kind, order])

Sort an array in-place.

squeeze([axis])

Remove axes of length one from a.

std([axis, dtype, out, ddof, keepdims, where])

Returns the standard deviation of the array elements along given axis.

sum([axis, dtype, out, keepdims, initial, where])

Return the sum of the array elements over the given axis.

swapaxes(axis1, axis2)

Return a view of the array with axis1 and axis2 interchanged.

take(indices[, axis, out, mode])

Return an array formed from the elements of a at the given indices.

tobytes([order])

Construct Python bytes containing the raw data bytes in the array.

tofile(fid[, sep, format])

Write array to a file as text or binary (default).

tolist()

Return the array as an a.ndim-levels deep nested list of Python scalars.

tostring([order])

A compatibility alias for tobytes, with exactly the same behavior.

trace([offset, axis1, axis2, dtype, out])

Return the sum along diagonals of the array.

transpose(*axes)

Returns a view of the array with axes transposed.

var([axis, dtype, out, ddof, keepdims, where])

Returns the variance of the array elements, along given axis.

view([dtype][, type])

New view of array with the same data.

dot

timeseries.save(file_name, to32=True, order='F', imagej=False, bigtiff=True, excitation_lambda=488.0, compress=0, q_max=99.75, q_min=1, var_name_hdf5='mov', sess_desc='some_description', identifier='some identifier', imaging_plane_description='some imaging plane description', emission_lambda=520.0, indicator='OGB-1', location='brain', unit='some TwoPhotonSeries unit description', starting_time=0.0, experimenter='Dr Who', lab_name=None, institution=None, experiment_description='Experiment Description', session_id='Session ID')

Save the timeseries in single precision. Supported formats include TIFF, NPZ, AVI, MAT, HDF5/H5, MMAP, and NWB

Args:
file_name: str

name of file. Possible formats are tif, avi, npz, mmap and hdf5

to32: Bool

whether to transform to 32 bits

order: ‘F’ or ‘C’

C or Fortran order

var_name_hdf5: str

name of the dataset inside the hdf5 file

q_max, q_min: float in [0, 100]

percentile for maximum/minimum clipping value if saving as avi (If set to None, no automatic scaling to the dynamic range [0, 255] is performed)

compress: int

if saving as .tif, specifies the compression level; if saving as .avi or .mkv, compress=0 uses the IYUV codec, otherwise the FFV1 codec is used

Raises:

Exception ‘Extension Unknown’
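
A minimal sketch, assuming a movie loaded from a hypothetical file (movies subclass timeseries and therefore inherit save); the output format is inferred from the file extension:

import caiman as cm

m = cm.load('example.tif', fr=30)       # hypothetical input file
m.save('example.hdf5')                  # save as HDF5
m.save('example_32bit.tif', to32=True)  # save as 32-bit TIFF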

Motion Correction

class caiman.motion_correction.MotionCorrect(fname, min_mov=None, dview=None, max_shifts=(6, 6), niter_rig=1, splits_rig=14, num_splits_to_process_rig=None, strides=(96, 96), overlaps=(32, 32), splits_els=14, num_splits_to_process_els=None, upsample_factor_grid=4, max_deviation_rigid=3, shifts_opencv=True, nonneg_movie=True, gSig_filt=None, use_cuda=False, border_nan=True, pw_rigid=False, num_frames_split=80, var_name_hdf5='mov', is3D=False, indices=(slice(None, None, None), slice(None, None, None)))

Class implementing motion correction operations.

Methods

apply_shifts_movie(fname[, rigid_shifts, ...])

Applies shifts found by registering one file to a different file.

motion_correct([template, save_movie])

general function for performing all types of motion correction.

motion_correct_pwrigid([save_movie, ...])

Perform pw-rigid motion correction

motion_correct_rigid([template, save_movie])

Perform rigid motion correction

MotionCorrect.motion_correct(template=None, save_movie=False)

general function for performing all types of motion correction. The function will perform either rigid or piecewise rigid motion correction depending on the attribute self.pw_rigid and will perform high pass spatial filtering for determining the motion (used in 1p data) if the attribute self.gSig_filt is not None. A template can be passed, and the output can be saved as a memory mapped file.

Args:
template: ndarray, default: None

template provided by user for motion correction

save_movie: bool, default: False

flag for saving motion corrected file(s) as memory mapped file(s)

Returns:

self
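
A minimal rigid-correction sketch on a hypothetical input file; set pw_rigid=True in the constructor for piecewise rigid correction:

from caiman.motion_correction import MotionCorrect

mc = MotionCorrect('example.tif', max_shifts=(6, 6), pw_rigid=False)  # hypothetical file
mc.motion_correct(save_movie=True)
print(mc.fname_tot_rig)  # name of the saved mmap file (see motion_correct_rigid below)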

MotionCorrect.motion_correct_rigid(template=None, save_movie=False) -> None

Perform rigid motion correction

Args:
template: ndarray 2D (or 3D)

if known, one can pass a template to register the frames to

save_movie_rigid: bool

save the movies vs just get the template

Important Fields:

self.fname_tot_rig: name of the mmap file saved

self.total_template_rig: template updated by iterating over the chunks

self.templates_rig: list of templates, one for each chunk

self.shifts_rig: shifts in x and y (and z if 3D) per frame

MotionCorrect.motion_correct_pwrigid(save_movie: bool = True, template: ndarray | None = None, show_template: bool = False) -> None

Perform pw-rigid motion correction

Args:
save_movie: bool

save the movies vs just get the template

template: ndarray 2D (or 3D)

if known, one can pass a template to register the frames to

show_template: boolean

whether to show the updated template at each iteration

Important Fields:

self.fname_tot_els: name of the mmap file saved

self.templates_els: template updated by iterating over the chunks

self.x_shifts_els: shifts in x per frame per patch

self.y_shifts_els: shifts in y per frame per patch

self.z_shifts_els: shifts in z per frame per patch (if 3D)

self.coord_shifts_els: coordinates associated with the patch for values in x_shifts_els and y_shifts_els (and z_shifts_els if 3D)

self.total_template_els: list of templates, one for each chunk

Raises:

Exception: ‘Error: Template contains NaNs, Please review the parameters’

MotionCorrect.apply_shifts_movie(fname, rigid_shifts: bool | None = None, save_memmap: bool = False, save_base_name: str = 'MC', order: str = 'F', remove_min: bool = True)

Applies shifts found by registering one file to a different file. Useful for cases when shifts computed from a structural channel are applied to a functional channel. Currently only application of shifts through openCV is supported. Returns either caiman.movie or the path to a memory mapped file.

Args:
fname: str or list[str]

name(s) of the movie to motion correct. It should not contain nans. All the loadable formats from CaImAn are acceptable

rigid_shifts: bool (True)

apply rigid or pw-rigid shifts (must exist in the mc object); deprecated (read directly from mc.pw_rigid)

save_memmap: bool (False)

flag for saving the resulting file in memory mapped form

save_base_name: str [‘MC’]

base name for memory mapped file name

order: ‘F’ or ‘C’ [‘F’]

order of resulting memory mapped file

remove_min: bool (True)

If minimum value is negative, subtract it from the data

Returns:
m_reg: caiman movie object

caiman movie object with applied shifts (not memory mapped)

caiman.motion_correction.motion_correct_oneP_rigid(filename, gSig_filt, max_shifts, dview=None, splits_rig=10, save_movie=True, border_nan=True)

Perform rigid motion correction on one photon imaging movies

Args:
filename: str

name of the file to correct

gSig_filt:

size of the filter. If the algorithm does not work, change this parameter

max_shifts: tuple of ints

max shifts in x and y allowed

dview:

handle to cluster

splits_rig: int

number of chunks for parallelizing motion correction (remember that it should hold that length_movie/num_splits_to_process_rig>100)

save_movie: bool

whether to save the movie in memory mapped format

border_nan: bool or string, optional

Specifies how to deal with borders. (True, False, ‘copy’, ‘min’)

Returns:

Motion correction object

caiman.motion_correction.motion_correct_oneP_nonrigid(filename, gSig_filt, max_shifts, strides, overlaps, splits_els, upsample_factor_grid, max_deviation_rigid, dview=None, splits_rig=10, save_movie=True, new_templ=None, border_nan=True)

Perform non-rigid (piecewise rigid) motion correction on one photon imaging movies

Args:
filename: str

name of the file to correct

gSig_filt:

size of the filter. If the algorithm does not work, change this parameter

max_shifts: tuple of ints

max shifts in x and y allowed

dview:

handle to cluster

splits_rig: int

number of chunks for parallelizing motion correction (remember that it should hold that length_movie/num_splits_to_process_rig>100)

save_movie: bool

whether to save the movie in memory mapped format

border_nan: bool or string, optional

specifies how to deal with borders. (True, False, ‘copy’, ‘min’)

Returns:

Motion correction object

Estimates

class caiman.source_extraction.cnmf.estimates.Estimates(A=None, b=None, C=None, f=None, R=None, dims=None)

Class for storing and reusing the analysis results and performing basic processing and plotting operations.

Methods

compute_background(Yr)

compute background (has big memory requirements)

compute_residuals(Yr)

compute residual for each component (variable R)

deconvolve(params[, dview, dff_flag])

performs deconvolution on the estimated traces using the parameters specified in params.

detrend_df_f([quantileMin, frames_window, ...])

Computes DF/F normalized fluorescence for the extracted traces.

evaluate_components(imgs, params[, dview])

Computes the quality metrics for each component and stores the indices of the components that pass user specified thresholds.

evaluate_components_CNN(params[, neuron_class])

Estimates the quality of inferred spatial components using a pretrained CNN classifier.

filter_components(imgs, params[, new_dict, ...])

Filters components based on given thresholds without re-computing the quality metrics.

hv_view_components([Yr, img, idx, ...])

view spatial and temporal components interactively in a notebook

make_color_movie(imgs[, q_max, q_min, ...])

Displays a color movie where each component is given an arbitrary color.

manual_merge(components, params)

merge a given list of components.

masks_2_neurofinder(dataset_name)

Return masks to neurofinder format

merge_components(Y, params[, mx, ...])

merges components

nb_view_components([Yr, img, idx, ...])

view spatial and temporal components interactively in a notebook

nb_view_components_3d([Yr, image_type, ...])

view spatial and temporal components interactively in a notebook (version for 3d data)

normalize_components()

Normalizes components such that spatial components have l_2 norm 1

play_movie(imgs[, q_max, q_min, gain_res, ...])

Displays a movie with three panels (original data (left panel), reconstructed data (middle panel), residual (right panel))

plot_contours([img, idx, thr_method, thr, ...])

view contours of all spatial footprints.

plot_contours_nb([img, idx, thr_method, ...])

view contours of all spatial footprints (notebook environment).

remove_duplicates([predictions, r_values, ...])

remove neurons that heavily overlap and might be duplicates.

remove_small_large_neurons(min_size_neuro, ...)

Remove neurons that are too large or too small

restore_discarded_components()

Recover components that are filtered out with the select_components method

save_NWB(filename[, imaging_plane_name, ...])

writes NWB file

select_components([idx_components, ...])

Keeps only a selected subset of components and removes the rest.

threshold_spatial_components([maxthr, dview])

threshold spatial components.

view_components([Yr, img, idx])

view spatial and temporal components interactively

Estimates.compute_residuals(Yr)

compute residual for each component (variable R)

Args:
Yr: np.ndarray

movie in format pixels (d) x frames (T)

Estimates.deconvolve(params, dview=None, dff_flag=False)

performs deconvolution on the estimated traces using the parameters specified in params. Deconvolution on detrended and normalized (DF/F) traces can be performed by setting dff_flag=True. In this case the results of the deconvolution are stored in F_dff_dec and S_dff

Args:
params: params object

Parameters of the algorithm

dff_flag: bool (False)

Flag for deconvolving the DF/F traces

Returns:

self: estimates object

Estimates.detrend_df_f(quantileMin=8, frames_window=500, flag_auto=True, use_fast=False, use_residuals=True, detrend_only=False)

Computes DF/F normalized fluorescence for the extracted traces. See caiman.source_extraction.cnmf.utilities.detrend_df_f for details

Args:
quantileMin: float

quantile used to estimate the baseline (values in [0,100])

frames_window: int

number of frames for computing running quantile

flag_auto: bool

flag for determining quantile automatically (different for each trace)

use_fast: bool

flag for using approximate fast percentile filtering

use_residuals: bool

flag for using non-deconvolved traces in DF/F calculation

detrend_only: bool (False)

flag for only subtracting baseline and not normalizing by it. Used in 1p data processing where baseline fluorescence cannot be determined.

Returns:
self: CNMF object

self.F_dff contains the DF/F normalized traces
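
A minimal sketch, assuming cnm is a CNMF object that has already been fit (hypothetical); the DF/F traces are stored in cnm.estimates.F_dff:

cnm.estimates.detrend_df_f(quantileMin=8, frames_window=500)
dff_traces = cnm.estimates.F_dff  # one DF/F trace per component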

Estimates.evaluate_components(imgs, params, dview=None)

Computes the quality metrics for each component and stores the indices of the components that pass user specified thresholds. The various thresholds and parameters can be passed as inputs. If left empty then they are read from self.params.quality.

Args:
imgs: np.array (possibly memory mapped, t,x,y[,z])

Imaging data

params: params object

Parameters of the algorithm. The parameters in play here are contained in the subdictionary params.quality:

min_SNR: float

trace SNR threshold

rval_thr: float

space correlation threshold

use_cnn: bool

flag for using the CNN classifier

min_cnn_thr: float

CNN classifier threshold

Returns:
self: estimates object
self.idx_components: np.array

indices of accepted components

self.idx_components_bad: np.array

indices of rejected components

self.SNR_comp: np.array

SNR values for each temporal trace

self.r_values: np.array

space correlation values for each component

self.cnn_preds: np.array

CNN classifier values for each component
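
A minimal sketch, assuming images is the (possibly memory mapped) data array and cnm a fitted CNMF object, both hypothetical:

cnm.estimates.evaluate_components(images, cnm.params, dview=None)
good = cnm.estimates.idx_components      # accepted components
bad = cnm.estimates.idx_components_bad   # rejected components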

Estimates.evaluate_components_CNN(params, neuron_class=1)

Estimates the quality of inferred spatial components using a pretrained CNN classifier.

Args:
params: params object

see .params for details

neuron_class: int

class label for neuron shapes

Returns:
self: Estimates object

self.idx_components contains the indices of components above the required threshold.

Estimates.filter_components(imgs, params, new_dict={}, dview=None, select_mode: str = 'All')

Filters components based on given thresholds without re-computing the quality metrics. If the quality metrics are not present then it calls self.evaluate_components.

Args:
imgs: np.array (possibly memory mapped, t,x,y[,z])

Imaging data

params: params object

Parameters of the algorithm

new_dict: dict

New dictionary with parameters to be called. The dictionary’s keys are used to modify the params.quality subdictionary:

min_SNR: float

trace SNR threshold

SNR_lowest: float

minimum required trace SNR

rval_thr: float

space correlation threshold

rval_lowest: float

minimum required space correlation

use_cnn: bool

flag for using the CNN classifier

min_cnn_thr: float

CNN classifier threshold

cnn_lowest: float

minimum required CNN threshold

gSig_range: list

gSig scale values for CNN classifier

select_mode:

Can be ‘All’ (no subselection is made, but quality filtering is performed), ‘Accepted’ (subselection of accepted components, a field named self.accepted_list must exist), ‘Rejected’ (subselection of rejected components, a field named self.rejected_list must exist), ‘Unassigned’ (both fields above need to exist)

Returns:
self: estimates object
self.idx_components: np.array

indices of accepted components

self.idx_components_bad: np.array

indices of rejected components

self.SNR_comp: np.array

SNR values for each temporal trace

self.r_values: np.array

space correlation values for each component

self.cnn_preds: np.array

CNN classifier values for each component

Estimates.hv_view_components(Yr=None, img=None, idx=None, denoised_color=None, cmap='viridis')

view spatial and temporal components interactively in a notebook

Args:
Yr: np.ndarray

movie in format pixels (d) x frames (T)

img: np.ndarray

background image for contour plotting. Default is the mean image of all spatial components (d1 x d2)

idx: list

list of components to be plotted

denoised_color: string or None

color name (e.g. ‘red’) or hex color code (e.g. ‘#F0027F’)

cmap: string

name of colormap (e.g. ‘viridis’) used to plot image_neurons

Estimates.nb_view_components(Yr=None, img=None, idx=None, denoised_color=None, cmap='jet', thr=0.99)

view spatial and temporal components interactively in a notebook

Args:
Yr: np.ndarray

movie in format pixels (d) x frames (T)

img: np.ndarray

background image for contour plotting. Default is the mean image of all spatial components (d1 x d2)

idx: list

list of components to be plotted

thr: double

threshold regulating the extent of the displayed patches

denoised_color: string or None

color name (e.g. ‘red’) or hex color code (e.g. ‘#F0027F’)

cmap: string

name of colormap (e.g. ‘viridis’) used to plot image_neurons

Estimates.nb_view_components_3d(Yr=None, image_type='mean', dims=None, max_projection=False, axis=0, denoised_color=None, cmap='jet', thr=0.9)

view spatial and temporal components interactively in a notebook (version for 3d data)

Args:
Yr: np.ndarray

movie in format pixels (d) x frames (T) (only required to compute the correlation image)

dims: tuple of ints

dimensions of movie (x, y and z)

image_type: ‘mean’|’max’|’corr’

image to be overlaid to neurons (average of shapes, maximum of shapes or nearest neighbor correlation of raw data)

max_projection: bool

plot max projection along specified axis if True, o/w plot layers

axis: int (0, 1 or 2)

axis along which max projection is performed or layers are shown

thr: scalar between 0 and 1

Energy threshold for computing contours

denoised_color: string or None

color name (e.g. ‘red’) or hex color code (e.g. ‘#F0027F’)

cmap: string

name of colormap (e.g. ‘viridis’) used to plot image_neurons

Estimates.normalize_components()

Normalizes components such that spatial components have l_2 norm 1

Estimates.play_movie(imgs, q_max=99.75, q_min=2, gain_res=1, magnification=1, include_bck=True, frame_range=slice(None, None, None), bpx=0, thr=0.0, save_movie=False, movie_name='results_movie.avi', display=True, opencv_codec='H264', use_color=False, gain_color=4, gain_bck=0.2)

Displays a movie with three panels (original data (left panel), reconstructed data (middle panel), residual (right panel))

Args:
imgs: np.array (possibly memory mapped, t,x,y[,z])

Imaging data

q_max: float (values in [0, 100], default: 99.75)

percentile for maximum plotting value

q_min: float (values in [0, 100], default: 2)

percentile for minimum plotting value

gain_res: float (1)

amplification factor for residual movie

magnification: float (1)

magnification factor for whole movie

include_bck: bool (True)

flag for including background in original and reconstructed movie

frame_range: range or slice or list (default: slice(None))

display only a subset of frames

bpx: int (default: 0)

number of pixels to exclude on each border

thr: float (values in [0, 1[) (default: 0)

threshold value for contours, no contours if thr=0

save_movie: bool (default: False)

flag to save an avi file of the movie

movie_name: str (default: ‘results_movie.avi’)

name of saved file

display: bool (default: True)

flag for playing the movie (to stop the movie press ‘q’)

opencv_codec: str (default: ‘H264’)

FourCC video codec for saving movie. Check http://www.fourcc.org/codecs.php

use_color: bool (default: False)

flag for making a color movie. If True a random color will be assigned for each of the components

gain_color: float (default: 4)

amplify colors in the movie to make them brighter

gain_bck: float (default: 0.2)

dampen background in the movie to expose components (applicable only when color is used.)

Returns:

mov: The concatenated output movie
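
A minimal sketch, reusing the hypothetical images array and fitted cnm object from above:

# show original, reconstructed, and residual panels side by side;
# press 'q' to stop playback
cnm.estimates.play_movie(images, q_max=99.5, magnification=2, include_bck=False)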

Estimates.plot_contours(img=None, idx=None, thr_method='max', thr=0.2, display_numbers=True, params=None, cmap='viridis')

view contours of all spatial footprints.

Args:
img: np.ndarray

background image for contour plotting. Default is the mean image of all spatial components (d1 x d2)

idx: list

list of accepted components

thr_method: str

thresholding method for computing contours (‘max’, ‘nrg’) if list of coordinates self.coordinates is None, i.e. not already computed

thr: float

threshold value only effective if self.coordinates is None, i.e. not already computed

display_numbers: bool

flag for displaying the id number of each contour

params: params object

params object containing the various parameters

Estimates.plot_contours_nb(img=None, idx=None, thr_method='max', thr=0.2, params=None, line_color='white', cmap='viridis')

view contours of all spatial footprints (notebook environment).

Args:
img: np.ndarray

background image for contour plotting. Default is the mean image of all spatial components (d1 x d2)

idx: list

list of accepted components

thr_method: str

thresholding method for computing contours (‘max’, ‘nrg’) if list of coordinates self.coordinates is None, i.e. not already computed

thr: float

threshold value only effective if self.coordinates is None, i.e. not already computed

params: params object

params object containing the various parameters

Estimates.remove_duplicates(predictions=None, r_values=None, dist_thr=0.1, min_dist=10, thresh_subset=0.6, plot_duplicates=False, select_comp=False)

remove neurons that heavily overlap and might be duplicates.

Args:

predictions, r_values, dist_thr, min_dist, thresh_subset, plot_duplicates

Estimates.remove_small_large_neurons(min_size_neuro, max_size_neuro, select_comp=False)

Remove neurons that are too large or too small

Args:
min_size_neuro: int

min size in pixels

max_size_neuro: int

max size in pixels

select_comp: bool

remove components that are too small/large from main estimates fields. See estimates.select_components() for more details.

Returns:
neurons_to_keep: np.array

indices of components with size within the acceptable range

Estimates.select_components(idx_components=None, use_object=False, save_discarded_components=True)

Keeps only a selected subset of components and removes the rest. The subset can be either user defined with the variable idx_components or read from the estimates object. The flag use_object determines this choice. If no subset is present then all components are kept.

Args:
idx_components: list

indices of components to be kept

use_object: bool

Flag to use self.idx_components for reading the indices.

save_discarded_components: bool

whether to save the components from initialization so that they can be restored using the restore_discarded_components method

Returns:

self: Estimates object
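
A minimal sketch, assuming components were evaluated first so that cnm.estimates.idx_components is populated (cnm hypothetical):

# keep only the accepted components stored in self.idx_components
cnm.estimates.select_components(use_object=True, save_discarded_components=True)
# the discarded components can later be brought back:
cnm.estimates.restore_discarded_components()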

Estimates.restore_discarded_components()

Recover components that are filtered out with the select_components method

Estimates.save_NWB(filename, imaging_plane_name=None, imaging_series_name=None, sess_desc='CaImAn Results', exp_desc=None, identifier=None, imaging_rate=30.0, starting_time=0.0, session_start_time=None, excitation_lambda=488.0, imaging_plane_description='some imaging plane description', emission_lambda=520.0, indicator='OGB-1', location='brain', raw_data_file=None)

writes NWB file

Args:

filename: str

imaging_plane_name: str, optional

imaging_series_name: str, optional

sess_desc: str, optional

exp_desc: str, optional

identifier: str, optional

imaging_rate: float, optional

default: 30 (Hz)

starting_time: float, optional

default: 0.0 (seconds)

location: str, optional

session_start_time: datetime.datetime, optional

Only required for new files

excitation_lambda: float

imaging_plane_description: str

emission_lambda: float

indicator: str

location: str

Estimates.view_components(Yr=None, img=None, idx=None)

view spatial and temporal components interactively

Args:
Yr: np.ndarray

movie in format pixels (d) x frames (T)

img: np.ndarray

background image for contour plotting. Default is the mean image of all spatial components (d1 x d2)

idx: list

list of components to be plotted

Deconvolution

caiman.source_extraction.cnmf.deconvolution.constrained_foopsi(fluor, bl=None, c1=None, g=None, sn=None, p=None, method_deconvolution='oasis', bas_nonneg=True, noise_range=[0.25, 0.5], noise_method='logmexp', lags=5, fudge_factor=1.0, verbosity=False, solvers=None, optimize_g=0, s_min=None, **kwargs)

Infer the most likely discretized spike train underlying a fluorescence trace

It relies on a noise constrained deconvolution approach

Args:
fluor: np.ndarray

One dimensional array containing the fluorescence intensities with one entry per time-bin.

bl: [optional] float

Fluorescence baseline value. If no value is given, then bl is estimated from the data.

c1: [optional] float

value of calcium at time 0

g: [optional] list,float

Parameters of the AR process that models the fluorescence impulse response. Estimated from the data if no value is given

sn: float, optional

Standard deviation of the noise distribution. If no value is given, then sn is estimated from the data.

p: int

order of the autoregression model

method_deconvolution: [optional] string

solution method for basis projection pursuit ‘cvx’ or ‘cvxpy’ or ‘oasis’

bas_nonneg: bool

baseline strictly non-negative

noise_range: list of two elements

frequency range for averaging noise PSD

noise_method: string

method of averaging noise PSD

lags: int

number of lags for estimating time constants

fudge_factor: float

fudge factor for reducing time constant bias

verbosity: bool

display optimization details

solvers: list string

primary and secondary (if problem unfeasible for approx solution) solvers to be used with cvxpy, default is [‘ECOS’,’SCS’]

optimize_g: [optional] int, only applies to method ‘oasis’

Number of large, isolated events to consider for optimizing g. If optimize_g=0 (default) the provided or estimated g is not further optimized.

s_min: float, optional, only applies to method ‘oasis’

Minimal non-zero activity within each bin (minimal ‘spike size’). For negative values the threshold is abs(s_min) * sn * sqrt(1-g). If None (default) the standard L1 penalty is used. If 0 the threshold is determined automatically such that RSS <= sn^2 T.

Returns:
c: np.ndarray float

The inferred denoised fluorescence signal at each time-bin.

bl, c1, g, sn : As explained above

sp: ndarray of float

Discretized deconvolved neural activity (spikes)

lam: float

Regularization parameter

Raises:

Exception(“You must specify the value of p”)

Exception(‘OASIS is currently only implemented for p=1 and p=2’)

Exception(‘Undefined Deconvolution Method’)

References:

caiman.source_extraction.cnmf.deconvolution.constrained_oasisAR2(y, g, sn, optimize_b=True, b_nonneg=True, optimize_g=0, decimate=5, shift=100, window=None, tol=1e-09, max_iter=1, penalty=1, s_min=0)

Infer the most likely discretized spike train underlying an AR(2) fluorescence trace

Solves the noise constrained sparse non-negative deconvolution problem: minimize ||s||_1 subject to ||c - y||^2 = sn^2 T and s_t = c_t - g1 c_{t-1} - g2 c_{t-2} >= 0

Args:
y: array of float

One dimensional array containing the fluorescence intensities (with baseline already subtracted) with one entry per time-bin.

g: (float, float)

Parameters of the AR(2) process that models the fluorescence impulse response.

sn: float

Standard deviation of the noise distribution.

optimize_b: bool, optional, default True

Optimize baseline if True else it is set to 0, see y.

b_nonneg: bool, optional, default True

Enforce strictly non-negative baseline if True.

optimize_g: int, optional, default 0

Number of large, isolated events to consider for optimizing g. No optimization if optimize_g=0.

decimate: int, optional, default 5

Decimation factor for estimating hyper-parameters faster on decimated data.

shift: int, optional, default 100

Number of frames by which to shift the window from one run of NNLS to the next.

window: int, optional, default None (200 or larger dependent on g)

Window size.

tol: float, optional, default 1e-9

Tolerance parameter.

max_iter: int, optional, default 1

Maximal number of iterations.

penalty: int, optional, default 1

Sparsity penalty. 1: minimize ||s||_1; 0: minimize ||s||_0

s_min: float, optional, default 0

Minimal non-zero activity within each bin (minimal ‘spike size’). For negative values the threshold is abs(s_min) * sn * sqrt(1 - decay_constant). If 0 the threshold is determined automatically such that RSS <= sn^2 T.

Returns:
c: array of float

The inferred denoised fluorescence signal at each time-bin.

s: array of float

Discretized deconvolved neural activity (spikes).

b: float

Fluorescence baseline value.

(g1, g2): tuple of float

Parameters of the AR(2) process that models the fluorescence impulse response.

lam: float

Sparsity penalty parameter lambda of dual problem.

References:

Friedrich J and Paninski L, NIPS 2016; Friedrich J, Zhou P, and Paninski L, arXiv 2016
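
A minimal deconvolution sketch on a synthetic AR(1) trace; the seven return values follow the order documented for constrained_foopsi above:

import numpy as np
from caiman.source_extraction.cnmf.deconvolution import constrained_foopsi

rng = np.random.default_rng(0)
T, g_true = 1000, 0.95
spikes = (rng.random(T) < 0.01).astype(float)   # sparse synthetic spikes
calcium = np.zeros(T)
for t in range(1, T):                           # AR(1) calcium dynamics
    calcium[t] = g_true * calcium[t - 1] + spikes[t]
fluor = calcium + 0.3 * rng.standard_normal(T)  # add measurement noise

c, bl, c1, g, sn, sp, lam = constrained_foopsi(fluor, p=1, method_deconvolution='oasis')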

Parameter Setting

class caiman.source_extraction.cnmf.params.CNMFParams(fnames=None, dims=None, dxy=(1, 1), border_pix=0, del_duplicates=False, low_rank_background=True, memory_fact=1, n_processes=1, nb_patch=1, p_ssub=2, p_tsub=2, remove_very_bad_comps=False, rf=None, stride=None, check_nan=True, n_pixels_per_process=None, k=30, alpha_snmf=0.5, center_psf=False, gSig=[5, 5], gSiz=None, init_iter=2, method_init='greedy_roi', min_corr=0.85, min_pnr=20, gnb=1, normalize_init=True, options_local_NMF=None, ring_size_factor=1.5, rolling_length=100, rolling_sum=True, ssub=2, ssub_B=2, tsub=2, block_size_spat=5000, num_blocks_per_run_spat=20, block_size_temp=5000, num_blocks_per_run_temp=20, update_background_components=True, method_deconvolution='oasis', p=2, s_min=None, do_merge=True, merge_thresh=0.8, decay_time=0.4, fr=30, min_SNR=2.5, rval_thr=0.8, N_samples_exceptionality=None, batch_update_suff_stat=False, expected_comps=500, iters_shape=5, max_comp_update_shape=inf, max_num_added=5, min_num_trial=5, minibatch_shape=100, minibatch_suff_stat=5, n_refit=0, num_times_comp_updated=inf, simultaneously=False, sniper_mode=False, test_both=False, thresh_CNN_noisy=0.5, thresh_fitness_delta=-50, thresh_fitness_raw=None, thresh_overlap=0.5, update_freq=200, update_num_comps=True, use_dense=True, use_peak_max=True, only_init_patch=True, var_name_hdf5='mov', max_merge_area=None, use_corr_img=False, params_dict={})

Class for setting and changing the various parameters.

Methods

change_params(params_dict[, verbose])

Method for updating the params object by providing a single dictionary.

check_consistency()

Populates the params object with some dataset dependent values and ensures that certain constraints are satisfied.

get(group, key)

Get a value for a given group and key.

get_group(group)

Get the dictionary of key-value pairs for a group.

set(group, val_dict[, set_if_not_exists, ...])

Add key-value pairs to a group. Existing key-value pairs will be overwritten

to_dict()

Returns the params class as a dictionary with subdictionaries for each category.

CNMFParams.__init__(fnames=None, dims=None, dxy=(1, 1), border_pix=0, del_duplicates=False, low_rank_background=True, memory_fact=1, n_processes=1, nb_patch=1, p_ssub=2, p_tsub=2, remove_very_bad_comps=False, rf=None, stride=None, check_nan=True, n_pixels_per_process=None, k=30, alpha_snmf=0.5, center_psf=False, gSig=[5, 5], gSiz=None, init_iter=2, method_init='greedy_roi', min_corr=0.85, min_pnr=20, gnb=1, normalize_init=True, options_local_NMF=None, ring_size_factor=1.5, rolling_length=100, rolling_sum=True, ssub=2, ssub_B=2, tsub=2, block_size_spat=5000, num_blocks_per_run_spat=20, block_size_temp=5000, num_blocks_per_run_temp=20, update_background_components=True, method_deconvolution='oasis', p=2, s_min=None, do_merge=True, merge_thresh=0.8, decay_time=0.4, fr=30, min_SNR=2.5, rval_thr=0.8, N_samples_exceptionality=None, batch_update_suff_stat=False, expected_comps=500, iters_shape=5, max_comp_update_shape=inf, max_num_added=5, min_num_trial=5, minibatch_shape=100, minibatch_suff_stat=5, n_refit=0, num_times_comp_updated=inf, simultaneously=False, sniper_mode=False, test_both=False, thresh_CNN_noisy=0.5, thresh_fitness_delta=-50, thresh_fitness_raw=None, thresh_overlap=0.5, update_freq=200, update_num_comps=True, use_dense=True, use_peak_max=True, only_init_patch=True, var_name_hdf5='mov', max_merge_area=None, use_corr_img=False, params_dict={})

Class for setting the processing parameters. All parameters for CNMF, online-CNMF, quality testing, and motion correction can be set here and then used in the various processing pipeline steps. The preferred way to set parameters is by using the set function, where a subclass is determined and a dictionary is passed. The whole dictionary can also be initialized at once by passing a dictionary params_dict when initializing the CNMFParams object. Direct setting of the positional arguments in CNMFParams is only present for backwards compatibility reasons and should not be used if possible.
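
A minimal sketch of the preferred dictionary-based usage (file name hypothetical):

from caiman.source_extraction.cnmf.params import CNMFParams

params_dict = {'fnames': ['example.tif'],  # hypothetical file
               'fr': 30,
               'decay_time': 0.4,
               'p': 1,
               'gSig': [5, 5]}
opts = CNMFParams(params_dict=params_dict)
opts.change_params({'min_SNR': 2.0})  # update any parameter later
print(opts.get('init', 'gSig'))       # read back a value by group and key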

Args:

Any parameter that is not set gets a default value specified by the default options dictionary

DATA PARAMETERS (CNMFParams.data) #####

fnames: list[str]

list of complete paths to files that need to be processed

dims: (int, int), default: computed from fnames

dimensions of the FOV in pixels

fr: float, default: 30

imaging rate in frames per second

decay_time: float, default: 0.4

length of typical transient in seconds

dxy: (float, float)

spatial resolution of FOV in pixels per um

var_name_hdf5: str, default: ‘mov’

if loading from hdf5 name of the variable to load

caiman_version: str

version of CaImAn being used

last_commit: str

hash of last commit in the caiman repo

mmap_F: list[str]

paths to F-order memory mapped files after motion correction

mmap_C: str

path to C-order memory mapped file after motion correction

PATCH PARAMS (CNMFParams.patch)######

rf: int or list or None, default: None

Half-size of patch in pixels. If None, no patches are constructed and the whole FOV is processed jointly. If list, it should be a list of two elements corresponding to the height and width of patches

stride: int or None, default: None

Overlap between neighboring patches in pixels.

nb_patch: int, default: 1

Number of (local) background components per patch

border_pix: int, default: 0

Number of pixels to exclude around each border.

low_rank_background: bool, default: True

Whether to update the background using a low rank approximation. If False all the nonzero elements of the background components are updated using hals (to be used with one background per patch)

del_duplicates: bool, default: False

Delete duplicate components in the overlapping regions between neighboring patches. If False, then merging is used.

only_init: bool, default: True

whether to run only the initialization

p_patch: int, default: 0

order of AR dynamics when processing within a patch

skip_refinement: bool, default: False

Whether to skip refinement of components (deprecated?)

remove_very_bad_comps: bool, default: True

Whether to remove (very) bad quality components during patch processing

p_ssub: float, default: 2

Spatial downsampling factor

p_tsub: float, default: 2

Temporal downsampling factor

memory_fact: float, default: 1

unitless number for increasing the amount of available memory

n_processes: int

Number of processes used for processing patches in parallel

in_memory: bool, default: True

Whether to load patches in memory

PRE-PROCESS PARAMS (CNMFParams.preprocess) #############

sn: np.array or None, default: None

noise level for each pixel

noise_range: [float, float], default: [.25, .5]

range of normalized frequencies over which to compute the PSD for noise determination

noise_method: ‘mean’|’median’|’logmexp’, default: ‘mean’

PSD averaging method for computing the noise std

max_num_samples_fft: int, default: 3*1024

Chunk size for computing the PSD of the data (for memory considerations)

n_pixels_per_process: int, default: 1000

Number of pixels to be allocated to each process

compute_g: bool, default: False

whether to estimate global time constant

p: int, default: 2

order of AR indicator dynamics

lags: int, default: 5

number of lags to be considered for time constant estimation

include_noise: bool, default: False

flag for using noise values when estimating g

pixels: list, default: None

pixels to be excluded due to saturation

check_nan: bool, default: True

whether to check for NaNs

INIT PARAMS (CNMFParams.init)###############

K: int, default: 30

number of components to be found (per patch or whole FOV depending on whether rf=None)

SC_kernel: {‘heat’, ‘cos’, ‘binary’}, default: ‘heat’

kernel for graph affinity matrix

SC_sigma: float, default: 1

variance for SC kernel

SC_thr: float, default: 0,

threshold for affinity matrix

SC_normalize: bool, default: True

standardize entries prior to computing the affinity matrix

SC_use_NN: bool, default: False

sparsify affinity matrix by using only nearest neighbors

SC_nnn: int, default: 20

number of nearest neighbors to use

gSig: [int, int], default: [5, 5]

radius of average neurons (in pixels)

gSiz: [int, int], default: [int(round((x * 2) + 1)) for x in gSig],

half-size of bounding box for each neuron

center_psf: bool, default: False

whether to use 1p data processing mode. Set to true for 1p

ssub: float, default: 2

spatial downsampling factor

tsub: float, default: 2

temporal downsampling factor

nb: int, default: 1

number of background components

lambda_gnmf: float, default: 1.

regularization weight for graph NMF

maxIter: int, default: 5

number of HALS iterations during initialization

method_init: ‘greedy_roi’|’corr_pnr’|’sparse_NMF’|’local_NMF’, default: ‘greedy_roi’

initialization method. use ‘corr_pnr’ for 1p processing and ‘sparse_NMF’ for dendritic processing.

min_corr: float, default: 0.85

minimum value of correlation image for determining a candidate component during corr_pnr

min_pnr: float, default: 20

minimum value of psnr image for determining a candidate component during corr_pnr

seed_method: str {‘auto’, ‘manual’, ‘semi’}

methods for choosing seed pixels during greedy_roi or corr_pnr initialization. ‘semi’ detects nr components automatically and allows the user to add more manually if running as a notebook. ‘semi’ and ‘manual’ require a backend that does not inline figures, e.g. %matplotlib tk

ring_size_factor: float, default: 1.5

radius of ring (*gSig) for computing background during corr_pnr

ssub_B: float, default: 2

downsampling factor for background during corr_pnr

init_iter: int, default: 2

number of iterations during corr_pnr (1p) initialization

nIter: int, default: 5

number of rank-1 refinement iterations during greedy_roi initialization

rolling_sum: bool, default: True

use rolling sum (as opposed to full sum) for determining candidate centroids during greedy_roi

rolling_length: int, default: 100

width of rolling window for rolling sum option

kernel: np.array or None, default: None

user specified template for greedyROI

max_iter_snmf: int, default: 500

maximum number of iterations for sparse NMF initialization

alpha_snmf: float, default: 0.5

sparse NMF sparsity regularization weight

sigma_smooth_snmf: (float, float, float), default: (.5, .5, .5)

std of Gaussian kernel for smoothing data in sparse_NMF

perc_baseline_snmf: float, default: 20

percentile to be removed from the data in sparse_NMF prior to decomposition

normalize_init: bool, default: True

whether to equalize the movies during initialization

options_local_NMF: dict

dictionary with parameters to pass to local_NMF initializer
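
Example of usage for 1p data (a sketch following typical CNMF-E demo settings; the values are illustrative, not prescriptions):

opts.change_params({'method_init': 'corr_pnr',  # 1p initialization
                    'K': None,                  # detect components via min_corr/min_pnr instead of a fixed number
                    'gSig': (3, 3),             # assumed average neuron radius in pixels
                    'gSiz': (13, 13),           # bounding box, roughly 4*gSig + 1 here
                    'center_psf': True,         # 1p processing mode
                    'min_corr': 0.8,
                    'min_pnr': 10,
                    'ring_size_factor': 1.4})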

SPATIAL PARAMS (CNMFParams.spatial)

method_exp: ‘dilate’|’ellipse’, default: ‘dilate’

method for expanding footprint of spatial components

dist: float, default: 3

expansion factor of ellipse

expandCore: morphological element, default: None(?)

morphological element for expanding footprints under dilate

nb: int, default: 1

number of global background components

n_pixels_per_process: int, default: 1000

number of pixels to be processed by each worker

thr_method: ‘nrg’|’max’, default: ‘nrg’

thresholding method

maxthr: float, default: 0.1

Max threshold

nrgthr: float, default: 0.9999

Energy threshold

extract_cc: bool, default: True

whether to extract connected components during thresholding (might want to turn to False for dendritic imaging)

medw: (int, int), default: None

window of median filter (set to (3,)*len(dims) in cnmf.fit)

se: np.array or None, default: None

Morphological closing structuring element (set to np.ones((3,)*len(dims), dtype=np.uint8) in cnmf.fit)

ss: np.array or None, default: None

Binary element for determining connectivity (set to np.ones((3,)*len(dims), dtype=np.uint8) in cnmf.fit)

update_background_components: bool, default: True

whether to update the spatial background components

method_ls: ‘lasso_lars’|’nnls_L0’, default: ‘lasso_lars’

‘nnls_L0’: nonnegative least squares with L0 penalty; ‘lasso_lars’: LassoLars function from scikit-learn

block_size: int, default: 5000

Number of pixels to process at the same time for dot product. Reduce if you face memory problems

num_blocks_per_run: int, default: 20

Parallelization of A’*Y operation

normalize_yyt_one: bool, default: True

Whether to normalize the C and A matrices so that diag(C*C.T) = 1 during update spatial

TEMPORAL PARAMS (CNMFParams.temporal)

ITER: int, default: 2

block coordinate descent iterations

method_deconvolution: ‘oasis’|’cvx’|’cvxpy’, default: ‘oasis’

method for solving the constrained deconvolution problem (‘oasis’, ‘cvx’ or ‘cvxpy’). If ‘cvxpy’ is used, a primary and a secondary solver are specified (the secondary is tried if the problem is infeasible, to obtain an approximate solution)

solvers: ‘ECOS’|’SCS’, default: [‘ECOS’, ‘SCS’]

solvers to be used with cvxpy, can be ‘ECOS’,’SCS’ or ‘CVXOPT’

p: 0|1|2, default: 2

order of AR indicator dynamics

memory_efficient: bool, default: False

whether to optimize for memory usage at the cost of longer running times

bas_nonneg: bool, default: True

whether to set a non-negative baseline (otherwise b >= min(y))

noise_range: [float, float], default: [.25, .5]

range of normalized frequencies over which to compute the PSD for noise determination

noise_method: ‘mean’|’median’|’logmexp’, default: ‘mean’

PSD averaging method for computing the noise std

lags: int, default: 5

number of autocovariance lags to be considered for time constant estimation

optimize_g: bool, default: False

flag for optimizing time constants

fudge_factor: float (close to but smaller than 1), default: .96

bias correction factor for discrete time constants

nb: int, default: 1

number of global background components

verbosity: bool, default: False

whether to be verbose

block_size: int, default: 5000

Number of pixels to process at the same time for dot product. Reduce if you face memory problems

num_blocks_per_run: int, default: 20

Parallelization of A’*Y operation

s_min: float or None, default: None

Minimum spike threshold amplitude (computed in the code if used).

MERGE PARAMS (CNMFParams.merge)

do_merge: bool, default: True

Whether or not to merge

thr: float, default: 0.8

Trace correlation threshold for merging two components.

merge_parallel: bool, default: False

Perform merging in parallel

max_merge_area: int or None, default: None

maximum area (in pixels) of merged components, used to determine whether to merge components during fitting process

QUALITY EVALUATION PARAMETERS (CNMFParams.quality)

min_SNR: float, default: 2.5

trace SNR threshold. Traces with SNR above this will get accepted

SNR_lowest: float, default: 0.5

minimum required trace SNR. Traces with SNR below this will get rejected

rval_thr: float, default: 0.8

space correlation threshold. Components with correlation higher than this will get accepted

rval_lowest: float, default: -1

minimum required space correlation. Components with correlation below this will get rejected

use_cnn: bool, default: True

flag for using the CNN classifier.

min_cnn_thr: float, default: 0.9

CNN classifier threshold. Components with score higher than this will get accepted

cnn_lowest: float, default: 0.1

minimum required CNN threshold. Components with score lower than this will get rejected.

gSig_range: list of integers, default: None

gSig scale values for the CNN classifier. If not None, multiple values are tested in the CNN classifier.
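
Example of usage (a sketch; assumes a fitted CNMF object cnm, the memory mapped movie images, and a cluster handle dview already exist):

cnm.params.change_params({'min_SNR': 2.5,      # accept traces with SNR above this
                          'SNR_lowest': 0.5,   # reject traces with SNR below this
                          'rval_thr': 0.85,
                          'use_cnn': True,
                          'min_cnn_thr': 0.99,
                          'cnn_lowest': 0.1})
cnm.estimates.evaluate_components(images, cnm.params, dview=dview)
print(cnm.estimates.idx_components)      # accepted components
print(cnm.estimates.idx_components_bad)  # rejected components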

ONLINE CNMF (ONACID) PARAMETERS (CNMFParams.online)

N_samples_exceptionality: int, default: np.ceil(decay_time*fr),

Number of frames over which trace SNR is computed (usually length of a typical transient)

batch_update_suff_stat: bool, default: False

Whether to update sufficient statistics in batch mode

ds_factor: int, default: 1,

spatial downsampling factor for faster processing (if > 1)

dist_shape_update: bool, default: False,

update shapes in a distributed fashion

epochs: int, default: 1,

number of times to go over data

expected_comps: int, default: 500

number of expected components (for memory allocation purposes)

full_XXt: bool, default: False

save the full residual sufficient statistic matrix for updating W in 1p. If set to False, a list of submatrices is saved (typically faster).

init_batch: int, default: 200,

length of mini batch used for initialization

init_method: ‘bare’|’cnmf’|’seeded’, default: ‘bare’,

initialization method

iters_shape: int, default: 5

Number of block-coordinate descent iterations for each shape update

max_comp_update_shape: int, default: np.inf

Maximum number of spatial components to be updated at each time

max_num_added: int, default: 5

Maximum number of new components to be added in each frame

max_shifts_online: int, default: 10,

Maximum shifts for motion correction during online processing

min_SNR: float, default: 2.5

Trace SNR threshold for accepting a new component

min_num_trial: int, default: 5

Number of new possible components for each frame

minibatch_shape: int, default: 100

Number of frames stored in rolling buffer

minibatch_suff_stat: int, default: 5

mini batch size for updating sufficient statistics

motion_correct: bool, default: True

Whether to perform motion correction during online processing

movie_name_online: str, default: ‘online_movie.avi’

Name of saved movie (appended in the data directory)

normalize: bool, default: False

Whether to normalize each frame prior to online processing

n_refit: int, default: 0

Number of additional iterations for computing traces

num_times_comp_updated: int, default: np.inf

opencv_codec: str, default: ‘H264’

FourCC video codec for saving movie. Check http://www.fourcc.org/codecs.php

path_to_model: str, default: os.path.join(caiman_datadir(), ‘model’, ‘cnn_model_online.h5’)

Path to online CNN classifier

rval_thr: float, default: 0.8

space correlation threshold for accepting a new component

save_online_movie: bool, default: False

Whether to save the results movie

show_movie: bool, default: False

Whether to display movie of online processing

simultaneously: bool, default: False

Whether to demix and deconvolve simultaneously

sniper_mode: bool, default: False

Whether to use the online CNN classifier for screening candidate components (otherwise space correlation is used)

test_both: bool, default: False

Whether to use both the CNN and space correlation for screening new components

thresh_CNN_noisy: float, default: 0.5

Threshold for the online CNN classifier

thresh_fitness_delta: float (negative)

Derivative test for detecting traces

thresh_fitness_raw: float (negative), default: computed from min_SNR

Threshold value for testing trace SNR

thresh_overlap: float, default: 0.5

Intersection-over-Union space overlap threshold for screening new components

update_freq: int, default: 200

Update each shape at least once every X frames when in distributed mode

update_num_comps: bool, default: True

Whether to search for new components

use_dense: bool, default: True

Whether to store and represent A and b as a dense matrix

use_peak_max: bool, default: True

Whether to find candidate centroids using skimage’s find local peaks function

MOTION CORRECTION PARAMETERS (CNMFParams.motion)

border_nan: bool or str, default: ‘copy’

flag for allowing NaN in the boundaries. True allows NaN, whereas ‘copy’ copies the value of the nearest data point.

gSig_filt: int or None, default: None

size of kernel for high pass spatial filtering in 1p data. If None no spatial filtering is performed

is3D: bool, default: False

flag for 3D recordings for motion correction

max_deviation_rigid: int, default: 3

maximum deviation in pixels between rigid shifts and shifts of individual patches

max_shifts: (int, int), default: (6,6)

maximum shifts per dimension in pixels.

min_mov: float or None, default: None

minimum value of movie. If None it gets computed.

niter_rig: int, default: 1

number of iterations of rigid motion correction.

nonneg_movie: bool, default: True

flag for producing a non-negative movie.

num_frames_split: int, default: 80

split movie every x frames for parallel processing

num_splits_to_process_els: default: [7, None]

num_splits_to_process_rig: default: None

overlaps: (int, int), default: (24, 24)

overlap between patches in pixels in pw-rigid motion correction.

pw_rigid: bool, default: False

flag for performing pw-rigid motion correction.

shifts_opencv: bool, default: True

flag for applying shifts using cubic interpolation (otherwise FFT)

splits_els: int, default: 14

number of splits across time for pw-rigid registration

splits_rig: int, default: 14

number of splits across time for rigid registration

strides: (int, int), default: (96, 96)

how often to start a new patch in pw-rigid registration. Size of each patch will be strides + overlaps

upsample_factor_grid: int, default: 4

motion field upsampling factor during FFT shifts.

use_cuda: bool, default: False

flag for using a GPU.

indices: tuple(slice), default: (slice(None), slice(None))

Use that to apply motion correction only on a part of the FOV
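
Example of usage (a sketch; assumes fnames, dview and the CNMFParams object opts already exist):

from caiman.motion_correction import MotionCorrect

opts.change_params({'pw_rigid': True,            # piecewise rigid correction
                    'max_shifts': (6, 6),
                    'strides': (48, 48),
                    'overlaps': (24, 24),
                    'max_deviation_rigid': 3})
mc = MotionCorrect(fnames, dview=dview, **opts.get_group('motion'))
mc.motion_correct(save_movie=True)               # writes the corrected movie to a memory mapped file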

RING CNN PARAMETERS (CNMFParams.ring_CNN)

n_channels: int, default: 2

Number of “ring” kernels

use_bias: bool, default: False

Flag for using bias in the convolutions

use_add: bool, default: False

Flag for using an additive layer

pct: float between 0 and 1, default: 0.01

Quantile used during training with quantile loss function

patience: int, default: 3

Number of epochs to wait before early stopping

max_epochs: int, default: 100

Maximum number of epochs to be used during training

width: int, default: 5

Width of “ring” kernel

loss_fn: str, default: ‘pct’

Loss function specification (‘pct’ for quantile loss function, ‘mse’ for mean squared error)

lr: float, default: 1e-3

(initial) learning rate

lr_scheduler: function, default: None

Learning rate scheduler function

path_to_model: str, default: None

Path to saved weights (if training then path to saved model weights)

remove_activity: bool, default: False

Flag for removing activity of last frame prior to background extraction

reuse_model: bool, default: False

Flag for reusing an already trained model (saved in path to model)

CNMFParams.set(group, val_dict, set_if_not_exists=False, verbose=False)

Add key-value pairs to a group. Existing key-value pairs will be overwritten if specified in val_dict, but not deleted.

Args:
group: The name of the group.

val_dict: A dictionary with key-value pairs to be set for the group.

set_if_not_exists: Whether to set a key-value pair in a group if the key does not currently exist in the group.

CNMFParams.get(group, key)

Get a value for a given group and key. Raises an exception if no such group/key combination exists.

Args:
group: The name of the group.

key: The key for the property in the group of interest.

Returns: The value for the group/key combination.

CNMFParams.get_group(group)

Get the dictionary of key-value pairs for a group.

Args:

group: The name of the group.

CNMFParams.change_params(params_dict, verbose=False)

Method for updating the params object by providing a single dictionary. For each key in the provided dictionary the method will search in all subdictionaries and will update the value if it finds a match.

Args:
params_dict: dictionary with parameters to be changed and new values.

verbose: bool (False). Print a message for all keys.

CNMFParams.to_dict()

Returns the params class as a dictionary with subdictionaries for each category.
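
Example of usage of the accessor methods above (a short sketch on a CNMFParams object opts):

opts.set('patch', {'rf': 20})            # update keys within one group
rf = opts.get('patch', 'rf')             # returns 20
patch_opts = opts.get_group('patch')     # dict with all patch parameters
opts.change_params({'p': 1})             # searches every group for the key 'p'
opts_dict = opts.to_dict()               # nested dict with one sub-dict per category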

CNMF

class caiman.source_extraction.cnmf.cnmf.CNMF(n_processes, k=5, gSig=[4, 4], gSiz=None, merge_thresh=0.8, p=2, dview=None, Ain=None, Cin=None, b_in=None, f_in=None, do_merge=True, ssub=2, tsub=2, p_ssub=1, p_tsub=1, method_init='greedy_roi', alpha_snmf=0.5, rf=None, stride=None, memory_fact=1, gnb=1, nb_patch=1, only_init_patch=False, method_deconvolution='oasis', n_pixels_per_process=4000, block_size_temp=5000, num_blocks_per_run_temp=20, block_size_spat=5000, num_blocks_per_run_spat=20, check_nan=True, skip_refinement=False, normalize_init=True, options_local_NMF=None, minibatch_shape=100, minibatch_suff_stat=3, update_num_comps=True, rval_thr=0.9, thresh_fitness_delta=-20, thresh_fitness_raw=None, thresh_overlap=0.5, max_comp_update_shape=inf, num_times_comp_updated=inf, batch_update_suff_stat=False, s_min=None, remove_very_bad_comps=False, border_pix=0, low_rank_background=True, update_background_components=True, rolling_sum=True, rolling_length=100, min_corr=0.85, min_pnr=20, ring_size_factor=1.5, center_psf=False, use_dense=True, deconv_flag=True, simultaneously=False, n_refit=0, del_duplicates=False, N_samples_exceptionality=None, max_num_added=3, min_num_trial=2, thresh_CNN_noisy=0.5, fr=30, decay_time=0.4, min_SNR=2.5, ssub_B=2, init_iter=2, sniper_mode=False, use_peak_max=False, test_both=False, expected_comps=500, max_merge_area=None, params=None)

Source extraction using constrained non-negative matrix factorization.

The general class used to produce a factorization of the data matrix Y (the movie), using the modules inside the cnmf folder. Its architecture is similar to that of scikit-learn: calling the fit method runs the full algorithm implemented by the class.

See Also: http://www.cell.com/neuron/fulltext/S0896-6273(15)01084-3

Methods

HALS4footprints(Yr[, update_bck, num_iter])

Uses hierarchical alternating least squares to update shapes and background

HALS4traces(Yr[, groups, use_groups, order, ...])

Solves C, f = argmin_C ||Yr-AC-bf|| using block-coordinate descent.

compute_residuals(Yr)

Compute residual trace for each component (variable YrA).

deconvolve([p, method_deconvolution, ...])

Performs deconvolution on already extracted traces using constrained foopsi.

fit(images[, indices])

This method uses the cnmf algorithm to find sources in data.

fit_file([motion_correct, indices, include_eval])

This method packages the analysis pipeline (motion correction, memory mapping, patch based CNMF processing and component evaluation) in a single method that can be called on a specific (sequence of) file(s).

initialize(Y, **kwargs)

Component initialization

merge_comps(Y[, mx, fast_merge, max_merge_area])

merges components

preprocess(Yr)

Examines data to remove corrupted pixels and computes the noise level estimate for each pixel.

refit(images[, dview])

Refits the data using CNMF initialized from a previous iteration

remove_components(ind_rm)

Remove a specified list of components from the CNMF object.

save(filename)

save object in hdf5 file format

update_spatial(Y[, use_init])

Updates spatial components

update_temporal(Y[, use_init])

Updates temporal components

CNMF.fit(images, indices=(slice(None, None, None), slice(None, None, None)))

This method uses the cnmf algorithm to find sources in data.

Args:

images : mapped np.ndarray of shape (t,x,y[,z]) containing the images that vary over time.

indices: list of slice objects along dimensions (x,y[,z]) for processing only part of the FOV

Returns:

self: updated using the cnmf algorithm with C,A,S,b,f computed according to the given initial values

http://www.cell.com/neuron/fulltext/S0896-6273(15)01084-3
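
Example of usage (a sketch of the standard batch pipeline; assumes opts, dview, n_processes and the memory mapped file name fname_new already exist):

import numpy as np
import caiman as cm
from caiman.source_extraction import cnmf

Yr, dims, T = cm.load_memmap(fname_new)
images = np.reshape(Yr.T, [T] + list(dims), order='F')  # (t, x, y) view of the data
cnm = cnmf.CNMF(n_processes, params=opts, dview=dview)
cnm = cnm.fit(images)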

CNMF.refit(images, dview=None)

Refits the data using CNMF initialized from a previous iteration

Args:

images: mapped np.ndarray of shape (t,x,y[,z]) containing the images that vary over time

dview: multiprocessing or ipyparallel object used for parallelization (optional)

Returns:
cnm

A new CNMF object

CNMF.fit_file(motion_correct=False, indices=None, include_eval=False)

This method packages the analysis pipeline (motion correction, memory mapping, patch based CNMF processing and component evaluation) in a single method that can be called on a specific (sequence of) file(s). It is assumed that the CNMF object already contains a params object where the location of the files and all the relevant parameters have been specified. The method will perform the last step, i.e. component evaluation, if the flag “include_eval” is set to True.

Args:
motion_correct (bool)

flag for performing motion correction

indices (list of slice objects)

perform analysis only on a part of the FOV

include_eval (bool)

flag for performing component evaluation

Returns:

cnmf object with the current estimates

CNMF.save(filename)

save object in hdf5 file format

Args:
filename: str

path to the hdf5 file containing the saved object

CNMF.deconvolve(p=None, method_deconvolution=None, bas_nonneg=None, noise_method=None, optimize_g=0, s_min=None, **kwargs)

Performs deconvolution on already extracted traces using constrained foopsi.

CNMF.update_spatial(Y, use_init=True, **kwargs)

Updates spatial components

Args:
Y: np.array (d1*d2) x T

input data

use_init: bool

use Cin, f_in for computing A, b otherwise use C, f

Returns:
self

modified values self.estimates.A, self.estimates.b possibly self.estimates.C, self.estimates.f

CNMF.update_temporal(Y, use_init=True, **kwargs)

Updates temporal components

Args:
Y: np.array (d1*d2) x T

input data

CNMF.compute_residuals(Yr)

Compute residual trace for each component (variable YrA). WARNING: At the moment this method is valid only for the 2p processing pipeline

Args:
Yr: np.ndarray

movie in format pixels (d) x frames (T)

CNMF.remove_components(ind_rm)

Remove a specified list of components from the CNMF object.

Args:
ind_rm: list

indices of components to be removed

CNMF.HALS4traces(Yr, groups=None, use_groups=False, order=None, update_bck=True, bck_non_neg=True, **kwargs)

Solves C, f = argmin_C ||Yr-AC-bf|| using block-coordinate descent. Can use groups to update non-overlapping components in parallel or a specified order.

Args:
Yr: np.array (possibly memory mapped, (x,y,[,z]) x t)

Imaging data reshaped in matrix format

groups: list of sets

grouped components to be updated simultaneously

use_groups: bool

flag for using groups

order: list

Update components in that order (used if nonempty and groups=None)

update_bck: bool

Flag for updating temporal background components

bck_non_neg: bool

Require temporal background to be non-negative

Returns:

self (updated values for self.estimates.C, self.estimates.f, self.estimates.YrA)

CNMF.HALS4footprints(Yr, update_bck=True, num_iter=2)

Uses hierarchical alternating least squares to update shapes and background

Args:
Yr: np.array (possibly memory mapped, (x,y,[,z]) x t)

Imaging data reshaped in matrix format

update_bck: bool

flag for updating spatial background components

num_iter: int

number of iterations

Returns:

self (updated values for self.estimates.A and self.estimates.b)

CNMF.merge_comps(Y, mx=50, fast_merge=True, max_merge_area=None)

merges components

CNMF.initialize(Y, **kwargs)

Component initialization

CNMF.preprocess(Yr)

Examines data to remove corrupted pixels and computes the noise level estimate for each pixel.

Args:
Yr: np.array (or memmap.array)

2d array of data (pixels x timesteps) typically in memory mapped form

caiman.source_extraction.cnmf.cnmf.load_CNMF(filename: str, n_processes=1, dview=None)

load object saved with the CNMF save method

Args:
filename:

hdf5 (or nwb) file name containing the saved object

dview: multiprocessing or ipyparallel object

used to set up parallelization, default None
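
Example of a save/load round trip (a sketch; the file name is a placeholder):

from caiman.source_extraction.cnmf.cnmf import load_CNMF

cnm.save('analysis_results.hdf5')
cnm2 = load_CNMF('analysis_results.hdf5', n_processes=1, dview=None)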

Online CNMF (OnACID)

class caiman.source_extraction.cnmf.online_cnmf.OnACID(params=None, estimates=None, path=None, dview=None, Ain=None)

Source extraction of streaming data using online matrix factorization. The class can be initialized by passing a “params” object for setting up the relevant parameters and an “Estimates” object for setting an initial state of the algorithm (optional)

Methods:
initialize_online:

Initialize the online algorithm using a provided method, and prepare the online object

_prepare_object:

Prepare the online object given a set of estimates

fit_next:

Fit the algorithm on the next data frame

fit_online:

Run the entire online pipeline on a given list of files

Methods

fit_next(t, frame_in[, num_iters_hals])

This method fits the next frame using the CaImAn online algorithm and updates the object.

fit_online(**kwargs)

Implements the caiman online algorithm on the list of files fls.

mc_next(t, frame)

Perform online motion correction on the next frame

save(filename)

save object in hdf5 file format

create_frame

initialize_online

OnACID.fit_online(**kwargs)

Implements the caiman online algorithm on the list of files fls. The files are taken in alphanumerical order and are assumed to each have the same number of frames (except the last one, which can be shorter). Caiman online is initialized using the seeded or bare initialization methods.

Args:
fls: list

list of files to be processed

init_batch: int

number of frames to be processed during initialization

epochs: int

number of passes over the data

motion_correct: bool

flag for performing motion correction

kwargs: dict

additional parameters used to modify self.params.online; see options[‘online’] for details

Returns:

self (results of caiman online)
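
Example of usage (a sketch; assumes a CNMFParams object opts whose 'fnames' and online parameters have been set):

from caiman.source_extraction import cnmf

cnm = cnmf.online_cnmf.OnACID(params=opts)
cnm.fit_online()          # files are read from the params object; results end up in cnm.estimates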

OnACID.fit_next(t, frame_in, num_iters_hals=3)

This method fits the next frame using the CaImAn online algorithm and updates the object. Does NOT perform motion correction, see mc_next()

Args:
t: int

temporal index of the next frame to fit

frame_in: array

flattened array of shape (x * y [ * z],) containing the t-th image.

num_iters_hals: int, optional

maximal number of iterations for HALS (NNLS via blockCD)

OnACID.save(filename)

save object in hdf5 file format

Args:
filename: str

path to the hdf5 file containing the saved object

OnACID.initialize_online(model_LN=None, T=None)

caiman.source_extraction.cnmf.online_cnmf.load_OnlineCNMF(filename, dview=None)

load object saved with the CNMF save method

Args:
filename: str

hdf5 file name containing the saved object

dview: multiprocessing or ipyparallel object

useful to set up parallelization in the objects

Preprocessing

caiman.source_extraction.cnmf.pre_processing.preprocess_data(Y, sn=None, dview=None, n_pixels_per_process=100, noise_range=[0.25, 0.5], noise_method='logmexp', compute_g=False, p=2, lags=5, include_noise=False, pixels=None, max_num_samples_fft=3000, check_nan=True)

Performs the pre-processing operations described above.

Args:
Y: ndarray

input movie (n_pixels x Time). Can be also memory mapped file.

n_processes: [optional] int

number of processes/threads to use concurrently

n_pixels_per_process: [optional] int

number of pixels to be simultaneously processed by each process

p: positive integer

order of AR process, default: 2

lags: positive integer

number of lags in the past to consider for determining time constants. Default 5

include_noise: Boolean

Flag to include pre-estimated noise value when determining time constants. Default: False

noise_range: np.ndarray [2 x 1] between 0 and 0.5

Range of frequencies compared to Nyquist rate over which the power spectrum is averaged default: [0.25,0.5]

noise_method: string

method of averaging the noise. Choices: ‘mean’: Mean; ‘median’: Median; ‘logmexp’: Exponential of the mean of the logarithm of PSD (default)

Returns:
Y: ndarray

movie preprocessed (n_pixels x Time). Can be also memory mapped file.

g: np.ndarray (p x 1)

Discrete time constants

psx: ndarray

position of those pixels

sn_s: ndarray (memory mapped)

file where to store the results of computation.

Initialization

caiman.source_extraction.cnmf.initialization.initialize_components(Y, K=30, gSig=[5, 5], gSiz=None, ssub=1, tsub=1, nIter=5, maxIter=5, nb=1, kernel=None, use_hals=True, normalize_init=True, img=None, method_init='greedy_roi', max_iter_snmf=500, alpha_snmf=0.5, sigma_smooth_snmf=(0.5, 0.5, 0.5), perc_baseline_snmf=20, options_local_NMF=None, rolling_sum=False, rolling_length=100, sn=None, options_total=None, min_corr=0.8, min_pnr=10, seed_method='auto', ring_size_factor=1.5, center_psf=False, ssub_B=2, init_iter=2, remove_baseline=True, SC_kernel='heat', SC_sigma=1, SC_thr=0, SC_normalize=True, SC_use_NN=False, SC_nnn=20, lambda_gnmf=1)

Initialize components. This function initializes the spatial footprints, temporal components, and background which are then further refined by the CNMF iterations. There are four different initialization methods depending on the data you’re processing:

greedy_roi: GreedyROI method used in standard 2p processing (default)

corr_pnr: GreedyCorr method used for processing 1p data

sparse_nmf: Sparse NMF method suitable for dendritic/axonal imaging

graph_nmf: Graph NMF method also suitable for dendritic/axonal imaging

By default the GreedyROI method does not use the RollingGreedyROI variant; this can be changed through the boolean flag ‘rolling_sum’.

All the methods can be used for volumetric data except ‘corr_pnr’ which is only available for 2D data.

It is also by default followed by hierarchical alternating least squares (HALS) NMF. Optional use of spatio-temporal downsampling to boost speed.

Args:
Y: np.ndarray

d1 x d2 [x d3] x T movie, raw data.

K: [optional] int

number of neurons to extract (default value: 30). Maximal number for method ‘corr_pnr’.

gSig: [optional] list,tuple

standard deviation of neuron size along x and y [and z] (default value: (5,5)).

gSiz: [optional] list,tuple

half width of bounding box used for components during initialization (default 2*gSig + 1).

nIter: [optional] int

number of iterations for shape tuning (default 5).

maxIter: [optional] int

number of iterations for HALS algorithm (default 5).

ssub: [optional] int

spatial downsampling factor recommended for large datasets (default 1, no downsampling).

tsub: [optional] int

temporal downsampling factor recommended for long datasets (default 1, no downsampling).

kernel: [optional] np.ndarray

User specified kernel for greedyROI (default None, greedy ROI searches for Gaussian shaped neurons)

use_hals: [optional] bool

Whether to refine components with the hals method

normalize_init: [optional] bool

Whether to normalize_init data before running the initialization

img: optional [np 2d array]

Image with which to normalize. If not present use the mean + offset

method_init: {‘greedy_roi’, ‘corr_pnr’, ‘sparse_nmf’, ‘graph_nmf’, ‘pca_ica’}

Initialization method (default: ‘greedy_roi’)

max_iter_snmf: int

Maximum number of sparse NMF iterations

alpha_snmf: scalar

Sparsity penalty

rolling_sum: boolean

Detect new components based on a rolling sum of pixel activity (default: False)

rolling_length: int

Length of rolling window (default: 100)

center_psf: Boolean

True indicates centering the filtering kernel for background removal. This is useful for data with large background fluctuations.

min_corr: float

minimum local correlation coefficients for selecting a seed pixel.

min_pnr: float

minimum peak-to-noise ratio for selecting a seed pixel.

seed_method: str {‘auto’, ‘manual’, ‘semi’}

methods for choosing seed pixels. ‘semi’ detects K components automatically and allows the user to add more manually if running as a notebook. ‘semi’ and ‘manual’ require a backend that does not inline figures, e.g. %matplotlib tk

ring_size_factor: float

it’s the ratio between the ring radius and neuron diameters.

nb: integer

number of background components for approximating the background using NMF model

sn: ndarray

per pixel noise

options_total: dict

the option dictionary

ssub_B: int, optional

downsampling factor for 1-photon imaging background computation

init_iter: int, optional

number of iterations for 1-photon imaging initialization

Returns:
Ain: np.ndarray

(d1 * d2 [ * d3]) x K , spatial filter of each neuron.

Cin: np.ndarray

T x K , calcium activity of each neuron.

center: np.ndarray

K x 2 [or 3] , inferred center of each neuron.

bin: np.ndarray

(d1 * d2 [ * d3]) x nb, initialization of spatial background.

fin: np.ndarray

nb x T matrix, initialization of temporal background

Raises:

Exception “Unsupported method”

Exception ‘You need to define arguments for local NMF’

caiman.source_extraction.cnmf.initialization.greedyROI(Y, nr=30, gSig=[5, 5], gSiz=[11, 11], nIter=5, kernel=None, nb=1, rolling_sum=False, rolling_length=100, seed_method='auto')

Greedy initialization of spatial and temporal components using spatial Gaussian filtering

Args:
Y: np.array

3d or 4d array of fluorescence data with time appearing in the last axis.

nr: int

number of components to be found

gSig: scalar or list of integers

standard deviation of Gaussian kernel along each axis

gSiz: scalar or list of integers

size of spatial component

nIter: int

number of iterations when refining estimates

kernel: np.ndarray

User specified kernel to be used, if present, instead of Gaussian (default None)

nb: int

Number of background components

rolling_sum: boolean

Detect new components based on a rolling sum of pixel activity (default: True)

rolling_length: int

Length of rolling window (default: 100)

seed_method: str {‘auto’, ‘manual’, ‘semi’}

methods for choosing seed pixels. ‘semi’ detects nr components automatically and allows the user to add more manually if running as a notebook. ‘semi’ and ‘manual’ require a backend that does not inline figures, e.g. %matplotlib tk

Returns:
A: np.array

2d array of size (# of pixels) x nr with the spatial components. Each column is ordered columnwise (matlab format, order=’F’)

C: np.array

2d array of size nr X T with the temporal components

center: np.array

2d array of size nr x 2 [ or 3] with the components centroids

Author:
Eftychios A. Pnevmatikakis and Andrea Giovannucci based on a matlab implementation by Yuanjun Gao

Simons Foundation, 2015

See Also:

http://www.cell.com/neuron/pdf/S0896-6273(15)01084-3.pdf

caiman.source_extraction.cnmf.initialization.greedyROI_corr(Y, Y_ds, max_number=None, gSiz=None, gSig=None, center_psf=True, min_corr=None, min_pnr=None, seed_method='auto', min_pixel=3, bd=0, thresh_init=2, ring_size_factor=None, nb=1, options=None, sn=None, save_video=False, video_name='initialization.mp4', ssub=1, ssub_B=2, init_iter=2)

initialize neurons based on pixels’ local correlations and peak-to-noise ratios.

Args:

data, max_number, gSiz, gSig, center_psf, min_corr, min_pnr, seed_method, min_pixel, bd, thresh_init, swap_dim, save_video, video_name: see init_neurons_corr_pnr for descriptions of these input arguments

ring_size_factor: float

it’s the ratio between the ring radius and neuron diameters.

ring_model: Boolean

True indicates using ring model to estimate the background components.

nb: integer

number of background components for approximating the background using an NMF model. For nb=0 the exact background of the ring model (b0 and W) is returned; for nb=-1 the full rank background B is returned; for nb<-1 no background is returned

ssub_B: int, optional

downsampling factor for 1-photon imaging background computation

init_iter: int, optional

number of iterations for 1-photon imaging initialization

caiman.source_extraction.cnmf.initialization.graphNMF(Y_ds, nr, max_iter_snmf=500, lambda_gnmf=1, sigma_smooth=(0.5, 0.5, 0.5), remove_baseline=True, perc_baseline=20, nb=1, truncate=2, tol=0.001, SC_kernel='heat', SC_normalize=True, SC_thr=0, SC_sigma=1, SC_use_NN=False, SC_nnn=20)

caiman.source_extraction.cnmf.initialization.sparseNMF(Y_ds, nr, max_iter_snmf=200, alpha=0.5, sigma_smooth=(0.5, 0.5, 0.5), remove_baseline=True, perc_baseline=20, nb=1, truncate=2)

Initialization using sparse NMF

Args:
Y_ds: nd.array or movie (x, y, T [,z])

data

nr: int

number of components

max_iter_snmf: int

number of iterations

alpha_snmf:

sparsity regularizer (alpha_W)

sigma_smooth_snmf:

smoothing along z,x, and y (.5,.5,.5)

perc_baseline_snmf:

percentile to remove from movie before NMF

nb: int

Number of background components

Returns:
A: np.array

2d array of size (# of pixels) x nr with the spatial components. Each column is ordered columnwise (matlab format, order=’F’)

C: np.array

2d array of size nr X T with the temporal components

center: np.array

2d array of size nr x 2 [ or 3] with the components centroids

Spatial Components

caiman.source_extraction.cnmf.spatial.update_spatial_components(Y, C=None, f=None, A_in=None, sn=None, dims=None, min_size=3, max_size=8, dist=3, normalize_yyt_one=True, method_exp='dilate', expandCore=None, dview=None, n_pixels_per_process=128, medw=(3, 3), thr_method='max', maxthr=0.1, nrgthr=0.9999, extract_cc=True, b_in=None, se=array([[1, 1, 1], [1, 1, 1], [1, 1, 1]]), ss=array([[1, 1, 1], [1, 1, 1], [1, 1, 1]]), nb=1, method_ls='lasso_lars', update_background_components=True, low_rank_background=True, block_size_spat=1000, num_blocks_per_run_spat=20)

update spatial footprints and background through Basis Pursuit Denoising

for each pixel i solve the problem

[A(i,:),b(i)] = argmin sum(A(i,:))

subject to

|| Y(i,:) - A(i,:)*C - b(i)*f || <= sn(i)*sqrt(T);

for each pixel the search is limited to a few spatial components

Args:
Y: np.ndarray (2D or 3D)

movie, raw data in 2D or 3D (pixels x time).

C: np.ndarray

calcium activity of each neuron.

f: np.ndarray

temporal profile of background activity.

A_in: np.ndarray

spatial profile of background activity. If A_in is boolean then it defines the spatial support of A. Otherwise it is used to determine it through determine_search_location

b_in: np.ndarray

you can pass background as input, especially in the case of one background per patch, since it will update using hals

dims: [optional] tuple

x, y[, z] movie dimensions

min_size: [optional] int

max_size: [optional] int

dist: [optional] int

sn: [optional] float

noise associated with each pixel if known

backend: [optional] str

‘ipyparallel’ or ‘single_thread’. single_thread: no parallelization, can be used with small datasets. ipyparallel: uses ipython clusters and sends jobs to each of them

n_pixels_per_process: [optional] int

number of pixels to be processed by each thread

method: [optional] string

method used to expand the search for pixels ‘ellipse’ or ‘dilate’

expandCore: [optional] scipy.ndimage.morphology

if method is dilate this represents the kernel used for expansion

dview: view on ipyparallel client

you need to create an ipyparallel client and pass a view on the processors (client = Client(), dview=client[:])

medw, thr_method, maxthr, nrgthr, extract_cc, se, ss: [optional]

Parameters for components post-processing. Refer to spatial.threshold_components for more details

nb: [optional] int

Number of background components

method_ls:
method to perform the regression for the basis pursuit denoising.

‘nnls_L0’: nonnegative least squares with L0 penalty; ‘lasso_lars’: LassoLars function from scikit-learn

normalize_yyt_one: bool

whether to normalize the C and A matrices so that diag(C*C.T) are ones

update_background_components: bool

whether to update the background components in the spatial phase

low_rank_background: bool

whether to update the background using a low rank approximation. If False, all the nonzero elements of the background components are updated using hals (to be used with one background per patch)

Returns:
A: np.ndarray

new estimate of spatial footprints

b: np.ndarray

new estimate of spatial background

C: np.ndarray

temporal components (updated only when spatial components are completely removed)

f: np.ndarray

same as f_in except if empty component deleted.

Raises:

Exception ‘You need to define the input dimensions’

Exception ‘Dimension of Matrix Y must be pixels x time’

Exception ‘Dimension of Matrix C must be neurons x time’

Exception ‘Dimension of Matrix f must be background comps x time ‘

Exception ‘Either A or C need to be determined’

Exception ‘Dimension of Matrix A must be pixels x neurons’

Exception ‘You need to provide estimate of C and f’

Exception ‘Not implemented consistently’

Exception “Failed to delete: “ + folder

Temporal Components

caiman.source_extraction.cnmf.temporal.update_temporal_components(Y, A, b, Cin, fin, bl=None, c1=None, g=None, sn=None, nb=1, ITER=2, block_size_temp=5000, num_blocks_per_run_temp=20, debug=False, dview=None, **kwargs)

Update temporal components and background given spatial components using a block coordinate descent approach.

Args:
Y: np.ndarray (2D)

input data with time in the last axis (d x T)

A: sparse matrix (csc format)

matrix of spatial components (d x K)

b: ndarray (dx1)

current estimate of spatial background component

Cin: np.ndarray

current estimate of temporal components (K x T)

fin: np.ndarray

current estimate of temporal background (vector of length T)

g: np.ndarray

Global time constant (not used)

bl: np.ndarray

baseline for fluorescence trace for each column in A

c1: np.ndarray

initial concentration for each column in A

g: np.ndarray

discrete time constant for each column in A

sn: np.ndarray

noise level for each column in A

nb: [optional] int

Number of background components

ITER: positive integer

Maximum number of block coordinate descent loops.

method_foopsi: string

Method of deconvolution of neural activity. constrained_foopsi is the only method supported at the moment.

n_processes: int
number of processes to use for parallel computation.

Should be less than the number of processes started with ipcluster.

backend: ‘str’

single_thread no parallelization ipyparallel, parallelization using the ipyparallel cluster. You should start the cluster (install ipyparallel and then type ipcluster -n 6, where 6 is the number of processes).

memory_efficient: Bool

whether or not to optimize for memory usage (longer running times). necessary with very large datasets

kwargs: dict
all parameters passed to constrained_foopsi except bl,c1,g,sn (see documentation).

Some useful parameters are

p: int

order of the autoregression model

method: [optional] string
solution method for constrained foopsi. Choices are

‘cvx’: using cvxopt and picos (slow, especially without the MOSEK solver); ‘cvxpy’: using cvxopt and cvxpy with the ECOS solver (faster, default)

solvers: list of strings
primary and secondary solvers (the secondary is tried if the problem is infeasible, to obtain an approximate solution)

solvers to be used with cvxpy, default is [‘ECOS’,’SCS’]

Note:

The temporal components are updated in parallel by default by forming a sequence of vertex covers.

Returns:
C: np.ndarray

matrix of temporal components (K x T)

A: np.ndarray

updated A

b: np.array

updated estimate

f: np.array

vector of temporal background (length T)

S: np.ndarray

matrix of merged deconvolved activity (spikes) (K x T)

bl: float

same as input

c1: float

same as input

sn: float

same as input

g: float

same as input

YrA: np.ndarray

matrix of spatial component filtered raw data, after all contributions have been removed. YrA corresponds to the residual trace for each component and is used for faster plotting (K x T)

lam: np.ndarray

Automatically tuned sparsity parameter

Merge components

caiman.source_extraction.cnmf.merging.merge_components(Y, A, b, C, R, f, S, sn_pix, temporal_params, spatial_params, dview=None, thr=0.85, fast_merge=True, mx=1000, bl=None, c1=None, sn=None, g=None, merge_parallel=False, max_merge_area=None) tuple[csc_matrix, ndarray, int, list, ndarray, float, float, float, float, list, ndarray]

Merging of spatially overlapping components that have highly correlated temporal activity

The correlation threshold for merging overlapping components is user specified in thr

Args:
Y: np.ndarray

residual movie after subtracting all found components (Y_res = Y - A*C - b*f) (d x T)

A: sparse matrix

matrix of spatial components (d x K)

b: np.ndarray

spatial background (vector of length d)

C: np.ndarray

matrix of temporal components (K x T)

R: np.ndarray

array of residuals (K x T)

f: np.ndarray

temporal background (vector of length T)

S: np.ndarray

matrix of deconvolved activity (spikes) (K x T)

sn_pix: ndarray

noise standard deviation for each pixel

temporal_params: dictionary

all the parameters that can be passed to the update_temporal_components function

spatial_params: dictionary

all the parameters that can be passed to the update_spatial_components function

thr: scalar between 0 and 1

correlation threshold for merging (default 0.85)

mx: int

maximum number of merging operations (default 1000)

sn_pix: nd.array

noise level for each pixel (vector of length d)

fast_merge: bool

if true perform rank 1 merging, otherwise takes best neuron

bl:

baseline for fluorescence trace for each row in C

c1:

initial concentration for each row in C

g:

discrete time constant for each row in C

sn:

noise level for each row in C

merge_parallel: bool

perform merging in parallel

max_merge_area: int

maximum area (in pixels) of merged components, used to determine whether to merge

Returns:
A: sparse matrix

matrix of merged spatial components (d x K)

C: np.ndarray

matrix of merged temporal components (K x T)

nr: int

number of components after merging

merged_ROIs: list

index of components that have been merged

S: np.ndarray

matrix of merged deconvolved activity (spikes) (K x T)

bl: float

baseline for fluorescence trace

c1: float

initial concentration

sn: float

noise level

g: float

discrete time constant

empty: list

indices of neurons that were removed, as they were merged with other neurons.

R: np.ndarray

residuals

Raises:

Exception “The number of elements of bl, c1, g, sn must match the number of components”

Utilities

caiman.source_extraction.cnmf.utilities.detrend_df_f(A, b, C, f, YrA=None, quantileMin=8, frames_window=500, flag_auto=True, use_fast=False, detrend_only=False)

Compute DF/F signal without using the original data. In general much faster than extract_DF_F

Args:
A: scipy.sparse.csc_matrix

spatial components (from cnmf cnm.A)

b: ndarray

spatial background components

C: ndarray

temporal components (from cnmf cnm.C)

f: ndarray

temporal background components

YrA: ndarray

residual signals

quantile_min: float

quantile used to estimate the baseline (values in [0,100]); used only if ‘flag_auto’ is False, i.e. ignored by default

frames_window: int

number of frames for computing running quantile

flag_auto: bool

flag for determining quantile automatically

use_fast: bool

flag for using approximate fast percentile filtering

detrend_only: bool (False)

flag for only subtracting baseline and not normalizing by it. Used in 1p data processing where baseline fluorescence cannot be determined.

Returns:
F_df:

the computed DF/F traces of the temporal components (only detrended, without normalization, if detrend_only is set)
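
Example of usage (a sketch; assumes a fitted CNMF object cnm):

from caiman.source_extraction.cnmf.utilities import detrend_df_f

F_dff = detrend_df_f(cnm.estimates.A, cnm.estimates.b, cnm.estimates.C,
                     cnm.estimates.f, YrA=cnm.estimates.YrA,
                     frames_window=250)  # running quantile baseline over 250 frames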

caiman.source_extraction.cnmf.utilities.update_order(A, new_a=None, prev_list=None, method='greedy')

Determines the update order of the temporal components given the spatial components by creating a nest of random approximate vertex covers

Args:
A: np.ndarray

matrix of spatial components (d x K)

new_a: sparse array

spatial component that is added, in order to efficiently update the orders in online scenarios

prev_list: list of list

orders from previous iteration; needs to be passed if new_a is not None

Returns:
O: list of sets

list of subsets of components. The components of each subset can be updated in parallel

lo: list

length of each subset

Written by Eftychios A. Pnevmatikakis, Simons Foundation, 2015

ROIs

caiman.base.rois.register_ROIs(A1, A2, dims, template1=None, template2=None, align_flag=True, D=None, max_thr=0, use_opt_flow=True, thresh_cost=0.7, max_dist=10, enclosed_thr=None, print_assignment=False, plot_results=False, Cn=None, cmap='viridis')

Register ROIs across different sessions using an intersection over union metric and the Hungarian algorithm for optimal matching

Args:
A1: ndarray or csc_matrix # pixels x # of components

ROIs from session 1

A2: ndarray or csc_matrix # pixels x # of components

ROIs from session 2

dims: list or tuple

dimensionality of the FOV

template1: ndarray dims

template from session 1

template2: ndarray dims

template from session 2

align_flag: bool

align the templates before matching

D: ndarray

matrix of distances in the event they are pre-computed

max_thr: scalar

max threshold parameter before binarization

use_opt_flow: bool

use dense optical flow to align templates

thresh_cost: scalar

maximum distance considered

max_dist: scalar

max distance between centroids

enclosed_thr: float

if not None set distance to at most the specified value when ground truth is a subset of inferred

print_assignment: bool

print pairs of matched ROIs

plot_results: bool

create a plot of matches and mismatches

Cn: ndarray

background image for plotting purposes

cmap: string

colormap for background image

Returns:
matched_ROIs1: list

indices of matched ROIs from session 1

matched_ROIs2: list

indices of matched ROIs from session 2

non_matched1: list

indices of non-matched ROIs from session 1

non_matched2: list

indices of non-matched ROIs from session 2

performance: list

(precision, recall, accuracy, f_1 score) with A1 taken as ground truth

A2: csc_matrix # pixels x # of components

ROIs from session 2 aligned to session 1

caiman.base.rois.register_multisession(A, dims, templates=[None], align_flag=True, max_thr=0, use_opt_flow=True, thresh_cost=0.7, max_dist=10, enclosed_thr=None)

Register ROIs across multiple sessions using an intersection over union metric and the Hungarian algorithm for optimal matching. Registration occurs by aligning session 1 to session 2, keeping the union of the matched and non-matched components to register with session 3 and so on.

Args:
A: list of ndarray or csc_matrix matrices # pixels x # of components

ROIs from each session

dims: list or tuple

dimensionality of the FOV

templates: list of ndarray matrices of size dims

templates from each session

align_flag: bool

align the templates before matching

max_thr: scalar

max threshold parameter before binarization

use_opt_flow: bool

use dense optical flow to align templates

thresh_cost: scalar

maximum distance considered

max_dist: scalar

max distance between centroids

enclosed_thr: float

if not None set distance to at most the specified value when ground truth is a subset of inferred

Returns:
A_union: csc_matrix # pixels x # of total distinct components

union of all kept ROIs

assignments: ndarray int of size # of total distinct components x # sessions

element [i,j] = k if component k from session j is mapped to component i in the A_union matrix. If there is no match the value is NaN

matchings: list of lists

matchings[i][j] = k means that component j from session i is represented by component k in A_union
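
Example of usage (a sketch; A_list and templates are placeholder lists holding the spatial components and summary images of each session):

from caiman.base.rois import register_multisession

A_union, assignments, matchings = register_multisession(
    A=A_list, dims=dims, templates=templates)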

caiman.base.rois.com(A: ndarray, d1: int, d2: int, d3: int | None = None) array

Calculation of the center of mass for spatial components

Args:
A: np.ndarray

matrix of spatial components (d x K)

d1: int

number of pixels in x-direction

d2: int

number of pixels in y-direction

d3: int

number of pixels in z-direction

Returns:
cm: np.ndarray

center of mass for spatial components (K x 2 or 3)

caiman.base.rois.extract_binary_masks_from_structural_channel(Y, min_area_size: int = 30, min_hole_size: int = 15, gSig: int = 5, expand_method: str = 'closing', selem: array = array([[1., 1., 1.], [1., 1., 1.], [1., 1., 1.]])) tuple[ndarray, array]

Extract binary masks by using adaptive thresholding on a structural channel

Args:
Y: caiman movie object

movie of the structural channel (assumed motion corrected)

min_area_size: int

ignore components with smaller size

min_hole_size: int

fill in holes up to that size (donuts)

gSig: int

average radius of cell

expand_method: string

method to expand binary masks (morphological closing or dilation)

selem: np.array

structuring element (‘selem’) with which to expand binary masks

Returns:
A: sparse column format matrix

matrix of binary masks to be used for CNMF seeding

mR: np.array

mean image used to detect cell boundaries

Memory mapping

caiman.mmapping.load_memmap(filename: str, mode: str = 'r') tuple[Any, tuple, int]

Load a memory mapped file created by the function save_memmap

Args:
filename: str

path of the file to be loaded

mode: str

One of ‘r’, ‘r+’, ‘w+’. How to interact with files

Returns:
Yr:

memory mapped variable

dims: tuple

frame dimensions

T: int

number of frames

Raises:

ValueError “Unknown file extension”

caiman.mmapping.save_memmap_join(mmap_fnames: list[str], base_name: str | None = None, n_chunks: int = 20, dview=None, add_to_mov=0) str

Makes a large file memmap from a number of smaller files

Args:

mmap_fnames: list of memory mapped files

base_name: string, will be the first portion of the name of the file to be created

n_chunks: number of chunks in which to subdivide when saving, smaller requires more memory

dview: cluster handle

add_to_mov: (undocumented)

caiman.mmapping.save_memmap(filenames: list[str], base_name: str = 'Yr', resize_fact: tuple = (1, 1, 1), remove_init: int = 0, idx_xy: tuple | None = None, order: str = 'F', var_name_hdf5: str = 'mov', xy_shifts: list | None = None, is_3D: bool = False, add_to_movie: float = 0, border_to_0=0, dview=None, n_chunks: int = 100, slices=None) str

Efficiently write data from a list of tif files into a memory mappable file

Args:
filenames: list

list of tif files or list of numpy arrays

base_name: str

the base used to build the file name. WARNING: Names containing underscores may collide with internal semantics.

resize_fact: tuple

x,y, and z downsampling factors (0.5 means downsampled by a factor 2)

remove_init: int

number of frames to remove at the beginning of each tif file (used for resonant scanning images if the laser is turned on trial by trial)

idx_xy: tuple size 2 [or 3 for 3D data]

for selecting slices of the original FOV, for instance idx_xy = (slice(150,350,None), slice(150,350,None))

order: string

whether to save the file in ‘C’ or ‘F’ order

xy_shifts: list

x and y shifts computed by a motion correction algorithm to be applied before memory mapping

is_3D: boolean

whether it is 3D data

add_to_movie: floating-point

value to add to each image point, typically to keep negative values out.

border_to_0: (undocumented)

dview: (undocumented)

n_chunks: (undocumented)

slices: slice object or list of slice objects

slice can be used to select portion of the movies in time and x,y directions. For instance slices = [slice(0,200),slice(0,100),slice(0,100)] will take the first 200 frames and the 100 pixels along x and y dimensions.

Returns:
fname_new: the name of the mapped file; the name will contain the frame dimensions and the number of frames
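
Example of usage (a sketch; fnames is a placeholder list of movie files):

import numpy as np
import caiman as cm

fname_new = cm.save_memmap(fnames, base_name='memmap_', order='C', border_to_0=0)
Yr, dims, T = cm.load_memmap(fname_new)
images = np.reshape(Yr.T, [T] + list(dims), order='F')  # recover the (t, x, y) movie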

Image statistics

caiman.summary_images.local_correlations(Y, eight_neighbours: bool = True, swap_dim: bool = True, order_mean=1) ndarray

Computes the correlation image for the input dataset Y

Args:
Y: np.ndarray (3D or 4D)

Input movie data in 3D or 4D format

eight_neighbours: Boolean

Use 8 neighbours if true, and 4 if false, for 3D data (default = True). Use 6 neighbours for 4D data, irrespectively.

swap_dim: Boolean

True indicates that time is listed in the last axis of Y (matlab format) and moves it in the front

order_mean: (undocumented)

Returns:

rho: d1 x d2 [x d3] matrix, cross-correlation with adjacent pixels

caiman.summary_images.max_correlation_image(Y, bin_size: int = 1000, eight_neighbours: bool = True, swap_dim: bool = True) ndarray

Computes the max-correlation image for the input dataset Y with bin_size

Args:
Y: np.ndarray (3D or 4D)

Input movie data in 3D or 4D format

bin_size: scalar (integer)

Length of each bin (if the last bin is smaller than bin_size/2, bin_size is increased to impose uniform bins)

eight_neighbours: Boolean

Use 8 neighbours if true, and 4 if false, for 3D data (default = True). Use 6 neighbours for 4D data, irrespectively.

swap_dim: Boolean

True indicates that time is listed in the last axis of Y (matlab format) and moves it in the front

Returns:
Cn: d1 x d2 [x d3] matrix,

max correlation image

caiman.summary_images.correlation_pnr(Y, gSig=None, center_psf: bool = True, swap_dim: bool = True, background_filter: str = 'disk') tuple[ndarray, ndarray]

compute the correlation image and the peak-to-noise ratio (PNR) image. If gSig is provided, the video is spatially filtered first.

Args:
Y: np.ndarray (3D or 4D).

Input movie data in 3D or 4D format

gSig: scalar or vector.

gaussian width. If gSig == None, no spatial filtering

center_psf: Boolean

True indicates subtracting the mean of the filtering kernel

swap_dim: Boolean

True indicates that time is listed in the last axis of Y (matlab format) and moves it in the front

background_filter: str

(undocumented)

Returns:
cn: np.ndarray (2D or 3D).

local correlation image of the spatially filtered (or not) data

pnr: np.ndarray (2D or 3D).

peak-to-noise ratios of all pixels/voxels
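
Example of usage (a sketch; images is the (t, x, y) movie array):

from caiman.summary_images import local_correlations, correlation_pnr

Cn = local_correlations(images, swap_dim=False)                   # correlation image
cn_filter, pnr = correlation_pnr(images, gSig=3, swap_dim=False)  # summary images for 1p data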

Parallel Processing functions

caiman.cluster.setup_cluster(backend: str = 'multiprocessing', n_processes: int | None = None, single_thread: bool = False, ignore_preexisting: bool = False, maxtasksperchild: int | None = None) tuple[Any, Any, int | None]

Setup and/or restart a parallel cluster.

Args:
backend:
One of:

‘multiprocessing’ - use the multiprocessing library

‘ipyparallel’ - use ipyparallel instead (better on Windows?)

‘single’ - don’t be parallel (good for debugging, slow)

Most backends will, by default, try to stop a pre-existing cluster before setting up a new one, or throw an error if they find one.

n_processes:

Sets the number of processes to use. If None, it is set automatically.

single_thread:

Deprecated alias for the ‘single’ backend.

ignore_preexisting:

If True, ignores the existence of an already running multiprocessing pool (which usually indicates a previously-started CaImAn cluster)

maxtasksperchild:

Only used for multiprocessing, default None (number of tasks a worker process can complete before it will exit and be replaced with a fresh worker process).

Returns:
c:

ipyparallel.Client object; only used for ipyparallel backends, else None

dview:

multicore processing engine that is used for parallel processing. If backend is ‘multiprocessing’ then dview is a Pool object. If backend is ‘ipyparallel’ then dview is a DirectView object.

n_processes:

number of workers in dview. None means single core mode in use.
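
A typical start/stop cycle (hedged sketch; cm.stop_server is the matching shutdown helper):

import caiman as cm

# Start a multiprocessing pool; n_processes=None picks a sensible default.
c, dview, n_processes = cm.cluster.setup_cluster(
    backend='multiprocessing', n_processes=None, single_thread=False)

# ... run motion correction / source extraction with dview ...

cm.stop_server(dview=dview)  # shut the pool down when done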

caiman.cluster.start_server(ipcluster: str = 'ipcluster', ncpus: int | None = None) None

programmatically start the ipyparallel server

Args:
ncpus

number of processors

ipcluster

ipcluster binary file name; on Windows each path separator must be written as four backslashes, e.g. ipcluster="C:\\\\Anaconda3\\\\Scripts\\\\ipcluster.exe". Default: "ipcluster"

caiman.cluster.stop_server(ipcluster: str = 'ipcluster', pdir: str | None = None, profile: str | None = None, dview=None) None

programmatically stops the ipyparallel server

Args:
ipcluster: str

ipcluster binary file name; on Windows each path separator must be written as four backslashes. Default: "ipcluster"

pdir: (undocumented)

profile: (undocumented)

dview: (undocumented)

Ring-CNN functions

class caiman.utils.nn_models.Masked_Conv2D(*args, **kwargs)

Creates a trainable ring convolutional kernel with non-zero entries between user-specified radius_min and radius_max. Uses a random uniform non-negative initializer unless specified otherwise.

Args:
output_dim: int, default: 1

number of output channels (number of kernels)

kernel_size: (int, int), default: (5, 5)

dimensions of the 2D bounding box

strides: (int, int), default: (1, 1)

stride for the convolution (increasing it will downsample the output)

radius_min: int, default: 2

inner radius of kernel

radius_max: int, default: 3

outer radius of kernel (typically: 2*radius_max - 1 = kernel_size[0])

initializer: ‘uniform’ or Keras initializer, default: ‘uniform’

initializer for ring weights. ‘uniform’ will choose from a non-negative random uniform distribution such that the expected value of the sum is 2.

use_bias: bool, default: True

add a bias term to each convolution kernel

Returns:
Masked_Conv2D: tensorflow.keras.layer

A trainable layer implementing the convolution with a ring
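
A hedged sketch of instantiating the layer on a random frame (assumes TensorFlow/Keras is installed; the input shape and values are illustrative):

import numpy as np
import tensorflow as tf
from caiman.utils.nn_models import Masked_Conv2D

ring = Masked_Conv2D(output_dim=1, kernel_size=(5, 5),
                     radius_min=2, radius_max=3)
x = tf.convert_to_tensor(np.random.rand(1, 64, 64, 1).astype(np.float32))
y = ring(x)  # convolution with a trainable ring-shaped kernel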

Attributes:
activity_regularizer

Optional regularizer function for the output of this layer.

compute_dtype

The dtype of the layer’s computations.

dtype

The dtype of the layer weights.

dtype_policy

The dtype policy associated with this layer.

dynamic

Whether the layer is dynamic (eager-only); set in the constructor.

inbound_nodes

Return Functional API nodes upstream of this layer.

input

Retrieves the input tensor(s) of a layer.

input_mask

Retrieves the input mask tensor(s) of a layer.

input_shape

Retrieves the input shape(s) of a layer.

input_spec

InputSpec instance(s) describing the input format for this layer.

losses

List of losses added using the add_loss() API.

metrics

List of metrics attached to the layer.

name

Name of the layer (string), set in the constructor.

name_scope

Returns a tf.name_scope instance for this class.

non_trainable_variables

Sequence of non-trainable variables owned by this module and its submodules.

non_trainable_weights

List of all non-trainable weights tracked by this layer.

outbound_nodes

Return Functional API nodes downstream of this layer.

output

Retrieves the output tensor(s) of a layer.

output_mask

Retrieves the output mask tensor(s) of a layer.

output_shape

Retrieves the output shape(s) of a layer.

stateful
submodules

Sequence of all sub-modules.

supports_masking

Whether this layer supports computing a mask using compute_mask.

trainable
trainable_variables

Sequence of trainable variables owned by this module and its submodules.

trainable_weights

List of all trainable weights tracked by this layer.

updates
variable_dtype

Alias of Layer.dtype, the dtype of the weights.

variables

Returns the list of all layer variables/weights.

weights

Returns the list of all layer variables/weights.

Methods

__call__(*args, **kwargs)

Wraps call, applying pre- and post-processing steps.

add_loss(losses, **kwargs)

Add loss tensor(s), potentially dependent on layer inputs.

add_metric(value[, name])

Adds metric tensor to the layer.

add_update(updates)

Add update op(s), potentially dependent on layer inputs.

add_variable(*args, **kwargs)

Deprecated, do NOT use! Alias for add_weight.

add_weight([name, shape, dtype, ...])

Adds a new variable to the layer.

build(input_shape)

Creates the variables of the layer (for subclass implementers).

build_from_config(config)

Builds the layer's states with the supplied config dict.

call(x)

This is where the layer's logic lives.

compute_mask(inputs[, mask])

Computes an output mask tensor.

compute_output_shape(input_shape)

Computes the output shape of the layer.

compute_output_signature(input_signature)

Compute the output tensor signature of the layer based on the inputs.

count_params()

Count the total number of scalars composing the weights.

finalize_state()

Finalizes the layers state after updating layer weights.

from_config(config)

Creates a layer from its config.

get_build_config()

Returns a dictionary with the layer's input shape.

get_config()

Returns the config of the layer.

get_input_at(node_index)

Retrieves the input tensor(s) of a layer at a given node.

get_input_mask_at(node_index)

Retrieves the input mask tensor(s) of a layer at a given node.

get_input_shape_at(node_index)

Retrieves the input shape(s) of a layer at a given node.

get_output_at(node_index)

Retrieves the output tensor(s) of a layer at a given node.

get_output_mask_at(node_index)

Retrieves the output mask tensor(s) of a layer at a given node.

get_output_shape_at(node_index)

Retrieves the output shape(s) of a layer at a given node.

get_weights()

Returns the current weights of the layer, as NumPy arrays.

load_own_variables(store)

Loads the state of the layer.

save_own_variables(store)

Saves the state of the layer.

set_weights(weights)

Sets the weights of the layer, from NumPy arrays.

with_name_scope(method)

Decorator to automatically enter the module name scope.

class caiman.utils.nn_models.Hadamard(*args, **kwargs)

Creates a tensorflow.keras multiplicative layer that performs pointwise multiplication with a set of learnable weights.

Args:

initializer: keras initializer, default: Constant(0.1)

Attributes:
activity_regularizer

Optional regularizer function for the output of this layer.

compute_dtype

The dtype of the layer’s computations.

dtype

The dtype of the layer weights.

dtype_policy

The dtype policy associated with this layer.

dynamic

Whether the layer is dynamic (eager-only); set in the constructor.

inbound_nodes

Return Functional API nodes upstream of this layer.

input

Retrieves the input tensor(s) of a layer.

input_mask

Retrieves the input mask tensor(s) of a layer.

input_shape

Retrieves the input shape(s) of a layer.

input_spec

InputSpec instance(s) describing the input format for this layer.

losses

List of losses added using the add_loss() API.

metrics

List of metrics attached to the layer.

name

Name of the layer (string), set in the constructor.

name_scope

Returns a tf.name_scope instance for this class.

non_trainable_variables

Sequence of non-trainable variables owned by this module and its submodules.

non_trainable_weights

List of all non-trainable weights tracked by this layer.

outbound_nodes

Return Functional API nodes downstream of this layer.

output

Retrieves the output tensor(s) of a layer.

output_mask

Retrieves the output mask tensor(s) of a layer.

output_shape

Retrieves the output shape(s) of a layer.

stateful
submodules

Sequence of all sub-modules.

supports_masking

Whether this layer supports computing a mask using compute_mask.

trainable
trainable_variables

Sequence of trainable variables owned by this module and its submodules.

trainable_weights

List of all trainable weights tracked by this layer.

updates
variable_dtype

Alias of Layer.dtype, the dtype of the weights.

variables

Returns the list of all layer variables/weights.

weights

Returns the list of all layer variables/weights.

Methods

__call__(*args, **kwargs)

Wraps call, applying pre- and post-processing steps.

add_loss(losses, **kwargs)

Add loss tensor(s), potentially dependent on layer inputs.

add_metric(value[, name])

Adds metric tensor to the layer.

add_update(updates)

Add update op(s), potentially dependent on layer inputs.

add_variable(*args, **kwargs)

Deprecated, do NOT use! Alias for add_weight.

add_weight([name, shape, dtype, ...])

Adds a new variable to the layer.

build(input_shape)

Creates the variables of the layer (for subclass implementers).

build_from_config(config)

Builds the layer's states with the supplied config dict.

call(x)

This is where the layer's logic lives.

compute_mask(inputs[, mask])

Computes an output mask tensor.

compute_output_shape(input_shape)

Computes the output shape of the layer.

compute_output_signature(input_signature)

Compute the output tensor signature of the layer based on the inputs.

count_params()

Count the total number of scalars composing the weights.

finalize_state()

Finalizes the layers state after updating layer weights.

from_config(config)

Creates a layer from its config.

get_build_config()

Returns a dictionary with the layer's input shape.

get_config()

Returns the config of the layer.

get_input_at(node_index)

Retrieves the input tensor(s) of a layer at a given node.

get_input_mask_at(node_index)

Retrieves the input mask tensor(s) of a layer at a given node.

get_input_shape_at(node_index)

Retrieves the input shape(s) of a layer at a given node.

get_output_at(node_index)

Retrieves the output tensor(s) of a layer at a given node.

get_output_mask_at(node_index)

Retrieves the output mask tensor(s) of a layer at a given node.

get_output_shape_at(node_index)

Retrieves the output shape(s) of a layer at a given node.

get_weights()

Returns the current weights of the layer, as NumPy arrays.

load_own_variables(store)

Loads the state of the layer.

save_own_variables(store)

Saves the state of the layer.

set_weights(weights)

Sets the weights of the layer, from NumPy arrays.

with_name_scope(method)

Decorator to automatically enter the module name scope.

class caiman.utils.nn_models.Additive(*args, **kwargs)

Creates a tensorflow.keras additive layer that performs pointwise addition with a set of learnable weights.

Args:

initializer: keras initializer, default: Constant(0)
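
A hedged sketch of how the two pointwise layers compose (assumes TensorFlow/Keras; the input is illustrative):

import numpy as np
import tensorflow as tf
from caiman.utils.nn_models import Additive, Hadamard

x = tf.convert_to_tensor(np.ones((1, 64, 64, 1), dtype=np.float32))
y = Additive()(Hadamard()(x))  # elementwise w * x + b with trainable w, b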

Attributes:
activity_regularizer

Optional regularizer function for the output of this layer.

compute_dtype

The dtype of the layer’s computations.

dtype

The dtype of the layer weights.

dtype_policy

The dtype policy associated with this layer.

dynamic

Whether the layer is dynamic (eager-only); set in the constructor.

inbound_nodes

Return Functional API nodes upstream of this layer.

input

Retrieves the input tensor(s) of a layer.

input_mask

Retrieves the input mask tensor(s) of a layer.

input_shape

Retrieves the input shape(s) of a layer.

input_spec

InputSpec instance(s) describing the input format for this layer.

losses

List of losses added using the add_loss() API.

metrics

List of metrics attached to the layer.

name

Name of the layer (string), set in the constructor.

name_scope

Returns a tf.name_scope instance for this class.

non_trainable_variables

Sequence of non-trainable variables owned by this module and its submodules.

non_trainable_weights

List of all non-trainable weights tracked by this layer.

outbound_nodes

Return Functional API nodes downstream of this layer.

output

Retrieves the output tensor(s) of a layer.

output_mask

Retrieves the output mask tensor(s) of a layer.

output_shape

Retrieves the output shape(s) of a layer.

stateful
submodules

Sequence of all sub-modules.

supports_masking

Whether this layer supports computing a mask using compute_mask.

trainable
trainable_variables

Sequence of trainable variables owned by this module and its submodules.

trainable_weights

List of all trainable weights tracked by this layer.

updates
variable_dtype

Alias of Layer.dtype, the dtype of the weights.

variables

Returns the list of all layer variables/weights.

weights

Returns the list of all layer variables/weights.

Methods

__call__(*args, **kwargs)

Wraps call, applying pre- and post-processing steps.

add_loss(losses, **kwargs)

Add loss tensor(s), potentially dependent on layer inputs.

add_metric(value[, name])

Adds metric tensor to the layer.

add_update(updates)

Add update op(s), potentially dependent on layer inputs.

add_variable(*args, **kwargs)

Deprecated, do NOT use! Alias for add_weight.

add_weight([name, shape, dtype, ...])

Adds a new variable to the layer.

build(input_shape)

Creates the variables of the layer (for subclass implementers).

build_from_config(config)

Builds the layer's states with the supplied config dict.

call(x)

This is where the layer's logic lives.

compute_mask(inputs[, mask])

Computes an output mask tensor.

compute_output_shape(input_shape)

Computes the output shape of the layer.

compute_output_signature(input_signature)

Compute the output tensor signature of the layer based on the inputs.

count_params()

Count the total number of scalars composing the weights.

finalize_state()

Finalizes the layers state after updating layer weights.

from_config(config)

Creates a layer from its config.

get_build_config()

Returns a dictionary with the layer's input shape.

get_config()

Returns the config of the layer.

get_input_at(node_index)

Retrieves the input tensor(s) of a layer at a given node.

get_input_mask_at(node_index)

Retrieves the input mask tensor(s) of a layer at a given node.

get_input_shape_at(node_index)

Retrieves the input shape(s) of a layer at a given node.

get_output_at(node_index)

Retrieves the output tensor(s) of a layer at a given node.

get_output_mask_at(node_index)

Retrieves the output mask tensor(s) of a layer at a given node.

get_output_shape_at(node_index)

Retrieves the output shape(s) of a layer at a given node.

get_weights()

Returns the current weights of the layer, as NumPy arrays.

load_own_variables(store)

Loads the state of the layer.

save_own_variables(store)

Saves the state of the layer.

set_weights(weights)

Sets the weights of the layer, from NumPy arrays.

with_name_scope(method)

Decorator to automatically enter the module name scope.

caiman.utils.nn_models.create_LN_model(Y=None, shape=(None, None, 1), n_channels=2, gSig=5, r_factor=1.5, use_add=True, initializer='uniform', lr=0.0001, pct=10, loss='mse', width=5, use_bias=False)

Creates a convolutional neural network with ring-shaped convolutions and multiplicative layers. The user needs to specify the radius of the average neuron through gSig and the number of channels. The other parameters can be modified or left at their default values. The inner and outer radius of the ring kernel will be int(gSig*r_factor) and int(gSig*r_factor) + width, respectively.

Args:
Y: np.array, default: None

dataset to be fit; used only if a percentile-based initializer is used for the additive layer, and can be left as None

shape: tuple, default: (None, None, 1)

dimensions of the FOV. Can be left at its default value

n_channels: int, default: 2

number of convolutional kernels

gSig: int, default: 5

radius of average neuron

r_factor: float, default: 1.5

expansion factor to determine inner radius

width: int, default: 5

width of ring kernel

use_add: bool, default: True

flag for using an additive layer

initializer: ‘uniform’ or Keras initializer, default: ‘uniform’

initializer for ring weights. ‘uniform’ will choose from a non-negative random uniform distribution such that the expected value of the sum is 2.

lr: float, default: 1e-4

(initial) learning rate

pct: float, default: 10

percentile used for initializing additive layer

loss: str or keras loss function

loss function used for training

use_bias: bool, default: False

add a bias term to each convolution kernel

Returns:

model_LN: tf.keras model, compiled and ready to be trained
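
A hedged sketch following the Ring-CNN demo pattern (the file name is a placeholder; Y is a movie with time on the first axis and the parameter values are illustrative):

import caiman as cm
from caiman.utils.nn_models import create_LN_model

Y = cm.load('demoMovie.tif')
model_LN = create_LN_model(Y, shape=Y.shape[1:] + (1,), n_channels=2,
                           gSig=5, r_factor=1.5, width=5)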

caiman.utils.nn_models.fit_NL_model(model_NL, Y, patience=5, val_split=0.2, batch_size=32, epochs=500, schedule=None)

Fit either the linear or the non-linear model. The model is fit for a user-specified maximum number of epochs, and early stopping is used based on the validation loss. A TensorBoard-compatible log is also created.

Args:
model_NL: Keras Ring-CNN model

see create_LN_model and create_NL_model above

patience: int, default: 5

patience value for early stopping criterion

val_split: float, default: 0.2

fraction of data to keep for validation (value between 0 and 1)

batch_size: int, default: 32

batch size during training

epochs: int, default: 500

maximum number of epochs

schedule: keras learning rate scheduler

Returns:
model_NL:

trained Keras Ring-CNN model loaded with the best weights according to validation loss

history_NL:

contains data related to the training history

path_to_model:

path to where the weights are stored
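
Fitting the model built in the create_LN_model sketch above (hedged; early stopping monitors the validation loss):

from caiman.utils.nn_models import fit_NL_model

# model_LN and Y as constructed in the create_LN_model sketch above.
model_LN, history, path_to_model = fit_NL_model(
    model_LN, Y, patience=5, epochs=500, batch_size=32)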

caiman.utils.nn_models.quantile_loss(qnt=0.5)

Returns a quantile loss function that can be used for training.

Args:
qnt: float, default: 0.5

desired quantile (0 < qnt < 1)

Returns:

my_qnt_loss: quantile loss function
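
The returned function can be passed as the loss argument of create_LN_model (hedged sketch):

from caiman.utils.nn_models import create_LN_model, quantile_loss

loss_fn = quantile_loss(0.5)  # median regression
model_LN = create_LN_model(shape=(None, None, 1), gSig=5, loss=loss_fn)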

VolPy

class caiman.source_extraction.volpy.volpy.VOLPY(n_processes, dview=None, template_size=0.02, context_size=35, censor_size=12, visualize_ROI=False, flip_signal=True, hp_freq_pb=0.3333333333333333, nPC_bg=8, ridge_bg=0.01, hp_freq=1, clip=100, threshold_method='adaptive_threshold', min_spikes=10, pnorm=0.5, threshold=3, sigmas=array([1., 1.5, 2.]), n_iter=2, weight_update='ridge', do_plot=False, do_cross_val=False, sub_freq=20, method='spikepursuit', superfactor=10, params=None)

Spike detection for voltage imaging. This is the general class used to find spikes in voltage imaging data. Its architecture is similar to that of scikit-learn: calling the function fit runs everything, and the output is recorded in self.estimates. In order to use VolPy within CaImAn, you must install Keras into your conda environment; you can do this by activating your environment and then issuing the command “conda install -c conda-forge keras”.

Methods

fit([n_processes, dview])

Run the volspike function to detect spikes and save the result into self.estimates

VOLPY.__init__(n_processes, dview=None, template_size=0.02, context_size=35, censor_size=12, visualize_ROI=False, flip_signal=True, hp_freq_pb=0.3333333333333333, nPC_bg=8, ridge_bg=0.01, hp_freq=1, clip=100, threshold_method='adaptive_threshold', min_spikes=10, pnorm=0.5, threshold=3, sigmas=array([1., 1.5, 2.]), n_iter=2, weight_update='ridge', do_plot=False, do_cross_val=False, sub_freq=20, method='spikepursuit', superfactor=10, params=None)
Args:
n_processes: int

number of processes used

dview: Direct View object

for parallelization purposes when using ipyparallel

template_size: float

half size of the window length for spike templates; the default 0.02 corresponds to 20 ms

context_size: int

number of pixels surrounding the ROI to use as context

censor_size: int

number of pixels surrounding the ROI to censor from the background PCA; roughly the spatial scale of scattered/dendritic neural signals, in pixels

flip_signal: boolean

whether to flip the signal upside down for spike detection (True for Voltron, False for others)

hp_freq_pb: float

high-pass frequency for removing photobleaching

nPC_bg: int

number of principal components used for background subtraction

ridge_bg: float

regularization strength for ridge regression in background removal

hp_freq: float

high-pass cutoff frequency to filter the signal after computing the trace

clip: int

maximum number of spikes for producing templates

threshold_method: str

‘adaptive_threshold’ or ‘simple’ method for thresholding signals. The adaptive_threshold method thresholds based on the estimated peak distribution; the simple method thresholds based on the estimated noise level.

min_spikes: int

minimal number of spikes to be detected

pnorm: float, between 0 and 1, default is 0.5

a parameter that determines the number of spikes selected by the adaptive threshold method

threshold: float

threshold for spike detection with the simple threshold method; the effective threshold is this value multiplied by the estimated noise level

sigmas: 1-d array

spatial smoothing radius imposed on high-pass filtered movie only for finding weights

n_iter: int

number of iterations alternating between estimating spike times and spatial filters

weight_update: str

ridge or NMF for weight update

do_plot: boolean

if True, plot the signal trace with spike times, the peak-triggered average, and the histogram of spike heights in the last iteration

do_cross_val: boolean

whether to use cross validation to optimize regression regularization parameters

sub_freq: float

frequency for subthreshold extraction

method: str

spikepursuit or atm method

superfactor: int

used in atm method for regression

VOLPY.fit(n_processes=None, dview=None)

Run the volspike function to detect spikes and save the result into self.estimates
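
A hedged end-to-end sketch following the VolPy demo (fname_new, ROIs, dview, and n_processes are assumed to exist from earlier steps; parameter values are illustrative):

from caiman.source_extraction.volpy.volparams import volparams
from caiman.source_extraction.volpy.volpy import VOLPY

opts = volparams(params_dict={
    'fnames': fname_new,                  # C-order memmap of the movie
    'fr': 400,                            # frame rate (Hz)
    'index': list(range(ROIs.shape[0])),
    'ROIs': ROIs,                         # binary masks, one per candidate neuron
    'weights': None,                      # no initialization from previous blocks
    'method': 'spikepursuit',
})
vpy = VOLPY(n_processes=n_processes, dview=dview, params=opts)
vpy.fit(n_processes=n_processes, dview=dview)
spikes = vpy.estimates['spikes']          # per-cell spike times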

class caiman.source_extraction.volpy.volparams.volparams(fnames=None, fr=None, index=None, ROIs=None, weights=None, template_size=0.02, context_size=35, censor_size=12, visualize_ROI=False, flip_signal=True, hp_freq_pb=0.3333333333333333, nPC_bg=8, ridge_bg=0.01, hp_freq=1, clip=100, threshold_method='adaptive_threshold', min_spikes=10, pnorm=0.5, threshold=3, sigmas=array([1., 1.5, 2.]), n_iter=2, weight_update='ridge', do_plot=False, do_cross_val=False, sub_freq=20, method='spikepursuit', superfactor=10, params_dict={})

Methods

get(group, key)

Get a value for a given group and key.

get_group(group)

Get the dictionary of key-value pairs for a group.

set(group, val_dict[, set_if_not_exists, ...])

Add key-value pairs to a group. Existing key-value pairs will be overwritten

change_params

volparams.__init__(fnames=None, fr=None, index=None, ROIs=None, weights=None, template_size=0.02, context_size=35, censor_size=12, visualize_ROI=False, flip_signal=True, hp_freq_pb=0.3333333333333333, nPC_bg=8, ridge_bg=0.01, hp_freq=1, clip=100, threshold_method='adaptive_threshold', min_spikes=10, pnorm=0.5, threshold=3, sigmas=array([1., 1.5, 2.]), n_iter=2, weight_update='ridge', do_plot=False, do_cross_val=False, sub_freq=20, method='spikepursuit', superfactor=10, params_dict={})

Class for setting parameters for voltage imaging, including parameters for the data, motion correction, and spike detection. The preferred way to set parameters is by using the set function, where a subclass is determined and a dictionary is passed. The whole dictionary can also be initialized at once by passing a dictionary params_dict when initializing the volparams object.

volparams.set(group, val_dict, set_if_not_exists=False, verbose=False)

Add key-value pairs to a group. Existing key-value pairs will be overwritten if specified in val_dict, but not deleted.

Args:

group: The name of the group.

val_dict: A dictionary with key-value pairs to be set for the group.

set_if_not_exists: Whether to set a key-value pair in a group if the key does not currently exist in the group.
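
For instance (hedged sketch; ‘volspike’ is assumed to be the relevant parameter group on an existing volparams object opts):

opts.set('volspike', {'threshold_method': 'adaptive_threshold',
                      'min_spikes': 10})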

volparams.get(group, key)

Get a value for a given group and key. Raises an exception if no such group/key combination exists.

Args:

group: The name of the group.

key: The key for the property in the group of interest.

Returns: The value for the group/key combination.

volparams.get_group(group)

Get the dictionary of key-value pairs for a group.

Args:

group: The name of the group.

volparams.change_params(params_dict, verbose=False)

caiman.source_extraction.volpy.spikepursuit.volspike(pars)

Function for finding the spikes of a single neuron with a given ROI in voltage imaging. It uses the function denoise_spikes to find spikes in the one-dimensional signal and ridge regression to find the best spatial weights, alternating between the two steps iteratively to find the best spike times.

Args:
pars: list
fnames: str

name of the memory mapping file in C order

fr: int

frame rate of the movie

cell_n: int

index of the cell being processed

ROIs: 3-d array

all regions of interest

weights: 3-d array

spatial weights of different cells generated by previous data blocks as initialization

args: dictionary
template_size: float

half size of the window length for spike templates, default is 20 ms

context_size: int

number of pixels surrounding the ROI to use as context

censor_size: int

number of pixels surrounding the ROI to censor from the background PCA; roughly the spatial scale of scattered/dendritic neural signals, in pixels

visualize_ROI: boolean

whether to visualize the region of interest inside the context region

flip_signal: boolean

whether to flip the signal upside down for spike detection (True for Voltron, False for others)

hp_freq_pb: float

high-pass frequency for removing photobleaching

nPC_bg: int

number of principal components used for background subtraction

ridge_bg: float

regularization strength for ridge regression in background removal

hp_freq: float

high-pass cutoff frequency to filter the signal after computing the trace

clip: int

maximum number of spikes for producing templates

threshold_method: str

‘adaptive_threshold’ or ‘simple’ method for thresholding signals. The adaptive_threshold method thresholds based on the estimated peak distribution; the simple method thresholds based on the estimated noise level.

min_spikes: int

minimal number of spikes to be detected

pnorm: float

a parameter that determines the number of spikes selected by the adaptive threshold method

threshold: float

threshold for spike detection with the simple threshold method; the effective threshold is this value multiplied by the estimated noise level

sigmas: 1-d array

spatial smoothing radius imposed on high-pass filtered movie only for finding weights

n_iter: int

number of iterations alternating between estimating spike times and spatial filters

weight_update: str

ridge or NMF for weight update

do_plot: boolean

if True, plot the signal trace with spike times, the peak-triggered average, and the histogram of spike heights in the last iteration

do_cross_val: boolean

whether to use cross validation to optimize regression regularization parameters

sub_freq: float

frequency for subthreshold extraction

Returns:
output: dictionary
cell_n: int

index of cell

t: 1-d array

trace without applying whitened matched filter

ts: 1-d array

trace after applying whitened matched filter

t_rec: 1-d array

reconstructed signal of the neuron

t_sub: 1-d array

subthreshold signal of the neuron

spikes: 1-d array

spike time of the neuron

num_spikes: list

number of spikes detected in each iteration

low_spikes: boolean

True if the number of detected spikes is less than min_spikes

template: 1-d array

temporal template of the neuron

snr: float

signal to noise ratio of the processed signal

thresh: float

threshold of the signal

weights: 2-d array

ridge regression coefficients for fitting reconstructed signal

locality: boolean

False if the maximum of spatial filter is not in the initial ROI

context_coord: 2-d array

boundary of context region in x,y coordinates

mean_im: 1-d array

mean of the signal in ROI after removing photobleaching, used for producing F0

F0: 1-d array

baseline signal

dFF: 1-d array

scaled signal

rawROI: dictionary

including the result after the first spike extraction
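
As a hedged sketch, pars appears to pack the arguments in the order documented above (the ordering is inferred from the Args list, not confirmed here; fname_new and ROIs are assumed to exist from earlier steps, and the values are illustrative):

from caiman.source_extraction.volpy.spikepursuit import volspike

args = {'template_size': 0.02, 'context_size': 35, 'censor_size': 12,
        'visualize_ROI': False, 'flip_signal': True, 'hp_freq_pb': 1/3,
        'nPC_bg': 8, 'ridge_bg': 0.01, 'hp_freq': 1, 'clip': 100,
        'threshold_method': 'adaptive_threshold', 'min_spikes': 10,
        'pnorm': 0.5, 'threshold': 3, 'sigmas': [1., 1.5, 2.],
        'n_iter': 2, 'weight_update': 'ridge', 'do_plot': False,
        'do_cross_val': False, 'sub_freq': 20}
pars = [fname_new, 400, 0, ROIs, None, args]  # fnames, fr, cell_n, ROIs, weights, args
output = volspike(pars)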