The library infrastructure » Frameworks module

Functions that form the basis of most pixel-based processing in DIPlib.

Contents

The various frameworks implement iterating over image pixels, giving access to a single pixel, a whole image line, or a pixel's neighborhood. The programmer needs to define only a function that loops over one dimension; the framework calls this function repeatedly to process all of the image's lines, freeing the programmer from implementing loops over multiple dimensions. This allows most of DIPlib's filters to be dimensionality-independent with little effort. See Frameworks.

There are three frameworks, each representing a different type of image processing function:

dip::Framework::Scan
For pixel-based processing of images, where no neighborhood information is needed.
dip::Framework::Separable
For separable filtering of images, applied line by line along each processed dimension.
dip::Framework::Full
For filtering of images with an arbitrary-shape neighborhood.

Classes

struct dip::Framework::ScanBuffer
Structure that holds information about input or output pixel buffers for the dip::Framework::Scan callback function object.
struct dip::Framework::ScanLineFilterParameters
Parameters to the line filter for dip::Framework::Scan.
class dip::Framework::ScanLineFilter
Prototype line filter for dip::Framework::Scan.
template<dip::uint N, typename TPI, typename F>
class dip::Framework::VariadicScanLineFilter
Implements dip::Framework::ScanLineFilter for a point operation with N input images and one output image; see the NewMonadicScanLineFilter and related helper functions.
struct dip::Framework::SeparableBuffer
Structure that holds information about input or output pixel buffers for the dip::Framework::Separable callback function object.
struct dip::Framework::SeparableLineFilterParameters
Parameters to the line filter for dip::Framework::Separable.
class dip::Framework::SeparableLineFilter
Prototype line filter for dip::Framework::Separable.
struct dip::Framework::FullBuffer
Structure that holds information about input or output pixel buffers for the dip::Framework::Full callback function object.
struct dip::Framework::FullLineFilterParameters
Parameters to the line filter for dip::Framework::Full.
class dip::Framework::FullLineFilter
Prototype line filter for dip::Framework::Full.
class dip::Framework::ScanOptions
Defines options to the dip::Framework::Scan function.
class dip::Framework::SeparableOptions
Defines options to the dip::Framework::Separable function.
class dip::Framework::FullOptions
Defines options to the dip::Framework::Full function.

Functions

void SingletonExpandedSize(UnsignedArray& size1, UnsignedArray const& size2)
Determines the singleton-expanded size as a combination of the two sizes.
auto SingletonExpandedSize(ImageConstRefArray const& in) -> UnsignedArray
Determines if images can be singleton-expanded to the same size, and what that size would be.
auto SingletonExpandedSize(ImageArray const& in) -> UnsignedArray
Determines if images can be singleton-expanded to the same size, and what that size would be.
auto SingletonExpendedTensorElements(ImageArray const& in) -> dip::uint
Determines if tensors in images can be singleton-expanded to the same size, and what that size would be.
auto OptimalProcessingDim(Image const& in) -> dip::uint
Determines the best processing dimension, which is the one with the smallest stride, except if that dimension is very small and there's a longer dimension.
auto OptimalProcessingDim(Image const& in, UnsignedArray const& kernelSizes) -> dip::uint
Determines the best processing dimension as above, but gives preference to a dimension along which kernelSizes is also large.
void Scan(ImageConstRefArray const& in, ImageRefArray& out, DataTypeArray const& inBufferTypes, DataTypeArray const& outBufferTypes, DataTypeArray const& outImageTypes, UnsignedArray const& nTensorElements, ScanLineFilter& lineFilter, ScanOptions opts = {})
Framework for pixel-based processing of images.
void ScanSingleOutput(Image& out, DataType bufferType, ScanLineFilter& lineFilter, ScanOptions opts = {})
Calls dip::Framework::Scan with one output image, which is already forged.
void ScanSingleInput(Image const& in, Image const& c_mask, DataType bufferType, ScanLineFilter& lineFilter, ScanOptions opts = {})
Calls dip::Framework::Scan with one input image and a mask image.
void ScanMonadic(Image const& in, Image& out, DataType bufferTypes, DataType outImageType, dip::uint nTensorElements, ScanLineFilter& lineFilter, ScanOptions opts = {})
Calls dip::Framework::Scan with one input image and one output image.
void ScanDyadic(Image const& in1, Image const& in2, Image& out, DataType inType, DataType outType, ScanLineFilter& lineFilter, ScanOptions opts = {})
Calls dip::Framework::Scan with two input images and one output image.
template<typename TPI, typename F>
auto NewMonadicScanLineFilter(F const& func, dip::uint cost = 1) -> std::unique_ptr<ScanLineFilter>
Support for quickly defining monadic operators (1 input image, 1 output image). See dip::Framework::VariadicScanLineFilter.
template<typename TPI, typename F>
auto NewDyadicScanLineFilter(F const& func, dip::uint cost = 1) -> std::unique_ptr<ScanLineFilter>
Support for quickly defining dyadic operators (2 input images, 1 output image). See dip::Framework::VariadicScanLineFilter.
template<typename TPI, typename F>
auto NewTriadicScanLineFilter(F const& func, dip::uint cost = 1) -> std::unique_ptr<ScanLineFilter>
Support for quickly defining triadic operators (3 input images, 1 output image). See dip::Framework::VariadicScanLineFilter.
template<typename TPI, typename F>
auto NewTetradicScanLineFilter(F const& func, dip::uint cost = 1) -> std::unique_ptr<ScanLineFilter>
Support for quickly defining tetradic operators (4 input images, 1 output image). See dip::Framework::VariadicScanLineFilter.
void Separable(Image const& in, Image& out, DataType bufferType, DataType outImageType, BooleanArray process, UnsignedArray border, BoundaryConditionArray boundaryConditions, SeparableLineFilter& lineFilter, SeparableOptions opts = {})
Framework for separable filtering of images.
void Full(Image const& in, Image& out, DataType inBufferType, DataType outBufferType, DataType outImageType, dip::uint nTensorElements, BoundaryConditionArray const& boundaryConditions, Kernel const& kernel, FullLineFilter& lineFilter, FullOptions opts = {})
Framework for filtering of images with an arbitrary shape neighborhood.

Function documentation

void SingletonExpandedSize(UnsignedArray& size1, UnsignedArray const& size2)

Determines the singleton-expanded size as a combination of the two sizes.

Singleton dimensions (size==1) can be expanded to match another image's size. This function can be used to check whether such expansion is possible, and what the resulting sizes would be; size1 is adjusted to hold the result. An exception is thrown if singleton expansion is not possible.
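
For illustration, a minimal usage sketch of this two-argument overload (only the arrays and the call itself come from the declaration above; the concrete sizes are made up):

    dip::UnsignedArray size1{ 256, 1, 3 };
    dip::UnsignedArray const size2{ 1, 256, 3 };
    dip::Framework::SingletonExpandedSize( size1, size2 );
    // size1 is now { 256, 256, 3 }. Incompatible sizes, e.g. { 256, 100, 3 } vs
    // { 1, 256, 3 }, would have caused an exception to be thrown instead.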

UnsignedArray SingletonExpandedSize(ImageConstRefArray const& in)

Determines if images can be singleton-expanded to the same size, and what that size would be.

Singleton dimensions (size==1) can be expanded to a larger size by setting their stride to 0. This change can be performed without modifying the data segment. If image dimensions differ such that singleton expansion cannot make them all the same size, an exception is thrown. Use dip::Image::ExpandSingletonDimensions to apply the transform to one image.

UnsignedArray SingletonExpandedSize(ImageArray const& in)

Determines if images can be singleton-expanded to the same size, and what that size would be.

Singleton dimensions (size==1) can be expanded to a larger size by setting their stride to 0. This change can be performed without modifying the data segment. If image dimensions differ such that singleton expansion cannot make them all the same size, an exception is thrown. Use dip::Image::ExpandSingletonDimensions to apply the transform to one image.

dip::uint SingletonExpendedTensorElements(ImageArray const& in)

Determines if tensors in images can be singleton-expanded to the same size, and what that size would be.

The tensors must all be of the same size, or of size 1. Tensors with size 1 are singletons, and can be expanded to the size of the others by setting their stride to 0. This change can be performed without modifying the data segment. If singleton expansion cannot make them all the same size, an exception is thrown. Use dip::Image::ExpandSingletonTensor to apply the transform to one image.

void Scan(ImageConstRefArray const& in, ImageRefArray& out, DataTypeArray const& inBufferTypes, DataTypeArray const& outBufferTypes, DataTypeArray const& outImageTypes, UnsignedArray const& nTensorElements, ScanLineFilter& lineFilter, ScanOptions opts = {})

Framework for pixel-based processing of images.

The function object lineFilter is called for each image line, with input and output buffers either pointing directly to the input and output images, or pointing to temporary buffers that are handled by the framework and spare lineFilter from having to deal with too many different data types. The buffers are always of the types specified in inBufferTypes and outBufferTypes, but are passed as void*. lineFilter should cast these pointers to the right types. Output buffers are not initialized; lineFilter is responsible for setting all their values.

Output images (unless protected) will be resized to match the (singleton-expanded) input, and their types will be set to those specified by outImageTypes. Protected output images must have the correct size and type, otherwise an exception will be thrown. The scan function can be called without input images. In this case, at least one output image must be given. The dimensions of the first output image will be used to direct the scanning, and the remaining output images (if any) will be adjusted to the same size. It is also possible to give no output images, as would be the case for a reduction operation such as computing the average pixel value. However, it makes no sense to call the scan function with neither input nor output images.

Tensors are passed to lineFilter as vectors; if the shape is important, store this information in lineFilter. nTensorElements gives the number of tensor elements for each output image. These are created as standard vectors. The calling function can reshape the tensors after the call to dip::Framework::Scan. It is neither necessary nor enforced that the tensors for each image (both input and output) are the same; the calling function must make sure the tensors satisfy whatever constraints apply.

However, if the option dip::Framework::ScanOption::TensorAsSpatialDim is given, then the tensor is cast to a spatial dimension, and singleton expansion is applied. Thus, lineFilter does not need to check inTensorLength or outTensorLength (they will be 1), and the output tensor size is guaranteed to match the largest input tensor. nTensorElements is ignored. Even with a single input image, where no singleton expansion can happen, it is beneficial to use the dip::Framework::ScanOption::TensorAsSpatialDim option, as lineFilter can be simpler and faster. Additionally, the output tensor shape is identical to the input image's. In case of multiple inputs, the first input image that has as many tensor elements as the (singleton-expanded) output will model the output tensor shape.

If the option dip::Framework::ScanOption::ExpandTensorInBuffer is given, then the input buffers passed to lineFilter will contain the tensor elements as a standard, column-major matrix. If the image has tensors stored differently, buffers will be used. This option is not used when dip::Framework::ScanOption::TensorAsSpatialDim is set, as that forces the tensor to be a single sample. Use this option if you need to do computations with the tensors, but do not want to bother with all the different tensor shapes, which are meant only to save memory. Note, however, that this option does not apply to the output images. When expanding the input tensors in this way, it makes sense to set the output tensor to a full matrix. Don't forget to specify the right size in nTensorElements.
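
As a sketch of the buffer layout this option produces, the fragment below reads one element of a 3x3 tensor from an expanded input buffer. Here params stands for the dip::Framework::ScanLineFilterParameters object passed to the line filter, and the member names used (inBuffer, buffer, tensorStride) are those of the buffer structures listed above; treat the details as an assumption to verify against the headers.

    // Element ( row, col ) of the column-major 3x3 matrix at the current pixel:
    dip::sfloat const* in = static_cast< dip::sfloat const* >( params.inBuffer[ 0 ].buffer );
    dip::sint tensorStride = params.inBuffer[ 0 ].tensorStride;
    dip::uint row = 1, col = 2;
    dip::sfloat value = in[ static_cast< dip::sint >( col * 3 + row ) * tensorStride ];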

The framework function sets the output pixel size to that of the first input image with a defined pixel size, and it sets the color space to that of the first input image with matching number of tensor elements. The calling function is expected to "correct" these values if necessary.

The buffers are not guaranteed to be contiguous, so use the stride and tensorStride values to access samples. All buffers contain bufferLength pixels. position gives the coordinates for the first pixel in the buffers; subsequent pixels occur along dimension dimension. position[dimension] is not necessarily zero. However, when dip::Framework::ScanOption::NeedCoordinates is not given, dimension and position are meaningless. The framework is allowed to treat all pixels in the image as a single image line in this case.

If in and out share an image, then it is possible that the corresponding input and output buffers point to the same memory. The input image will be overwritten with the processing result. That is, all processing can be performed in place. The scan framework is intended for pixel-wise processing, not neighborhood-based processing, so there is never a reason not to work in place. However, some types of tensor processing might want to write to the output without invalidating the input for that same pixel. In this case, give the option dip::Framework::ScanOption::NotInPlace. It will make sure that the output buffers given to the line filter do not alias the input buffers.

dip::Framework::Scan will process the image using multiple threads, so lineFilter will be called from multiple threads simultaneously. If it is not thread safe, specify dip::Framework::ScanOption::NoMultiThreading as an option. The SetNumberOfThreads method of lineFilter will be called once before the processing starts, when dip::Framework::Scan has determined how many threads will be used in the scan, even if dip::Framework::ScanOption::NoMultiThreading was specified.
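
As a concrete (hedged) example, the sketch below implements a line filter that negates each sample of an image and applies it through dip::Framework::ScanMonadic. It assumes that the callback is the Filter method of dip::Framework::ScanLineFilter and that the parameter and buffer structures expose the members referenced in the text above (inBuffer, outBuffer, buffer, stride, bufferLength); if the prototype class declares further pure virtual members, those must be overridden as well.

    #include "diplib.h"
    #include "diplib/framework.h"

    class NegateLineFilter : public dip::Framework::ScanLineFilter {
       public:
          void Filter( dip::Framework::ScanLineFilterParameters const& params ) override {
             dip::sfloat const* in = static_cast< dip::sfloat const* >( params.inBuffer[ 0 ].buffer );
             dip::sfloat* out = static_cast< dip::sfloat* >( params.outBuffer[ 0 ].buffer );
             // Buffers are not necessarily contiguous: honor the strides.
             for( dip::uint ii = 0; ii < params.bufferLength; ++ii ) {
                *out = -( *in );
                in += params.inBuffer[ 0 ].stride;
                out += params.outBuffer[ 0 ].stride;
             }
          }
    };

    void Negate( dip::Image const& in, dip::Image& out ) {
       NegateLineFilter lineFilter;
       // Buffers and output image use DT_SFLOAT; the tensor dimension is handled as a
       // spatial dimension, so the line filter only ever sees scalar samples.
       dip::Framework::ScanMonadic( in, out, dip::DT_SFLOAT, dip::DT_SFLOAT,
                                    in.TensorElements(), lineFilter,
                                    dip::Framework::ScanOption::TensorAsSpatialDim );
    }

For a simple point operation like this one, the NewMonadicScanLineFilter helper listed above can generate the line filter from a lambda instead of a hand-written class.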

void ScanSingleInput(Image const& in, Image const& c_mask, DataType bufferType, ScanLineFilter& lineFilter, ScanOptions opts = {})

Calls dip::Framework::Scan with one input image and a mask image.

If mask is forged, it is expected to be a scalar image of type dip::DT_BIN, and of size compatible with in. mask is singleton-expanded to the size of in, but not the other way around. Its pointer will be passed to lineFilter directly, without copies to change its data type. Thus, inBuffer[ 1 ].buffer is of type bin*, not of type bufferType.
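
A hedged sketch of the inner loop of a line filter used with ScanSingleInput follows, assuming mask is forged. The member names come from the buffer structures above, and the mask buffer is read as bin*, as just described.

    dip::sfloat const* in = static_cast< dip::sfloat const* >( params.inBuffer[ 0 ].buffer );
    dip::bin const* mask = static_cast< dip::bin const* >( params.inBuffer[ 1 ].buffer );
    for( dip::uint ii = 0; ii < params.bufferLength; ++ii ) {
       if( *mask ) {
          // ... use *in, e.g. accumulate a statistic over the selected pixels ...
       }
       in += params.inBuffer[ 0 ].stride;
       mask += params.inBuffer[ 1 ].stride;
    }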

void ScanDyadic(Image const& in1, Image const& in2, Image& out, DataType inType, DataType outType, ScanLineFilter& lineFilter, ScanOptions opts = {})

Calls dip::Framework::Scan with two input images and one output image.

It handles some of the work for dyadic (binary) operators related to matching up the tensor dimensions of the two input images.

Input tensors are expected to match, but a scalar is expanded to the size of the other tensor. The output tensor will be of the same size as the input tensors; its shape will match the input shape if one image is a scalar, or if both images have matching tensor shapes. Otherwise the output tensor will be a column-major matrix (or vector or scalar, as appropriate).

This function adds dip::Framework::ScanOption::TensorAsSpatialDim or dip::Framework::ScanOption::ExpandTensorInBuffer to opts, so don't set these values. This means that the tensors passed to lineFilter are either all scalars (the tensor can be converted to a spatial dimension) or full, column-major tensors of equal size. Do not specify dip::Framework::ScanOption::NoSingletonExpansion in opts.
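
For a simple dyadic point operation, the helpers listed above can be combined with ScanDyadic, as in this sketch. The exact signature of the callable consumed by dip::Framework::VariadicScanLineFilter is assumed here to take an array of input sample iterators and return the output sample value; verify it against that class's documentation.

    #include "diplib.h"
    #include "diplib/framework.h"

    // Computes out = in1 + in2 on sfloat buffers (a sketch only).
    void Add( dip::Image const& in1, dip::Image const& in2, dip::Image& out ) {
       auto lineFilter = dip::Framework::NewDyadicScanLineFilter< dip::sfloat >(
             []( auto its ) { return *its[ 0 ] + *its[ 1 ]; } );
       dip::Framework::ScanDyadic( in1, in2, out, dip::DT_SFLOAT, dip::DT_SFLOAT, *lineFilter );
    }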

void Separable(Image const& in, Image& out, DataType bufferType, DataType outImageType, BooleanArray process, UnsignedArray border, BoundaryConditionArray boundaryConditions, SeparableLineFilter& lineFilter, SeparableOptions opts = {})

Framework for separable filtering of images.

The function object lineFilter is called for each image line, and along each dimension, with input and output buffers either pointing directly to the input and output images, or pointing to temporary buffers that are handled by the framework and present the line's pixel data with a different data type, with expanded borders, etc. The buffers are always of the type specified by bufferType, but are passed as void*. lineFilter should cast these pointers to the right types. The output buffer is not initialized; lineFilter is responsible for setting all its values.

The process array specifies along which dimensions the filtering is applied. If it is an empty array, all dimensions will be processed. Otherwise, it must have one element per image dimension.

The output image (unless protected) will be resized to match the input, and its type will be set to that specified by outImageType. A protected output image must have the correct size and type, otherwise an exception will be thrown. The separable filter always has one input and one output image.

If the option dip::Framework::SeparableOption::DontResizeOutput is given, then the sizes of the output image will be kept (but it could still be reforged to change the data type). In this case, the length of the input and output buffers can differ, causing the intermediate result image to change size one dimension at a time, as each dimension is processed. For example, if the input image is of size 256x256, and the output is 1x1, then in a first step 256 lines are processed, each with 256 pixels as input and a single pixel as output. In a second step, a single line of 256 pixels is processed, yielding the final single-pixel result. In the same case, but with an output of 64x512, 256 lines are processed, each with 256 pixels as input and 64 pixels as output. In the second step, 64 lines are processed, each with 256 pixels as input and 512 pixels as output. This option is useful for functions that scale and do other geometric transformations, as well as functions that compute projections.

Tensors are passed to lineFilter as vectors; if the shape is important, store this information in lineFilter. The output image will have the same tensor shape as the input, except if the option dip::Framework::SeparableOption::ExpandTensorInBuffer is given. In this case, the input buffers passed to lineFilter will contain the tensor elements as a standard, column-major matrix, and the output image will be a full matrix of that size. If the input image has tensors stored differently, buffers will be used when processing the first dimension; for subsequent dimensions, the intermediate result will already contain the full matrix. Use this option if you need to do computations with the tensors, but do not want to bother with all the different tensor shapes, which are meant only to save memory.

However, if the option dip::Framework::SeparableOption::AsScalarImage is given, then the line filter is called for each tensor element, effectively causing the filter to process a sequence of scalar images, one for each tensor element. This is accomplished by converting the tensor into a spatial dimension for both the input and output image, and setting the process array for the new dimension to false. For example, given an input image in with 3 tensor elements, filter(in,out) will result in an output image out with 3 tensor elements, computed as if filter were called 3 times: filter(in[0],out[0]), filter(in[1],out[1]), and filter(in[2],out[2]).

The framework function sets the output tensor size to that of the input image, and it sets the color space to that of the input image if the two images have a matching number of tensor elements (these can differ if dip::Framework::SeparableOption::ExpandTensorInBuffer is given). The calling function is expected to "correct" these values if necessary. Note the difference here with the Scan and Full frameworks: it is not possible to apply a separable filter to a tensor image and obtain an output with a different tensor representation (because the question arises: in which pass over the image would this change occur?).

The buffers are not guaranteed to be contiguous, so use the stride and tensorStride values to access samples. The dip::Framework::SeparableOption::UseInputBuffer and dip::Framework::SeparableOption::UseOutputBuffer options force the use of temporary buffers to store each image line. These temporary buffers always have contiguous samples, with the tensor stride equal to 1 and the spatial stride equal to the number of tensor elements. That is, the tensor elements for each pixel are contiguous, and the pixels are contiguous. This is useful when calling external code to process the buffers, if that external code expects the input data to be contiguous. If the input has a stride of 0 in the dimension being processed (this happens when expanding singleton dimensions), it means that a single pixel is repeated across the whole line. This property is preserved in the buffer. Thus, even when these two flags are used, you need to check the stride value and deal with the singleton dimension appropriately.

The input buffer contains bufferLength + 2 * border pixels. The pixel pointed to by the buffer pointer is the first pixel on that line in the input image. The lineFilter function object can read up to border pixels before that pixel, and up to border pixels after the last pixel on the line. These pixels are filled by the framework using the boundaryConditions value for the given dimension. The boundaryConditions array can be empty, in which case the default boundary condition value is used. If the option dip::Framework::SeparableOption::UseOutputBorder is given, then the output buffer also has border extra samples at each end. These extra samples are meant to help in the computation for some filters, and are not copied back to the output image. position gives the coordinates for the first pixel in the buffers; subsequent pixels occur along dimension dimension. position[dimension] is always zero.

If in and out share their data segments, then the input image might be overwritten with the processing result. However, the input and output buffers will never share memory. That is, the line filter can freely write in the output buffer without invalidating the input buffer, even when the filter is being applied in place. With the dip::Framework::SeparableOption::UseInputBuffer option, the input buffer never points to the input image; the input data are always copied to a temporary buffer. This allows lineFilter to modify the input, which is useful, for example, for computing the median of the input data by sorting.

If in and out share their data segments (e.g. they are the same image), then the filtering operation can be applied completely in place, without any temporary images. For this to be possible, outImageType, bufferType and the input image data type must all be the same.

dip::Framework::Separable will process the image using multiple threads, so lineFilter will be called from multiple threads simultaneously. If it is not thread safe, specify dip::Framework::SeparableOption::NoMultiThreading as an option. The SetNumberOfThreads method of lineFilter will be called once before the processing starts, when dip::Framework::Separable has determined how many threads will be used in the processing, even if dip::Framework::SeparableOption::NoMultiThreading was specified.
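
The following hedged sketch shows what a separable line filter can look like: a [1 2 1]/4 smoothing applied along every dimension of a scalar image. It assumes that the callback is the Filter method of dip::Framework::SeparableLineFilter and that dip::Framework::SeparableBuffer exposes members named buffer, length, border and stride consistent with the description above; check the actual declarations (and any additional pure virtual members of the prototype class) before relying on it.

    #include "diplib.h"
    #include "diplib/framework.h"

    class SmoothLineFilter : public dip::Framework::SeparableLineFilter {
       public:
          void Filter( dip::Framework::SeparableLineFilterParameters const& params ) override {
             dip::sfloat const* in = static_cast< dip::sfloat const* >( params.inBuffer.buffer );
             dip::sfloat* out = static_cast< dip::sfloat* >( params.outBuffer.buffer );
             dip::sint stride = params.inBuffer.stride;
             for( dip::uint ii = 0; ii < params.inBuffer.length; ++ii ) {
                // The framework fills `border` pixels on either end of the line, so reading
                // one pixel to the left and right is safe here (we asked for border = 1).
                *out = 0.25f * ( in[ -stride ] + 2.0f * in[ 0 ] + in[ stride ] );
                in += stride;
                out += params.outBuffer.stride;
             }
          }
    };

    void Smooth( dip::Image const& in, dip::Image& out ) {
       SmoothLineFilter lineFilter;
       dip::Framework::Separable( in, out, dip::DT_SFLOAT, dip::DT_SFLOAT,
                                  {},       // process: empty array means all dimensions
                                  { 1 },    // border: one pixel on each side of the line
                                  { dip::BoundaryCondition::SYMMETRIC_MIRROR },
                                  lineFilter );
    }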

void Full(Image const& in, Image& out, DataType inBufferType, DataType outBufferType, DataType outImageType, dip::uint nTensorElements, BoundaryConditionArray const& boundaryConditions, Kernel const& kernel, FullLineFilter& lineFilter, FullOptions opts = {})

Framework for filtering of images with an arbitrary shape neighborhood.

The function object lineFilter is called for each image line, with input and output buffers either pointing directly to the input and output images, or pointing to temporary buffers that are handled by the framework and present the line's pixel data with a different data type, with expanded borders, etc. The buffers are always of the types specified in inBufferType and outBufferType, but are passed as void*. lineFilter should cast these pointers to the right types. The output buffer is not initialized; lineFilter is responsible for setting all its values.

lineFilter can access the pixels on the given line for all input and output images, as well as all pixels within the neighborhood for all input images. The neighborhood is given by kernel. This object defines the size of the border extension in the input buffer.

The output image (unless protected) will be resized to match the input, and its type will be set to that specified by outImageType. A protected output image must have the correct size and type, otherwise an exception will be thrown. The full filter always has one input and one output image.

Tensors are passed to lineFilter as vectors; if the shape is important, store this information in lineFilter. nTensorElements gives the number of tensor elements for the output image. These are created as standard vectors, unless the input image has the same number of tensor elements, in which case that tensor shape is copied. The calling function can reshape the tensors after the call to dip::Framework::Full. It is neither necessary nor enforced that the tensors for each image (both input and output) are the same; the calling function must make sure the tensors satisfy whatever constraints apply.

However, if the option dip::Framework::FullOption::AsScalarImage is given, then the line filter is called for each tensor element, effectively causing the filter to process a sequence of scalar images, one for each tensor element. nTensorElements is ignored, and set to the number of tensor elements of the input. For example, given an input image in with 3 tensor elements, filter(in,out) will result in an output image out with 3 tensor elements, computed as if filter were called 3 times: filter(in[0],out[0]), filter(in[1],out[1]), and filter(in[2],out[2]).

If the option dip::Framework::FullOption::ExpandTensorInBuffer is given, then the input buffer passed to lineFilter will contain the tensor elements as a standard, column-major matrix. If the image has tensors stored differently, buffers will be used. This option is not used when dip::Framework::FullOption::AsScalarImage is set, as that forces the tensor to be a single sample. Use this option if you need to do computations with the tensors, but do not want to bother with all the different tensor shapes, which are meant only to save memory. Note, however, that this option does not apply to the output image. When expanding the input tensor in this way, it makes sense to set the output tensor to a full matrix. Don't forget to specify the right size in nTensorElements.

The framework function sets the output pixel size to that of the input image, and it sets the color space to that of the input image if the two images have a matching number of tensor elements. The calling function is expected to "correct" these values if necessary.

The buffers are not guaranteed to be contiguous, so use the stride and tensorStride values to access samples. The pixel pointed to by the buffer pointer is the first pixel on that line in the input image. lineFilter can read any pixel within the neighborhood of all the pixels on the line. These pixels are filled by the framework using the boundaryConditions values. The boundaryConditions array can be empty, in which case the default boundary condition value is used.

If the option dip::Framework::FullOption::BorderAlreadyExpanded is given, then the input image is presumed to have been expanded using the function dip::ExtendImage (specify the option "masked"). That is, it is possible to read outside the image bounds within an area given by the size of kernel. If the tensor doesn't need to be expanded, and the image data type matches the buffer data type, then the input image will not be copied. In this case, a new data segment will always be allocated for the output image. That is, the operation cannot be performed in place. Also, boundaryConditions are ignored.

position gives the coordinates for the first pixel in the buffers; subsequent pixels occur along dimension dimension. position[dimension] is always zero. If dip::Framework::FullOption::AsScalarImage was given and the input image has more than one tensor element, then position will have an additional element. Use pixelTable.Dimensionality() to determine how many of the elements in position to use.

The input and output buffers will never share memory. That is, the line filter can freely write in the output buffer without invalidating the input buffer, even when the filter is being applied in-place.

dip::Framework::Full will process the image using multiple threads, so lineFilter will be called from multiple threads simultaneously. If it is not thread safe, specify dip::Framework::FullOption::NoMultiThreading as an option. The SetNumberOfThreads method of lineFilter will be called once before the processing starts, when dip::Framework::Full has determined how many threads will be used in the processing, even if dip::Framework::FullOption::NoMultiThreading was specified.
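
To make the above concrete, here is a hedged sketch of a full-framework line filter that computes the maximum over the kernel neighborhood. It assumes that the callback is the Filter method of dip::Framework::FullLineFilter, that the parameters struct exposes inBuffer, outBuffer, bufferLength and pixelTable as described above, and that the pixel table object can be iterated to yield the neighbor offsets into the input buffer; these assumptions (and any additional pure virtual members of the prototype class) should be verified against the headers.

    #include <algorithm>
    #include <limits>
    #include "diplib.h"
    #include "diplib/framework.h"
    #include "diplib/kernel.h"

    class LocalMaxLineFilter : public dip::Framework::FullLineFilter {
       public:
          void Filter( dip::Framework::FullLineFilterParameters const& params ) override {
             dip::sfloat const* in = static_cast< dip::sfloat const* >( params.inBuffer.buffer );
             dip::sfloat* out = static_cast< dip::sfloat* >( params.outBuffer.buffer );
             for( dip::uint ii = 0; ii < params.bufferLength; ++ii ) {
                dip::sfloat max = std::numeric_limits< dip::sfloat >::lowest();
                // The pixel table is assumed to yield one offset per kernel pixel; the
                // framework has already filled the border pixels these offsets may reach.
                for( auto offset : params.pixelTable ) {
                   max = std::max( max, in[ offset ] );
                }
                *out = max;
                in += params.inBuffer.stride;
                out += params.outBuffer.stride;
             }
          }
    };

    void LocalMaximum( dip::Image const& in, dip::Image& out ) {
       LocalMaxLineFilter lineFilter;
       dip::Framework::Full( in, out, dip::DT_SFLOAT, dip::DT_SFLOAT, dip::DT_SFLOAT,
                             1,                                             // one tensor element in the output
                             { dip::BoundaryCondition::SYMMETRIC_MIRROR },  // used for all dimensions
                             dip::Kernel( 5.0 ),                            // assumed: size constructor, default shape
                             lineFilter );
    }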