Working with large microscopy images can be frustrating:
- Your computer crashes when you are just trying to load an image
- The image appears to be loaded but you cannot view or interact with it
- The software stops responding just when you are saving the last hour’s work
Sound familiar?
A typical microscopy image in #biotech research can easily reach 70 GB*, too large to fit in RAM on most systems: too large for visualization, let alone analysis. And if you want to perform #deeplearning based analysis, the data gets copied multiple times depending on the neural network architecture, multiplying the memory footprint and making naive in-memory processing impractical.
So, what image analysis strategies do you have at your disposal?
The first strategy is to train on a representative subset of your data. This works great if you can find a subset for training that is representative of the whole data set, and it is a valid approach for many #microscopy use cases. Keep in mind, though, that you may be missing parts of the story by picking a subset. During inference (e.g., segmentation), the data can then be analyzed in chunks (tiles) and put back together using smart blending between the tiles. For example, arivis Cloud uses smooth tiling that gives seamless segmentation results from large images.
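To make the tiling idea concrete, here is a minimal sketch of overlap-and-average tiled inference in Python (NumPy). It is not the arivis smooth-tiling algorithm: the predict_tile placeholder, the tile size, and the overlap are assumptions you would swap for your own trained model and settings.

import numpy as np

def predict_tile(tile):
    # Placeholder "model": a simple threshold standing in for a real network's prediction.
    return (tile > tile.mean()).astype(np.float32)

def tiled_inference(image, tile=256, overlap=32):
    # Process overlapping tiles and average the predictions where tiles overlap.
    out = np.zeros(image.shape, dtype=np.float32)
    count = np.zeros(image.shape, dtype=np.float32)
    step = tile - overlap
    for y in range(0, image.shape[0], step):
        for x in range(0, image.shape[1], step):
            window = (slice(y, y + tile), slice(x, x + tile))
            out[window] += predict_tile(image[window])
            count[window] += 1.0
    return out / np.maximum(count, 1.0)

result = tiled_inference(np.random.rand(4096, 4096))

Averaging the overlap regions already hides most tile seams; production tools typically weight the overlaps (e.g., with a windowing function) for even smoother transitions.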
The second strategy is to move the computation to the cloud. This is a great approach for those with access to distributed computing resources, which can easily be subscribed to via AWS, Google Cloud, or Microsoft Azure. The downside of such an approach is the data transfer time between your local storage and the cloud: 70 GB is manageable for cloud transfers, but it may not be practical for very large data sets. On-premises (local) solutions such as arivis VisionHub make it possible to load massive quantities of data onto a local server for processing by computational workers. These workers are designed to use processing resources intelligently, allowing efficient image analysis from megabytes to terabytes.
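As a rough illustration of that transfer cost, here is a back-of-the-envelope estimate in Python (the 100 Mbit/s uplink is an assumed value; plug in your own bandwidth):

size_gb = 70.3125                       # data set size from the footnote below
uplink_mbit_per_s = 100                 # assumed upload bandwidth
size_bits = size_gb * 1024**3 * 8
hours = size_bits / (uplink_mbit_per_s * 1e6) / 3600
print(round(hours, 1))                  # ~1.7 hours for a single 70 GB data set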
The third strategy, if you can write code, is to use smart programming techniques such as lazy evaluation and out-of-core computation, which read and process the data in chunks so the full image never has to fit in memory at once.
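As a minimal sketch of what lazy, out-of-core processing looks like in practice, here is an example using dask and zarr (both assumed to be installed; the file names, chunk shape, and threshold are hypothetical):

import dask.array as da

# Open a chunked Zarr volume lazily; nothing is read from disk yet.
vol = da.from_zarr("experiment_01.zarr", chunks=(64, 512, 512))

# These lines only build a computation graph, not results.
background = vol.mean()        # global mean intensity (lazy scalar)
mask = vol > 2 * background    # simple intensity threshold, still lazy

# Execution streams one chunk at a time through memory and writes the result out-of-core.
mask.to_zarr("experiment_01_mask.zarr")

Nothing above loads the whole volume: the threshold is applied chunk by chunk, and only the chunks currently being processed need to fit in memory.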
While coders can code their way to such solutions, non-coders need prepackaged software that is usable by anyone: software that leverages advanced algorithms but hides the complexity from the researcher. For example, arivis Vision4D is a multi-dimensional image analysis software package from ZEISS that incorporates intelligent algorithms for large-data visualization and analysis. It scales the computation accordingly and lets you work on your laptop or on an advanced server. You will get to your results much faster on a server, but you will not be stopped from analyzing your data on your laptop just because it does not fit into memory in its entirety.
These image analysis strategies can help you work with large images on a laptop or small workstation.
Contact us to learn more about the products mentioned in this article or to automate your image analysis pipelines.
* File size calculation for scientific images:
File size in bytes = X × Y × Z × T × C × d / 8 (divide by 8 to convert bits to bytes)
X - Number of pixels along X
Y - Number of pixels along Y
Z - Number of pixels along Z (number of slices)
T - Number of time points (e.g., in a time series)
C - Number of channels
d - Pixel depth (e.g., 8-bit or 16-bit integer)
Example calculation for a 2048 × 2048 image with 300 slices along Z, 12 time points, 5 channels, saved as 8-bit:
2048 × 2048 × 300 × 12 × 5 × 8/8 = 75,497,472,000 Bytes
75,497,472,000 Bytes / (1024 × 1024 × 1024) = 70.3125 GB
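The same calculation as a small Python helper, if you want to plug in your own acquisition parameters:

def image_size_gb(x, y, z, t, c, bit_depth):
    # X * Y * Z * T * C * pixel depth, converted from bits to bytes, then to GB.
    size_bytes = x * y * z * t * c * bit_depth / 8
    return size_bytes / 1024**3

print(image_size_gb(2048, 2048, 300, 12, 5, 8))   # 70.3125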