It was not until some 400 years ago, toward the end of the Renaissance, that people began to resort to optical hardware to push beyond the boundary of our vision. In that period, Galileo raised his telescope to the sky, while Robert Hooke lit the oil lamp next to his compound microscope. This journey of discovery has continued through the evolution of telescopy and microscopy, with our curiosity pushing us ever closer to (and in some cases beyond) the limits set by the laws of physics.
In the field of microscopy, new challenges have emerged in recent years from techniques and imaging system designs such as super-resolution microscopy, light-sheet microscopy, and correlative microscopy.
These challenges call for better spatial and temporal resolution, a higher capacity to handle growing data volumes and throughput, and fast, accurate processing of multimodal data with statistically significant information extraction.
Just as it recently helped us gain a first glimpse of a black hole, the computational approach has been widely employed in addressing these challenges as well. We have witnessed powerful algorithms demonstrate their capability and value in every segment of imaging, from image acquisition to image processing. With an emerging pivotal role in moments of discovery, there is no denying that computation is becoming another important means of bringing information to sight, and this is increasingly the case in microscopy.
It has long been our belief that the best-performing imaging system arises from the synergy between hardware design and the computational software solutions that supplement its potential and capacity. One can only achieve meaningful results when the two play in perfect concert.
Developing, or even just finding, the right image processing software can be a painstaking task. Through our interviews with researchers, we have learned that one can hardly get anywhere in research today without substantial coding skills, or without finding someone to code on one's behalf. But what if there were an easy and inexpensive way to deliver the right image processing solution to anyone, anywhere, in any imaging experiment, with any microscope? An easy, collaborative way to access the right imaging solution regardless of location, field of research, or microscope seems like the dream.
When arivis Cloud was just starting, we faced a highly fragmented space of image processing solutions in microscopy. We could identify valuable dots here and there, be it a powerful algorithm or a handy visual annotation tool for machine learning, but there was barely any connection between most of them, and certainly nothing you would call a publicly available, off-the-shelf image processing solution. It became almost immediately clear to us that a platform approach was the only way forward.
By embracing the power of containerization technology (e.g., Docker), we enable arivis Cloud users to create standard arivis Cloud image processing modules whose underlying algorithm or code can be written in almost any high-level programming language. This removes the complexity of configuring the execution environments that different programming languages require. To execute an arivis Cloud module, users only need to install Docker on their computer, which is easy to do, and they are good to go.
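To illustrate how containerization packages an execution environment along with the code, a Python-based module's dependencies could be baked into a Docker image roughly like this. The file names, base image, and layout here are assumptions for illustration, not the actual arivis Cloud module specification:

```dockerfile
# Hypothetical container recipe for an image processing module.
# The runtime and dependencies ship inside the image, so the user
# only needs Docker installed, not Python or any libraries.
FROM python:3.11-slim
RUN pip install numpy scikit-image
COPY segment.py /app/segment.py
ENTRYPOINT ["python", "/app/segment.py"]
```

Building and running this image would execute the module identically on any machine with Docker, which is what frees users from per-language environment setup.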
To truly unlock the power of image processing solutions, we support an easy way for users to combine modules into workflows. We achieve this by defining a set of standard module interfaces (i.e., a module specification) through which modules can be chained, or can replace one another in a workflow if another module, created by you or by other arivis Cloud users, does a better job.
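The chaining idea can be sketched in a few lines of Python. The `Module` interface, the toy `denoise` and `threshold` steps, and `run_workflow` below are hypothetical illustrations of the concept, not the arivis Cloud module specification itself:

```python
# Sketch of chaining modules through a shared interface (illustrative only).
from typing import Callable, List
import numpy as np

# A "module" is any callable mapping an image array to an image array.
Module = Callable[[np.ndarray], np.ndarray]

def denoise(image: np.ndarray) -> np.ndarray:
    """Toy preprocessing step: clip intensities into [0, 1]."""
    return np.clip(image, 0.0, 1.0)

def threshold(image: np.ndarray) -> np.ndarray:
    """Toy segmentation step: binarize at 0.5."""
    return (image > 0.5).astype(np.uint8)

def run_workflow(image: np.ndarray, modules: List[Module]) -> np.ndarray:
    """Chain modules: each module's output becomes the next one's input."""
    for module in modules:
        image = module(image)
    return image

# Because every module exposes the same interface, `threshold` could be
# swapped for any other segmentation module that does a better job.
result = run_workflow(np.array([[0.2, 1.4], [0.9, -0.1]]), [denoise, threshold])
```

The design choice mirrored here is that a fixed input/output contract, rather than a shared language or author, is what lets independently written modules interoperate.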
Et voilà: code in any language and from different authors, connected flexibly into your own image processing solution that you can easily create and automate, regardless of your coding skills.
We encourage users to share their image processing solutions (i.e., modules and workflows) publicly, or within a circle of peers, on arivis Cloud. We believe that, in this way, we not only promote collaboration and collegiality within the microscopy community, but also save precious research time through the reuse of existing solutions.
Up to now, we have been greatly encouraged whenever our users confirm the value of our approach. In one case, Dr. Ravi Manjithaya transformed the analysis of the traffic light assay from a manual process into automated high-throughput processing; in another, Dr. Mindaugas Valius helped his team improve the efficiency of biomarker identification and characterization in cancer cell research. We are happy to see more and more such cases appear around the globe, as we help the microscopy community embrace a computational future.