Sentinel-2 Super-Resolution Showdown: SR4RS vs. S2DR3

Remote sensing images are crucial for planetary monitoring but are often limited by the spatial resolution of sensors and the high costs of obtaining very high-resolution data. The Sentinel-2 (S2) mission provides valuable multispectral imagery across 13 bands at 10, 20, and 60-meter resolutions. However, these resolutions may not capture fine details needed for tasks like land cover mapping, agricultural monitoring, or disaster assessment. Super-Resolution (SR) techniques address this by reconstructing high-resolution images from lower-resolution ones, enhancing the spatial detail of S2 imagery for more accurate and insightful applications.

I have been exploring Sentinel-2 super-resolution techniques for some time, particularly since the rise of machine learning and artificial intelligence in remote sensing for tasks like classification, segmentation, and image enhancement. In this post, I will discuss two approaches to achieving super-resolution for Sentinel-2 imagery: the first is SR4RS, and the second is S2DR3.

Super-Resolution for Remote Sensing (SR4RS)

SR4RS is an open-source software tool designed to apply super-resolution techniques to remote sensing images, particularly Sentinel-2 imagery. It utilizes the Orfeo ToolBox TensorFlow (OTBTF) module, which integrates TensorFlow with the Orfeo ToolBox (OTB) for geospatial data processing. SR4RS enables the enhancement of low-resolution images, such as Sentinel-2's 10-meter Red, Green, Blue, and Near-Infrared bands, to higher resolutions (e.g., 2.5 meters) using deep learning models like convolutional neural networks. It includes a pre-trained model trained on 250 Spot-6/7 scenes from 2020, paired with Sentinel-2 images, and supports user-friendly applications for training and inference. The software is accessible via a Docker image and is designed for researchers to implement and test new super-resolution methods, with tools like PatchesExtraction and TensorflowModelServe for building processing pipelines.

To run the SR4RS tool, you need the latest version of OTBTF installed on your machine. However, manual installation can be challenging due to complex dependencies and configurations. The simplest approach is to use a pre-configured, ready-to-use Docker image. Here are the step-by-step instructions for using SR4RS.

1. Pull a Docker image using one of the following commands.

Latest CPU-only Docker image:

docker pull mdl4eo/otbtf:latest

Latest GPU Docker image:

docker pull mdl4eo/otbtf:latest-gpu

2. After pulling the Docker image, run it:

docker run --rm -ti -v /some/folder:/data mdl4eo/otbtf:latest

Or, if using a GPU:

docker run --rm --gpus=all -ti -v /some/folder:/data mdl4eo/otbtf:latest-gpu

3. In the Docker container's shell, download and unzip the pre-trained SavedModel with the following commands:

wget https://nextcloud.inrae.fr/s/boabW9yCjdpLPGX/download/sr4rs_sentinel2_bands4328_france2020_savedmodel.zip
unzip sr4rs_sentinel2_bands4328_france2020_savedmodel.zip

4. Next, clone the SR4RS repository from GitHub:

git clone https://github.com/remicres/sr4rs.git

5. The input image must be stacked in band order 4-3-2-8 (Red, Green, Blue, NIR). You can do the stacking with the Orfeo ToolBox ConcatenateImages command as follows:

otbcli_ConcatenateImages -il band4.tif band3.tif band2.tif band8.tif -out image_4328.tif
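If you prefer preparing the stack in Python, the operation is simply concatenating the four bands along the band axis in 4-3-2-8 order. A minimal numpy sketch, with synthetic arrays standing in for the real band rasters (reading and writing GeoTIFFs, e.g. with rasterio, is omitted):

```python
import numpy as np

# Synthetic 10 m bands (rows x cols); in practice these would be read
# from band4.tif, band3.tif, band2.tif, and band8.tif.
h, w = 64, 64
band4 = np.full((h, w), 4, dtype=np.uint16)  # Red
band3 = np.full((h, w), 3, dtype=np.uint16)  # Green
band2 = np.full((h, w), 2, dtype=np.uint16)  # Blue
band8 = np.full((h, w), 8, dtype=np.uint16)  # NIR

# SR4RS expects the bands stacked in 4-3-2-8 order
stack = np.stack([band4, band3, band2, band8], axis=0)
print(stack.shape)  # (4, 64, 64)
```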

6. Lastly, run the super-resolution script.

python sr4rs/code/sr.py \
--savedmodel sr4rs_sentinel2_bands4328_france2020_savedmodel \
--input /path/to/some/S2_image/image_4328.tif \
--output test.tif

The super-resolution output has a resolution of 2.5 m, as shown in Figure 1 below. You can compare the output with the original Sentinel-2 input, and also with the higher-resolution imagery from Google Satellite.
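Going from 10 m to 2.5 m is a 4x upscale in each direction, so the output grid has 16 times as many pixels as the input. A quick sketch of that arithmetic:

```python
def sr_output_size(rows, cols, in_res=10.0, out_res=2.5):
    """Output grid size when super-resolving from in_res to out_res (meters/pixel)."""
    factor = in_res / out_res  # 4.0 for the SR4RS model
    return int(rows * factor), int(cols * factor)

# A 1098 x 1098 pixel Sentinel-2 10 m subset becomes 4392 x 4392 at 2.5 m
print(sr_output_size(1098, 1098))  # (4392, 4392)
```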

If you have any problems using the SR4RS tool, please visit its official GitHub repository.

Figure 1. Sentinel-2 Super Resolution Output using SR4RS Tool

Sentinel-2 Deep Resolution 3.0 (S2DR3)

S2DR3 is a cutting-edge single-image super-resolution model developed to improve the spatial resolution of Sentinel-2 satellite imagery. It enhances all 12 spectral bands of a Sentinel-2 L2A (or L1C) scene, transforming their original resolutions of 10, 20, and 60 meters per pixel to a finer 1-meter-per-pixel resolution. Introduced by Yosef Akhtman in October 2023, the model employs an artificial neural network (ANN) architecture tailored to maintain subtle spectral details in soil and vegetation across multispectral bands, and it is capable of accurately reconstructing objects and textures with spatial features as small as 3 meters.

Using the S2DR3 tool is straightforward. Open the shared Google Colab notebook and execute the code, which is self-explanatory. You only need to specify the center point coordinates for the processing location. At the end of the code, you can view a preview of the results and access a link comparing the output with the original Sentinel-2 image, as illustrated in Figure 2.

Figure 2. Sentinel-2 Image Super Resolution Output from S2DR3
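Since the notebook only asks for a center point, it can help to know roughly what footprint that implies. The sketch below approximates the bounding box of a 4 km x 4 km (~16 km²) tile around the center coordinates; `bbox_around` is a hypothetical helper using an equirectangular approximation, not part of the S2DR3 notebook:

```python
import math

def bbox_around(lon, lat, half_side_m=2000.0):
    """Approximate WGS84 bounding box of a square tile centered at (lon, lat).

    half_side_m=2000 yields a 4 km x 4 km (16 km^2) footprint. Uses the
    equirectangular approximation: ~111,320 m per degree of latitude.
    """
    dlat = half_side_m / 111320.0
    dlon = half_side_m / (111320.0 * math.cos(math.radians(lat)))
    return (lon - dlon, lat - dlat, lon + dlon, lat + dlat)

# Example center point (illustrative coordinates)
print(bbox_around(106.8, -6.2))  # (min_lon, min_lat, max_lon, max_lat)
```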

The super-resolution output from S2DR3 achieves a 1-meter spatial resolution, covering an area of 16 square kilometers, which is higher than the resolution provided by SR4RS. Figure 3 allows for a visual comparison of the two results. Additionally, beyond the True Color RGB image, S2DR3 outputs include a Normalized Difference Vegetation Index (NDVI) image, a 10-band multispectral image, and an infrared pseudo-color image.
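The NDVI product follows the standard formula NDVI = (NIR − Red) / (NIR + Red). A minimal numpy sketch, assuming reflectance arrays for the Red (B4) and NIR (B8) bands:

```python
import numpy as np

def ndvi(red, nir):
    """NDVI = (NIR - Red) / (NIR + Red), guarding against division by zero."""
    red = np.asarray(red, dtype=np.float64)
    nir = np.asarray(nir, dtype=np.float64)
    denom = nir + red
    out = np.zeros_like(denom)
    np.divide(nir - red, denom, out=out, where=denom != 0)
    return out

# Toy reflectance values: vegetated pixels have high NIR relative to Red
red = np.array([[0.08, 0.10], [0.30, 0.00]])
nir = np.array([[0.40, 0.50], [0.32, 0.00]])
print(ndvi(red, nir))
```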

If you want to explore more about S2DR3's performance and accuracy, I strongly suggest visiting Yosef Akhtman's blog post, which explains the technical details.

The S2DR3 model outperforms SR4RS not only in spatial resolution but also in the variety of output types and the quality of geometric object representation. Upon close inspection, S2DR3 effectively groups buildings in mixed environments of varying sizes into larger, well-defined rectangular shapes. In contrast, SR4RS produces more irregular and less consistent results. This difference can be attributed to S2DR3's training on a significantly larger dataset compared to SR4RS. However, SR4RS is more user-friendly for learning and offers the flexibility for users to train the model with their own data, providing a valuable opportunity for customization.
 
This concludes my post on generating Sentinel-2 super-resolution images using SR4RS and S2DR3, which leverage artificial intelligence techniques in remote sensing. If you're interested in exploring more applications of AI and machine learning in the geospatial domain, please check out my other posts, including Automating Building Detection, Image Segmentation, and Automating GIS Tasks in QGIS with AI.
