New discoveries #8

Data Fusion Contest, Building Coverage from Sentinel Imagery, How to choose a deep learning architecture, YOLOv8 & Raster Vision updates, Planet-CR dataset

Robin Cole
Jan 10, 2023

Welcome to the 8th edition of the newsletter. I am delighted to share that the newsletter reached a new milestone over the Xmas break and now has 2.2k subscribers 🥳

2023 IEEE GRSS Data Fusion Contest

Buildings dominate the urban landscape and consume significant amounts of energy, contributing to climate change. There has been significant progress in the extraction and 3D reconstruction of building footprints, but fine-grained classification of roof types remains a challenge due to the ambiguous visual appearance of roofs in aerial imagery. Roof type information is, however, critical to many applications, and this contest aims to close the gap: a large-scale, fine-grained, multi-modal (SAR + optical) benchmark dataset for building roof type classification has been released.

  • 🖥️ Contest Website

  • 🗓️ Deadline: 13 March 2023

  • 🖥️ References on building segmentation
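
To make the multi-modal aspect concrete, here is a minimal sketch of what a late-fusion SAR + optical roof-type classifier could look like in PyTorch. It is not the contest baseline: the backbone, channel counts, and number of roof classes are all illustrative assumptions.

```python
# Minimal sketch of a dual-branch SAR + optical roof-type classifier.
# Not the contest baseline; channel counts and class count are assumptions.
import torch
import torch.nn as nn
from torchvision import models

class RoofTypeClassifier(nn.Module):
    def __init__(self, num_classes: int = 12):
        super().__init__()
        # Optical branch: 3-channel RGB patch
        self.optical = models.resnet18(weights=None)
        self.optical.fc = nn.Identity()
        # SAR branch: assume a single-channel SAR patch
        self.sar = models.resnet18(weights=None)
        self.sar.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
        self.sar.fc = nn.Identity()
        # Late fusion: concatenate the two feature vectors and classify
        self.head = nn.Linear(512 + 512, num_classes)

    def forward(self, optical: torch.Tensor, sar: torch.Tensor) -> torch.Tensor:
        feats = torch.cat([self.optical(optical), self.sar(sar)], dim=1)
        return self.head(feats)

model = RoofTypeClassifier()
logits = model(torch.randn(2, 3, 224, 224), torch.randn(2, 1, 224, 224))
print(logits.shape)  # torch.Size([2, 12])
```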

Building Coverage from Sentinel Imagery

Building coverage information provides crucial insight into the urbanization, infrastructure, and poverty level of a region. Ideally this information is updated regularly, but automated mapping efforts using deep learning typically require expensive high-resolution imagery, limiting the applicability of the approach. The paper Building Coverage Estimation with Low-resolution Remote Sensing Imagery demonstrates a method for estimating building coverage (i.e. the percentage of an area covered by buildings) using only publicly available low-resolution Sentinel-1 & 2 satellite imagery. Figure 1 from the paper illustrates the approach: a ResNet18-based regression model is trained to predict the quantile of building coverage. The model accurately predicts building coverage from raw input images and generalizes well to unseen countries and continents. It is great to see a scalable, low-cost solution that can be applied globally, with implications for monitoring development in regions that typically do not have the funds to purchase high-resolution data.

Authors: Enci Liu, Chenlin Meng, Matthew Kolodner, Eun Jee Sung, Sihang Chen, Marshall Burke, David Lobell, Stefano Ermon

  • 📖 Paper

  • 🖥️ References on regression models
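
As a rough illustration of the kind of model described, below is a minimal sketch of a ResNet18-based coverage regressor on stacked Sentinel bands. It is a plain regressor rather than the paper's quantile formulation, and the band count, loss, and output scaling are assumptions.

```python
# Minimal sketch of a ResNet18-based building-coverage regressor on stacked
# Sentinel-1/2 bands. Plain regression, not the paper's quantile formulation;
# the band count and loss are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models

NUM_BANDS = 14  # assumption: 12 Sentinel-2 bands + 2 Sentinel-1 polarisations

model = models.resnet18(weights=None)
model.conv1 = nn.Conv2d(NUM_BANDS, 64, kernel_size=7, stride=2, padding=3, bias=False)
model.fc = nn.Sequential(nn.Linear(model.fc.in_features, 1), nn.Sigmoid())  # coverage in [0, 1]

x = torch.randn(4, NUM_BANDS, 224, 224)  # a batch of low-resolution tiles
pred = model(x).squeeze(1)               # per-tile building coverage fraction
loss = nn.functional.mse_loss(pred, torch.rand(4))
```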

How to choose a deep learning architecture

Jeff Faudi has published a blog post titled How to choose a deep learning architecture to detect aircrafts in satellite imagery? Typically, model performance metrics are reported on the COCO dataset, but this dataset has different characteristics from remote sensing datasets, so the reported results may not be indicative. In this post Jeff demonstrates how to systematically evaluate the performance of different object detection architectures on a remote sensing dataset of planes. Jeff covers the practical details of dataset pre-processing and model training, and then evaluates four different object detection models using the Icevision framework. The comparison of model metrics is complemented by an excellent discussion of the qualitative differences in the predictions from each model.

  • 🖥️ Blog post

  • 🐦 Jeff on Twitter

  • 💻 Icevision object detection framework

  • 💻 Colab notebook
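
As a generic illustration of this kind of architecture comparison (not Jeff's Icevision workflow), the sketch below evaluates two torchvision detectors with a COCO-style mAP metric, using random tensors in place of a real aircraft dataset.

```python
# Generic sketch of comparing detection architectures by COCO-style mAP.
# Jeff's post uses Icevision; this uses torchvision models and torchmetrics
# purely for illustration, with random tensors standing in for a real dataset.
import torch
from torchvision.models import detection
from torchmetrics.detection.mean_ap import MeanAveragePrecision

candidates = {
    "faster_rcnn": detection.fasterrcnn_resnet50_fpn(weights=None, num_classes=2),
    "retinanet": detection.retinanet_resnet50_fpn(weights=None, num_classes=2),
}

images = [torch.rand(3, 512, 512)]
targets = [{"boxes": torch.tensor([[100.0, 100.0, 200.0, 200.0]]),
            "labels": torch.tensor([1])}]

for name, model in candidates.items():
    model.eval()
    with torch.no_grad():
        preds = model(images)  # list of dicts with boxes, labels, scores
    metric = MeanAveragePrecision()
    metric.update(preds, targets)
    print(name, metric.compute()["map"].item())
```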

YOLOv8 launches today!

YOLOv5 from Ultralytics has been my go-to library for object detection for the past couple of years owing to its ease of use, excellent documentation, and generally ‘good enough’ results with relatively minimal effort. There are a couple of pain points with v5, however: in particular, it is necessary to git clone the repository to use it, and the model performance is no longer state of the art. I am therefore delighted that today Ultralytics is launching YOLOv8 with significant improvements 🎉 I have been granted early access to YOLOv8 and am impressed by the improvements in both the user experience and the model metrics on benchmark datasets (COCO & RF100). Significantly, v8 can be installed via pip and used through either a CLI or a Python interface. Note that the model performance improvements are due to architectural and other changes, which will be documented in a paper to be released soon. I have also been told that benchmarking will soon be performed on the DOTA remote sensing dataset. In the meantime, please see the official documentation below:

  • 🖥️ YOLOv8 documentation

  • 💻 Colab notebook
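
As a quick taste of the new interface, here is a minimal sketch using the pip-installable package; the image path and epoch count are placeholders.

```python
# Minimal sketch of the new pip-installable interface (pip install ultralytics).
# The image path and epoch count are placeholders.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                    # pretrained weights download on first use
results = model.predict("path/to/image.jpg")  # inference from Python
model.train(data="coco128.yaml", epochs=3)    # fine-tuning from Python
# Roughly equivalent CLI: yolo predict model=yolov8n.pt source=path/to/image.jpg
```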

Raster Vision v0.20

Raster Vision is an open source library and framework for building computer vision models on satellite, aerial, and other large imagery sets. Whilst I have been aware of Raster Vision for some time, I never really used it owing to the workflow it imposed (CLI and Docker containers), which is not my preferred way of working. However, the recently announced v0.20 brings a host of improvements, including:

  • Improved documentation & tutorials

  • Raster Vision can now be imported as a library and used in Jupyter notebooks

  • Support for multiband imagery and external models has been extended to chip classification and object detection as well as segmentation

  • Raster Vision can now combine bands from multiple sources of raster data even if they have different resolutions and extents

  • Compatibility with pytorch-lightning ⚡

Whether you are training models or creating production-grade inference pipelines, Raster Vision has something to offer.

  • 🖥️ Raster Vision v0.20 release post
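
As a small taste of the library-style usage, the sketch below reads a chip from a GeoTIFF with a RasterioSource, assuming the API shown in the v0.20 tutorials; the URI is a placeholder, so check the official docs for the exact signatures.

```python
# Minimal sketch of using Raster Vision v0.20 as a library, assuming the
# RasterioSource API from the v0.20 tutorials. The URI is a placeholder.
from rastervision.core.data import RasterioSource

source = RasterioSource("https://example.com/scene.tif", allow_streaming=True)
chip = source[:256, :256]  # read a 256 x 256 window as a numpy array
print(chip.shape)
```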

Planet-CR dataset

Planet-CR is a public dataset for cloud removal, featuring globally sampled high-resolution optical observations paired with radar measurements and pixel-level land cover annotations.

  • 💽 Dataset on GitHub

  • 📖 Paper

Poll

In the last poll I asked what kind of models people plan to deploy in the future, and segmentation (pixel-level) received the most votes, tied with ‘all of the above’ (I like the ambition!). This week's poll asks what kind of video content you would like to see.


Thanks for reading satellite-image-deep-learning! Subscribe for free to receive new posts and support my work.
