New discoveries #8
Data Fusion Contest, Building Coverage from Sentinel Imagery, How to choose a deep learning architecture, YOLOv8 & Raster Vision updates, Planet-CR dataset
Welcome to the 8th edition of the newsletter. I am delighted to share that the newsletter reached a new milestone over the Xmas break and now has 2.2k subscribers 🥳 I took a break from writing and spent the time considering my goals for 2023. I will be moving from a weekly to a fortnightly schedule for newsletter posts, in order to free up time to work on videos. Please note that this edition of the newsletter does not have a sponsor. As a sponsor, you'll receive a shout-out in the opening statement and a dedicated section in the newsletter, reaching a wide audience in the community. If you're interested in gaining visibility for your business or service, sponsoring a future edition of the newsletter is an excellent way to achieve this. For more information on how to sponsor the newsletter, please email me 📧
2023 IEEE GRSS Data Fusion Contest
Buildings dominate the urban landscape and consume significant amounts of energy, contributing to climate change. There has been substantial progress in the extraction and 3D reconstruction of building footprints, but fine-grained classification of roof types remains a challenge due to the ambiguous visual features of roofs in aerial imagery. Roof type information is, however, critical to many downstream applications, and this challenge aims to close that gap. A large-scale, fine-grained, multi-modal (SAR + optical) benchmark dataset for the classification of building roof types has been released for the contest.
🗓️ Deadline: 13 March 2023
Building Coverage from Sentinel Imagery
Building coverage information provides crucial insights into the urbanization, infrastructure, and poverty level of a region. Ideally this information would be updated regularly, but automated mapping efforts using deep learning typically require expensive high resolution imagery, limiting the applicability of the approach. The paper Building Coverage Estimation with Low-resolution Remote Sensing Imagery demonstrates a method for estimating building coverage (i.e. the percentage of an area covered by buildings) using only publicly available low-resolution Sentinel 1 & 2 satellite imagery. Figure 1 from the paper illustrates the approach: a ResNet18-based regression model is trained to predict the quantile of building coverage. The model accurately predicts building coverage from raw input images and generalizes well to unseen countries and continents. It is great to see a scalable and low-cost solution which can be applied globally, with implications for monitoring development in areas which typically do not have the funds to purchase high resolution data.
Authors: Enci Liu, Chenlin Meng, Matthew Kolodner, Eun Jee Sung, Sihang Chen, Marshall Burke, David Lobell, Stefano Ermon
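For a sense of how such a model could be wired up, here is a minimal sketch of a ResNet18-based coverage regressor. It is not the paper's exact architecture: the number of input bands, the sigmoid head and the MSE loss are all illustrative assumptions (the paper predicts coverage quantiles), so treat it purely as a starting point.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class CoverageRegressor(nn.Module):
    """Illustrative ResNet18-based regressor for building coverage (0-1)."""
    def __init__(self, in_channels: int = 6):  # 6 bands is an assumption, not the paper's setup
        super().__init__()
        backbone = resnet18(weights=None)
        # Replace the first conv so the network accepts multi-band Sentinel input
        backbone.conv1 = nn.Conv2d(in_channels, 64, kernel_size=7, stride=2, padding=3, bias=False)
        # Replace the classification head with a single regression output
        backbone.fc = nn.Linear(backbone.fc.in_features, 1)
        self.backbone = backbone

    def forward(self, x):
        # Sigmoid keeps the predicted coverage fraction in [0, 1]
        return torch.sigmoid(self.backbone(x)).squeeze(1)

model = CoverageRegressor(in_channels=6)
batch = torch.randn(4, 6, 224, 224)   # fake Sentinel chips
target = torch.rand(4)                # fraction of built-up area per chip
loss = nn.functional.mse_loss(model(batch), target)
loss.backward()
```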
How to choose a deep learning architecture
Jeff Faudi has published a blog post titled How to choose a deep learning architecture to detect aircrafts in satellite imagery? Model performance metrics are typically reported on the COCO dataset, but COCO has quite different characteristics to remote sensing datasets, so those results may not be representative. In this post Jeff demonstrates how to systematically evaluate the performance of different object detection architectures on a remote sensing dataset of planes. He covers the practical details of dataset pre-processing & model training, then proceeds to evaluate 4 different object detection models using the IceVision framework. The comparison of model metrics is complemented by an excellent discussion of the qualitative differences in the predictions from the different models.
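I have not reproduced Jeff's IceVision code here, but the general pattern of holding the validation set fixed and scoring every candidate architecture with the same COCO-style metric is easy to sketch. The snippet below assumes torchvision-style detectors (a list of images in, a list of box/score/label dicts out) and a loader yielding targets in the same dict format; all names are placeholders rather than the blog's actual code.

```python
import torch
from torchmetrics.detection.mean_ap import MeanAveragePrecision

def evaluate(model, data_loader, device="cuda"):
    """Score one detector on a fixed validation set with COCO-style mAP."""
    metric = MeanAveragePrecision()
    model.eval().to(device)
    with torch.no_grad():
        for images, targets in data_loader:
            preds = model([img.to(device) for img in images])
            # torchmetrics expects lists of dicts: boxes/scores/labels for preds,
            # boxes/labels for targets, all on CPU
            metric.update([{k: v.cpu() for k, v in p.items()} for p in preds], targets)
    return metric.compute()["map"].item()

# candidates = {"retinanet": retinanet_model, "faster_rcnn": frcnn_model}  # placeholders
# for name, model in candidates.items():
#     print(name, evaluate(model, valid_loader))
```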
YOLOv8 launches today!
YOLOv5 from Ultralytics has been my go-to library for object detection for the past couple of years owing to its ease of use, excellent documentation, and generally ‘good enough’ results with relatively minimal effort. There are a couple of pain points with v5 however: in particular it is necessary to git clone the repository to use it, and the model performance is no longer state of the art. I am therefore delighted that today Ultralytics is launching YOLOv8 with significant improvements 🎉 I have been granted early access to YOLOv8 and am impressed by the improvements in both the user experience and the model metrics on benchmark datasets (COCO & RF100). Significantly, v8 can be installed via pip and used either via a CLI or a Python interface. Note that the model performance improvements are due to architectural and other changes, which will be documented in a paper to be released soon. I have also been told that benchmarking will soon be performed on the DOTA remote sensing dataset. In the meantime please see the official documentation below:
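To give a flavour of the new interface, the Python workflow looks roughly like this (dataset YAML and file names are placeholders, and details may shift around the release, so defer to the official docs):

```python
# pip install ultralytics
from ultralytics import YOLO

# Start from a pretrained nano checkpoint and fine-tune on your own data
model = YOLO("yolov8n.pt")
model.train(data="my_dataset.yaml", epochs=50, imgsz=640)  # placeholder dataset config

# Run inference on new imagery
results = model.predict("scene.jpg", conf=0.25)

# The equivalent CLI call would be something like:
# yolo detect train data=my_dataset.yaml model=yolov8n.pt epochs=50 imgsz=640
```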
Raster Vision v0.20
Raster Vision is an open source library and framework for building computer vision models on satellite, aerial, and other large imagery sets. Whilst I have been aware of Raster Vision for some time, I never really used it owing to the workflow it imposed (CLI and Docker containers), which is not my preferred way of working. However the recently announced v0.20 brings a host of improvements, including:
Improved documentation & tutorials
Raster Vision can now be imported as a library and used in Jupyter notebooks (see the sketch at the end of this section)
Support for multiband imagery and external models has been extended to chip classification and object detection as well as segmentation
Raster Vision can now combine bands from multiple sources of raster data even if they have different resolutions and extents
Compatibility with pytorch-lightning ⚡
Whether you are training models or creating production-grade inference pipelines, Raster Vision has something to offer.
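As an example of the new library-style usage, the snippet below streams a chip out of a remote GeoTIFF from inside a notebook. It is based on my reading of the v0.20 tutorials, and the exact imports and signatures are an assumption on my part, so check the documentation before copying it.

```python
from rastervision.core.box import Box
from rastervision.core.data import RasterioSource

# Stream a cloud-optimised GeoTIFF directly from its URI (placeholder URL)
source = RasterioSource("https://example.com/scene.tif", allow_streaming=True)

# Read a 256 x 256 window as a (height, width, channels) numpy array
chip = source.get_chip(Box(ymin=0, xmin=0, ymax=256, xmax=256))
print(chip.shape)
```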
Planet-CR dataset
Planet-CR is a public dataset for cloud removal which combines globally sampled high-resolution optical observations with paired radar measurements and pixel-level land cover annotations.
Course
I have begun work on a course on deep learning applied to satellite & aerial imagery ✍️ Whilst there are relevant courses online already (see here), most (in my opinion) either lack remote sensing specific detail or simply have not been updated with contemporary best practices. My goal is to create a remote sensing specific course that takes a reader from basic deep learning techniques (classification, object detection etc.) all the way to more advanced topics such as cloud removal and data fusion. By making the course open source I aim to attract contributions from the top experts in this field, and ensure it is continually maintained and improved over time. If you are interested in contributing to the course please let me know via email.
Jobs & Events
Do you have a job or event you would like to promote here? Let me know!
Event: IGARSS2023 session on "Responsible AI4EO" (CCS.132)
The International Geoscience and Remote Sensing Symposium (IGARSS) is the flagship conference of the IEEE Geoscience and Remote Sensing Society (GRSS). It is aimed at providing a platform for sharing knowledge and experience on recent developments and advancements in geoscience and remote sensing technologies, particularly in the context of earth observation, disaster monitoring and risk assessment. I have received a particular request for presenters on the topic of "Responsible AI4EO" (CCS.132): What is the role of EO and AI in addressing sustainability and ethics?
🖥️ Website
🌎 Pasadena, California, USA
🗓️ 16 - 21 July, 2023
🗓️ Abstract submission deadline: 13 January 2023
Poll
In the last poll I asked which kind of models people plan to deploy in the future, and segmentation (pixel level) received the most votes, tied with ‘all of the above’ (I like the ambition!). This week I am interested to know what kind of video content you want to see: