New discoveries #9
Unsupervised Wildfire Change Detection, MAFAT challenge, Which device to deploy in a satellite?, Iquaflow, Discord, datasets repository & Career chat with Philip Robinson
Welcome to the 9th edition of the newsletter. I am delighted to share that the newsletter continues to grow and now reaches 2,620 subscribers 🥳 Shout out and special thanks to the sponsors of this newsletter edition, the MAFAT challenge 🙏 If you're interested in gaining visibility for your business or service, sponsoring a future edition of the newsletter is an excellent way to achieve this. As a sponsor, you'll receive a shout-out in the opening statement and a dedicated section in the newsletter, reaching a wide audience in the community. For more information on how to sponsor the newsletter, please email me 📧
Unsupervised Wildfire Change Detection
Assessment of the severity of a wildfire provides information about the fuel conditions in an area and informs predictions about future fires there. This information is also useful to emergency first responders, and is used to assess the impacts of a wildfire on people, communities, and the natural ecosystem. The paper Unsupervised Wildfire Change Detection based on Contrastive Learning demonstrates a method for mapping burned areas by detecting changes between pairs of multispectral images (4 bands of Sentinel-2 and PlanetScope satellite imagery). Typically, change detection is treated as a supervised machine learning problem requiring large volumes of annotated data, which is costly to generate. Here this is avoided by using self-supervised contrastive learning to pre-train a feature extraction network (named FireCLR) on unlabelled data. To perform change detection, k-means clustering is applied to the differenced FireCLR representations from a pair of images at a location, with a small amount of annotated data used to optimise the clustering. This approach produced more accurate burned-area predictions than baselines using dNDVI & dNBR (with mixed results on some downstream tasks, such as identifying particular kinds of ash). The authors propose future enhancements, such as pre-training on longer time series of images to make the model robust to seasonal changes.
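The pipeline can be sketched end-to-end in a few lines. Note this is a toy illustration: the encoder below stands in for FireCLR (it uses per-band patch means rather than a learned contrastive network), and the image pair is synthetic.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

def encode_patches(image, patch=4):
    """Stand-in for a FireCLR-style encoder: map each patch of a
    (H, W, 4) multispectral image to a feature vector. Here we simply
    use the per-band patch means; the real encoder is a learned CNN."""
    h, w, bands = image.shape
    feats = image.reshape(h // patch, patch, w // patch, patch, bands)
    return feats.mean(axis=(1, 3)).reshape(-1, bands)

# Two co-registered 4-band images of the same location, before/after a fire.
# The "after" image has a simulated burn scar with depressed reflectance.
before = rng.random((32, 32, 4))
after = before.copy()
after[8:24, 8:24, :] -= 0.5  # simulated burned region

# Difference the representations, then cluster into change / no-change.
diff = encode_patches(after) - encode_patches(before)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(diff)

# Identify the "change" cluster as the one with larger mean |difference|,
# and reshape the patch labels into an 8 x 8 (= 32/4) change mask.
mags = [np.abs(diff[labels == k]).mean() for k in (0, 1)]
change_mask = (labels == int(np.argmax(mags))).reshape(8, 8)
```

In the paper, a small labelled set is used to tune this clustering step; here the clusters are separated purely by the magnitude of the differenced features.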
Authors: Beichen Zhang, Huiqi Wang, Amani Alabri, Karol Bot, Cole McCall, Dale Hamilton, Vít Růžička
MAFAT challenge
The “MAFAT Satellite Vision Challenge” — Satellite Imagery Object Detection Competition — is the 4th competition in the MAFAT Challenge series. It challenges entrants to address ‘model drift’ in the context of object detection. Model drift is a term for the problem that models tend to degrade in performance (i.e. drift) when deployed in the real world. To train object detection models that are robust to model drift, the competition provides a dataset of diverse satellite images spanning a range of resolutions (0.4m to 1.3m), look angles/azimuths and imaging conditions (night, day & seasonal variations). Entrants are allowed two passes on the test dataset - one for calibration and a second for final predictions. Sample images from the dataset (below) demonstrate the challenge facing entrants:
The competition is operated by Webiks on behalf of MAFAT’s DDR&D (Directorate of Defence Research & Development), and kicks off with an online Meetup (in English) that includes a presentation from Nadav Barak, machine learning researcher at Deepchecks, on drift detection in structured and unstructured data. Whether you intend to participate in the competition, or just want to learn more about model drift, you are highly encouraged to sign up for the Meetup.
💰 prizes: $45,000 in total!
🗓️ Start date: 1st Feb with online Meetup at 18:30 (Israel time zone, GMT+2)
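As a minimal illustration of the drift idea (not the competition's evaluation protocol), a two-sample Kolmogorov-Smirnov test can flag when a feature's distribution at deployment time no longer matches the training distribution. The feature and the distributions below are assumptions for illustration only.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)

# Proxy for a feature extracted from images (e.g. mean scene brightness):
# training data from daytime scenes, deployment data shifted toward night.
train_feature = rng.normal(loc=0.6, scale=0.1, size=1000)
deploy_feature = rng.normal(loc=0.35, scale=0.15, size=1000)

# Two-sample Kolmogorov-Smirnov test: a small p-value means the two
# distributions differ, flagging drift between training and deployment.
stat, p_value = ks_2samp(train_feature, deploy_feature)
drift_detected = p_value < 0.01
```

Detecting drift is only the first step; the competition asks entrants to build detectors that remain accurate despite it.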
Which device to deploy in a satellite?
In the paper We are Going to the Space - Part 1: Which device to deploy in a satellite?, the authors investigate the performance of a variety of low-cost edge compute devices for deep-learning-based image processing in space. Their goal was to determine which devices satisfy the latency and power constraints of CubeSat-sized (3U) satellites while achieving reasonably accurate results. Devices were evaluated on a 5-class image classification workload, using an off-the-shelf MobileNetV1 model to process a 4512 x 4512 pixel image. The authors demonstrate that the Coral TPU is capable of real-time imaging, and also explore the effects of model quantization and depth. It is exciting to see the potential of this low-cost processing hardware, and I look forward to seeing on-orbit demonstrations.
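To get a feel for the workload, the 4512 x 4512 benchmark frame has to be tiled into MobileNetV1-sized inputs, which puts a hard budget on per-tile inference latency. The 5-second frame period below is an assumed figure for illustration, not one from the paper.

```python
import math

# The benchmark image is 4512 x 4512 pixels, while MobileNetV1 expects
# 224 x 224 inputs, so on-board inference means tiling each frame.
IMAGE_SIZE = 4512
TILE_SIZE = 224

tiles_per_side = math.ceil(IMAGE_SIZE / TILE_SIZE)  # 21 tiles (last padded)
tiles_per_image = tiles_per_side ** 2               # 441 inferences per frame

# Assumed capture cadence of one frame every 5 s: to keep up, the device
# must complete each tile inference within this average latency.
frame_period_s = 5.0
latency_budget_ms = frame_period_s / tiles_per_image * 1000  # ~11.3 ms
```

Budgets like this make it clear why quantized models on accelerators such as the Coral TPU are attractive, and why slower devices fall behind real time.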
Authors: Robert Bayer, Julian Priest, Pınar Tözün
📖 Paper
Iquaflow
Iquaflow is an image quality framework in Python that provides tools to assess image quality, using the performance of AI models trained on the images as a proxy. It includes ready-to-use image-quality metrics as well as modifiers to alter images (e.g. to vary noise, blur, JPEG compression, quantization, etc). In the paper Object Detection performance variation on compressed satellite image datasets with iquaflow the authors use iquaflow to demonstrate the impact of JPEG compression on oriented object detection. Unsurprisingly, compression negatively affects model performance, but importantly iquaflow provides the tools to systematically quantify the magnitude of this effect. Given that models are often applied to imagery that varies in quality, I think iquaflow is a valuable tool for assessing the robustness of models to real-world conditions.
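The modifier-plus-metric idea can be sketched in a few lines. This is not the actual iquaflow API: the quantization modifier below is a hypothetical stand-in for a compression codec, and PSNR stands in for the framework's richer metrics.

```python
import numpy as np

rng = np.random.default_rng(1)

def quantize(image, levels):
    """Toy 'modifier': degrade an image by snapping its intensities to a
    coarse grid of levels (a stand-in for lossy compression; iquaflow's
    real modifiers apply actual codecs such as JPEG)."""
    step = 1.0 / levels
    return np.clip(np.round(image / step) * step, 0.0, 1.0)

def psnr(reference, degraded):
    """Peak signal-to-noise ratio in dB for images scaled to [0, 1]."""
    mse = np.mean((reference - degraded) ** 2)
    return float("inf") if mse == 0 else -10 * np.log10(mse)

# Sweep the modifier strength and quantify the quality loss.
image = rng.random((64, 64))
for levels in (64, 16, 4):
    print(f"{levels:3d} levels -> PSNR {psnr(image, quantize(image, levels)):5.1f} dB")
```

In iquaflow the same sweep would additionally retrain or re-evaluate a task model (e.g. an object detector) at each degradation level, so quality loss is measured in task performance rather than only pixel fidelity.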
Authors: Pau Gallés, Katalin Takats, Javier Marin
Discord chat 🗣️
To provide a place for readers of this newsletter to connect, I have created a satellite-image-deep-learning Discord server. For those unfamiliar with Discord, it is a chat platform similar to Slack. It is free to join and use, and has some interesting features such as audio & video live streaming. There are currently chat channels for topics including jobseekers, technical-chat and projects; if you would like to join, please head to the Discord page here. I hope to chat soon!
Datasets repository
Discovering the right dataset for a new project can be a hugely time-consuming and frustrating process. Information about datasets is scattered all over the internet, and it can often be hard to get started with a dataset without examples of how it can be loaded and used. I created the satellite-image-deep-learning/datasets repository to address this pain point. The datasets listed cover a wide range of challenges, from object detection to data fusion and change detection. I invite contributions to this repository, and I hope it can become THE place to find remote sensing datasets 🚀
Career chat with Philip Robinson
In this new video, I caught up with Philip Robinson to discuss his career path, and hear how he transitioned from computer security research to working on environmental and satellite imaging challenges at Global Fishing Watch.