New discoveries #4
MCFNet, Building Footprints Updates, HRPlanesv2 & DAFT
Welcome to the 4th New discoveries newsletter. I am delighted to share that the newsletter now has over 1,375 subscribers! Please note that this edition of the newsletter does not have a sponsor. As a sponsor, you'll receive a shout-out in the opening statement and a dedicated section in the newsletter, reaching a wide audience in the community. If you're interested in gaining visibility for your business or service, sponsoring a future edition of the newsletter is an excellent way to achieve this. For more information on how to sponsor the newsletter, please email me 📧
MCFNet: Multi-Field Context Fusion Network
Due to limited memory (RAM), large geospatial images are typically either downsampled or cropped before being used to train a segmentation model. This reduces detail or context and can limit segmentation accuracy. This paper proposes a multi-field context fusion network (MCFNet) which preserves both local and global information. Since MCFNet only performs segmentation enhancement at local locations in an image, it can improve segmentation accuracy without consuming excessive GPU memory. A comparison to state-of-the-art (SOTA) techniques on two benchmark datasets shows that MCFNet achieves the best balance of segmentation accuracy, memory efficiency, and inference speed. It is encouraging to see the focus on practical solutions that deliver SOTA performance. Unfortunately the authors did not publish any code, but the paper is freely accessible:
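To make the memory constraint concrete, the usual cropping baseline slices a large scene into fixed-size tiles before training. A minimal sketch of computing overlapping tile coordinates (the tile size, overlap, and function name are illustrative choices of mine, not from the paper):

```python
def tile_coords(height, width, tile=512, overlap=64):
    """Compute (top, left, bottom, right) corners of tiles covering
    an image, clamped so every tile lies fully inside the raster.
    Assumes the image is at least `tile` pixels in each dimension."""
    stride = tile - overlap
    coords = set()
    for top in range(0, height, stride):
        for left in range(0, width, stride):
            # clamp edge tiles back inside the image bounds
            t = min(top, max(height - tile, 0))
            l = min(left, max(width - tile, 0))
            coords.add((t, l, t + tile, l + tile))
    return sorted(coords)
```

Each tile is then segmented independently, which is exactly where the loss of global context that MCFNet targets comes from.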
Learning Color Distributions from Bitemporal Remote Sensing Images to Update Existing Building Footprints
Automated methods for generating up-to-date building footprints are in demand. However, variations in the colour of images acquired at different times can limit the performance of change detection models, and the accurate historical labels of unchanged areas often go unused. This paper proposes a three-stage algorithm to update an existing building database to the current state: (1) an image colour translation method (CycleGAN) is used to standardise the images, (2) semantic segmentation predicts the building footprints on the images from the latest period, and (3) a post-processing update strategy strictly retains the existing labels of unchanged regions to produce the updated result. Steps 1 & 2 are shown above. An evaluation of different colour translation and segmentation approaches yields the optimum combination, and metrics show a meaningful improvement over the baseline. The paper concludes by proposing future work to utilise the label information of the target category in the translation process, coupling it more tightly with the segmentation. Overall it is interesting to see the use of a GAN for preprocessing, and a good demonstration of how multiple networks can be chained to produce a useful result.
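Step (3), the label-retention update, boils down to a per-pixel rule: wherever change detection flags a region as unchanged, keep the historical label; elsewhere, take the new segmentation prediction. A minimal sketch using flat Python lists to stand in for raster masks (the function and variable names are my own, not from the paper):

```python
def update_footprints(existing, predicted, unchanged):
    """Merge an existing building mask with a new prediction.

    existing, predicted: 0/1 building masks over the same raster.
    unchanged: 1 where change detection found no change.
    Unchanged pixels strictly keep their historical label; only
    changed pixels adopt the new segmentation output.
    """
    return [
        old if keep else new
        for old, new, keep in zip(existing, predicted, unchanged)
    ]
```

In practice the same logic would run on 2D arrays (e.g. with a vectorised `where`-style select), but the retention rule is identical.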
The HRPlanesv2 dataset contains 2120 VHR Google Earth images containing planes. Each image is annotated with bounding boxes, making this dataset suitable for training object detection models with frameworks such as YOLOv5. Trained model weights are available on GitHub:
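YOLOv5 expects one label row per box in the normalised form `class x_center y_center width height`. If you have annotations in corner-format pixel coordinates, converting them looks roughly like this — a generic sketch, not a claim about the exact format HRPlanesv2 ships in:

```python
def to_yolo(box, img_w, img_h, cls=0):
    """Convert a corner-format pixel box (xmin, ymin, xmax, ymax)
    into a normalised YOLO label row: 'class xc yc w h'."""
    xmin, ymin, xmax, ymax = box
    xc = (xmin + xmax) / 2 / img_w   # box centre, normalised to [0, 1]
    yc = (ymin + ymax) / 2 / img_h
    w = (xmax - xmin) / img_w        # box size, normalised to [0, 1]
    h = (ymax - ymin) / img_h
    return f"{cls} {xc:.6f} {yc:.6f} {w:.6f} {h:.6f}"
```

One such row per plane, written to a `.txt` file alongside each image, is all YOLOv5's data loader needs.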
Daft is under active development, and I think it could enable some interesting workflows during annotation, pre/post-processing, and inferencing.
The Data Science Weekly newsletter is `A free weekly newsletter of Data Science articles, news, tools, libraries, and cool projects`. I have been a subscriber for a while and it is an excellent read; do check it out:
Jobs & Events
Do you have a job or event you would like to promote here? Let me know!
Thanks for reading satellite-image-deep-learning! Subscribe for free to receive new posts and support my work.