By E+T Editorial Team Mon 31 Jul 2023 — updated 8 Oct 2023
Collected at: https://eandt.theiet.org/2023/07/31/machine-learning-algorithm-trained-outer-space
A team of researchers has trained a machine learning (ML) algorithm to detect changes in cloud cover from aerial images on board a satellite.
Satellite data is increasingly useful for scientists, enabling aerial mapping, weather prediction and the monitoring of deforestation. At present, however, most satellites can only collect this data passively and transmit it to Earth for analysis.
But what if machine learning tools could work in outer space?
To overcome these restrictions, a group of researchers led by Vít Růžička, a PhD student at the University of Oxford, took on the challenge of training the first machine learning program in outer space.
The team partnered with the Dashing Through The Stars mission, which had issued an open call for project proposals to be carried out on board the ION SCV004 satellite, launched in January 2022. During the autumn of that year, the team uplinked the code for the program to the satellite, which was already in orbit.
The researchers trained a simple model to detect changes in cloud cover from aerial images directly on board the satellite, in contrast to training on the ground. The model was based on an approach called few-shot learning, which enables a model to learn the most important features to look for when it has only a few samples to train from.
A key advantage is that the data can be compressed into smaller representations, making the model faster and more efficient.
“The model we developed, called RaVAEn, first compresses the large image files into vectors of 128 numbers,” Růžička explained. “During the training phase, the model learns to keep only the informative values in this vector, the ones that relate to the change it is trying to detect – in this case, whether there is a cloud present or not. This results in extremely fast training due to having only a very small classification model to train.”
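For illustration only, a compression step of this kind might look like the following PyTorch sketch. The layer sizes, 32-pixel tile size and four spectral bands here are assumptions rather than details of RaVAEn itself; only the 128-number output vector is taken from Růžička’s description.

```python
import torch
import torch.nn as nn

class TileEncoder(nn.Module):
    """Toy encoder: compresses a multispectral image tile into a
    128-number latent vector. Layer sizes, tile size and band count
    are illustrative assumptions, not RaVAEn's architecture."""
    def __init__(self, in_bands: int = 4, latent_dim: int = 128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_bands, 16, kernel_size=3, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),        # 16x16 -> 8x8
            nn.ReLU(),
            nn.Flatten(),
        )
        self.to_latent = nn.Linear(32 * 8 * 8, latent_dim)

    def forward(self, tile: torch.Tensor) -> torch.Tensor:
        return self.to_latent(self.conv(tile))

# One 32x32-pixel tile with 4 spectral bands -> a vector of 128 numbers
encoder = TileEncoder()
tile = torch.rand(1, 4, 32, 32)
latent = encoder(tile)
print(latent.shape)  # torch.Size([1, 128])
```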
While the first part of the model (which compresses each newly seen image) was trained on the ground, the second part (which decides whether the image contains clouds or not) was trained directly on the satellite.
Normally, developing a machine learning model would require several rounds of training using the power of a cluster of linked computers. In contrast, the team’s tiny model completed the training phase (using more than 1,300 images) in around one and a half seconds.
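That second stage can be very small indeed. The sketch below shows one way such a classification head over the 128-number vectors could be trained; the random stand-in data, label format and training loop are assumptions for illustration, with only the figure of roughly 1,300 training samples taken from the article.

```python
import torch
import torch.nn as nn

# Tiny classification head: 128-number latent vector -> cloud / no cloud.
# In the scheme described above, only this part is trained on board;
# the encoder weights stay frozen.
classifier = nn.Sequential(nn.Linear(128, 1), nn.Sigmoid())
optimiser = torch.optim.Adam(classifier.parameters(), lr=1e-2)
loss_fn = nn.BCELoss()

# Placeholder training set: latent vectors as produced by a frozen encoder,
# paired with cloud labels (random data stands in for real samples here).
latents = torch.rand(1300, 128)
labels = torch.randint(0, 2, (1300, 1)).float()

for _ in range(5):  # a handful of passes suffices for so few parameters
    optimiser.zero_grad()
    loss = loss_fn(classifier(latents), labels)
    loss.backward()
    optimiser.step()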
When the team tested the model’s performance on novel data, it automatically detected whether a cloud was present or not in around a tenth of a second. This involved encoding and analysing a scene covering an area of about 4.8 x 4.8 km (equivalent to almost 450 football pitches).
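Putting the two illustrative sketches above together, inference on unseen data reduces to a single encode-and-classify pass per tile, along these lines (the batch size and timing code are assumptions; the snippet reuses the encoder and classifier objects defined in the earlier sketches):

```python
import time
import torch

# A batch of previously unseen image tiles (illustrative random data).
new_tiles = torch.rand(32, 4, 32, 32)

start = time.perf_counter()
with torch.no_grad():
    cloud_probability = classifier(encoder(new_tiles))
elapsed = time.perf_counter() - start

print(cloud_probability.shape)          # one cloud/no-cloud score per tile
print(f"inference took {elapsed:.3f} s")
```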
The achievement could enable real-time monitoring and decision making for a range of applications, from disaster management to deforestation, the team has said.
According to the researchers, the model could easily be adapted to carry out different tasks, and to use other forms of data.
Růžička added: “Having achieved this demonstration, we now intend to develop more advanced models that can automatically differentiate between changes of interest (for instance flooding, fires and deforestation) and natural changes (such as natural changes in leaf colour across the seasons).
“Another aim is to develop models for more complex data, including images from hyperspectral satellites. This could allow, for instance, the detection of methane leaks, and would have key implications for combating climate change.”
The researchers’ findings have been published on the arXiv preprint server.
Image credit: ESA.