
New AI Tool Generates Realistic Satellite Images of Future Flooding

Visualizing the potential impacts of a hurricane on people’s homes before it strikes can help residents prepare and decide whether to evacuate.

MIT researchers have developed a method that generates satellite imagery from the future to depict how a region would look after a potential flooding event. The method combines a generative artificial intelligence model with a physics-based flood model to produce realistic, birds-eye-view images of a region, showing where flooding is likely to occur given the strength of an oncoming storm.

As a test case, the team applied the method to Houston and generated satellite images depicting what certain locations around the city would look like after a storm comparable to Hurricane Harvey, which hit the region in 2017. The team compared these generated images with actual satellite images taken of the same regions after Harvey hit. They also made the comparison with AI-generated images that did not incorporate a physics-based flood model.

The team’s physics-reinforced method generated satellite images of future flooding that were more realistic and accurate. The AI-only method, in contrast, generated images of flooding in places where flooding is not physically possible.

The team’s method is a proof-of-concept, meant to demonstrate a case in which generative AI models can produce realistic, trustworthy content when paired with a physics-based model. In order to apply the method to other regions to depict flooding from future storms, it will need to be trained on many more satellite images to learn how flooding would look in those areas.

“The idea is: One day, we could use this before a hurricane, where it provides an additional visualization layer for the public,” says Björn Lütjens, a postdoc in MIT’s Department of Earth, Atmospheric and Planetary Sciences, who led the research while he was a doctoral student in MIT’s Department of Aeronautics and Astronautics (AeroAstro). “One of the biggest challenges is encouraging people to evacuate when they are at risk. Maybe this could be another visualization to help increase that readiness.”

To illustrate the potential of the new method, which they have dubbed the “Earth Intelligence Engine,” the team has made it available as an online resource for others to try.

The researchers report their results today in the journal IEEE Transactions on Geoscience and Remote Sensing. The study’s MIT co-authors include Brandon Leshchinskiy; Aruna Sankaranarayanan; and Dava Newman, professor of AeroAstro and director of the MIT Media Lab; along with collaborators from multiple institutions.

Generative adversarial images

The new study is an extension of the team’s efforts to apply generative AI tools to visualize future climate scenarios.

“Providing a hyper-local perspective of climate seems to be the most effective way to communicate our scientific results,” says Newman, the study’s senior author. “People relate to their own zip code, their local environment where their family and friends live. Providing local climate simulations becomes intuitive, personal, and relatable.”

For this study, the authors use a conditional generative adversarial network, or GAN, a type of machine-learning method that can generate realistic images using two competing, or “adversarial,” neural networks. The first “generator” network is trained on pairs of real data, such as satellite images before and after a hurricane. The second “discriminator” network is then trained to distinguish between the real satellite images and those synthesized by the first network.

Each network automatically improves its performance based on feedback from the other network. The idea, then, is that such an adversarial push and pull should ultimately produce synthetic images that are indistinguishable from the real thing. Nevertheless, GANs can still produce “hallucinations,” or factually incorrect features in an otherwise realistic image that shouldn’t be there.
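The adversarial push and pull described above comes down to two opposing loss terms. The toy sketch below computes both for one batch, with a fixed logistic scorer standing in for a trained discriminator and random vectors standing in for flattened images; all names and dimensions are illustrative, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(42)

def discriminator(x, w):
    """Score in (0, 1): estimated probability that the input is real."""
    return 1.0 / (1.0 + np.exp(-(x @ w)))

dim = 16
w = rng.normal(size=dim)                 # discriminator weights (assumed trained)
real = rng.normal(1.0, 0.5, (8, dim))    # batch of real post-storm images (toy)
fake = rng.normal(0.0, 0.5, (8, dim))    # batch of generator outputs (toy)

d_real = discriminator(real, w)
d_fake = discriminator(fake, w)

# The discriminator wants d_real -> 1 and d_fake -> 0 ...
d_loss = -np.mean(np.log(d_real + 1e-9) + np.log(1.0 - d_fake + 1e-9))
# ... while the generator wants the discriminator fooled (d_fake -> 1).
g_loss = -np.mean(np.log(d_fake + 1e-9))
```

In training, these two losses are minimized in alternation, each network's gradient step providing the "feedback" the other improves against.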

“Hallucinations can mislead viewers,” says Lütjens, who began to wonder whether such hallucinations could be avoided, such that generative AI tools could help inform people, particularly in risk-sensitive scenarios. “We were thinking: How can we use these generative AI models in a climate-impact setting, where having trusted data sources is so important?”

Flood hallucinations

In their new work, the researchers considered a risk-sensitive scenario in which generative AI is tasked with creating satellite images of future flooding that could be trustworthy enough to inform decisions about how to prepare and potentially evacuate people out of harm’s way.

Typically, policymakers can get an idea of where flooding might occur based on visualizations in the form of color-coded maps. These maps are the final product of a pipeline of physical models that usually begins with a hurricane track model, which then feeds into a wind model that simulates the pattern and strength of winds over a local region. This is combined with a flood or storm surge model that predicts how wind might push any nearby body of water onto land. A hydraulic model then maps out where flooding will occur based on the local flood infrastructure, and generates a visual, color-coded map of flood elevations over a particular region.
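The model chain above can be sketched as a pipeline of functions, each consuming the previous stage's output. The stand-ins below are hypothetical toy formulas chosen only to show the data flow (track → wind → surge → per-cell flood depth), not any real hydrodynamic model.

```python
def wind_field(track_intensity):
    """Toy wind model: peak wind speed (m/s) from a storm-intensity index."""
    return 20.0 + 8.0 * track_intensity

def storm_surge(wind_speed):
    """Toy surge model: surge height (m) grows with the square of wind speed."""
    return 0.002 * wind_speed ** 2

def flood_depth(surge_m, ground_elevation_m):
    """Toy hydraulic step: water depth at one map cell, clipped at zero."""
    return max(surge_m - ground_elevation_m, 0.0)

# Chain the stages: a category-like intensity index in, a flood map out.
surge = storm_surge(wind_field(track_intensity=4))
elevations_m = [0.5, 2.0, 6.0, 10.0]          # sample ground elevations
flood_map = [flood_depth(surge, e) for e in elevations_m]
```

The resulting per-cell depths are what a real pipeline would color-code into the flood maps policymakers see.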

“The question is: Can visualizations of satellite imagery add another level to this, that is a bit more tangible and emotionally engaging than a color-coded map of reds, yellows, and blues, while still being trustworthy?” Lütjens says.

The team first tested how generative AI alone would produce satellite images of future flooding. They trained a GAN on actual satellite images taken over Houston before and after Hurricane Harvey. When they tasked the generator to produce new flood images of the same regions, they found that the images resembled typical satellite imagery, but a closer look revealed hallucinations in some images, in the form of floods where flooding should not be possible (for instance, in locations at higher elevation).

To reduce hallucinations and increase the trustworthiness of the AI-generated images, the team paired the GAN with a physics-based flood model that incorporates real, physical parameters and phenomena, such as an incoming hurricane’s trajectory, storm surge, and flood patterns. With this physics-reinforced method, the team generated satellite images around Houston that depict the same flood extent, pixel by pixel, as predicted by the flood model.
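One way to picture the constraint this imposes: water may appear in the generated image only where the flood model's extent map allows it. The sketch below is a simplified illustration of that idea on a tiny binary grid; the function name and grids are made up, and the real system conditions the GAN during generation rather than masking afterwards.

```python
import numpy as np

def suppress_hallucinations(proposed_water, flood_extent):
    """Keep only water pixels that the physics model says are possible."""
    return proposed_water & flood_extent

# Flood-model extent: True where the physics model predicts water.
flood_extent = np.array([[1, 1, 0],
                         [1, 0, 0],
                         [0, 0, 0]], dtype=bool)

rng = np.random.default_rng(7)
proposed = rng.random((3, 3)) > 0.4       # GAN's proposed water pixels (toy)
constrained = suppress_hallucinations(proposed, flood_extent)
```

After the constraint, no water pixel can survive outside the physically possible region, which is exactly the pixel-by-pixel agreement with the flood model that the article describes.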