Drone Data in Flood Risk Assessment

Drones are transforming flood risk assessment by offering precise, high-resolution data that traditional satellite methods often miss. With resolutions as fine as 1 foot, drones capture critical details like boundary walls, drainage channels, and small bridges, enabling better flood modeling. Compared to satellites, drones reduce errors in water flow predictions by up to 62.5% and cost significantly less than ground-based surveys. They’re also faster, completing surveys in roughly half the time of ground-based methods.

Key points:

  • Types of Data: Drones use photogrammetry, LiDAR, and multispectral imagery for mapping. Each suits different terrains - LiDAR for dense vegetation, photogrammetry for open areas, and multispectral for water and soil analysis.
  • Applications: Data integrates into hydraulic models (e.g., HEC-RAS) to simulate floods, guide infrastructure planning, and improve disaster preparedness.
  • Cost & Accuracy: LiDAR systems offer unmatched detail but are expensive, while photogrammetry is more affordable. Multispectral imagery enhances land cover analysis.
  • Real-World Use: Examples include mapping flood-prone areas in Ghana, Greece, and Bangladesh, where drone data outperformed satellite models in accuracy.

Drones are reshaping flood risk management, offering detailed, cost-effective solutions for protecting lives and infrastructure.

SECOORA Webinar: Rapid Floodwater Mapping and Depth Analysis Using Optical UAV and SAR Technology

Types of Drone Data Used in Flood Risk Mapping

Comparison of Drone Data Types for Flood Risk Mapping

Drones have become essential for producing high-resolution flood maps, offering three key types of data - each suited for specific scenarios. By understanding their strengths and limitations, you can decide whether to use one type or combine them for more accurate flood predictions. Together, these data types provide a well-rounded approach to tackling the challenges of flood risk mapping.

Digital Elevation Models and Digital Surface Models from Photogrammetry

Photogrammetry, specifically Structure-from-Motion (SfM), creates Digital Surface Models (DSM) and Digital Elevation Models (DEM) by combining overlapping RGB images. It’s a relatively affordable and quick method, especially effective in open areas with minimal vegetation. However, it falls short in densely vegetated regions because the images capture the tops of trees or tall grasses instead of the ground, leading to "surface smoothing."

This method works best in urban areas, agricultural fields, or sparsely vegetated landscapes. It’s also useful for mapping shallow, clear riverbeds where the bottom is visible. In these scenarios, photogrammetry provides the level of detail needed for precise flood risk analysis in developed or open environments.

LiDAR Point Clouds

LiDAR, mounted on a drone, uses laser pulses to penetrate vegetation and reach the ground, making it ideal for creating Digital Terrain Models (DTM) in forested areas. For example, in August 2019, researchers used a LiDARSWISS system on a DJI Matrice 600 Pro to map flood-prone regions. The system achieved an impressive point density of 471 points per square meter and a vertical accuracy of under 0.1 ft. This level of detail uncovered features like boundary walls and bridges that satellite models with 33-ft resolution completely missed.

While LiDAR offers unmatched precision, it’s a more expensive option. A high-precision RTK GNSS Base Station alone costs about $8,000, and processing the data requires advanced software and expertise. Despite the cost, LiDAR is invaluable for mapping microtopography, which is crucial for understanding water flow patterns in complex terrains.

Multispectral and Thermal Imagery

Multispectral sensors go beyond visible light, capturing wavelengths like Near-Infrared (NIR) to analyze water, vegetation, and soil. They calculate indices like NDWI (Normalized Difference Water Index) to identify water bodies, even in murky conditions. For instance, researchers used a DJI Phantom 4 Multispectral from February 26–28, 2024, to survey the Bokha Stream in Icheon, South Korea. By applying an NDWI threshold of 0.19, they classified water areas with 97.3% accuracy and a Type 1 error rate of just 1.2%.
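
The NDWI thresholding step described above can be sketched in a few lines of Python. This is a minimal illustration, not the study's actual pipeline: the toy band values and array shapes are invented, and real workflows read calibrated reflectance bands from the orthomosaic.

```python
import numpy as np

def ndwi(green, nir):
    """Normalized Difference Water Index: (Green - NIR) / (Green + NIR)."""
    g = green.astype(float)
    n = nir.astype(float)
    return (g - n) / (g + n + 1e-9)  # tiny epsilon avoids division by zero

def water_mask(green, nir, threshold=0.19):
    """Boolean mask of water pixels using the study's 0.19 cutoff."""
    return ndwi(green, nir) > threshold

# Toy 2x2 bands: water reflects green strongly and absorbs NIR
green = np.array([[120, 30], [115, 25]])
nir   = np.array([[ 40, 90], [ 35, 95]])
mask = water_mask(green, nir)
```

Pixels where green reflectance dominates NIR exceed the threshold and are flagged as water; dry land, where NIR dominates, falls below it.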

Multispectral imagery also evaluates soil moisture and vegetation health, helping predict how much rainwater will infiltrate the ground versus run off into waterways. While powerful on its own, it works best when combined with other data types, such as LiDAR or photogrammetry, to provide a comprehensive view of land cover and elevation.

| Feature | Photogrammetry (SfM) | UAV-LiDAR | Multispectral Imagery |
| --- | --- | --- | --- |
| Primary use | Surface models; clear riverbeds | Penetrating vegetation; microtopography | Land cover; water/soil moisture mapping |
| Vegetation | Limited (captures canopy top) | High penetration to bare earth | Analyzes vegetation health/type |
| Water surface | Can see through shallow/clear water | Reflects off surface (cannot see bottom) | NDWI for water discrimination |
| Cost | Lower (standard RGB cameras) | Higher (expensive sensors) | Moderate |
| Accuracy | High, but prone to surface smoothing | Very high vertical accuracy (<1.2 in) | Spectral accuracy for classification |

Combining these technologies offers the best results. Use LiDAR for vegetation-heavy zones, photogrammetry for submerged riverbeds, and multispectral imagery for land cover analysis. Together, they create detailed terrain models that include both above-ground and underwater features. This multi-sensor approach enhances flood risk modeling and supports better disaster preparedness.

Methods and Workflows in Recent Studies

Recent studies highlight a systematic approach to collecting and processing drone data for flood mapping. The process generally unfolds in three key phases: planning and data collection, processing into usable models, and integrating those models into simulation software. Each step demands precision to ensure flood predictions are accurate enough for practical applications.

Survey Design and Data Acquisition

The first phase involves defining the purpose of the mapping - whether for creating hazard maps for emergency responses or evaluating long-term infrastructure risks. After pinpointing flood-prone areas, teams must confirm airspace regulations and schedule flights during low-water periods to expose riverbed features, minimizing uncertainties.

Drone selection is crucial. Multi-rotor drones work well for detailed mapping of small areas, while fixed-wing drones are better for covering larger regions. Flight paths are planned to ensure 60–80% image overlap, which is essential for accurate 3D modeling.

For example, in January 2016, researchers mapped a 1.6 km stretch of Chile's Ñuble River using a fixed-wing UAV during low-water conditions. They georeferenced the data with four ground control points (GCPs) measured via RTK-GPS. When a major flood occurred in June 2023, their model's predictions were validated against actual high-water marks, showing a depth error of just 10.6%.

In another case, a DJI Phantom 4 RTK was used in Războieni, Romania, to map 1.28 square kilometers of the Casimcea River. The team captured 1,076 calibrated images with an average ground sample distance of 4.47 cm and used five GCPs collected with a Leica GS12 ROVER to improve accuracy. RTK-enabled drones, like the Phantom 4 RTK, can reduce the need for manual GCP placement while maintaining centimeter-level precision.

Data Processing and Model Generation

Once data is collected, the next step is transforming raw inputs into actionable models. Photogrammetry uses overlapping RGB images processed through Structure-from-Motion (SfM) algorithms to create point clouds, 3D meshes, and orthomosaics. LiDAR data, on the other hand, is georeferenced using onboard GNSS and inertial navigation systems, making it particularly effective for capturing bare-earth terrain beneath vegetation.

Point clouds are then classified into "ground" and "non-ground" features like buildings or vegetation. Flat terrains often use morphological-based algorithms, while hilly areas benefit from Triangulated Irregular Network (TIN) refinement. Increasingly, convolutional neural networks (CNNs) are being adopted to automatically categorize points into ground, vegetation, and infrastructure. Ground points are then interpolated - often with Inverse Distance Weighting - to create Digital Terrain Models (DTMs), while Digital Surface Models (DSMs) account for all surface features.
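
The interpolation step can be illustrated with a naive Inverse Distance Weighting implementation. Production tools use spatial indexing and search radii; this brute-force version, with invented sample points, is for illustration only:

```python
import numpy as np

def idw_grid(xs, ys, zs, grid_x, grid_y, power=2.0):
    """Interpolate classified ground points onto a DTM grid with
    Inverse Distance Weighting (weights fall off as 1 / distance^power)."""
    xs, ys, zs = np.asarray(xs), np.asarray(ys), np.asarray(zs)
    dtm = np.empty((len(grid_y), len(grid_x)))
    for i, gy in enumerate(grid_y):
        for j, gx in enumerate(grid_x):
            d = np.hypot(xs - gx, ys - gy)
            if d.min() < 1e-12:              # grid node coincides with a sample
                dtm[i, j] = zs[d.argmin()]
            else:
                w = 1.0 / d ** power
                dtm[i, j] = np.sum(w * zs) / np.sum(w)
    return dtm

# Four ground points at the corners of a 10 m square
xs = [0.0, 10.0, 0.0, 10.0]
ys = [0.0, 0.0, 10.0, 10.0]
zs = [100.0, 101.0, 100.0, 101.0]
dtm = idw_grid(xs, ys, zs, grid_x=[0.0, 5.0, 10.0], grid_y=[5.0])
```

The grid node equidistant from all four samples lands on their mean elevation, while nodes closer to one edge are pulled toward the nearer points.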

Following catastrophic floods in Greece's Thessalian Plain in February 2024, researchers deployed a DJI Matrice 300 equipped with a Zenmuse L1 LiDAR sensor. Over 15 flights, they mapped 36 km of river embankments with a point density exceeding 350 points per square meter. The data was processed in ArcGIS Pro, using CNN-based classification to produce DTMs and DSMs at 20 cm resolution.

Common tools for these workflows include Pix4D and Agisoft Metashape for photogrammetry, LAStools and ArcGIS Pro for point cloud processing, and QGIS for GIS visualization. These refined models serve as the foundation for simulating water dynamics in hydrological analyses.

Integration with Hydraulic and Hydrologic Models

Drone-derived DTMs provide the geometric base for hydraulic modeling software like HEC-RAS, SWAT, and TUFLOW. These tools use high-resolution terrain data to simulate water surface elevation, flow velocity, and areas of inundation. Studies show that relying on a 10 m resolution DTM can overestimate water flow by 15% in sloped terrains and up to 62.5% in flat areas compared to 0.3 m drone-derived DTMs.

"The hydraulic model derived from remote sensing seems to be an effective alternative for the construction of hydraulic models for flood studies."
– Robert Clasing et al., Department of Civil Engineering, Universidad Católica de la Santísima Concepción

Due to sensor limitations underwater, researchers often apply a "flat bed" assumption during high-water surveys. Drone orthomosaics also assist in estimating Manning's roughness coefficients by identifying land cover types and surface irregularities, which are critical for calculating flow velocity accurately.
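
Manning's equation itself is simple enough to sketch. The roughness values below are typical textbook estimates for illustration, not figures from the studies cited here:

```python
def manning_velocity(n, hydraulic_radius_m, slope):
    """Manning's equation (SI units): V = (1/n) * R^(2/3) * S^(1/2)."""
    return (1.0 / n) * hydraulic_radius_m ** (2.0 / 3.0) * slope ** 0.5

# Same channel geometry, two land covers identified from an orthomosaic:
# grassy floodplain (n ~ 0.035) vs. smooth paved channel (n ~ 0.013)
v_grass = manning_velocity(0.035, 1.2, 0.002)
v_paved = manning_velocity(0.013, 1.2, 0.002)
```

Because velocity scales with 1/n, misclassifying land cover (and hence roughness) directly distorts the simulated flow speed, which is why orthomosaic-based roughness mapping matters.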

For instance, researchers in Dhanera City, India, processed 9,222 images from a Phantom 4 Pro RTK using Pix4D software. This yielded a high-resolution DEM with a ground resolution of 3.6 x 3.6 cm. The data was then incorporated into hydrodynamic models to simulate both pluvial and fluvial flood scenarios under unsteady flow conditions.

Validating and Improving Flood Models with Drone Data

Accuracy and Uncertainty Assessment

To ensure drone terrain models are reliable, they must be validated against ground truth data. This is typically done by comparing drone-derived Digital Elevation Models (DEMs) with Ground Control Points (GCPs) collected using RTK GPS or GNSS receivers. Metrics like Root Mean Squared Error (RMSE) and Mean Absolute Error (MAE) help measure how well the drone data aligns with reality.
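
Both metrics are straightforward to compute once DEM elevations have been sampled at the GCP coordinates. The elevation values below are invented for illustration:

```python
import numpy as np

def rmse(predicted, observed):
    """Root Mean Squared Error between DEM and surveyed GCP elevations."""
    e = np.asarray(predicted, float) - np.asarray(observed, float)
    return float(np.sqrt(np.mean(e ** 2)))

def mae(predicted, observed):
    """Mean Absolute Error between DEM and surveyed GCP elevations."""
    e = np.asarray(predicted, float) - np.asarray(observed, float)
    return float(np.mean(np.abs(e)))

# DEM elevations sampled at four GCP locations vs. RTK-surveyed values (m)
dem = [12.05, 11.98, 12.10, 12.02]
gcp = [12.00, 12.00, 12.00, 12.00]
```

RMSE penalizes large outliers more heavily than MAE, so reporting both (as the Jamuna study does) gives a fuller picture of error behavior.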

For example, in Bangladesh's Jamuna floodplain, researchers used a 2D HEC-RAS model built from a Mavic 2 Pro drone survey. They validated the model’s 17.5 cm resolution Digital Terrain Model (DTM) with 92 GCPs and three water level gauges over a 2.1 square kilometer area in Ranigram village. During the 2018 monsoon flood, the model achieved an RMSE of 0.068 m and an MAE of 0.043 m. Additional validation came from RTK-GPS surveys of flood traces on vegetation and high-water marks.

Another case involved Chile’s Ñuble River, which experienced an extreme flood on June 24, 2023, peaking at 2,578.2 cubic meters per second. Using a hydraulic model based on a 2016 fixed-wing UAV survey, researchers reported a mean absolute percentage error of 10.6% for flood depths during this 30-year return period event. These examples demonstrate how drone data can provide highly accurate flood models when validated with robust methods.

Comparing Drone Data to Other Sources

Drone data often outperforms traditional sources in capturing fine-scale terrain details. In August 2019, researchers surveyed three flood-prone communities in Accra, Ghana - Santa Maria, Legon Hall, and Okponglo - using a LiDARSWISS system mounted on a DJI Matrice 600 Pro. The resulting 0.3 m resolution DTMs highlighted critical features, such as archways and boundary walls, that were invisible in the 10 m Airbus satellite DTM. Satellite datasets often fail to capture these microtopographic details, leading to inaccuracies, especially in flat or sloped areas.

| DEM Source | Resolution | Vertical Accuracy (RMSE) | Key Limitation in Flood Modeling |
| --- | --- | --- | --- |
| UAV-LiDAR | 0.3 m | <0.03 m | Limited coverage area compared to satellites |
| UAV-Photogrammetry | 0.17 m | 0.068 m | Difficulty penetrating dense vegetation |
| Airbus Satellite DTM | 10 m | 5–10 m | Misses microtopography (walls, small drains) |
| SRTM / ASTER | 30 m | 12.62–17.76 m | Coarse resolution; includes vegetation/buildings |

In Sirajganj, Bangladesh, a study comparing satellite DEMs like SRTM30 and ALOS PALSAR with drone models revealed significant differences. While satellite DEMs failed to show floodwaters reaching elevated housing areas, the 17.5 cm drone model accurately captured both the timing and extent of flooding between 2018 and 2020. These findings emphasize how drone data’s higher resolution can address the shortcomings of satellite datasets.

Combining Drone Data with Other Data Sources

Drone data becomes even more powerful when combined with other sources, enhancing flood models for better disaster preparedness. In Brazil’s Upper Paranapanema River Basin, researchers integrated drone surveys from a Mavic 3E RTK with citizen science. Residents contributed by marking flood water levels on utility poles and participating in interviews. This approach reconstructed historical flood extents more effectively than satellite datasets like SRTM, aligning closely with community accounts of past events.

"UAV-based DEMs represent an effective and affordable alternative to costly LiDAR or extensive GNSS surveys, reinforcing their potential for flood risk mapping in isolated and data-limited regions."
– Hélio Rodrigues Bassanelli et al., Frontiers in Water

In addition, sonar-based bathymetric measurements fill gaps where drone sensors cannot capture underwater terrain, while hydrometric monitoring systems provide real-time water level data to fine-tune hydraulic models. For instance, in the Bangladesh study, three water level gauges installed across the floodplain enabled calibration that achieved a Nash-Sutcliffe efficiency of 0.91 and an R² of 0.98. Combining high-resolution drone data with sonar and hydrometric inputs creates detailed and reliable models of complex flood dynamics.
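
The Nash-Sutcliffe efficiency used in that calibration compares a simulated series against gauge observations. A minimal sketch with invented example values:

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations.
    1.0 is a perfect fit; values <= 0 mean the model is no better than
    simply predicting the observed mean."""
    obs = np.asarray(observed, float)
    sim = np.asarray(simulated, float)
    return float(1.0 - np.sum((obs - sim) ** 2)
                 / np.sum((obs - obs.mean()) ** 2))

# Gauge water levels (m) vs. simulated levels at the same timestamps
obs = [1.0, 1.5, 2.2, 2.8, 2.1]
sim = [1.1, 1.4, 2.3, 2.7, 2.0]
nse = nash_sutcliffe(obs, sim)
```

An NSE of 0.91, as in the Bangladesh study, indicates the model explains most of the observed variability in water levels.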

Benefits for Pre-Disaster Flood Planning

Risk Mapping Applications

Drone technology is transforming how communities map flood risks by offering a level of detail traditional methods often miss. For example, in August 2019, a UAV-LiDAR survey conducted in Accra, Ghana, created 0.3 m Digital Terrain Models. These models captured fine microtopographic details - like archways, boundary walls, and drainage channels - that helped reduce flow overestimations by up to 62.5% when compared to 10 m satellite models.

In another instance, a study in Brazil (February 2026) combined high-resolution drone surveys with resident interviews and physical flood markers. This approach reconstructed historical flood extents with significantly improved accuracy. By pairing detailed terrain models with local knowledge, communities can better understand past floods and prepare for future ones.

Infrastructure and Industrial Site Planning

Drone data isn't just for mapping risks - it’s also revolutionizing how we plan infrastructure in flood-prone areas. With enhanced flood models, drone-based 3D models allow for precise and resilient designs. For instance, a February 2024 study used a DJI Matrice 300 equipped with Zenmuse L1 LiDAR to create detailed 3D models of 36 km of embankments along Greece's Pineios River. These models were then used in virtual reality simulations of six different flood scenarios, helping planners evaluate embankment restoration options and design effective preventive measures.

AI tools are also stepping in to quickly identify hazards, such as debris blocking river passages, which ensures timely safety assessments for bridges and weirs. Platforms like Anvil Labs centralize drone data - LiDAR point clouds, thermal imagery, orthomosaics - into one system. This allows industrial site managers to annotate vulnerabilities, measure distances to at-risk areas, and share 3D models with stakeholders for collaborative decision-making.

Future Directions and Challenges

Despite its benefits, using drones for pre-disaster flood planning isn’t without challenges. Regulations can restrict where drones can operate, and processing the massive datasets they produce often requires specialized software and expertise. Additionally, weather conditions like strong winds, heavy rain, or water reflections can interfere with sensor accuracy.

However, new technologies are addressing some of these hurdles. AI and machine learning are beginning to automate the identification of flood-prone areas, while digital twins and virtual reality platforms allow stakeholders to explore immersive flood scenarios. The integration of wireless IoT sensors on drones is enabling real-time flood monitoring, and drone swarms are being tested to map large areas more efficiently. Ermioni Eirini Papadopoulou from Cyprus University of Technology highlights one key advantage of drone technology:

"The ability of LiDAR to penetrate vegetation and provide accurate ground measurements makes it especially useful in densely vegetated floodplains where traditional surveying methods may be less effective."

These advancements, coupled with better data management tools, are making drone-based flood planning more practical and impactful for communities worldwide.

Conclusion

Drone technology is reshaping how we assess flood risks and prepare for disasters. By offering ultra-high-resolution models that satellites simply can't match, drones capture critical microtopographic details that significantly improve flood predictions. Research highlights that coarse-resolution models often overestimate water flows, especially in flat areas, leading to inaccuracies in flood assessments. With drones, these biases are corrected, providing communities with the data they need to safeguard lives and infrastructure effectively.

Combining drone data like LiDAR point clouds, thermal imagery, and orthomosaics with hydraulic models such as HEC-RAS enables precise flood scenario simulations. In regions lacking traditional data sources, integrating drone surveys with community-driven information - like historical flood marks and interviews with residents - offers an affordable and reliable alternative to costly surveys. This approach ensures that even municipalities with limited resources can access advanced flood mapping tools, paving the way for more informed decision-making.

Platforms like Anvil Labs are taking this a step further by centralizing drone data into a collaborative workspace. Supporting diverse datasets such as LiDAR, thermal imagery, and orthomosaics, these platforms allow emergency planners and site managers to annotate vulnerabilities, measure distances to critical infrastructure, and share interactive 3D models with stakeholders. By uniting these datasets in one place, decision-making becomes more efficient, ensuring that the right information reaches the right people at the right time.

Emerging technologies like AI analysis, real-time IoT monitoring, and drone swarms are pushing flood risk management toward a proactive future. With the right tools and workflows, drone data is playing a crucial role in reducing flood vulnerability and protecting communities worldwide.

FAQs

When should I choose LiDAR vs photogrammetry vs multispectral?

LiDAR is ideal when you need precise measurements, the ability to see through vegetation, or to work in low-light situations like dense forests or urban environments. Its accuracy and ability to penetrate foliage make it a go-to option for challenging terrains.

Photogrammetry, on the other hand, shines when capturing detailed textures and vibrant colors in well-lit, open areas. It’s a cost-effective way to achieve visually realistic mapping, especially for projects where visual detail is key.

For specialized tasks like monitoring vegetation health or assessing water quality, multispectral imaging is the tool of choice. It provides critical data by analyzing specific wavelengths of light, offering insights that are otherwise invisible to the naked eye.

When assessing flood risks, combining these technologies can give you a comprehensive picture. Together, they deliver accurate terrain data, detailed visual maps, and essential environmental information, ensuring a thorough and reliable analysis.

How accurate are drone-based flood models in real floods?

Drone-based flood models excel in precision, particularly when paired with cutting-edge tools like LiDAR and high-resolution digital terrain models (DTMs). These technologies allow drones to capture intricate details, such as drainage channels and boundary walls, with an impressive vertical accuracy of ±3 cm - far surpassing the capabilities of satellite methods. Research highlights that these models can lower flow prediction errors by up to 62.5% and consistently align with gauge data. This makes them an essential tool for flood risk assessment and effective disaster planning.

What do I need to turn drone data into HEC-RAS flood maps?

To produce HEC-RAS flood maps using drone data, start by creating a detailed digital terrain model (DTM) in a raster format, such as GeoTIFF. This model should precisely reflect ground elevations and include essential features like channels, floodplains, and levees.

Once the georeferenced DTM is ready, import it into HEC-RAS, connect it to the model geometry, and double-check its accuracy. This terrain data is crucial for running hydraulic simulations and generating floodplain maps.
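
HEC-RAS itself is a desktop application, but simple pre-import checks on the DTM raster can save a failed import. A rough sketch, assuming the DTM has already been loaded into a NumPy array and uses an illustrative nodata convention:

```python
import numpy as np

NODATA = -9999.0  # a common nodata value; confirm against your export settings

def dtm_sanity_check(dtm):
    """Basic pre-import checks: count nodata gaps and report the
    valid elevation range so implausible values stand out."""
    arr = np.asarray(dtm, float)
    valid = arr != NODATA
    return {
        "nodata_cells": int((~valid).sum()),
        "min_elev": float(arr[valid].min()),
        "max_elev": float(arr[valid].max()),
    }

# Toy 2x3 DTM tile (meters) with one nodata gap
tile = np.array([[12.1, 12.3, NODATA],
                 [12.0, 12.2, 12.4]])
report = dtm_sanity_check(tile)
```

Large nodata regions or elevations far outside the expected range usually point to a failed interpolation or a unit mismatch, both easier to fix before the terrain is tied to model geometry.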
