Expert Insight: The Benefits of Multi-Modality Data Fusion

Tuesday 23rd April 2024

As artificial intelligence (AI) continues to evolve, the ability to process and synthesise information from multiple modalities is increasingly crucial in enabling sophisticated real-world applications.

Data fusion, the process of integrating multiple data sources to produce more consistent, accurate and useful information, can help address a variety of subsea challenges, leading to safer, more efficient and environmentally conscious operations.

In this thought leadership article, Transparent Ocean Lead, Professor Jinchang Ren, explains multi-modality data fusion systems, the crucial role these techniques play in supporting the energy transition, and how this evolving technology enables informed decision-making in complex subsea operations and in addressing global environmental challenges.

How would you define multi-modality?

In computing and AI, multi-modality refers to the capability of systems to handle and integrate information from various data types or input modalities, such as text, images, video and audio, or even the same media type captured by different sensors (e.g. images from camera, sonar, LiDAR, radar and hyperspectral sensing). Multimodal systems aim to achieve a more comprehensive understanding and interpretation of the environment, or to enhance interactions with users, by utilising the diverse characteristics of each data modality.

These systems offer enhanced perception by combining multimodal inputs, and they are robust because they can compensate for the limitations or failures of one modality with another, improving overall accuracy and reliability. Additionally, multimodal systems can better contextualise information, which is crucial in condition monitoring, predictive maintenance and smart asset management applications. This integration allows for more comprehensive decision-making, as seen in autonomous vehicles that use a mix of sensor, visual and audio data to navigate safely.

However, creating these systems presents challenges: data fusion (combining data from disparate sources effectively), modelling complexity (developing algorithms that manage the heterogeneity of multimodal data efficiently) and alignment (ensuring that data across different modalities are synchronised and correspond to the same real-world events or objects).
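As a minimal illustration of the robustness point above, the sketch below fuses per-modality classifier scores with confidence weights, so a degraded sensor simply contributes less to the joint decision. All names, scores and weights here are hypothetical placeholders, not a description of any particular system:

```python
import numpy as np

def fuse_predictions(scores, confidences):
    """Confidence-weighted late fusion of per-modality class scores.

    scores:      list of (n_classes,) arrays, one per modality
    confidences: list of scalars in [0, 1]; a failed or degraded
                 sensor simply contributes a low weight
    """
    weights = np.asarray(confidences, dtype=float)
    weights = weights / weights.sum()          # normalise the weights
    stacked = np.stack(scores)                 # (n_modalities, n_classes)
    return (weights[:, None] * stacked).sum(axis=0)

# Example: the camera is degraded (e.g. turbid water), so the
# sonar-based scores dominate the fused decision.
camera_scores = np.array([0.4, 0.6])   # unreliable in low visibility
sonar_scores  = np.array([0.9, 0.1])
fused = fuse_predictions([camera_scores, sonar_scores],
                         confidences=[0.2, 0.8])
print(fused)  # sonar-weighted result
```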

What are the current subsea challenges where data fusion can help?

Here are some specific areas in the subsea environment where data fusion can play a crucial role:

  • Environmental Monitoring: Subsea environments are complex and can vary greatly over small areas. Data fusion can integrate information from different sensors (chemical, biological, physical) and imaging technologies across various platforms, such as satellites, UAVs, ROVs and UUVs, to provide a comprehensive understanding of environmental conditions. This is crucial for monitoring pollution and biodiversity, studying marine ecosystems and observing the impacts of climate change on underwater environments.
  • Subsea Pipeline and Infrastructure Inspection: Maintaining subsea infrastructure such as pipelines, cables and oil platforms involves regular inspections to prevent leaks and other damage that could be environmentally disastrous. Data fusion can help by combining visual data from cameras, acoustic data from sonar and readings from structural sensors to detect faults, corrosion or other potential problems more accurately and promptly.
  • Resource Exploration: In the exploration of natural resources such as oil, gas and minerals, data fusion can integrate geological, seismic and hydrographic data. This comprehensive view allows for more precise mapping of subsea resources, reducing the environmental footprint of exploration activities and improving the efficiency of extraction processes.
  • Navigation and Object Detection: For autonomous underwater vehicles (AUVs) and remotely operated vehicles (ROVs), navigating the complex subsea environment safely is a constant challenge. Data fusion can combine data from sonar, radar and optical imaging to enhance object detection and collision avoidance, allowing these vehicles to operate more safely and efficiently.
  • Archaeological and Geological Research: The study of underwater archaeological sites and geological formations often requires detailed images and maps. By fusing data from acoustic imaging, photogrammetry and other sensing technologies, researchers can obtain higher-resolution images and more precise data to better understand the underwater sites.
  • Habitat Mapping: Detailed maps of habitats are essential for conservation efforts. Data fusion allows for the integration of photographic data, physical data measurements and biological data to create detailed maps of marine habitats, aiding in the preservation of biodiversity.

What type of techniques do you use to fuse data together?

The Transparent Ocean team’s work with data fusion primarily uses a combination of data-level and feature-level fusion techniques, supplemented by sophisticated machine learning and deep learning methods. At the data level, the team focuses on integrating raw data from various sources, ensuring synchronisation and calibration to maintain the integrity and richness of the data. This approach is challenging but provides a comprehensive dataset for further analysis.
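The synchronisation step at the data level can be as simple as resampling each stream onto a common clock before stacking the raw records. The sketch below is a minimal illustration of that idea using linear interpolation; the sensor streams and sampling rates are invented for the example:

```python
import numpy as np

def synchronise(t_target, t_source, values):
    """Resample a sensor stream onto a common time base by linear
    interpolation (a simple stand-in for the synchronisation and
    calibration step in data-level fusion)."""
    return np.interp(t_target, t_source, values)

# Hypothetical streams: a 10 Hz pressure sensor and a 2.5 Hz sonar range.
t_pressure = np.arange(0.0, 10.0, 0.1)
pressure   = np.sin(t_pressure)        # placeholder readings
t_sonar    = np.arange(0.0, 10.0, 0.4)
sonar      = np.cos(t_sonar)           # placeholder readings

# Align sonar to the pressure clock, then stack into one raw dataset.
sonar_on_pressure_clock = synchronise(t_pressure, t_sonar, sonar)
fused_raw = np.column_stack([pressure, sonar_on_pressure_clock])
print(fused_raw.shape)  # (100, 2): one synchronised multimodal record
```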

Moving to feature-level fusion, the team extracts and combines features from different data sources into a unified feature set. This method reduces the complexity of the data and prepares it for effective machine learning applications. For modelling, the team often employs neural networks and ensemble methods such as random forests, which are well suited to merging features or making integrated decisions from complex datasets.
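A minimal sketch of feature-level fusion in this spirit, with entirely synthetic data standing in for the per-modality features, might look like this (using scikit-learn's random forest):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical per-sample feature vectors extracted from each modality
# (e.g. texture statistics from camera frames, spectral features from sonar).
optical_features = rng.normal(size=(200, 16))
sonar_features   = rng.normal(size=(200, 8))
labels           = rng.integers(0, 2, size=200)   # placeholder labels

# Feature-level fusion: concatenate per-modality features into one set.
fused = np.concatenate([optical_features, sonar_features], axis=1)

# An ensemble model then learns an integrated decision from the fused set.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(fused, labels)
print(model.predict(fused[:5]))
```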

Additionally, the team incorporates transfer learning and domain adaptation techniques, especially in projects where data come from varied domains or where labelled data are scarce in some areas. These techniques help adapt models trained in one setting to perform well in another, significantly boosting their effectiveness.
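As an illustration of the transfer learning idea, the sketch below adapts an ImageNet-pretrained backbone to a new, label-scarce task by freezing the feature extractor and retraining only a replacement head. The backbone choice and class count are assumptions made for the example, not the team's actual configuration:

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from a network pretrained on a data-rich source domain
# (here ImageNet weights as a stand-in), then adapt it to a
# label-scarce target domain such as sonar imagery.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained feature extractor...
for param in backbone.parameters():
    param.requires_grad = False

# ...and replace the classification head for the new task
# (the class count of 3 is purely illustrative).
backbone.fc = nn.Linear(backbone.fc.in_features, 3)

# Only the new head is optimised, so a small labelled target
# set is enough to adapt the model.
optimiser = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
```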

Overall, the team’s approach to data fusion is dynamic, adapting to the specific needs of each project while leveraging the latest advancements in AI and machine learning to achieve the best results.

Can you give an example of when you’ve applied multi-modality data fusion to a subsea project?

In a previous project supported by Marine Scotland, Sentinel-1 (SAR) and Sentinel-2 (multispectral) imagery were fused for the Automatic Geolocation and Measuring of Offshore Energy Infrastructure. The Sentinel-1 SAR data, with its all-weather imaging capability, is used to quickly localise candidate offshore energy infrastructure, while the high-resolution optical data provided by Sentinel-2 enables more accurate localisation and measurement of that infrastructure. For the SeaSense project, the team is also working to combine RGB and depth images, as well as sonar images, for subsea positioning and object detection.
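A toy sketch of that two-stage idea, assuming already co-registered arrays and using invented values throughout, could flag bright SAR returns as candidates and then confirm and measure them on the optical layer:

```python
import numpy as np

# Toy stand-ins for co-registered scenes: SAR backscatter (Sentinel-1)
# and an optical band (Sentinel-2); real data would need reprojection.
sar = np.random.default_rng(1).random((100, 100))
sar[40:43, 50:53] = 5.0            # a bright metallic target
optical = np.zeros((100, 100))
optical[40:43, 50:53] = 1.0        # the same structure, seen optically

# Stage 1: all-weather SAR quickly flags candidate infrastructure
# as pixels with unusually strong backscatter.
candidates = sar > sar.mean() + 4 * sar.std()

# Stage 2: higher-resolution optical data refines the localisation
# and supports measurement of the detected structure.
rows, cols = np.nonzero(candidates & (optical > 0.5))
height = rows.max() - rows.min() + 1
width  = cols.max() - cols.min() + 1
print(f"confirmed target of {height} x {width} pixels")
```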

How does this evolving technology support the energy transition?

The evolving technology of multi-modality data fusion plays a crucial role in supporting the energy transition from fossil fuels to more sustainable energy sources. Here’s how this technology is making a significant impact:

  • Enhancing Renewable Energy Operations: Multi-modal data fusion is instrumental in optimising the operation and maintenance of renewable energy installations, such as wind turbines and solar panels. By integrating data from various sensors (e.g. wind speed, temperature, mechanical stress sensors on turbine blades), operators can predict maintenance needs, reduce downtime and increase the overall efficiency and lifespan of renewable energy assets.
  • Improving Energy Efficiency: In traditional energy sectors, such as oil and gas, data fusion helps in making the extraction and transportation processes more efficient and less environmentally damaging. For instance, in subsea oil extraction, using fused data from acoustic, temperature and pressure sensors can help quickly detect and address leaks, significantly minimising the environmental impact.
  • Facilitating Smart Grids: The integration of renewable energy sources into the power grid requires advanced management systems to handle intermittent energy flows efficiently. Multi-modal data fusion is key in smart grid technology, where data from various sources (consumption patterns, weather forecasts, energy production rates) are integrated to optimise power distribution and balance supply with demand.
  • Supporting Carbon Capture and Storage (CCS): Multi-modal data fusion can enhance the monitoring and management of CCS sites. By integrating geological, seismic and chemical data, these technologies can ensure the safe and efficient sequestration of carbon dioxide, which is crucial for reducing greenhouse gas emissions.
  • Advancing Energy Storage Solutions: Effective integration and management of energy storage are pivotal for mitigating the variability of renewable energy sources. Data fusion allows for better prediction of storage needs and management of energy flows, enhancing the stability and reliability of energy storage systems.
  • Enabling Predictive Maintenance: In the energy sector, predictive maintenance can prevent costly downtimes and extend the life of equipment. Data fusion technologies aggregate and analyse data from various sources to predict failures before they occur, especially in critical infrastructures like nuclear power plants, where safety is paramount.

In summary, multimodal data fusion not only enhances the efficiency and safety of existing energy operations, but also plays a pivotal role in integrating and optimising new, sustainable technologies. This supports the energy transition by improving the reliability and performance of renewable sources, facilitating their adoption and helping to manage the complexities associated with modern energy systems.

How can multimodal data fusion techniques support global environmental challenges?

Multimodal data fusion techniques are instrumental in addressing global environmental challenges by enhancing the accuracy and efficiency of monitoring, analysis and management systems. Here are several key ways in which these techniques can be applied:

  • Climate Change Monitoring: By fusing data from satellite imagery, ground sensors, ocean buoys and atmospheric measurements, researchers can obtain a comprehensive view of climate dynamics. This integrated data helps in more accurate modelling of climate change effects, such as temperature rises, sea-level fluctuations and changes in precipitation patterns, facilitating better prediction and mitigation strategies.
  • Biodiversity Conservation: Multi-modal data fusion can improve the monitoring of ecosystems and wildlife by integrating diverse data sources like satellite images, drone footage and ground-based sensor networks. This approach enables detailed tracking of habitat changes, population dynamics and illegal activities such as poaching, thus supporting more effective conservation efforts.
  • Pollution Control: In tackling pollution, data fusion techniques combine information from air quality sensors, satellite imagery and ground reports to provide a detailed assessment of pollutant dispersion and concentration levels. This enriched dataset aids in pinpointing sources of pollution, monitoring its temporal and spatial trends and enforcing environmental regulations more effectively.
  • Natural Disaster Response and Preparedness: Fusing meteorological data, seismic activity records and hydrological data from multiple sources enables more reliable forecasting of natural disasters like hurricanes, earthquakes and floods. Such integrated systems allow for earlier warnings, better preparedness and more coordinated response efforts, reducing potential damage and enhancing community resilience.
  • Sustainable Agriculture: In agriculture, combining soil sensors, drone imagery and weather data can optimise resource use and crop management. This fusion results in precision agriculture techniques that maximise yield while minimising environmental impacts such as overuse of water and fertilisers.
  • Energy Management: Data fusion is crucial in managing energy systems, especially with the integration of renewable energy sources. By combining energy usage data, weather forecasts and generation statistics, energy providers can better balance supply and demand, reduce wastage and increase the overall sustainability of power systems.
  • Water Resource Management: Multi-modal data fusion supports water management by integrating hydrological data from satellites, ground sensors and climate models. This comprehensive view assists in managing water resources more efficiently, predicting droughts and floods and ensuring sustainable water supply for urban and rural areas.

By providing a richer, more nuanced view of environmental data, multi-modality data fusion techniques enable stakeholders to make informed decisions, predict environmental changes more accurately and implement more effective interventions. These capabilities are essential for addressing the complex and interlinked challenges of global environmental management.

Can you explain how these techniques are being applied to the current SeaSense project?

Currently, in the SeaSense project, the team is leveraging two distinct sensor modalities: optical and sonar. The optical sensors provide detailed object information and visual identification, using both RGB cameras and IR-based stereo depth estimation to map the positions of objects, while the sonar provides range. The fusion of RGB, depth and sonar data is helping the team develop a robust solution for more accurate subsea measurement and navigation, supporting the smooth operation of ROVs.

However, creating a unified pipeline for these multimodal sensors presents unique challenges due to the differing operational principles of each modality. Data-level fusion is particularly challenging, as it requires integrating fundamentally different types of data. To address these challenges, the team is exploring various advanced techniques, including deep learning-based transfer learning and high-level feature sharing, which are promising methods for achieving effective data fusion between the sonar and optical modalities. By leveraging these techniques, the team aims to enhance the integration and utility of the sensor data, thereby improving the overall effectiveness of the monitoring and detection systems in the SeaSense project.
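One plausible reading of high-level feature sharing is a late-fusion architecture in which each modality has its own encoder and the resulting embeddings are concatenated before a joint head. The sketch below is an illustrative stand-in with placeholder layer sizes, not the SeaSense pipeline itself:

```python
import torch
import torch.nn as nn

class LateFusionNet(nn.Module):
    """Each modality gets its own encoder; high-level features are
    shared by concatenation before a joint detection/positioning head.
    All layer sizes here are illustrative placeholders."""

    def __init__(self, feat_dim=64, n_outputs=4):
        super().__init__()

        def encoder(in_channels):
            # A tiny per-modality encoder producing a feat_dim embedding.
            return nn.Sequential(nn.Conv2d(in_channels, 16, 3, padding=1),
                                 nn.ReLU(),
                                 nn.AdaptiveAvgPool2d(1),
                                 nn.Flatten(),
                                 nn.Linear(16, feat_dim))

        self.rgb_enc   = encoder(3)   # RGB camera frames
        self.depth_enc = encoder(1)   # IR-based stereo depth maps
        self.sonar_enc = encoder(1)   # sonar images
        self.head = nn.Linear(3 * feat_dim, n_outputs)

    def forward(self, rgb, depth, sonar):
        # High-level feature sharing: concatenate per-modality embeddings.
        fused = torch.cat([self.rgb_enc(rgb),
                           self.depth_enc(depth),
                           self.sonar_enc(sonar)], dim=1)
        return self.head(fused)

net = LateFusionNet()
out = net(torch.rand(2, 3, 64, 64),   # RGB batch
          torch.rand(2, 1, 64, 64),   # depth batch
          torch.rand(2, 1, 64, 64))   # sonar batch
print(out.shape)  # torch.Size([2, 4])
```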

Thanks to the contributions from Dr Junayed Hasan and Dr Ping Ma. To discover more about how our Transparent Ocean team is solving real-world problems and the other impactful research projects currently being undertaken, view our dedicated Transparent Ocean webpage.