| Field | Value |
| --- | --- |
| DOI | https://doi.org/10.1109/BigData55660.2022.10020557 |
| Authors | Pouplin, T.; Perreault, H.; Debaque, B.; Drouin, M.-A.; Duclos-Hindie, N.; Roy, S. |
| Affiliation | National Research Council of Canada, Digital Technologies |
| Format | Text, Article |
| Conference | 2022 IEEE International Conference on Big Data (Big Data), December 17-20, 2022, Osaka, Japan |
| Subject | measurement; location awareness; image registration; visualization; image synthesis; transfer functions; estimation |
| Abstract | Multimodal image registration is a challenging task. To begin with, parallax variation between the images makes the process intrinsically difficult. Additionally, because of phenomenological differences between modalities, the appearance of the same feature may vary significantly between the images, making registration laborious. To help mitigate these issues, we propose a two-step approach targeted at visible and infrared imagery. First, we train a generative adversarial network to learn the domain transfer function between the visible and infrared domains, thereby mitigating the impact of the visual dissimilarity between the images. Second, we train a deep Siamese network to compute a homography in an unsupervised setting. Both elements are combined and trained sequentially. Our method is evaluated on a publicly available dataset. Our results show that the proposed method provides a reduction of more than 30% on average relative to the previous state of the art, and outperforms several baselines and recent deep homography methods. |
| Publication date | 2022-12-17 |
| Publisher | IEEE |
| Language | English |
| Peer reviewed | Yes |
| Record identifier | 2ad271bc-2263-49a8-b84d-ffdbac3255fa |
| Record created | 2023-01-31 |
| Record modified | 2023-02-01 |
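
The abstract above outlines a two-step pipeline: a GAN first translates visible images into the infrared domain, and a deep Siamese network then regresses a homography in an unsupervised setting. The following is a minimal sketch of that idea only, not the authors' implementation: PyTorch and kornia, the module names (`VisibleToInfraredGenerator`, `SiameseHomographyNet`), the layer sizes, and the 4-corner-offset parameterization of the homography with a photometric loss are all assumptions made for illustration.

```python
# Hypothetical sketch (not the authors' code) of a two-step visible-to-infrared
# registration pipeline: domain-transfer generator + Siamese homography regressor
# trained with an unsupervised photometric loss. Assumes PyTorch and kornia.
import torch
import torch.nn as nn
from kornia.geometry.transform import get_perspective_transform, warp_perspective


class VisibleToInfraredGenerator(nn.Module):
    """Toy encoder-decoder standing in for the GAN generator that maps
    visible (RGB) images into the infrared domain (step 1)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),  # 1-channel "IR" output
        )

    def forward(self, visible):
        return self.net(visible)


class SiameseHomographyNet(nn.Module):
    """Toy Siamese regressor predicting 4-corner (dx, dy) offsets between two
    single-channel images (step 2); the feature branch weights are shared."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(8),
        )
        self.head = nn.Sequential(
            nn.Flatten(), nn.Linear(2 * 32 * 8 * 8, 256), nn.ReLU(),
            nn.Linear(256, 8),  # 4 corner offsets -> 8 values
        )

    def forward(self, a, b):
        fa, fb = self.features(a), self.features(b)
        return self.head(torch.cat([fa, fb], dim=1))


def photometric_loss(fake_ir, real_ir, offsets):
    """Unsupervised objective: turn the predicted corner offsets into a 3x3
    homography, warp the translated image with it, and compare the result to
    the real infrared image with an L1 penalty."""
    b, _, h, w = fake_ir.shape
    corners = torch.tensor(
        [[0.0, 0.0], [w - 1.0, 0.0], [w - 1.0, h - 1.0], [0.0, h - 1.0]],
        device=fake_ir.device,
    ).unsqueeze(0).repeat(b, 1, 1)
    H = get_perspective_transform(corners, corners + offsets.view(b, 4, 2))
    warped = warp_perspective(fake_ir, H, dsize=(h, w))
    return (warped - real_ir).abs().mean()


if __name__ == "__main__":
    generator = VisibleToInfraredGenerator()   # pretrained in step 1 in practice
    homography_net = SiameseHomographyNet()
    visible = torch.rand(2, 3, 128, 128)
    infrared = torch.rand(2, 1, 128, 128)
    fake_ir = generator(visible)                 # bridge the modality gap
    offsets = homography_net(fake_ir, infrared)  # predict corner offsets
    loss = photometric_loss(fake_ir, infrared, offsets)
    loss.backward()                              # gradients reach both networks
    print(f"photometric loss: {loss.item():.4f}")
```

The paper states that the two elements are combined and trained sequentially; the single combined backward pass above is shown only to confirm that a warp-based photometric loss of this kind is differentiable through both networks.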