Volume 18, Issue 10
  • ISSN: 2352-0965
  • E-ISSN: 2352-0973

Abstract

Background

Multi-view stereo matching reconstructs a three-dimensional point cloud model of a scene from multiple views. Although learning-based methods achieve excellent results compared with traditional methods, existing multi-view stereo matching networks lose low-level detail during feature extraction as the number of convolutional layers grows, which degrades the quality of the subsequent reconstruction.

Objective

The objective of this work is to improve the completeness and accuracy of 3D reconstruction and to obtain a 3D point cloud model with richer texture and a more complete structure.

Methods

First, a context-semantic information fusion module is constructed in the feature extraction network (an FPN), and feature maps containing rich context information are obtained through multi-scale dense connections. Subsequently, full-scale skip connections are introduced into the regularization stage to capture shallow detail information and deep semantic information at all scales, so that the texture features of the scene are captured more accurately and reliable depth estimation can be carried out. A conceptual sketch of these two components follows.
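The PyTorch sketch below is intended only to make the two ideas concrete: a feature pyramid whose output fuses every scale through dense connections, and a 3D cost-volume regularizer whose decoder receives skip connections from all encoder scales. The class names (ContextFusionFPN, FullScaleSkipRegularizer), channel widths, and number of pyramid levels are illustrative assumptions, not the authors' implementation.

# Sketch only: names, channel widths, and level counts are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContextFusionFPN(nn.Module):
    """FPN-style 2D extractor whose output fuses all scales via dense connections."""
    def __init__(self, in_ch=3, ch=(8, 16, 32)):
        super().__init__()
        self.enc = nn.ModuleList()
        c_prev = in_ch
        for i, c in enumerate(ch):
            stride = 1 if i == 0 else 2          # halve resolution after the first level
            self.enc.append(nn.Sequential(
                nn.Conv2d(c_prev, c, 3, stride, 1), nn.ReLU(inplace=True)))
            c_prev = c
        self.fuse = nn.Conv2d(sum(ch), ch[-1], 1)  # dense multi-scale fusion

    def forward(self, x):
        feats, h = [], x
        for block in self.enc:
            h = block(h)
            feats.append(h)
        size = feats[0].shape[-2:]
        up = [F.interpolate(f, size=size, mode="bilinear", align_corners=False)
              for f in feats]                      # bring every scale to full resolution
        return self.fuse(torch.cat(up, dim=1))     # context-rich feature map

class FullScaleSkipRegularizer(nn.Module):
    """3D regularizer whose decoder aggregates encoder features from every scale,
    so shallow detail and deep semantics both reach the depth prediction."""
    def __init__(self, in_ch=32, ch=(8, 16, 32)):
        super().__init__()
        self.e0 = nn.Sequential(nn.Conv3d(in_ch, ch[0], 3, 1, 1), nn.ReLU(inplace=True))
        self.e1 = nn.Sequential(nn.Conv3d(ch[0], ch[1], 3, 2, 1), nn.ReLU(inplace=True))
        self.e2 = nn.Sequential(nn.Conv3d(ch[1], ch[2], 3, 2, 1), nn.ReLU(inplace=True))
        self.dec = nn.Sequential(nn.Conv3d(sum(ch), ch[0], 3, 1, 1), nn.ReLU(inplace=True))
        self.prob = nn.Conv3d(ch[0], 1, 3, 1, 1)

    def forward(self, cost):                       # cost: (B, C, D, H, W)
        f0 = self.e0(cost)
        f1 = self.e1(f0)
        f2 = self.e2(f1)
        size = f0.shape[-3:]
        skips = [f0,
                 F.interpolate(f1, size=size, mode="trilinear", align_corners=False),
                 F.interpolate(f2, size=size, mode="trilinear", align_corners=False)]
        d0 = self.dec(torch.cat(skips, dim=1))     # full-scale skip aggregation
        return self.prob(d0).squeeze(1)            # (B, D, H, W) depth logits

if __name__ == "__main__":
    feat = ContextFusionFPN()(torch.randn(1, 3, 64, 80))                 # (1, 32, 64, 80)
    logits = FullScaleSkipRegularizer()(torch.randn(1, 32, 48, 16, 20))  # (1, 48, 16, 20)
    print(feat.shape, logits.shape)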

Results

Experimental results on the DTU dataset show that the proposed CU-MVSNet reduces the completeness error by 3.58%, the accuracy error by 3.7%, and the overall error by 3.51% compared with the benchmark network. It also generalizes well to the Tanks and Temples (TnT) dataset.
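As a point of reference, the DTU overall score is commonly reported as the mean of the accuracy and completeness distances, and percentages such as those above are relative reductions with respect to the baseline. A minimal sketch with hypothetical numbers, not the paper's values:

def relative_reduction(baseline: float, proposed: float) -> float:
    """Percentage reduction of an error metric relative to a baseline."""
    return 100.0 * (baseline - proposed) / baseline

# Hypothetical accuracy/completeness distances in mm; not taken from the paper.
base_acc, base_comp, ours_acc, ours_comp = 0.400, 0.360, 0.385, 0.347
overall_base = (base_acc + base_comp) / 2
overall_ours = (ours_acc + ours_comp) / 2
print(f"overall error reduced by {relative_reduction(overall_base, overall_ours):.2f}%")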

Conclusion

The proposed CU-MVSNet improves the completeness and accuracy of 3D reconstruction and yields a 3D point cloud model with more detailed texture and a more complete structure.

