Multi-view stereo matching reconstructs a three-dimensional point cloud model of a scene from multiple views. Although learning-based methods achieve excellent results compared with traditional methods, existing multi-view stereo matching networks lose low-level detail during feature extraction as the number of convolutional layers increases, which degrades the quality of the subsequent reconstruction.
The objective of this work is to improve the completeness and accuracy of 3D reconstruction and to obtain a 3D point cloud model with richer texture and a more complete structure.
First, a context-semantic information fusion module is constructed in the feature extraction network (FPN), and feature maps containing rich context information are obtained through multi-scale dense connections. Subsequently, full-scale skip connections are introduced into the regularization network to capture shallow detail information and deep semantic information at all scales and to describe the texture features of the scene more accurately, enabling reliable depth estimation.
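To illustrate the full-scale skip connection idea, the following is a minimal PyTorch sketch of one decoder stage that aggregates encoder features from every scale (plus the deeper decoder output) before fusion. It is not the authors' implementation: the module name FullScaleSkip3D, the channel widths, the number of scales, and the use of 3D convolutions for regularization are all illustrative assumptions, since the abstract does not specify the exact layer layout.

```python
# Minimal sketch (not the authors' code): a full-scale skip connection for one
# decoder stage of a 3D regularization U-Net. Names and sizes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FullScaleSkip3D(nn.Module):
    """Fuse features from every encoder scale and the deeper decoder output
    into one decoder stage, so shallow detail and deep semantics are both
    available at that scale."""

    def __init__(self, in_channels_list, out_channels):
        super().__init__()
        # One conv per incoming branch to map it to a common channel width.
        self.branch_convs = nn.ModuleList([
            nn.Sequential(nn.Conv3d(c, out_channels, 3, padding=1, bias=False),
                          nn.BatchNorm3d(out_channels), nn.ReLU(inplace=True))
            for c in in_channels_list
        ])
        # Fusion conv applied after concatenating all resampled branches.
        self.fuse = nn.Sequential(
            nn.Conv3d(out_channels * len(in_channels_list), out_channels, 3,
                      padding=1, bias=False),
            nn.BatchNorm3d(out_channels), nn.ReLU(inplace=True),
        )

    def forward(self, features, target_size):
        # Resample every branch to the spatial size of this decoder stage,
        # then concatenate along channels and fuse.
        resampled = [
            F.interpolate(conv(f), size=target_size, mode="trilinear",
                          align_corners=False)
            for conv, f in zip(self.branch_convs, features)
        ]
        return self.fuse(torch.cat(resampled, dim=1))


if __name__ == "__main__":
    # Toy example: three encoder scales plus one deeper decoder feature.
    enc1 = torch.randn(1, 8, 48, 64, 80)    # shallow, full resolution
    enc2 = torch.randn(1, 16, 24, 32, 40)   # mid scale
    enc3 = torch.randn(1, 32, 12, 16, 20)   # deep scale
    dec_deeper = torch.randn(1, 32, 24, 32, 40)
    skip = FullScaleSkip3D([8, 16, 32, 32], out_channels=16)
    out = skip([enc1, enc2, enc3, dec_deeper], target_size=(48, 64, 80))
    print(out.shape)  # torch.Size([1, 16, 48, 64, 80])
```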
Experimental results on the DTU dataset show that the proposed CU-MVSNet reduces the completeness error by 3.58%, the accuracy error by 3.7%, and the overall error by 3.51% compared with the baseline network. It also generalizes well to the Tanks and Temples (TnT) dataset.
The proposed CU-MVSNet thus improves the completeness and accuracy of 3D reconstruction and produces a 3D point cloud model with richer texture detail and a more complete structure.