Mega-NeRF++: An Improved Scalable NeRFs for
High-resolution Photogrammetric Images

YiWei Xu; Xin Wang; TengFei Wang; ZongQian Zhan


Introduction


The emergence of NeRF has made neural implicit representation of 3D scenes possible, and the technique has been demonstrated to achieve good rendering results when the scene is controlled and the image resolution is moderate.

Photogrammetric datasets, however, usually contain a large number of high-resolution UAV images and cover wide ground areas. Under the constraints of training time and computational resources, it is hard for the original NeRF to learn 3D information from such datasets directly.

Mega-NeRF introduced a partitioning method to address this problem. However, Mega-NeRF adopts an independent parallel training strategy, so the overlapping information between any two adjacent sub-models is not considered. As a result, different sub-models tend to give different rendering results in the overlapping region, as shown in the figure below.

Therefore, Mega-NeRF++ aims to improve the training strategy and loss function of Mega-NeRF to achieve better rendering results, based on the consistency of the rendering results of adjacent sub-models in the overlapping region.
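One way to express this idea is as an overlap-consistency term added to the usual photometric loss. The form below is our illustrative sketch (the weight $\lambda$ and the exact formulation are assumptions; see the full paper for the actual loss):

```latex
\mathcal{L}
= \underbrace{\sum_{\mathbf{r}} \big\| \hat{C}(\mathbf{r}) - C(\mathbf{r}) \big\|_2^2}_{\text{photometric term}}
\;+\; \lambda \underbrace{\sum_{\mathbf{r} \in \mathcal{O}_{ij}} \big\| \hat{C}_i(\mathbf{r}) - \hat{C}_j(\mathbf{r}) \big\|_2^2}_{\text{overlap-consistency term}}
```

where $C(\mathbf{r})$ is the ground-truth color of ray $\mathbf{r}$, $\hat{C}_i(\mathbf{r})$ is the color rendered by sub-model $i$, and $\mathcal{O}_{ij}$ is the set of rays falling in the overlap of adjacent sub-models $i$ and $j$.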

The Workflow



1. Data Pre-Processing: partition the large scene into sub-regions and assign the training pixels/rays to each sub-model (a sketch follows below).
2. Mega-NeRF Training: train each sub-model independently, following the original Mega-NeRF strategy.
3. Mega-NeRF++ Optimization: alternate individual and joint training so that adjacent sub-models render consistently in their overlapping regions.
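As a concrete illustration of step 1, here is a minimal sketch of assigning rays to spatial cells in the spirit of Mega-NeRF's partitioning; the function name, the sampling scheme, and the distance threshold are all our assumptions, not the actual implementation:

```python
import torch

def assign_rays_to_cells(ray_origins, ray_dirs, centroids, t_samples, radius):
    """Assign each ray to every cell it passes close to (overlap allowed).

    ray_origins, ray_dirs: (R, 3) per-ray origin and direction
    centroids:             (C, 3) cell centroids from the scene partition
    t_samples:             (T,)   depths at which rays are sampled
    radius:                scalar distance threshold (an assumption)
    Returns a boolean (R, C) mask: ray r trains sub-model c if mask[r, c].
    """
    # Sample points along every ray: (R, T, 3).
    points = ray_origins[:, None, :] + t_samples[None, :, None] * ray_dirs[:, None, :]
    # Distance from every sample to every centroid: (R, T, C).
    dists = torch.cdist(points.reshape(-1, 3), centroids)
    dists = dists.reshape(points.shape[0], points.shape[1], -1)
    # A ray belongs to a cell if its closest sample is within `radius`.
    return dists.min(dim=1).values < radius
```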

Data Downloading


For a more intuitive comparison between Mega-NeRF++ and the original Mega-NeRF, we use several datasets previously employed in Mega-NeRF: the Mill 19 dataset, which consists of two scenes (Building and Rubble), and the Quad 6k dataset, captured from a large-scale scene for SfM.

The pictures below show some of the images:


Mill 19 - Rubble is a photogrammetric image dataset captured by UAV, and Quad 6k is a close-range photogrammetric image dataset. We conduct experiments on these two datasets separately to verify the validity of our approach.

To download these datasets, you can visit the corresponding pages via the URLs below:


Methodology


Here we describe how we train each sub-model during the experiments. We first train each sub-model individually for a certain number of iterations using the original Mega-NeRF method, and then adopt an alternating training scheme: individual training and joint training are applied in turn until the preset number of iterations is reached; each of the two phases is applied at least 20 times in our project. Finally, the appearance matching method proposed by Block-NeRF is also applied for comparison.
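A minimal sketch of this alternating schedule, assuming simplified sub-models that map rays directly to colors (all names, iteration counts, and the loss weight are illustrative, not the actual implementation):

```python
import torch

def train_individual(model, rays, rgbs, optimizer, iters):
    """Individual phase: one sub-model fits its own pixels (photometric loss)."""
    for _ in range(iters):
        optimizer.zero_grad()
        loss = torch.mean((model(rays) - rgbs) ** 2)
        loss.backward()
        optimizer.step()

def train_joint(model_a, model_b, rays, rgbs, opt_a, opt_b, iters, lam=0.1):
    """Joint phase: two adjacent sub-models are optimized together on rays
    from their overlapping region, with an extra consistency penalty."""
    for _ in range(iters):
        opt_a.zero_grad(); opt_b.zero_grad()
        pred_a, pred_b = model_a(rays), model_b(rays)
        photometric = torch.mean((pred_a - rgbs) ** 2) + torch.mean((pred_b - rgbs) ** 2)
        consistency = torch.mean((pred_a - pred_b) ** 2)   # overlap agreement
        (photometric + lam * consistency).backward()
        opt_a.step(); opt_b.step()

def train_mega_nerf_pp(model_a, model_b, own_data, overlap_data, rounds=20):
    """Warm up individually, then alternate the two phases `rounds` times."""
    opt_a = torch.optim.Adam(model_a.parameters(), lr=5e-4)
    opt_b = torch.optim.Adam(model_b.parameters(), lr=5e-4)
    for model, opt, (rays, rgbs) in [(model_a, opt_a, own_data[0]),
                                     (model_b, opt_b, own_data[1])]:
        train_individual(model, rays, rgbs, opt, iters=10_000)  # warm-up
    for _ in range(rounds):
        for model, opt, (rays, rgbs) in [(model_a, opt_a, own_data[0]),
                                         (model_b, opt_b, own_data[1])]:
            train_individual(model, rays, rgbs, opt, iters=1_000)
        train_joint(model_a, model_b, *overlap_data, opt_a, opt_b, iters=1_000)
```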

The figure below explains how we train a Mega-NeRF++ model:


We also use a hybrid rendering strategy to boost the rendering results. For any image containing both overlapping and non-overlapping regions, the rendering result of the Mega-NeRF++ model is used for the overlapping part, while the rendering result of the original Mega-NeRF model is used for the non-overlapping part.
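A minimal sketch of this per-pixel composition, assuming both models are callables that map rays to colors and that an overlap mask is available (all names are illustrative):

```python
import torch

def hybrid_render(rays, overlap_mask, mega_nerf, mega_nerf_pp):
    """Compose one image from two renderers.

    rays:         (N, 6) per-pixel ray origins and directions
    overlap_mask: (N,)   boolean, True where the ray falls in an
                         overlapping region of adjacent sub-models
    """
    out = torch.empty(rays.shape[0], 3)
    # Overlapping pixels: use the consistency-optimized Mega-NeRF++ model.
    out[overlap_mask] = mega_nerf_pp(rays[overlap_mask])
    # Non-overlapping pixels: keep the original Mega-NeRF prediction.
    out[~overlap_mask] = mega_nerf(rays[~overlap_mask])
    return out
```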

The figure below explains how we render an image:




Experiments


We first performed an ablation experiment, which consists of three parts:
The figures and tables below show the experimental results:

We also performed a comparison experiment, in which we mainly compared our Mega-NeRF++ with the appearance matching method proposed in Block-NeRF. In this experiment, Mega-NeRF_am / Mega-NeRF++_am denotes a Mega-NeRF / Mega-NeRF++ model additionally optimized with the appearance matching method for another 10,000 iterations.
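For reference, a minimal sketch of our reading of Block-NeRF-style appearance matching: the network weights stay frozen and only the appearance embedding of one sub-model is optimized so that its overlap rendering matches its neighbor's (all names and hyper-parameters are assumptions):

```python
import torch

def appearance_matching(model_src, model_tgt, embed_tgt, overlap_rays,
                        iters=10_000, lr=1e-3):
    """Optimize only `embed_tgt` so model_tgt matches model_src in the overlap."""
    embed_tgt = embed_tgt.clone().requires_grad_(True)
    optimizer = torch.optim.Adam([embed_tgt], lr=lr)
    with torch.no_grad():
        reference = model_src(overlap_rays)            # fixed reference colors
    for _ in range(iters):
        optimizer.zero_grad()
        pred = model_tgt(overlap_rays, embed_tgt)      # embedding-conditioned
        loss = torch.mean((pred - reference) ** 2)     # match appearance only
        loss.backward()
        optimizer.step()
    return embed_tgt.detach()
```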
The results of the comparison experiment are shown below:


We also show the rendering results of different models for comparison, and we mark the regions with significant improvement to highlight the advantages of our method over the others.


According to our experiments, we can draw the following conclusions:



Conclusion


This paper presents an improved method, Mega-NeRF++, which boosts the original large-scale Mega-NeRF based on the consistency of overlapping regions between adjacent sub-models. The method minimizes the deviation between the rendering results predicted by Mega-NeRF++ and the ground truth, while mitigating the color inconsistencies that can arise when adjacent sub-models render overlapping regions. Qualitatively, Mega-NeRF++ renders images with higher fidelity; quantitatively, it achieves higher PSNR and SSIM compared to the original Mega-NeRF.

For more details about our project, please view our full paper via the URL below.

Mega-NeRF++: An Improved Scalable NeRFs for High-resolution Photogrammetric Images


About us

If you have any questions or advice, you can contact us at the following address. In addition, this project was completed collaboratively by multiple individuals, and we are deeply grateful for the contributions and support of the following members and organizations:

Y.W. Xu, Wuhan University, China
X. Wang*, Wuhan University, China
T.F. Wang, Wuhan University, China
Z.Q. Zhan, Wuhan University, China