# Associative3D: Volumetric Reconstruction from Sparse Views

###### Abstract

This paper studies the problem of 3D volumetric reconstruction from two views of a scene with an unknown camera. While seemingly easy for humans, this problem poses many challenges for computers, since it requires simultaneously reconstructing the objects in each view and figuring out the relationship between the views. We propose a new approach that estimates reconstructions, distributions over the camera/object and camera/camera transformations, as well as an inter-view object affinity matrix. This information is then jointly reasoned over to produce the most likely explanation of the scene. We train and test our approach on a dataset of indoor scenes, and rigorously evaluate the merits of our joint reasoning approach. Our experiments show that it recovers reasonable scenes from sparse views, although the problem remains challenging. Project site: https://jasonqsy.github.io/Associative3D.

###### Keywords:

3D Reconstruction

† Equal contribution.

## 1 Introduction

How would you make sense of the scene in Fig. 1? After rapidly understanding the individual pictures, one can fairly quickly attempt to match the objects in each: the TV on the left in image A must go with the TV on the right in image B, and similarly with the couch. Therefore, the two chairs, while similar, are not actually the same object. Having pieced this together, we can then reason that the two images depict the same scene, but seen with a 180° change of view, and infer the 3D structure of the scene. Humans have an amazing ability to reason about the 3D structure of scenes, even with as little as two sparse views with an unknown relationship. We routinely use this ability to understand images taken at an event, look for a new apartment on the Internet, or evaluate possible hotels (e.g., for ECCV). The goal of this paper is to give the same ability to computers.

Unfortunately, current techniques are not up to this challenge of volumetric reconstruction given two views from unknown cameras, since it requires both reconstruction and pose estimation. Classic correspondence-based methods [20, 9] require many more views in practice and cannot make inferences about unseen parts of the scene (e.g., what the chair looks like from behind), since this requires some form of learning. While there has been success in learning-based techniques for this sort of object reconstruction [7, 17, 50, 27], it is unknown how to reliably stitch a set of reconstructions together into a single coherent story. Certainly, there are systems that can identify pose with respect to a fixed scene [26] or a pair of views [15]; these approaches, however, cannot reconstruct.

This paper presents a learning-based approach to this problem, whose results are shown in Fig. 1. The system can take two views with an unknown relationship and produce a 3D scene reconstruction for both images jointly. This 3D scene reconstruction comprises a set of per-object reconstructions rigidly placed in the scene with a pose, as in [50, 27, 30]. Since the 3D scene reconstruction is the union of the posed objects, getting the 3D scene reconstruction correct requires getting both the 3D object reconstructions right and correctly identifying 3D object poses. Our key insight is that jointly reasoning about objects and poses improves the results. Our method, described in Section 3, predicts evidence including: (a) voxel reconstructions for each object; (b) distributions over rigid body transformations between cameras and objects; and (c) an inter-object affinity for stitching. Given this evidence, our system can stitch them together to find the most likely reconstruction. As we empirically demonstrate in Section 4, this joint reasoning is crucial: understanding each image independently and then estimating a relative pose performs substantially worse than our approach. These experiments are conducted on a large and challenging dataset of indoor scenes. We also show some common failure modes and demonstrate transfer to the NYUv2 dataset [45].

Our primary contributions are: (1) introducing a novel problem, volumetric scene reconstruction from two unknown sparse views; (2) learning an inter-view object affinity to find correspondences between images; and (3) a joint system, including the stitching stage, that outperforms simply combining its individual components.

## 2 Related Work

The goal of this work is to take two views from cameras related by an unknown transformation and produce a single volumetric understanding of the scene. This touches on a number of important problems in computer vision ranging from the estimation of the pose of objects and cameras, full shape of objects, and correspondence across views. Our approach deliberately builds heavily on these works and, as we show empirically, our success depends crucially on their fusion.

This problem poses severe challenges for classic correspondence-based approaches [20]. From a purely geometric perspective, we are totally out of luck: even if we can identify the position of the camera via epipolar geometry and wide baseline stereo [40, 37], we have no correspondence for most objects in Fig. 1 that would permit depth given known baseline, let alone another view that would help lead to the understanding of the full shape of the chair.

Recent work has tackled identifying this full volumetric reconstruction via learning. Learning-based 3D has made significant progress recently, including 2.5D representations [14, 52, 5, 29], single object reconstruction [53, 56, 19, 42, 8], and scene understanding [6, 23, 33, 32, 12]. In particular, researchers have developed increasingly detailed volumetric reconstructions beginning with objects [7, 17, 18] and then moving to scenes [50, 27, 30, 38] as a composition of object reconstructions that have a pose with respect to the camera. Focusing on full volumetric reconstruction, our approach builds on this progression, and creates an understanding that is built upon jointly reasoning over parses of two scenes, affinities, and relative poses; as we empirically show, this produces improvements in results. Of these works, we are most inspired by Kulkarni et al. [27] in that it also reasons over a series of relative poses; our work builds on top of this as a base inference unit and handles multiple images. We note that while we build on a particular approach to scenes [27] and objects [17], our approach is general.

While much of this reconstruction work is single-image, some is multiview, although usually in the case of an isolated object [25, 7, 24] or with hundreds of views [22]. Our work aims at the particular task of as little as two views, and reasons over multiple objects. While traditional local features [34] are insufficient to support reasoning over objects, semantic features are useful [13, 51, 2].

At the same time, there has been considerable progress in identifying the relative pose from images [36, 26, 15, 1], RGB-D Scans [54, 55] or video sequences [58, 43, 47]. Of these, our work is most related to learning-based approaches to identifying relative pose from RGB images, and semantic Structure-from-Motion [1] and SLAM [43], which make use of semantic elements to improve the estimation of camera pose. We build upon this work in our approach, especially work like RPNet [15] that directly predicts relative pose, although we do so with a regression-by-classification formulation that provides uncertainty. As we show empirically, propagating this uncertainty forward lets us reason about objects and produce superior results to only focusing on pose.

## 3 Approach

The goal of the system is to map a pair of sparse views of a room to a full 3D reconstruction. As input, we assume a pair of images of a room. As output, we produce a set of objects represented as voxels, which are rigidly transformed and anisotropically scaled into the scene in a single coordinate frame. We achieve this with an approach, summarized in Fig. 2, that consists of three main parts: an object branch, a camera branch, and a stitching stage.

The output space is a factored representation of a 3D scene, similar to [50, 27, 30]. Specifically, in contrast to using a single voxel-grid or mesh, the scene is represented as a set of per-object voxel-grids with a scale and pose that are placed in the scene. These can be converted to a single 3D reconstruction by taking their union, and so improving the 3D reconstruction can be done by either improving the per-object voxel grid or improving its placement in the scene.

The first two parts of our approach are two neural networks. An object branch examines each image, detects objects, and produces single-view 3D reconstructions in the camera's coordinate frame, as well as a per-object embedding that helps find the same object in the other image. At the same time, a camera branch predicts the relative pose between images, represented as a distribution over a discrete set of rigid transformations between the cameras. These networks are trained separately to minimize complexity.

The final step, a stitching stage, combines these outputs. The two networks give: a collection of objects per image in that image's coordinate frame; a cross-image affinity that predicts object correspondence between the two views; and a set of likely transformations from one camera to the other. The stitching stage selects a final set of predictions by minimizing an objective function that encourages, among other things, similar objects to be in the same location and the camera pose to be likely. Unlike the first two stages, this is an optimization rather than a feedforward network.

### 3.1 Object Branch

The goal of our object branch is to take an image and produce a set of reconstructed objects in the camera's coordinate frame, as well as an embedding that lets us match across views. We achieve this by extending 3D-RelNet [27], adjusting it as little as possible to ensure fair comparisons. We refer the reader to [27, 50] for a fuller explanation, but briefly, these networks act akin to an object detector like Faster-RCNN [41] with additional outputs. As input, 3D-RelNet takes an image and a set of 2D bounding box proposals, and maps the image through convolutional layers to a feature map, from which it extracts per-bounding-box convolutional features. These features pass through fully connected layers to predict: a detection score (to suppress bad proposals), voxels (to represent the object), and a transformation to the world frame (represented by rotation, scale, and translation, and calculated via both per-object and pairwise poses). We extend this to also produce an n-dimensional embedding on the unit sphere (i.e., with unit L2 norm) that helps associate objects across images.

We use and train the embedding by creating a cross-image affinity matrix between objects. Suppose the first and second images have $n$ and $m$ objects, with embeddings $\{\mathbf{e}_i\}_{i=1}^{n}$ and $\{\mathbf{e}'_j\}_{j=1}^{m}$ respectively. We then define our affinity matrix $A \in [0,1]^{n \times m}$ as

$$A_{i,j} = \sigma\!\left(\gamma\, \mathbf{e}_i^{\top} \mathbf{e}'_j\right) \tag{1}$$

where $\sigma$ is the sigmoid/logistic function and $\gamma$ scales the output. Ideally, $A_{i,j}$ should indicate whether objects $i$ and $j$ are the same object seen from a different view: $A_{i,j}$ is high if this is true and low otherwise.
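As a concrete sketch of Eq. (1) in NumPy (the scale factor `gamma` here is a hypothetical stand-in for the scaling used by the model):

```python
import numpy as np

def affinity_matrix(E1, E2, gamma=5.0):
    """Cross-image affinity (Eq. 1): A[i, j] = sigmoid(gamma * <e_i, e'_j>).
    E1: (n, d) and E2: (m, d) unit-norm embeddings; `gamma` is a
    hypothetical scaling factor on the dot product."""
    logits = gamma * E1 @ E2.T            # (n, m) scaled dot products
    return 1.0 / (1.0 + np.exp(-logits))  # elementwise sigmoid

# Toy usage: identical embeddings score near 1, opposite ones near 0.
e = np.eye(3)[:2]                                 # two unit embeddings, view 1
A = affinity_matrix(e, np.vstack([e[0], -e[0]]))  # view 2: a match and a non-match
```

Since the embeddings lie on the unit sphere, the dot product is the cosine similarity, and the sigmoid squashes it into a [0, 1] affinity.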

We train this embedding network using ground-truth bounding box proposals so that we can easily calculate a ground-truth affinity matrix $\bar{A}$. We then minimize $L_{\mathrm{emb}}$, a balanced mean-square loss between $A$ and $\bar{A}$: if $\mathcal{P} = \{(i,j) : \bar{A}_{i,j} = 1\}$ is the set of positive labels and $\mathcal{N} = \{(i,j) : \bar{A}_{i,j} = 0\}$ the set of negative labels, then the loss is

$$L_{\mathrm{emb}} = \frac{1}{|\mathcal{P}|} \sum_{(i,j) \in \mathcal{P}} \left(A_{i,j} - 1\right)^2 + \frac{1}{|\mathcal{N}|} \sum_{(i,j) \in \mathcal{N}} A_{i,j}^2 \tag{2}$$

which balances positive and negative labels (since affinity is imbalanced).
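A minimal NumPy sketch of this balanced loss (Eq. (2)), assuming `A_gt` is a binary ground-truth affinity:

```python
import numpy as np

def balanced_mse_loss(A, A_gt):
    """Balanced mean-square loss (Eq. 2): positive pairs (A_gt == 1)
    and negative pairs (A_gt == 0) are averaged separately, so the
    scarce positives are not swamped by the many negatives."""
    pos = A_gt == 1
    loss_pos = ((A[pos] - 1.0) ** 2).mean() if pos.any() else 0.0
    loss_neg = (A[~pos] ** 2).mean() if (~pos).any() else 0.0
    return float(loss_pos + loss_neg)

loss = balanced_mse_loss(np.array([[0.9, 0.2], [0.1, 0.8]]),
                         np.array([[1, 0], [0, 1]]))
```

Averaging each set separately means the gradient is not dominated by negatives, which vastly outnumber positives when most object pairs do not correspond.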

### 3.2 Camera Branch

Our camera branch aims to identify or narrow down the possible relationship between the two images. We approach this by building a siamese network [3] that predicts the relative camera pose between the two images. We use ResNet-50 [21] to extract features from two input images. We concatenate the output features and then use two linear layers to predict the translation and rotation.

We formulate the prediction of rotation and translation as a classification problem to help manage the uncertainty in the problem. We found that propagating uncertainty (via top predictions) was helpful: a single feedforward network suggests likely rotations, and a subsequent stage can make a more detailed assessment in light of the object branch's predictions. Additionally, even if we care about only one output, we found regression-by-classification to be helpful since the output tended to have multiple modes (e.g., being fairly certain of the rotation modulo 90° by recognizing that both images depict a cuboidal room). Regression tends to split the difference, producing predictions that satisfy neither mode, while classification picks one, as observed in [50, 28].

We cluster the rotation and translation vectors into 30 and 60 bins respectively, and predict two multinomial distributions over them, minimizing the cross-entropy loss. At test time, we select the Cartesian product of the top 3 most likely rotation bins and the top 10 most likely translation bins as the final predictions, which are treated as proposals in the next section.
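The proposal step can be sketched as follows; `pose_proposals` and the Dirichlet-sampled toy distributions are illustrative, not the trained model's outputs, and the bin centroids obtained from clustering are omitted:

```python
import numpy as np
from itertools import product

def pose_proposals(rot_probs, trans_probs, k_rot=3, k_trans=10):
    """Cartesian product of the top-k rotation and translation bins,
    scored by the product of the bin probabilities, best first."""
    top_rot = np.argsort(rot_probs)[::-1][:k_rot]
    top_trans = np.argsort(trans_probs)[::-1][:k_trans]
    proposals = [(int(r), int(t), rot_probs[r] * trans_probs[t])
                 for r, t in product(top_rot, top_trans)]
    return sorted(proposals, key=lambda p: -p[2])

np.random.seed(0)
rot_probs = np.random.dirichlet(np.ones(30))    # 30 rotation bins
trans_probs = np.random.dirichlet(np.ones(60))  # 60 translation bins
props = pose_proposals(rot_probs, trans_probs)  # 3 x 10 = 30 proposals
```

Keeping 30 ranked (rotation, translation) hypotheses rather than a single argmax is what lets the stitching stage later revisit the camera pose in light of object evidence.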

### 3.3 Stitching Object and Camera Branches

Once we have run the object and camera branches, our goal is to produce a single stitched result. As input to this step, our object branch gives: for view 1, with $n$ objects, the voxels $\{V_i\}$ and transformations $\{T_i\}$; similarly, for the $m$ objects in view 2, the voxels $\{V'_j\}$ and transformations $\{T'_j\}$; and a cross-view affinity matrix $A \in [0,1]^{n \times m}$. Additionally, the camera branch gives a set of potential camera transformations $\{P_k\}$ between the two views.

The goal of this section is to integrate this evidence to find a final cross-camera pose $P$ and a correspondence $C \in \{0,1\}^{n \times m}$ from view 1 to view 2. This correspondence is one-to-one and has the option to ignore an object (i.e., $C_{i,j} = 1$ if and only if objects $i$ and $j$ are in correspondence, in which case $C_{i,k} = 0$ for all $k \neq j$ and $C_{l,j} = 0$ for all $l \neq i$).

We cast this as a minimization problem over $P$ and $C$, including terms in the objective function that incorporate the above evidence. The cornerstone term is one that integrates all the evidence to examine the quality of the stitch, akin to trying a hypothesis and seeing how well things match up. We implement this by computing the distance between corresponding object voxels according to $C$, once the transformations are applied, or:

$$L_{\mathrm{stitch}}(P, C) = \sum_{i,j} C_{i,j}\, \mathcal{D}\!\left(P(T_i(V_i)),\, T'_j(V'_j)\right) \tag{3}$$

Here, $\mathcal{D}$ is the chamfer distance between points on the edges of each shape, as defined in [39, 44], or for two point clouds $X$ and $Y$:

$$\mathcal{D}(X, Y) = \frac{1}{|X|} \sum_{x \in X} \min_{y \in Y} \|x - y\|_2^2 + \frac{1}{|Y|} \sum_{y \in Y} \min_{x \in X} \|x - y\|_2^2 \tag{4}$$

Additionally, we have terms that reward making $C$ and $P$ likely according to our object and image networks, or: the sum of similarities between corresponding objects according to the affinity matrix $A$, $L_{\mathrm{aff}} = -\sum_{i,j} C_{i,j} A_{i,j}$; as well as the negative log-probability of the camera pose transformation from the image network, $L_{\mathrm{cam}} = -\log p(P)$. Finally, to preclude trivial solutions, we include a term penalizing un-matched objects, $L_{\mathrm{un}} = (n + m) - 2\sum_{i,j} C_{i,j}$. In total, our objective function is the weighted sum of these terms, or:

$$L(P, C) = L_{\mathrm{stitch}} + \lambda_{\mathrm{aff}} L_{\mathrm{aff}} + \lambda_{\mathrm{cam}} L_{\mathrm{cam}} + \lambda_{\mathrm{un}} L_{\mathrm{un}} \tag{5}$$
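The chamfer distance of Eq. (4) has a direct NumPy translation (a brute-force sketch over raw point sets; the paper applies it to points on shape edges):

```python
import numpy as np

def chamfer_distance(X, Y):
    """Symmetric chamfer distance (Eq. 4) between point sets X: (n, 3)
    and Y: (m, 3): average nearest-neighbor squared distance, in both
    directions. Brute force O(nm), for illustration only."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)  # (n, m) pairwise
    return float(d2.min(axis=1).mean() + d2.min(axis=0).mean())

X = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
d_same = chamfer_distance(X, X.copy())                        # identical clouds
d_shift = chamfer_distance(X, X + np.array([0.0, 1.0, 0.0]))  # 1m offset
```

For large point sets, a k-d tree would replace the quadratic pairwise distance matrix, but the brute-force form mirrors the equation exactly.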

The search space is intractably large, so we optimize the objective function by a RANSAC-like search over the top hypotheses for $P$ and feasible object correspondences. For each top hypothesis of $P$, we randomly sample a fixed number of object correspondence proposals. This is generally sufficient, since a correspondence between two objects is feasible only if their similarity according to the affinity matrix exceeds a threshold. We use random search over object correspondences because the search space grows factorially with the number of objects in correspondence. Once complete, we average the translation and scale of corresponding objects, and randomly pick one of their rotations and shapes. Averaging performs poorly for rotation since there are typically multiple rotation modes that cannot be averaged: a symmetric table is correct at either 0° or 180°, but not at 90°. Averaging voxel grids does not make sense either, since objects may be only partially observed. We therefore pick one mode at random for rotation and shape. Details are available in the appendix.
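The RANSAC-like search can be sketched as below. This is a heavily simplified illustration, not the actual implementation: objects are reduced to centroids, the voxel chamfer term becomes a centroid distance, and `tau`, `lam`, and `n_samples` are hypothetical parameters:

```python
import numpy as np

def stitch(centroids1, centroids2, A, poses, pose_probs,
           n_samples=100, tau=0.5, lam=1.0, seed=0):
    """RANSAC-like search over pose hypotheses and correspondences.
    centroids1: (n, 3), centroids2: (m, 3) object centroids per view.
    A: (n, m) affinity; poses: list of (R, t) mapping view 1 into
    view 2's frame; pose_probs: their probabilities."""
    rng = np.random.default_rng(seed)
    n, m = A.shape
    # Only pairs above the affinity threshold are ever matched.
    feasible = [(i, j) for i in range(n) for j in range(m) if A[i, j] >= tau]
    best = (np.inf, None, None)  # (cost, pose, matches)
    for (R, t), p in zip(poses, pose_probs):
        warped = centroids1 @ R.T + t
        for _ in range(n_samples):
            rng.shuffle(feasible)  # random one-to-one assignment order
            used1, used2, match = set(), set(), []
            for i, j in feasible:
                if i not in used1 and j not in used2:
                    used1.add(i); used2.add(j); match.append((i, j))
            # Unmatched-object penalty and (negative log) pose likelihood.
            cost = lam * (n + m - 2 * len(match)) - np.log(p)
            for i, j in match:
                # Distance term minus affinity reward for matched pairs.
                cost += np.linalg.norm(warped[i] - centroids2[j]) - A[i, j]
            if cost < best[0]:
                best = (cost, (R, t), match)
    return best

# Toy usage: identical scenes under the identity pose hypothesis.
c1 = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
A = np.array([[0.9, 0.1], [0.1, 0.9]])
best = stitch(c1, c1.copy(), A, [(np.eye(3), np.zeros(3))], [1.0])
```

The structure (score every sampled correspondence under every top pose hypothesis, keep the cheapest) is the point; the real system scores full voxel shapes rather than centroids.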

## 4 Experiments

We now describe a set of experiments that aim to address the following questions: (1) how well does the proposed method work and are there simpler approaches that would solve the problem? and (2) how does the method solve the problem? We first address question (1) by evaluating the proposed approach compared to alternate approaches both qualitatively and by evaluating the full reconstruction quantitatively. We then address question (2) by evaluating individual components of the system. We focus on what the affinity matrix learns and whether the stitching stage can jointly improve object correspondence and relative camera pose estimation. Throughout, we test our approach on the SUNCG dataset [46, 57], following previous work [50, 27, 30, 54, 57]. To demonstrate transfer to other data, we also show qualitative results on NYUv2 [45].

*(Figure: qualitative results. Columns: input images (Image 1, Image 2), then Prediction vs. GT from Camera 1, from Camera 2, and from a bird's-eye view.)*

### 4.1 Experimental Setup

We train and extensively evaluate on SUNCG [46] since it provides 3D scene ground truth, including voxel representations of objects. There are realistic datasets such as ScanNet [10] and Matterport3D [4], but they only provide non-watertight meshes, and producing filled object voxel representations from non-watertight meshes remains an open problem. Similarly, Pix3D [48] aligns IKEA furniture models with images, but not all objects are labeled.

Datasets. We follow the 70%/10%/20% training, validation, and test split of houses from [27]. For each house, we randomly sample up to ten rooms; for each room, we randomly sample one pair of views. Furthermore, we filter the validation and test sets: we eliminate pairs with no overlapping object between views, and pairs in which all of one image's objects are visible in the other view (i.e., one view's objects are a proper subset of the other's). We do not filter the training set, since learning relative pose requires a large and diverse training set. Overall, we have 247532/1970/2964 image pairs for training, validation, and testing, respectively. Following [50], we use six object classes: bed, chair, desk, sofa, table, and tv.

Full-Scene Evaluation: Our output is a full-scene reconstruction, represented as a set of per-object voxel grids that are posed and scaled in the scene. A scene prediction can be totally wrong even if an object has the correct shape, e.g., when its translation is off by 2 meters. Therefore, we quantify performance by treating the problem as a 3D detection problem in which we predict a series of 3D boxes and voxel grids. This lets us evaluate which aspects of the problem currently hold methods back. Similar to [27], for each object, we define error metrics as follows:

• Translation ($\delta_t$): Euclidean distance between the predicted and ground-truth translations, $\|t - \bar{t}\|_2$, thresholded at 1m.

• Scale ($\delta_s$): Average log difference in scaling factors, $\frac{1}{3} \sum_{i=1}^{3} \left| \log_2(s_i) - \log_2(\bar{s}_i) \right|$, with a fixed threshold.

• Rotation ($\delta_r$): Geodesic rotation distance, $\arccos\!\left(\frac{\mathrm{tr}(R^{\top}\bar{R}) - 1}{2}\right)$, thresholded at 30°.

• Shape ($\delta_v$): Following [49], we use the F-score to measure the difference between the predicted and ground-truth shapes, with a fixed threshold.

A prediction is a true positive only if all errors are lower than our thresholds. We calculate the precision-recall curve based on that and report average precision (AP). We also report AP for each single error metric.
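A sketch of this protocol follows. The translation and rotation thresholds follow the text; `th_s` and `th_f` are placeholder values, and `average_precision` uses the simple mean-precision-at-positives form:

```python
import numpy as np

def is_true_positive(err_t, err_s, err_r, err_f,
                     th_t=1.0, th_s=0.2, th_r=30.0, th_f=0.25):
    """A detection counts as correct only if every error is under its
    threshold (1m / 30 deg follow the text; th_s and th_f are
    placeholder values, not the paper's)."""
    return err_t < th_t and err_s < th_s and err_r < th_r and err_f < th_f

def average_precision(scores, labels):
    """AP as the mean of the precision values measured at each true
    positive, with detections ranked by confidence score."""
    order = np.argsort(scores)[::-1]
    labels = np.asarray(labels)[order]
    prec = np.cumsum(labels) / np.arange(1, len(labels) + 1)
    return float(prec[labels == 1].mean()) if labels.any() else 0.0

ap = average_precision([0.9, 0.8, 0.7], [1, 0, 1])
```

Because all four errors must pass at once, a prediction that nails the shape but misplaces the object still counts as a miss, which is exactly the strictness discussed in Sec. 4.2.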

Baselines. Since there is no prior work on this task, our experiments compare to ablations and alternate forms of our method. We use the following baselines, each of which tests a concrete hypothesis:

• Feedforward: Uses the object branch to recover single-view 3D scenes and our camera branch to estimate the relative pose between views, ignoring the affinity matrix and picking the top-1 relative pose predicted by the camera branch. There can be many duplicate objects in its output. This tests whether a simple feedforward method is sufficient.

• NMS: In addition to the feedforward approach, performs non-maximum suppression on the final predictions, merging objects that are close to each other. This tests whether a simple merging policy would work.

• Raw Affinity: Merges objects based on top-1 similarity from the predicted affinity matrix. This tests whether our stitching stage is necessary.

• Associative3D: Our complete method, which optimizes the objective function by searching over possible rotations, translations, and object correspondences.

*(Figure: qualitative comparison. Columns: Image 1, Image 2, then reconstructions from Feedforward, NMS, Raw Affinity, Ours, and GT.)*

### 4.2 Full Scene Evaluation

We begin by evaluating our full scene reconstruction. Our output is a set of per-object voxels that are posed and scaled in the scene; the reconstruction quality of a single object is determined by both its voxel grid and its pose.

First, we show qualitative examples from the proposed method in Fig. 3, as well as a comparison with alternate approaches in Fig. 11, on the SUNCG test set. The Feedforward approach tends to produce duplicate objects since it does not know the object correspondence; figuring out the camera pose and the common objects is a non-trivial task. Raw Affinity does not work well since it may merge objects based on their similarity regardless of possible global conflicts. NMS works when the relative camera pose is accurate, but fails when many objects are close to each other. In contrast, Associative3D demonstrates the ability to jointly reason over reconstructions, object poses, and camera poses to produce a reasonable explanation of the scene. More qualitative examples are available in the supplementary material.

We then evaluate our proposed approach quantitatively. In a factored representation [50], object poses and shapes are equally important to the full scene reconstruction: the voxel reconstruction of a scene may have no overlap with the ground truth even if all the shapes are right, if they are in the wrong place. Therefore, we formulate evaluation as a 3D detection problem, where a prediction is a true positive only if all of translation, scale, rotation, and shape are correct. 3D detection is a strict metric: if the whole scene is slightly off in one aspect, the AP may be very low even though the predicted scene is still reasonable. We nevertheless use it to quantify our performance.

Table 1. Full-scene evaluation (AP). The first five columns are over all examples; the last three report the All-metric AP on the top 25%/50%/75% of examples ranked by single-view prediction quality.

| Methods | All | Shape | Trans | Rot | Scale | Top 25% (All) | Top 50% (All) | Top 75% (All) |
|---|---|---|---|---|---|---|---|---|
| Feedforward | 21.2 | 22.5 | 31.7 | 28.5 | 26.9 | 41.6 | 34.6 | 28.6 |
| NMS | 21.1 | 23.5 | 31.9 | 29.0 | 27.2 | 42.0 | 34.7 | 28.7 |
| Raw Affinity | 15.0 | 24.4 | 26.3 | 28.2 | 25.9 | 28.6 | 23.5 | 18.9 |
| Associative3D | 23.3 | 24.5 | 38.4 | 29.5 | 27.3 | 48.3 | 38.8 | 31.4 |

Table 1 shows our performance compared with all three baseline methods. Our approach outperforms all of them, which is consistent with the qualitative examples. Moreover, the improvement comes mainly from translation: the translation-only AP is around 7 points better than Feedforward. Meanwhile, the improvement of NMS over Feedforward is limited; as the qualitative examples show, it fails when many objects are close to each other. Finally, Raw Affinity is even worse than Feedforward, since the raw affinity may merge objects incorrectly. We discuss why the affinity is informative, but top-1 similarity is not a good choice, in Sec. 4.3.

We notice our performance gain over Feedforward and NMS is especially large when single-view predictions are reasonable. On the top 25% of examples, where single-view prediction does well, Associative3D outperforms Feedforward and NMS by over 6 points. On the top 50%, the improvement is around 4 points, still significant but slightly lower. When single-view prediction is bad, our performance gain is limited since Associative3D is built upon it; we discuss this in Sec. 4.5 as failure cases.

### 4.3 Inter-view Object Affinity Matrix

Table 2. What the affinity matrix learns, measured by AUROC and Spearman's rank correlation against four labels: same category, same model (conditioned on category), shape similarity (conditioned on category), and same instance (conditioned on model).

| | Category | Model (given Category) | Shape (given Category) | Instance (given Model) |
|---|---|---|---|---|
| AUROC | 0.92 | 0.73 | - | 0.59 |
| Correlation | 0.72 | 0.33 | 0.34 | 0.14 |

We then turn to evaluating how the method works by analyzing individual components. We start with the affinity matrix and study what it learns.

We have three non-mutually exclusive hypotheses: (1) Semantic labels: the affinity is essentially doing object recognition; after detecting the category of an object, it simply matches objects of the same category. (2) Object shapes: the affinity matches objects with similar shapes, since it is constructed from the embedding vectors that are also used to generate shape voxels and object pose. (3) Correspondence: ideally, the affinity matrix should give us ground-truth correspondence, which is challenging given duplicate objects in the scene; for example, an office may contain three identical chairs. These hypotheses represent three different levels of what the affinity matrix may learn, but they are not in conflict: learning semantic labels does not mean the affinity learns nothing about shapes.

We study this by examining a large number of pairs of objects and testing the relationship between affinity and known relationships (e.g., categories, model ids) using ground-truth bounding boxes. We specifically construct three binary labels (same category, same model, same instance) and a continuous shape-similarity label (namely F-score @ 0.05 [49]). When we evaluate shape similarity, we condition on the category to test whether the affinity distinguishes between different models of the same category (e.g., chair). Similarly, we condition on the model when we evaluate instance similarity.

We compute two metrics: a binary classification metric that treats the affinity as a predictor of the label as well as a correlation that tests if a monotonic relationship exists between the affinity and the label. For binary classification, we use AUROC to evaluate the performance since it is invariant to class imbalance and has a natural interpretation. For correlation, we compute Spearman’s rank correlation coefficient [60] between the affinity predictors and labels. This tests how well the relationship between affinity and each label (e.g., shape overlap) fits a monotonic function (1 is perfect agreement, 0 no agreement).
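For reference, AUROC can be computed directly from its pairwise interpretation, with no dependency beyond NumPy (a brute-force sketch, quadratic in the number of positive-negative pairs):

```python
import numpy as np

def auroc(scores, labels):
    """AUROC via its pairwise interpretation: the probability that a
    randomly chosen positive pair outscores a randomly chosen
    negative pair, with ties counting half."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    pos, neg = scores[labels], scores[~labels]
    wins = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return float(wins + 0.5 * ties)

a = auroc([0.9, 0.8, 0.3, 0.2], [1, 0, 1, 0])
```

This pairwise form makes the class-imbalance invariance explicit: duplicating all negatives leaves the value unchanged, which is why AUROC suits the heavily imbalanced affinity labels.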

The results are shown in Table 2. Both the binary classification and the rank correlation show that the affinity matrix is able to distinguish different categories and objects of different shapes, but is sub-optimal at distinguishing the same instance. These results justify our stitching stage, which addresses the problem via joint reasoning; they also explain why Raw Affinity underperforms all other baselines by a large margin in the full-scene evaluation. Additionally, the ability to distinguish categories and shapes provides important guidance to the stitching stage. For example, a sofa and a bed have similar 3D shapes, so it is infeasible to distinguish them by chamfer distance alone, but the affinity matrix can.

### 4.4 Stitching Stage

We evaluate the stitching stage by studying two questions: (1) How well can it predict object correspondence? (2) Can it improve relative camera pose estimation? For example, if the top-1 relative pose is incorrect, could the stitching stage fix it by considering common objects in two views?

*(Figure: object correspondence before and after the stitching stage, for three example pairs.)*

Object Correspondence. To answer the first question, we begin with qualitative examples in Fig. 12, which illustrate object correspondence before and after the stitching stage. Before our stitching stage, our affinity matrix has generated correspondence proposals based on their similarity. However, there are outliers since the affinity is sub-optimal in distinguishing the same instance. The stitching stage removes these outliers.

We evaluate object correspondence in the same setting as Sec. 4.3. Suppose the first and second images have $n$ and $m$ objects respectively; we then have $nm$ pairs, and a pair is a positive example if and only if the objects correspond. We use average precision (AP) to measure performance, since AP emphasizes the low-recall regime [11, 16]. For object $i$ in view 1 and object $j$ in view 2, we produce a confidence score $A_{i,j} + s_{i,j}$, where $s_{i,j} = 1$ if the pair is predicted to be corresponding and $s_{i,j} = -1$ otherwise. This term updates the confidence based on the stitching stage, penalizing pairs that have a high affinity score but are not corresponding.
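Under the assumption that the stitching bonus is ±1 (a hypothetical but representative choice), the confidence update can be sketched as:

```python
import numpy as np

def correspondence_confidence(A, C):
    """Confidence for pair (i, j): affinity plus a stitching bonus,
    +1 if the stitching stage kept the pair and -1 if it rejected
    it, demoting high-affinity pairs ruled out by joint reasoning.
    A: (n, m) affinity; C: (n, m) binary correspondence."""
    return A + np.where(C == 1, 1.0, -1.0)

conf = correspondence_confidence(np.array([[0.9, 0.8], [0.1, 0.7]]),
                                 np.array([[1, 0], [0, 1]]))
```

In the toy input, the pair (1, 2) has high affinity (0.8) but is rejected by stitching, so it ends up ranked below every accepted pair.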

We compare Associative3D with three baselines. (All Negative): the prediction is always negative (the most frequent label); this serves as a lower bound. (Affinity): simply uses the affinity matrix as the confidence. (Affinity Top1): rather than using the raw affinity matrix, uses affinity top-1 similarity as the correspondence and the same confidence strategy as Associative3D. Table 3 shows that our stitching stage improves AP by more than 10 points compared to using affinity top-1 similarity as the correspondence.

Table 3. Object correspondence AP.

| | All Negative | Affinity | Affinity Top1 | Associative3D |
|---|---|---|---|---|
| AP | 10.1 | 38.8 | 49.4 | 60.0 |

Table 4. Relative camera pose errors before (Top-1) and after (Associative3D) the stitching stage.

| Method | Translation Median (m) | Translation Mean (m) | (Err ≤ 1m) % | Rotation Median (°) | Rotation Mean (°) | (Err ≤ 30°) % |
|---|---|---|---|---|---|---|
| Top-1 | 1.24 | 1.80 | 41.26 | 6.96 | 29.90 | 77.56 |
| Associative3D | 0.88 | 1.44 | 54.89 | 6.97 | 29.02 | 78.31 |

Relative Camera Pose Estimation. We next evaluate relative camera pose (i.e., camera translation and rotation) estimation, and whether the stitching stage improves it jointly. We compare the camera pose picked by the stitching stage against the top-1 camera pose predicted by the camera branch, following the rotation and translation metrics of our full-scene evaluation. We summarize results in Table 4. There is a substantial improvement in translation, with the percentage of camera poses within 1m of the ground truth boosted from 41.26% to 54.89%. The improvement in rotation is smaller; we believe this is because the network already starts out working well and can exploit the fact that scenes tend to have three orthogonal directions. In conclusion, the stitching stage mainly improves the prediction of camera translation.

*(Figure: failure cases. Columns: input images (Image 1, Image 2), then Prediction vs. GT from Camera 1, from Camera 2, and from a bird's-eye view.)*

### 4.5 Failure Cases

To better understand the problem of reconstruction from sparse views, we identify some representative failure cases and show them in Fig. 6. While our method is able to generate reasonable results on SUNCG, it still fails in some common cases: (1) the image pair is ambiguous; (2) the single-view backbone does not produce reasonable predictions, as discussed in Sec. 4.2; (3) there are too many similar objects in the scene, which the affinity matrix cannot tell apart since it is sub-optimal at distinguishing the same instance. Our stitching stage is also limited by the random search over object correspondences: due to the factorial growth of the search space, we cannot search all possible correspondences. The balancing of our sub-losses can also be sensitive.

*(Figure: qualitative results on NYUv2. Each example shows the two input images, a side view, and a bird's-eye view of the reconstruction.)*

### 4.6 Results on NYU Dataset

To test generalization, we also evaluate our approach on images from NYUv2 [45]. Our only change is using proposals from a Faster-RCNN [41] trained on COCO [31], since one trained on SUNCG does not generalize well to NYUv2. We do not finetune any models; qualitative results are shown in Fig. 7. Despite training on synthetic data, our model can often produce a reasonable interpretation.

## 5 Conclusion

We have presented Associative3D, which explores 3D volumetric reconstruction from sparse views. While its outputs are reasonable, the failure modes indicate the problem remains challenging for current techniques. Directions for future work include joint learning of object affinity and relative camera pose, and extending the approach to many views and to more natural datasets than SUNCG.

Acknowledgments. We thank Nilesh Kulkarni and Shubham Tulsiani for their help with 3D-RelNet; Zhengyuan Dong for his help with visualization; Tianning Zhu for his help with the video; and Richard Higgins, Dandan Shan, Chris Rockwell, and Tongan Cai for their feedback on the draft. Toyota Research Institute ("TRI") provided funds to assist the authors with their research, but this article solely reflects the opinions and conclusions of its authors and not TRI or any other Toyota entity.

## References

- [1] Bao, S.Y., Bagra, M., Chao, Y.W., Savarese, S.: Semantic structure from motion with points, regions, and objects. In: CVPR (2012)
- [2] Bowman, S.L., Atanasov, N., Daniilidis, K., Pappas, G.J.: Probabilistic data association for semantic slam. In: ICRA (2017)
- [3] Bromley, J., Guyon, I., LeCun, Y., Säckinger, E., Shah, R.: Signature verification using a "siamese" time delay neural network. In: NeurIPS (1994)
- [4] Chang, A., Dai, A., Funkhouser, T., Halber, M., Niessner, M., Savva, M., Song, S., Zeng, A., Zhang, Y.: Matterport3d: Learning from rgb-d data in indoor environments. arXiv preprint arXiv:1709.06158 (2017)
- [5] Chen, W., Qian, S., Deng, J.: Learning single-image depth from videos using quality assessment networks. In: CVPR (2019)
- [6] Chen, Y., Huang, S., Yuan, T., Qi, S., Zhu, Y., Zhu, S.C.: Holistic++ scene understanding: Single-view 3D holistic scene parsing and human pose estimation with human-object interaction and physical commonsense. In: The IEEE International Conference on Computer Vision (ICCV) (2019)
- [7] Choy, C.B., Gwak, J., Savarese, S., Chandraker, M.: Universal correspondence network. In: NeurIPS. pp. 2414–2422 (2016)
- [8] Choy, C.B., Xu, D., Gwak, J., Chen, K., Savarese, S.: 3D-R2N2: A unified approach for single and multi-view 3d object reconstruction. In: ECCV (2016)
- [9] Crandall, D., Owens, A., Snavely, N., Huttenlocher, D.: SfM with MRFs: Discrete-continuous optimization for large-scale structure from motion. Transactions on Pattern Analysis and Machine Intelligence (PAMI) (2013)
- [10] Dai, A., Chang, A.X., Savva, M., Halber, M., Funkhouser, T., Nießner, M.: Scannet: Richly-annotated 3d reconstructions of indoor scenes. In: CVPR (2017)
- [11] Davis, J., Goadrich, M.: The relationship between Precision-Recall and ROC curves. In: ICML (2006)
- [12] Du, Y., Liu, Z., Basevi, H., Leonardis, A., Freeman, B., Tenenbaum, J., Wu, J.: Learning to exploit stability for 3D scene parsing. In: NeurIPS (2018)
- [13] Duggal, S., Wang, S., Ma, W.C., Hu, R., Urtasun, R.: DeepPruner: Learning efficient stereo matching via differentiable patchmatch. In: ICCV (2019)
- [14] Eigen, D., Fergus, R.: Predicting depth, surface normals and semantic labels with a common multi-scale convolutional architecture. In: ICCV (2015)
- [15] En, S., Lechervy, A., Jurie, F.: RPNet: an end-to-end network for relative camera pose estimation. In: ECCV (2018)
- [16] Everingham, M., Van Gool, L., Williams, C.K., Winn, J., Zisserman, A.: The pascal visual object classes (voc) challenge. International journal of computer vision 88(2), 303–338 (2010)
- [17] Girdhar, R., Fouhey, D., Rodriguez, M., Gupta, A.: Learning a predictable and generative vector representation for objects. In: ECCV (2016)
- [18] Gkioxari, G., Malik, J., Johnson, J.: Mesh r-cnn. In: ICCV (2019)
- [19] Groueix, T., Fisher, M., Kim, V.G., Russell, B., Aubry, M.: AtlasNet: A Papier-Mâché Approach to Learning 3D Surface Generation. In: CVPR (2018)
- [20] Hartley, R.I., Zisserman, A.: Multiple View Geometry in Computer Vision. Cambridge University Press, ISBN: 0521540518, second edn. (2004)
- [21] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: CVPR (2016)
- [22] Huang, P.H., Matzen, K., Kopf, J., Ahuja, N., Huang, J.B.: DeepMVS: Learning multi-view stereopsis. In: CVPR (2018)
- [23] Huang, S., Qi, S., Zhu, Y., Xiao, Y., Xu, Y., Zhu, S.C.: Holistic 3D scene parsing and reconstruction from a single rgb image. In: ECCV (2018)
- [24] Huang, Z., Li, T., Chen, W., Zhao, Y., Xing, J., LeGendre, C., Luo, L., Ma, C., Li, H.: Deep volumetric video from very sparse multi-view performance capture. In: ECCV (2018)
- [25] Kar, A., Häne, C., Malik, J.: Learning a multi-view stereo machine. In: NeurIPS (2017)
- [26] Kendall, A., Grimes, M., Cipolla, R.: Posenet: A convolutional network for real-time 6-dof camera relocalization. In: ICCV (2015)
- [27] Kulkarni, N., Misra, I., Tulsiani, S., Gupta, A.: 3D-RelNet: Joint object and relational network for 3D prediction. In: ICCV (2019)
- [28] Ladický, L., Zeisl, B., Pollefeys, M.: Discriminatively trained dense surface normal estimation. In: ECCV (2014)
- [29] Lasinger, K., Ranftl, R., Schindler, K., Koltun, V.: Towards robust monocular depth estimation: Mixing datasets for zero-shot cross-dataset transfer. arXiv preprint arXiv:1907.01341 (2019)
- [30] Li, L., Khan, S., Barnes, N.: Silhouette-assisted 3D object instance reconstruction from a cluttered scene. In: ICCV Workshops (2019)
- [31] Lin, T.Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollár, P., Zitnick, C.L.: Microsoft COCO: Common objects in context. In: ECCV (2014)
- [32] Liu, C., Kim, K., Gu, J., Furukawa, Y., Kautz, J.: PlaneRCNN: 3D plane detection and reconstruction from a single image. In: CVPR (2019)
- [33] Liu, C., Wu, J., Furukawa, Y.: Floornet: A unified framework for floorplan reconstruction from 3d scans. In: ECCV (2018)
- [34] Lowe, D.G.: Distinctive image features from scale-invariant keypoints. International journal of computer vision 60(2), 91–110 (2004)
- [35] Maaten, L.v.d., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(Nov), 2579–2605 (2008)
- [36] Melekhov, I., Ylioinas, J., Kannala, J., Rahtu, E.: Relative camera pose estimation using convolutional neural networks. In: International Conference on Advanced Concepts for Intelligent Vision Systems. pp. 675–687. Springer (2017)
- [37] Mishkin, D., Perdoch, M., Matas, J.: Mods: Fast and robust method for two-view matching. CVIU 1(141), 81–93 (2015)
- [38] Nie, Y., Han, X., Guo, S., Zheng, Y., Chang, J., Zhang, J.J.: Total3dunderstanding: Joint layout, object pose and mesh reconstruction for indoor scenes from a single image. In: CVPR (2020)
- [39] Price, A., Jin, L., Berenson, D.: Inferring occluded geometry improves performance when retrieving an object from dense clutter. International Symposium on Robotics Research (ISRR) (2019)
- [40] Pritchett, P., Zisserman, A.: Wide baseline stereo matching. In: ICCV (1998)
- [41] Ren, S., He, K., Girshick, R., Sun, J.: Faster R-CNN: Towards real-time object detection with region proposal networks. In: NeurIPS (2015)
- [42] Richter, S.R., Roth, S.: Matryoshka networks: Predicting 3D geometry via nested shape layers. In: CVPR (2018)
- [43] Salas-Moreno, R.F., Newcombe, R.A., Strasdat, H., Kelly, P.H., Davison, A.J.: Slam++: Simultaneous localisation and mapping at the level of objects. In: CVPR (2013)
- [44] Sharma, G., Goyal, R., Liu, D., Kalogerakis, E., Maji, S.: Csgnet: Neural shape parser for constructive solid geometry. In: CVPR (2018)
- [45] Silberman, N., Hoiem, D., Kohli, P., Fergus, R.: Indoor segmentation and support inference from RGBD images. In: ECCV (2012)
- [46] Song, S., Yu, F., Zeng, A., Chang, A.X., Savva, M., Funkhouser, T.: Semantic scene completion from a single depth image. In: CVPR (2017)
- [47] Sui, Z., Chang, H., Xu, N., Jenkins, O.C.: Geofusion: Geometric consistency informed scene estimation in dense clutter. arXiv:2003.12610 (2020)
- [48] Sun, X., Wu, J., Zhang, X., Zhang, Z., Zhang, C., Xue, T., Tenenbaum, J.B., Freeman, W.T.: Pix3d: Dataset and methods for single-image 3d shape modeling. In: CVPR (2018)
- [49] Tatarchenko, M., Richter, S.R., Ranftl, R., Li, Z., Koltun, V., Brox, T.: What do single-view 3d reconstruction networks learn? In: CVPR (2019)
- [50] Tulsiani, S., Gupta, S., Fouhey, D.F., Efros, A.A., Malik, J.: Factoring shape, pose, and layout from the 2D image of a 3D scene. In: CVPR (2018)
- [51] Wang, Q., Zhou, X., Daniilidis, K.: Multi-image semantic matching by mining consistent features. In: CVPR (2018)
- [52] Wang, X., Fouhey, D., Gupta, A.: Designing deep networks for surface normal estimation. In: CVPR (2015)
- [53] Wu, J., Wang, Y., Xue, T., Sun, X., Freeman, B., Tenenbaum, J.: Marrnet: 3D shape reconstruction via 2.5D sketches. In: NeurIPS (2017)
- [54] Yang, Z., Pan, J.Z., Luo, L., Zhou, X., Grauman, K., Huang, Q.: Extreme relative pose estimation for rgb-d scans via scene completion. In: CVPR (2019)
- [55] Yang, Z., Yan, S., Huang, Q.: Extreme relative pose network under hybrid representations. In: CVPR (2020)
- [56] Zhang, X., Zhang, Z., Zhang, C., Tenenbaum, J., Freeman, B., Wu, J.: Learning to reconstruct shapes from unseen classes. In: NeurIPS (2018)
- [57] Zhang, Y., Song, S., Yumer, E., Savva, M., Lee, J.Y., Jin, H., Funkhouser, T.: Physically-based rendering for indoor scene understanding using convolutional neural networks. In: CVPR (2017)
- [58] Zhou, T., Brown, M., Snavely, N., Lowe, D.G.: Unsupervised learning of depth and ego-motion from video. In: CVPR (2017)
- [59] Zitnick, C.L., Dollár, P.: Edge boxes: Locating object proposals from edges. In: ECCV (2014)
- [60] Zwillinger, D., Kokoska, S.: CRC standard probability and statistics tables and formulae. Crc Press (1999)

## Appendix 0.A Implementation

Detection proposals. Compared to prior works [27, 50], which used edge boxes [59], we use more advanced object proposals, since we found that edge boxes were often the limiting factor. Specifically, we train a class-agnostic Faster-RCNN [41] to generate proposals, treating all objects as foreground.

Object Branch. For each object, the object branch predicts a 300-dimensional feature vector representing its 3D properties. Separate linear layers predict its shape embedding, translation, rotation, scale, and object embedding. For the object embedding, we use three linear layers with output sizes of 256, 128, and 64, respectively, so the final layer produces a 64-dimensional embedding.
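These per-object embeddings drive the inter-view affinity matrix. As a minimal sketch, one plausible formulation (the sigmoid-of-scaled-cosine mapping here is our assumption, not necessarily the paper's exact choice) is:

```python
import numpy as np

def affinity_matrix(emb1, emb2):
    """Inter-view object affinity from object embeddings.

    emb1: (n1, d) embeddings for objects in view 1; emb2: (n2, d) for view 2.
    Returns an (n1, n2) matrix in (0, 1): sigmoid of scaled cosine similarity.
    """
    e1 = emb1 / np.linalg.norm(emb1, axis=1, keepdims=True)
    e2 = emb2 / np.linalg.norm(emb2, axis=1, keepdims=True)
    sim = e1 @ e2.T                           # cosine similarity in [-1, 1]
    return 1.0 / (1.0 + np.exp(-5.0 * sim))   # squash to (0, 1)

rng = np.random.default_rng(0)
A = affinity_matrix(rng.normal(size=(4, 64)), rng.normal(size=(3, 64)))
matches = A > 0.5   # candidate correspondences (0.5 threshold, as in the appendix)
print(A.shape)      # (4, 3)
```

Entries above the 0.5 threshold become candidate object correspondences for the stitching stage.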

We train the object branch in two stages. In the first stage, we follow the training of 3D-RelNet [27] with ground-truth bounding boxes; the affinity-matrix loss is ignored in this stage. In the second stage, we freeze all layers except the linear layers that predict the object embeddings, and apply only the affinity loss. In both stages, we optimize the model with Adam with learning rate and momentum 0.9, with a batch size of 24. Although 3D-RelNet is finetuned on detection proposals, we only use the intermediate model trained with ground-truth bounding boxes because (1) 3D-RelNet is finetuned on edge-box proposals and our Faster-RCNN proposals are good enough, and (2) ground-truth affinity is only available with ground-truth bounding boxes.

Camera Branch. The object and camera branches are trained independently. Translation is represented as a 3D vector and rotation as a quaternion. We run k-means clustering on the training set to produce 60 translation bins and 30 rotation bins. For rotation, we use spherical k-means to ensure the centroids are unit quaternions.
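Spherical k-means is standard Lloyd iteration with the centroids re-projected onto the unit sphere after every update. A minimal illustrative re-implementation (not the paper's code; quaternion sign ambiguity is not handled here):

```python
import numpy as np

def spherical_kmeans(x, k, iters=50, seed=0):
    """Minimal spherical k-means: centroids constrained to the unit sphere.

    x: (n, d) vectors (e.g., quaternions); returns (k, d) unit-norm centroids.
    """
    x = x / np.linalg.norm(x, axis=1, keepdims=True)
    rng = np.random.default_rng(seed)
    centers = x[rng.choice(len(x), k, replace=False)]
    for _ in range(iters):
        assign = np.argmax(x @ centers.T, axis=1)   # nearest centroid by cosine
        for j in range(k):
            members = x[assign == j]
            if len(members):
                m = members.sum(axis=0)
                centers[j] = m / np.linalg.norm(m)  # re-project onto the sphere
    return centers

# e.g., 30 rotation bins from (hypothetical) training-set quaternions
quats = np.random.default_rng(1).normal(size=(500, 4))
bins = spherical_kmeans(quats, k=30)
print(np.allclose(np.linalg.norm(bins, axis=1), 1.0))  # True
```

The re-projection step is what guarantees the rotation bin centroids remain valid unit quaternions.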

The input image pairs are resized to 224x224 and passed through a siamese network with a ResNet-50 backbone pre-trained on ImageNet [21]. The outputs of the two siamese towers are concatenated and passed through a linear layer, producing a 128-dimensional vector. This vector is then fed to a translation branch and a rotation branch, each a single linear layer outputting scores over the 60 translation bins and 30 rotation bins, respectively.
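The wiring above can be sketched in PyTorch as follows; the tiny convolutional backbone is a stand-in for the ResNet-50 used in the paper, and all layer names are illustrative:

```python
import torch
import torch.nn as nn

class CameraBranch(nn.Module):
    """Sketch of the siamese relative-pose head (dimensions from the appendix)."""

    def __init__(self, feat_dim=512):
        super().__init__()
        # Placeholder backbone; the paper uses an ImageNet-pretrained ResNet-50.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=7, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(16 * 4 * 4, feat_dim), nn.ReLU())
        self.bottleneck = nn.Linear(2 * feat_dim, 128)  # 128-d joint vector
        self.trans_head = nn.Linear(128, 60)            # 60 translation bins
        self.rot_head = nn.Linear(128, 30)              # 30 rotation bins

    def forward(self, img1, img2):
        # Shared weights: the same backbone embeds both views.
        f = torch.cat([self.backbone(img1), self.backbone(img2)], dim=1)
        h = self.bottleneck(f)
        return self.trans_head(h), self.rot_head(h)

net = CameraBranch()
t_logits, r_logits = net(torch.randn(2, 3, 224, 224),
                         torch.randn(2, 3, 224, 224))
print(t_logits.shape, r_logits.shape)  # torch.Size([2, 60]) torch.Size([2, 30])
```

Each head is trained with cross-entropy against the k-means bin labels, as described below.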

Our loss function is the cross-entropy loss; the losses for the translation and rotation predictions are weighted equally. We use stochastic gradient descent with learning rate and momentum , with a batch size of 32. We also augment the data by reversing the order of each image pair.

Tuning the stitching stage. The search space contains the top-3 rotation, top-10 translation, and top-128 object-correspondence hypotheses. The affinity threshold is 0.5. The loss weights , , and are tuned as hyperparameters on the validation set to preclude the trivial solution. We use , . For , we use 5 for rotation and 1 for translation.
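Concretely, the stitching stage scores every combination of hypotheses and keeps the best one. A sketch of this search, where the toy `score` stands in for the weighted sum of stitching sub-losses (the function and variable names are ours):

```python
import itertools
import numpy as np

def stitch_search(rot_hyps, trans_hyps, corr_hyps, score_fn):
    """Pick the lowest-scoring (rotation, translation, correspondence) triple.

    Exhaustive over the 3 x 10 x 128 hypothesis grid described in the text;
    score_fn returns the weighted sum of sub-losses (lower is better).
    """
    best, best_score = None, np.inf
    for hyp in itertools.product(rot_hyps, trans_hyps, corr_hyps):
        s = score_fn(*hyp)
        if s < best_score:
            best, best_score = hyp, s
    return best, best_score

rots = [0, 1, 2]                      # top-3 rotation hypotheses
trans = list(range(10))               # top-10 translation hypotheses
corrs = list(range(128))              # top-128 correspondence hypotheses
# Toy score: rotation term weighted 5, translation term weighted 1 (as above).
score = lambda r, t, c: 5 * abs(r - 1) + 1 * abs(t - 7) + abs(c - 3)
best, s = stitch_search(rots, trans, corrs, score)
print(best, s)  # (1, 7, 3) 0
```

The 3 x 10 x 128 grid is tractable to enumerate; it is the correspondence hypotheses themselves that are randomly sampled, since the full correspondence space grows factorially.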

## Appendix 0.B Visualization of the Object Embedding Space

We use t-SNE [35] to visualize the object embedding space and inspect what the affinity matrix learns. We show our results in Fig. 8.

Even without semantic labels as supervision, objects of the same category lie close to each other. We also notice that tables and desks have similar embeddings. The object embeddings can partially distinguish 3D models, though not as well as semantic labels would.
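A visualization like Fig. 8 can be reproduced with scikit-learn's t-SNE; here the embeddings are a random stand-in for the predicted 64-d object embeddings:

```python
import numpy as np
from sklearn.manifold import TSNE

# Hypothetical stand-in for the 64-d object embeddings from the object branch.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(200, 64)).astype(np.float32)

# Project to 2D for plotting; perplexity is a tunable neighborhood size.
xy = TSNE(n_components=2, perplexity=10, random_state=0).fit_transform(embeddings)
print(xy.shape)  # (200, 2)
```

The 2D points would then be scatter-plotted, colored by object category, to check whether same-category objects cluster together.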

## Appendix 0.C Proposals of Camera Pose Transformation

In the stitching stage, we select the top-3 most likely bins for rotation and the top-10 most likely bins for translation. To motivate these numbers, we evaluate top-K classification accuracy on the validation set; Fig. 9 shows the top-K accuracy for translation and rotation. The top-1 accuracy is not high, but the top-3 rotation bins and top-10 translation bins ensure high accuracy within a relatively small search space, so we use these in the stitching stage. At test time, the top-3 rotation accuracy is 88.7% and the top-10 translation accuracy is 83.6%.
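Top-K accuracy simply asks whether the ground-truth bin appears among the K highest-scoring bins. A minimal numpy implementation on toy logits:

```python
import numpy as np

def topk_accuracy(logits, labels, k):
    """Fraction of examples whose true bin is among the k highest-scoring bins."""
    topk = np.argsort(-logits, axis=1)[:, :k]
    return float(np.mean([labels[i] in topk[i] for i in range(len(labels))]))

logits = np.array([[0.1, 0.7, 0.2],
                   [0.5, 0.3, 0.2],
                   [0.2, 0.3, 0.5]])
labels = np.array([1, 1, 1])   # toy ground-truth bin indices
print(round(topk_accuracy(logits, labels, 1), 3))  # 0.333
print(round(topk_accuracy(logits, labels, 2), 3))  # 1.0
```

This is why widening the hypothesis set from top-1 to top-3/top-10 buys accuracy at a modest cost in search-space size.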

## Appendix 0.D Comparison with Single-view Baselines

In the full-scene evaluation, we are also interested in whether the multi-view setting helps, since we only add one additional sparse view. We address this question by comparing against single-view baselines: we take a prediction from 3D-RelNet [27] on one view of the pair, selected at random. On the whole test set, the AP is 13.7, which significantly underperforms all of our baselines. This shows that our approach improves substantially over single-view baselines, and that the second view helps reconstruct the scene.

## Appendix 0.E Merging Corresponding Objects

When our approach finds corresponding objects in the two views, we average their translation and scale, but pick the shape and rotation from one view at random. Here we study alternative options. We use the top 50% of test examples ranked by the performance of the single-view predictions, so that differences are more visible.

First, we show empirically that rotations cannot be averaged, since there are typically multiple rotation modes. In Table 5, we compare the performance of averaging the rotations against picking one at random; averaging dramatically hurts performance.

| | All | Shape | Trans | Rot | Scale |
|---|---|---|---|---|---|
| random rot | 38.8 | 27.3 | 39.6 | 33.2 | 35.1 |
| average rot | 31.4 | 27.3 | 40.2 | 29.2 | 35.1 |
| improvement | -7.4 | +0.0 | +0.6 | -4.0 | +0.0 |
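The failure of rotation averaging is easy to demonstrate: the renormalized mean of two quaternions 180 degrees apart is a rotation 90 degrees away from both modes. A small numpy illustration:

```python
import numpy as np

def quat_angle(q1, q2):
    """Geodesic angle (degrees) between two unit quaternions."""
    d = abs(np.dot(q1, q2))
    return np.degrees(2 * np.arccos(np.clip(d, -1.0, 1.0)))

# Two plausible rotation modes (w, x, y, z): identity vs. 180 degrees about z.
q_a = np.array([1.0, 0.0, 0.0, 0.0])
q_b = np.array([0.0, 0.0, 0.0, 1.0])

avg = (q_a + q_b) / 2
avg /= np.linalg.norm(avg)   # renormalized naive average

print(quat_angle(avg, q_a), quat_angle(avg, q_b))  # ~90 degrees from BOTH modes
```

The averaged rotation matches neither hypothesis, which is consistent with the drop in the Rot column of Table 5.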

In contrast, a reasonable way to average the shapes is to average the vector representations of the objects [17]. In Table 6, we compare the two options; their performance is almost identical, so we choose the simpler approach of picking one shape at random.

| | All | Shape | Trans | Rot | Scale |
|---|---|---|---|---|---|
| average shape | 38.8 | 27.3 | 39.6 | 33.2 | 35.1 |
| random shape | 38.8 | 27.2 | 39.6 | 33.2 | 35.1 |
| improvement | +0.0 | -0.1 | +0.0 | +0.0 | +0.0 |

## Appendix 0.F Additional Qualitative Results

We show additional qualitative examples in Figs. 10, 11, and 12, which follow the same format as Figs. 3, 4, and 5 of the main paper. For better visualization, we also include a video in the supplementary material.

(Figures 10-12 column layouts, in order: Image 1, Image 2, and Prediction/GT pairs for Camera 1, Camera 2, and a bird's-eye view; Image 1, Image 2, Feedforward, NMS, Raw Affinity, Associative3D, GT; and Before/After pairs.)