Translational registration proceeds in several stages. First, the direction of translation (baseline) between each adjacent pair of cameras is determined. Next, these pairwise direction estimates are assembled into a global set of constraints to find the relative positions of all cameras. Finally, the relative positions are aligned to absolute (earth-relative) positions by an optimal Euclidean transformation.

Point features are extracted from each image, and every possible correspondence (match) between points is evaluated. Correspondences are ruled out according to hard geometric constraints, such as positive depth and consistency of constituent edge directions. What remains is a much smaller set of putative matches between features, which is then used to refine the initial translation direction obtained from the instrumentation. When the rotations are known, the epipolar geometry for a given camera pair simplifies considerably: epipolar lines become arcs of great circles on the sphere, and the epipole, the common intersection of all such arcs, is the motion direction. This allows the most likely direction of motion to be found by accumulating the intersections of all possible epipolar arcs in a Hough transform space and locating the peak, or point of maximum incidence. The Hough transform produces a fairly accurate estimate of the translation direction, within a half degree or so. A subsequent step refines this direction by sampling from the distribution over all possible correspondence sets, thus producing a probabilistic correspondence. A Monte Carlo expectation maximization (MCEM) algorithm alternates between estimating this correspondence distribution (E step) and estimating the direction given the current distribution estimate (M step).

Note that for a given pair of adjacent cameras, the translation can be determined only up to scale; that is, we can find only the _direction_ of the motion, not its magnitude. Once this direction is found for each camera pair, its relative magnitude can be determined by optimizing over a set of linear constraints, namely the set of all inter-camera motion directions. This is achieved using constrained linear least squares, the result of which is a set of mutually consistent camera positions relative to an arbitrary frame of reference. The final step is to register the resulting camera poses with respect to an earth-relative coordinate frame. The entire set of cameras is translated, rotated, and scaled according to the global transformation that best aligns the estimated positions with the originally acquired positions.
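
The Hough accumulation described above can be sketched as follows. This is an illustrative example rather than the implementation used here; the azimuth-elevation discretization of the sphere, the half-degree tolerance, and the function name are assumptions. Each putative match (p, q), with both rays rotated into a common orientation, constrains the epipole e to the great circle e · (p × q) = 0, so every cell near that circle receives a vote and the peak of the accumulator is taken as the motion direction.

```python
import numpy as np

def hough_translation_direction(rays_a, rays_b, n_bins=180):
    """Vote for the epipole (translation direction) on a discretized sphere.

    rays_a, rays_b : (N, 3) unit rays of the putative matches, already
    rotated into a common orientation (rotations are assumed known).
    """
    # Great-circle normals, one per putative correspondence.
    normals = np.cross(rays_a, rays_b)
    normals /= np.linalg.norm(normals, axis=1, keepdims=True)

    # Discretize the sphere by azimuth/elevation (a simple, non-uniform grid).
    az = np.linspace(-np.pi, np.pi, 2 * n_bins)
    el = np.linspace(-np.pi / 2, np.pi / 2, n_bins)
    A, E = np.meshgrid(az, el)
    dirs = np.stack([np.cos(E) * np.cos(A),
                     np.cos(E) * np.sin(A),
                     np.sin(E)], axis=-1)            # (n_el, n_az, 3)

    # A cell votes for a match when it lies (nearly) on that match's
    # great circle; the tolerance below is an assumed value.
    tol = np.deg2rad(0.5)
    dist = np.abs(dirs @ normals.T)                  # sin(angle to each plane)
    votes = (dist < np.sin(tol)).sum(axis=-1)

    # Peak of the accumulator is the most likely motion direction.
    i, j = np.unravel_index(np.argmax(votes), votes.shape)
    return dirs[i, j]
```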
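
The MCEM refinement can likewise be sketched under simplifying assumptions: each putative match is modeled with an independent inlier indicator, a Gaussian epipolar residual for inliers, and a constant pseudo-likelihood for outliers. The parameter values and function name are illustrative, not those of the actual system.

```python
import numpy as np

def mcem_refine_direction(rays_a, rays_b, e0, n_iter=20, n_samples=50,
                          sigma=np.deg2rad(0.25), outlier_lik=0.1):
    """Refine a translation direction by Monte Carlo EM over correspondences.

    E step: sample inlier indicators for each putative match from their
    posterior given the current direction.  M step: re-fit the direction
    as the unit vector minimizing the epipolar residuals over the sampled
    correspondence sets (smallest eigenvector of the weighted scatter).
    """
    normals = np.cross(rays_a, rays_b)
    normals /= np.linalg.norm(normals, axis=1, keepdims=True)
    e = e0 / np.linalg.norm(e0)
    rng = np.random.default_rng(0)

    for _ in range(n_iter):
        # E step: posterior inlier probability per match (Gaussian residual
        # vs. an assumed constant outlier likelihood), then sampled sets.
        r = normals @ e                               # epipolar residuals
        inlier = np.exp(-0.5 * (r / sigma) ** 2)
        p = inlier / (inlier + outlier_lik)
        z = rng.random((n_samples, len(p))) < p       # sampled indicator sets

        # M step: weight each match by how often it was sampled as an inlier
        # and minimize the weighted residuals subject to ||e|| = 1.
        w = z.mean(axis=0)
        M = (normals * w[:, None]).T @ normals
        evals, evecs = np.linalg.eigh(M)
        e_new = evecs[:, 0]
        e = e_new if e_new @ e > 0 else -e_new        # keep a consistent sign
    return e
```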
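
For the global positioning stage, one way to pose the constrained linear least-squares problem (again a sketch, with hypothetical function names and a gauge choice that is an assumption) is to require each baseline x_j - x_i to be parallel to its estimated direction d_ij, fix camera 0 at the origin, and resolve the remaining scale ambiguity with a unit-norm constraint:

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix so that skew(v) @ u == np.cross(v, u)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def positions_from_directions(n_cams, pair_dirs):
    """Recover relative camera positions from pairwise baseline directions.

    pair_dirs : dict mapping (i, j) -> unit direction of x_j - x_i.
    Each pair contributes the linear constraint  d_ij x (x_j - x_i) = 0.
    """
    rows = []
    for (i, j), d in pair_dirs.items():
        S = skew(np.asarray(d, dtype=float))
        row = np.zeros((3, 3 * (n_cams - 1)))         # camera 0 fixed at origin
        if i > 0:
            row[:, 3 * (i - 1):3 * i] -= S
        if j > 0:
            row[:, 3 * (j - 1):3 * j] += S
        rows.append(row)
    A = np.vstack(rows)

    # Unit-norm minimizer of ||A x||: right singular vector of the smallest
    # singular value.  The overall sign (and scale) remains arbitrary and
    # can be fixed against the measured directions afterwards.
    _, _, vt = np.linalg.svd(A)
    positions = np.vstack([np.zeros(3), vt[-1].reshape(-1, 3)])
    return positions
```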
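
The final earth-relative registration is a standard absolute-orientation problem: find the similarity transform (scale, rotation, translation) that best aligns the estimated positions with the instrumented ones in the least-squares sense. The sketch below follows the usual SVD-based solution; it is an illustration of that generic technique, not necessarily the variant used in the system.

```python
import numpy as np

def align_similarity(est, ref):
    """Best-fit similarity transform mapping estimated positions onto
    reference (earth-relative) positions.

    est, ref : (N, 3) arrays of corresponding camera positions.
    Returns s, R, t such that  ref ~ s * (R @ est.T).T + t.
    """
    mu_e, mu_r = est.mean(axis=0), ref.mean(axis=0)
    E, Y = est - mu_e, ref - mu_r

    # Rotation from the SVD of the cross-covariance, with a reflection guard.
    U, S, Vt = np.linalg.svd(Y.T @ E)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    R = U @ D @ Vt

    # Optimal scale and translation given the rotation.
    s = np.trace(np.diag(S) @ D) / np.sum(E ** 2)
    t = mu_r - s * R @ mu_e
    return s, R, t
```

Applying the returned transform to every estimated camera position yields the earth-relative poses while preserving their mutually consistent relative geometry.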