Edit parameters

Figure 3.8: Edit tools: (i) Edit settings, (ii) Focal length, (iii) Rectify, (iv) Generate point cloud, (v) Set coordinate reference system
Click the Pen edit button, select Settings->Edit, or press E on the keyboard to edit parameters.
This opens the project settings dialog.
Several experimental parameters are marked as Advanced.
These should in general not be changed.
The parameters most commonly adjusted, for example to increase the number of matched points or to speed up processing, are (see the detailed descriptions below):
- Feature threshold
- Match threshold
- Search space
- Match window size
This section covers:
- Project settings dialog
- Set start frame
- Focal length
- Point cloud rectification
- Redo
- Coordinate reference system
- Dense matching
- Change depth of stereo video
- Time synchronization
Project settings dialog
When you have created a new project or opened an existing project, you can edit most parameters by selecting
Settings->Edit or pressing 'E'.

Figure 3.9: Settings dialogs for videos and images
- Last pyramid level. To maintain real-time performance, it may not be possible to do image matching all the way down to the original resolution while the videos are playing.
The last pyramid level defines at which image pyramid level image matching stops while a project is running. Processing continues to the full resolution when the project is paused; see the Dense matching section below.
- Feature threshold. Edges are extracted in each video frame with sub-pixel accuracy. This threshold defines the minimum strength of edges. Lowering this
threshold will thus create more edges (but in general also more noise).
- Match threshold. The match threshold defines the minimum similarity between matched edges for the match to be accepted and used for sensor modelling. Lowering
this threshold will accept more matches (and increase the risk to include more incorrect matches).
- Search space. Search space defines the neighbourhood of pixels to examine for matches. For most parts this is automatically set, but this threshold defines
the search space at each pyramid level when a stable sensor orientation has been found. A larger search space is automatically used when the solution is unstable.
- Image registration. A coarse image registration is performed at project startup and whenever matching is automatically identified as unsuccessful. If
metadata such as camera position and orientation are available, a module for metadata-based image registration is automatically selected. If such metadata are missing, a less accurate
translation-based image registration is selected. You can select the translation-based image registration even when metadata are available, but not the opposite.
- Refine sensor model. Leave this on unless you prefer slightly faster processing at the cost of decreased accuracy.
- Reuse cameras. Leave this unchecked. When checked, the estimation of sensor parameters is initialized with the camera parameters of the last stereo pair, instead of estimating the parameters from scratch, independently of previous parameters.
Even when this works, it will often degrade the 3D model and its rectification incrementally for each new stereo pair.
- Median variance. To eliminate outliers, a variance of unit weight needs to be estimated. With the default behaviour, the estimated variance is itself biased by the outliers.
When this is checked, the variance is computed with a more robust median-based estimator.
However, the effect is in general negligible, so the recommendation is to keep this unchecked.
- Edge focusing. When checked, matched edges at one pyramid level are propagated to the next level.
When unchecked, edge pixels at one level search for successful matches from a previous level.
The recommendation is to leave this checked.
- Compute 3D accuracy. When this is checked, the plane and height accuracies of the 3D coordinates are estimated.
They are not used internally, but are exported to ibt files.
To avoid unnecessary processing, it is recommended to keep this unchecked unless you want to export plane and height accuracies to file.
- Rectify. See the Point cloud rectification section below.
- Y parallax threshold. The vertical search space in the epipolar images is based on the accuracy of the last sensor parameter estimation.
This value defines how many standard deviations the search space should be.
Default is three.
- Z filter, Z low and high thresholds. When this is checked, extremely high and/or low height values are eliminated from the 3D point cloud.
- Percent outliers. If set to any value other than zero, sensor parameters are estimated with RANSAC.
Percent outliers is not a correct name, since you need to set this to values around 25-50. The default and recommended behaviour is to not rely on RANSAC and instead eliminate outliers as part of the iterative sensor parameter estimation.
- Feature type. Select if points or edges should be used as matching features.
Points as feature type is currently unstable, so the recommendation is to use edges.
- Interpolation. Cubic convolution (default) improves the quality of generated 3D points compared to bilinear interpolation.
- Video stabilization. Individual sensor orientations may be uncertain, but assuming a smooth trajectory, previous sensor orientations can be used to stabilize
the resulting stereo video. Note that this does not stabilize the source videos themselves.
- Match window. The size of the square window used as the neighbourhood of each image point in template matching. In general, a larger window gives
more robust matches, but details may be lost and processing becomes slower.
- Noise filter. The kind of noise filtering applied before edge extraction and matching takes place. Larger filters give fewer details but may be more robust. Fewer edges
are created when you increase the filter size, so you may want to decrease the feature threshold when you increase the filter size.
The filter is only applied to the internal image used for processing, not to the displayed image/frame.
- Point cloud format. Defines the point cloud file format used when exporting 3D point clouds.
PCD is significantly faster to save, but cannot handle coordinate reference systems.
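The interplay between the most typical parameters above (feature threshold, match threshold, search space, and match window) can be illustrated with a generic template-matching sketch. This is not the application's actual implementation; all function names are illustrative, and normalized cross-correlation is used as a stand-in for whatever similarity measure the software employs.

```python
import numpy as np

def strong_edges(img, feature_threshold=0.2):
    """Pixels whose gradient magnitude exceeds the feature threshold;
    lowering the threshold yields more candidate features (and more noise)."""
    gy, gx = np.gradient(img.astype(float))
    return np.argwhere(np.hypot(gx, gy) > feature_threshold)

def ncc(a, b):
    """Normalized cross-correlation between two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def match_point(left, right, pt, guess,
                match_window=7, search_space=5, match_threshold=0.8):
    """Search a (2*search_space+1)^2 neighbourhood around `guess` in the
    right image for the best match of the window centred on `pt` in the
    left image. Returns (row, col, score), or None when the best score
    falls below the match threshold."""
    h = match_window // 2
    tmpl = left[pt[0]-h:pt[0]+h+1, pt[1]-h:pt[1]+h+1]
    best = None
    for dr in range(-search_space, search_space + 1):
        for dc in range(-search_space, search_space + 1):
            r, c = guess[0] + dr, guess[1] + dc
            patch = right[r-h:r+h+1, c-h:c+h+1]
            if patch.shape != tmpl.shape:
                continue  # window falls partly outside the image
            score = ncc(tmpl, patch)
            if best is None or score > best[2]:
                best = (r, c, score)
    return best if best is not None and best[2] >= match_threshold else None
```

The sketch makes the trade-offs concrete: a larger search space or match window means more candidate comparisons per point (slower), while lowering the match threshold accepts weaker, and therefore riskier, matches.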
Set start frame
If you repeatedly run a project and want to start at a particular frame each time,
then you can play to this frame and then select Settings->Set start frame and save the project file.
The next time you open the project, it will fast-forward to the selected start frame.
Focal length
When an uncalibrated camera is used, you can manually set the focal length by clicking the Focal length button, selecting Stereo->Focal length, or pressing Ctrl+Alt+F.
The focal length is defined in a normalized coordinate system, with a default value of 0.67 (a fraction of the width or height, whichever is larger).
A new point cloud is generated with the defined focal length when OK is clicked, and the new focal length is applied in the subsequent processing.
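Under the convention stated above (a fraction of the larger image dimension), the normalized value can be converted to a focal length in pixels, and from there to a field of view. This is a sketch of that arithmetic, assuming a simple pinhole model; the function names are illustrative.

```python
import math

def focal_length_pixels(f_norm, width, height):
    """Convert a normalized focal length (a fraction of the larger
    image dimension) to pixels."""
    return f_norm * max(width, height)

def horizontal_fov_degrees(f_norm, width, height):
    """Horizontal field of view implied by the normalized focal length,
    assuming a pinhole camera with a centred principal point."""
    f_px = focal_length_pixels(f_norm, width, height)
    return math.degrees(2.0 * math.atan(width / (2.0 * f_px)))
```

For a 1920x1080 video, the default 0.67 corresponds to roughly 1286 pixels, i.e. a horizontal field of view of about 73 degrees.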
Point cloud rectification
Initially, the point cloud is defined in a 3D projective space, where, for example, the coordinate axes are not necessarily perpendicular.
This 3D projective space is upgraded to a 3D Euclidean space when the Rectify button is checked.
This process includes a simplified self-calibration, which is not guaranteed to succeed.
However, it is recommended to keep the Rectify button checked unless you find the resulting 3D space distorted, in which case it may help to uncheck rectification.
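The projective-to-Euclidean upgrade can be pictured as applying a 4x4 projective transform to the homogeneous point coordinates. The sketch below only shows the mechanics of applying such a transform; in the real rectification, the matrix would come from the self-calibration, which is not reproduced here.

```python
import numpy as np

def apply_space_transform(points, H):
    """Apply a 4x4 projective transform H to an (N, 3) point cloud and
    dehomogenize the result. H is assumed given; computing it is the
    job of the self-calibration."""
    n = points.shape[0]
    homog = np.hstack([points, np.ones((n, 1))])  # to homogeneous coords
    mapped = homog @ H.T
    return mapped[:, :3] / mapped[:, 3:4]         # divide by w, back to 3D
```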
Redo
The Redo button regenerates the 3D point cloud from the current images, for example after you have changed some parameters.
To enable a complete refresh, the image registration module is always run when Redo is clicked, whereas it is in general not run during playback or when pausing.
Coordinate reference system
The globe button opens a dialog where you can select a coordinate reference system by name or EPSG code.
The dialog has spell correction, so you can easily find the desired coordinate reference system even if you do not know its exact name.
For example, entering "sewerf" will suggest the "SWEREF" reference system at the top of the dialog. Note: the coordinate reference system is currently not applied when exporting point clouds.
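The dialog's actual spell-correction algorithm is internal to the application, but the idea can be sketched with standard-library fuzzy matching; the function name and candidate list are illustrative.

```python
import difflib

def suggest_crs(query, crs_names):
    """Suggest coordinate reference systems for a possibly misspelled
    query, in the spirit of the dialog's spell correction."""
    lowered = {name.lower(): name for name in crs_names}
    hits = difflib.get_close_matches(query.lower(), lowered, n=3, cutoff=0.5)
    return [lowered[h] for h in hits]
```

With this sketch, `suggest_crs("sewerf", ["SWEREF99 TM", "WGS 84", "ETRS89"])` ranks the SWEREF entry first, mirroring the example above.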
Dense matching
The menu option Settings->Dense matching->Dense point cloud or Ctrl+Shift+D toggles dense matching on or off.
Default is on, which means that processing proceeds all the way to the original resolution when a project is paused.
Uncheck this option to generate only sparse point clouds, even when pausing.
Settings->Dense matching->Settings opens a dialog with settings for generation of a dense point cloud.
All settings in this dialog are experimental and not intended to be changed!
In particular, the check box Dense matching should be unchecked.
Eventually this will generate truly dense point clouds with approximately one height estimate per pixel, but this is work in progress.
Change depth of stereo video
For projects with a single video, the perceived stereo depth is indirectly defined by the frame offset. You can interactively increase or decrease the frame offset to change the perceived
3D effect.
Select Settings->Increase offset (Ctrl+Alt+I) or Settings->Decrease offset (Ctrl+Alt+D). Check the status bar at the bottom left to see whether the frame offset
was increased or decreased, since this depends on which frame is set as left and which as right.
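The reason the frame offset controls depth is that, for a single moving camera, the offset determines the effective stereo baseline: two frames taken further apart in time are taken further apart in space. A rough back-of-the-envelope sketch, assuming approximately constant camera speed and the classic pinhole stereo relation (the numbers and function names are illustrative, not from the application):

```python
def baseline_from_offset(speed_m_per_s, frame_offset, fps):
    """Stereo baseline implied by a frame offset for a single moving
    camera, assuming roughly constant speed between the two frames."""
    return speed_m_per_s * frame_offset / fps

def depth_from_disparity(f_pixels, baseline_m, disparity_pixels):
    """Classic pinhole stereo relation: Z = f * B / d."""
    return f_pixels * baseline_m / disparity_pixels
```

Increasing the offset lengthens the baseline, which increases disparities and strengthens the perceived 3D effect.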
Time synchronization
For projects with two videos and moving objects, you must ensure that the frame pairs processed from the two videos are recorded at the same time.
You can interactively increase or decrease the frame offset to synchronize the two videos, using the same procedure as when changing the stereo depth.
Select Settings->Increase offset (Ctrl+Alt+I) or Settings->Decrease offset (Ctrl+Alt+D). Check the status bar at the bottom left to see whether the frame offset
was increased or decreased, since this depends on which frame is set as left and which as right.
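If the recording start times of the two videos happen to be known, the required frame offset can be estimated directly instead of found interactively. A minimal sketch, assuming both videos share the same frame rate (the function name is illustrative):

```python
def sync_frame_offset(start_left_s, start_right_s, fps):
    """Frame offset that aligns two recordings whose first frames were
    captured at the given start times (seconds), assuming a common fps."""
    return round((start_right_s - start_left_s) * fps)
```

For example, a right video that started 0.4 s after the left one needs an offset of 10 frames at 25 fps.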