Large Datasets

  • currently, PCV needs to load all points from the source file and keep them in system memory
  • during loading, system memory usage peaks quite high, around 4 times the runtime usage (and more in some cases)
  • for display, PCV needs to upload all points that are going to be displayed (you can control the amount with the PCV > Display > Percentage slider) to GPU memory
  • to determine the approximate maximum number of points your GPU can display at once, you can use the calculator in PCV preferences (for example, an 8GB GPU can display ~300M points with the default shader and disabled scalars); the calculator formula is simple (a generalized helper follows this list):
ram = 8192  # MB
b = (1024*1024) * ram  # bytes
# default shader uses 3x float32 for point location and 4x float32 for color (rgba)
# float32 takes 4 bytes, hence 3*4 + 4*4 bytes
n = int(b / (12 + 16))
print(n)  # 306783378
  • trying to display more points (i.e. uploading more data than GPU memory can hold) will result in a Blender crash or freeze
  • when extremely large data needs to be loaded and working with exactly all points is not required, the alternative loading methods Every Nth or Slice can be used to reduce the number of points during loading (a short sketch of both follows below)
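
The calculator formula above generalizes to other VRAM sizes; a minimal helper, assuming each extra enabled scalar adds one float32 (4 bytes) per point (the actual per-shader cost in PCV may differ):
def max_displayable_points(vram_mb, extra_scalars=0):
    # default shader: 3x float32 position (12 B) + 4x float32 RGBA color (16 B)
    bytes_per_point = 12 + 16 + 4 * extra_scalars  # 4 B per extra scalar is an assumption
    return (vram_mb * 1024 * 1024) // bytes_per_point

print(max_displayable_points(8192))  # 306783378, matches the example above
print(max_displayable_points(4096))  # 153391689, ~150M points on a 4 GB card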
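
As an illustration of what these two strategies do to the data, here is a sketch using NumPy arrays; it assumes Every Nth keeps every nth point and Slice keeps a contiguous index range (PCV applies these while reading the source file, so the exact semantics may differ):
import numpy as np

# placeholder point data; a real cloud would come from the source file
points = np.random.rand(1_000_000, 3).astype(np.float32)

# Every Nth: keep every nth point, reducing the count by a factor of n
n = 10
every_nth = points[::n]          # 100_000 points

# Slice: keep only a contiguous range of the source points
start, stop = 200_000, 300_000
sliced = points[start:stop]      # 100_000 points

print(every_nth.shape, sliced.shape)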

Fast Navigation

Draws a low resolution point cloud during viewport navigation. It is enabled globally in PCV Preferences and applies to all PCV instances in the scene. During viewport navigation, normals and bounding box drawing are skipped, the rest of the shading options are kept, and the selection is always drawn in full. Each PCV instance can be excluded by unchecking PCV > Display > Options > Use Fast Navigation.

Note: the Blender API has no way to tell whether the user is navigating the viewport (at least not without using a modal operator, which is not applicable in this case), so the solution involves timers. Because of that, when you are still navigating but pause any movement for a moment, the high resolution cloud will be drawn after a delay, and the first redraw when you just start navigating might lag a bit; the API does not provide mouse events, so I cannot react before the view has actually changed.
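
A minimal sketch of the timer idea described above — detect navigation by comparing view matrices between redraws, then restore high resolution after a pause. The names (state, PAUSE_DELAY, restore_high_res) and the 0.5 s delay are assumptions for illustration, not PCV internals:
import time
import bpy

PAUSE_DELAY = 0.5  # seconds without view change before high resolution returns (assumed value)
state = {'last_view': None, 'last_change': 0.0, 'navigating': False}

def on_draw():
    # draw callback: runs on every viewport redraw
    rv3d = bpy.context.region_data
    if rv3d is None:
        return
    m = rv3d.view_matrix.copy()
    if state['last_view'] is not None and m != state['last_view']:
        # view changed since the last redraw -> assume the user is navigating
        state['navigating'] = True
        state['last_change'] = time.time()
    state['last_view'] = m
    # a real callback would draw a decimated cloud while state['navigating']
    # is True and the full resolution cloud otherwise

def restore_high_res():
    # timer: once no view change has happened for PAUSE_DELAY seconds,
    # drop the navigating flag and request a full resolution redraw
    if state['navigating'] and time.time() - state['last_change'] > PAUSE_DELAY:
        state['navigating'] = False
        for window in bpy.context.window_manager.windows:
            for area in window.screen.areas:
                if area.type == 'VIEW_3D':
                    area.tag_redraw()
    return 0.1  # run the timer again in 0.1 s

handle = bpy.types.SpaceView3D.draw_handler_add(on_draw, (), 'WINDOW', 'POST_VIEW')
bpy.app.timers.register(restore_high_res)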