Visualizing the Void

The challenges of visualizing black holes.

Aritro Paul
VisUMD

--

Black holes are fascinating, and perhaps the most difficult to understand of all cosmic phenomena. They are also very difficult to visualize, both in our mind's eye and as a picture. Unlike a golf hole, a black hole is essentially a three-dimensional hole: throw anything at it, from any direction, and it will fall in and disappear! Furthermore, because they are so massive, they warp space-time itself. The raw data behind the most accurate real picture of a black hole, the Event Horizon Telescope's image of M87*, amounted to around 4.5 petabytes, and rendering Gargantua, the black hole in the movie Interstellar, reportedly produced almost 800 terabytes of data.

A black hole is the remnant of a very massive star that collapsed at the end of its lifetime. Any black hole can be completely described by just three numbers: its mass, angular momentum, and electric charge. Rapid rotation drags the surrounding spacetime along with it (an effect known as frame dragging), reducing the spherical symmetry to axial symmetry about the rotation axis. The spin is expressed indirectly, as the dimensionless ratio a* = cJ/(GM^2) of angular momentum to mass squared, which cannot exceed 1.
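
To make that spin parameter concrete, here is a quick Python calculation. The mass and angular momentum below are made-up values for illustration, not measurements of any real black hole.

```python
# Dimensionless spin parameter a* = c J / (G M^2). The Kerr bound
# requires a* <= 1; the values of M and J here are purely hypothetical.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # solar mass, kg

M = 10 * M_sun     # a hypothetical 10-solar-mass black hole
J = 1.0e43         # hypothetical angular momentum, kg m^2 / s

a_star = c * J / (G * M**2)
print(a_star)      # ~0.11, safely below the Kerr bound of 1
```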

A common approach to rendering an object in 3D space is ray tracing, which calculates paths of light toward or away from an object based on mathematical equations that determine those paths. However, a black hole bends the fabric of space-time and tremendously distorts its surroundings, letting us see the stars and sky behind it, and even the back of the black hole itself, all from one side. Light paths bend and wrap around the black hole, sometimes completing multiple rotations. This requires a special ray tracer, and even then a naive implementation produces blurry, fuzzy renders while being extremely taxing on GPUs and CPUs. To visualize the distorted space around a black hole, a ray tracer typically follows the light's geodesics back in time. These geodesics are described by a system of differential equations that defines the direction of photon travel at any point in space.

A simulated black hole. The upper and lower parts are the images of the back of the ring.
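
To make the geodesic idea concrete, here is a minimal sketch in Python. It is not the paper's full Kerr ray tracer: it assumes a non-rotating (Schwarzschild) black hole in geometrized units, where a light ray with u = 1/r obeys the well-known equation d²u/dφ² = -u + 3Mu².

```python
import numpy as np
from scipy.integrate import solve_ivp

# Photon path around a non-rotating (Schwarzschild) black hole in
# geometrized units (G = c = 1). With u = 1/r, a light ray satisfies
#   d^2 u / d phi^2 = -u + 3 M u^2.
M = 1.0

def geodesic(phi, y):
    u, du = y
    return [du, -u + 3.0 * M * u**2]

# Launch a ray from far away (r = 50 M) with impact parameter b; the
# relation (du/dphi)^2 = 1/b^2 - u^2 + 2 M u^3 fixes its initial slope.
r0, b = 50.0, 6.0
u0 = 1.0 / r0
du0 = np.sqrt(1.0 / b**2 - u0**2 + 2.0 * M * u0**3)

def hits_horizon(phi, y):            # stop if the ray crosses r = 2M
    return y[0] - 1.0 / (2.0 * M)
hits_horizon.terminal = True

def escaped(phi, y):                 # stop if the ray heads to infinity
    return y[0]
escaped.terminal = True
escaped.direction = -1

sol = solve_ivp(geodesic, (0.0, 4.0 * np.pi), [u0, du0],
                events=[hits_horizon, escaped], max_step=0.01)
print("captured" if len(sol.t_events[0]) else "escaped past the hole")
```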

A new method by Annemieke Verbraeck and Elmar Eisemann takes an interesting approach: a simple, adaptive grid. The grid represents the celestial sky map, a 3D spherical sky flattened into a rectangular 2D area. It is an adaptive similarity grid, so pixels that are distorted similarly are grouped together into coarser cells. The grid is recomputed in real time as the view changes, based on the physics of the black hole, bending the gridlines with linear or spline interpolation and tracking the shadow and high-curvature areas. The grid is then merged with an actual star catalog for the chosen patch of sky. Using the GPU, the approach maps the grid onto the star map, resolving the distortion produced by the black hole's position and spin down to pixel accuracy.

The adaptive grid.
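
The following Python sketch conveys the adaptive-grid idea under simplifying assumptions: the distort function is a toy stand-in for the real geodesic lookup, and the refinement rule (split a cell when its distorted corners spread too far apart) is a plausible simplification of the paper's similarity criterion, not its exact algorithm.

```python
import numpy as np

# Quadtree-style adaptive grid: refine cells of the celestial-sky map
# only where the distortion varies quickly. `distort` is a toy warp.
def distort(x, y):
    r = np.hypot(x - 0.5, y - 0.5) + 1e-6
    return np.array([x, y]) + 0.05 / r * np.array([y - 0.5, 0.5 - x])

def refine(x0, y0, size, depth, max_depth, tol, cells):
    corners = [distort(x0 + dx, y0 + dy)
               for dx in (0, size) for dy in (0, size)]
    spread = max(np.linalg.norm(a - b) for a in corners for b in corners)
    if depth >= max_depth or spread < tol:
        cells.append((x0, y0, size))       # cell is "flat enough", keep it
        return
    h = size / 2                           # otherwise split into 4 children
    for dx in (0, h):
        for dy in (0, h):
            refine(x0 + dx, y0 + dy, h, depth + 1, max_depth, tol, cells)

cells = []
refine(0.0, 0.0, 1.0, 0, max_depth=6, tol=0.05, cells=cells)
print(len(cells), "cells: coarse far away, fine near the distortion")
```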

Gravitational lensing is the bending of light around a massive object in space. A black hole visualization of the accretion disk works by passing a ray through every pixel, forming ray polygons around the black hole. A single ray per pixel causes aliasing; passing multiple rays counters this but is very computationally costly. The new method instead uses a tree-based subdivision of the grid, traversed depth-first: for each pixel, it uses the positions of the pixel's corners on the celestial sky to calculate the increase in area, which measures the distortion. These results are saved and interpolated for much faster calculation, and all the interpolations are stored in a lookup table for quick recall.
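
Here is a rough sketch of the lookup-and-interpolate idea: distorted sky positions are precomputed at the corners of a coarse grid (the expensive geodesic solve is faked with a toy warp below), and every pixel in between is filled by bilinear interpolation. The paper's actual scheme is hierarchical and GPU-based; this only illustrates the caching principle.

```python
import numpy as np

# Lookup table of distorted celestial-sky positions on a coarse grid,
# then bilinear interpolation per pixel instead of tracing every ray.
N = 64                                   # coarse grid resolution
gx, gy = np.meshgrid(np.linspace(0, 1, N + 1),
                     np.linspace(0, 1, N + 1), indexing="ij")
# Toy warp standing in for the expensive geodesic solve per corner.
lut = np.stack([gx + 0.02 * np.sin(6 * gy),
                gy + 0.02 * np.sin(6 * gx)], axis=-1)

def sample(x, y):
    """Bilinearly interpolate the table at (x, y) in [0, 1]^2."""
    fx, fy = x * N, y * N
    i = int(np.clip(int(fx), 0, N - 1))
    j = int(np.clip(int(fy), 0, N - 1))
    tx, ty = fx - i, fy - j
    return ((1 - tx) * (1 - ty) * lut[i, j] +
            tx * (1 - ty) * lut[i + 1, j] +
            (1 - tx) * ty * lut[i, j + 1] +
            tx * ty * lut[i + 1, j + 1])

print(sample(0.37, 0.81))   # distorted sky coordinate for one pixel
```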

For an interactive visualization the camera moves and the grid changes, so the same adaptive-grid concept is extended to more axes covering camera position and movement direction. Some of the grids are pre-computed for faster rendering, and the curvature of new grids is determined by an equation relating the shadow sizes of the corresponding grids.
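
One plausible minimal reading of those pre-computed grids is to cache distortion grids at a few camera positions and blend between the two nearest for an intermediate viewpoint, as in the sketch below. The linear blend and the radius-keyed cache are illustrative assumptions, not the paper's exact interpolation.

```python
import numpy as np

# Hypothetical cache of distortion grids keyed by camera radius; blend
# the two nearest cached grids for an in-between camera position.
radii = np.array([10.0, 20.0, 40.0])                 # camera distances
grids = [np.random.rand(65, 65, 2) for _ in radii]   # stand-in grids

def grid_for(r):
    i = int(np.clip(np.searchsorted(radii, r), 1, len(radii) - 1))
    t = (r - radii[i - 1]) / (radii[i] - radii[i - 1])
    return (1.0 - t) * grids[i - 1] + t * grids[i]

print(grid_for(25.0).shape)   # interpolated grid for a camera at r = 25
```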

Finally, the sky map is combined with the grid: based on the edges of the pixels and their ray polygons, star contributions are averaged out. The distortion grows as we approach the black hole's shadow, and some pixels in this region see the entire sky map; these are averaged differently, with simpler calculations, to avoid the cost of averaging every pixel of the map.

The final render.
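
A toy version of that averaging step might look like this: gather all catalog stars whose sky position falls inside a pixel's distorted footprint (its ray polygon) and sum their light. The random catalog and the point-in-polygon test are illustrative assumptions; the paper works with a real star catalog and a GPU-side scheme.

```python
import numpy as np
from matplotlib.path import Path

# Average star light over one pixel's ray polygon on the celestial sky.
rng = np.random.default_rng(0)
stars_pos = rng.random((10_000, 2))           # toy (x, y) sky positions
stars_flux = rng.exponential(1.0, 10_000)     # toy brightness values

def pixel_brightness(polygon):
    """Sum the flux of all stars inside the pixel's ray polygon."""
    inside = Path(polygon).contains_points(stars_pos)
    return stars_flux[inside].sum()

poly = [(0.40, 0.40), (0.43, 0.41), (0.42, 0.45), (0.39, 0.44)]
print(pixel_brightness(poly))
```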

As post-processing, different attributes can be added to the render, like star trails and redshift, especially for moving cameras. The star map is adjusted for the redshift caused by the distortion, using the interpolations at each point together with additional equations. Star trails are usually calculated by joining the pixels where a star appears as the camera moves, but this breaks down very close to the distortion, where the old and new positions of the star can no longer be matched. This is countered by calculating a vector field of the star's apparent movement and generating trails from the field, which avoids the false trails produced by inaccurate or tiny movements.

Redshift calculations for the sky map.
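
As one concrete ingredient, the gravitational part of the redshift for a static emitter near a non-rotating black hole has a simple closed form, 1 + z = 1/sqrt(1 - 2M/r) in geometrized units. The paper handles the full rotating case plus Doppler shift; the sketch below only shows this basic effect.

```python
import numpy as np

# Gravitational redshift near a non-rotating (Schwarzschild) black hole,
# geometrized units (G = c = 1): 1 + z = 1 / sqrt(1 - 2M/r).
M = 1.0

def redshift(r):
    """Redshift z of light emitted at radius r, observed at infinity."""
    return 1.0 / np.sqrt(1.0 - 2.0 * M / r) - 1.0

for r in (3.0, 6.0, 30.0):
    print(f"r = {r:>4} M  ->  z = {redshift(r):.3f}")
```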

These new techniques allow for extremely accurate pictures of a black hole for general use, and make interactive visualizations possible even on standard hardware.

More Details

For more details, see the full IEEE TVCG research paper, "Interactive Black-Hole Visualization," by Annemieke Verbraeck and Elmar Eisemann.
