If the appearance of 3D glasses in local cinemas is any indication, we all want a little more, ahem, depth to our film experience. Unfortunately, that style of glasses isn't exactly suited to an immersive experience. What would be really cool is animated holograms. Although holograms are not the easiest things in the world to make, it is possible to take a 3D computer model, calculate the data needed for a hologram, and use that to project a 3D image onto a screen. Given that animation is now largely computer-generated anyway, where are my holographic animated films?
One problem seems to be efficient rendering. A recent article in Optics Express, although reporting an enormous speed-up for holographic displays, shows just how difficult the problem is. Basic animation is now within reach of modern rendering farms; unfortunately, that leaves them with no power to spare for the important stuff, such as shading, lighting, and shadows (never mind character and plot).
As usual, let's take a step back and look at how a real hologram is made. Take the light from a laser and split it into two beams. Shine one beam on the object you want a hologram of, and let the scattered light fall on a photographic plate. The other beam goes directly onto the plate. As a result, the plate records the interference pattern between the two beams rather than an image of the object.
The reason this is useful is that laser light is coherent in space and time, so only a single interference pattern is recorded, rather than the average of millions of such patterns.
(For those who care about the details: spatial coherence means that if we take a slice through the laser beam, the electric fields at any two locations in that slice have a fixed relationship to each other. Temporal coherence means that if we know the electric field at one point in space and time, we also know how it will evolve over time at that point. This sort of coherence is why we get a single interference pattern.)
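To make that concrete, here is the standard textbook expression for what the plate records (my addition, not something from the paper): with a reference field $E_r$ and an object field $E_o$, the recorded intensity is

$$I = |E_r + E_o|^2 = |E_r|^2 + |E_o|^2 + 2\,\mathrm{Re}(E_r^* E_o).$$

The cross term is the one that stores the object beam's phase; without coherence it would average away over the exposure and leave nothing but a uniform blur.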
To project the image, the process is reversed. Half of the laser light is reflected off the interference pattern and then recombined with an unchanged beam. The combined beam is projected to the viewer through a lens system, and the viewer sees the 3D object instead of the interference pattern.
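Why the reversal works falls out of the same textbook algebra (again my gloss, not the paper's): illuminate the recorded pattern with the reference beam and you get

$$E_r I = E_r\left(|E_r|^2 + |E_o|^2\right) + |E_r|^2 E_o + E_r^2 E_o^*,$$

where the middle term is the original object beam again, scaled by a constant, and that is what the lens system delivers to the viewer.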
So a computed hologram sounds pretty simple: just compute the interference pattern. But there is a gotcha. If you cut an ordinary image in two, you get two partial images. Cut a hologram in half, however, and you get two complete pictures of lower quality, because each pixel encodes information about the whole image. And therein lies the difficulty: to display a computed hologram, the intensity of each pixel of the interference pattern has to be calculated from contributions of the entire object, not just the part of the object nearest that pixel.
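To see why every pixel depends on the whole scene, here is a minimal sketch of the classic point-source approach to computer-generated holography (my own illustration, with made-up parameters; the paper's exact algorithm may differ): every object point adds a phase term at every hologram pixel, so the cost scales as the number of points times the number of pixels.

import numpy as np

def compute_hologram(points, width, height, pitch=8e-6, wavelength=633e-9):
    # Brute-force point-source hologram: every object point contributes
    # to every pixel, which is exactly why the computation is so expensive.
    k = 2 * np.pi / wavelength
    xs = (np.arange(width) - width / 2) * pitch      # pixel x coordinates (m)
    ys = (np.arange(height) - height / 2) * pitch    # pixel y coordinates (m)
    X, Y = np.meshgrid(xs, ys)
    field = np.zeros((height, width), dtype=complex)
    for px, py, pz, amp in points:
        # Distance from this object point to every hologram pixel at once.
        r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
        field += amp * np.exp(1j * k * r) / r
    # Interfere with a plane reference wave; the resulting intensity is
    # what actually gets written to the display.
    return np.abs(field + 1.0) ** 2

# Toy object: three points floating roughly 10cm behind the hologram plane.
object_points = [
    (0.000, 0.000, 0.10, 1.0),
    (0.002, 0.001, 0.12, 1.0),
    (-0.001, 0.002, 0.11, 1.0),
]
pattern = compute_hologram(object_points, width=192, height=108)

Even at this toy scale the problem is obvious: a real 1,920 x 1,080 pattern built from tens of thousands of object points means tens of billions of phase evaluations per frame.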
That can't be so difficult, can it? Well, a group of Japanese researchers has created a graphics card, called HORN-6, that will do it for you. It consists of four Xilinx field-programmable gate arrays (FPGAs), each with about 7 million gates and a small amount of on-chip memory (less than 1MB). Each FPGA is connected to 256MB of DDR RAM, and a fifth, smaller FPGA manages the PCI bus.
These FPGAs divide up the surface of a 1,920 x 1,080 LCD and calculate the intensity of each pixel using a ray-tracing-style algorithm that tracks the phase of the light, since it is the interference that must be calculated. In a nice piece of engineering, a block of the size each FPGA can handle (limited, e.g., by its local memory) is computed in slightly less time than it takes to fetch the next block from memory. This lets the researchers keep the FPGAs loaded almost constantly by prefetching data.
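The paper describes this in hardware terms, but the underlying idea is the familiar double-buffering trick. Here is a rough software analogue (entirely my sketch; fetch_block and compute_block are hypothetical placeholders, not anything from the paper) showing how prefetching the next block hides memory latency:

from concurrent.futures import ThreadPoolExecutor

def render_blocks(num_blocks, fetch_block, compute_block):
    # While block i is being computed, block i+1 is already being fetched,
    # so the compute unit (the FPGA, in HORN-6's case) rarely sits idle.
    results = []
    with ThreadPoolExecutor(max_workers=1) as prefetcher:
        pending = prefetcher.submit(fetch_block, 0)              # prime the pipeline
        for i in range(num_blocks):
            block = pending.result()                             # data should already be waiting
            if i + 1 < num_blocks:
                pending = prefetcher.submit(fetch_block, i + 1)  # start the next fetch
            results.append(compute_block(block))                 # compute while it runs
    return results

# Toy demo with dummy fetch and compute stages.
out = render_blocks(4,
                    fetch_block=lambda i: list(range(i, i + 8)),
                    compute_block=lambda b: sum(b))

If fetching a block takes no longer than computing one, the memory latency is almost completely hidden, which is the balance the block size is tuned for.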
Once the calculation is complete, the resulting interference pattern is displayed on an LCD. The pattern can then be illuminated with a laser to create a hologram that is 1m along one dimension (the paper doesn't say which, though I suspect it's the diagonal) and has a five-degree viewing angle.
Of course, the gamers among us will want to know the framerate. Well, it will blow your socks off: a peak performance of 0.08fps (at full size). But if we dig deep and shell out for the fully kitted-out model, the performance goes up. To do that, you stuff as many cards as you can into your computer's PCI slots. The researchers demonstrated a four-board setup that managed 0.25fps. A distributed system of four computers, each with four boards, clocks in at a full 1.0fps. Awesome stuff.
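Those three data points work out to roughly linear scaling; a quick sanity check using only the frame rates quoted above:

boards = {1: 0.08, 4: 0.25, 16: 1.0}   # total boards -> reported frames per second
for n, fps in boards.items():
    print(f"{n:2d} board(s): {fps:.2f} fps = {1 / fps:5.1f} s/frame, "
          f"{fps / (n * 0.08):.0%} of perfect scaling")

Sixteen boards spread over four machines still deliver just under 80 percent of ideal scaling relative to a single board.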
It helps to put that into perspective by comparing rendering times on other hardware. Unfortunately, the researchers didn't do a very good job of this. I suspect they couldn't use a standard GPU because of limits on how GPUs can be programmed (but I'm no expert, so I could be wrong about that). So instead they compared against CPU rendering times.
Even then they could have done better, because the comparison figures come from an Intel P4 clocked at 3.4GHz, which is not exactly cutting-edge hardware. Still, a single image takes the P4 1 hour, 16 minutes, and 14 seconds to render, so it's a pretty safe bet that their card is still much faster than current hardware.
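A quick back-of-the-envelope comparison using only the numbers quoted above (0.08fps is 12.5 seconds per frame):

p4_seconds = 1 * 3600 + 16 * 60 + 14   # 4,574 s per frame on the 3.4GHz P4
horn6_seconds = 1 / 0.08               # 12.5 s per frame on a single HORN-6 board
print(f"speedup: {p4_seconds / horn6_seconds:.0f}x")   # roughly 366x

That's a speedup of more than two orders of magnitude for a single board, before any of the multi-board scaling kicks in.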
The PDF is freely available and contains a number of films.