Monday 2 March 2015

Isis rises from the clouds: converting 3D meshes into point clouds

Sneak peek of my model of the Iseum in 3DSMax, untextured.
As the Generic Viewer only accepts point clouds as inputs, we had to find a way to convert my 3D model of the Iseum into a point cloud.
I know. The idea would make many modellers (including me) cringe. And I can already hear the puzzled voices of my colleagues asking “just WHY?”. All the painstaking work I did on labelling the different elements would be lost in the process. All the layers would probably be smashed together. And what would happen to the details?
I am not going to ignore this issue, just leave it aside for the moment. We took some time simply to experiment. It may not lead anywhere. It may not be useful to my research. The process of converting a 3D mesh into a point cloud may involve such a loss of information that it is unreasonable to pursue it. But we just wanted to find out.

How do you do that?
Here you can see the difference between people coming from a humanities background and those coming from a technology one. I started off fascinated by the idea of doing digital imaging of a virtual space. Is that possible? Can you do photogrammetry of a space that is already digital? While I was pondering these slightly surreal thoughts, and getting lost in their philosophical implications, Alexandra had already found an option in Meshlab that transforms meshes into point clouds. As simple as that.

The Iseum as a point cloud in the GV. The software generated the map and calculated the areas. The pink dots are the viewpoints.
So we exported the Iseum from 3DSMax in Collada (.dae), the only Max export format that Meshlab can open, imported it into Meshlab and generated a point cloud.
We tried out different point densities, and then settled on 5 million points. And there it was, my temple as a cloud!
I was happy to see that the conversion had worked less problematically than I was expecting. But, at the same time, it can’t be ignored that, even in a quite simple and regular untextured model like mine, everything looked rather simplified. The question of whether the benefits of the semantic annotations and the spatial calculations are worth the information loss is still open.
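For the curious, the same mesh-to-cloud step can also be scripted rather than done through the Meshlab interface (which is what we actually used). Here is a rough sketch with the pymeshlab Python library; the filter name is the one used in recent releases and may differ in older versions.

# Rough sketch with pymeshlab (we actually used the Meshlab GUI);
# the filter is 'generate_sampling_poisson_disk' in recent releases,
# while older versions expose it as 'poisson_disk_sampling'.
import pymeshlab

ms = pymeshlab.MeshSet()
ms.load_new_mesh("iseum.dae")   # the Collada export from 3DSMax

# Sample roughly 5 million evenly spread points over the mesh surface.
ms.generate_sampling_poisson_disk(samplenum=5_000_000)

# The sampled layer becomes the current mesh; save it as a point cloud.
ms.save_current_mesh("iseum_pointcloud.ply")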

So, we managed to overcome the first major issue: we had a point cloud version of the virtual space. But, still, it was not suitable for the GV, because a CAD model has no fixed viewpoints.
We thought that the key was to keep treating the virtual space as a real one, and to do, virtually, all the things we would do in a material space to make the GV work. So, we placed some hypothetical viewpoints in the model and recorded their coordinates in the 3D space.
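Just to give an idea of what “recording the coordinates” amounts to in practice (the names and numbers below are made up for illustration, and this is not the GV’s actual input format):

# Made-up example of how the viewpoint coordinates could be recorded;
# neither the positions nor the CSV layout are the GV's real input format.
import csv

viewpoints = [
    ("ekklesiasterion", 12.4, 3.1, 1.6),   # name, x, y, z in the model's units
    ("portico",          5.0, 8.7, 1.6),
    ("cella",            9.2, 6.3, 1.6),
]

with open("viewpoints.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["name", "x", "y", "z"])
    writer.writerows(viewpoints)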

Screenshots of the virtual panorama in the texturised ekklesiasterion.
Again, I was glad for the success of the experiment. It gives a certain pleasure to push the boundaries of digital tools and software, and to see how far you can stretch them and make them do things they were never supposed to do. On the other hand, I was losing information again. I had just turned a virtual space that is fully explorable, in 360 degrees, into a space whose view is constrained by the positions of the viewpoints.

We were missing one last element to simulate the imaging of a material space in the GV: photographic panoramas, taken from the positions of our artificial viewpoints, to texture the point cloud.
Again, we tried to think of the digital space as if it were a real one. So I placed a (virtual) camera at the same coordinates as one of our viewpoints, and took sequential pictures of the space, with at least a 40% overlap (so, in the end, my idea of photogrammetry of a virtual space wasn’t that crazy…).
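As a back-of-the-envelope check of what a 40% overlap implies, assuming a hypothetical camera with a 45-degree horizontal field of view (the real 3DSMax camera settings will differ):

import math

fov = 45.0        # assumed horizontal field of view of the virtual camera, in degrees
overlap = 0.40    # the minimum overlap wanted between consecutive shots

# Each new shot only adds the part of the frame that does not overlap the previous one.
step = fov * (1 - overlap)        # 27 degrees of new coverage per shot
shots = math.ceil(360 / step)     # shots needed for a full 360-degree panorama

print(f"rotate {step:.0f} degrees between shots; {shots} shots cover the full panorama")
# -> rotate 27 degrees between shots; 14 shots cover the full panorama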
To be fair, our viewpoints weren’t very strategically placed. But, luckily, I thought of putting at least one in a sensible place: the ekklesiasterion. The chronological layer of my model I’m working on (the hypothetical reconstruction of the Iseum as it might have looked before AD 79) is meant to be untextured.
I could have left it untextured in our experiments with the GV as well, and just rendered the camera view of the elements in the not-too-bad 3DSMax solid colour palette (which is a feast of purple, lime green and turquoise, not too dissimilar from my summer wardrobe).
However, I thought that the best use I could make of the GV and its features was to express the complex relationships between the walls of the ekklesiasterion, the frescoes that were found there and are now exhibited in the Museum of Naples, and the documentation of those frescoes that was produced at the time of the excavations.

So, I textured (quite quickly, I’m afraid) the north, west and south walls of the ekklesiasterion with a digital copy of the graphic documentation of the walls commissioned at the time by the Bourbons. On the black and white engravings (a not-good-enough picture from a copy of Elia’s book) I superimposed colour pictures of the fragments now exhibited in Naples (if you think that was a straightforward task, you have never had anything to do with Pompeian documentation…).
I didn’t have an equivalent texture for the east wall (the entrance one, with the arches). I could have left it untextured but, just to simulate a minimum of homogeneity, I applied a quick black and white masonry texture to it. 

The ekklesiasterion in the GV, (almost) ready to be annotated
I froze the camera at the exact coordinates we had given the GV for that viewpoint, and moved the target of the camera along the walls, capturing the panorama as if I were doing photogrammetry. Move the target. Render the camera view. Print the screen. Repeat.
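That “move the target, render, repeat” routine boils down to stepping the camera target around the viewpoint by the yaw increment worked out above. A sketch of just the geometry (not actual 3DSMax or MaxScript calls, and the positions are the made-up ones from earlier):

import math

viewpoint = (12.4, 3.1, 1.6)   # made-up camera position (x, y, z)
radius = 2.0                   # how far in front of the camera the target sits
step_deg = 27.0                # yaw increment from the overlap calculation above

# One target position per shot, sweeping the full 360 degrees around the viewpoint.
for i in range(math.ceil(360 / step_deg)):
    yaw = math.radians(i * step_deg)
    target = (viewpoint[0] + radius * math.cos(yaw),
              viewpoint[1] + radius * math.sin(yaw),
              viewpoint[2])
    print(f"shot {i:02d}: aim the camera target at {target}")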
I thought of removing the roof from the model of the ekklesiasterion, to better handle the movements along the walls, and I tweaked the light a little to get better illumination.
Actually, I was a bit worried about the light. My model is not finished yet, so I haven’t spent much time working on well balanced and realistic lighting. I just put some standard 3DSMax omni lights where I need them when I’m modelling. I wondered if the artificiality and inconsistency of the lights in the renderings we used to build the panorama might bother the system. But I was worrying over nothing: the system was definitely smarter than I thought.

So, while Alexandra is still working on the last details, it seems that my Iseum is ready to be annotated in the Generic Viewer.
