Well, here we are again, in the throes of another lockdown. Travel outside your local area is frowned upon and all venues (apart from the sea) are closed for diving, so to keep stimulated both physically and mentally I decided to experiment with some underground (local) photogrammetry.
My experience of photogrammetry is mostly limited to following others around underwater whilst they photograph things, a few work projects, and some failed underwater attempts in a cave (perhaps a separate post on this later).
For anyone thinking of taking it up, I have had very good results with mobile phone cameras, GoPros and cheap lights. The processing is very simple to perform on a computer and the software can be trialled for free, so expensive equipment is not necessary and I would encourage people to have a go.
So back to the lockdown; fortunately I have a site within walking distance of my house in which I can experiment: an old water conduit known as ‘Raven’s Well’.
Armed with a set of cheap waders from eBay (it’s waist deep in some places), a GoPro Hero 3+ Black and two cheap video lights, I set off to capture some photos to see how well I could model a part of the site.
I set the GoPro to take a still image every 0.5 seconds, put the lights on full and set off walking slowly around the passages near the entrance with the camera pointing forwards. Care was taken at turns to ensure that plenty of overlap was achieved. There is a loop that can be traversed, so I walked around it to see if the software was able to accurately ‘close the loop’, a fundamental part of survey data assessment.
I went round the loop twice in an anticlockwise direction, headed downstream to the low section, then returned to the loop and completed it twice more in a clockwise direction. This amounted to 1237 photos, just over ten minutes of photo capture. I have collated them into a short video so the quality and coverage can be seen.
This came to just over 4 GB of data; for the photographically minded, the details of the JPEG images can be seen below.
Images can also be harvested from video, but they lack the metadata that comes with still images, so I find the stills approach easier provided you take enough images first time around; with video you can extract more frames later without revisiting the site if required.
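If you do go down the video route, pulling frames out is straightforward to script. Below is a rough Python sketch using OpenCV; the filename and frame interval are just placeholders, so adjust them to suit your footage.

```python
import cv2

VIDEO = "conduit_walk.mp4"   # placeholder filename for the captured video
EVERY_N = 15                 # keep roughly two frames per second from 30 fps footage

cap = cv2.VideoCapture(VIDEO)
frame_idx = saved = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break                # end of video
    if frame_idx % EVERY_N == 0:
        cv2.imwrite(f"frame_{saved:05d}.jpg", frame)
        saved += 1
    frame_idx += 1
cap.release()
print(f"Saved {saved} frames")
```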
Photogrammetry is a computer-intensive exercise, so before I pressed the ‘Go’ button on the whole set of images I tested a single loop’s worth to see if what I had captured was going to be worth the wait. This took about two hours to go from raw images to dense cloud and I was happy with the result: it failed to close the loop, but it had modelled the shape and course of the passages very well, as the image below shows.
The above image is a plan view of the dense point cloud created from one walk round the loop. The areas circled in red are the same physical area and should join up; however, at the area highlighted with a blue line (the first corner) the software has failed to adjust properly for the change in camera heading, which can be seen in the ghost walls. If this piece is manually cut and swung round, the areas in red overlap, as sketched below. I was encouraged enough by this to select all the images and press the ‘Go’ button. After all, I had three more traverses of the loop, and hopefully the addition of more images would help it close properly.
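For anyone curious, that manual ‘cut and swing round’ check can also be approximated in software by registering the drifted segment against the reference segment with ICP. A minimal sketch using the open-source Open3D library is below; the filenames are placeholders, and ICP does need a reasonable starting position to converge.

```python
import open3d as o3d

# Placeholder files: the drifted end of the loop and the section it should rejoin
source = o3d.io.read_point_cloud("loop_drifted_segment.ply")
target = o3d.io.read_point_cloud("loop_reference_segment.ply")

# Point-to-point ICP; the correspondence distance is in model units (arbitrary scale here)
result = o3d.pipelines.registration.registration_icp(
    source, target, max_correspondence_distance=0.5,
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())

print(result.transformation)   # 4x4 transform that swings the segment back into place
source.transform(result.transformation)
o3d.io.write_point_cloud("loop_segment_realigned.ply", source)
```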
This was a much longer process, taking around two days (MacBook Pro running Windows 7 64-bit, 16 GB RAM, 2.9 GHz i7). Waking up to a silent laptop on the second morning (the fans go into overdrive when it’s processing), I was pleased to see all the images had aligned and it had finished, so I loaded the dense cloud and started to inspect it. I was very happy with the results: the loop had closed and the passages appeared as they should. The image alignment was run on ‘Medium’ and the dense cloud was set to ‘Low’. More detail could be processed at the expense of processing time, but for me this is good enough.
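For anyone wanting to script this rather than click through the GUI, the sketch below shows roughly what that pipeline looks like in Python. I haven’t named the software in this post; the example assumes Agisoft Metashape’s Python API (version 1.x naming, where the dense cloud call is buildDenseCloud), with ‘Medium’ alignment mapping to downscale=2 and ‘Low’ depth maps to downscale=8. Treat it as a rough outline rather than a recipe.

```python
import glob
import Metashape

doc = Metashape.Document()
chunk = doc.addChunk()
chunk.addPhotos(sorted(glob.glob("photos/*.JPG")))   # placeholder folder of GoPro stills

# 'Medium' alignment accuracy (downscale=2), then align the cameras
chunk.matchPhotos(downscale=2, generic_preselection=True)
chunk.alignCameras()

# 'Low' quality depth maps (downscale=8) and the dense point cloud
chunk.buildDepthMaps(downscale=8)
chunk.buildDenseCloud()          # renamed buildPointCloud() in Metashape 2.x

doc.save("ravens_well.psx")
```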
The first job, once the initial overview had been completed, was ‘cleaning’ the water out of the floor. Most of the areas have a wet floor, and it’s unsurprising that the software struggles to model a constantly moving, colour-changing body of water, so these points were manually selected and removed. Once this had been completed the mesh and texture were computed, taking just a few hours. Below are some selected views from inside the model; I am working on some sort of video or fly-through to be posted when available.
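If the model comes out roughly level, a first pass of this cleaning can also be done automatically by throwing away every point below an estimated water level. The Open3D sketch below assumes the model’s z-axis is close to vertical (which, given the arbitrary alignment out of the software, needs checking first) and that the threshold has been read off by eye; both the filename and the level are placeholders.

```python
import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("dense_cloud.ply")   # placeholder export of the dense cloud
pts = np.asarray(pcd.points)

# Assumed water level in model units, read off by eye; only valid if z is roughly vertical
WATER_LEVEL_Z = -1.2

keep = np.where(pts[:, 2] > WATER_LEVEL_Z)[0]      # indices of points above the water
cleaned = pcd.select_by_index(keep)
o3d.io.write_point_cloud("dense_cloud_no_water.ply", cleaned)
print(f"Kept {len(keep)} of {len(pts)} points")
```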
Future work will involve covering the rest of the site and geo-referencing the data to the real world, since an arbitrary scale and alignment are applied straight out of the software.
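The geo-referencing itself boils down to finding the scale, rotation and translation that map a few control points in the model onto their surveyed real-world coordinates (three or more non-collinear points are enough). The sketch below is a plain NumPy version of the standard Umeyama least-squares fit; the control point arrays are placeholders.

```python
import numpy as np

def similarity_transform(src, dst):
    """Least-squares scale s, rotation R and translation t so that dst ~= s * R @ src + t."""
    mu_src, mu_dst = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_src, dst - mu_dst
    cov = dst_c.T @ src_c / len(src)
    U, S, Vt = np.linalg.svd(cov)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])   # guard against reflections
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / src_c.var(axis=0).sum()
    t = mu_dst - s * R @ mu_src
    return s, R, t

# Placeholder control points: model coordinates vs surveyed real-world coordinates
model_pts = np.array([[0.0, 0.0, 0.0], [1.2, 0.1, 0.0], [0.3, 2.4, 0.5]])
world_pts = np.array([[100.0, 200.0, 9.1], [104.1, 200.4, 9.1], [101.2, 208.3, 10.8]])

s, R, t = similarity_transform(model_pts, world_pts)
print("scale:", s)
georeferenced = s * (R @ model_pts.T).T + t   # apply the fit to any model points
```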