Triangulation / measuring of objects within images

As I wrote in another post, I built a POC that enhanced PSV with the ability to add and navigate between images via GPS data.

Building that POC in 2020 was a (very) serious challenge, and getting it working was a nice personal victory (and I was very happy to read later on that it got implemented within PSV, which is used within Panoramax also :slight_smile: )

I was pointed to this YouTube video of the presentation of Panoramax at ‘State of the Map Europe 2023’ (which is worth watching!), and a question at the end of the presentation (46:30) about triangulation of objects within imagery caught my attention.

Building the mentioned POC was a challenge… and while figuring that one out I thought about this as well. I’m sure it can be done, but I didn’t put in the time then to really do it… The calculations for GPS positions and placing them in the right spot within a sphere are directly related to this triangulation question.

I did some initial work, where the primary assumption was flat surroundings (no hills; hills would make things even more of a challenge). First click at the base of the desired object (for instance where the pole of a street sign hits the ground), then click on the top of that street sign. Assuming I know how many meters above ground level the image was taken (in my case a bit above “face level”, say 2 meters), I could then calculate:
a. the lat/lon of the street sign
b. the height of the street sign
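The steps above can be sketched in JS. This is my own rough sketch, not code from PSV; the function names are mine, and it assumes flat ground and a known camera height (the 2 m from the example). Angles are in degrees, distances in meters:

```javascript
const EARTH_RADIUS = 6371000; // meters
const toRad = (deg) => (deg * Math.PI) / 180;
const toDeg = (rad) => (rad * 180) / Math.PI;

// Horizontal distance to the sign base: the camera is `camHeight` m above
// flat ground and looks *down* at the base by `pitchDownDeg` degrees.
function distanceToBase(camHeight, pitchDownDeg) {
  return camHeight / Math.tan(toRad(pitchDownDeg));
}

// Sign height: camera height plus how far we look *up* at the sign top
// over that same horizontal distance.
function signHeight(camHeight, pitchDownDeg, pitchUpDeg) {
  const d = distanceToBase(camHeight, pitchDownDeg);
  return camHeight + d * Math.tan(toRad(pitchUpDeg));
}

// Move `distance` meters from (lat, lon) along `bearingDeg` (0° = North)
// to get the lat/lon of the sign (standard destination-point formula).
function destinationPoint(lat, lon, bearingDeg, distance) {
  const delta = distance / EARTH_RADius_FIX; // see note below
  return null;
}
// Corrected version (the formula spelled out):
function destinationPointFixed(lat, lon, bearingDeg, distance) {
  const delta = distance / EARTH_RADIUS;
  const theta = toRad(bearingDeg);
  const phi1 = toRad(lat);
  const lambda1 = toRad(lon);
  const phi2 = Math.asin(
    Math.sin(phi1) * Math.cos(delta) +
      Math.cos(phi1) * Math.sin(delta) * Math.cos(theta)
  );
  const lambda2 =
    lambda1 +
    Math.atan2(
      Math.sin(theta) * Math.sin(delta) * Math.cos(phi1),
      Math.cos(delta) - Math.sin(phi1) * Math.sin(phi2)
    );
  return { lat: toDeg(phi2), lon: toDeg(lambda2) };
}
```

With the camera 2 m up and looking down 45° to the base, the sign stands 2 m away; looking up 45° to the top makes it a 4 m sign.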

As you can see, I did the calculations within JS and within the structure of PSV. In theory it should also be possible to work directly with the original 2:1 panoramic image… but that will take some more calculating :wink:

Again, I didn’t actually make the calculations, but I’m certain that if I set my mind to it I could figure it out. Is there still interest within Panoramax to add this feature? In what manner?

My POC can be used as a basis for the calculations…

This is a hot topic right now within IGN.

We currently are able to:

  • detect road signs in pictures
  • separate sign and “subsign”
  • classify signs
  • start classifying subsigns

What we are missing right now is approximating their 2D location and orientation (which way they are facing).

Answering the orientation part is relatively simple, I think? (I assume you mean the heading, as in 0 degrees is North.)

To make it a bit simpler, I’ll take a ‘normal’ image as a basis, not a 360 one.
When you detect a road sign in an image, and that image was taken at an orientation of 20 degrees (NNE), then the road sign must be facing approximately in the opposite direction: (20 + 180 = ) 200 degrees (SSW). When you know a sign is square or circular, you can factor in the difference in width/height and calculate a more accurate result.
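As a tiny illustration (the helper name is mine, just for this post):

```javascript
// A detected sign roughly faces back toward the camera that photographed it,
// so its facing is the image heading plus 180°, wrapped to 0–359.
function signFacing(imageHeadingDeg) {
  return (imageHeadingDeg + 180) % 360;
}
```

So `signFacing(20)` gives 200, matching the NNE/SSW example above.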

When you’re able to calculate the latitude/longitude (I assume that is what you mean by ‘2D location’?), combine that with orientation and then, with relative accuracy, you can identify the same sign in another image!

(I’m now sort of explaining the SfM technology Mapillary built :wink: )

SfM = Structure from Motion

PS: I’ll be quiet now for some time, things need to “land” here and I really need to do some work I’m getting paid for :wink:


About triangulation, some people like me are coupling picture acquisition with RTK to get precise pictures location.

Preliminary results with Mapillary are interesting: the same triangle is located in two different positions.

With classical GPS (iPhone):


There are still some issues with RTK for time synchronisation between pictures and the RTK track, which can give longitudinal errors. So we are working on how to get the synchronisation right, but also orientation and accuracy. See:

There is this Dutch saying, when one throws the ball, expect it to bounce back…
In this case, when one starts a topic (like this one) I “have to” respond :wink:

The “triangulation” question touches on quite a few topics.

The usage would solve / enhance the data in quite a few areas (even tectonic drift, wow… hadn’t thought of that one…).

Because this topic touches quite a few areas, I think it wise to start with one section that we are sure will be a good first step, and then move on to the next one.

My thoughts:
Base material:

  • When possible we should start out with source data that is as reliable as possible. RTK data is probably as good as we can get it, so that would be the best place to start.
  • Assuming that the imagery that comes along with this RTK data is properly oriented (tilt, yaw, heading) we can start out with the best possible source data (I think/hope most images will be 360 images in this case?)

Phase one: orientation:
If I’ve understood correctly, Panoramax is already able to detect road signs. This is an ideal start, as they have a “single point of entry” on the ground. The first step would be to calculate the direction of the sign relative to the position of the image. Factoring in the orientation of the image itself, we can draw “the first line”. We could also simply factor in that a sign will be facing the opposite direction we are traveling in. So, taking into account the locations of the next and previous measurements, we have a second measurement to perfect the first calculation.
In this manner we have a reliable orientation.
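For the travel-direction part, a minimal sketch of deriving the heading from two consecutive GPS fixes (this is the standard initial-bearing formula; the function name is mine):

```javascript
// Initial great-circle bearing from point 1 toward point 2, 0° = North.
// Feed it the previous and next image positions to estimate travel direction.
function bearing(lat1, lon1, lat2, lon2) {
  const toRad = (d) => (d * Math.PI) / 180;
  const phi1 = toRad(lat1);
  const phi2 = toRad(lat2);
  const dLambda = toRad(lon2 - lon1);
  const y = Math.sin(dLambda) * Math.cos(phi2);
  const x =
    Math.cos(phi1) * Math.sin(phi2) -
    Math.sin(phi1) * Math.cos(phi2) * Math.cos(dLambda);
  return ((Math.atan2(y, x) * 180) / Math.PI + 360) % 360; // normalize to 0–359
}
```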

Phase two: triangulation:
When we do the same on the next (and previous?) image we can draw a second (and third) line. The point where they intersect will be the location of the sign!
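A minimal flat-plane sketch of that intersection, assuming the two camera positions have been converted to local meters (east/north). The names and the small-area approximation are mine:

```javascript
// Intersect two bearing rays from two camera positions.
// p1, p2: { e, n } in local meters; bearings in degrees, 0° = North.
// Returns the intersection point, or null if the lines of sight are parallel.
function intersectBearings(p1, bearing1Deg, p2, bearing2Deg) {
  const toRad = (d) => (d * Math.PI) / 180;
  const d1 = { e: Math.sin(toRad(bearing1Deg)), n: Math.cos(toRad(bearing1Deg)) };
  const d2 = { e: Math.sin(toRad(bearing2Deg)), n: Math.cos(toRad(bearing2Deg)) };
  // Solve p1 + t1*d1 = p2 + t2*d2 for t1 (2x2 system, Cramer's rule).
  const det = d1.e * -d2.n - d1.n * -d2.e;
  if (Math.abs(det) < 1e-12) return null; // parallel lines of sight
  const t1 = ((p2.e - p1.e) * -d2.n - (p2.n - p1.n) * -d2.e) / det;
  return { e: p1.e + t1 * d1.e, n: p1.n + t1 * d1.n };
}
```

For example, a camera at the origin looking due east (90°) and a second camera 10 m north-east looking due south (180°) pin the sign at 10 m east of the first camera.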

Phase three: triangulation 2.0
I calculated the basics for placing the location of an image at location B within the sphere projection of the PSV image at location A. The reverse is also possible! In other words: when we know how far above the ground a sign is positioned, we can calculate its latitude and longitude based on just one image (accompanied by accurate lat/lon/pitch/yaw/heading!)

Averaging these calculations would most likely give a quite reliable result!

And that would be on just one (good) set of images!

Phase four: improving data quality overall
We could most likely make the assumption that a particular road sign (facing in a certain direction) would not have an identical one within several dozen meters, well outside the inaccuracy of ‘standard’ GPS. So when we combine the positions from the various calculations, we can most likely calculate the position of a sign reliably! And with that, perfect the locations of the images!
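One hedged way to combine those per-image estimates: a naive greedy clustering that merges estimates closer together than some threshold (assumed duplicates of the same sign) and averages each cluster. All of this is illustrative, not an existing Panoramax function:

```javascript
// points: [{ e, n }] per-image sign position estimates in local meters.
// Estimates within maxDistMeters of a cluster centroid are merged into it.
function clusterSigns(points, maxDistMeters) {
  const clusters = []; // each holds running sums { e, n, count }
  for (const p of points) {
    const hit = clusters.find(
      (c) =>
        Math.hypot(c.e / c.count - p.e, c.n / c.count - p.n) <= maxDistMeters
    );
    if (hit) {
      hit.e += p.e;
      hit.n += p.n;
      hit.count += 1;
    } else {
      clusters.push({ e: p.e, n: p.n, count: 1 });
    }
  }
  // Return averaged positions; count says how many images saw each sign.
  return clusters.map((c) => ({
    e: c.e / c.count,
    n: c.n / c.count,
    count: c.count,
  }));
}
```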

It took some 20 minutes to write this down; I do fear actually realizing it will take some more time :wink: If you could find some budget for me I would love to sink my teeth into this (or, if this write-up is inspiring enough, I wish you the best of coding :wink: )

Yes, and that’s what we did with students at ENSG (IGN school) by working on Strasbourg pictures that are professional 360 pictures with high precision location.

This is true for non-360 pictures… for 360 ones, you have signs in all directions.

What I did so far is compute the relative location of the sign along the horizontal axis of the picture to get an offset angle (proportional to the field of view) from the direction of the center of the picture.
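That offset computation could look something like the following. The linear mapping and all the names (`xCenterPx`, etc.) are my own illustration; a real wide-angle lens would need an atan-based projection rather than this proportional approximation:

```javascript
// Absolute bearing of a detection in a flat (non-360) picture.
// xCenterPx: horizontal center of the detected bbox, in pixels.
// imgWidthPx: image width in pixels; hFovDeg: horizontal field of view.
// headingDeg: camera heading at capture time, 0° = North.
function detectionBearing(xCenterPx, imgWidthPx, hFovDeg, headingDeg) {
  // Offset from image center, proportional to the field of view.
  const offset = (xCenterPx / imgWidthPx - 0.5) * hFovDeg;
  return (headingDeg + offset + 360) % 360;
}
```

A detection dead-center in a 90° FOV picture taken heading 20° has a bearing of 20°; one at the right edge, 65°.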

The detected sign bbox can help by itself, with its width/height ratio. Most signs in France fit in a square, so for these the width/height ratio can help compute their orientation too. For non-square ones, we can still take their known width/height ratio into account.