# [Solved] Translate Pixelpoints to World Coordinates with Beamer and OpenCV

I have a problem making a translation from pixels to world coordinates. I have a setup with a beamer that hangs over a conveyor belt. The beamer projects an image of a point on the conveyor belt. Now I need to translate that point to a coordinate that a UR10 can use.

I made 2 reference points (viewpoint == beamer):
Beamer: P1(312, 138), P2(212, 38)
Robot: P1(1401, -514), P2(1429, -462)

As you can see, the beamer movement was 100 pixels along each axis, while the robot movement was dX = 28 and dY = 52.
When trying to make a formula we used this info, but didn't get the result we were looking for. Does anybody have an idea how to tackle this problem?

Thank you!

The problem is that I have no camera, so all the methods using camera matrices and distortion matrices won't help here.

This is what the setup looks like: the components get projected onto the conveyor, so the pixel coordinates are known. I just need to translate these so that the robot can use them.
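For what it's worth, the two reference points already determine per-axis scale factors (robot units per pixel). A minimal sketch of my own, using only the numbers from the question:

```python
# Two beamer->robot reference points from the question.
bx = [312, 212]      # beamer x (pixels)
by = [138, 38]       # beamer y (pixels)
rx = [1401, 1429]    # robot x
ry = [-514, -462]    # robot y

# Scale factors: how far the robot coordinate moves per beamer pixel.
sx = (rx[1] - rx[0]) / (bx[1] - bx[0])  # -0.28
sy = (ry[1] - ry[0]) / (by[1] - by[0])  # -0.52

print(sx, sy)  # -0.28 -0.52
```

With a scale and one reference point per axis, any beamer pixel can be mapped to a robot coordinate, assuming the beamer and robot axes are aligned (no rotation between the two frames).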

I think this is your formula:

x_robot = 1488 - 0.28 * x_beamer
y_robot = 0.77 * y_beamer - 68

With the following labels:
Beamer coordinate: P1(x_beamer, y_beamer)
Robot coordinate: P1(x_robot, y_robot)

I calculated this on Wolfram|Alpha, using this input:
linear fit {312, 1401},{212,1429}
linear fit {138, 38},{-514,-462}

If you could provide a third data point, I can check the results.
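The first of those fits can also be reproduced in NumPy; a quick sketch of my own (not part of the original reply), fitting robot x as a linear function of beamer x from the two calibration points:

```python
import numpy as np

# Degree-1 least-squares fit; np.polyfit returns [slope, intercept].
slope, intercept = np.polyfit([312, 212], [1401, 1429], 1)

print(slope, intercept)  # ≈ -0.28, 1488.36
```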


If you want to automate the calibration process, you can use 1D linear interpolation, explanation here.

You basically give it the two sets of points, and then you can put in the beamer x and y values to get the robot coordinates.

```python
import numpy as np

# Beamer x values (ascending) and the matching robot x values
xpx = [212, 312]
fpx = [1429, 1401]

# Beamer y values (ascending) and the matching robot y values
xpy = [38, 138]
fpy = [-462, -514]

print(np.interp(312, xpx, fpx))  # robot x for beamer x = 312 -> 1401.0
print(np.interp(138, xpy, fpy))  # robot y for beamer y = 138 -> -514.0
```

If you have problems with accuracy, you can try entering more than two data points.
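With three or more pairs, another option is to fit a full 2D affine map by least squares, which also handles rotation or shear between the beamer and robot frames. A sketch of my own; the third calibration pair below is made up for illustration and consistent with the two points from the thread:

```python
import numpy as np

# Beamer->robot calibration pairs. The first two are from the thread;
# the third pair is hypothetical, added only to illustrate the fit.
beamer = np.array([[312.0, 138.0], [212.0, 38.0], [250.0, 120.0]])
robot  = np.array([[1401.0, -514.0], [1429.0, -462.0], [1418.36, -504.64]])

# Solve for a 3x2 affine matrix M such that [x, y, 1] @ M ~= [X, Y].
A = np.hstack([beamer, np.ones((len(beamer), 1))])
M, *_ = np.linalg.lstsq(A, robot, rcond=None)

def beamer_to_robot(x, y):
    return np.array([x, y, 1.0]) @ M

print(beamer_to_robot(312, 138))  # ≈ [1401. -514.]
```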


Thank you all for the input, the problem is solved. We made a linear equation via Roel's reply.