Let me explain how this might not be a bug from the perspective of the RenderWare developers. Each raster (the pixel surface backing a texture) can be one of the following types: normal, z-buffer, camera, texture, or camera texture. Loading a texture means writing pixels into that raster's destination surface. In the normal or texture case, the target is a simple color buffer that can be mapped onto triangles. In the camera texture case, it is what we call render-targets inside MTA. But in the camera case, the destination is the camera itself. So you can see how loading a "camera" texture writes pixels directly into the backbuffer.
But I suspect MTA did not anticipate this behavior. Thank you for sharing your thoughts!