Drag and Drop import

  • Dragging an image into the 3D view (but not onto an object) creates a plane with a default material that uses the image, sized so that 1 unit equals 1 px
  • Dragging an image onto an object with no textured material creates a default material using that image as its texture
  • Dragging a 3D model (e.g. .obj) into the scene does the same as the import feature
  • Dragging a 3D model onto a node in the scene graph imports the model as above but parents it to the node it was dropped on
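The sizing rule in the first bullet could be sketched like this — a minimal illustration of the proposal, not Cheetah3D API; the function name and the scale parameter are hypothetical (the scale factor is exactly what gets debated below):

```python
# Sketch of the proposed behaviour: an image dropped into the 3D view
# becomes a plane whose size is derived from the image's pixel dimensions.
# All names here are illustrative, not actual Cheetah3D API.

def plane_size_for_image(width_px: int, height_px: int,
                         units_per_px: float = 1.0) -> tuple[float, float]:
    """Return (width, height) in scene units for a dropped image."""
    return (width_px * units_per_px, height_px * units_per_px)

# 1 unit = 1 px, as proposed above:
print(plane_size_for_image(64, 32))            # (64.0, 32.0)
# With a 1:1000 factor (suggested later in the thread), a 4K image
# becomes a plane roughly 4 x 2 units instead of 4096 x 2160:
print(plane_size_for_image(4096, 2160, 0.001))
```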
Thanks for your consideration :)


Well-known member
I agree that this would be a very nice thing to have. Also, drag and drop import of SVG / PDF would be very helpful as well.
This all sounds good to me, except the 1 unit = 1 px scale may need to be re-thought.

--shift studio.
Well… it has to be some correlation that makes sense. Maybe the scale factor could be set in the file import preferences. I work on pixel-perfect graphics where the camera is set up to have 1px match 1 unit on the pixel-perfect plane, though.
I’m curious to see the pixel perfect work. I’m unsure what you’re getting at.

For me most images I work with (and augment with CGI) are in the 4K to 8k range. So the proposed correlation wouldn’t be appropriate for me.
—shift studio.
I fully agree with shift studio. As 1 unit is conventionally thought of as 1 m in Cheetah (at least as long as you stay out of space and the microcosm), 1 pixel = 1 unit would be insane for most people. Something more convenient for the majority would be more reasonable. After such an import you could still change the size, so there's no problem even if you need this pixel = unit ratio.

Again, like shift, I fail to grasp the 'pixel-perfect ratio'. For a background image? For a blueprint? I don't mean any disrespect, but I'm used to speaking as I think, and to me it doesn't make any sense. In the first case it depends only on the render size; in the second case it makes even less sense.

Pixels only matter for the render and for textures in 3D. The pixel size of a blueprint is not important inside Cheetah (of course it should be big enough to model after, i.e. it should show the necessary details), because we try to simulate things that exist in a real or fantasy world. If I were to model myself, for example, I would use my size in units, as the resolution of a picture of me has no relation to me. Say my photographed self is half as wide as a full 4K picture; a 'pixel-perfect' model of my person would then be about 2 km wide. See what I mean? And even if I used a small picture and modelled after that, I'd scale the picture or, later on, the model. If the pic is too small in Cheetah's window, I just zoom in. Problem solved.

And just for the fun of it: think about a small picture, say 200 px wide, of the Eiffel Tower that I'd want to use as the background for my self-portrait. I'd be 10 times as wide as that thing (which isn't 200 m, by the way). If you use pixels in 3D instead of real-world units like cm, m, etc., you get all kinds of weird problems and nothing fits together.

So, I really would like to know any scenario where this concept is useful.
I work on automotive user interfaces. :) You can find some of my work here. http://somian.net/portfolio

If I create a plane that shows a button that's 64×32 pixels, I'll set up the camera position and FOV so that 1 unit = 1 px on the zero plane of an axis; a 64×32-unit plane will then show the texture pixel-perfectly, meaning that one pixel of the texture is rendered to exactly one pixel on the screen.
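For a standard perspective camera, the distance that makes 1 unit = 1 px on the focal plane follows from the vertical FOV and the render height. This is a minimal sketch of that relationship, assuming the usual symmetric-frustum setup; the function name is illustrative, not any Cheetah3D API:

```python
import math

# Place the camera at the distance where one scene unit on the focal
# plane projects to exactly one rendered pixel: half the render height
# (in px = units) must span half the vertical FOV.

def pixel_perfect_distance(render_height_px: int, fov_y_deg: float) -> float:
    """Camera distance to the focal plane for a 1 unit = 1 px match."""
    half_fov = math.radians(fov_y_deg) / 2.0
    return (render_height_px / 2.0) / math.tan(half_fov)

# e.g. a 1080 px tall render with a 45-degree vertical FOV:
d = pixel_perfect_distance(1080, 45.0)
print(round(d, 2))  # 1303.68
```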

Sometimes I also work with render pipelines that skip the projection matrix multiplication and store gl_Position directly in the VBO. For those, I build the scene in pixels and then scale by 1/resolution and translate by -0.5 to get from screen pixel coordinates into OpenGL screen space.
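For readers unfamiliar with that last step, here is a minimal sketch of mapping pixel coordinates into OpenGL normalized device coordinates. It assumes the common conventions (NDC spanning [-1, 1] on both axes, y up, pixel origin at the top-left); the exact scale/translate factors depend on the pipeline's conventions, so treat the numbers as one possible choice rather than the poster's exact code:

```python
# Map top-left-origin pixel coordinates to OpenGL NDC ([-1, 1] range).
# The y axis is flipped because pixel y grows downward while NDC y grows up.

def px_to_ndc(x_px: float, y_px: float,
              width_px: int, height_px: int) -> tuple[float, float]:
    """Convert a pixel position to normalized device coordinates."""
    x_ndc = (x_px / width_px) * 2.0 - 1.0
    y_ndc = 1.0 - (y_px / height_px) * 2.0
    return (x_ndc, y_ndc)

print(px_to_ndc(0, 0, 1920, 1080))      # (-1.0, 1.0)  top-left corner
print(px_to_ndc(960, 540, 1920, 1080))  # (0.0, 0.0)   centre of the screen
```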


Active member
To be honest, this seems unnecessarily complicated to me (see above). But it's your workflow, and it obviously works very well for you, as your portfolio shows. I like it very much, by the way.

On the other hand, you are probably the only person with such a need. Everyone else shouldn't care that much about the pixel size in the viewport (only in the render; I'm not even sure you use the render for this). So for everybody else, 1 unit = 1 pixel would be utterly wrong, and they would have to resize the objects afterwards.
Thanks :)

It seems complicated until you want to do intricate UI transitions that use perspective and temporarily let objects leave their pixel-perfect plane. Or shape morphs. Or build geometry that supplements your images at pixel scale (for example, drawing connecting lines).

I do use the render preview in such cases sometimes for rendering out layers to be combined later, or for reference.

Anyway, I think if this showed up in the import settings, where users can choose their own scale for every file type, it wouldn't be an issue. The default could be 1:1000 or something, and I'd just change it to 1:1 for my special needs.