Convolution Kernel node?

[Attached mockup image: convo k node.png]


I guess the title says it all. It sure would be a wonderful addition: being able to bring any Image texture in and put a convolution kernel node between the Image and the Material. How hard would it be, on a scale of 1 to 10 (ten being 'are you nuts')? If it would be "easy", I can think of a few additional parameters to include - but I don't want to get ahead of myself ...
 
Hello,

For those of us who may be unaware of the perceived value of having in-line convolution kernel image processing built into the C3D material node system, perhaps you could provide some background for the request.

Thanks and cheers,
gsb
 
Hi, gsb.
Being able to do a non-destructive, stackable, in-application blur (for example) of an image used as a texture (or part of a texture) would be nice. (Actually, doing this for an HDRI environment would be great too, but that opens up cans of worms I don't think we need to discuss.) Convolution kernels are pretty limited, but the ability to do even a little pixel processing would be an interesting feature, especially if such a thing could be implemented simply. I realize C3D isn't compositing software, but everything comes down to a difficulty-vs.-usefulness ratio; it may be that this is one of those "just not worth the trouble" things.
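To be concrete about what a single kernel pass involves, here is a minimal sketch (assuming Python with NumPy and SciPy, purely as an illustration; the array names are made up). A node would only need to do the equivalent of this per color channel:

```python
import numpy as np
from scipy.ndimage import convolve

# A 3x3 box-blur kernel: each output pixel becomes the average of its
# 3x3 neighborhood. Sharpen, edge-detect, emboss, etc. are just
# different weight matrices of the same shape.
kernel = np.full((3, 3), 1.0 / 9.0)

# Stand-in for one channel of a texture (values in 0..1).
channel = np.random.rand(256, 256)

# One convolution pass; a node would repeat this per color channel.
blurred = convolve(channel, kernel, mode="nearest")
```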

-cg
 
Hello cg,

Thanks for your insight… some follow-on questions/observations…

I do recall very early versions of Photoshop (late 1980s) having a convolution kernel filter, and I haven't thought much about it since then. I just checked, and PS still has the filter, but it is now called Custom… and is found in the Other filter submenu. Hmmm, it may have always been called Custom, but what I recognized was the matrix for defining the manipulation of pixels.

Is there any added value to processing an image inline with a convolution kernel node within C3D vs. pre-processing the image first in PS (or another pixel-manipulation application)?

Your mockup seems to suggest that you are looking to process more than just images… which seems more interesting/valuable than working only on imported images. But then there is a lot about node processing that I have yet to grasp. Perhaps Instance &/or State with an inline convolution kernel node would provide the desired control over the manipulation of an imported image in the context of the model & scene.

Your two inputs, "mix color" and "mix", seem to suggest that the functionality of the Math > Mix node is built into your proposed node, rather than just using the existing node to provide the mixing.
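For what it's worth, my (possibly wrong) understanding of what that mixing amounts to is a simple per-channel linear blend, sketched here in Python just so we're talking about the same thing:

```python
def mix(color_a, color_b, t):
    # Per-channel linear blend: t = 0 returns color_a, t = 1 returns color_b.
    return tuple(a * (1.0 - t) + b * t for a, b in zip(color_a, color_b))

mix((1.0, 0.0, 0.0), (0.0, 0.0, 1.0), 0.5)  # -> (0.5, 0.0, 0.5)
```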

thanks & cheers,
gsb
 
Hi, gsb.
Sorry about the ridiculous delay. I basically got distracted by life things and completely forgot about this issue.
There are a number of ways to preprocess images before bringing them into Cheetah, including scripting solutions - I was just looking for a way to have it all in Cheetah for efficiency's sake.
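For reference, a preprocessing round-trip along these lines is the sort of thing I'd like to skip (just a sketch, assuming Python with Pillow, NumPy, and SciPy installed; the file names are hypothetical):

```python
import numpy as np
from scipy.ndimage import convolve
from PIL import Image

# Hypothetical source texture and output file name.
src = np.asarray(Image.open("texture.png").convert("RGB"), dtype=np.float32)

# Example kernel: a mild sharpen.
kernel = np.array([[ 0, -1,  0],
                   [-1,  5, -1],
                   [ 0, -1,  0]], dtype=np.float32)

# Convolve each channel, then save a new file to import into Cheetah.
out = np.dstack([convolve(src[..., c], kernel, mode="nearest") for c in range(3)])
Image.fromarray(np.clip(out, 0, 255).astype(np.uint8)).save("texture_sharpen.png")
```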
Don't read too much into the mockup - it was just a quickie thing to suggest that an image would be connected on the left, an interface would allow manipulating the kernel, and the output could go to any node with an RGB input.
 