Conversation
How would I then grab a color image, with three separate calls? |
I think @matze had some convincing arguments about where to put it. I'd say: if not strictly necessary, leave it where it is. |
I would suggest that the 'client' calls grab and then splits the (multichannel) buffer using the num_channels property. |
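A minimal sketch of what that client-side split could look like, assuming a pixel-interleaved buffer; `split_channels` and the `num_channels` parameter name are illustrative, not part of the libuca API:

```python
# Hypothetical client-side deinterleave: turn a pixel-interleaved buffer
# (C0 C1 ... Cn-1, C0 C1 ..., per pixel) into per-channel planes.
def split_channels(buf: bytes, num_channels: int, bytes_per_sample: int = 1) -> list:
    assert len(buf) % (num_channels * bytes_per_sample) == 0
    step = num_channels * bytes_per_sample
    planes = []
    for c in range(num_channels):
        plane = bytearray()
        # Pick every num_channels-th sample, starting at channel offset c.
        for i in range(c * bytes_per_sample, len(buf), step):
            plane += buf[i:i + bytes_per_sample]
        planes.append(bytes(plane))
    return planes

# 2x2 RGB image, one byte per sample, pixel-interleaved:
rgb = bytes([10, 20, 30, 11, 21, 31, 12, 22, 32, 13, 23, 33])
r, g, b = split_channels(rgb, num_channels=3)
```

In practice one would do this with a numpy reshape rather than a Python loop; the loop just makes the memory layout explicit.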
It would add zmq as a dependency to libuca, not only to uca-net. I use "local" uca cameras with remote writers (via grab_send of the concert base camera). Did you ever compare the performance? I could imagine it would be faster if the plugin split the data instead of first copying it to Python and then sending it to the receivers. |
I would think of the downstream tasks: they should not need to do anything special. E.g. a writer should receive an RGB image and write it as usual, same for a viewer, at least for "standard" color images. |
Unless your cameras run at something like 10 GB/s (note the capital B), I don't think this will be an issue. But I see your point: while from a software engineering point of view this may feel wrong, from a practical usage point of view it definitely makes sense. |
Yes, for the standard path I would implement something directly in uca-grab and concert. If you use (nx, ny, 3)-dimensional numpy arrays, most of our libraries already assume RGB. But I also have much crazier cameras with 8 channels. I just read that TIFF supports 2^16 channels per pixel 🫤 . |
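A small sketch of that numpy convention, assuming numpy is available; the shapes and variable names here are illustrative, not from uca-grab or concert:

```python
import numpy as np

# Interpret a flat grab buffer as an (ny, nx, num_channels) array.
# With num_channels == 3, most Python libraries treat the last axis
# as RGB; the same reshape works unchanged for e.g. an 8-channel camera.
ny, nx, num_channels = 4, 5, 3
flat = np.arange(ny * nx * num_channels, dtype=np.uint16)  # stand-in for a grabbed buffer
img = flat.reshape(ny, nx, num_channels)  # assumed RGB when num_channels == 3

red = img[..., 0]  # per-channel views come for free, no copy needed
```

Since `reshape` and the `img[..., c]` slices are views on the original buffer, this costs no extra copies on the Python side.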
I will put all my changes to libuca that I am currently working on here. So far, nothing breaks existing functionality.
Question: