**Is your feature request related to a problem? Please describe.**
There are currently two ways to store 2.5D data (points, images, labels) in a SpatialData object (as far as I can tell):
Option 1: a single 3D Element (e.g., an image with XYZ axes and points with XYZ coordinates)
Option 2: multiple 2D Elements (e.g., images with XY axes and points with XY coordinates)
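As a minimal sketch of the two layouts, using plain NumPy arrays to stand in for Spatial Elements (the shapes and axis orders here are illustrative, not the SpatialData API):

```python
import numpy as np

# Option 1: a single 3D element, e.g. an image with (z, y, x) axes
# and points with (z, y, x) coordinates.
image_3d = np.zeros((5, 64, 64))          # 5 slices stored as one volume
points_3d = np.array([[0.0, 10.0, 12.0],  # (z, y, x) per point
                      [2.0, 30.0, 40.0]])

# Option 2: one 2D element per slice, e.g. images with (y, x) axes
# and per-slice points with (y, x) coordinates.
image_slices = [np.zeros((64, 64)) for _ in range(5)]
points_slices = [np.array([[10.0, 12.0]]),
                 np.array([[30.0, 40.0]])]

print(image_3d.shape)         # (5, 64, 64)
print(image_slices[0].shape)  # (64, 64)
```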
Which of these options should be preferred/suggested? What are the limitations?
There are different limitations in each of these cases.
Limitation of Option 1: Distinguishing 2.5D vs. 3D semantics. For a downstream application such as Vitessce, if a user points to a 3D image, should it be visualized as a volume, or as 2D slices positioned in 3D space? This is hard to determine when 2.5D and 3D data are stored on-disk in the same manner.
One solution here is to check whether the Z-dimension carries a transform, such as scale+translate, which would allow a downstream application to infer the 2.5D case.
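A minimal sketch of that check, assuming NGFF-style coordinateTransformations metadata (the dict layout and axis names are illustrative, not a guaranteed on-disk format):

```python
def looks_like_2_5d(axes, transformations):
    """Heuristic: treat a 3D element as 2.5D if its z axis carries a
    non-trivial scale or translation, i.e. discrete slices are being
    placed into the target coordinate system."""
    if "z" not in axes:
        return False
    zi = axes.index("z")
    has_z_scale = any(
        t["scale"][zi] != 1.0
        for t in transformations if t["type"] == "scale"
    )
    has_z_translation = any(
        t["translation"][zi] != 0.0
        for t in transformations if t["type"] == "translation"
    )
    return has_z_scale or has_z_translation

axes = ["z", "y", "x"]
transforms = [
    {"type": "scale", "scale": [5.0, 1.0, 1.0]},  # slices 5 units apart in z
    {"type": "translation", "translation": [10.0, 0.0, 0.0]},
]
print(looks_like_2_5d(axes, transforms))  # True
```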
Limitation of Option 2: Makes it much more difficult to perform 3D operations, or to render in 3D, if needed.
Limitation of Option 2: No way to specify a Z-dimension transform, because the Spatial Elements lack a Z dimension.
One solution here would be to store the 2D data as 3D data with a Z-dimension of size 1. Once the data has a Z dimension, transform parameters for this axis can be specified in coordinateTransformations.
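That promotion step could look like the following sketch (the coordinateTransformations layout below is NGFF-style and illustrative, not a guaranteed format):

```python
import numpy as np

# A 2D slice with (y, x) axes.
slice_2d = np.zeros((64, 64))

# Promote it to 3D by adding a z axis of size 1, giving axes (z, y, x) ...
slice_3d = slice_2d[np.newaxis, ...]

# ... which makes it possible to express the slice's z placement,
# e.g. as NGFF-style coordinateTransformations:
coordinate_transformations = [
    {"type": "scale", "scale": [5.0, 1.0, 1.0]},              # z spacing of 5
    {"type": "translation", "translation": [15.0, 0.0, 0.0]}, # z offset of 15
]

print(slice_3d.shape)  # (1, 64, 64)
```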
Limitation of Option 2: Challenging to infer which 2D Spatial Elements are slices/members of the same 2.5D dataset/group.
Solution 1: Use a naming convention, such as `images/image_slice_0`, `images/image_slice_1`, etc.
Solution 2: If all Elements can be mapped into the same coordinate system, check whether their positions in that target coordinate system form a stack of 2.5D slices.
Solution 3: (If it is known that a SpatialData object stores a 2.5D dataset) assume that one SpatialData object contains 2.5D slices from only a single 2.5D dataset.
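Solution 1 above can be sketched in a few lines; the `<name>_slice_<index>` pattern is the convention suggested in Solution 1, and the element names are made up for illustration:

```python
import re
from collections import defaultdict

# Recover slice groups from the naming convention of Solution 1.
SLICE_RE = re.compile(r"^(?P<group>.+)_slice_(?P<index>\d+)$")

def group_slices(element_names):
    """Map each 2.5D group name to its {slice_index: element_name} members."""
    groups = defaultdict(dict)
    for name in element_names:
        m = SLICE_RE.match(name)
        if m:
            groups[m.group("group")][int(m.group("index"))] = name
    return dict(groups)

names = ["image_slice_0", "image_slice_1", "image_slice_2", "overview"]
print(group_slices(names))
# {'image': {0: 'image_slice_0', 1: 'image_slice_1', 2: 'image_slice_2'}}
```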
**Describe the solution you'd like**
Document recommended way to store 2.5D data in a SpatialData object.
**Describe alternatives you've considered**
Do nothing differently; assume 2.5D data will be stored as 3D. Downstream tools such as Vitessce should always allow rendering 2D slices of a 3D volume, and vice versa. Perhaps I am overthinking this.
Add `Image2DSlicesModel`, `Labels2DSlicesModel`, `Points2DSlicesModel`, etc., so that the semantics are clear when Spatial Elements are created. Downstream tools could then use the metadata produced by these Model classes to distinguish 2.5D from 3D.
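As a sketch of the metadata such a model class might attach, the class and field names below are hypothetical, not an existing SpatialData API:

```python
from dataclasses import dataclass, field

@dataclass
class Image2DSlicesMetadata:
    """Hypothetical metadata a model like Image2DSlicesModel could produce,
    so that downstream tools can distinguish 2.5D from true 3D."""
    is_2_5d: bool = True
    slice_axis: str = "z"
    slice_names: list = field(default_factory=list)      # member element names
    slice_positions: list = field(default_factory=list)  # z position per slice

meta = Image2DSlicesMetadata(
    slice_names=["image_slice_0", "image_slice_1"],
    slice_positions=[0.0, 5.0],
)
print(meta.is_2_5d)  # True
```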
**Additional context**
Add any other context or screenshots about the feature request here.
Related: a proposed `delay_z_scaling` (or similar) option, "Add delayed Z-dimension scaling option for 3D image multiscales" #9551.