Webhooks for notification events have been added! You can read more about how to use them here.
Visible Bounding Box Class Names
The class name of the bounding box can now be rendered alongside the box to better visualize your bounding box data.
This can be controlled using a toggle in the display settings:
Highlighted Similar Elements
The 2D embedding view is best for understanding the overall topology and structure of your dataset—but the distance between points can be misleading.
To help you better understand the notion of "similarity", you now have the option of highlighting the most "similar elements" to a selected datapoint in the embedding view:
The "similar elements" will appear brighter than the other lassoed elements. You can toggle this option on and off in the upper-left preview pane.
Class Visibility Toggles
In the display settings, you can now control class visibility in the embeddings view:
The embeddings view will filter out the classes you have disabled in the display settings (and it works on all visualization types):
Comments and Mentions
You can now make comments on issues or issue elements:
You can also mention others in your organization to notify them of an issue or issue element that you would like them to look at:
To enable/disable in-app notifications or emails for mentions, a new option has been added to your user settings:
Instead of downloading issues as JSON or CSV files and then scripting around them, users can now integrate Aquarium issues directly with their labeling providers. By setting up a webhook integration, users can click a button in the Aquarium UI and submit data straight to their labeling system. See our docs for more details!
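On the receiving side, the integration boils down to an HTTP endpoint that accepts the submitted issue data and forwards it to your labeling provider. Here is a minimal sketch of such a receiver; the payload fields (`issue_id`, `elements`) are illustrative assumptions, not Aquarium's actual webhook schema.

```python
# Hypothetical webhook receiver for labeling-task submissions.
# Payload shape ({"issue_id": ..., "elements": [...]}) is an assumption.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

received = []  # payloads collected by the handler, for inspection


class LabelingWebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        received.append(payload)
        # Here you would create tasks in your labeling provider
        # from payload["elements"].
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):  # silence default request logging
        pass


def make_server(port=0):
    """Bind the handler to localhost; port 0 picks a free port."""
    return HTTPServer(("127.0.0.1", port), LabelingWebhookHandler)
```

In production you would run this behind HTTPS and validate a shared secret before trusting the payload.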
Within Dataset Similarity Search
If you've identified a few problematic elements in an Issue, you may want an easy way to "grow" the issue by finding other similar elements within that same dataset.
Once an issue is created, you can generate similar elements by going to the Similar Dataset Elements tab and clicking the Calculate Similar Dataset Elements button as follows:
This info is attached to each issue element and looks something like the following:
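Conceptually, "similar dataset elements" is a nearest-neighbor lookup in embedding space. The sketch below illustrates that idea with plain Euclidean distance; the embeddings, IDs, and metric are illustrative assumptions, not Aquarium's internal implementation.

```python
# Illustrative sketch: find the k dataset elements nearest to a query
# element in embedding space. Higher score = more similar.
import math


def most_similar(query_embedding, dataset, k=3):
    """Return (element_id, score) pairs for the k closest elements.

    dataset: list of (element_id, embedding) pairs.
    Score is negative Euclidean distance, so larger is more similar.
    """
    def distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    scored = [
        (element_id, -distance(query_embedding, emb))
        for element_id, emb in dataset
    ]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:k]
```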
Now whenever you download issue elements from an inference set, we automatically attempt to match ground truth labels and inference labels, so that we can attach the associated metadata to each element.
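One common way to match ground truth labels against inference labels is to greedily pair boxes by highest overlap (IoU). The sketch below shows that approach; the dict layout and field names are illustrative assumptions, not Aquarium's matching logic or schema.

```python
# Illustrative sketch: pair inference labels with ground truth labels
# by greedy best-IoU matching within a frame.


def iou(box_a, box_b):
    """Intersection over union for (x1, y1, x2, y2) boxes."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0


def match_labels(gt_labels, inf_labels, min_iou=0.5):
    """Attach each inference label to its best-overlapping GT label."""
    matches, used = [], set()
    for inf in inf_labels:
        best, best_iou = None, min_iou
        for i, gt in enumerate(gt_labels):
            if i in used:
                continue
            score = iou(inf["box"], gt["box"])
            if score >= best_iou:
                best, best_iou = i, score
        if best is not None:
            used.add(best)
            matches.append({
                "inference": inf["id"],
                "ground_truth": gt_labels[best]["id"],
                "iou": best_iou,
            })
    return matches
```

Inference labels with no sufficiently overlapping ground truth box are simply left unmatched.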
Specify Target Campaign by Issue UUID
Now you can run your collection client for specific active target campaign(s) using the target_issue_uuids argument as follows:
# Source issue UUIDs for the campaigns you care about
This will upload up to target_sample_count samples, prioritized by similarity score.
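The selection behavior described above can be sketched as a sort-and-truncate: rank candidate samples by similarity score and keep at most `target_sample_count` of them. The sample dict layout and the `"similarity"` field name below are illustrative assumptions.

```python
# Sketch of upload selection: keep at most target_sample_count
# samples, taking the highest similarity scores first.


def select_samples(samples, target_sample_count):
    """samples: list of dicts, each with a "similarity" score."""
    ranked = sorted(samples, key=lambda s: s["similarity"], reverse=True)
    return ranked[:target_sample_count]
```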
If you want to see what your collected samples look like before actually uploading them, there is a dry_run flag that you can specify:
It will (1) display basic stats and (2) link out to a "preview frame", where a single sample frame is uploaded so you can make sure it looks how you expect (similar to the one used in dataset uploads).
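A dry run of this kind amounts to summarizing the collected samples instead of uploading them. The sketch below shows one plausible shape for those basic stats; the specific fields are assumptions for illustration, not the client's actual output.

```python
# Hedged sketch of a dry run: report basic stats about the collected
# samples (and which sample would be the preview frame) without
# uploading anything.


def dry_run_stats(samples):
    """Summarize collected samples; samples carry a "similarity" score."""
    scores = [s["similarity"] for s in samples]
    return {
        "num_samples": len(samples),
        "min_similarity": min(scores) if scores else None,
        "max_similarity": max(scores) if scores else None,
        "preview_sample": samples[0]["id"] if samples else None,
    }
```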