This dataset consists of 50 images, ground-truth data, and two sets of scribbles, and it has been made publicly available. Each image contains an object that a user could unambiguously extract.


For this compilation, the publicly available GrabCut and Geodesic Star Convexity datasets were taken as the starting point. The original images and ground-truth data are from the GrabCut database.


User inputs are provided as two sets of scribbles that indicate foreground and background regions. The first set consists of the scribbles used to initialize the robot user in the Geodesic Star Convexity dataset. These use on average about four strokes per image, but they mark only a small area of the foreground object. The second set of scribbles was newly created to extend this dataset; its scribbles mark the foreground region in more detail.

These sets reflect two degrees of user effort: the second set marks foreground regions in more detail than the first.
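As a minimal sketch of how such scribbles might be used, the snippet below splits a scribble map into foreground and background seed pixels and scores a segmentation against a ground-truth mask with the Jaccard index (intersection over union). The label encoding (0 = unlabeled, 1 = foreground stroke, 2 = background stroke) and the tiny synthetic arrays are assumptions for illustration only; the actual dataset files may use a different encoding.

```python
import numpy as np

# Assumed encoding for this sketch: 0 = unlabeled, 1 = foreground
# stroke, 2 = background stroke. Adjust when loading the real files.
def split_scribbles(scribble_map):
    """Return boolean seed masks for foreground and background strokes."""
    fg_seeds = scribble_map == 1
    bg_seeds = scribble_map == 2
    return fg_seeds, bg_seeds

def jaccard(pred_mask, gt_mask):
    """Intersection over union between predicted and ground-truth masks."""
    inter = np.logical_and(pred_mask, gt_mask).sum()
    union = np.logical_or(pred_mask, gt_mask).sum()
    return inter / union if union > 0 else 1.0

# Tiny synthetic example standing in for a real image from the dataset.
scribbles = np.zeros((4, 4), dtype=np.uint8)
scribbles[1, 1:3] = 1          # a short foreground stroke
scribbles[3, :] = 2            # a background stroke along the bottom row
fg, bg = split_scribbles(scribbles)

gt = np.zeros((4, 4), dtype=bool)
gt[0:3, 0:3] = True            # ground-truth object: a 3x3 block
pred = np.zeros((4, 4), dtype=bool)
pred[0:3, 0:2] = True          # a hypothetical segmentation result

print(fg.sum(), bg.sum())      # number of fg and bg seed pixels
print(round(jaccard(pred, gt), 3))
```

Evaluation protocols for this dataset can then compare, per scribble set, how segmentation accuracy changes as the seeds cover more of the foreground.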

NOTE: This is the second of two notes. See the presentation entry for more information.