Table of Contents
- 1. clicks / scribbles
- 2. bounding box
- 3. control points
1. clicks / scribbles
[1] Interactive graph cuts for optimal boundary & region segmentation of objects in N-D images
ICCV 2001
Citations: 4519 (2019/6)
Authors: Yuri Y. Boykov, Marie-Pierre Jolly
Interaction: click / scribble
Intro: Graph cuts [1.1] lets the user mark seed pixels as belonging to the background or the foreground, then uses the max-flow/min-cut algorithm to provide a globally optimal solution for N-dimensional segmentation.
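A minimal runnable sketch of this idea (not the paper's implementation), using networkx's min-cut on a toy image with hand-placed seeds:

```python
import numpy as np
import networkx as nx

img = np.array([[0.1, 0.2, 0.8],
                [0.1, 0.7, 0.9],
                [0.2, 0.8, 0.9]])          # toy intensities
fg_seeds = {(0, 2), (2, 2)}                 # clicks marked as foreground
bg_seeds = {(0, 0), (2, 0)}                 # clicks marked as background

G = nx.DiGraph()
H, W = img.shape
sigma, inf = 0.1, 1e9

def n_weight(a, b):
    # boundary term: high capacity between similar pixels, low across intensity edges
    return float(np.exp(-((img[a] - img[b]) ** 2) / (2 * sigma ** 2)))

for y in range(H):
    for x in range(W):
        p = (y, x)
        # terminal links encode the user seeds as hard constraints
        if p in fg_seeds:
            G.add_edge("S", p, capacity=inf)
        if p in bg_seeds:
            G.add_edge(p, "T", capacity=inf)
        # neighbourhood links (4-connectivity), both directions
        for q in [(y + 1, x), (y, x + 1)]:
            if q[0] < H and q[1] < W:
                w = n_weight(p, q)
                G.add_edge(p, q, capacity=w)
                G.add_edge(q, p, capacity=w)

# min-cut separates the source ("foreground") side from the sink side
cut_value, (source_side, sink_side) = nx.minimum_cut(G, "S", "T")
mask = np.zeros_like(img, dtype=bool)
for node in source_side:
    if node != "S":
        mask[node] = True                   # pixels on the source side = foreground
print(mask)
```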
[2] Extreme clicking for efficient object annotation
ICCV 2017
Authors: Dim P. Papadopoulos, Jasper R. R. Uijlings, Frank Keller, Vittorio Ferrari
Citations: 49 (2019/7)
Interaction: click -> bounding box
Intro: [1.2] proposes an extreme-clicking strategy to replace conventional bounding-box drawing: the user clicks on the top-most, bottom-most, left-most and right-most points of an object, and these points are then incorporated into GrabCut to obtain a segmentation result.
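As an illustration, the four extreme clicks reduce to a bounding box with a few min/max operations; the coordinates below are made-up values, and the box plus the clicked boundary points would then be handed to GrabCut (see the OpenCV sketch under section 2, paper [1]):

```python
# extreme clicks as (x, y): left-most, right-most, top-most, bottom-most
clicks = [(40, 120), (260, 140), (150, 30), (160, 230)]

xs = [x for x, _ in clicks]
ys = [y for _, y in clicks]
rect = (min(xs), min(ys), max(xs) - min(xs), max(ys) - min(ys))   # (x, y, w, h)
print(rect)   # (40, 30, 220, 200)
```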
[3] Deep extreme cut: From extreme points to object segmentation
CVPR 2018
Authors: Kevis-Kokitsi Maninis, Sergi Caelles, Jordi Pont-Tuset, Luc Van Gool
Citations: 44 (2019/7)
Interaction: click / scribble
Intro: Building on [1.2], [1.3] proposes a CNN, DEXTR, that turns extreme-click annotations (left-most, right-most, top and bottom points) into object masks.
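A small sketch of the kind of input encoding DEXTR uses, under the assumption that the clicks are rasterised as a Gaussian heatmap and stacked with the RGB image as an extra channel (toy data, not the authors' code):

```python
import numpy as np

def point_heatmap(shape, points, sigma=10.0):
    """Sum of 2D Gaussians centred on the clicked extreme points; points are (x, y)."""
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    heat = np.zeros((h, w), dtype=np.float32)
    for (px, py) in points:
        heat += np.exp(-((xx - px) ** 2 + (yy - py) ** 2) / (2 * sigma ** 2))
    return np.clip(heat, 0.0, 1.0)

rgb = np.random.rand(256, 256, 3).astype(np.float32)        # placeholder image
extreme_points = [(40, 120), (200, 140), (128, 30), (130, 220)]
heat = point_heatmap(rgb.shape[:2], extreme_points)

# 4-channel tensor fed to the segmentation CNN
net_input = np.concatenate([rgb, heat[..., None]], axis=-1)
print(net_input.shape)   # (256, 256, 4)
```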
[4] Large-scale interactive object segmentation with human annotators
CVPR 2019
Authors: Rodrigo Benenson, Stefan Popov, Vittorio Ferrari
Citations: 1 (2019/7)
Interaction: click / scribble
Intro: [1.4] lets the annotator spend multiple rounds correcting the outputs of an automatic segmentation model; the model then incorporates all correction scribbles to refine the segmentation result.
[5] Fast User-Guided Video Object Segmentation by Interaction-and-Propagation Networks
CVPR 2019
Authors: Seoung Wug Oh, Joon-Young Lee, Ning Xu, Seon Joo Kim
Citations: 1 (2019/7)
Interaction: click / scribble
Intro: [1.5] proposes a multi-round training strategy for interactive video object segmentation, letting the model learn the user's intention and refine mis-segmented regions during the training phase.
[6] Interactive Full Image Segmentation by Considering All Regions Jointly
CVPR 2019
Authors: Eirikur Agustsson, Jasper R. R. Uijlings, Vittorio Ferrari
Citations: 1 (2019/7)
Interaction: scribble
Intro: [1.6] first derives an initial segmentation for the whole image based on [1.2], presents this prediction to the annotator, and then iterates: the annotator refines the mis-segmented areas and the model updates the segmentation accordingly.
[7] Interactive Image Segmentation via Backpropagating Refinement Scheme
CVPR 2019
Authors: Won-Dong Jang, Chang-Su Kim
Citations: (2019/7)
Interaction: click / scribble
Intro: [1.7] converts user annotations into interaction maps by measuring the distance of each pixel to the annotated locations; it also develops a backpropagating refinement scheme that corrects mislabeled pixels.
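A minimal sketch of such distance-based interaction maps, built here with scipy's Euclidean distance transform on toy click coordinates (the paper's exact map definition may differ):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def interaction_map(shape, clicks):
    """Distance of every pixel to the nearest click; clicks are (y, x) tuples."""
    seed = np.ones(shape, dtype=bool)
    for (y, x) in clicks:
        seed[y, x] = False                  # zeros at click positions
    return distance_transform_edt(seed)     # distance to the nearest zero (click)

h, w = 128, 128
pos_clicks = [(40, 40), (70, 80)]           # clicks on the object
neg_clicks = [(10, 120)]                    # clicks on the background

pos_map = interaction_map((h, w), pos_clicks)
neg_map = interaction_map((h, w), neg_clicks)

# the two maps are stacked with the RGB image as extra input channels for the CNN
rgb = np.random.rand(h, w, 3).astype(np.float32)
net_input = np.concatenate([rgb, pos_map[..., None], neg_map[..., None]], axis=-1)
print(net_input.shape)    # (128, 128, 5)
```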
[8] DeepIGeoS: A Deep Interactive Geodesic Framework for Medical Image Segmentation
PAMI 2019 (medical imaging)
Authors: Guotai Wang, Maria A. Zuluaga, Wenqi Li, Rosalind Pratt, Premal A. Patel, Michael Aertsen, Tom Doel, Anna L. David, Jan Deprest, Sébastien Ourselin, Tom Vercauteren
Citations: 25 (2019/7)
Interaction: click / scribble
Intro: DeepIGeoS [1.8] proposes an interactive method for 2D and 3D medical image segmentation that combines user interactions with CNNs through geodesic distance transforms, minimizing user interaction while improving segmentation results.
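A toy sketch of a geodesic distance transform in the spirit of DeepIGeoS (not the paper's implementation): Dijkstra over 4-neighbours, where stepping across a large intensity difference costs more, so the distance follows image content rather than straight lines:

```python
import heapq
import numpy as np

def geodesic_distance(img, seeds, lam=1.0):
    """img: 2-D float array; seeds: list of (y, x) scribble pixels."""
    h, w = img.shape
    dist = np.full((h, w), np.inf)
    heap = []
    for s in seeds:
        dist[s] = 0.0
        heapq.heappush(heap, (0.0, s))
    while heap:
        d, (y, x) = heapq.heappop(heap)
        if d > dist[y, x]:
            continue
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w:
                # spatial step (1) plus intensity change, weighted by lam
                step = 1.0 + lam * abs(img[ny, nx] - img[y, x])
                if d + step < dist[ny, nx]:
                    dist[ny, nx] = d + step
                    heapq.heappush(heap, (d + step, (ny, nx)))
    return dist

img = np.random.rand(64, 64)                 # placeholder image
scribble = [(32, 10), (32, 11), (32, 12)]    # toy foreground scribble
geo = geodesic_distance(img, scribble)       # fed to the CNN alongside the image
```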
2. bounding box
[1] Grabcut: Interactive foreground extraction using iterated graph cuts
ACM TOG (SIGGRAPH) 2004
Authors: C. Rother, V. Kolmogorov, A. Blake
Citations: 5613 (2019/7)
Interaction: bounding box + scribbles
Intro: GrabCut [2.1] is based on the discrete graph-cut algorithm [*] and only requires the user to draw a rectangle loosely around an object; the segmentation result is then obtained automatically.
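A minimal OpenCV sketch of this workflow, assuming a placeholder image file and made-up rectangle/scribble coordinates: initialise GrabCut from the loose rectangle, then refine with optional correction scribbles:

```python
import numpy as np
import cv2

img = cv2.imread("photo.jpg")                          # placeholder file name
mask = np.zeros(img.shape[:2], np.uint8)
bgd, fgd = np.zeros((1, 65), np.float64), np.zeros((1, 65), np.float64)

# 1) segmentation from the loose rectangle alone
rect = (50, 40, 300, 260)                              # (x, y, w, h) drawn by the user
cv2.grabCut(img, mask, rect, bgd, fgd, 5, cv2.GC_INIT_WITH_RECT)

# 2) optional correction scribbles: mark a few pixels as definite fg / bg
mask[120:125, 200:260] = cv2.GC_FGD                    # foreground scribble
mask[10:15, 10:80] = cv2.GC_BGD                        # background scribble
cv2.grabCut(img, mask, None, bgd, fgd, 5, cv2.GC_INIT_WITH_MASK)

# definite + probable foreground labels form the final mask
result = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0).astype(np.uint8)
```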
[2] Image segmentation with a bounding box prior
ICCV 2009
Authors: Victor Lempitsky, Pushmeet Kohli, Carsten Rother, Toby Sharp
Citations: 339 (2019/6)
Interaction: bounding box
Intro: [2.2] observes that the bounding box can be used not only to exclude background information but also as a topology prior, preventing the segmentation result from shrinking.
[3] MILCut: A Sweeping Line Multiple Instance Learning Paradigm for Interactive Image Segmentation
CVPR 2014
Authors: Jiajun Wu, Yibiao Zhao, Jun-Yan Zhu, Siwei Luo, Zhuowen Tu
Citations: 69 (2019/7)
Interaction: bounding box
Intro: [2.3] proposes a sweeping-line strategy to perform segmentation within the user-provided bounding box, converting interactive image segmentation into a multiple instance learning problem.
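A simplified sketch of the multiple-instance construction (horizontal sweeping lines only; toy data, not the authors' code): every line crossing a tight box must contain some foreground, so it becomes a positive bag, while lines outside the box yield negative bags:

```python
import numpy as np

img = np.random.rand(120, 160, 3)           # placeholder image
x0, y0, x1, y1 = 40, 30, 120, 90            # user-drawn (tight) bounding box

# sweeping lines inside the box: positive bags (each contains >= 1 foreground pixel)
positive_bags = [img[y, x0:x1] for y in range(y0, y1)]
# lines outside the box: negative bags (background only)
negative_bags = [img[y, :] for y in range(0, y0)] + \
                [img[y, :] for y in range(y1, img.shape[0])]

# a MIL classifier is then trained so each positive bag has at least one positive
# (foreground) instance while negative bags contain none
print(len(positive_bags), len(negative_bags))
```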
[4] Deep grabcut for object selection
CVPR 2017
Authors: Ning Xu, Brian Price, Scott Cohen, Jimei Yang, Thomas Huang
Citations: 14 (2019/6)
Interaction: bounding box
Intro: [2.4] treats the user-provided bounding box as a soft constraint: the box is encoded as an extra input to a segmentation network rather than hard-restricting the output to the box region.
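A sketch of one plausible soft-constraint encoding, assuming the box is converted to a distance map and stacked with the image as an extra network input channel (toy values, not the paper's code):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

h, w = 200, 200
x0, y0, x1, y1 = 50, 60, 150, 170            # user-drawn bounding box (toy values)

# distance of every pixel to the rectangle's boundary
on_boundary = np.zeros((h, w), dtype=bool)
on_boundary[y0, x0:x1 + 1] = on_boundary[y1, x0:x1 + 1] = True
on_boundary[y0:y1 + 1, x0] = on_boundary[y0:y1 + 1, x1] = True
dist_map = distance_transform_edt(~on_boundary)

# the distance map is a soft cue: the network may still segment slightly outside the box
rgb = np.random.rand(h, w, 3).astype(np.float32)
net_input = np.concatenate([rgb, dist_map[..., None]], axis=-1)   # 4-channel input
```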
[5] LooseCut: Interactive Image Segmentation with Loosely Bounded Boxes
CVPR 2015
Authors: Hongkai Yu, Youjie Zhou, Hui Qian, Min Xian, Yuewei Lin, Dazhou Guo, Kang Zheng, Kareem Abdelfatah, Song Wang
Citations: 15 (2019/6)
Interaction: bounding box
Intro: [2.5] targets the case where the user-drawn bounding box only loosely encloses the object, and designs the segmentation model so that it remains accurate under such loose boxes.
[6] Interactive Medical Image Segmentation Using Deep Learning With Image-Specific Fine Tuning
TMI 2018 (medical imaging)
Authors: Guotai Wang, Wenqi Li, Maria A. Zuluaga, Rosalind Pratt, Premal A. Patel, Michael Aertsen, Tom Doel, Anna L. David, Jan Deprest, Sébastien Ourselin, Tom Vercauteren
Citations: 60 (2019/7)
Interaction: bounding box and optional scribbles
Intro: BIFSeg [2.6] designs a 2D and a 3D CNN that combine a bounding box with optional user scribbles to achieve higher precision, and proposes image-specific fine-tuning to address the problem that CNNs do not generalize well to object classes absent from the training set.
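A schematic PyTorch sketch of image-specific fine-tuning in this spirit (not the BIFSeg implementation): a few gradient steps on the single test image, supervised only at scribbled pixels, with a tiny stand-in network:

```python
import torch
import torch.nn.functional as F

def image_specific_finetune(model, image, fg_scribble, bg_scribble, steps=20, lr=1e-4):
    """image: (1, C, H, W); fg_scribble / bg_scribble: (H, W) boolean masks."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    target = torch.zeros(image.shape[-2:], dtype=torch.long)
    target[fg_scribble] = 1
    supervised = fg_scribble | bg_scribble            # loss only where the user scribbled
    for _ in range(steps):
        logits = model(image)                         # (1, 2, H, W) fg/bg scores
        loss = F.cross_entropy(logits[0].permute(1, 2, 0)[supervised],
                               target[supervised])
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model

# toy usage with a stand-in one-layer "network" and random data
model = torch.nn.Conv2d(3, 2, kernel_size=3, padding=1)
image = torch.rand(1, 3, 64, 64)
fg = torch.zeros(64, 64, dtype=torch.bool); fg[30:34, 30:34] = True
bg = torch.zeros(64, 64, dtype=torch.bool); bg[:5, :] = True
image_specific_finetune(model, image, fg, bg, steps=5)
```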
3. control points
[1] Annotating object instances with a polygon-rnn
CVPR 2017
Authors: Lluis Castrejon, Kaustav Kundu, Raquel Urtasun, Sanja Fidler
Citations: 59 (2019/7)
Interaction: control points
Intro: Polygon-RNN [3.1] treats segmentation as a polygon prediction problem, i.e., it predicts the vertices of a polygon that outlines the object, and it allows the user to step in at any time and correct a vertex if needed.
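A minimal sketch of the polygon-as-annotation workflow (toy vertices, not the Polygon-RNN model): the user corrects one predicted vertex and the polygon is rasterised into a mask with OpenCV:

```python
import numpy as np
import cv2

# predicted polygon vertices as (x, y)
vertices = np.array([[30, 40], [120, 35], [140, 100], [80, 150], [25, 110]])
vertices[2] = [150, 95]            # the user drags one badly placed vertex

mask = np.zeros((200, 200), np.uint8)
cv2.fillPoly(mask, [vertices.astype(np.int32)], 1)   # rasterise the polygon into a mask
```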
[2] Efficient interactive annotation of segmentation datasets with polygon-rnn++
CVPR 2018
Authors: David Acuna, Huan Ling, Amlan Kar, Sanja Fidler
Citations: 43 (2019/7)
Interaction: control points
Intro: Polygon-RNN++ [3.2] builds on Polygon-RNN, further proposing to train the network with reinforcement learning and using a Graph Neural Network to increase the resolution of the output polygon.
[3] Fast Interactive Object Annotation with Curve-GCN
CVPR 2019
Authors: Huan Ling, Jun Gao, Amlan Kar, Wenzheng Chen, Sanja Fidler
Citations: 2 (2019/7)
Interaction: control points
Intro: Unlike the sequential Polygon-RNN, [3.3] treats object annotation as a regression problem: the model predicts all vertices simultaneously using a Graph Convolutional Network (GCN).
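A schematic numpy sketch of this "all vertices at once" idea (random placeholder features and weights, not the Curve-GCN architecture): one graph-convolution step over a cycle of control points regresses an offset for every point simultaneously:

```python
import numpy as np

N, F = 20, 16                                   # number of control points, feature size
feats = np.random.rand(N, F)                    # per-point features (e.g. sampled from a CNN map)

# cycle-graph adjacency with self-loops, row-normalised
A = np.eye(N) + np.roll(np.eye(N), 1, axis=1) + np.roll(np.eye(N), -1, axis=1)
A = A / A.sum(axis=1, keepdims=True)

W = np.random.rand(F, F)                        # graph-conv weights (placeholder)
hidden = np.maximum(A @ feats @ W, 0.0)         # one GCN layer: aggregate + transform + ReLU

W_out = np.random.rand(F, 2)                    # regression head
offsets = hidden @ W_out                        # (N, 2): every vertex is moved in parallel

# control points initialised on a circle, updated by the regressed offsets
angles = np.linspace(0, 2 * np.pi, N, endpoint=False)
points = np.stack([np.cos(angles), np.sin(angles)], axis=1) + offsets
```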