Please use this identifier to cite or link to this item:
http://hdl.handle.net/10071/23659
Author(s): | Hamad, M.; Conti, C.; Almeida, A. M. de; Nunes, P.; Soares, L. D. |
Date: | 2021 |
Title: | SLFS: Semi-supervised light-field foreground-background segmentation |
Event title: | 2021 Telecoms Conference, ConfTELE 2021 |
ISBN: | 978-1-6654-1588-0 |
DOI (Digital Object Identifier): | 10.1109/ConfTELE50222.2021.9435461 |
Keywords: | Light field segmentation; Foreground-background segmentation; Superpixels; Graph-cut; Semi-supervised segmentation |
Abstract: | Efficient segmentation is a fundamental problem in computer vision and image processing. Achieving accurate segmentation for 4D light field images is challenging due to the huge amount of data involved and the intrinsic redundancy in this type of image. Since fully automatic segmentation is usually difficult, and since the regions of interest differ between users and tasks, this paper proposes an improved semi-supervised segmentation approach for 4D light field images based on an efficient graph structure and user scribbles. The recent view-consistent 4D light field superpixels algorithm proposed by Khan et al. is used as an automatic pre-processing step to ensure spatio-angular consistency and to represent the image graph efficiently. Segmentation is then achieved via graph-cut optimization. Experimental results on synthetic and real light field images indicate that the proposed approach can extract objects consistently across views with few user interactions, making it suitable for applications such as augmented reality or object-based coding. |
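The abstract's pipeline (superpixel pre-processing, then scribble-seeded graph-cut) can be illustrated with a toy sketch. Everything here is illustrative, not from the paper: the superpixel labels, adjacency weights, and scribble assignments are invented, and a minimal Edmonds-Karp max-flow stands in for the optimized graph-cut solvers used in practice. Foreground scribbles are tied to the source and background scribbles to the sink with effectively infinite capacity; the min cut then separates the superpixel graph along its weakest (least similar) boundary.

```python
from collections import deque, defaultdict

def min_cut_foreground(graph, source, sink):
    """Edmonds-Karp max-flow; returns the source side of the minimum cut."""
    # Build residual capacities from the directed capacity graph.
    residual = defaultdict(lambda: defaultdict(int))
    for u in graph:
        for v, cap in graph[u].items():
            residual[u][v] += cap
    while True:
        # BFS for a shortest augmenting path from source to sink.
        parent = {source: None}
        q = deque([source])
        while q and sink not in parent:
            u = q.popleft()
            for v, cap in residual[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if sink not in parent:
            break  # no augmenting path left: flow is maximum
        # Trace the path, find its bottleneck, and push flow along it.
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[u][v] for u, v in path)
        for u, v in path:
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
    # Min cut = nodes still reachable from the source in the residual graph.
    reachable, q = {source}, deque([source])
    while q:
        u = q.popleft()
        for v, cap in residual[u].items():
            if cap > 0 and v not in reachable:
                reachable.add(v)
                q.append(v)
    return reachable

INF = 10**6
# Hypothetical superpixel adjacency with similarity weights; the weak
# C-D edge plays the role of an object boundary.
edges = {('A', 'B'): 10, ('B', 'C'): 10, ('C', 'D'): 1, ('D', 'E'): 10}
graph = defaultdict(dict)
for (u, v), w in edges.items():
    graph[u][v] = w
    graph[v][u] = w
# User scribbles: superpixel A marked foreground, E marked background.
graph['S']['A'] = INF  # source ties foreground scribbles
graph['E']['T'] = INF  # sink ties background scribbles

fg = min_cut_foreground(graph, 'S', 'T') - {'S'}
print(sorted(fg))  # → ['A', 'B', 'C']
```

The cut lands on the weakest edge (C-D), so the scribbled labels propagate through strongly connected superpixels, which is the intuition behind the graph-cut step; real systems replace Edmonds-Karp with faster max-flow algorithms and derive the weights from color and disparity cues.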
Peerreviewed: | yes |
Access type: | Open Access |
Appears in Collections: | ISTAR-CRI - Communications to international conferences; IT-CRI - Communications to international conferences |
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
conferenceobject_82263.pdf | Accepted Version | 19.07 MB | Adobe PDF | View/Open |