Integrated Foreground Segmentation and Boundary Matting for Live Videos
Rs3,500.00
10000 in stock
Description
This project segments the foreground from the background in order to extract the region of interest. It is used not only to segment the region of interest but also to track the activity of the person in the scene. The approach is evaluated on standard datasets such as the UCLA and VIRAT datasets, whose videos are processed both for segmentation and for activity tracking. The model provides a generic representation of an activity sequence that extends to any number of objects and interactions in a video, and we show that recognizing activities in a video can be posed as an inference problem on a graph.

Rather than modeling activities in videos individually, we jointly model and recognize related activities in a scene using both motion and context features. This is motivated by the observation that activities related in space and time rarely occur independently and can serve as context for each other. We propose a two-layer conditional random field (CRF) model that represents action segments and activities hierarchically. The model allows motion and various context features to be integrated at different levels and automatically learns the statistics that capture the patterns of those features.
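The description does not specify how the foreground/background segmentation step is implemented, so the following is only a minimal sketch of that stage, assuming OpenCV's MOG2 background subtractor as a stand-in and a placeholder video path ("input_video.mp4"); it is not the project's actual pipeline.

```python
# Minimal foreground/background segmentation sketch for a video stream.
# Assumptions (not from the product description): OpenCV's MOG2 background
# subtractor stands in for the segmentation step; "input_video.mp4" is a
# placeholder path.
import cv2


def segment_foreground(video_path: str) -> None:
    cap = cv2.VideoCapture(video_path)
    subtractor = cv2.createBackgroundSubtractorMOG2(
        history=500, varThreshold=16, detectShadows=True
    )

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Non-zero pixels in the mask mark the moving region of interest.
        mask = subtractor.apply(frame)
        # Remove small speckles so noise does not count as foreground.
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
        # Keep only confident foreground (MOG2 marks shadow pixels as 127).
        _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)
        roi = cv2.bitwise_and(frame, frame, mask=mask)
        cv2.imshow("foreground", roi)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break

    cap.release()
    cv2.destroyAllWindows()


if __name__ == "__main__":
    segment_foreground("input_video.mp4")  # placeholder path
```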
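To illustrate what "posing activity recognition as an inference problem on a graph" means in practice, the sketch below runs MAP inference over a chain of action segments with unary (motion) and pairwise (context) log-potentials. It deliberately simplifies the two-layer CRF described above to a single chain, and the label set and scores are made-up toy values, not learned parameters from the project.

```python
# Toy sketch: activity labeling as MAP inference on a graph (simplified to a
# single chain over action segments; labels and scores are illustrative only).
import numpy as np

LABELS = ["walk", "run", "meet"]  # hypothetical activity labels


def map_inference(unary: np.ndarray, pairwise: np.ndarray) -> list:
    """Viterbi-style MAP inference over a chain of action segments.

    unary:    (T, K) log-potentials from motion features per segment.
    pairwise: (K, K) log-potentials encoding how activities co-occur in time.
    """
    T, K = unary.shape
    score = np.zeros((T, K))
    backptr = np.zeros((T, K), dtype=int)
    score[0] = unary[0]
    for t in range(1, T):
        # For each current label, pick the best-scoring previous label.
        cand = score[t - 1][:, None] + pairwise  # rows: prev label, cols: current
        backptr[t] = np.argmax(cand, axis=0)
        score[t] = unary[t] + np.max(cand, axis=0)
    # Backtrack the best labeling from the last segment.
    path = [int(np.argmax(score[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(int(backptr[t, path[-1]]))
    return path[::-1]


if __name__ == "__main__":
    # 4 action segments, 3 candidate activities (toy numbers).
    unary = np.log(np.array([
        [0.7, 0.2, 0.1],
        [0.6, 0.3, 0.1],
        [0.2, 0.2, 0.6],
        [0.1, 0.2, 0.7],
    ]))
    # Context prior: staying in the same activity is more likely than switching.
    pairwise = np.log(np.full((3, 3), 0.15) + np.eye(3) * 0.55)
    labels = map_inference(unary, pairwise)
    print([LABELS[i] for i in labels])
```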