Sketch Generation and Applications


Joint Stroke Tracing and Correspondence for 2D Animation

Haoran Mo, Chengying Gao* and Ruomei Wang

Intro: We propose, for the first time, a joint stroke tracing and correspondence approach. Given consecutive raster keyframes along with a single vector image of the starting frame as guidance, the approach generates vector drawings for the remaining keyframes while ensuring one-to-one stroke correspondence. Our framework, trained on clean line drawings, generalizes to rough sketches, and the generated results can be imported into inbetweening systems to produce inbetween sequences, making the method compatible with the standard 2D animation workflow. An adaptive spatial transformation module (ASTM) is introduced to handle non-rigid motions and stroke distortion. We also collect a training dataset of 10k+ pairs of raster frames and their vector drawings with stroke correspondence.
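
The paper's ASTM is not reproduced here, but the underlying idea of a learned spatial transformation can be illustrated with a standard spatial-transformer block; the module below is a minimal sketch with assumed tensor shapes, not the paper's exact design.

```python
# Minimal spatial-transformation sketch (hypothetical shapes, not the paper's exact ASTM).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialTransform(nn.Module):
    """Predicts an affine transform from a feature map and warps the map accordingly."""
    def __init__(self, channels: int):
        super().__init__()
        self.loc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(channels, 6),  # 2x3 affine parameters
        )
        # Initialize to the identity transform so training starts from "no warp".
        self.loc[-1].weight.data.zero_()
        self.loc[-1].bias.data.copy_(torch.tensor([1., 0., 0., 0., 1., 0.]))

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        theta = self.loc(feat).view(-1, 2, 3)
        grid = F.affine_grid(theta, feat.size(), align_corners=False)
        return F.grid_sample(feat, grid, align_corners=False)

x = torch.randn(1, 32, 64, 64)
print(SpatialTransform(32)(x).shape)  # torch.Size([1, 32, 64, 64])
```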

ACM Transactions on Graphics (Presented at SIGGRAPH 2024)  (CCF-A)
[Paper] [Code] [Project Page]

Text-based Vector Sketch Editing with Image Editing Diffusion Prior

Haoran Mo, Xusheng Lin, Chengying Gao* and Ruomei Wang

Intro: We present a framework for text-based vector sketch editing to improve the efficiency of graphic design. The key idea is to transfer prior information from raster-level diffusion models, especially those used in image editing, to the vector-sketch-oriented task. The framework offers three editing modes and allows iterative editing. To meet the requirement of modifying only the intended parts while leaving the other strokes unchanged, we introduce a stroke-level local editing scheme that automatically produces an editing mask reflecting locally editable regions and modifies only the strokes within those regions.
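
As a rough illustration of stroke-level local editing, the snippet below selects strokes that fall mostly inside a binary editing mask; the stroke representation and threshold are hypothetical stand-ins, not the paper's actual scheme.

```python
# Hypothetical sketch: keep edits to strokes lying mostly inside the editing mask.
import numpy as np

def editable_strokes(strokes, mask, thresh=0.5):
    """strokes: list of (N_i, 2) integer (x, y) point arrays; mask: (H, W) bool array."""
    selected = []
    for i, pts in enumerate(strokes):
        inside = mask[pts[:, 1], pts[:, 0]]  # mask is indexed (row=y, col=x)
        if inside.mean() >= thresh:          # stroke lies mostly in the editable region
            selected.append(i)
    return selected

mask = np.zeros((64, 64), dtype=bool); mask[:32, :32] = True
strokes = [np.array([[5, 5], [10, 8]]), np.array([[50, 50], [60, 55]])]
print(editable_strokes(strokes, mask))  # [0]
```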

International Conference on Multimedia & Expo (ICME, 2024)  (CCF-B)
[Paper] [Code]

Video-Driven Sketch Animation via Cyclic Reconstruction Mechanism

Zhuo Xie, Haoran Mo and Chengying Gao*

Intro: Considering the time-consuming manual workflow in 2D sketch animation production, we present an automatic solution that uses videos as references to animate static sketch images. This involves extracting motion from the videos and injecting it into the sketches to produce animated sketch sequences that preserve the appearance of the source sketches. To reduce blurry artifacts caused by complex motions and maintain stroke continuity, we incorporate inner masks of the sketches as explicit guidance to indicate inner regions and ensure component integrity. Moreover, to bridge the domain gap between video frames and sketches when modelling the motions, we introduce a cyclic reconstruction mechanism that increases compatibility across domains and improves motion consistency between the sketch animation and the driving video.
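
A loose sketch of the cyclic-reconstruction idea, under assumed interfaces (`motion_net` and `generator` are hypothetical stand-ins): motion is extracted, injected into the sketch, then re-extracted from the result so the two passes can be compared.

```python
# Simplified cyclic-reconstruction loss (assumed interfaces, not the paper's exact design).
import torch

def cyclic_loss(motion_net, generator, video_frame, sketch):
    m = motion_net(video_frame)          # motion from the driving frame
    animated = generator(sketch, m)      # inject the motion into the sketch domain
    m_back = motion_net(animated)        # re-extract motion from the animated result
    recon = generator(sketch, m_back)    # reconstruct through the cycle
    return torch.mean(torch.abs(animated - recon))

# Toy stand-ins showing the interface only:
motion_net = lambda x: x.mean(dim=(2, 3))
generator = lambda s, m: s + m[:, :, None, None]
frame, sketch = torch.randn(1, 3, 64, 64), torch.randn(1, 3, 64, 64)
print(cyclic_loss(motion_net, generator, frame, sketch).item() >= 0)  # True
```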

International Conference on Multimedia & Expo (ICME, 2024)  (CCF-B)
[Paper]

Multi-instance Referring Image Segmentation of Scene Sketches based on Global Reference Mechanism

Peng Ling, Haoran Mo and Chengying Gao*

Intro: We propose GRM-Net, a one-stage framework tailored for multi-instance referring image segmentation of scene sketches. We extract language features from the expression and fuse them into a conventional instance segmentation pipeline, filtering out undesired instances in a coarse-to-fine manner while keeping the matched ones. To model the relative arrangement of the objects and the relationships among them from a global view, we propose a global reference mechanism (GRM) that assigns references to each detected candidate to identify its position.

Pacific Graphics (PG 2022)  (CCF-B)
[Paper] [Code]

Line Art Colorization Based on Explicit Region Segmentation

Ruizhi Cao, Haoran Mo and Chengying Gao*

Intro: We introduce an explicit segmentation fusion mechanism to help colorization frameworks avoid color-bleeding artifacts. This mechanism explicitly provides region segmentation information for the colorization process, so that the colorization model learns to avoid assigning the same color across regions with different semantics, or inconsistent colors inside an individual region. The mechanism is designed in a plug-and-play manner, so it can be applied to a diversity of line art colorization frameworks with various kinds of user guidance.
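
A minimal sketch of what a plug-and-play fusion layer could look like, assuming the segmentation and colorization feature maps share spatial size; the 1x1-conv design is illustrative, not the paper's exact mechanism.

```python
# Illustrative fusion layer: inject segmentation features into a colorization decoder.
import torch
import torch.nn as nn

class SegFusion(nn.Module):
    def __init__(self, color_ch: int, seg_ch: int):
        super().__init__()
        self.fuse = nn.Conv2d(color_ch + seg_ch, color_ch, kernel_size=1)

    def forward(self, color_feat, seg_feat):
        # Concatenate along channels, then project back to the colorization width.
        return self.fuse(torch.cat([color_feat, seg_feat], dim=1))

c, s = torch.randn(1, 64, 32, 32), torch.randn(1, 8, 32, 32)
print(SegFusion(64, 8)(c, s).shape)  # torch.Size([1, 64, 32, 32])
```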

Computer Graphics Forum (Pacific Graphics 2021) (*oral)  (CCF-B)
[Paper] [Code]

General Virtual Sketching Framework for Vector Line Art

Haoran Mo, Edgar Simo-Serra, Chengying Gao*, Changqing Zou and Ruomei Wang

Intro: Vector line art plays an important role in graphic design; however, it is tedious to create manually. We introduce a general framework to produce line drawings from a wide variety of images by learning a mapping from raster image space to vector image space. Our approach is based on a recurrent neural network that draws the lines one by one. A differentiable rasterization module allows for training with only supervised raster data. We use a dynamic window around a virtual pen while drawing lines, implemented with proposed aligned cropping and differentiable pasting modules. Furthermore, we develop a stroke regularization loss that encourages the model to use fewer and longer strokes to simplify the resulting vector image. Ablation studies and comparisons with existing methods corroborate the effectiveness of our approach, which generates visually better results in less computation time while generalizing better to a diversity of images and applications.
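
As one way to read the stroke regularization idea, the toy loss below penalizes the expected number of pen-down transitions, which pushes the model toward fewer, longer strokes; the formulation is a simplified assumption, not the paper's exact loss.

```python
# Toy stroke-count regularizer (simplified assumption, not the paper's exact loss).
import torch

def stroke_regularization(pen_down_prob: torch.Tensor) -> torch.Tensor:
    """pen_down_prob: (T,) probabilities that the virtual pen is drawing at each step."""
    # A new stroke starts whenever the pen goes from up to down.
    starts = pen_down_prob[1:] * (1.0 - pen_down_prob[:-1])
    return starts.sum() + pen_down_prob[0]  # count the first stroke too

print(stroke_regularization(torch.tensor([1., 1., 0., 1., 1.])))  # tensor(2.) — two strokes
```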

ACM Transactions on Graphics (SIGGRAPH 2021, Journal track) (*oral)  (CCF-A)
[Paper] [Code] [Project Page]

SketchyCOCO: Image Generation from Freehand Scene Sketches

Chengying Gao, Qi Liu, Qi Xu, Jianzhuang Liu, Limin Wang, Changqing Zou*

Intro: We introduce the first method for automatic image generation from scene-level freehand sketches. Our model allows for controllable image generation by specifying the synthesis goal via freehand sketches. The key contribution is an attribute vector bridged generative adversarial network called edgeGAN, which supports high-visual-quality image content generation without using freehand sketches as training data. We build a large-scale composite dataset called SketchyCOCO to comprehensively evaluate our solution. We validate our approach on both object-level and scene-level image generation on SketchyCOCO, and demonstrate the method's capacity to generate realistic complex scene-level images from a variety of freehand sketches through quantitative and qualitative results and ablation studies.

Computer Vision and Pattern Recognition (CVPR, 2020) (*oral)  (CCF-A)
[Paper] [Code]

Language-based Colorization of Scene Sketches

Changqing Zou#, Haoran Mo# (joint first authors), Chengying Gao*, Ruofei Du and Hongbo Fu

Intro: This paper presents, for the first time, a language-based system for interactive colorization of scene sketches based on semantic comprehension. The proposed system is built upon deep neural networks trained on a large-scale repository of scene sketches and cartoon-style color images with text descriptions. Given a scene sketch, our system allows users, via language-based instructions, to interactively localize and colorize specific foreground object instances to meet various colorization requirements in a progressive way. We demonstrate the effectiveness of our approach via comprehensive experimental results, including alternative studies, comparisons with state-of-the-art methods, and generalization user studies. Given the unique characteristics of language-based inputs, we envision a combination of our interface with a traditional scribble-based interface for a practical multimodal colorization system benefiting various applications.

ACM Transactions on Graphics (SIGGRAPH Asia 2019, Journal track) (*oral)  (CCF-A)
[Paper] [Code]

SketchyScene: Richly-Annotated Scene Sketches

Changqing Zou#, Qian Yu#, Ruofei Du, Haoran Mo, Yi-Zhe Song, Tao Xiang, Chengying Gao, Baoquan Chen*, and Hao Zhang

Intro: This paper constructs SketchyScene, the first large-scale dataset of scene sketches. We demonstrate its potential impact by training new computational models for semantic segmentation of scene sketches.

European Conference on Computer Vision (ECCV, 2018)  (CCF-B)
[Paper] [Code]

Back to top

Image Editing and Synthesis

Including: image inpainting, color restoration, color transfer and non-photorealistic rendering.


Controllable Anime Image Editing via Probability of Attribute Tags

Zhenghao Song, Haoran Mo, and Chengying Gao*

Intro: Editing anime images via probabilities of attribute tags allows controlling the degree of manipulation in an intuitive and convenient manner. Existing methods fall short in progressive modification and in preserving unintended regions of the input image. We propose a controllable anime image editing framework based on adjusting tag probabilities, in which a probability encoding network (PEN) encodes the probabilities into features that capture their continuous characteristics. The encoded features can thus direct the generative process of a pre-trained diffusion model and facilitate linear manipulation. We also introduce a local editing module that automatically identifies the intended regions and constrains the edits to those regions only, leaving the others unchanged. Comprehensive comparisons with existing methods indicate the effectiveness of our framework in both one-shot and linear editing modes, and results in additional applications further demonstrate its generalization ability.
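
A hypothetical sketch of the linear editing mode: interpolating a tag probability and passing it through a stand-in encoder (a toy PEN; all shapes assumed) yields a continuous path of conditioning features.

```python
# Toy stand-in for probability encoding (shapes and architecture assumed).
import torch
import torch.nn as nn

pen = nn.Sequential(nn.Linear(1, 64), nn.ReLU(), nn.Linear(64, 128))  # stand-in PEN

def edit_steps(p_src: float, p_tgt: float, n: int = 5) -> torch.Tensor:
    probs = torch.linspace(p_src, p_tgt, n).unsqueeze(1)  # (n, 1) tag probabilities
    return pen(probs)                                     # (n, 128) conditioning features

print(edit_steps(0.1, 0.9).shape)  # torch.Size([5, 128])
```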

Pacific Graphics (PG, 2024)  (CCF-B)
[Paper] [Code]

CAP-VSTNet: Content Affinity Preserved Versatile Style Transfer

Linfeng Wen, Chengying Gao*, Changqing Zou

Intro: Loss of content affinity, including feature and pixel affinity, is a main cause of artifacts in photorealistic and video style transfer. This paper proposes a new framework named CAP-VSTNet, consisting of a new reversible residual network and an unbiased linear transform module, for versatile style transfer. The reversible residual network preserves content affinity without introducing the redundant information of traditional reversible networks, and hence facilitates better stylization. Empowered by a Matting Laplacian training loss that addresses the pixel affinity loss caused by the linear transform, the proposed framework is applicable and effective for versatile style transfer.
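
The reversible design can be illustrated with a standard additive-coupling (RevNet-style) block, which is exactly invertible and therefore loses no content information; this is the generic construction, not CAP-VSTNet's specific architecture.

```python
# Generic additive-coupling reversible block: the input is exactly recoverable.
import torch
import torch.nn as nn

class RevBlock(nn.Module):
    def __init__(self, ch: int):
        super().__init__()
        self.f = nn.Conv2d(ch, ch, 3, padding=1)
        self.g = nn.Conv2d(ch, ch, 3, padding=1)

    def forward(self, x1, x2):
        y1 = x1 + self.f(x2)
        y2 = x2 + self.g(y1)
        return y1, y2

    def inverse(self, y1, y2):
        x2 = y2 - self.g(y1)
        x1 = y1 - self.f(x2)
        return x1, x2

blk = RevBlock(8)
x1, x2 = torch.randn(1, 8, 16, 16), torch.randn(1, 8, 16, 16)
y1, y2 = blk(x1, x2)
r1, r2 = blk.inverse(y1, y2)
print(torch.allclose(x1, r1, atol=1e-5), torch.allclose(x2, r2, atol=1e-5))  # True True
```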

Computer Vision and Pattern Recognition (CVPR, 2023) (CCF-A)
[Paper] [Code]

Structural Prior Guided Image Inpainting for Complex Scene

Shuxin Wei, Chengying Gao

Intro: Existing deep-learning-based image inpainting methods achieve plausible results for small corrupted regions with rich context information, but they fail to generate semantically reasonable results and clear boundaries for complex scenes. In this paper, we disentangle inpainting for complex scenes into two stages: semantic segmentation map inpainting and segmentation-guided texture inpainting. We use a feature correspondence matrix to find correlations between segmentation maps and the known region of corrupted images, and thereby realize texture generation for the corrupted region.
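
A minimal sketch of a feature correspondence matrix, assuming flattened per-location features (shapes hypothetical): cosine similarity between segmentation-map features and known-region image features.

```python
# Cosine-similarity correspondence between two flattened feature maps (shapes assumed).
import torch
import torch.nn.functional as F

def correspondence(seg_feat: torch.Tensor, img_feat: torch.Tensor) -> torch.Tensor:
    """seg_feat: (C, Hs*Ws), img_feat: (C, Hi*Wi) -> similarity matrix (Hs*Ws, Hi*Wi)."""
    s = F.normalize(seg_feat, dim=0)  # unit-normalize each location's feature
    i = F.normalize(img_feat, dim=0)
    return s.t() @ i

print(correspondence(torch.randn(32, 100), torch.randn(32, 400)).shape)  # torch.Size([100, 400])
```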

International Conference on Multimedia & Expo (ICME, 2021) (*oral)  (CCF-B)
[Paper]

Complex Object Inpainting Based on Sparse Structure (基于稀疏结构的复杂物体修复)

Chengying Gao, Xian'er Xu, Yanmei Luo, Dong Wang

Chinese Journal of Computers (计算机学报), 2019

An edge-refined vectorized deep colorization model for grayscale-to-color images

Zhuo Su, Xiangguo Liang, Jiaming Guo, Chengying Gao, Xiaonan Luo

Neurocomputing, 2018
[Paper]

PencilArt: A Chromatic Penciling Style Generation Framework

Chengying Gao, Mengyue Tang, Xiangguo Liang, Zhuo Su, Changqing Zou

Computer Graphics Forum (CGF), 2018  (CCF-B)
[Paper]

Back to top

3D Pose Estimation and Motion Generation

Unpaired Motion Style Transfer with Motion-oriented Projection Flow Network

Yue Huang, Haoran Mo, Xiao Liang, Chengying Gao*

Intro: In this paper, we propose a novel unpaired motion style transfer framework that generates complete stylized motions with consistent content. We introduce a motion-oriented projection flow network (M-PFN) designed for temporal motion data, which encodes the content and style motions into latent codes and decodes the stylized features produced by adaptive instance normalization (AdaIN) into stylized motions. The M-PFN contains dedicated operations and modules, e.g., a Transformer, to process the temporal information of motions, which helps to improve the continuity of the generated motions.
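
AdaIN itself is standard; the sketch below applies it over the temporal axis of motion features, with (batch, channels, time) shapes assumed for illustration.

```python
# Standard AdaIN over temporal motion features; (batch, channels, time) shapes assumed.
import torch

def adain(content: torch.Tensor, style: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    c_mean = content.mean(dim=2, keepdim=True)
    c_std = content.std(dim=2, keepdim=True) + eps
    s_mean = style.mean(dim=2, keepdim=True)
    s_std = style.std(dim=2, keepdim=True) + eps
    # Re-normalize content statistics to match the style statistics.
    return s_std * (content - c_mean) / c_std + s_mean

c, s = torch.randn(2, 64, 120), torch.randn(2, 64, 120)
print(adain(c, s).shape)  # torch.Size([2, 64, 120])
```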

International Conference on Multimedia & Expo (ICME, 2022) (*oral)  (CCF-B)
[Paper]

3D Interacting Hand Pose and Shape Estimation from a Single RGB Image

Chengying Gao*, Yujia Yang, Wensheng Li

Intro: This paper proposes a network called GroupPoseNet that uses a grouping strategy to estimate interacting hand pose and shape from a single RGB image. GroupPoseNet extracts the left- and right-hand features separately and thus avoids mutual interference between the interacting hands. Empowered by a novel up-sampling block called MF-Block, which predicts 2D heat-maps progressively by fusing image features, hand pose features and multi-scale features, GroupPoseNet is effective and robust to severe occlusions. For effective 3D hand reconstruction, we design a transformer-based inverse kinematics module (termed TikNet) to generate the 3D hand mesh.

Neurocomputing, 2022
[Paper]

Back to top

Garment Modeling and Virtual Try-on

Controllable Garment Image Synthesis Integrated with Frequency Domain Features

Xinru Liang, Haoran Mo, Chengying Gao*

Intro: We propose a controllable garment image synthesis framework that takes as inputs an outline sketch and a texture patch and generates garment images with complicated and diverse texture patterns. To improve global texture expansion, we exploit frequency domain features in the generative process, which are obtained from a Fast Fourier Transform (FFT) and able to represent the periodic information of the patterns. We also introduce a perceptual loss in the frequency domain to measure the similarity of two texture pattern patches in terms of their intrinsic periodicity and regularity.
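
A simplified version of the frequency-domain similarity idea: comparing FFT amplitude spectra of two patches captures their periodicity; the exact perceptual-loss formulation in the paper may differ.

```python
# Simplified frequency-domain similarity term (not the paper's exact perceptual loss).
import torch

def freq_loss(patch_a: torch.Tensor, patch_b: torch.Tensor) -> torch.Tensor:
    """patch_*: (B, C, H, W) texture patches; compares their FFT amplitude spectra."""
    amp_a = torch.fft.fft2(patch_a).abs()
    amp_b = torch.fft.fft2(patch_b).abs()
    return torch.mean(torch.abs(amp_a - amp_b))

a, b = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
print(freq_loss(a, b).item() >= 0)  # True
```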

Computer Graphics Forum (Pacific Graphics, 2023) (*oral)  (CCF-B)
[Paper]

FashionGAN: Display your fashion design using Conditional Generative Adversarial Nets

Yirui Cui, Qi Liu, Chengying Gao*, Zhuo Su

Computer Graphics Forum (Pacific Graphics, 2018) (*oral)  (CCF-B)
[Paper] [Code] [Dataset]

Automatic 3D Garment Fitting Based on Skeleton Driving

Haozhong Cai, Guangyuan Shi, Chengying Gao*, Dong Wang

Pacific-Rim Conference on Multimedia (PCM, 2018) (*oral)  (CCF-C)
[Paper]

Back to top

Multimedia Processing & 3D Rendering and Modeling

Multimedia Processing: generation and understanding of music and dance.
3D Rendering and Modeling: dynamic human reconstruction and neural rendering, fast fluid surface reconstruction based on the narrow-band method, and fabric modeling and rendering.


Efficient Integration of Neural Representations for Dynamic Humans

Wensheng Li, Lingzhe Zeng, Chengying Gao, Ning Liu*

Intro: While numerous studies have explored NeRF-based novel view synthesis for dynamic humans, they often require training that exceeds several hours. In this work, we introduce an approach for efficiently learning and integrating neural human representations. Specifically, we propose decomposing the high-dimensional multi-space feature volume into several feature planes and then utilizing matrix multiplication to explicitly establish the correlations between different planes. This enables the simultaneous optimization of their counterparts across all dimensions by optimizing interpolated features, efficiently integrating associated details and accelerating convergence. Additionally, we use a collaborative refinement process to iteratively enhance the canonical representation. By integrating multi-space representations, we further facilitate the co-optimization of time-dependent observations across multiple frames. Experiments demonstrate that our method achieves high-quality free-viewpoint renderings within about 5 minutes of optimization.
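
A rough sketch of the plane-decomposition idea, with shapes and the fusion rule assumed: features for a 3D point are interpolated from three axis-aligned planes and combined multiplicatively to correlate the planes.

```python
# Rough plane-decomposition sketch (shapes and multiplicative fusion assumed).
import torch
import torch.nn.functional as F

def sample_planes(planes: dict, pts: torch.Tensor) -> torch.Tensor:
    """planes: dict of (1, C, R, R) feature planes for 'xy', 'xz', 'yz';
    pts: (N, 3) coordinates in [-1, 1]. Returns (N, C) fused features."""
    coords = {'xy': pts[:, [0, 1]], 'xz': pts[:, [0, 2]], 'yz': pts[:, [1, 2]]}
    feats = []
    for k, plane in planes.items():
        grid = coords[k].view(1, -1, 1, 2)                 # (1, N, 1, 2) sample grid
        f = F.grid_sample(plane, grid, align_corners=True)  # (1, C, N, 1) interpolated
        feats.append(f[0, :, :, 0].t())                     # (N, C)
    return torch.stack(feats).prod(dim=0)  # multiply planes to correlate them

planes = {k: torch.randn(1, 16, 32, 32) for k in ('xy', 'xz', 'yz')}
print(sample_planes(planes, torch.rand(10, 3) * 2 - 1).shape)  # torch.Size([10, 16])
```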

IEEE Transactions on Visualization and Computer Graphics (TVCG, 2024)  (CAS Q1/CCF-A)
[Paper]

DanceComposer: Dance-to-Music Generation Using a Progressive Conditional Music Generator

Xiao Liang, Wensheng Li, Lifeng Huang and Chengying Gao

Intro: A wonderful piece of music is the essence and soul of dance, which motivates the study of automatic music generation for dance. To create appropriate music from dance, cross-modal correlations between dance and music, such as rhythm and style, should be considered. However, existing dance-to-music methods have difficulty achieving rhythmic alignment and stylistic matching simultaneously, and the diversity of generated samples is limited by the lack of available paired data. To address these issues, we propose DanceComposer, a novel dance-to-music framework that generates rhythmically and stylistically consistent multi-track music from dance videos. DanceComposer features a Progressive Conditional Music Generator (PCMG) that gradually incorporates rhythm and style constraints, enabling both rhythmic alignment and stylistic matching. To enhance style control, we introduce a Shared Style Module (SSM) that learns cross-modal features as stylistic constraints, which allows the PCMG to be trained on extensive music-only data and diversifies the generated pieces.

IEEE Transactions on Multimedia (TMM, 2024)  (CAS Q1/CCF-B)
[Paper]

PianoBART: Symbolic Piano Music Generation and Understanding with Large-Scale Pre-Training

Xiao Liang, Zijian Zhao, Weichao Zeng, Yutong He, Fupeng He, Yiyi Wang and Chengying Gao

Intro: Learning musical structures and composition patterns is necessary for both music generation and understanding, but current methods do not make uniform use of learned features to generate and comprehend music simultaneously. In this paper, we propose PianoBART, a pre-trained model that uses BART for both symbolic piano music generation and understanding. We devise a multi-level object selection strategy for different pre-training tasks of PianoBART, which can prevent information leakage or loss and enhance learning ability. The musical semantics captured in pre-training are fine-tuned for music generation and understanding tasks. Experiments demonstrate that PianoBART efficiently learns musical patterns and achieves outstanding performance in generating high-quality coherent pieces and comprehending music.

International Conference on Multimedia & Expo (ICME, 2024)  (CCF-B)

A Completely Parallel Surface Reconstruction Method for Particle-Based Fluids

Wencong Yang, Chengying Gao

Intro: In this paper, we first propose a fast, simple and highly accurate narrow-band method for fluid surfaces, which lets the surface reconstruction algorithm (such as marching cubes) process only the valid fluid surface area and thus avoids a large amount of useless computation. We then analyze the potential race conditions and conditional branching in the reconstruction process; by using a mutually exclusive prefix sum algorithm, the whole fluid surface reconstruction pipeline is fully parallelized, which greatly improves reconstruction efficiency.
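
The parallelization hinges on prefix-sum stream compaction; the serial NumPy stand-in below shows how an exclusive prefix sum over per-cell flags assigns each narrow-band cell a unique output slot, eliminating write races in a concurrent implementation.

```python
# Prefix-sum stream compaction (serial stand-in for the parallel GPU version).
import numpy as np

flags = np.array([0, 1, 1, 0, 1, 0, 0, 1])               # 1 = cell lies in the narrow band
offsets = np.concatenate(([0], np.cumsum(flags)[:-1]))   # exclusive prefix sum = output slot
compact = np.flatnonzero(flags)                          # cell indices gathered densely
print(offsets)  # [0 0 1 2 2 3 3 3]
print(compact)  # [1 2 4 7]
```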

Computer Graphics International (CGI, 2020)  (CCF-C)
[Paper]

Fully Automatic Algorithm for Yarn Model Generation

Zekun Zhang

[Introduction (PPT)]

Microscopic-Model-Based Real-Time Algorithm for Fabric Rendering

Xingrong Luo

[Introduction (PPT)]

Back to top