Geometry- and appearance-controlled full-body human image generation is an interesting but challenging task. Existing solutions are either unconditional or rely on coarse conditions (e.g., pose, text), and thus lack explicit geometry and appearance control of the body and garments. Sketching offers such editing ability and has been adopted in various sketch-based face generation and editing solutions. However, directly adapting sketch-based face generation to full-body generation often fails to produce high-fidelity and diverse results due to the high complexity and diversity of pose, body shape, and garment shape and texture. Recent geometrically controllable diffusion-based methods mainly rely on prompts to generate appearance, and it is hard for them to balance the realism of their results against faithfulness to the sketch when the input is coarse. This work presents Sketch2Human, the first system for controllable full-body human image generation guided by a semantic sketch (for geometry control) and a reference image (for appearance control). Our solution operates in the latent space of StyleGAN-Human, taking inverted geometry and appearance latent codes as input. Specifically, we present a sketch encoder trained with a large synthetic dataset sampled from StyleGAN-Human's latent space and supervised directly by sketches rather than real images. Considering the entangled information of partial geometry and texture in StyleGAN-Human and the absence of disentangled datasets, we design a novel training scheme that creates geometry-preserved and appearance-transferred training data to tune the generator for disentangled geometry and appearance control. Although our method is trained with synthetic data, it can also handle hand-drawn sketches. Qualitative and quantitative evaluations demonstrate the superior performance of our method over state-of-the-art methods.
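At inference time, the approach boils down to embedding the semantic sketch and the reference image into StyleGAN-Human's W+ space and style-mixing the two codes before decoding. The PyTorch sketch below is only a minimal illustration of that idea under stated assumptions: the names sketch_encoder, image_inverter, and generator are hypothetical handles rather than the released API, W+ codes are assumed to have shape [batch, num_layers, 512], and "mixing at layer k" is assumed to copy the first k layers from the geometry code and the remaining layers from the appearance code.

import torch

def style_mix(w_geom: torch.Tensor, w_app: torch.Tensor, k: int) -> torch.Tensor:
    # Assumed mixing convention: first k W+ layers from the geometry code,
    # remaining (appearance-controlling) layers from the appearance code.
    w_mix = w_app.clone()
    w_mix[:, :k] = w_geom[:, :k]
    return w_mix

@torch.no_grad()
def synthesize(ps_g, i_a, sketch_encoder, image_inverter, generator, k: int = 8):
    # Hypothetical modules: sketch_encoder maps the semantic sketch PS_g to a
    # geometry code w_g; image_inverter (e.g., a GAN-inversion encoder) maps the
    # reference image I_a to an appearance code w_a; generator is the tuned
    # StyleGAN-Human generator G(w; theta').
    w_g = sketch_encoder(ps_g)
    w_a = image_inverter(i_a)
    w_mix8 = style_mix(w_g, w_a, k)   # w_mix8 as in the pipeline figure
    return generator(w_mix8)          # final result I_syn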
An illustration of the training (right) and inference (left) pipelines of our method for full-body human image generation conditioned on a semantic sketch \(PS_g\) and a reference image \(I_a\). The training pipeline consists of two main modules: Sketch Image Inversion (right-top) and Body Generator Tuning (right-bottom). In the Sketch Image Inversion module, we first sample a latent code \(w_g\) to generate a training triplet (semantic sketch \(PS_g\), parsing map \(P_g\), sketch \(S_g\)) and then use these data to train a sketch encoder. In the Body Generator Tuning module, given an appearance code \(w_a\), we also sample a latent code \(w_g\) and prepare appearance-transferred (\(I_{mix6}\)) and geometry-preserved (\(I_{mix10}\)) training samples via style mixing at different layers, which are then used to fine-tune the generator \(G(w;\theta^{'})\). During inference, the sketch encoder first embeds \(PS_g\) into a latent code and mixes it with the appearance code derived from \(I_{a}\) to form \(w_{mix8}\). Given \(w_{mix8}\), \(G(w;\theta^{'})\) produces the final result \(I_{syn}\).
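To make the caption's training-sample construction concrete, the hedged sketch below reuses the style_mix helper from the previous snippet to build the appearance-transferred sample \(I_{mix6}\) and the geometry-preserved sample \(I_{mix10}\). The layer indices 6 and 10 follow the caption, while the mixing convention and the generator call signature are assumptions.

@torch.no_grad()
def make_tuning_samples(generator, w_g, w_a):
    # Appearance-transferred sample: keeps only the coarse (geometry) layers of
    # w_g, so the appearance is taken from w_a.
    i_mix6 = generator(style_mix(w_g, w_a, 6))
    # Geometry-preserved sample: keeps more layers of w_g, so the geometry of
    # the sampled code is retained while later layers still come from w_a.
    i_mix10 = generator(style_mix(w_g, w_a, 10))
    return i_mix6, i_mix10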
@article{qu2024sketch2human,
title={{Sketch2Human}: Deep human generation with disentangled geometry and appearance constraints},
author={Qu, Linzi and Shang, Jiaxiang and Ye, Hui and Han, Xiaoguang and Fu, Hongbo},
journal={IEEE Transactions on Visualization and Computer Graphics},
year={2024},
publisher={IEEE}
}