Datasheet for SynthForge: Synthesizing High-Quality Face Dataset with Controllable 3D Generative Models

1Mercedes-Benz Research & Development India,
2IIIT Hyderabad, India, 3Max Planck Institute for Intelligent Systems, Tübingen, Germany

Datasheet for SynthForge Dataset

Motivation

  1. For what purpose was the dataset created?

    The dataset was created to support the training of advanced generative models for facial analysis tasks, including facial recognition, expression analysis, and biometric authentication. The specific gap it aimed to fill was the lack of high-quality, annotated synthetic datasets that closely mimic real human facial features with minimal domain gaps. This was essential for developing models capable of performing accurately on real-world data using only synthetic training data.

  2. Who created the dataset (e.g., which team, research group) and on behalf of which entity (e.g., company, institution, organization)?

    This dataset was created by a collaboration of researchers from several institutions (Mercedes-Benz Research & Development India; IIIT Hyderabad; Max Planck Institute for Intelligent Systems).

  3. Who funded the creation of the dataset?

    This dataset was supported by Mercedes-Benz Research & Development India through access to resources such as GPU-enabled compute servers for training and experiments.

Composition

  1. What do the instances that comprise the dataset represent (e.g., documents, photos, people, countries)? Are there multiple types of instances (e.g., movies, users, and ratings; people and interactions between them; nodes and edges)? Please provide a description.

    The dataset comprises images of synthetically generated human heads along with synthetically generated annotations, stored as PNG, TXT, and JSON files.

  2. How many instances are there in total (of each type, if appropriate)?

    The dataset used for the experiments comprises 100k samples in the train set and 10k samples in the validation set. This amounts to 100k RGB head images at 512x512 resolution, 100k 512x512 color-coded semantic masks, 100k depth maps at 128x128 resolution, and 100k JSON files containing annotations for the 68 facial keypoints.
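
    For illustration, a minimal sketch of reading one sample follows. The directory layout, file names, and JSON schema used here are assumptions, not the published structure; adjust them to the actual Kaggle release.

        # Minimal sketch for loading one SynthForge sample. The folder names
        # (img/, seg/, depth/, kpts/) and the zero-padded naming scheme are
        # illustrative assumptions, not the released layout.
        import json
        from pathlib import Path

        import numpy as np
        from PIL import Image

        ROOT = Path("synthforge/train")  # hypothetical root after download

        def load_sample(idx: int):
            stem = f"{idx:06d}"  # assumed naming scheme
            rgb = np.asarray(Image.open(ROOT / "img" / f"{stem}.png"))      # 512x512x3 head image
            seg = np.asarray(Image.open(ROOT / "seg" / f"{stem}.png"))      # 512x512 color-coded mask
            depth = np.asarray(Image.open(ROOT / "depth" / f"{stem}.png"))  # 128x128 depth map
            with open(ROOT / "kpts" / f"{stem}.json") as f:
                kpts = json.load(f)                                         # 68 facial keypoints
            return rgb, seg, depth, kpts

        rgb, seg, depth, kpts = load_sample(0)
        print(rgb.shape, seg.shape, depth.shape)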

  3. Does the dataset contain all possible instances or is it a sample (not necessarily random) of instances from a larger set? If the dataset is a sample, then what is the larger set? Is the sample representative of the larger set (e.g., geographic coverage)? If so, please describe how this representativeness was validated/verified. If it is not representative of the larger set, please describe why not (e.g., to cover a more diverse range of instances, because instances were withheld or unavailable).

    Since the dataset is generated from the pipeline proposed in the paper, a significantly larger set can be generated with the provided tools; there is no theoretical limit on the number of samples the larger set could contain.

  4. What data does each instance consist of? “Raw” data (e.g., unprocessed text or images) or features? In either case, please provide a description.

    Each instance consists of an RGB image, a semantic mask, a depth map, and keypoint annotations. There is no separate "raw" format of the data.

  5. Is there a label or target associated with each instance? If so, please provide a description.

    There are multiple labels associated with each image instance: (i) the depth map, generated from the volumetric rendering stage at the end of the Next3D generator; (ii) the semantic labels, pixel-level descriptors of face parts such as skin, eyes, head, upper lip, lower lip, and nose; and (iii) the 68 facial landmarks along the face contour, eyes, nose, and mouth. These can be used to train multi-task networks on downstream tasks.
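
    As a quick sanity check, the landmarks can be overlaid on the RGB image. The JSON key ("keypoints") and the (x, y) list format below are assumptions; consult the annotation files in the release for the exact schema.

        # Sketch: overlaying the 68 facial landmarks on an RGB image.
        import json

        import matplotlib.pyplot as plt
        import numpy as np
        from PIL import Image

        img = np.asarray(Image.open("000000.png"))       # hypothetical sample image
        with open("000000.json") as f:
            ann = json.load(f)
        pts = np.asarray(ann["keypoints"], dtype=float)  # assumed shape (68, 2)

        plt.imshow(img)
        plt.scatter(pts[:, 0], pts[:, 1], s=8, c="lime")
        plt.axis("off")
        plt.show()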

  6. Is any information missing from individual instances? If so, please provide a description, explaining why this information is missing (e.g., because it was unavailable). This does not include intentionally removed information, but might include, e.g., redacted text.

    No. All information extracted from the proposed pipeline is present in the dataset and annotations.

  7. Are relationships between individual instances made explicit (e.g., users’ movie ratings, social network links)? If so, please describe how these relationships are made explicit.

    No; no such relationships are necessary in the proposed dataset.

  8. Are there recommended data splits (e.g., training, development/validation, testing)? If so, please provide a description of these splits, explaining the rationale behind them.

    Yes. We sample a 100k-sample training set and a 10k-sample validation set from the data generation pipeline for the experiments reported in the paper. Furthermore, more data can be sampled using the proposed approach.
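
    A minimal PyTorch wrapper over the two splits might look as follows; the split directory names and file layout are assumptions for illustration, not the released structure.

        # Sketch: wrapping the train/val splits in a PyTorch Dataset.
        from pathlib import Path

        from torch.utils.data import DataLoader, Dataset
        from torchvision.io import read_image

        class SynthForgeSplit(Dataset):
            def __init__(self, root: str, split: str = "train"):
                # "img" subfolder and split names are assumed, not the released layout
                self.imgs = sorted((Path(root) / split / "img").glob("*.png"))

            def __len__(self):
                return len(self.imgs)

            def __getitem__(self, i):
                img = read_image(str(self.imgs[i])).float() / 255.0
                return img  # extend with mask, depth, and keypoints as needed

        train_loader = DataLoader(SynthForgeSplit("synthforge", "train"), batch_size=32, shuffle=True)
        val_loader = DataLoader(SynthForgeSplit("synthforge", "val"), batch_size=32)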

  9. Are there any errors, sources of noise, or redundancies in the dataset? If so, please provide a description.

    Yes, there are some minor misalignments between the annotations and the generated RGB images, owing to the nature of the StyleGAN-based generator network used in the pipeline. The paper describes this issue and the steps taken to rectify these errors.

  10. Is the dataset self-contained, or does it link to or otherwise rely on external resources (e.g., websites, tweets, other datasets)? If it links to or relies on external resources, a) are there guarantees that they will exist, and remain constant, over time; b) are there official archival versions of the complete dataset (i.e., including the external resources as they existed at the time the dataset was created); c) are there any restrictions (e.g., licenses, fees) associated with any of the external resources that might apply to a dataset consumer? Please provide descriptions of all external resources and any restrictions associated with them, as well as links or other access points, as appropriate.

    The dataset is self-contained and can be used for training facial analysis models. However, the dataset generation framework relies on FLAME and Next3D. These resources are widely available on archival forums, as per the details outlined in the paper and the code repositories of our work.

  11. Does the dataset contain data that might be considered confidential (e.g., data that is protected by legal privilege or by doctor– patient confidentiality, data that includes the content of individuals’ non-public communications)? If so, please provide a description.

    No, since the dataset has been synthetically generated.

  12. Does the dataset contain data that, if viewed directly, might be offensive, insulting, threatening, or might otherwise cause anxiety? If so, please describe why.

    No, since the dataset has been synthetically generated and contains outputs from generative models that render head images from the provided geometry, expression parameters, and latent variables alone.

  13. Does the dataset identify any subpopulations (e.g., by age, gender)? If so, please describe how these subpopulations are identified and provide a description of their respective distributions within the dataset.

    No. The dataset has been generated by a generator model trained on the publicly available FFHQ dataset, which contains images from various demographic groups.

  14. Is it possible to identify individuals (i.e., one or more natural persons), either directly or indirectly (i.e., in combination with other data) from the dataset? If so, please describe how.

    No, since the dataset has been generated synthetically from a generative model.

  15. Does the dataset contain data that might be considered sensitive in any way (e.g., data that reveals race or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history)? If so, please provide a description.

    No. No biometric-level information is synthesized, so the samples do not carry identity-specific information.

Collection Process

  1. How was the data associated with each instance acquired? Was the data directly observable (e.g., raw text, movie ratings), reported by subjects (e.g., survey responses), or indirectly inferred/derived from other data (e.g., part-of-speech tags, model-based guesses for age or language)? If the data was reported by subjects or indirectly inferred/derived from other data, was the data validated/verified? If so, please describe how.

    The dataset was directly generated using a pre-trained generative model, which we modified to extract annotations and semantic-level information. The code and resources to generate more data are available through the project webpage and the paper.

  2. What mechanisms or procedures were used to collect the data (e.g., hardware apparatuses or sensors, manual human curation, software programs, software APIs)? How were these mechanisms or procedures validated?

    Please see the response to the previous question.

  3. If the dataset is a sample from a larger set, what was the sampling strategy (e.g., deterministic, probabilistic with specific sampling probabilities)?

    The data was generated by sampling uniformly from the latent variable distribution. The exact parameters are reported in the main paper, with a thorough discussion in the supplementary material.
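
    A hedged sketch of the sampling step follows; the latent dimensionality and the generator call are placeholders, not the actual Next3D interface, and the exact sampling parameters are those reported in the paper.

        # Sketch of uniform latent sampling, as described above.
        import torch

        LATENT_DIM = 512  # typical StyleGAN-family latent size; an assumption here

        def sample_latents(n: int, dim: int = LATENT_DIM) -> torch.Tensor:
            # Uniform sampling in the latent space, per the datasheet; verify the
            # exact distribution and range against the paper and supplementary.
            return torch.rand(n, dim) * 2.0 - 1.0  # uniform in [-1, 1]^dim

        z = sample_latents(4)
        # images, masks, depths = generator(z, flame_params)  # placeholder call
        print(z.shape)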

  4. Who was involved in the data collection process (e.g., students, crowdworkers, contractors) and how were they compensated (e.g., how much were crowdworkers paid)?

    Not applicable since the dataset was synthetically generated.

  5. Over what timeframe was the data collected? Does this timeframe match the creation timeframe of the data associated with the instances (e.g., recent crawl of old news articles)? If not, please describe the timeframe in which the data associated with the instances was created.

    The dataset was generated on a single A100 GPU with 6 parallel processes over a span of 2.5 hours. The provided code can be further optimized to enable better parallelization and multi-GPU support.
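
    A simple multi-process driver mirroring that setup could look as follows; generate_chunk is a placeholder for the actual generation worker, not code from the release.

        # Sketch: 6 parallel generation workers, as in the setup described above.
        from multiprocessing import Pool

        NUM_WORKERS = 6
        TOTAL = 100_000

        def generate_chunk(args):
            start, count = args
            # placeholder: load the generator and write samples [start, start + count)
            return count

        if __name__ == "__main__":
            chunk = TOTAL // NUM_WORKERS  # remainder handling omitted for brevity
            jobs = [(i * chunk, chunk) for i in range(NUM_WORKERS)]
            with Pool(NUM_WORKERS) as pool:
                done = sum(pool.map(generate_chunk, jobs))
            print(f"generated {done} samples")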

  6. Were any ethical review processes conducted (e.g., by an institutional review board)? If so, please provide a description of these review processes, including the outcomes, as well as a link or other access point to any supporting documentation.

    Not applicable since the dataset was synthetically generated.

  7. Did you collect the data from the individuals in question directly, or obtain it via third parties or other sources (e.g., websites)?

    Not applicable since the dataset was synthetically generated.

  8. Were the individuals in question notified about the data collection? If so, please describe (or show with screenshots or other information) how notice was provided, and provide a link or other access point to, or otherwise reproduce, the exact language of the notification itself.

    Not applicable since the dataset was synthetically generated.

  9. Did the individuals in question consent to the collection and use of their data? If so, please describe (or show with screenshots or other information) how consent was requested and provided, and provide a link or other access point to, or otherwise reproduce, the exact language to which the individuals consented.

    Not applicable since the dataset was synthetically generated.

  10. If consent was obtained, were the consenting individuals provided with a mechanism to revoke their consent in the future or for certain uses? If so, please provide a description, as well as a link or other access point to the mechanism (if appropriate).

    Not applicable since the dataset was synthetically generated.

  11. Has an analysis of the potential impact of the dataset and its use on data subjects (e.g., a data protection impact analysis) been conducted? If so, please provide a description of this analysis, including the outcomes, as well as a link or other access point to any supporting documentation.

    The potential impact and use cases of the generated dataset are thoroughly discussed in the paper, with supporting experiments.

Preprocessing/Cleaning/Labeling

  1. Was any preprocessing/cleaning/labeling of the data done (e.g., discretization or bucketing, tokenization, part-of-speech tagging, SIFT feature extraction, removal of instances, processing of missing values)? If so, please provide a description. If not, you may skip the remaining questions in this section.

    The dataset was directly generated using a pre-trained generative model, which we modified to extract annotations and semantic-level information. The code and resources to generate data are available through the project webpage and the paper, along with thorough discussion of the pre- and post-processing of the dataset.

  2. Was the “raw” data saved in addition to the preprocessed/cleaned/labeled data (e.g., to support unanticipated future uses)? If so, please provide a link or other access point to the “raw” data.

    Please refer to the previous response.

  3. Is the software that was used to preprocess/clean/label the data available? If so, please provide a link or other access point.

    Please refer to the previous response.

Uses

  1. Has the dataset been used for any tasks already? If so, please provide a description.

    Yes, the dataset has been used to benchmark performance on downstream facial analysis tasks such as semantic segmentation, depth estimation, and facial landmark estimation, in both single-task and multi-task frameworks. The code to reproduce the experiments is available on the project page, along with detailed analysis in the paper.
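
    As an illustration of the multi-task setting, a shared encoder with per-task heads might look as follows; the architecture and the 19-class count are illustrative assumptions, not the paper's exact model.

        # Sketch: shared encoder with segmentation, depth, and landmark heads.
        import torch
        import torch.nn as nn

        class MultiTaskNet(nn.Module):
            def __init__(self, num_classes: int = 19, num_kpts: int = 68):
                super().__init__()
                self.num_kpts = num_kpts
                self.encoder = nn.Sequential(
                    nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                )
                self.seg = nn.Conv2d(64, num_classes, 1)  # per-pixel class logits
                self.depth = nn.Conv2d(64, 1, 1)          # per-pixel depth
                self.kpts = nn.Sequential(
                    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                    nn.Linear(64, num_kpts * 2),          # (x, y) per landmark
                )

            def forward(self, x):
                f = self.encoder(x)
                return self.seg(f), self.depth(f), self.kpts(f).view(-1, self.num_kpts, 2)

        seg, depth, kpts = MultiTaskNet()(torch.randn(2, 3, 512, 512))
        print(seg.shape, depth.shape, kpts.shape)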

  2. Is there a repository that links to any or all papers or systems that use the dataset? If so, please provide a link or other access point.

    Dataset usage is tracked via Kaggle; links are available on the dataset page and the project page.

  3. What (other) tasks could the dataset be used for?

    The dataset can be used for human head re-enactment, face parsing, and 3D reconstruction tasks on head avatar models.

  4. Is there anything about the composition of the dataset or the way it was collected and preprocessed/cleaned/labeled that might impact future uses? For example, is there anything that a dataset consumer might need to know to avoid uses that could result in unfair treatment of individuals or groups (e.g., stereotyping, quality of service issues) or other risks or harms (e.g., legal risks, financial harms)? If so, please provide a description. Is there anything a dataset consumer could do to mitigate these risks or harms?

    Not applicable, since the dataset was generated synthetically. However, the generative model used in the pipeline can be replaced with different approaches that adhere to the guidelines mentioned in the method section of the paper.

  5. Are there tasks for which the dataset should not be used? If so, please provide a description.

    Since the dataset was generated fairly and synthetically, we allow full use of the dataset and the generative methods for academic, non-commercial research, as per the license guidelines.

Distribution

  1. Will the dataset be distributed to third parties outside of the entity (e.g., company, institution, organization) on behalf of which the dataset was created? If so, please provide a description.

    Yes, the dataset is hosted on Kaggle as a public dataset and is available to third parties for academic, non-commercial usage.

  2. How will the dataset will be distributed (e.g., tarball on website, API, GitHub)? Does the dataset have a digital object identifier (DOI)?

    The dataset is distributed via Kaggle, hosted at https://www.kaggle.com/datasets/shubhamdokania/synthforgedata/, and has an associated DOI: 10.34740/kaggle/dsv/8660031.

  3. When will the dataset be distributed?

    The dataset is already publicly available on Kaggle.

  4. Will the dataset be distributed under a copyright or other intellectual property (IP) license, and/or under applicable terms of use (ToU)? If so, please describe this license and/or ToU, and provide a link or other access point to, or otherwise reproduce, any relevant licensing terms or ToU, as well as any fees associated with these restrictions.

    The dataset is licensed under CC BY-NC-SA 4.0, which allows non-commercial use of the dataset; we encourage further research using the proposed pipeline.

  5. Have any third parties imposed IP-based or other restrictions on the data associated with the instances? If so, please describe these restrictions, and provide a link or other access point to, or otherwise reproduce, any relevant licensing terms, as well as any fees associated with these restrictions.

    No such restrictions are imposed.

  6. Do any export controls or other regulatory restrictions apply to the dataset or to individual instances? If so, please describe these restrictions, and provide a link or other access point to, or otherwise reproduce, any supporting documentation.

    No such restrictions are imposed.

Maintenance

  1. Who will be supporting/hosting/maintaining the dataset?

    The dataset will continue to be hosted and maintained by the authors on the Kaggle platform. Should the terms of the hosting platform change, the dataset will remain publicly available, with updates posted on all project pages, repositories, etc.

  2. How can the owner/curator/manager of the dataset be contacted (e.g., email address)?

    Contact information for the authors is available on the project webpage, in the paper, and in the code repositories.

  3. Is there an erratum? If so, please provide a link or other access point.

    No erratum is currently available or required; the dataset is documented in the paper.

  4. Will the dataset be updated (e.g., to correct labeling errors, add new instances, delete instances)? If so, please describe how often, by whom, and how updates will be communicated to dataset consumers (e.g., mailing list, GitHub).

    The dataset generated to facilitate the experiments reported in the paper will not be updated or changed. However, any changes to the code for generating the dataset will be duly announced on the GitHub repository.

  5. If the dataset relates to people, are there applicable limits on the retention of the data associated with the instances (e.g., were the individuals in question told that their data would be retained for a fixed period of time and then deleted)? If so, please describe these limits and explain how they will be enforced.

    Only synthetically generated information is available in the dataset so no such constraints are applicable.

  6. Will older versions of the dataset continue to be supported/hosted/maintained? If so, please describe how. If not, please describe how its obsolescence will be communicated to dataset consumers.

    Please refer to the previous responses: the dataset will remain hosted on Kaggle and will not be changed.

  7. If others want to extend/augment/build on/contribute to the dataset, is there a mechanism for them to do so? If so, please provide a description. Will these contributions be validated/verified? If so, please describe how. If not, why not? Is there a process for communicating/distributing these contributions to dataset consumers? If so, please provide a description.

    Yes, the dataset is released in the open domain, allowing academic non-commercial research; others can extend or build on it using the released generation code.