denizZz009/DatasetCreator
Dataset Creator with GPT-Neo 1.3B

Q&A and Model Fine-Tuning with GPT-Neo

This project develops a text generation and question-answering system using the GPT-Neo 1.3B model. It collects data by generating questions and answers around a specific topic, then uses that data to fine-tune the model and improve its performance.

Features

  • Question Generation: Random questions are generated on a chosen topic.
  • Answer Generation: The model generates meaningful answers to the generated questions.
  • Saving Data: Questions, answers, and their timestamps are saved to files.
  • Model Fine-Tuning: The model is fine-tuned at regular intervals with the collected data.
  • GPU Support: Model training can run on a GPU when one is available.
  • Data Saving: Learned data is saved at regular intervals and can be reused for subsequent fine-tuning.
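The repository's exact code is not shown here, but the data-saving step above can be sketched as follows. This is an illustrative sketch, not the project's actual implementation; the function name and the `qa_dataset.json` file name are assumptions.

```python
import json
import time

def save_qa_records(records, path="qa_dataset.json"):
    """Append (question, answer) pairs to a JSON file, stamping each with the current time."""
    stamped = [
        {"question": q, "answer": a, "timestamp": time.strftime("%Y-%m-%d %H:%M:%S")}
        for q, a in records
    ]
    # Load any previously saved records so each run appends rather than overwrites.
    try:
        with open(path, "r", encoding="utf-8") as f:
            data = json.load(f)
    except (FileNotFoundError, json.JSONDecodeError):
        data = []
    data.extend(stamped)
    with open(path, "w", encoding="utf-8") as f:
        json.dump(data, f, ensure_ascii=False, indent=2)
    return len(data)  # total records now on disk
```

Appending to a single JSON file keeps the growing dataset in one place, which is what the periodic fine-tuning step reads back later.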

Requirements

The following Python libraries are needed for the project to work:

  • transformers (Hugging Face)
  • torch (PyTorch)
  • os, json, random, time (Python standard libraries)

You can use the following command to install these libraries:

pip install transformers torch
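Once the libraries are installed, question and answer generation can be sketched roughly as below. The question templates and function names are assumptions for illustration; only the model identifier `EleutherAI/gpt-neo-1.3B` comes from the Hugging Face hub. Note that the first call to `generate_answer` downloads the full 1.3B-parameter model, which is several gigabytes.

```python
import random

# Illustrative templates; the project's real question-generation logic may differ.
QUESTION_TEMPLATES = [
    "What is {topic}?",
    "How does {topic} work?",
    "Why is {topic} important?",
]

def build_question(topic, rng=random):
    """Pick a random template and fill in the topic."""
    return rng.choice(QUESTION_TEMPLATES).format(topic=topic)

def generate_answer(question, max_new_tokens=60):
    """Generate an answer with GPT-Neo 1.3B (large download on first run)."""
    from transformers import pipeline  # imported lazily because the model is heavy
    generator = pipeline("text-generation", model="EleutherAI/gpt-neo-1.3B")
    out = generator(
        f"Question: {question}\nAnswer:",
        max_new_tokens=max_new_tokens,
        do_sample=True,
        temperature=0.7,
    )
    return out[0]["generated_text"]
```

The `pipeline` helper automatically uses a GPU if PyTorch detects one, which matches the GPU support mentioned in the features.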

About

This project is an NLP application that automatically generates question-answer pairs and fine-tunes the GPT-Neo 1.3B model on them. Continuous learning improves the model's accuracy over time. It was developed with Hugging Face Transformers and PyTorch.
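The periodic fine-tuning described above could look roughly like this sketch using the Transformers `Trainer` API. The function names, the `qa_dataset.json` file, and the prompt format are assumptions, not the repository's actual code; fine-tuning a 1.3B-parameter model realistically requires a GPU with substantial memory.

```python
import json

def qa_to_training_texts(records):
    """Flatten saved Q&A records into the prompt format used at generation time."""
    return [f"Question: {r['question']}\nAnswer: {r['answer']}" for r in records]

def fine_tune(json_path="qa_dataset.json", output_dir="gpt-neo-finetuned", epochs=1):
    """Fine-tune GPT-Neo 1.3B on the collected Q&A data (GPU strongly recommended)."""
    import torch
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling,
                              Trainer, TrainingArguments)

    with open(json_path, encoding="utf-8") as f:
        records = json.load(f)
    texts = qa_to_training_texts(records)

    tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-1.3B")
    tokenizer.pad_token = tokenizer.eos_token  # GPT-Neo defines no pad token by default
    model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-1.3B")

    # A plain list of feature dicts works as a map-style dataset for the Trainer.
    encodings = tokenizer(texts, truncation=True, max_length=512)
    dataset = [{"input_ids": ids} for ids in encodings["input_ids"]]

    trainer = Trainer(
        model=model,
        args=TrainingArguments(
            output_dir=output_dir,
            num_train_epochs=epochs,
            per_device_train_batch_size=1,
            fp16=torch.cuda.is_available(),  # use half precision when a GPU is present
        ),
        train_dataset=dataset,
        # mlm=False gives causal-LM labels (inputs shifted by one token).
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()
    trainer.save_model(output_dir)
```

Training on the same "Question: … Answer: …" format used at generation time keeps the fine-tuning data consistent with the prompts the model sees later.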
