This project aims to develop a text-generation and question-answering system using the GPT-Neo 1.3B model. It collects data by generating questions and answers on a specific topic, then uses that data to fine-tune the model and improve its performance.
- Question Generation: Random questions are generated on a specific topic.
- Answer Generation: The model generates meaningful answers to the generated questions.
- Saving Data: Questions, answers, and their timestamps are saved to files.
- Model Fine-Tuning: The model is fine-tuned at regular intervals with the data collected from the user.
- GPU Support: GPU acceleration is supported during model training.
- Data Saving: Learned data is saved at regular intervals and can be reused for subsequent fine-tuning runs.
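The data-saving step above can be sketched as follows. This is a minimal illustration, not the project's actual implementation: the file name `qa_data.jsonl`, the `save_qa` helper, and the record layout are all assumptions; each question/answer pair is appended as one JSON line together with a timestamp, so the file can later be read back for fine-tuning.

```python
import json
import time

def save_qa(question, answer, path="qa_data.jsonl"):
    # Hypothetical helper: append one Q/A record with a timestamp as a
    # single JSON line (JSONL), so it can be streamed back for fine-tuning.
    record = {
        "question": question,
        "answer": answer,
        "timestamp": time.strftime("%Y-%m-%d %H:%M:%S"),
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
    return record

rec = save_qa("What is fine-tuning?", "Adapting a pretrained model to new data.")
```

Appending JSON lines rather than rewriting one large JSON file keeps each save cheap and makes the file robust to interruption mid-run.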
The following Python libraries are needed for the project to work:

- transformers (Hugging Face)
- torch (PyTorch)
- os, json, random, time (Python standard library)
You can use the following command to install the third-party libraries (the standard-library modules ship with Python and need no installation):

```bash
pip install transformers torch
```
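After installation, a quick sanity check can confirm that both libraries import and whether PyTorch can see a CUDA-capable GPU, which matters for the GPU-supported training step mentioned above:

```python
import torch
import transformers

# Print installed versions and GPU visibility.
print("transformers:", transformers.__version__)
print("torch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
```

If `CUDA available` prints `False`, training will fall back to the CPU, which is much slower for a 1.3B-parameter model.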