Dear Lal,
I took the time to look a little deeper into the matter. Among other things, I installed PrivateGPT on my personal computer and tried it out. Unfortunately, the installation was not as straightforward as I had hoped: I had to resolve various dependency conflicts, install additional software and put in more time and effort than expected. In the end, I managed to get the model running, train it on some test data and play around with it.
I have to admit that I was not particularly impressed. I had to wait an eternity for the model to answer a question. This is probably partly because my computer's processor is not very powerful and I would simply need more processing power for a faster response. On average, I waited between 5 and 20 minutes for an answer, depending on the number of words in my original question. Sometimes I even cancelled the query because it took too long. The quality of the answers was also disappointing: they were always choppy and incomplete. Apart from the incompleteness, though, the fragments were at least alright in terms of content.
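To give you a feel for why the waiting times on an ordinary processor get so long, here is a rough back-of-the-envelope sketch. All the token counts and speeds in it are assumptions chosen for illustration, not measurements from my machine:

```python
# Rough estimate of how long a CPU-only answer takes.
# The speeds below are assumed values for a weak desktop CPU running a
# fairly large local model; they are illustrative, not measured.

prompt_tokens = 200       # question plus any context the model has to read
answer_tokens = 300       # length of the generated answer
prefill_tok_per_s = 5.0   # assumed speed for reading the prompt
decode_tok_per_s = 0.5    # assumed speed for generating new tokens

total_seconds = (prompt_tokens / prefill_tok_per_s
                 + answer_tokens / decode_tok_per_s)
print(f"estimated wait: {total_seconds / 60:.1f} minutes")  # ~10.7 minutes
```

The generation speed dominates, which is why a longer question (and therefore a longer answer) quickly pushes the wait into the range I experienced.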
I then researched other models and tried to understand their inner workings better. What I have learnt is that it is one thing to try out such a large language model on your private computer, but quite another to make it publicly available, as ChatGPT does. Key factors to consider include:
- resource requirements (memory, processing power)
- security
- cost management
- testing and quality assurance
- maintenance and updates
LLMs are gigantic data files with billions of parameters that need a lot of memory. To process a question in a short time, you need very powerful computers that consume a correspondingly large amount of energy. Realizing such a project is by no means trivial and requires careful planning so that you are not left sitting on the costs in the end.
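Just to illustrate the memory side of it, here is a small back-of-the-envelope sketch; the 7-billion-parameter size and the byte widths per parameter are assumptions for illustration, not figures for any particular model:

```python
# Back-of-the-envelope memory estimate for holding a model's weights.

def model_memory_gb(num_params: float, bytes_per_param: float) -> float:
    """Approximate memory needed just for the weights, in gigabytes."""
    return num_params * bytes_per_param / 1e9

params = 7e9  # assumed model size: 7 billion parameters

print(f"float32 : {model_memory_gb(params, 4):.0f} GB")    # ~28 GB
print(f"float16 : {model_memory_gb(params, 2):.0f} GB")    # ~14 GB
print(f"4-bit   : {model_memory_gb(params, 0.5):.1f} GB")  # ~3.5 GB
```

And that is only the weights for a comparatively small model; serving many users at once, with larger models, multiplies the hardware and energy requirements accordingly.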
The good news, however, is that there is a lot happening in the field of LLMs. Improvements are constantly being made and new approaches to more efficient models are being introduced. I am confident that in the coming years the technology will have matured to the point where it will be easier to solve the problems I have mentioned. Then you could train a puredhammaGPT with all your posts and make it available to the world. I will of course keep you updated!