Inclusion: A Concept Too Young for Artificial Intelligence
Di Tore, Stefano; Di Tore, Pio Alfredo; Bilotti, Umberto; Sibilio, Maurizio
2024
Abstract
We attempted to prompt a GPT system to generate a response to a specific prompt and to detail the steps the system followed to produce that response. For this purpose, we employed the open-source toolkit GPT4All, using the open-source quantized LLaMA 7B language model. The pretrained model was specialized on a series of thematic texts using the open-source LangChain library. The choice of toolkit, model, and libraries was determined by two factors: 1) the intention to use fully open-source toolkits and models, and 2) the necessity of using toolkits and models capable of running on consumer-grade CPUs within our available resources. This work focuses on the concepts of prediction, bias, and explainability that motivated the aforementioned experiment.
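
The abstract does not specify how the model was specialized on the thematic texts; the following is a minimal sketch, assuming a retrieval-augmented setup in which LangChain indexes the texts and feeds them as context to a locally run quantized LLaMA 7B model via GPT4All. File paths, the embedding model, chunking parameters, and the example query are illustrative assumptions, not the authors' actual configuration.

# Minimal sketch (assumed setup, not the authors' pipeline): ground a local
# GPT4All/LLaMA 7B model on thematic texts via LangChain retrieval.
from langchain.llms import GPT4All
from langchain.document_loaders import TextLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import FAISS
from langchain.chains import RetrievalQA

# Load the thematic texts and split them into retrievable chunks
# (file name and chunk sizes are hypothetical).
docs = TextLoader("thematic_texts.txt").load()
chunks = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50).split_documents(docs)

# Index the chunks with a small local embedding model (assumed choice).
embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
index = FAISS.from_documents(chunks, embeddings)

# Run the quantized LLaMA 7B model locally on a consumer-grade CPU
# through GPT4All (model path is a hypothetical placeholder).
llm = GPT4All(model="./models/ggml-llama-7b-q4_0.bin")

# Answer a prompt using the indexed thematic texts as context.
qa = RetrievalQA.from_chain_type(llm=llm, chain_type="stuff", retriever=index.as_retriever())
print(qa.run("How is the concept of inclusion defined in the indexed texts?"))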


