Build a Free and Easy Prompter (like ChatGPT) with Hugging Face Models in Just 8 Lines of Code 🚀💬🤖

Jagroop Singh - Dec 2 '23 - - Dev Community

Before we begin writing code, let's define prompting. 🤔📝

In the context of using models such as ChatGPT, prompting refers to providing a specific input or query to the model in order for it to generate the desired response.

Users can enter a prompt or set of instructions, and the model will generate text based on its prior knowledge and understanding of language patterns.

Prompting is a common method for utilising language model capabilities for a variety of tasks such as content creation 📝, creative writing 🖋️, code generation 💻, and more! 🌐

What tools we are using:

- Hugging Face pretrained model (more specifically, Falcon)
- Python
- Langchain
- Google Colab


So, let's get started! 🚀

To begin, go to Google Colab Notebook 📒 and create a new notebook that looks something like:

Homepage of Google Colab

After that,

Create new notebook

After that, this page will open; here we can rename our notebook to prompter.ipynb

Renaming the notebook

Now, let's dive into coding the following 8 lines of code: 💻👩‍💻🚀

Step 1:
Install the relevant libraries:

!pip install langchain
!pip install huggingface_hub

Step 2:
Let's add our Hugging Face token as an environment variable:

import os
os.environ["HUGGINGFACEHUB_API_TOKEN"] = "YOUR_TOKEN"

You can easily get your token from the Hugging Face website:

  1. Go to the official Hugging Face website at https://huggingface.co/ and sign up or sign in accordingly.

  2. Go to the Settings page and select the Access Tokens tab, where you will find the option to create a token.

Access Token

  3. Create the token and replace YOUR_TOKEN with it.
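As an alternative to pasting the token directly into the notebook (where it can end up in shared copies), you can prompt for it at runtime. A small sketch, assuming you want this pattern; ensure_hf_token is just an illustrative helper name, not part of any library:

```python
import os
from getpass import getpass

def ensure_hf_token() -> str:
    """Return the Hugging Face token, prompting for it only when it is
    not already present in the environment."""
    if "HUGGINGFACEHUB_API_TOKEN" not in os.environ:
        os.environ["HUGGINGFACEHUB_API_TOKEN"] = getpass("Hugging Face token: ")
    return os.environ["HUGGINGFACEHUB_API_TOKEN"]
```

In Colab this pops up a hidden input box, so the token never appears in the saved notebook.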

Step 3:
Import HuggingFaceHub from langchain:

from langchain import HuggingFaceHub
  1. Initialise your large language model (LLM):
llm = HuggingFaceHub(repo_id="tiiuae/falcon-7b-instruct", model_kwargs={"temperature":0.6})

Here I used the tiiuae/falcon-7b-instruct model; plenty of other text-generation models are available, which you can browse at this link:
https://huggingface.co/models?pipeline_tag=text-generation

The temperature is set to 0.6, a mild value that balances originality and focus in the generated text. By adjusting the temperature, you can tune the model's output toward more diversity or more coherence, depending on your needs.

Besides temperature, other generation parameters are also available for tuning the model's output.
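To build intuition for what temperature actually does, here is a minimal sketch (plain Python, not LangChain code) of how it rescales the model's next-token probabilities before sampling:

```python
import math

def apply_temperature(logits, temperature):
    """Softmax over logits divided by temperature: low values sharpen
    the distribution, high values flatten it."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]                  # toy next-token scores
focused = apply_temperature(logits, 0.2)  # low temperature: top token dominates
diverse = apply_temperature(logits, 1.5)  # high temperature: flatter distribution
```

At temperature 0.2 the top token takes almost all of the probability mass, while at 1.5 the lower-scoring tokens get a real chance of being sampled, which is why higher temperatures read as more "creative".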

  2. Let's test and use the model:
response = llm("Explain Large Language Models ?")
print(response)

And this results in:

Large Language Models are computational models that are designed to process large amounts of natural language data. These models are used to understand complex language, such as detecting subtle differences between different dialects or understanding the structure of sentences in different languages. Large Language Models are typically composed of neural networks, which are used to extract features from text and generate output. The output of these models can be used to perform tasks such as machine translation, summarization, and natural language processing.


Finally, in just 8 lines of code, we have created our own ChatGPT! 🎉💻🤖
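For reference, the whole tutorial can also be wrapped into a single reusable helper. This is just a sketch: build_prompter is an illustrative name of my own, and it assumes langchain and huggingface_hub are installed and that you pass a valid token:

```python
import os

def build_prompter(token: str,
                   repo_id: str = "tiiuae/falcon-7b-instruct",
                   temperature: float = 0.6):
    """Wire the eight lines of the tutorial into one function."""
    os.environ["HUGGINGFACEHUB_API_TOKEN"] = token
    # Lazy import so the missing dependency fails here, with a clear error
    from langchain import HuggingFaceHub
    return HuggingFaceHub(repo_id=repo_id,
                          model_kwargs={"temperature": temperature})

# Usage (needs a real token and network access):
# llm = build_prompter("YOUR_TOKEN")
# print(llm("Explain Large Language Models ?"))
```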

Check out my results:

My Prompt one

Itaewon Class is my favorite K-Drama. My favorite singer, Kim Taehyung, also sang a song for this K-Drama ["Sweet Night" ~ Kim Taehyung].

Some more results:

My Prompt Two

That's all for this blog. You can also fine-tune this model, or swap in different pre-trained models to create personalized ML/AI applications. 🤖🧠✨
