Before we begin writing code, let's define prompting. 🤔📝
In the context of using models such as ChatGPT, prompting refers to providing a specific input or query to the model in order for it to generate the desired response.
Users can enter a prompt or set of instructions, and the model will generate text based on its prior knowledge and understanding of language patterns.
Prompting is a common method for utilising language model capabilities for a variety of tasks such as content creation 📝, creative writing 🖋️, code generation 💻, and more! 🌐
What tools are we using:
- A Hugging Face pretrained model (more specifically, Falcon)
- Python
- Langchain
- Google Colab
So, let's get started! 🚀
To begin, go to Google Colab 📒 and create a new notebook.
Once the notebook opens, we can rename it to prompter.ipynb.
Now, let's dive into the following 8 lines of code: 💻👩💻🚀
Step 1:
Install the relevant libraries:
!pip install langchain
!pip install huggingface_hub
Step 2:
Let's add our Hugging Face token as an environment variable:
import os
os.environ["HUGGINGFACEHUB_API_TOKEN"] = "YOUR_TOKEN"
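If you'd rather not paste the token directly into the notebook, here is a small sketch that reads it from the environment first and only prompts when it is missing (note: `set_hf_token` is my own hypothetical helper, not part of LangChain or Hugging Face):

```python
import os
from getpass import getpass

# Hypothetical helper (not a LangChain/Hugging Face API): reuse an
# existing environment variable, and only prompt interactively when
# the token has not been set yet.
def set_hf_token(var="HUGGINGFACEHUB_API_TOKEN"):
    if not os.environ.get(var):
        os.environ[var] = getpass(f"Enter your {var}: ")
    return os.environ[var]
```

This keeps the token out of the notebook's source, which matters if you later share the notebook publicly.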
You can easily get a token from the Hugging Face website:
1. Go to the official Hugging Face website at https://huggingface.co/ and sign up or sign in accordingly.
2. Go to the Settings page and select the Access Tokens tab, where you will see the option to create a token.
3. Create the token and substitute it for YOUR_TOKEN above.
Step 3:
Import HuggingFaceHub from langchain:
from langchain import HuggingFaceHub
- Initialise your large language model (LLM):
llm = HuggingFaceHub(repo_id="tiiuae/falcon-7b-instruct", model_kwargs={"temperature":0.6})
Here I used the tiiuae/falcon-7b-instruct model. There are plenty of other models available; you can browse them at this link:
https://huggingface.co/models?pipeline_tag=text-generation
The temperature is set to 0.6, which is considered mild. In the generated text, it signifies a balance of originality and focus. By adjusting the temperature, you can fine-tune the language model's output based on your individual needs for diversity and coherence.
Besides temperature, other parameters are also available for tuning the model's output.
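To build intuition for what temperature actually does, here is a toy sketch (the scores below are made up for illustration): sampling probabilities come from a softmax over the model's scores divided by the temperature, so a low temperature sharpens the distribution toward the top token while a high temperature flattens it.

```python
import math

# Illustrative only: how temperature reshapes a toy distribution
# over three candidate next tokens (the scores are invented).
def softmax_with_temperature(scores, temperature):
    scaled = [s / temperature for s in scores]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

scores = [2.0, 1.0, 0.5]
low = softmax_with_temperature(scores, 0.2)   # sharper: top token dominates
high = softmax_with_temperature(scores, 1.5)  # flatter: more diverse picks
```

With these numbers, the low-temperature distribution puts almost all of its mass on the first token, while the high-temperature one spreads mass across all three, which is why higher temperatures read as "more creative".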
- Let's test and use the model:
response = llm("Explain Large Language Models ?")
print(response)
This results in:
Large Language Models are computational models that are designed to process large amounts of natural language data. These models are used to understand complex language, such as detecting subtle differences between different dialects or understanding the structure of sentences in different languages. Large Language Models are typically composed of neural networks, which are used to extract features from text and generate output. The output of these models can be used to perform tasks such as machine translation, summarization, and natural language processing.
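Instead of hardcoding the question, you can template your prompts. Here is a minimal sketch using plain string formatting (`build_prompt` is my own helper; LangChain also ships a `PromptTemplate` class for the same job):

```python
# Hypothetical helper: build a reusable prompt from a topic and a style.
def build_prompt(topic, style="concise"):
    return f"Explain {topic} in a {style} way."

prompt = build_prompt("Large Language Models")
# response = llm(prompt)  # uncomment once llm is initialised as in Step 3
```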
Finally, in just 8 lines of code, we have created our own ChatGPT! 🎉💻🤖
Check out my results:
Itaewon Class is my favorite KDrama. Also, my favorite singer Kim Taehyung sang a song in this KDrama ["Sweet Night" ~ Kim Taehyung].
Here are some more results:
That's all in this blog. One can also fine-tune this model or even use different pre-trained models and create personalized ML/AI models. 🤖🧠✨