Users expect modern chatbot UIs to let them easily interact with individual chat messages: for example, users might want to retry message generations, undo messages, or click on a like/dislike button to upvote or downvote a generated message.
Thankfully, the Gradio Chatbot exposes three events, `.retry`, `.undo`, and `.like`, to let you build this functionality into your application. As an application developer, you can attach functions to any of these events, allowing you to run arbitrary Python code, e.g. when a user interacts with a message.
In this demo, we'll build a UI that implements these events. You can see our finished demo deployed on Hugging Face Spaces here:
Tip: `gr.ChatInterface` automatically uses the `.retry` and `.undo` events, so it's best to start there in order to get a fully working application quickly.
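For instance, a minimal `gr.ChatInterface` app looks something like this (the `echo` function below is just a placeholder standing in for a real model call):

```python
import gradio as gr

def echo(message, history):
    # Placeholder chat function; swap in your own model call here.
    return f"You said: {message}"

# gr.ChatInterface builds the chat UI and hooks up retry/undo for you.
demo = gr.ChatInterface(fn=echo, type="messages")

if __name__ == "__main__":
    demo.launch()
```

In the rest of this guide, though, we'll wire the events up manually with `gr.Blocks` so you can see how they work.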
First, we'll build the UI without handling these events, then add them one at a time. We'll use the Hugging Face `InferenceClient` in order to get started without setting up any API keys.
This is what the first draft of our application looks like:
```python
from huggingface_hub import InferenceClient
import gradio as gr

client = InferenceClient()

def respond(
    prompt: str,
    history,
):
    if not history:
        history = [{"role": "system", "content": "You are a friendly chatbot"}]
    history.append({"role": "user", "content": prompt})

    yield history

    response = {"role": "assistant", "content": ""}
    for message in client.chat_completion(
        history,
        temperature=0.95,
        top_p=0.9,
        max_tokens=512,
        stream=True,
        model="HuggingFaceH4/zephyr-7b-beta",
    ):
        response["content"] += message.choices[0].delta.content or ""
        yield history + [response]

with gr.Blocks() as demo:
    gr.Markdown("# Chat with Hugging Face Zephyr 7b 🤗")
    chatbot = gr.Chatbot(
        label="Agent",
        type="messages",
        avatar_images=(
            None,
            "https://em-content.zobj.net/source/twitter/376/hugging-face_1f917.png",
        ),
    )
    prompt = gr.Textbox(max_lines=1, label="Chat Message")
    prompt.submit(respond, [prompt, chatbot], [chatbot])
    prompt.submit(lambda: "", None, [prompt])

if __name__ == "__main__":
    demo.launch()
```
Our undo event will populate the textbox with the previous user message and also remove all subsequent assistant responses.
In order to know the index of the last user message, we can pass `gr.UndoData` to our event handler function like so:
```python
def handle_undo(history, undo_data: gr.UndoData):
    return history[:undo_data.index], history[undo_data.index]['content']
```
We then pass this function to the `.undo` event!

```python
chatbot.undo(handle_undo, chatbot, [chatbot, prompt])
```
You'll notice that every bot response now has an "undo" icon you can use to undo the response.
Tip: You can also access the content of the user message with `undo_data.value`
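If you'd rather use that property, a slightly simpler version of the handler (same wiring, just reading the message content from `undo_data.value` instead of indexing into the history) could look like this:

```python
def handle_undo(history, undo_data: gr.UndoData):
    # undo_data.index points at the undone user message;
    # undo_data.value holds that message's content directly.
    return history[:undo_data.index], undo_data.value
```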
The retry event will work similarly. We'll use `gr.RetryData` to get the index of the previous user message and remove all the subsequent messages from the history. Then we'll use the `respond` function to generate a new response. We could also get the previous prompt via the `value` property of `gr.RetryData`.
```python
def handle_retry(history, retry_data: gr.RetryData):
    new_history = history[:retry_data.index]
    previous_prompt = history[retry_data.index]['content']
    yield from respond(previous_prompt, new_history)

...

chatbot.retry(handle_retry, chatbot, [chatbot])
```
You'll see that the bot messages now have a "retry" icon.
Tip: The Hugging Face inference API caches responses, so in this demo, the retry button will not generate a new response.
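As noted above, `gr.RetryData` also exposes the previous prompt via its `value` property, so an equivalent handler could skip the history lookup:

```python
def handle_retry(history, retry_data: gr.RetryData):
    # retry_data.value is the content of the user message being retried.
    new_history = history[:retry_data.index]
    yield from respond(retry_data.value, new_history)
```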
By now you should hopefully be seeing the pattern!
To let users like a message, we'll add a `.like` event to our chatbot. We'll pass it a function that accepts a `gr.LikeData` object. In this case, we'll just print the message that was either liked or disliked.
```python
def handle_like(data: gr.LikeData):
    if data.liked:
        print("You upvoted this response: ", data.value)
    else:
        print("You downvoted this response: ", data.value)

...

chatbot.like(handle_like, None, None)
```
That's it! You now know how to implement the `.retry`, `.undo`, and `.like` events for the Chatbot.
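For reference, here are the three event hookups from this guide in one place, attached to the `chatbot` component inside the `gr.Blocks` context:

```python
chatbot.undo(handle_undo, chatbot, [chatbot, prompt])
chatbot.retry(handle_retry, chatbot, [chatbot])
chatbot.like(handle_like, None, None)
```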