I wanted to show my students appropriate ways of using LLMs for and during coding, so I started building (with some LLM help) a Slidev component, LLMQuery.vue, that adds LLM interactions to slides. It feels important to actively show students how these tools can amplify human knowledge and skill building rather than replace it altogether, even if I'm far from an expert. Maybe it's useful for others too, so I'm sharing it here for download and further tinkering; people who are much better at web dev (there are many!) can probably turn it into something truly polished.
*Screenshots: prompting the assistant alongside slide content; choosing between available models in the session; reviewing responses inside the Slidev component.*
## What is LLMQuery?
LLMQuery is a Vue.js component I created specifically for Slidev that integrates multiple Large Language Models (LLMs) directly into your presentation slides via OpenRouter. With support for GPT-4, Claude, Gemini, Llama, Grok, and many other models, you can embed a truly interactive agent inside your presentations and relate it to the content on the slide. It is a quick hack, so don't expect too much of it.
## Features

- **Multiple AI Models:** LLMQuery uses the OpenRouter API to access multiple AI models through a single interface.
- **Contextual Intelligence:** The component captures slide content and combines it with user prompts to provide contextually relevant responses; a sketch of how that can work follows this list.
- **Real-time Interaction:** Get instant responses during live presentations.
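To make the contextual behavior concrete, here is a minimal sketch of how slide content and a user prompt can be combined into a chat message list. The function and message wording are my assumptions for illustration, not the component's actual internals.

```ts
// Illustrative sketch (not LLMQuery's actual internals): combine the captured
// slide content with the user's question into the messages array that a
// chat-completions style API expects.
type ChatMessage = { role: 'system' | 'user' | 'assistant'; content: string };

function buildMessages(
  slideContent: string,
  userPrompt: string,
  systemPrompt?: string
): ChatMessage[] {
  const messages: ChatMessage[] = [];
  if (systemPrompt) {
    messages.push({ role: 'system', content: systemPrompt });
  }
  // Prepending the slide content is what makes the answers slide-aware.
  messages.push({
    role: 'user',
    content: `Slide content:\n${slideContent}\n\nQuestion: ${userPrompt}`,
  });
  return messages;
}
```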
## Installation

1. **Download the Component**

   ```bash
   curl -O https://www.krisluyten.net/files/software/LLMQuery.vue
   ```

2. **Place in Components Directory**

   ```bash
   mkdir -p components
   mv LLMQuery.vue components/
   ```

3. **Install Dependencies**

   ```bash
   npm install axios
   ```

4. **Configure API Key**

   ```bash
   echo "VITE_OPENROUTER_API_KEY=your_api_key_here" > .env
   ```
## Examples

### Simple AI Assistant
Add an AI assistant to any slide with minimal configuration:
```md
---
layout: default
---

# My Presentation Topic

<LLMQuery model="google/gemini-2.5-pro">
Your slide content goes here...
</LLMQuery>
```
This creates a floating LLM button that can be clicked to ask questions about the slide content. The content between the `<LLMQuery>` opening and `</LLMQuery>` closing tags is passed to the LLM when it is prompted.
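How the component gets at that content is an implementation detail, but one common Vue pattern (my assumption about how something like this can work, not a guarantee about LLMQuery's internals) is to render the default slot into a container element and read back its text:

```ts
// Hypothetical sketch of capturing slot content as plain text in a Vue
// component. Assumes a template along the lines of:
//   <div ref="slotContainer"><slot /></div>
import { ref, onMounted } from 'vue';

const slotContainer = ref<HTMLElement | null>(null);
const slideContent = ref('');

onMounted(() => {
  // textContent strips the markup, leaving raw text to send to the model.
  slideContent.value = slotContainer.value?.textContent?.trim() ?? '';
});
```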
### Auto-Executing Prompts
For educational content, you might want to automatically generate explanations without prompting the LLM during the lecture. You can add a prompt directly in the LLMQuery tag that will be executed automatically.
````md
---
layout: default
---

# Complex Algorithm Explanation

<LLMQuery
  model="anthropic/claude-3.5-sonnet"
  prompt="Explain this algorithm in simple terms with an example"
  position="top-right"
>

```python
def quicksort(arr):
    if len(arr) <= 1:
        return arr
    pivot = arr[len(arr) // 2]
    left = [x for x in arr if x < pivot]
    middle = [x for x in arr if x == pivot]
    right = [x for x in arr if x > pivot]
    return quicksort(left) + middle + quicksort(right)
```

</LLMQuery>
````
The AI automatically analyzes the code between the tags and provides an explanation when the slide loads.
### Multi-Model Comparison

Compare how different LLMs approach the same problem, or give each LLM a specific role.
```md
---
layout: default
---

# AI Model Perspectives

<LLMQuery
  model="openai/gpt-4.1-mini"
  position="top-left"
  system-prompt="Focus on practical implementation details"
/>

<LLMQuery
  model="anthropic/claude-sonnet-4"
  position="top-right"
  system-prompt="Focus on theoretical computer science concepts"
/>

<LLMQuery
  model="google/gemini-2.5-pro"
  position="bottom-left"
  system-prompt="Focus on creative and innovative approaches"
/>

<LLMQuery
  model="x-ai/grok-4"
  position="bottom-right"
  system-prompt="Focus on practical, real-world applications"
/>

## Design Patterns in Software Engineering

Compare how different AI models explain the same concept!
```
### Interactive Code Review
Perfect for programming workshops:
````md
---
layout: default
---

# Code Review Session

<LLMQuery
  model="gpt-4"
  prompt="Review this code for best practices, potential bugs, and improvements"
  system-prompt="You are a senior software engineer conducting a code review in max 10 lines"
>

```java
public class UserService {
    private List<User> users = new ArrayList<>();

    public User findUser(String email) {
        for (User user : users) {
            if (user.getEmail().equals(email)) {
                return user;
            }
        }
        return null;
    }

    public void addUser(User user) {
        users.add(user);
    }
}
```

</LLMQuery>
````
Educational Q&A
Great for classroom settings, to enable automatic explanations.
```md
---
layout: default
---

# Object-Oriented Programming Principles

<LLMQuery
  model="anthropic/claude-3.5-sonnet"
  system-prompt="You are a computer science professor. Explain concepts clearly with a real-world example suitable for undergraduate students."
>

## The Four Pillars of OOP

1. **Encapsulation** - Bundling data and methods together
2. **Inheritance** - Creating new classes from existing ones
3. **Polymorphism** - Objects taking multiple forms
4. **Abstraction** - Hiding implementation complexity

*Ask the AI to explain any of these concepts with examples!*

</LLMQuery>
```
## Configuration Options

### Supported AI Models

Models that are supported by OpenRouter:
| Model ID | Provider | Description |
|---|---|---|
| `gpt-4` | OpenAI | Most capable GPT model |
| `openai/gpt-4.1-mini` | OpenAI | Faster, cost-effective option |
| `anthropic/claude-sonnet-4` | Anthropic | Latest Claude model |
| `anthropic/claude-3.5-sonnet` | Anthropic | Balanced performance |
| `google/gemini-2.5-pro` | Google | Advanced reasoning |
| `meta-llama/llama-3.3-70b-instruct` | Meta | Open-source alternative |
| `x-ai/grok-code-fast-1` | xAI | Fast code understanding and generation |
| `x-ai/grok-4` | xAI | Latest Grok model |
| `mistralai/mistral-small-3.2-24b-instruct:free` | Mistral | Fast and inexpensive |
These can be easily extended with the many models OpenRouter offers.
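Any OpenRouter model ID slots in because the request shape stays the same. Below is a rough sketch of the kind of call the component makes: the endpoint and payload follow OpenRouter's OpenAI-compatible chat completions API, but the surrounding function is my own illustration, not LLMQuery's actual code.

```ts
// Illustrative sketch of an OpenRouter chat completions request with axios.
// The URL and payload shape follow OpenRouter's public API; the function
// itself is an assumption, not LLMQuery's implementation.
import axios from 'axios';

async function queryModel(
  model: string,
  messages: { role: string; content: string }[],
  apiKey: string
): Promise<string> {
  const response = await axios.post(
    'https://openrouter.ai/api/v1/chat/completions',
    { model, messages },
    {
      headers: {
        Authorization: `Bearer ${apiKey}`,
        'Content-Type': 'application/json',
      },
    }
  );
  // OpenAI-compatible responses carry the reply in choices[0].message.content.
  return response.data.choices[0].message.content;
}
```

Swapping models is then just a matter of passing a different `model` string.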
### Component Props

```ts
interface LLMQueryProps {
  model: string;          // Required: AI model to use
  apiKey?: string;        // Optional: API key (uses .env if not provided)
  systemPrompt?: string;  // Optional: System prompt for AI behavior
  prompt?: string;        // Optional: Auto-execute this prompt
  position?: 'top-right' | 'top-left' | 'bottom-right' | 'bottom-left';
  autoExecute?: boolean;  // Optional: Auto-run prompt (default: true)
}
```
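`autoExecute` is the one prop the examples above don't show. Assuming the component follows the usual Vue kebab-case convention for props in templates (my assumption, not something the source states), a prompt that waits for a click instead of running when the slide loads would look roughly like this:

```md
<LLMQuery
  model="anthropic/claude-3.5-sonnet"
  prompt="Summarize this slide in three bullet points"
  :auto-execute="false"
/>
```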
### Positioning Options

```md
<!-- Top right corner (default) -->
<LLMQuery model="gpt-4" position="top-right" />

<!-- Top left corner -->
<LLMQuery model="anthropic/claude-sonnet-4" position="top-left" />

<!-- Bottom right corner -->
<LLMQuery model="google/gemini-2.5-pro" position="bottom-right" />

<!-- Bottom left corner -->
<LLMQuery model="x-ai/grok-4" position="bottom-left" />
```
## API Key Setup

- **Sign up at OpenRouter:** Visit openrouter.ai
- **Generate API Key:** Go to openrouter.ai/keys
- **Add to Environment:**

  ```bash
  echo "VITE_OPENROUTER_API_KEY=sk-or-v1-your-key-here" >> .env
  ```


