API Client
Complete guide to the frontend API client, including authentication, rate limiting, and usage patterns.
Overview
The API client module provides a robust, type-safe interface for communicating with the NotebookLLM backend. It handles authentication, automatic retries, rate limiting, and error handling.
Core API Client
The main API client functions handle all HTTP requests to the backend with built-in authentication, rate limiting, and retry logic.
apiClient Function
```typescript
import { apiClient, apiUpload, ApiError, RateLimitError } from "@/lib/api/client"

// Basic GET request
const data = await apiClient<T>("/endpoint")

// POST request with body
const result = await apiClient<T>("/endpoint", {
  method: "POST",
  body: JSON.stringify({ key: "value" })
})

// With custom timeout and retries
const resultWithRetries = await apiClient<T>("/endpoint", {
  method: "POST",
  body: JSON.stringify(data),
  timeoutMs: 60000,
  maxRetries: 5,
  retryOnRateLimit: true
})
```
File Uploads
```typescript
import { apiUpload } from "@/lib/api/client"

// Upload a file with FormData
const formData = new FormData()
formData.append("file", fileBlob, "document.pdf")
formData.append("notebook_id", notebookId)

const result = await apiUpload<UploadResponse>("/api/v1/documents/upload", formData)
```
Error Handling
The API client provides custom error classes for different types of failures, enabling granular error handling.
ApiError
Base error class for API failures. Contains HTTP status code and error details.
```typescript
try {
  const data = await apiClient("/endpoint")
} catch (error) {
  if (error instanceof ApiError) {
    console.log(error.status)          // HTTP status code
    console.log(error.detail)          // Error message from the server
    console.log(error.rateLimitStatus) // Rate limit info, if available

    if (error.status === 401) {
      // Redirect to login
      window.location.href = "/auth/login"
    }
  }
}
```
RateLimitError
Thrown when rate limits are exceeded. Contains retry information and current rate limit status.
```typescript
try {
  const data = await apiClient("/endpoint")
} catch (error) {
  if (error instanceof RateLimitError) {
    console.log(error.retryAfterSeconds) // Seconds until retry
    console.log(error.rateLimitStatus)   // Full rate limit status

    // Wait and retry
    if (error.retryAfterSeconds) {
      await sleep(error.retryAfterSeconds * 1000)
      // Retry the request
    }
  }
}
```
Rate Limiting
The client automatically tracks rate limits from API responses and implements exponential backoff for retrying failed requests.
Rate Limit Headers
The client parses these headers from API responses:
| Header | Description |
|---|---|
| x-ratelimit-limit | Maximum requests allowed in the window |
| x-ratelimit-remaining | Remaining requests in the current window |
| x-ratelimit-reset | Unix timestamp when the limit resets |
| retry-after | Seconds to wait before retrying |
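As a rough illustration, parsing these headers into a structured status could look like the sketch below. The `RateLimitStatus` shape and `parseRateLimitHeaders` name are assumptions for this example, not the client's actual internals.

```typescript
// Hypothetical sketch of rate-limit header parsing; field and function
// names are assumptions, not the client's real implementation.
interface RateLimitStatus {
  limit: number | null
  remaining: number | null
  resetAt: Date | null
  retryAfterSeconds: number | null
}

function parseRateLimitHeaders(headers: Headers): RateLimitStatus {
  // Read a header and convert it to a number, or null if absent
  const num = (name: string): number | null => {
    const raw = headers.get(name)
    return raw === null ? null : Number(raw)
  }
  const reset = num("x-ratelimit-reset")
  return {
    limit: num("x-ratelimit-limit"),
    remaining: num("x-ratelimit-remaining"),
    // x-ratelimit-reset is a Unix timestamp in seconds
    resetAt: reset === null ? null : new Date(reset * 1000),
    retryAfterSeconds: num("retry-after"),
  }
}
```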
Checking Rate Limit Status
```typescript
import {
  getRateLimitStatus,
  canMakeApiRequest,
  getRateLimitMessage,
  subscribeToRateLimitChanges
} from "@/lib/api/client"

// Check current status
const status = getRateLimitStatus()
console.log(status.canMakeRequest)    // Can we make requests?
console.log(status.remaining)         // Remaining requests
console.log(status.usagePercentage)   // % of limit used
console.log(status.retryAfterSeconds) // Seconds until retry

// Simple check
if (canMakeApiRequest()) {
  // Make API call
}

// Subscribe to changes
const unsubscribe = subscribeToRateLimitChanges((status) => {
  console.log("Rate limit changed:", status)
})

// Cleanup
unsubscribe()
```
Backoff Strategy
The client uses exponential backoff with jitter for retries:
- First retry: ~1 second
- Second retry: ~2 seconds
- Third retry: ~4 seconds
- Maximum delay: 30 seconds
- Random jitter: 0-30% added to prevent thundering herd
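The schedule above can be sketched as a small pure function. This is an illustration only, assuming a 1-second base and a 30-second cap; the client's real constants and jitter implementation may differ.

```typescript
// Illustrative sketch of exponential backoff with jitter; constants are
// assumptions based on the schedule described above.
function backoffDelayMs(attempt: number, baseMs = 1000, maxMs = 30000): number {
  // Exponential growth: ~1s, ~2s, ~4s, ... for attempts 0, 1, 2, ...
  const exponential = Math.min(baseMs * 2 ** attempt, maxMs)
  // 0-30% random jitter spreads out simultaneous retries
  const jitter = exponential * 0.3 * Math.random()
  // Cap the final delay
  return Math.min(exponential + jitter, maxMs)
}
```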
API Modules
The API is organized into modular endpoints, each handling a specific feature area.
Importing API Modules
```typescript
// Chat API
import { chatApi } from "@/lib/api/chat"
const messages = await chatApi.getMessages(notebookId)
const response = await chatApi.sendMessage(notebookId, message)

// Documents API
import { documentsApi } from "@/lib/api/documents"
const docs = await documentsApi.list(notebookId)
const upload = await documentsApi.upload(file, notebookId)

// Generation API
import { generationApi } from "@/lib/api/generation"
const content = await generationApi.generate(notebookId, "podcast")

// Notes API
import { notesApi } from "@/lib/api/notes"
const notes = await notesApi.list(notebookId)

// Feedback API
import { feedbackApi } from "@/lib/api/feedback"
await feedbackApi.create({ content_type: "chat_response", content_id, rating: "thumbs_up" })
```
Server-Sent Events (Streaming)
For streaming responses (such as chat), fetch the endpoint directly with the streaming headers and read the response body incrementally.
Streaming Headers
```typescript
import { getStreamingHeaders, getApiBaseUrl } from "@/lib/api/client"

async function* streamChat(message: string) {
  const headers = await getStreamingHeaders()
  const response = await fetch(`${getApiBaseUrl()}/api/v1/chat/stream`, {
    method: "POST",
    headers,
    body: JSON.stringify({ message, stream: true })
  })

  const reader = response.body?.getReader()
  if (!reader) return

  const decoder = new TextDecoder()
  while (true) {
    const { done, value } = await reader.read()
    if (done) break
    // stream: true keeps multi-byte characters intact across chunk boundaries
    yield decoder.decode(value, { stream: true })
  }
}
```
Configuration
The API client can be configured through environment variables.
| Variable | Description |
|---|---|
| NEXT_PUBLIC_API_URL | Backend API URL (default: http://localhost:8000) |
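Resolving the base URL with its fallback can be sketched as below. `resolveApiBaseUrl` is a hypothetical stand-in for the client's internal logic; the real getApiBaseUrl may behave differently, and the environment is passed in explicitly here to keep the example self-contained.

```typescript
// Hypothetical sketch: pick the configured API URL, falling back to the
// documented default. `env` stands in for process.env.
function resolveApiBaseUrl(env: Record<string, string | undefined>): string {
  return env.NEXT_PUBLIC_API_URL ?? "http://localhost:8000"
}
```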
Best Practices
Use TypeScript
All API responses are typed. Import types from the appropriate module for full type safety.
Handle Errors
Always wrap API calls in try/catch and handle ApiError and RateLimitError appropriately.
Respect Rate Limits
The client handles retries automatically, but avoid making excessive concurrent requests.
Use Modules
Import specific API modules (chatApi, documentsApi, etc.) rather than making raw requests.