Prompt Lab

The sandbox for every
AI search hypothesis

Your core report gives you the weekly baseline. Prompt Lab is where you test the extra ideas: new pages, launch copy, competitor angles, local prompts, and anything your team wants to validate before the next report.

Start Testing Prompts
7-day trial / no credit card / first prompt in minutes
Requests: 20 / 1,000 · Reserved: 300 · Available: 680 · Monitored: 2 · Avg Visibility: 61%
Prompt | Models | Vis | Pos | Sent | Div | Time
platform for collaborative wiki and project tracking | ChatGPT, Claude, Gemini, Perplexity, Grok | Running...
compare productivity platforms for marketing teams | ChatGPT, Claude, Gemini, Perplexity, Grok | Running...
AI-powered productivity tool with content generation | ChatGPT, Claude, Gemini, Perplexity, Grok | 53% | #4.2 | 71 | 82% | 22m ago
best workspace for product documentation and tasks | ChatGPT, Claude, Gemini, Perplexity, Grok | 68% | #3.6 | 84 | 74% | 1h ago

Demo data for illustration.

Why it exists

When the report is not enough
open the lab

Prompt Lab is the place for experiments, not another static dashboard. If a sales call, launch plan, city page, or competitor claim raises a question, you can test it immediately and decide whether it deserves monitoring.

Weekly truth

Core report

100 semantic prompts

Stable measurement across the prompts Rankry builds for your market.
Daily sandbox

Prompt Lab

Daily sandbox

Custom experiments for every question you want to test outside the core report.
Always-on tests

Monitoring

24/7 watch mode

Promote a prompt into daily or weekly tracking and let Rankry keep it alive.
Constructor

Build prompts like assets
not one-off guesses

Use variables, suggestions, model selection, clone, edit, and budget checks to turn rough ideas into repeatable AI-search tests.

1. Variables: Drop in {brand}, {city}, {country}, {niche}, and more from your project profile.

2. Suggestions: Start from Rankry-generated prompt ideas when you do not want a blank textarea.

3. Clone & edit: Duplicate a working prompt, change the angle, switch models, and re-run cleanly.

4. Request control: Every selected model and monitoring cadence affects how many requests the test will reserve.
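The variables step can be sketched with plain string formatting; the profile values mirror the demo data on this page, and the snippet is an illustration, not Rankry's implementation:

```python
# Illustrative sketch of prompt-variable substitution; not Rankry's code.
# Profile values mirror the demo data shown on this page.
profile = {
    "brand": "Notion",
    "city": "San Francisco",
    "niche": "All-in-one Workspace",
}

# Variables use the same {name} placeholders shown in the constructor.
template = "best {niche} tools like {brand} for teams in {city}"
prompt = template.format(**profile)
print(prompt)
# best All-in-one Workspace tools like Notion for teams in San Francisco
```

Cloning a prompt then amounts to reusing the template with one changed value, which is why a single working template can cover many cities or competitors.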

Edit Prompt
Variables, model selection, monitoring, and budget impact
Quick Variables
{brand} = Notion · {business_type} = Productivity Software · {city} = San Francisco · {country} = United States · {niche} = All-in-one Workspace · {zip_code} = 94110
Prompt

Platform for collaborative wiki and project tracking

Select Models
ChatGPT · Gemini · Perplexity · Grok · Claude
Add to monitoring
Frequency:

Demo data for illustration.

Result room

Every model gets
its own evidence

A prompt result is not one averaged number. You see mention status, position, sentiment, response excerpts, and cited sources model by model, so weak spots are obvious.

Prompt | Models | Vis | Pos | Sent | Time
AI-powered productivity tool with content generation | Gemini | 53% | #4.2 | 71 | 22m ago
Prompt

AI-powered productivity tool with content generation

Model | Mentioned | Position | Sentiment
ChatGPT | ✓ Yes | #2 | Positive
Claude | ✓ Yes | #1 | Positive
Gemini | ✓ Yes | #8 | Positive

Response: Notion AI is integrated directly into the Notion workspace, making it ideal for content generation alongside task and project management. It assists with writing, summarizing, organizing, and streamlining content creation.

Sources: getblend.com · buffer.com · zapier.com · reddit.com · g2.com · semrush.com · forbes.com · contentstack.com · medium.com · coursera.org · averi.ai · gwi.com

Perplexity | No | - | -
Grok | ✓ Yes | #3 | Positive

Demo data for illustration.

Lab operations

You control the sandbox
from idea to evidence

Prompt Lab gives power users room to work. Create prompts, organize them, clone what works, archive what is done, and spend request budget where the next insight matters.

1. Search: Find any prompt by phrase, city, competitor, model, or outcome.

2. Clone: Duplicate a useful test and change one variable instead of starting over.

3. Edit: Change text, models, or frequency, and let Rankry reset history when it should.

4. Archive: Clean the active list without losing old experiments.

5. Budget: Spend requests intentionally across manual runs and monitoring holds.

6. Sources: Open the response and citations behind every model-level result.

Request budget

Spend requests
like experiment capital

One prompt across five models costs five requests. Daily monitoring reserves the future runs before they happen, so the lab stays flexible without surprise overages.

  • Run cost is visible before you save.
  • Monitoring hold shows the future commitment.
  • Available requests update as experiments run.
Used this month: 20 / 1,000
Reserved by monitoring: 300 requests
Available now: 680 requests
1,000 limit: 20 used · 300 reserved · 680 available

Demo data for illustration.
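The budget arithmetic above can be checked in a few lines of Python; the function names are illustrative, not Rankry's API, and the numbers are the demo data:

```python
# Illustrative request-budget arithmetic; names are not Rankry's API.

def run_cost(models: int, runs: int = 1) -> int:
    # One request = one model running one prompt once.
    return models * runs

def available(limit: int, used: int, reserved: int) -> int:
    # Requests left after this month's usage and monitoring holds.
    return limit - used - reserved

print(run_cost(models=5))        # one prompt across five models -> 5
print(available(1000, 20, 300))  # demo widget numbers -> 680
# One plausible reading of the demo's 300-request hold: 2 monitored
# prompts x 5 models x 30 daily runs reserved for the month.
print(2 * 5 * 30)                # -> 300
```

Because the hold is reserved up front, turning monitoring off releases those requests back into the available pool immediately.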

FAQ

Quick answers

How is Prompt Lab different from the core report?
The core report is your weekly baseline. Prompt Lab is where you test the extra questions your team cannot wait a week to answer.
What should I test first?
Start with prompts your buyers already ask: competitor comparisons, category recommendations, city-specific searches, launch positioning, and sales objections.
What counts as a request?
One request is one model running one prompt. A prompt across five models costs five requests per run.
When should I turn on monitoring?
Use monitoring when a prompt represents a real buying path, campaign, market, or competitor claim that you want to track beyond one manual run.
What do I get after a run finishes?
You get per-model visibility, position, sentiment, cited sources, and the actual response context behind the score.
Start testing

Every hypothesis deserves
a clean AI test

Open the lab, build the prompt, pick the models, spend the requests, and monitor the results that matter between reports.

Test a Custom Prompt

7-day free trial / no credit card / first prompt running in minutes

ChatGPT · Claude · Gemini · Perplexity · Grok